Dataset schema (column name, type, value range):

| Column | Type | Range |
|---|---|---|
| title | string | lengths 15–163 |
| paper_decision | string | 4 classes |
| review_1 | string | lengths 853–32.6k |
| rebuttals_1 | string | lengths 0–15.1k |
| review_2 | string | lengths 1.03k–35.6k |
| rebuttals_2 | string | lengths 0–15.1k |
| review_3 | string | lengths 807–27.4k |
| rebuttals_3 | string | lengths 0–15k |
| review_4 | string | lengths 780–22.2k |
| rebuttals_4 | string | lengths 0–15.1k |
| review_5 | string | 171 classes |
| rebuttals_5 | string | 166 classes |
| review_6 | string | 25 classes |
| rebuttals_6 | string | 24 classes |
| review_7 | string | 4 classes |
| rebuttals_7 | string | 4 classes |
TAROT: Targeted Data Selection via Optimal Transport
Accept (poster)
Summary: This paper proposes to address the problem of task-specific sample selection from the perspective of distribution matching, which can be solved by optimal transport. Experiments on influence estimation, semantic segmentation, motion prediction, and instruction tuning show the effectiveness of the proposed method.

Claims And Evidence: Please refer to the weaknesses.

Methods And Evaluation Criteria: Yes, but some important baselines are missing. Please refer to the weaknesses and experimental design.

Theoretical Claims: Yes, correct.

Experimental Designs Or Analyses: The authors did not report results on TydiQA as LESS and [1] did. Could the authors please explain the reason for this? Is it due to experimental limitations or other considerations? It is necessary to clarify this to make the research more complete and comparable with existing works. [1] TSDS: Data Selection for Task-Specific Model Finetuning. NeurIPS 2024.

Supplementary Material: Yes. Code.

Relation To Broader Scientific Literature: Please refer to the weaknesses and essential references.

Essential References Not Discussed: The key contribution is an OT-driven task-specific sample selection paradigm. However, a closely related paper has been overlooked [1], which also employs OT for task-specific data selection and can be based on either semantic embeddings or gradients (in particular, this submission adopts gradients). On the other hand, both this submission and [1] utilize nearest neighbor search during OT-based sample selection in order to reduce computational cost. In my opinion, it is necessary to discuss the connections and differences between this submission and [1], as well as to provide experimental comparisons with [1]. [1] TSDS: Data Selection for Task-Specific Model Finetuning. NeurIPS 2024.

Other Strengths And Weaknesses:

Strengths:
1. The introduction of OT to task-specific data selection is reasonable.
2. Extensive experiments on different settings prove the effectiveness.

Weaknesses:
1.
The author attempts to introduce more content into the paper, which makes it difficult to grasp the key points of the paper. For example, the author mentions a variant of the fixed-size selection method (TAROT-FSS), and the experimental results corresponding to it are TAROT-5%/20%/50%? Also, regarding data weighting (Sec.3.4), is it used to weight the samples when constructing the marginal distributions alpha or beta? The definition and significance of this part are not very clear. Moreover, the repetition factor is only used in Section 4.2. Why is it not considered in Sections 4.3 and 4.4? The experimental setup is somewhat confusing. 2. The improvement in experimental results compared to LESS seems minimal, especially in Figures 5 and 7. However, Table 7 shows that TAROT requires significantly more computational cost and time than LESS. This makes the method in the paper appear to lack a good trade-off between performance improvement and computational cost. 3. The author mentions in the abstract (Line 016) that "These methods perform well on limited, unimodal data (i.e., data following a single pattern) but become less effective as target data increases in complexity." However, I don't see design or experiment on sample selection for multi-modal datasets in this paper. Perhaps I missed some details. Maybe Motion Prediction? The Motion Prediction task (Waymo) involves multiple modalities, such as RGB and point clouds. However, from a task setting perspective, motion prediction based on historical data is often considered a single-modality task. Also, the experimental details seem to lack clarification on which modalities were used and how different input data types interacted during the data selection. 4. When introducing OT, the author uses the integral form under continuous distributions in Eq. 3. However, when formulating the marginal distributions alpha and beta, a discrete form is used. 
Considering that the author employs entropic OT (Sinkhorn), it would be more consistent to use the discrete form of OT notation throughout. 5. Line 161: "due to the correlation of gradient feature" lacks necessary references, experiments, or observations to support it. This makes the motivation for the Whitened Feature Distance not entirely convincing. 6. Cholesky whitening lacks relevant references.

Other Comments Or Suggestions: Please refer to the weaknesses.

Questions For Authors: What is the implementation of $c(z, z_t)$ in Eq. 13? Cosine distance or the WFD $d_{\mathcal{Z}}^{w}(z,z')$? Please refer to the weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
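As background for the reviewer's point 4: the discrete entropic OT formulation is typically solved with Sinkhorn iterations over the two empirical marginals. Below is a minimal numpy sketch of that computation — purely illustrative, not the paper's implementation, with the point clouds and regularization strength chosen arbitrarily:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=1000):
    """Entropic OT between discrete marginals a, b under cost matrix C."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):           # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan
    return float(np.sum(P * C)), P    # linear transport cost and plan

# uniform marginals over two small synthetic point clouds
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 2)), rng.normal(size=(4, 2)) + 1.0
C = np.linalg.norm(X[:, None] - Y[None, :], axis=-1) ** 2
a, b = np.full(5, 0.2), np.full(4, 0.25)
cost, P = sinkhorn(a, b, C)
```

After convergence, the plan's row and column sums match the two marginals, which is the discrete counterpart of the coupling constraint in the continuous formulation the reviewer contrasts with.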
Rebuttal 1: Rebuttal:

> ### Q3: Clarifying Modality

By **modality**, we refer to **distributional modality**, rather than input modality. We elaborate on why other tasks exhibit greater distributional multi-modality compared to **instruction tuning**, as summarized in the table below:

|  | Motion Prediction | Semantic Segmentation | Instruction Tuning |
|---|---|---|---|
| Target Dataset Size | 32,186 | 2,975 | 81 (BBH) / 285 (MMLU) |
| Domain Coverage | Diverse traffic scenarios from two cities | Stereo video scenes from 50 different cities | Specific LLM task types |

As shown, both motion prediction and semantic segmentation involve significantly larger datasets and broader domain coverage, making data selection inherently more challenging. Our method, validated across multiple tasks, demonstrates better generalization in such multi-modal distributional settings.

---

> ### Q2: Comparison with LESS

We respectfully disagree with the assessment that TAROT offers only limited improvement over LESS. As discussed above, TAROT yields **substantial gains** on tasks characterized by high distributional multimodality:

- Compared to LESS, **TAROT achieves:**
  - **+44%** improvement in **semantic segmentation**
  - **+102%** improvement in **motion prediction**
- Even in **instruction tuning**, which has relatively lower modality, TAROT outperforms LESS using just **2%** of the data.

**Computational Cost:** While TAROT incurs additional cost due to OT distance computation, this cost is minimal (118 seconds) compared to gradient calculation time (32 hours). Moreover, gradient calculation is a one-time cost and can be amortized across multiple tasks.

---

> ### Comparison with TSDS

As noted in our response to Reviewer 5vQX, we discuss differences with TSDS in detail.
Here, we provide additional experiments:

| Dataset | Llama-3.1-8B MMLU ↑ | Llama-3.1-8B BBH ↑ | Qwen-2.5-7B MMLU ↑ | Qwen-2.5-7B BBH ↑ |
|---|---|---|---|---|
| 5% LESS | 65.7 | 62.6 | **74.3** | 66.3 |
| 5% TSDS | 65.2 | 63.1 | 73.9 | 66.2 |
| 5% TAROT | **66.0** | **65.0** | 74.1 | 66.9 |
| TAROT-OTM | 65.7 (0.13%) | 63.6 (0.21%) | **74.3** (0.09%) | **68.9** (0.13%) |

**Time Complexity:** We measured the data selection time on a node with 370 GB RAM, 64 CPUs, and an H100 GPU:

| Method | LESS | TSDS | TAROT-Fixed | TAROT-OTM |
|---|---|---|---|---|
| Data Selection Time | 46s | **10 hrs** | 59s | 118s |

**OOM Issue on Motion Prediction:** TSDS cannot be applied to the motion prediction task (32k samples) due to out-of-memory (OOM) errors.

**Additional Results on TydiQA:** Due to resource constraints, we focused on BBH/MMLU. For completeness, we now include results on TydiQA (only 9 target samples):

| Dataset | All | 5% Random | 5% LESS | 5% TAROT | TAROT-OTM |
|---|---|---|---|---|---|
| Llama-3.1-8B TydiQA ↑ | 63.1 | 61.0 | 69.2 | **71.1** | 66.9 (0.05%) |

---

> ### Q5: Motivation for Whitened Feature Distance

We visualized the **[covariance matrix of raw gradient features](https://postimg.cc/SXSkcZrX)**, which reveals strong correlations, supporting our motivation for feature whitening. Additionally, this is conceptually aligned with the findings in _“Whitening for Self-Supervised Representation Learning”_, which we will cite in the revision.

---

> ### Q1: Clarifications

We appreciate the feedback and will revise the manuscript for clarity. Here are key clarifications:

- **Fixed-Size vs. OTM:** The 5%, 10%, 20%, and 50% results refer to **TAROT-Fixed**. **TAROT-OTM** dynamically selects the ratio that minimizes OT distance.
- **Data Weighting Implementation:** Weighting is applied by **repeating samples** during training based on their assigned weights.
- **Training Overhead:** Due to increased computation, we omit data weighting in the **motion prediction** task to demonstrate TAROT’s effectiveness without any training overhead.
- **Equation 13:** Refers to the **Whitened Feature Distance (WFD)**.

---

> ### Q4 & Q6

Thank you for the suggestions. We will ensure consistent use of the discrete formulation throughout the paper and include citations in the revised version.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' feedback. I don't have other concerns and will increase my rating accordingly.
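For readers unfamiliar with the whitening step debated above (reviewer points 5 and 6): Cholesky whitening decorrelates features so that no dominant, highly correlated gradient direction skews pairwise distances. A small self-contained sketch on synthetic data — an illustration of the general technique, not the authors' code:

```python
import numpy as np

def cholesky_whiten(G, ridge=1e-8):
    """Decorrelate feature rows of G via the Cholesky factor of their covariance."""
    Gc = G - G.mean(axis=0)
    cov = Gc.T @ Gc / (len(G) - 1) + ridge * np.eye(G.shape[1])
    L = np.linalg.cholesky(cov)           # cov = L @ L.T
    return np.linalg.solve(L, Gc.T).T     # whitened rows: covariance becomes identity

# correlated synthetic "gradient features"
rng = np.random.default_rng(0)
G = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
W = cholesky_whiten(G)
cov_w = W.T @ W / (len(W) - 1)            # approximately the identity matrix
```

Euclidean distances between whitened rows then behave like Mahalanobis distances in the original feature space, which is the intuition behind a whitened feature distance.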
Summary: The paper introduces a framework for targeted data selection by minimizing the distance between the selected data and the target data distribution. The method addresses the limitations of existing influence-based greedy heuristics, which do not perform well on multimodal data distributions. The framework is evaluated across multiple tasks, including semantic segmentation, motion prediction, and instruction tuning, demonstrating consistent improvements over state-of-the-art methods. The authors also provide a detailed analysis of the computational complexity and ablation studies to validate the effectiveness of their approach.

## Update after rebuttal

Thank you for the responses. I keep my original score of 4.

Claims And Evidence: Yes, the claims are supported by clear evidence.

Methods And Evaluation Criteria: Although I am not deeply familiar with the empirical work and existing evaluation approaches in this field, the evaluation criteria used in the paper either align with established practices from prior research or appear to be well-justified and reasonable.

Theoretical Claims: The work is mostly empirical and proposes no major theoretical claims.

Experimental Designs Or Analyses: While I am not an expert in assessing the validity of experimental design within this specific field, the experimental designs appear to adhere to standard practices.

Supplementary Material: I did not find it necessary to review the appendix, as the main content of the paper provided sufficient detail and clarity for my understanding.

Relation To Broader Scientific Literature: The method addresses the limitations of existing influence-based greedy heuristics (Xia et al.; Engstrom et al., 2024) for data selection, which do not perform well on multimodal data distributions, as noted by Hu et al. (2024). It is based on optimal transport theory, which was also used in Just et al. (2023).
Essential References Not Discussed: I am not aware of significant missing references that are essential to understanding the key contributions of the paper.

Other Strengths And Weaknesses:

Strengths:
- The paper proposes a novel approach for mitigating a major weakness of previous data selection methods, namely that summed influence fails to find diverse data. The key technique is novel: they use whitening to decorrelate the features and select data by OT distance minimization. They show by extensive experiments that TAROT mitigates biases from dominant features and outperforms previous methods in tasks including semantic segmentation, motion prediction, and instruction tuning.

Weaknesses:
- The paper follows a line of work in data selection, but does not discuss other existing approaches for similar data selection problems, for example the literature on data distillation and the following paper: "Data Valuation using Reinforcement Learning", Jinsung Yoon, Sercan Arik, Tomas Pfister. Proceedings of the 37th International Conference on Machine Learning, PMLR.

Other Comments Or Suggestions: The paper is well written.

Questions For Authors: None.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

**Difference Between Data Distillation and Valuation**

Thank you for this insightful comment. We appreciate the opportunity to clarify the relationship between TAROT and other data selection paradigms, particularly **Data Valuation via Reinforcement Learning (DVRL)** [Yoon et al., ICML 2020] and **Data Distillation**.

**1. Comparison to DVRL (Data Valuation via Reinforcement Learning)**

DVRL frames data valuation as a meta-learning problem, where a Data Value Estimator (DVE) is trained using reinforcement learning (RL) to assign importance weights to training samples. The reward signal is derived from the performance of a predictor model on a validation set. This setup allows DVRL to dynamically prioritize “useful” samples and has shown promising results in settings with noisy labels or domain shifts.

**TAROT departs from DVRL in several key aspects:**

- **Targeted vs. General Data Value Estimation:** DVRL learns instance-wise importance scores primarily to boost general predictive performance. In contrast, TAROT is explicitly designed for _targeted_ selection: it identifies a subset of data that minimizes the **optimal transport (OT) distance** to a distinct target distribution. This enables TAROT to adapt to complex, potentially multimodal target domains, something DVRL is not explicitly formulated to handle.
- **Model-Agnostic Transferability:** TAROT computes data selection using WFD-based embeddings from a lightweight model, but supports downstream training with larger, task-specific models. As demonstrated in Section 4.3, the selected data generalizes across diverse architectures and tasks (e.g., motion prediction, instruction tuning). DVRL, by contrast, couples valuation tightly to the predictor, reducing flexibility for out-of-distribution generalization or transfer across models.
- **Computational Simplicity and Scalability:** DVRL’s RL-based training is often computationally intensive and sensitive to hyperparameters, especially in high-dimensional settings. TAROT employs a more efficient and stable OT-based greedy selection method, using gradient feature whitening and normalization to reduce dominant component bias. As shown in our runtime analysis (Appendix F), TAROT scales effectively to large datasets (e.g., 2M motion samples) without requiring complex policy optimization.

---

**2. Comparison to Data Distillation Approaches**

Data distillation typically involves synthesizing or filtering datasets to mimic the performance of a teacher model. Although both data distillation and TAROT aim to curate more efficient training sets, their approaches and goals differ significantly:

- **Objective Focus:**
  - _Data Distillation_ is model-centric, often relying on teacher-student frameworks or aligning predictions/logits.
  - _TAROT_ is distribution-centric, selecting data that explicitly aligns with the **target distribution** using whitened OT distances computed over gradient features.
- **Dependency on a Teacher Model:** TAROT does not require a high-performing teacher model. Instead, it operates in scenarios where such a model may not exist, making it broadly applicable.
- **Automatic Selection Ratio:** TAROT automatically infers optimal selection ratios through OT distance minimization (Section 3.3), a capability typically absent in distillation-based methods.

---

Thank you again for this constructive suggestion. It has helped us better articulate TAROT’s position within the broader landscape of data selection research.
Summary: The paper proposes a data selection method for a specific target domain from a candidate set by posing it as a distribution matching problem. The paper proposes to use whitened gradient features as the base distance to compute the optimal transport between the two sets. Effectiveness of the proposed method is shown on motion prediction, semantic segmentation, and instruction tuning tasks.

## Update after the rebuttal

I thank the authors for addressing my concerns during the rebuttal. I keep my initial rating of 3.

Claims And Evidence: Well supported.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: No theoretical claims made.

Experimental Designs Or Analyses: The experimental design is standard and makes sense.

Supplementary Material: Yes, all aspects.

Relation To Broader Scientific Literature: The use of OT for dataset selection is very relevant and related to previous works in the literature.

Essential References Not Discussed:
1. TSDS: Data Selection for Task-Specific Model Finetuning, NeurIPS 2024
2. Data Selection for Language Models via Importance Resampling, NeurIPS 2023

Other Strengths And Weaknesses:

Strengths
1. The idea of selecting data from a candidate set to improve performance on a target set via OT is important and relevant for many applications.
2. The idea of using gradient-based features in the OT formulation, together with whitening and normalization, seems to be helpful in improving performance over the considered baselines.
3. Empirical improvement across a diverse set of applications shows the effectiveness of the method.

Weaknesses
1. The proposed method is compared only with LESS in the instruction tuning experiment, whereas comparisons with two recent methods, TSDS and DSIR, should also be included. (The titles of the works are mentioned above.)
2. The amount of target data required to estimate the OT distances is not ablated for the three applications.
3.
How many samples from the source and target are used to compute the OT distance?

Other Comments Or Suggestions: NA

Questions For Authors: See Weaknesses above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

**Q1. Discussion and Comparison with TSDS and DSIR**

Thank you for the suggestion. We have revised the related work section to better highlight how **TAROT** differs from related methods, particularly **TSDS** and **DSIR**, which also address data selection from a distribution-matching perspective.

**TSDS** frames data selection as an **optimal transport (OT)** problem, incorporating a diversity regularizer via kernel density estimation. It selects samples based on distances in gradient embedding space. In contrast, **TAROT**:

- Uses the **Whitened Feature Distance (WFD)**, which removes the confounding effects of gradient covariance and scale. This provides more robust and accurate feature distance estimates, leading to better **influence estimation**, surpassing the state-of-the-art TRAK method.
- While TSDS uses a continuous OT-based formulation and supports subset size control via sampling, TAROT differs in its **greedy, deterministic subset selection**, which explicitly minimizes the **empirical OT distance** at each iteration. This enables stronger control over the selected subset and much faster data selection. Indeed, we empirically found that TSDS costs significantly more time than TAROT; please see our results below.
- Introduces an **early stopping criterion** for OT-based selection by tracking the increase in OT distance, allowing estimation of **optimal selection ratios**, which TSDS does not address.

**DSIR** similarly uses distribution matching but estimates **importance weights in a low-dimensional n-gram space**. While efficient, this space lacks the capacity to capture high-level, task-specific semantics. **TAROT**, on the other hand:

- Operates in a **task- and model-specific gradient space** and performs selection to **explicitly minimize OT distance**, making it more effective for complex or multimodal tasks like motion prediction.
**Summary**: TAROT improves over TSDS and DSIR by (1) more accurate influence estimation via WFD, (2) optimal selection ratio estimation, and (3) stronger performance across domains where simpler feature spaces fall short.

We include **experimental comparisons with TSDS**. Due to time constraints and DSIR’s evaluation overlap with LESS, we do not re-run DSIR experiments. We provide additional experiments:

| Dataset | Llama-3.1-8B MMLU ↑ | Llama-3.1-8B BBH ↑ | Qwen-2.5-7B MMLU ↑ | Qwen-2.5-7B BBH ↑ |
|---|---|---|---|---|
| 5% LESS | 65.7 | 62.6 | **74.3** | 66.3 |
| 5% TSDS | 65.2 | 63.1 | 73.9 | 66.2 |
| 5% TAROT | **66.0** | **65.0** | 74.1 | 66.9 |
| TAROT-OTM | 65.7 (0.13%) | 63.6 (0.21%) | **74.3** (0.09%) | **68.9** (0.13%) |

**Time Complexity:** We measured the data selection time on a node with 370 GB RAM, 64 CPUs, and an H100 GPU:

| Method | LESS | TSDS | TAROT-Fixed | TAROT-OTM |
|---|---|---|---|---|
| Data Selection Time | 46s | **10 hrs** | 59s | 118s |

**OOM Issue on Motion Prediction:** TSDS could not be applied to the motion prediction task (32k samples) due to out-of-memory (OOM) errors.

---

**Q2. Target Data Ablation for OT Distance**

We performed an ablation using the **nuPlan** motion prediction dataset. Candidate data consists of 92k samples from four cities; the target set and test set are 4k and 1k held-out Boston samples, respectively. We fix the selection percentage at 10% and vary the amount of target data used from 1k to 4k.
| Target Data / Selected | DsDm | LESS | Random | TAROT |
|---|---|---|---|---|
| 1000 / 9,200 | 3.12 | 3.07 | 2.77 | 2.33 |
| 2000 / 9,200 | 2.97 | 3.10 | 2.77 | 2.32 |
| 3000 / 9,200 | 3.18 | 3.04 | 2.77 | 2.17 |
| 4000 / 9,200 | 3.00 | 3.08 | 2.77 | 2.14 |

**TAROT’s performance improves steadily** as more target data is used, while baselines show no consistent gains over random selection.

---

**Q3. OT Distance Sample Counts**

We detail the number of samples used for OT distance computation:

| **Task** | **Target Samples** | **Candidate Samples** | **Target/Candidate %** |
|---|---|---|---|
| Instruction Tuning — MMLU | 285 | 270,679 | 0.1% |
| Instruction Tuning — BBH | 81 | 270,679 | 0.03% |

These details will be included in the updated manuscript.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their response to my questions. I have some follow-up questions.

1. What task was used to report the time complexity results? Which step in TSDS is the bottleneck in terms of time? Is the 10-hour number for TSDS just the time for data selection, or does it include other steps as well?
2. For the additional experiments, how does the performance of the three methods change when you have less data (like 1% or 0.5%)? Is TAROT still better?

---

Reply to Comment 1.1.1: Comment: Thanks for your questions. We conducted additional experiments to address them.

**Q1. Time Complexity of TSDS**

We report the wall-clock time on the instruction tuning task (the MMLU dataset, which is more time-consuming than BBH). The reported time **only includes the data selection step**; all three methods incur the same cost (~32 hours) for gradient computation and caching. Upon inspecting the TSDS codebase, we identified **kernel density estimation (KDE)** as the primary bottleneck.
KDE requires an extra round of neighbor searches over a large set of samples, resulting in near-quadratic time complexity with respect to the number of candidates. In contrast, TAROT avoids this step entirely through a deterministic OT-based greedy selection procedure, offering significant speed advantages.

**Q2. Additional Comparison with LESS at Low Selection Ratios**

Due to time constraints, we ran experiments using the fastest configuration, Qwen-2.5B on the BBH benchmark. We compared TAROT and LESS at finer-grained selection ratios, including 0.13% (OTM ratio), 0.5%, and 1%. Results are shown below and available [here](https://postimg.cc/K1W4q1yx).

**Qwen-2.5B BBH**

| Method | 0.13% | 0.5% | 1% | 5% |
|---|---|---|---|---|
| LESS | 67.2 | 67.8 | 67.6 | 66.3 |
| TAROT | 68.9 | 68.5 | 67.5 | 66.9 |

TAROT consistently outperforms or matches LESS, especially at the OTM ratio. This trend reflects the nature of our OT-based selection: as the selection ratio increases, the chosen subset may gradually drift from the target distribution, leading to diminishing returns. This supports our claim that OT distance serves as a reliable signal for estimating the optimal selection ratio.
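The greedy selection with OT-distance early stopping discussed in this thread can be sketched on toy data. The following is a rough illustration under assumed details (small point clouds, entropic OT, uniform marginals) rather than the authors' implementation; the helper names are hypothetical:

```python
import numpy as np

def sinkhorn_cost(X, Y, eps=0.5, n_iter=200):
    """Entropic OT cost between uniform empirical measures on rows of X and Y."""
    C = np.linalg.norm(X[:, None] - Y[None, :], axis=-1) ** 2
    K = np.exp(-C / eps)
    a, b = np.full(len(X), 1 / len(X)), np.full(len(Y), 1 / len(Y))
    u = np.ones(len(X))
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return float(np.sum(P * C))

def greedy_ot_select(cand, target, max_k):
    """Greedily add the candidate that yields the lowest OT distance to the
    target; stop once the best achievable distance starts increasing."""
    selected, remaining = [], list(range(len(cand)))
    best_prev = np.inf
    for _ in range(max_k):
        best, j = min((sinkhorn_cost(cand[selected + [j]], target), j)
                      for j in remaining)
        if best > best_prev:          # OT distance rises: stop (OTM-style ratio)
            break
        selected.append(j)
        remaining.remove(j)
        best_prev = best
    return selected

rng = np.random.default_rng(0)
target = rng.normal(size=(8, 2))
cand = np.vstack([rng.normal(size=(10, 2)),           # on-target candidates
                  rng.normal(size=(10, 2)) + 5.0])    # off-target candidates
sel = greedy_ot_select(cand, target, max_k=15)
```

On this toy problem the procedure keeps only candidates from the on-target cluster and halts before the selected subset drifts away from the target, mirroring the diminishing-returns behavior the authors describe.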
Summary: The authors formulate targeted data selection as a distribution matching problem and propose a new framework to efficiently select the most suitable training datasets. Extensive experiments were conducted to support the effectiveness of the proposed method.

Claims And Evidence: The authors conducted extensive experiments and ablation studies to support the effectiveness of each component of the proposed method. However, I found no supporting evidence for the claim: "This work identifies two primary factors contributing to this limitation: (ii) the restrictive linear additive assumptions inherent in greedy selection strategies."

Methods And Evaluation Criteria: Yes, the proposed method is well aligned with the data selection problem. However, I think that results on wall-clock time should be included to further evaluate the proposed method's effectiveness.

Theoretical Claims: There are no theoretical claims.

Experimental Designs Or Analyses: I have reviewed the code in the supplementary material. There are no scripts or detailed instructions for reproduction, and I hope the authors can provide the detailed code for reproduction during the review process.

Supplementary Material: I have reviewed the code in the supplementary material. There are no scripts or detailed instructions for reproducibility, and I hope the authors can provide detailed code to facilitate reproduction during the review process.

Relation To Broader Scientific Literature: No

Essential References Not Discussed: No

Other Strengths And Weaknesses: No

Other Comments Or Suggestions: The notation is abused, and I suggest that the authors unify the notation system.

Questions For Authors: No

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

**Q1: No supporting evidence for the claim that the linear additive assumptions inherent in greedy selection strategies are restrictive.**

Thank you for highlighting this. The limitations of linear additive assumptions in influence estimation have been thoroughly examined in the paper _Most Influential Subset Selection: Challenges, Promises, and Beyond_. We summarize their key findings that support our claim:

**Failure Mode 1: Inaccurate Influence Estimates**

Even under linear regression, influence estimates can be misleading if the **leverage score** $h_{ii}$ is ignored. The actual individual effect is

$$A_{-\{i\}} = \frac{x_{\text{test}}^\top N^{-1} x_i r_i}{1 - h_{ii}},$$

which differs from the estimate $v_i$ by a factor of $\frac{1}{1 - h_{ii}}$, often resulting in underestimation for high-leverage samples.

**Failure Mode 2: Violation of Additivity**

Even if individual effects are accurately estimated, heuristics such as LAGS (Leverage-Adjusted Greedy Selection) may still fail due to non-additive group influence.

**Amplification:** Group influence can be _super-additive_ when samples are similar (e.g., duplicates). For $c$ identical copies of a sample $(x_i, y_i)$:

$$\frac{A_{-\{i\}^c}}{A_{-\{i\}}} = \frac{c(1 - h_{ii})}{1 - c h_{ii}} > c.$$

**Cancellation:** Conversely, group influence can be _sub-additive_, meaning $A_{-\{i,j\}} < A_{-\{j\}}$, due to cross-leverage $h_{ij}$ and residual interactions:

$$A_{-\{i,j\}} = \frac{(1 - h_{ii})(1 - h_{jj})(A_{-\{i\}} + A_{-\{j\}}) + h_{ij}\, x_{\text{test}}^\top N^{-1}(x_i r_j + x_j r_i)}{(1 - h_{ii})(1 - h_{jj}) - h_{ij}^2}$$

These findings collectively illustrate the core limitations underpinning our claim. Additionally, our empirical comparisons with data selection methods like LESS and DsDm, which rely on linear additive assumptions, reinforce this point. For instance, Figure 6 shows that samples selected by DsDm are concentrated near the center of the target distribution, failing to capture diversity effectively.

---

**Q2.
Results about Wall-clock Time.**

We provided wall-clock time results in Table 7 of the appendix. They show that TAROT requires comparable time for gradient computation and a slightly longer time for data selection compared to baseline methods (1–2 minutes). We attach the table here for your convenience.

|  | **Gradient Features Computation** | **Data Selection** |
|---|---|---|
| LESS | 32 Hours | 46 Seconds |
| TSDS | 32 Hours | 65 Minutes |
| *TAROT*-Fixed | 32 Hours | 59 Seconds |
| *TAROT*-OTM | 32 Hours | 118 Seconds |

---

**Q3. Providing Detailed Code Instructions**

Thank you for the suggestion. While we would like to update the code during the review phase, current guidelines do not permit this. We commit to releasing the full code along with detailed instructions upon paper acceptance.

**Notation System**

Thanks for your suggestion. We agree that the current notation system needs further unification, and we will revise it accordingly.
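The leverage correction behind Failure Mode 1 in the rebuttal above can be checked numerically: in ordinary least squares, the exact leave-one-out effect on a test prediction equals the naive influence estimate rescaled by $1/(1-h_{ii})$. A small self-contained verification on synthetic data (illustrative only; the data and the index `i` are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
x_test = rng.normal(size=d)

N = X.T @ X
theta = np.linalg.solve(N, X.T @ y)
r = y - X @ theta                      # residuals
H = X @ np.linalg.solve(N, X.T)        # hat matrix; h_ii are leverage scores

i = 7
v_i = x_test @ np.linalg.solve(N, X[i]) * r[i]   # naive linear influence estimate
A_i = v_i / (1 - H[i, i])                        # leverage-corrected effect

# exact leave-one-out refit without sample i
mask = np.arange(n) != i
theta_loo = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
exact = x_test @ (theta - theta_loo)             # true change in test prediction
```

Here `A_i` matches the exact refit while `v_i` underestimates it by the factor $1 - h_{ii}$, which is precisely the gap the cited analysis attributes to ignoring leverage.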
DTZO: Distributed Trilevel Zeroth Order Learning with Provable Non-Asymptotic Convergence
Accept (poster)
Summary: The paper introduces DTZO (Distributed Trilevel Zeroth Order Learning), a novel framework for solving distributed trilevel learning problems with missing gradient information. This is achieved by constructing a cascaded polynomial approximation without relying on gradients or sub-gradients, leveraging zeroth-order cuts for the inner and outer layers and dropping inactive zeroth-order cuts. The authors also carry out a theoretical non-asymptotic convergence rate analysis for the proposed framework. Finally, the paper demonstrates effectiveness through experiments on black-box trilevel learning and robust hyperparameter optimization.

## Update after rebuttal

Thank the authors for the comprehensive rebuttal. The added experimental results help strengthen the findings of the paper and addressed my concerns. I have changed the scores accordingly.

Claims And Evidence: Yes, the claims in the submission are supported by clear and convincing evidence. Specifically, the authors provide theoretical analysis of their polynomial approximation and zeroth-order cut methods, present the distributed zeroth-order algorithm, and provide experimental results to validate the performance of the proposed algorithm.

Methods And Evaluation Criteria: Yes, the authors compared their method with FedRZO on black-box trilevel learning in LLMs. They used Qwen 1.8B-Chat as the black-box LLM and the GLUE benchmark for evaluation, comparing the ASR and ACC performance of the two algorithms. The paper also compared their method with FedZOO and FedRZO on hyperparameter optimization, using a ReLU neural network benchmarked on datasets including MNIST, QMNIST, F-MNIST, and USPS, demonstrating the effectiveness of the algorithm.

Theoretical Claims: Yes, the paper provides rigorous theoretical analysis for the proposed methods. It makes claims about the stationarity, boundedness, smoothness, and complexity of the method and provides details in the appendix.
Experimental Designs Or Analyses: In general, the experimental designs and analyses are sound, as partially discussed in the Methods And Evaluation Criteria section. However, the dataset choice in hyper-parameter optimization lacks diversity, since the datasets are largely MNIST-related, and the baselines are limited: the paper compares against only two methods, FedZOO and FedRZO. Supplementary Material: Yes, I went through the proof analysis in the supplementary material, but due to the length of the appendix, not all mathematical details were carefully checked. Relation To Broader Scientific Literature: The paper relates to the broader scientific literature on distributed trilevel learning, distributed zeroth-order optimization, and cutting-plane methods for bilevel/trilevel optimization. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper has good novelty and solid theoretical foundations, but the experimental study is limited, as mentioned above. In general, it should be above borderline acceptance. Other Comments Or Suggestions: In Assumption 4.4 there is a typo: the word "Following" is mistakenly included in the citation hyperlink. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We truly appreciate your insightful suggestions. We have provided a point-by-point reply to all your questions and hope we have successfully addressed all your concerns.** **(Q1)** In general the experimental designs and analysis are sound, which is partially discussed in Methods And Evaluation Criteria section. But dataset choice in hyper-parameter optimization lacks diversity since they are largely MNIST-related. And the treatment methods are limited, the paper only compares with 2 methods, FedZOO and FedRZO. **(R1)** Thanks for your helpful suggestions. Addressing the trilevel zeroth order optimization problem in a distributed manner is significantly challenging, as even $\textit{finding a feasible solution}$ in a linear trilevel optimization problem is **NP-hard**. This is the **first work** to explore solving trilevel zeroth-order optimization problems in a distributed manner while providing **theoretical guarantees**. We have conducted experiments on several more complex datasets, including CIFAR-10 and CIFAR-100 (using CNN models), as well as time series datasets such as MelbournePedestrian, Crop, and UWaveGestureLibraryAll from the UCR Archive [1], as suggested. Additionally, we have incorporated more state-of-the-art distributed single-level, bilevel, and trilevel optimization methods with zeroth-order estimators [2], including FedAvg+ZO [3], FEDNEST+ZO [4], ADBO+ZO [5], and AFTO+ZO [6], as baseline methods. The experimental results are presented in Table 3.1 below. Please note that combining distributed nested optimization methods with zeroth order estimators does not provide any theoretical guarantees; these methods are included only for comparative evaluation. As shown in Table 3.1, the proposed DTZO exhibits superior performance. 
This can be attributed to two key factors: (1) Compared to existing methods, the proposed DTZO is capable of effectively addressing higher-nested zeroth order optimization problems with non-asymptotic convergence guarantees. (2) The proposed nonlinear zeroth order cuts facilitate the development of a more refined cascaded polynomial relaxation. Table 3.1 Experimental results on robust hyperparameter optimization. | Datasets | FedAvg+ZO | FedZOO | FEDNEST+ZO | ADBO+ZO | FedRZObl | AFTO+ZO | DTZO | | ---------------------- | --------- | ------ | ---------- | ------- | -------- | ------- | ----------- | | MNIST | 0.5196 | 0.5289 | 0.5503 | 0.5341 | 0.5405 | 0.7501 | **0.7927** | | QMNIST | 0.5204 | 0.5245 | 0.5398 | 0.5487 | 0.5467 | 0.7389 | **0.7804** | | F-MNIST | 0.4786 | 0.4874 | 0.5065 | 0.5102 | 0.5023 | 0.6448 | **0.7007** | | USPS | 0.7211 | 0.7277 | 0.7354 | 0.7323 | 0.7379 | 0.7987 | **0.8513** | | CIFAR-10 | 0.3731 | 0.3829 | 0.4034 | 0.3987 | 0.4079 | 0.4692 | **0.5147** | | CIFAR-100 | 0.1967 | 0.2102 | 0.2243 | 0.2354 | 0.2321 | 0.2774 | **0.3023** | | MelbournePedestrian | 0.6214 | 0.6295 | 0.6454 | 0.6412 | 0.6487 | 0.6924 | **0.7250** | | Crop | 0.5379 | 0.5468 | 0.5607 | 0.5681 | 0.5645 | 0.6016 | **0.6351** | | UWaveGestureLibraryAll | 0.6652 | 0.6714 | 0.6924 | 0.6983 | 0.7002 | 0.7689 | **0.8243** | Furthermore, we have conducted additional experiments to evaluate the proposed DTZO in three aspects: - 1) Experimental results on another domain, i.e., continual learning, please see the results in (R1) to Reviewer 1qiE. - 2) Ablation experiments, please refer to (R3) to Reviewer LX43 for details. - 3) Experimental results when using larger LLMs, i.e., Qwen-2-7B and Llama-3.1-8B, please see (R2) to Reviewer LX43 for results. **(Q2)** The paper has good novelty and solid theoretical foundations. But experimental study is limited as mentioned above. In general, it should be above borderline acceptance. **(R2)** We appreciate your comments and support. 
Please refer to our modifications regarding the experimental study in (R1). We hope that our revisions have adequately addressed your concerns. **(Q3)** In Assumption 4.4. there is a typo that word "Following" is mis-included in the cite hyperlink. **(R3)** We sincerely appreciate your careful review and thank you for pointing it out. We have corrected it as suggested. **Reference** [1] The UCR Time Series Classification Archive, 2018 [2] A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications [3] Communication-efficient learning of deep networks from decentralized data [4] FEDNEST: Federated bilevel, minimax, and compositional optimization, ICML 2022 [5] Asynchronous Distributed Bilevel Optimization, ICLR 2023 [6] Provably Convergent Federated Trilevel Learning, AAAI 2024
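The zeroth-order baselines discussed in this rebuttal (e.g., FedAvg+ZO, FEDNEST+ZO) replace gradients with function-value queries. As a hedged, self-contained sketch of the standard two-point Gaussian-smoothing gradient estimator such methods typically rely on (our illustration; the function names, constants, and setup here are not from the paper):

```python
import numpy as np

# Minimal sketch of a two-point zeroth-order gradient estimator:
# average u * (f(x + mu*u) - f(x - mu*u)) / (2*mu) over Gaussian directions u.
# Illustrative only; not the paper's implementation.
def zo_gradient(f, x, mu=1e-4, num_samples=200, rng=None):
    """Monte-Carlo estimate of grad f(x) from function values only."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.normal(size=d)
        g += u * (f(x + mu * u) - f(x - mu * u)) / (2 * mu)
    return g / num_samples

f = lambda x: np.sum(x ** 2)          # true gradient is 2x
x = np.array([1.0, -2.0, 0.5])
g_hat = zo_gradient(f, x, num_samples=5000)
# g_hat approximates the true gradient [2, -4, 1] up to Monte-Carlo noise
```

For a quadratic objective the estimator is unbiased (E[u uᵀ] = I), so accuracy is limited only by the Monte-Carlo sample count.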
Summary: This paper proposes a zeroth-order constrained trilevel learning optimizer that is versatile and can be adapted to a wide range of TLL problems. The authors provide both convergence analysis and experimental validation for the proposed DTZO framework. The performance improvement shown in the experiments is very significant. Claims And Evidence: The structure of the article is generally complete, and the viewpoints are reasonable. Methods And Evaluation Criteria: The experiments look excellent. I have listed a few questions regarding the experiments below, and I hope the authors can provide detailed answers. Theoretical Claims: The paper analyzes the convergence of the DTZO framework, but the introduction is not detailed enough. It would be beneficial for the authors to provide a table comparing the latest progress with the related analysis, presenting the detailed assumptions and conclusions to highlight the advantages of the current analysis. The current version does not make this clear enough. Experimental Designs Or Analyses: The results demonstrated by the experiments are impressive, but the data sample size is insufficient. The authors should expand the testing to include more models and datasets. Supplementary Material: No supplementary materials. Relation To Broader Scientific Literature: It is also beneficial for other ZO methods. Essential References Not Discussed: Enough discussion. Other Strengths And Weaknesses: see above (each block) Other Comments Or Suggestions: see above (each block) Questions For Authors: 1. The performance of the proposed DTZO method in the experiments is significantly better than the baselines, which is excellent. However, I remain unclear about the source of this improvement. Could the authors design ablation experiments to verify the source of this performance boost?
For example, they could identify which techniques were added during the transition from the baseline to the DTZO method and validate the importance of each technique individually. 2. What is the core theoretical contribution of this paper compared to existing work? Does this result represent a theoretical improvement, or can the outstanding performance of the DTZO algorithm be reflected through theoretical analysis? 3. Can the experiments be conducted on larger models, such as foundational models like LLAMA-7B? How do their results perform? 4. Could the authors provide more detailed experimental records? For example, the communication bits for each method, the actual training time (in seconds), and other related details. This would help me better understand the performance of the DTZO method. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely appreciate your insightful and valuable suggestions. We have provided a detailed point-by-point response to your questions below.** **(Q1)** A table that compares the latest progress with the related analysis. **(R1)** This work is the **first** to explore solving the distributed TLL without relying on first-order information while providing **theoretical guarantees**. Per your suggestion, we have included a table comparing the non-asymptotic convergence results of the proposed DTZO with SOTA distributed nested optimization methods, under the assumption/setting where first-order information is either available (FO) or unavailable (ZO). Please note that ZO is more general and challenging than FO. Table 2.1 |Methods|Bilevel (FO)|Bilevel (ZO)|Trilevel (FO)|Trilevel (ZO)| |-|-|-|-|-| |FEDNEST|$\mathcal{O}(1/\epsilon^2)$|-| -| -| |ADBO|$\mathcal{O}(1/\epsilon^2)$|-|-|-| |MDBO|$\mathcal{O}(1/\epsilon^2)$|-|-| -| |FedBiOAcc| $\mathcal{O}(1/\epsilon^{1.5})$|-|-|-| |FedRZObl|-| $\mathcal{O}(1/\epsilon^2)$|-|-| |AFTO|-|-|$\mathcal{O}(1/\epsilon^2)$|-| |**DTZO**|-|-|-|$\mathcal{O}(1/\epsilon^2)$| **(Q2)** More models and datasets, i.e. LLAMA-7B. **(R2)** Additional experiments are conducted on (1) more datasets and models in robust hyperparameter optimization, and (2) larger LLMs in black-box TLL. Specifically, (1) Due to character limitations, please refer to (R1) to Reviewer urFe. (2) Experiments are conducted using larger LLMs, i.e., Qwen-2-7B and LLAMA-3.1-8B, and the results are shown in Table 2.2. Table 2.2 |Methods|SST-2(ASR)|SST-2(ACC)|COLA(ASR)|COLA(ACC)|MRPC(ASR)|MRPC(ACC)| |-|-|-|-|-|-|-| |FedRZObl (Qwen)|0.9147|0.6253|0.8774|0.7094|0.9384|0.7142| |DTZO (Qwen)|0.9647|0.7053|0.9545|0.7653|0.9571|0.7492| |FedRZObl (LLAMA)|0.9854|0.8591|0.8623|0.6556|0.9710|0.7454| |DTZO (LLAMA)|1.0|0.8968|0.9065|0.6891|0.9910|0.7866| **(Q3)** Ablation experiments. **(R3)** **Ablation Study**. 
To analyze DTZO's performance improvements, we conduct an ablation study comparing DTZO against its variants: DTZO(-) and DBZO. DTZO(-) replaces the proposed nonlinear cuts in DTZO with linear cuts, while DBZO removes the cascaded polynomial approximation, using only a single-layer polynomial approximation. It is seen from Table 2.3 that DTZO outperforms all variants, demonstrating the benefits of cascaded polynomial approximation and nonlinear ZO cuts. In addition, we also compare DTZO with a variant of DTZO that does not remove inactive cuts. It is seen from Figure 3 on page 33 that removing inactive cuts greatly enhances DTZO's efficiency, underscoring the importance of this step.

Table 2.3
|Methods|F-MNIST|USPS|UWaveGestureLibraryAll|MelbournePedestrian|
|-|-|-|-|-|
|DBZO|0.5343|0.7492|0.7143|0.6536|
|DTZO(-)|0.6685|0.8212|0.7921|0.7013|
|DTZO|**0.7007**|**0.8513**|**0.8243**|**0.7250**|

**(Q4)** Core theoretical contribution. **(R4)** Existing works focus on single-level and bilevel ZO, while tackling trilevel ZO is under-explored and significantly more challenging (even finding a feasible solution in TLL is **NP-hard**). This work marks an **initial step** in solving TLL without first-order information. A key theoretical contribution is the **first trilevel ZO framework** and the **first non-asymptotic convergence guarantee** (e.g., iteration and communication complexity) for trilevel ZO, aligning with DTZO's superior performance in experiments, i.e., DTZO outperforms SOTA methods by handling higher-nested ZO with theoretical guarantees. Another key theoretical contribution is the construction of the **first theoretically guaranteed ZO cuts** in nested optimization, enabling cascaded polynomial relaxation without gradients. The proposed ZO cut is also the **first nonlinear** cut in nested optimization, offering better approximations for complex functions than linear cuts.
This advancement can also be observed in the ablation study, where DTZO outperforms its variant without ZO cuts. **(Q5)** More detailed records (communication bits, training time). **(R5)** Compared to single-level and bilevel optimization, solving TLL inherently demands higher complexity. This is because 1) TLL has a higher-nested structure: finding a feasible solution in TLL requires solving a bilevel optimization problem; and 2) it involves more optimization variables than bilevel and single-level optimization. Per your suggestions, we provide a comparison of communication complexity and training time between DTZO and the SOTA distributed TLL method. As shown in Table 2.4, DTZO achieves superior performance due to its non-asymptotic convergence guarantees and simplified optimization procedure.

Table 2.4
|Methods|Communication complexity/bits|Crop (training time/s)|UWaveGestureLibraryAll (training time/s)|
|-|-|-|-|
|AFTO+ZO|NA$^1$|652.3|375.2|
|DTZO|$32T(\epsilon)(2d_1+3d_2+3d_3)N+64 N \lfloor \frac{T_1}{\mathcal{T}} \rfloor \mathcal{T}(d_2+d_3)$|485.5|244.6|

$^1$Because this method lacks a non-asymptotic convergence guarantee.
Summary: This paper introduces DTZO, a framework for solving trilevel learning problems where gradient information is unavailable at all levels (e.g., black-box settings) or under partial zeroth-order constraints, with analysis of the convergence rate and communication complexity. Claims And Evidence: The paper provides an extensive related work section discussing bilevel and single-level zeroth-order methods, and positions DTZO to address trilevel zeroth-order learning problems. DTZO achieves up to a 40% improvement in performance over state-of-the-art methods in the empirical results. The authors also provide a non-asymptotic convergence guarantee. While the framework is theoretically flexible, no experiments explore its adaptability across diverse domains beyond LLM prompt learning and hyperparameter optimization, nor is there an ablation study exploring the contribution of the different optimization structures. Methods And Evaluation Criteria: The benchmarking tasks (LLMs and hyperparameter optimization) are relevant applications for zeroth-order learning. Performance metrics such as accuracy, robustness, convergence speed, and computational efficiency are reasonable. However, the dataset selection is limited to small or simple ones (which might also not be suitable for LLM benchmarking). Other bilevel methods or black-box optimization techniques should also be tested as references, given the motivation here. Theoretical Claims: The theoretical contribution looks correct to me upon initial examination. Experimental Designs Or Analyses: The stationarity gap and $\epsilon$-stationary point definition are well established in optimization. The use of penalty-based relaxation aligns with existing literature. However, $T_1$ is not bounded; more explanation of this is desired here. Supplementary Material: I checked Notations and Experiment Settings.
Relation To Broader Scientific Literature: The paper builds on existing work in bilevel optimization and extends it to trilevel settings, with broader connections to black-box optimization, federated learning, and hierarchical machine learning. Essential References Not Discussed: Some other zeroth-order optimization methods are missing. Other Strengths And Weaknesses: ## Pros: This paper is well written, with a solid theoretical contribution and well-structured proofs. ## Cons: No discussion of computational limitations. Dataset and benchmark size is limited. Other Comments Or Suggestions: A comparison with gradient-based or partial-gradient-based methods would help quantify the trade-off between gradient-free and gradient-based approaches in cases where some gradient information is available. Questions For Authors: It seems the authors only discussed the trade-off between performance and complexity, but not the additional computational trade-off compared to other methods. Also see previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We truly appreciate your insightful and valuable suggestions. We have provided a detailed response addressing each of the questions you raised.** **(Q1)** Experiments on another domain and ablation study. **(R1)** This work is the **first** to investigate solving the distributed TLL problem without relying on first-order (FO) information while providing **theoretical guarantees**. The proposed DTZO exhibits **high adaptability** which can be applied to a broader range of TLL problems, please refer to (R5) for details. To address your concerns, we have incorporated additional experimental results on another domain, i.e., continual learning, along with ablation study, as detailed below. - **Continual Learning**. In the experiment, the adversarial architecture-based continual learning task is considered, which can be formulated as a TLL problem. \begin{align} \min&{\sum_j\sum_{(x,y)\in D_j ^B}\frac{{L([\theta_1,\theta_2],x,y)}}{|D_j^B|}}\\\\ s.t.&\theta_1=\mathop{\arg \min}\limits_{\theta_1'}\sum_j\sum_{(x,y)\in D_j^A}\frac{L(\theta_1',x,y, P_1)}{|D_j^A|}\\\\ &s.t.P_1=\mathop{\arg \max}\limits_{P_1'} \sum_j \sum_{(x,y) \in D_j^A}\frac{L(\theta _1',x,y,P_1')}{|D_j^A|} \end{align} The results are reported in Table 1.1. It is seen that the proposed DTZO outperforms SOTA methods because: 1) DTZO is capable of solving higher-nested zeroth order (ZO) problems; 2) the proposed nonlinear ZO cut has superior approximation performance. - **Ablation study**. Due to character limitations, please refer to (R3) to Reviewer LX43. Table 1.1 |Method|MNIST|USPS| |-|-|-| |FedZOO|0.6103|0.6652| |FedRZObl|0.6967|0.7254| |FEDNEST|0.7014|0.7279| |ADBO|0.6942|0.7316| |AFTO|0.9275|0.9423| |**DTZO**|**0.9536**|**0.9654**| **(Q2)** Additional datasets and baseline methods. **(R2)** Due to character limitations, please see (R1) to Reviewer urFe. **(Q3)** Explanation to $T_1$. 
**(R3)** In the proposed framework, $T_1<\infty$ is a constant that can be flexibly adjusted based on specific requirements. Specifically, if the distributed system has sufficient computational and communication resources, a relatively larger $T_1$ (but still **finite**) can be chosen to achieve a better cascaded polynomial relaxation. Conversely, if the resources are limited, a smaller $T_1$ can be set to reduce the iteration and communication complexity. **(Q4)** No discussion of computational limitations. Dataset and benchmark size is limited. **(R4)** Solving TLL is highly challenging, as even $\textit{finding a feasible solution}$ for a linear TLL is **NP-hard**. The proposed DTZO is a **distributed optimization** method, making it well-suited for large-scale learning tasks due to the scalability of distributed algorithms. It decomposes large tasks into subproblems assigned to individual workers, enabling efficient problem-solving. Moreover, even in scenarios with limited computational resources, DTZO can effectively handle them due to its flexibility. By setting a smaller $T_1$, the communication and iteration complexities are reduced. Experimental results on more datasets and larger models are reported in (R2) to Reviewer LX43 and (R1) to Reviewer urFe. **(Q5)** Trade-off between gradient-free vs. gradient-based method. **(R5)** The proposed framework is highly adaptable, accommodating both fully gradient-unavailable TLL and cases with partial gradient access with minimal modifications. We further discuss and compare the trade-off below. 1) **Gradients at 1-level are available.** In this case, Eq. (16–18) can be replaced with gradient descent steps in DTZO, which introduce less noise per iteration and improve convergence rate ($O(1/\epsilon)$). However, Eq. (16–18) do not rely on gradients, making them more applicable to scenarios where gradients are unavailable. 
This represents a trade-off between convergence efficiency and applicability, which can be flexibly adjusted within DTZO. 2) **Gradients at 2-level are available.** In this case, the outer layer ZO cut can be replaced by an FO cut: $\nabla\phi_{out}{(\\{{x_{2,j}^t}\\},\\{{x_{3,j}^t}\\},\\{z_i^t\\})^{\top}}[\\{x_{2,j}-x_{2,j}^t\\};\\{x_{3,j}-x_{3,j}^t\\};\\{z_i-z_i^t\\}]+\phi_{out}(\\{{x_{2,j}^t}\\},\\{{x_{3,j}^t}\\},\\{z_i^t\\})\le L/2(\sum_{i=2}^3\sum_j||x_{i,j}-x_{i,j}^t||^2+\sum_i||z_i-z_i^t||^2)+\varepsilon_{out}$. Since the FO cut is generated from gradients, it introduces less noise in the generation process and yields a superior polynomial relaxation. In contrast, the ZO cut exhibits broader applicability, as its generation does not rely on gradients. This represents a trade-off between polynomial relaxation and applicability, which can be effectively controlled in DTZO. 3) **Gradients at 3-level are available.** Similar to 2), there exists a trade-off between inner layer polynomial relaxation and applicability. **(Q6)** Some ZO references are missing. **(R6)** More discussion on ZO will be added as Appendix J.3. We are happy to include any references the reviewer may suggest.
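The first-order cut in case 2) is built from the standard L-smoothness inequality. A minimal numeric sketch of that inequality under our own toy quadratic (illustrative only; the matrix, dimensions, and symbols below are not from the paper):

```python
import numpy as np

# Hedged sketch: an L-smooth f satisfies
#   f(x) <= f(xt) + grad f(xt)·(x - xt) + (L/2)||x - xt||^2,
# the quadratic upper bound a gradient-based (FO) cut is generated from.
# For a quadratic with PSD Hessian A, L is the largest eigenvalue of A.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A = A.T @ A                                   # PSD Hessian
L_smooth = np.linalg.eigvalsh(A).max()        # smoothness constant

f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

xt = rng.normal(size=5)
gaps = []
for _ in range(200):
    x = xt + rng.normal(size=5)
    ub = f(xt) + grad(xt) @ (x - xt) + 0.5 * L_smooth * np.sum((x - xt) ** 2)
    gaps.append(f(x) - ub)                    # <= 0 whenever the bound holds
max_gap = max(gaps)
```

With exact gradients the bound holds deterministically; a zeroth-order cut replaces `grad` with a noisy finite-difference estimate, which is the noise/applicability trade-off the rebuttal describes.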
The Price of Freedom: Exploring Expressivity and Runtime Tradeoffs in Equivariant Tensor Products
Accept (poster)
Summary: This paper analyzes various tensor product operations in equivariant neural networks for 3D modeling. It introduces measures of expressivity and interactability and improves the Gaunt tensor product (GTP) with a spherical grid, achieving a 30% speedup. The paper also presents microbenchmarks, showing discrepancies between theoretical and empirical runtime, emphasizing the need for application-specific benchmarking. Claims And Evidence: The content has already been provided in the subsequent subsections. Methods And Evaluation Criteria: The content has already been provided in the subsequent subsections. Theoretical Claims: The content has already been provided in the subsequent subsections. Experimental Designs Or Analyses: The content has already been provided in the subsequent subsections. Supplementary Material: The authors did not provide supplementary materials. Relation To Broader Scientific Literature: No. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper provides an extensive theoretical introduction and comparison of different types of tensor products, making it easy for readers to understand the differences between them. 2. The paper introduces a new evaluation method for tensor products, enhancing our understanding of their relative efficiency and expressivity, which can help in selecting the appropriate type of tensor product for networks. Weaknesses: 1. The problem the paper aims to solve is not entirely clear. Specifically, while a new evaluation criterion for tensor-product computations is introduced, the paper does not explain what additional benefits this provides for tensor-product-based equivariant models. For example, is the model's runtime/accuracy proportional to the tensor-product's runtime/expressivity as proposed? Can tensor product types be evaluated solely based on this criterion without model training? The paper also lacks experiments to verify this point. 2. 
Considering the theoretical improvement made to the Gaunt Tensor Product, the 30% speedup of S2grid should be seen as marginal. Compared to other acceleration works like Gaunt Tensor Product and cuEquivariance, this improvement seems relatively modest. 3. Regarding the evaluation of different tensor products, the actual performance of these tensor products also depends on the degree of optimization applied, which greatly impacts the scaling constants. It is necessary to discuss or clarify the implementation method in the paper. Otherwise, such comparisons would be unfair and could not be strictly aligned with the theoretical complexity. Other Comments Or Suggestions: N/A. Questions For Authors: 1. Based on Section 6.2, the lack of antisymmetry in GTP seems to significantly affect its performance. Should further experiments be conducted on general datasets to explore this issue? Additionally, how do the evaluation criteria proposed in the paper inspire solutions for addressing this problem? 2. For single tensor products not embedded in networks, performance benchmarking often faces challenges such as very short execution times or inaccurate timing tools. How did you address these issues in your actual experiments? 3. Overall, the paper gives the impression that it touches on many subtopics but provides only a superficial investigation of each, including the evaluation criteria, improvements to GTP, and the study of antisymmetry. Have you considered organizing these topics into separate papers, each with more detailed experiments, such as embedding the tensor products into networks and testing results on popular small-molecule or materials datasets? Code Of Conduct: Affirmed. Overall Recommendation: 2
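On Question 2 (timing very short executions), a minimal sketch of one common mitigation, amortizing many inner calls and taking a robust statistic over repeats (generic illustration only; not tied to any implementation discussed in this thread):

```python
import timeit
import statistics

# Hedged sketch: time a very short operation by running it many times per
# measurement, then take the median over several repeats to damp timer
# resolution and transient system noise.
def bench(fn, inner=1000, repeats=20):
    """Median per-call wall time in seconds."""
    times = timeit.repeat(fn, number=inner, repeat=repeats)
    return statistics.median(times) / inner

per_call = bench(lambda: sum(i * i for i in range(100)))
```

Profiling overheads and GPU asynchrony make wall-clock timing harder for accelerator kernels, which is why hardware-level counters are sometimes preferred.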
Rebuttal 1: Rebuttal: We thank the reviewer for reading our work and for the feedback. We appreciate that the reviewer finds our explanation of the differences between TPOs clear. Regarding the weaknesses: ## Weaknesses 1. We would like to clarify that the main point of the paper is understanding how alternatives to CGTP are achieving their performance gains. Crucially, we want to clarify that direct speed comparisons between GTP and CGTP are unfair. Instead, **GTP is performing a different operation from CGTP**. This realization was a major motivation for our work; in fact, through our work we see that GTP does not give speedups compared to CGTP in a fairer comparison. The key point is that our framework provides a **systematic way** to analyze the differences between any TPOs (such as GTP, MTP, CGTP) and compare speedups in a fairer setting. We believe that this is extremely valuable for designing new TPOs. We do not claim that our measure of expressivity has a direct correlation with actual training performance. In fact, combining the previous observation that GTP can achieve similar performance as CGTP and our result showing it **provably** has fewer degrees of freedom is already interesting. The **true benefit of GTP should not be understood as a speedup but rather the elimination of many degrees of freedom with minimal impact**. These observations illuminate potential avenues for further improvement. 2. By identifying that we can directly use a S2Grid, we highlight a possible asymptotic runtime improvement over the original implementation of GTP by using a S2FFT. This is the **first** algorithmic improvement which cannot simply be explained by a reduction in degrees of freedom. We believe this is quite interesting and can potentially have a significant impact in the future. However, we emphasize that the current $\ell$s used are far too small to see these asymptotic benefits from the complicated S2FFT algorithm.
Despite this, using a S2Grid and a seminaive S2 Fourier transform (instead of S2FFT) gives a much simpler implementation of GTP which already sees performance gains over the original. 3. This is a good point, and we did try to mitigate implementation differences as much as possible (e.g. using JAX for everything). We would like to highlight that it is not the results themselves but the method of benchmarking that is important. Our microbenchmarks provide valuable insight on how different implementations can be improved. In particular, the discrepancy between FLOP counts (which are invariant) and walltime indicates potential for significant acceleration (e.g. custom kernels) by better utilizing the GPU. ## Questions 1. This is a great question! Our experiment in 6.2 gives a simple demonstration highlighting this failure. However, we actually strongly suspect most common molecular datasets are minimally impacted. Loosely speaking, this is because irrep types of commonly predicted quantities such as energy or forces can be constructed purely from symmetric tensor products of the input irrep types. We are actively exploring the impact of antisymmetric tensor products in future work. Further, we have come up with a new GTP-like tensor product which does not suffer the antisymmetry issues and has the same potential asymptotic benefit from S2FFT. However, we feel it distracts from the main message of this paper and have omitted it. 2. Thanks for the question! We chose the highest batch size (10,000) that we could fit onto the GPU without going out of memory. To offset any profiling overheads, we chose to measure our walltime/FLOPs/throughput metrics using hardware instruction counters at the GPU driver level, which is based on the Roofline Toolkit Framework for Deep Learning (https://arxiv.org/abs/2009.05257) and has been successfully used in FourCastNet (https://dl.acm.org/doi/10.1145/3592979.3593412). 3.
We are providing a framework for systematically analyzing how different TPOs achieve efficiency gains. Theoretically, we provide a way to see whether these came from a clever reduction in degrees of freedom or from actual algorithmic improvements. Our microbenchmarks provide a way to see runtimes of TPOs in practice and avenues for implementation improvements. Many previous works simply state their method is faster and achieves similar training results. Our work provides a way to analyze why their method was faster and whether there may be possible improvements or limitations. Our experiments are designed with this goal in mind. As such, the comparison of different methods in training results is not the central focus of this work (those results can be found in the original papers introducing the specific TPOs). However, we understand the importance of training performance and agree that evaluation of training performance is also crucial for new TPO proposals. --- Rebuttal Comment 1.1: Comment: I sincerely appreciate the authors' detailed response, especially the clarification and explanation regarding the theme of the paper. Overall, I fully recognize the theoretical contribution of this work in the analysis of tensor product operations, and I will raise my score to 2 while my main concerns remain. Firstly, similar to the views expressed by Reviewers a8uP and t24S, the novelty of the proposed GTP and its performance gains appear to be limited. If the authors attribute this to the current maximal order being too small, then the question arises: would higher orders be truly beneficial for practical applications? According to some literature such as EquiformerV2, increasing the maximal order further seems to yield diminishing returns, which poses limitations for the applications. In addition, regarding the benchmarking methodology and the reported results: First, FLOP counts are not invariant.
Given that certain computations can be reused (as is indeed the case in mainstream libraries such as e3nn and cuEquivariance), this can significantly affect the constant factors in computation cost and, consequently, the actual runtime. Moreover, based on your statement that "microbenchmarks provide valuable insight on how different implementations can be improved," it would be more appropriate to perform benchmarks using official implementations from current mainstream software, or at least provide a comparison against your own simplified implementations. This helps ensure that the insights remain applicable and timely for currently used implementations. Because engineering optimizations that reduce redundancy have been extensively studied, even well-optimized methods can differ by orders of magnitude depending on implementation. Therefore, discussing benchmark insights in isolation from these practical factors seems of limited value. [1] https://developer.nvidia.com/blog/accelerate-drug-and-material-discovery-with-new-math-library-nvidia-cuequivariance [2] Geiger M, Smidt T. e3nn: Euclidean neural networks. arXiv preprint arXiv:2207.09453, 2022. [3] Liao Y L, Wood B, Das A, et al. EquiformerV2: Improved equivariant transformer for scaling to higher-degree representations. arXiv preprint arXiv:2306.12059, 2023. --- I appreciate the author's further clarifications on their Rebuttal Comment Reply, and I will put my further comments here. I fully agree with the statement that *"higher degrees are consistently helpful"*. However, the precision gains tend to exhibit diminishing returns when weighed against the loss in computational efficiency. Therefore, most molecular models opt to cap the maximum degree at 2. For instance, the spherical grid approach benchmarks presented by the author also select MACE, which uses a maximum degree of 2.
Thus, if strong performance is only observed in higher-degree networks, it may indeed limit the method’s applicability. Furthermore, I appreciate that the author provided many examples of high-order tensor methods. If the goal is to highlight the spherical grid approach’s capabilities at higher degrees, perhaps considering those methods — rather than MACE — as benchmark baselines would be more appropriate. Regarding the implementation details, I appreciate the additional clarifications. However, in the absence of supplementary material, I am unable to offer further comments on the technical specifics. One reason I suggested “using official implementations from current mainstream software” is that, according to the open-source official implementation of GauntTP (https://github.com/lsj2408/Gaunt-Tensor-Product), its relative wall-time compared to the CGTP implementation in `e3nn` appears to be the exact opposite of what is shown for CGTP in your Figure 4. This raises a reasonable concern that there may still be unresolved issues requiring further investigation. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reconsidering their score, and address their final questions here. For the question “would higher orders be truly beneficial for practical applications”, we provide some evidence from prior work: - the neural atomic potentials MACE (Fig. 2 in https://arxiv.org/pdf/2206.07697), NequIP (Table 2 in https://www.nature.com/articles/s41467-022-29939-5 and Fig. 6 in https://www.nature.com/articles/s41524-024-01264-z) and FLARE (Fig. 7 in https://www.nature.com/articles/s41524-024-01264-z), - charge density prediction (Fig. 2 in https://arxiv.org/pdf/2210.04766, Fig. 2 in https://iopscience.iop.org/article/10.1088/2632-2153/acb314#mlstacb314f2 and Fig. 3 in https://www.nature.com/articles/s41524-024-01343-1 from ChargE3Net), - and the autoregressive molecular generative model Symphony (Fig. 
12 in https://openreview.net/pdf?id=MIEnYtlGyv#page=23.62) where we see significant benefit from increasing the maximum degree L of the irreps in the hidden layers. Indeed, even in the EquiformerV2 paper (Table 1c in https://arxiv.org/pdf/2306.12059#page=5.94), the results start from L = 4, which is already much higher than in usual applications of equivariant networks. Note that the paper also makes the claim that ‘higher degrees are consistently helpful’ in the caption of Table 1c. For our implementations, we used primitives from e3nn-jax as much as possible. Note that these primitives are already significantly faster than those in e3nn-torch, even with the new torch.compile functionality. We will add a comparison between these primitives in the camera-ready version of our submission, if accepted. We also apologize for not uploading our code as supplementary information, which we will add in our camera-ready version, if accepted. Importantly, we precompute all constants (such as the Clebsch-Gordan coefficients and change-of-basis matrix for the Fourier transform on S2) during compile-time. These computations are not measured in the FLOPs we report. In fact, we were very careful to choose hardware-level counters instead of relying on the FLOP counter in JAX because of a bug in XLA that gives incorrect FLOP counts for certain operations (https://github.com/openxla/xla/issues/10479). Regarding cuEquivariance, the blog linked by the reviewer shows speedups for a specific tensor product used in MACE and NequIP (as part of DiffDock). Note that cuEquivariance provides kernels for a weighted version of the tensor products we use here, which can potentially have different runtimes because of compiler optimizations. 
In fact, the unweighted version of their tensor product kernel (https://github.com/NVIDIA/cuEquivariance/blob/fd8484b9ae93a6866e358a16dfb3a2e5474b0524/cuequivariance/cuequivariance/group_theory/descriptors/irreps_tp.py#L86-L146) matches the operations we benchmark but performs significantly worse. We will add a discussion of this issue to the camera-ready version of our submission, if accepted. The point of our work is to create a level playing field across tensor products by appropriately normalizing their input and output spaces. In particular, we wanted to decouple algorithmic improvements from engineering improvements (such as those done by cuEquivariance and OpenEquivariance). We hope this response clarifies the questions brought up by the reviewer.
Summary: The paper presents a comprehensive analysis of tensor products and tensor product operations used in $E(3)$-equivariant models based on spherical tensors, including the Clebsch-Gordan tensor product (CGTP), Gaunt tensor product (GTP), and Matrix tensor product (MTP). The authors introduce expressivity and interactability as key measures to characterize these operations, demonstrating that speedups presented in prior work often come at the cost of expressivity. Furthermore, the paper proposes a more efficient implementation of GTP using a spherical grid, demonstrating practical performance improvements. ## update after rebuttal The authors addressed all my questions. However, I maintained my score at 4 as I remain uncertain about the broader impact of the work. Overall, I recommend this work for publication. Claims And Evidence: The paper's main claim is that tensor product operations proposed as an alternative to CGTP achieve computational efficiency at the cost of expressivity. This claim is supported through theoretical analysis and numerical experiments. The authors comprehensively compare asymptotic computational complexities and runtimes, showing that CGTP retains the highest expressivity but can be more computationally expensive. Furthermore, the paper provides empirical evidence that GTP cannot represent antisymmetric interactions using a 3D Tetris classification task. Methods And Evaluation Criteria: The proposed methods, such as the spherical grid implementation of GTP, and evaluation criteria, including 3BPA and 3D Tetris data sets, are well-aligned with the problem. The authors carefully assess theoretical expressivity and runtime complexity as key metrics, including rigorous numerical experiments. Theoretical Claims: The theoretical claims regarding the expressivity of different tensor product operations are well-supported. 
The derivations of runtime complexities and selection rules for each tensor product align with established results in the literature. The discussion on expressivity offers a clear explanation for why certain tensor products fail to capture specific interactions. However, the statement that CGTP is the only true tensor product is arguable, especially given that the study focuses exclusively on spherical tensors. For example, an alternative tensor product can be defined in the Cartesian basis, an area with a smaller but gradually growing body of literature that deserves mention in this context; see, e.g., https://arxiv.org/abs/2306.06482, https://arxiv.org/abs/2405.14253, https://arxiv.org/abs/2412.18263, and references therein. Experimental Designs Or Analyses: The experimental design is sound, with well-chosen benchmarks and clear comparisons across tensor product operations. However, the benchmarks focus primarily on the forward pass, while the section title for the 3BPA data set states that atomic forces and energies were evaluated. Given the importance of gradients in applications such as machine-learned force fields, it would be beneficial to analyze backward pass performance if possible or clarify whether this analysis has already been conducted for the 3BPA data set. Supplementary Material: All parts of the supplementary material have been reviewed. Relation To Broader Scientific Literature: The paper fits well with the broader literature on equivariant neural networks and tensor product operations. It thoroughly discusses prior work on Clebsch-Gordan coefficients, corresponding tensor products, and equivariant architectures based on spherical tensors. However, it does not address tensor products in alternative bases, e.g., the Cartesian basis. While the spherical basis is currently the dominant choice in the community, emerging studies on Cartesian representations could provide additional context for the paper’s claims. 
Essential References Not Discussed: The paper does not reference works discussing tensor products in bases other than the spherical one (e.g., https://arxiv.org/abs/2306.06482, https://arxiv.org/abs/2405.14253, https://arxiv.org/abs/2412.18263). Aside from this, the paper sufficiently covers related literature. Other Strengths And Weaknesses: Please refer to the comments in previous sections for all strengths and weaknesses of the presented work. Other Comments Or Suggestions: 1. The authors could clarify why the discussion is limited to the spherical basis and whether their framework extends to other bases. 2. It would be helpful to discuss potential extensions of the framework to “backward pass” computations. 3. Briefly mentioning how alternative bases might impact expressivity and runtime would offer a more comprehensive perspective. Minor comments: 4. The caption of Figure 5 appears redundant, as it is a part of Figure 4. 5. "($\mathbf{x}^{(l_1)}$" appears to be missing in Equation 19. Questions For Authors: My questions would relate to the issues or comments raised above, so addressing them would suffice to change my evaluation of the paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of our work and positive feedback. We appreciate that the reviewer finds our evaluation criteria and experiments well aligned with the problem, the theoretical claims well supported, and our discussion on expressivity a clear explanation for why certain TPOs fail to capture specific interactions. Regarding the comments and suggestions 1. We thank the reviewer for bringing our attention to recent work in the Cartesian basis. This is a very interesting body of work, which we will definitely mention in our revised manuscript. We would like to point out that our focus is on irrep-based frameworks, as in that case maximally expressive linear layers are easy to parameterize (as briefly explained in Section 2.1). For the case of Cartesian tensors, it is easy to perform tensor products and remain a Cartesian tensor. However, especially at higher rank, it becomes difficult to parameterize fully expressive equivariant linear layers. Regarding the specific sources: It seems the interaction in https://arxiv.org/abs/2306.06482 is limited to rank 2 tensors and is a special case of MTP. https://arxiv.org/abs/2412.18263 seems to have significant work on decomposing Cartesian tensors to irreps in order to construct equivariant linear layers. It seems https://arxiv.org/abs/2405.14253 introduces a Cartesian tensor product which is also a true tensor product. However, this method has poor asymptotic scaling for 2-fold tensor products, which were the focus of this work. Instead, their speedups are for small $\ell$ or large $\nu$-fold tensor products (performing multiple tensor products in succession like in MACE). We believe our framework can also be adapted for analyzing large $\nu$ by replacing the fixed bilinearities with fixed $\nu$-linearities and defining expressivity as the dimension of constructible $\nu$-linearities by inserting equivariant linearities in the inputs and output. 
We will add a discussion of these points in our final manuscript. 2. This is a good question. We would expect the same optimizations that accelerate the forward pass to also apply to the backward pass. We were able to confirm this hypothesis by running a short experiment on the different input/output irreps settings, and we observe trends similar to the forward pass. The plots and experiment details are included below. In particular, we create a random $z$ and then benchmark how long it takes to compute $\nabla_x \|T(x,y)-z\|^2$. https://anonymous.4open.science/r/PriceofFreedom-A835/benchmarking/plots/png/walltime_bwd_gpu_MIMO_10000_RTX-1.png https://anonymous.4open.science/r/PriceofFreedom-A835/benchmarking/plots/png/walltime_bwd_normalized_gpu_MIMO_10000_RTX-1.png https://anonymous.4open.science/r/PriceofFreedom-A835/benchmarking/plots/pof_legend.png 3. We will definitely mention other basis choices in our revised manuscript and discuss Cartesian tensors in particular. The content in Section 2.1 motivates why we focus on an irrep basis. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response but will keep my score at 4, as I am unsure of the work's broad impact for an oral presentation. --- Reply to Comment 1.1.1: Comment: We would like to thank you again for your time spent reviewing our work. Your feedback was very useful in helping improve our work. In addition, we will add our code as supplementary material in the camera-ready version, if accepted.
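The backward-pass benchmark described in the rebuttal above (compute $\nabla_x \|T(x,y)-z\|^2$ for a random target $z$) can be sketched in plain NumPy. The actual experiments use JAX and e3nn-jax primitives, so the bilinear map `T`, its coefficient tensor `C`, and all shapes below are illustrative assumptions rather than the paper's implementation:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# A generic fixed bilinear map T(x, y)_k = sum_ij C[k, i, j] x_i y_j,
# standing in for a tensor product operation (illustrative shapes).
dx, dy, dz = 4, 4, 6
C = rng.normal(size=(dz, dx, dy))

def T(x, y):
    return np.einsum("kij,i,j->k", C, x, y)

def loss(x, y, z):
    r = T(x, y) - z
    return r @ r  # squared error ||T(x, y) - z||^2

def grad_x(x, y, z):
    # Analytic gradient of the squared error with respect to x.
    r = T(x, y) - z
    return 2.0 * np.einsum("k,kij,j->i", r, C, y)

x, y = rng.normal(size=dx), rng.normal(size=dy)
z = rng.normal(size=dz)  # random target z, as in the rebuttal's experiment

# Sanity check against central finite differences.
g = grad_x(x, y, z)
eps = 1e-6
for i in range(dx):
    e = np.zeros(dx)
    e[i] = eps
    fd = (loss(x + e, y, z) - loss(x - e, y, z)) / (2 * eps)
    assert abs(fd - g[i]) < 1e-4 * max(1.0, abs(g[i]))

# Wall-clock timing of the backward computation.
t0 = time.perf_counter()
for _ in range(1000):
    grad_x(x, y, z)
print(f"backward: {time.perf_counter() - t0:.4f} s / 1000 calls")
```

In the real benchmark the timing would of course be done on the JIT-compiled JAX gradient on a GPU; the point here is only the structure of the measured computation.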
Summary: This paper aims to advance the fundamental understanding of equivariant neural networks by studying different mappings from the product of vector spaces into tensor product spaces (including the well-known CG tensor product), which serve as the building blocks of expressive equivariant architectures. In particular, it introduces a straightforward but novel measure to compare the expressivity of different mapping designs. Additionally, the paper discusses implementation details and provides a comprehensive analysis of each operation in terms of FLOPs, wall-clock time, GPU utilization, and expressivity. Claims And Evidence: The experiments demonstrate the runtime and expressivity, aligning well with the theoretical analysis. Methods And Evaluation Criteria: The proposed measure of expressivity for different mappings into tensor product spaces seems standard and reasonable. The runtime evaluation is rigorous and comprehensive, whereas the assessment of expressivity appears relatively adequate. Theoretical Claims: The theoretical claims appear to be correct overall, as well as their proofs. Experimental Designs Or Analyses: The experimental design is sound. Supplementary Material: I have quickly reviewed the appendices, with a more careful examination of Appendices F and M. Relation To Broader Scientific Literature: Potentially, building expressive yet efficient equivariant neural networks using tensor product operations is a challenging and important problem. I believe this paper contributes to the development of such networks and, consequently, to their application. Essential References Not Discussed: The novelty of the new GTP implementation seems weak. First, the discrete spherical harmonic transform has already been extensively studied. In particular, this transform has a well-known convolution theorem, which appears to be closely related to the proposed implementation. 
However, it is unclear whether the authors have adequately addressed the overlap with existing works or provided relevant references from related fields. [1] Blais, J. R. (2008, June). Discrete spherical harmonic transforms: Numerical preconditioning and optimization. In International Conference on Computational Science (pp. 638-645). Berlin, Heidelberg: Springer Berlin Heidelberg. Other Strengths And Weaknesses: Weaknesses: - The contribution of the new GTP implementation appears limited (see my comment above). - The evaluation of expressivity is constrained by the simplicity and small size of the dataset used. It would be highly beneficial to see a discussion of the trade-offs when applied to a large-scale real-world dataset. I believe the implications and potential impact of this work are limited by its current application to a relatively simple dataset. Other Comments Or Suggestions: I believe I understand the definition of the selection rule, where $c_*$ appears to represent the multiplicity of irreducible representations in different $G$-spaces. However, in Proposition 3.1, $c_Z$ is a tuple rather than a single number. It seems that the goal is to separate the interaction outputs—essentially, different tuples correspond to different indices. If my interpretation is correct, I believe the definition and related explanations could be revised to improve clarity for a broader audience. Questions For Authors: The theoretical results and discussion on runtime appear rigorous to me. However, the similarities between the new implementation and existing methods should be clarified. Additionally, the real-world trade-offs remain unclear. The experiments in Figures 6 and 7 do not cover all the different methods and do not consider the performance with normalization in terms of runtime. 
I understand that such trade-offs might not be consistent across different tasks, but I believe that providing more experimental results on existing real-world benchmarks comparing different methods (CGTP, GTP, MTP) with normalization would significantly enhance the contribution. I would consider raising my score if the authors address these concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of our work and helpful feedback. We appreciate that the reviewer finds our measure of expressivity reasonable and our runtime evaluation rigorous and comprehensive. Regarding the weaknesses 1. > Contribution of the new GTP implementation appears limited We very much agree that spherical harmonic transforms (which we refer to as $S^2$ Fourier transforms) have been extensively studied, and we cited Healy et al. 2003, which improves upon the seminal work by Driscoll and Healy 1994 (which we will add as a citation). However, much of the work on S2FFTs involves numerical precision issues for very high $\ell$ when using asymptotically fast algorithms. This is beyond the scope of our work, as most equivariant networks are still limited to small $\ell$. Instead, we point out that asymptotic improvements exist if equivariant networks scale to very high $\ell$ in the future. Importantly, we emphasize that this connection gives the first asymptotic runtime improvement over sparse CGTP that is not simply a consequence of reduced expressivity. Our actual grid implementation uses a much simpler seminaive algorithm described in Healy et al. 2003 and Driscoll and Healy 1994. 2. Our experiments in Section 6.1 (Figure 6) mainly serve to show that a drop-in replacement of Fourier GTP with Grid GTP leads to speedups. These are done on standard benchmarks of 3BPA and rMD17 and with an equivalent number of GPU hours. If the reviewer is concerned about actual training performance, comparisons against CGTP can be found in the original GTP paper (https://arxiv.org/abs/2401.10216). Our experiment in Section 6.2 (Figure 7) is a simple demonstration that the lack of “antisymmetric” interactions can have tangible limitations on model performance. 
If the reviewer is concerned about how the lack of such interactions affects real-world datasets, which is an excellent question, we actually strongly suspect most common molecular datasets are minimally impacted. Loosely speaking, this is because irrep types of commonly predicted quantities such as energy or forces can be constructed purely from symmetric tensor products of the input irrep types. We are actively exploring the impact of antisymmetric tensor products in future work. # Other Comments or Suggestions Your understanding is correct: the $c_*$ are meant to separate the multiplicity of irreps. For CGTP, we wrote $c_Z$ as a tuple to try to highlight which inputs contributed; however, in practice, this tuple is simply embedded as an integer. We will explain this more clearly in our revised manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I will maintain my score. --- Reply to Comment 1.1.1: Comment: We would like to thank you again for your time spent reviewing our work. Your feedback was very useful in helping improve our work. In addition, we will add our code as supplementary material in the camera-ready version, if accepted.
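The "reduced degrees of freedom" point running through this rebuttal can be illustrated with a toy channel count under the SO(3) selection rule $|l_1 - l_2| \le l_3 \le l_1 + l_2$. This is a simplified sketch, not the authors' expressivity definition (which is dimension-based): here we only count paths, assuming CGTP keeps one output channel per allowed $(l_1, l_2, l_3)$ triple while a GTP-style operation collapses all paths into a single channel per output $l_3$:

```python
from itertools import product

def cg_paths(L1, L2, lmax):
    """All (l1, l2, l3) triples allowed by the SO(3) selection rule
    |l1 - l2| <= l3 <= l1 + l2, truncated at lmax."""
    return [(l1, l2, l3)
            for l1, l2 in product(L1, L2)
            for l3 in range(abs(l1 - l2), min(l1 + l2, lmax) + 1)]

L1 = L2 = range(0, 4)   # irreps l = 0..3 in each input (illustrative)
lmax = 3
paths = cg_paths(L1, L2, lmax)

# CGTP keeps every allowed path as a separate output channel ...
cgtp_channels = len(paths)
# ... while a GTP-style operation sums all paths into one channel per l3.
gtp_channels = len({l3 for _, _, l3 in paths})

print(cgtp_channels, gtp_channels)  # prints: 34 4
```

The large gap between the two counts is the toy analogue of the expressivity loss discussed in the paper: the cheaper operation simply exposes far fewer independent degrees of freedom.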
Summary: This paper investigates tensor product operations that have recently been proposed as faster alternatives to the standard Clebsch-Gordan tensor product in $E(3)$-equivariant neural networks, an important class of models for 3D modeling tasks. In particular, the authors introduce measures of expressivity and interactability, and analyze the runtimes (FLOPs and wall-clock time), GPU utilization, expressivity, and asymptotics of various operations. Finally, they provide a novel implementation of the Gaunt tensor product without sacrificing asymptotics, which is shown to be faster in practice. Claims And Evidence: The main claim in this paper is that, although several operations have been proposed to achieve E(3) equivariance efficiently, efficiency, in many cases, comes at the cost of expressivity. This claim is supported by appropriate expressivity measures, while runtime is examined by carefully designed benchmarks illustrating discrepancies from the theoretical bounds. Additionally, the improvement they proposed is shown experimentally to be indeed a faster version of the GTP tensor operation. Methods And Evaluation Criteria: The evaluation criteria (expressivity and runtime) are reasonable, since this is a typical trade-off in equivariant machine learning. Additionally, the proposed implementation (although not strictly a new method) makes sense as it shows runtime improvements. Theoretical Claims: I have looked at the proof of Theorem 3.2, which appears correct. Experimental Designs Or Analyses: I found the experimental designs/analyses carefully designed, with several factors tested (asymptotics, FLOPs, GPU utilisation, wall-clock time). Supplementary Material: I skimmed the supplementary material for essential information but did not thoroughly review it. 
Relation To Broader Scientific Literature: E(3)-equivariant neural networks are an important class of models for 3D modeling tasks, with the tensor product being the key non-linear operation in several such architectures. These have found application in, e.g., molecular modeling and physical simulations. The authors consider several recently proposed alternatives to the standard Clebsch-Gordan tensor product in the literature, which offer improved runtimes, thus well-contextualising their work. Essential References Not Discussed: Nothing to note. Other Strengths And Weaknesses: **Strengths** 1. The paper is well-organized and well-presented (although I believe that some prior knowledge of the field is required to carefully follow it). 2. The experimental section is thorough, clearly delivers evidence for the claims made, and illustrates the differences across different tensor product operations. 3. The paper provides important clarifications and clears possible misunderstandings regarding the operations that have been proposed in the literature (Table 1 is quite useful). Additionally, it provides important insights to practitioners by thoroughly analyzing the asymptotics and expressivity of various tensor product implementations. These insights can help pave the way for developing and analysing new tensor product operations. **Weaknesses** 1. Normalizing runtimes for expressivity does not seem sufficiently justified, or more precisely, does not necessarily give sufficient guidance to practitioners on what to choose. For example, generalisation or optimisation might be equally or more crucial for certain real-world tasks, and therefore it might be the case that the cheaper operation should be chosen. 2. Although the paper makes important clarifications for the field, I am a bit concerned that it lacks novelty, since it mainly analyses existing methods without necessarily providing actionable guidelines. 3. 
It is not evident how much of a fair comparison is made in benchmarking the various tensor product implementations. Could it be the case that different implementations might (significantly) alter the experimental metrics (as with GTP)? Other Comments Or Suggestions: 1. In lines 59-60, the authors state that linear maps between equivalent irreps are multiples of the identity. To my knowledge, this is not the case for real representations of an arbitrary group. However, this does hold by Schur’s lemma for complex representations. 2. Typo in lines 249-250: “(some details about the hardware here)”. 3. Typo in Equation (19). Questions For Authors: 1. Can you justify the contribution of normalizing for expressivity? While the defined measure of expressivity and asymptotic runtimes make sense on their own, for example, the conclusion about the ratio runtime/expressivity is not really clear. 2. In Figure 4, what is the reason for CGTP (Sparse) having low FLOPs but high walltime? 3. Have you considered any examples of real-world datasets where leveraging antisymmetric interactions would improve performance? It would be interesting to evaluate the various tensor product operations on such a problem. **Note**. The paper is technically sound and provides certain valuable insights. I am currently (hesitantly) leaning towards acceptance, but I hold reservations due to my concerns about novelty, and perhaps also relevance/impact (since the improvements proposed are mostly related to implementation). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough feedback on our work. We appreciate that the reviewer finds our work well-organized and well-presented, the experiments thorough, and that it provides important insights to practitioners. Regarding the weaknesses ## Weaknesses 1. This is a great question. The motivation is to distinguish runtime improvements caused by cleverly reducing degrees of freedom from improvements caused by actually making tensor products faster. In particular, we also emphasize that the output spaces of these algorithms are different, making a direct comparison unfair. 2. We provide a framework to analyze the expressivity-vs-runtime tradeoffs in popular tensor product operations (TPOs) in equivariant neural networks. These tradeoffs are non-obvious, and can help other practitioners realize that these operations are not necessarily equivalent to each other. Importantly, we highlight that our work has revealed that most existing TPOs actually only get improvements from reducing degrees of freedom. Our connection to $S^2$ fast Fourier transforms provides the first (though for now impractical) algorithm that has an asymptotic improvement on the runtime/expressivity ratio. Our microbenchmarking efforts help highlight the different algorithmic tradeoffs. MTP focuses on maximising GPU utilization with a more matrix-multiplication-friendly algorithm, which ends up costing more FLOPs and hence has a higher overall runtime. CGTP, on the other hand, is FLOP-efficient but suffers from poor GPU utilization. GTP offers a balance between both. 3. This is a great question; indeed, different implementations could have drastically different walltimes in practice, and we tried to mitigate implementation differences as much as possible (e.g., implementing all of the algorithms in JAX). We would like to highlight that it is not the results themselves but the method of benchmarking that is important. 
Our microbenchmarks provide valuable insight on how different implementations can be improved. In particular, the discrepancy between FLOP counts (which are invariant) and walltime indicates potential for significant acceleration (e.g., custom kernels) by better utilizing the GPU. ## Other comments and suggestions 1. Good catch! Indeed, we do need an algebraically closed field for the maps to be multiples of the identity. In the case of $SO(3)$, the irreps over complex vector spaces are real irreps. 2. Thanks for catching this! We meant to add 'AMD EPYC'. 3. Thanks for catching this! ## Questions 1. As discussed above, the normalization for expressivity accounts for the fact that the output spaces of the different TPOs are not identical. We agree that there can be many ways to account for this difference. 2. Good question! The algorithm we used for CGTP-sparse (Appendix G), while able to exploit the sparsity, introduced a lot of overhead due to conditional logic, leading to poor GPU utilization. As is seen in recent efforts (https://developer.nvidia.com/blog/accelerate-drug-and-material-discovery-with-new-math-library-nvidia-cuequivariance/, https://arxiv.org/abs/2501.13986v2), a more GPU-friendly algorithm can improve the runtime. 3. This is an excellent question! In most of the popular datasets, we strongly suspect antisymmetric operations to have minimal impact. Loosely speaking, this is because irrep types of commonly predicted quantities such as energy or forces can be constructed purely from symmetric tensor products of the input irrep types. We plan to explore the impact of antisymmetric tensor products further in future work.
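The antisymmetry point discussed in this thread (the reviews note that GTP cannot represent antisymmetric interactions) can be made concrete with the standard example of the cross product, which is the $l=1$ output of the $1 \otimes 1$ Clebsch-Gordan tensor product. This is an illustrative sketch, not code from the paper; the elementwise product below is only a stand-in showing why a commutative pointwise product cannot reproduce an antisymmetric interaction:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)

# The cross product is antisymmetric: swapping the inputs flips the sign.
antisym = np.cross(x, y)
assert np.allclose(np.cross(y, x), -antisym)

# Any interaction built from a commutative pointwise product of the inputs
# (the mechanism behind GTP's multiplication of functions on the sphere)
# is symmetric, T(x, y) = T(y, x), so it can never equal the cross
# product for all x and y.
sym = x * y  # elementwise stand-in for a symmetric interaction
assert np.allclose(y * x, sym)
print("cross(x, y) =", antisym)
```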
DiffMS: Diffusion Generation of Molecules Conditioned on Mass Spectra
Accept (poster)
Summary: The paper introduces DiffMS, a diffusion-based framework for generating molecular structures from mass spectra. DiffMS combines existing approaches in discrete graph diffusion (DiGress) with a pretraining framework in an encoder-decoder transformer architecture. The authors conducted experiments and evaluations on two generation datasets, CANOPUS and MassSpecGym. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: The method is built on discrete graph diffusion (DiGress) but integrates a pretraining strategy and the molecular formula as an additional constraint. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths**: - Strong performance: The authors show strong performance and outperform previous approaches on the two benchmarks (CANOPUS and MassSpecGym). - Effectiveness of pretraining strategy: The paper includes ablation studies that demonstrate the effectiveness of the pretraining strategy on overall performance, indicating that pretraining on a large number of molecules is crucial for achieving high performance. **Weaknesses**: The method still has some weaknesses. - Reliance on external tools for formula determination: The method relies on external tools (e.g., SIRIUS) for formula determination, which could lead to errors in the predicted formula. While the paper argues that chemical formulae can be determined with sufficient accuracy, it does not address the potential errors from these tools or where they might fail. - Lack of discussion on computational requirements: The paper does not discuss the runtime or resource requirements of the proposed method. Other Comments Or Suggestions: No. 
Questions For Authors: Can the chemical formula derived using tools like SIRIUS be ambiguous or incorrect (e.g., due to low-resolution spectra)? If so, are there mechanisms to mitigate or improve such inaccurate formulae? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful feedback and for highlighting the novelty of our discrete diffusion method and pretraining strategies. Below, we have addressed the concerns regarding exact accuracy of annotation: > Reliance on external tools for formula determination: The method relies on using external tools (e.g., SIRIUS) for formula determination, which could lead to some errors in the predicted formula. While the paper argues that chemical formulae can be determined with sufficient accuracy, it does not address the potential errors from these tools or where they might fail. We thank the reviewer for raising an important question about the difficulty of chemical formula inference. Firstly, we would like to point out that the top performing baseline models on CANOPUS, MIST + Neuraldecipher and MIST + MSNovelist, also require the chemical formula to be known. However, to show that this precondition is not restrictive, we provide a new experiment on the CANOPUS dataset which shows that DiffMS achieves state-of-the-art performance even without the ground truth formulae. Specifically, we use MIST-CF [1], a formula prediction tool, to label chemical formulae from spectra, where we find that it achieves 92% top-5 formula annotation accuracy on CANOPUS. To have a fair comparison, we still sample 100 molecules, but split across the top-5 candidate formulae. As before, we order the generated molecules by frequency to obtain the top-10 DiffMS predictions. 
Below are the results, where we find that DiffMS performance does not significantly deteriorate without access to ground truth formulae: | Model | ACC@1 | MCES@1 | Tanimoto@1| ACC@10 | MCES@10 | Tanimoto@10| |--|--|--|--|--|--|--| | DiffMS: Predicted Formulae | 7.03% | **11.81** | **0.36** | 14.98% | 9.39 | **0.48**| | DiffMS: True Formulae | **8.34%** | 11.95 | 0.35 | **15.44%** | **9.23** | 0.47 | Altogether, we demonstrate that DiffMS can recover true molecules using established formula inference tools; and, under parallel formula strategies, DiffMS can still yield molecules with high structural similarities even when the true formula is not highly ranked. We will include these new results in the revised manuscript. >Lack of discussion on computational requirements; The paper lacks a discussion on the runtime or computational requirement of the proposed method. The paper did not mention the resource/ time requirements of the proposed method. We thank the reviewer for asking this important question about the computational requirements. DiffMS is a relatively lightweight model. All DiffMS experiments were run on NVIDIA 2080ti GPUs which have 12GB of memory. On these GPUs, training DiffMS takes around 1.5 minutes per epoch on CANOPUS and 45 minutes per epoch on MassSpecGym. We train DiffMS for 50 epochs on CANOPUS and 15 epochs on MassSpecGym, for a total training time of 1.25 and 11.25 hours, respectively. Sampling from DiffMS takes around 4 minutes to sample all 100 molecules used to rank the top-10 predictions. We will include a table describing DiffMS computational requirements in the revised manuscript. >Can the chemical formula derived using tools like SIRIUS be ambiguous or incorrect (e.g., due to low-resolution spectra)? If so, are there mechanisms to mitigate or improve such inaccurate formulae? We run a sample experiment which demonstrates the usage of MIST-CF [1], which achieves high formula accuracy and outperforms SIRIUS by a considerable margin. 
MIST-CF and other formula annotation tools are not perfect, and additional information from MS1 data collection, such as isotope patterns and higher resolution of the precursor peak, can further improve annotation. In cases where formula inference is not accurate, MIST’s subformula annotation module can still produce plausibly accurate subformula annotations for individual peaks even under an incorrect formula; accordingly, we would expect that the fingerprints output by MIST under incorrectly provided subformulae will still provide informative substructural knowledge. Indeed, this is reinforced by the performance of unrestricted-formula DiffMS inference, where, although exact match accuracies slightly worsen, structural similarities are largely preserved. We hope to explore further mitigation strategies for formula misannotation in future iterations of the method. [1] MIST-CF: Chemical Formula Inference from Tandem Mass Spectra, Goldman et al., https://pubs.acs.org/doi/10.1021/acs.jcim.3c01082
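The candidate-ranking step described in this rebuttal (sample 100 molecules, then order by generation frequency to obtain the top-10 predictions) can be sketched in a few lines. The function name and SMILES strings below are illustrative, not taken from the DiffMS codebase:

```python
from collections import Counter

def rank_by_frequency(sampled_structures, k=10):
    """Rank sampled candidates by how often the generator produced them.

    Duplicate samples of the same (canonicalized) structure are counted,
    and the k most frequently generated candidates form the top-k list.
    """
    counts = Counter(sampled_structures)
    return [s for s, _ in counts.most_common(k)]

# 100 diffusion samples collapsing onto three distinct structures
samples = ["CCO"] * 40 + ["CCN"] * 35 + ["CCC"] * 25
print(rank_by_frequency(samples, k=2))  # ['CCO', 'CCN']
```

The same helper covers the predicted-formula setting: samples drawn across the top-5 candidate formulae can simply be pooled before counting.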
Summary: The paper introduces DiffMS, a novel diffusion-based generative model for de novo molecular structure prediction from mass spectra. This work addresses the inverse mass spectrometry (MS) problem, which involves reconstructing molecular structures based on experimental mass spectra data. Claims And Evidence: The authors support their claims well; however, some points may need stronger support: 1. While the DiffMS encoder leverages transformers for mass spectrum embeddings, no ablation is performed on the impact of different conditioning strategies. Does spectral conditioning significantly impact generation, or would an MLP-based conditioning method work just as well? 2. How does the performance of fingerprint prediction influence the final generation accuracy? The authors show an ablation study on different numbers of pre-training samples, but do not report the performance of the MS encoder, which may give some insights. Methods And Evaluation Criteria: 1. CANOPUS and MassSpecGym are widely used in mass spectrometry applications, making them reasonable choices. It would be nice to include the NIST dataset as well, but the current benchmark datasets are also sensible. 2. No ablation study on alternative conditioning strategies (e.g., how much the transformer-based spectrum encoder improves performance). 3. How does performance vary with different initializations of the molecular graph? Theoretical Claims: I didn't see any issues. Experimental Designs Or Analyses: 1. I would suggest the authors add experiments on different initializations of graph edges: how is performance influenced by an empty graph or a fully-connected graph? 2. The authors utilized MIST in the first stage to label the peaks' formulae. How accurate is that step? Should it be considered an oracle function? Some references or discussion would be helpful. 3. The graph decoder is pre-trained conditioned on fingerprints. 
So I assumed that in the end-to-end framework, the fingerprint prediction of the MS encoder is fed as the condition of the decoder. But the authors stated "We extract the final embedding corresponding to the precursor peak as the structural condition y for the diffusion decoder." It does not make sense if the diffusion model is pre-trained with a 0/1 condition (the fingerprint) but trained with float values (the final embedding). Supplementary Material: No Relation To Broader Scientific Literature: This work is important for de novo molecule generation guided by mass spectra. Essential References Not Discussed: No Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful feedback and suggestions, and respond accordingly below: >While the DiffMS encoder leverages transformers for mass spectrum embeddings, no ablation is performed on the impact of different conditioning strategies. Does spectral conditioning significantly impact generation, or would an MLP-based conditioning method work just as well? Regarding the effectiveness of utilizing transformers to produce a spectral embedding, MIST [1] has been established as a powerful model for extracting structural information from mass spectra. Additionally, the MIST paper compares with an MLP model (denoted by FFN in their paper), where they observe that MLPs perform significantly worse than MIST across all metrics. In Section 4.4, we demonstrate that DiffMS benefits from a powerful pretrained encoder; thus, given the findings in [1], simpler MLP-based spectral conditioning is not expected to perform well. >How does the performance of fingerprint prediction influence the final generation accuracy? The authors show an ablation study on different numbers of pre-training samples, but do not report the performance of the MS encoder, which may give some insights. As stated above, MIST has been shown to outperform simple MLP baselines on general, fingerprint-relevant tasks. In [1], the MIST encoder achieves state-of-the-art results on Tanimoto similarity, cosine similarity, and log likelihood for predicted fingerprints. We demonstrate in Section 4.4 that pretraining the encoder on spectra-to-fingerprint prediction improves the performance of the end-to-end finetuned DiffMS model. Ultimately, given the results shown by the MIST paper, and the empirical success of DiffMS leveraging MIST to help generate chemical matches, we think the MIST architecture is a well-suited choice of encoder for this task. >However, it would be great to include the NIST dataset, though the current benchmark datasets also make sense. 
We do not plan to prioritize training DiffMS on NIST at this time, as NIST is not publicly available without purchasing a license. >I would suggest the authors add experiments on different initializations of graph edges: how is performance influenced by an empty graph or a fully-connected graph? We thank the reviewer for asking this important question. We provide some experiments below on the CANOPUS dataset with different graph initialization strategies: | Model | ACC@1 | MCES@1 | Tanimoto@1| ACC@10 | MCES@10 | Tanimoto@10| |--|--|--|--|--|--|--| | DiffMS: Fully Connected Initialization | 3.36% | 12.67 | 0.28 | 7.60% | 9.56 | 0.40 | | DiffMS: Empty Graph Initialization | 6.60% | **11.55** | 0.34 | 14.94% | **9.07** | **0.47** | | DiffMS | **8.34%** | 11.95 | **0.35** | **15.44%** | 9.23 | **0.47** | We observe that fully-connected initialization performs poorly. Empty graph initialization performs similarly to random initialization from the marginal prior distribution on Tanimoto similarity and MCES, which aligns with the intuition that bond connectivity is inherently sparse and thus closer to an empty graph than to a fully connected one; however, its accuracies are worse, suggesting that the marginal prior distribution is still optimal. We will include these additional experiments in the revised manuscript. >The authors utilized MIST in the first stage to label the peaks' formulae. How accurate is that step? Should it be considered an oracle function? Some references or discussion would be great. MIST labels the peaks’ formulae using a combinatorial enumeration of possible substructures. It is reasonable to consider this an oracle function given the formula of the full molecule, and as mentioned in our response to reviewer udS8, the overall molecular formulae can be annotated with over 90% accuracy. > The authors stated "We extract the final embedding corresponding to the precursor peak as the structural condition y for the diffusion decoder." 
It does not make sense if the diffusion model is pre-trained with a 0/1 condition (the fingerprint) but trained with float values (the final embedding). We thank the reviewer for raising this point and will clarify this wording in the revised manuscript. When finetuning the end-to-end model, we initialize the decoder input to be the predicted (binary) fingerprints from the encoder; however, we do not have any auxiliary losses to enforce that the encoder submodule continues to output 0/1 fingerprints, so the end-to-end model can ultimately learn different intermediate representations. [1] Annotating Metabolite Mass Spectra with Domain-Inspired Chemical Formula Transformers, Goldman et al., https://www.nature.com/articles/s42256-023-00708-3 --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the clear rebuttal; I don't have any other concerns. I believe this work is a good contribution to the metabolomics domain. By the way, it looks like MADGEN has updated its results; please consider updating as well. Overall, I believe this paper should be accepted. Thanks again for the authors' efforts. --- Reply to Comment 1.1.1: Comment: We thank reviewer iMA4 for their insightful comments and for improving their score. We will integrate all reviewer feedback as well as the updated MADGEN results into our revised manuscript.
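The graph-initialization strategies compared in this thread can be illustrated with a small sketch. The function and its three-way `strategy` switch are hypothetical simplifications, with edge type 0 standing for "no bond" and 1 for a single bond:

```python
import random

def init_edge_types(n_atoms, strategy, marginals=None, seed=0):
    """Assign an initial edge type to every atom pair before denoising.

    'empty': no bonds anywhere; 'full': every pair gets a single bond;
    'marginal': each pair is drawn i.i.d. from the training-set edge-type
    marginal distribution (index 0 = no bond, 1 = single bond, ...).
    """
    rng = random.Random(seed)
    edges = {}
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            if strategy == "empty":
                edges[(i, j)] = 0
            elif strategy == "full":
                edges[(i, j)] = 1
            else:  # "marginal"
                edges[(i, j)] = rng.choices(range(len(marginals)), weights=marginals)[0]
    return edges

print(init_edge_types(3, "empty"))  # every atom pair starts at edge type 0
```

The sparsity intuition from the rebuttal is visible here: with bond marginals heavily weighted toward type 0, a `'marginal'` draw looks much more like the `'empty'` graph than the `'full'` one.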
Summary: The paper introduces DiffMS, a diffusion-based model for generating molecular structures from mass spectra, addressing the "inverse" MS problem. It uses a pretraining-finetuning framework with large-scale fingerprint-structure datasets and achieves state-of-the-art performance on benchmarks like CANOPUS and MassSpecGym. The model incorporates chemical formula constraints and discrete graph diffusion, enabling accurate and diverse molecular generation. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: None Experimental Designs Or Analyses: Yes, all Supplementary Material: No Relation To Broader Scientific Literature: The discrete diffusion approach was proposed by DiGress (ICLR 2023). Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - DiffMS is the first to apply discrete diffusion for molecular generation from mass spectra, handling permutation invariance and formula constraints effectively. This approach generates chemically plausible molecules, even when spectra underspecify the exact structure. - The model leverages large-scale fingerprint-structure datasets (2.8M pairs) for pretraining, improving performance with increased data. This scalable approach allows for future enhancements by expanding the pretraining dataset. Weaknesses: - DiffMS struggles with predicting high-accuracy molecules, as seen in its lower exact-match performance. This suggests limited applicability in real scenarios. - The model relies on accurate chemical formula inference, which may fail for low-resolution spectra or complex mixtures. This dependency could limit its applicability in real-world scenarios. Other Comments Or Suggestions: see above Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful feedback and for highlighting the novelty of our discrete diffusion method and pretraining strategies. Below, we have addressed the reviewer’s concerns about the applicability of DiffMS and shown that DiffMS performs comparatively well without formula annotations: >DiffMS struggles with predicting molecules [with high accuracy]… this limits its applicability in real-world scenarios. Though exact matching proves to be a universally challenging task for *de novo* structural elucidation, the chemical similarity metrics, including MCES and Tanimoto similarity, indicate that the candidates DiffMS proposes are of strong structural value in the analytical chemistry pipeline. Table 3 in the Appendix reports additional similarity metrics established by another elucidation method, MS2Mol [1]. These “close match” and “meaningful match” metrics were derived from an empirical scoring study where chemists were asked to rate predicted and actual structures as one of these two labels, or not similar. These labels provide an understanding of what expert practitioners find to be structurally useful candidates, as they might take these structurally similar candidates and conduct further filtering or refinement steps using orthogonally collected information. Altogether, obtaining similar but not exact structural matches is still valuable to the elucidation pipeline, and DiffMS is able to generate molecules with high meaningful and close match percentages. Specifically, DiffMS generates over 16x more meaningful matches on MassSpecGym compared to the best baseline model. Additionally, we agree it is too early to claim that de novo generation is “solved”. We believe DiffMS proves a feasible technical pathway towards *de novo* generation, whereas other baselines struggle with near-zero accuracies on the challenging MassSpecGym benchmark. 
We believe this paper, together with training and testing code to be released, will also attract more attention from the machine learning community on studying mass spectrometry, a growing field that has great potential for scientific impact. >The model relies on accurate chemical formula inference, which may fail in low-resolution spectra or complex mixtures. We thank the reviewer for raising an important question about the difficulty of chemical formula inference. Firstly, we would like to point out that the top performing baseline models on CANOPUS, MIST + Neuraldecipher and MIST + MSNovelist, also require the chemical formula to be known. However, to show that this precondition is not restrictive, we provide a new experiment on the CANOPUS dataset which shows that DiffMS achieves state-of-the-art performance even without the ground truth formulae. Specifically, we use MIST-CF [2], a formula prediction tool, to label chemical formulae from spectra, where we find that it achieves 92% top-5 formula annotation accuracy on CANOPUS. To have a fair comparison, we still sample 100 molecules, but split across the top-5 candidate formulae. As before, we order the generated molecules by frequency to obtain the top-10 DiffMS predictions. Below are the results, where we find that DiffMS performance does not significantly deteriorate without access to ground truth formulae: | Model | ACC@1 | MCES@1 | Tanimoto@1| ACC@10 | MCES@10 | Tanimoto@10| |--|--|--|--|--|--|--| | DiffMS: Predicted Formulae | 7.03% | **11.81** | **0.36** | 14.98% | 9.39 | **0.48**| | DiffMS: True Formulae | **8.34%** | 11.95 | 0.35 | **15.44%** | **9.23** | 0.47 | Altogether, we demonstrate that DiffMS can recover true molecules using established formula inference tools; and, under parallel formula strategies, DiffMS can still yield molecules with high structural similarities even when the true formula is not highly ranked. We will include these new results in the revised manuscript. 
[1] MS2Mol: A transformer model for illuminating dark chemical space from mass spectra, Butler et al., https://chemrxiv.org/engage/chemrxiv/article-details/6492f28ea2c387fa9ab2a465 [2] MIST-CF: Chemical Formula Inference from Tandem Mass Spectra, Goldman et al., https://pubs.acs.org/doi/10.1021/acs.jcim.3c01082 --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I keep my score.
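For reference, the Tanimoto@k metric reported in the tables above is the Jaccard index of fingerprint bit sets, taken as the best value among the top-k ranked candidates. A minimal pure-Python sketch (real evaluations would use RDKit fingerprints; the set-based inputs here are illustrative):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def tanimoto_at_k(candidate_fps, target_fp, k):
    """Best Tanimoto similarity achieved within the top-k ranked candidates."""
    return max(tanimoto(fp, target_fp) for fp in candidate_fps[:k])

print(tanimoto({1, 2, 3}, {2, 3, 4}))  # 0.5
```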
$\texttt{I$^2$MoE}$: Interpretable Multimodal Interaction-aware Mixture-of-Experts
Accept (poster)
Summary: The paper introduces a new mixture-of-experts framework designed to explicitly model diverse modality interactions and multi-modality fusion through specialized parameters and weakly-supervised interaction losses. The proposed method is validated on five multimodal datasets across different modalities, showing state-of-the-art performance. Claims And Evidence: 1. For using the triplet margin loss to model uniqueness interactions, what margin is used in this work, and why is a margin loss used here? 2. The authors should discuss the computational overhead of the proposed method compared with its counterparts, such as I2MoE-MulT vs. MulT. 3. MMoE is a very close work to the proposed method, but it is not well compared in the experiments; are there any special concerns about not comparing with it? 4. In section 6.2, how is agreement/disagreement between experts determined? Do the authors use any thresholds? Methods And Evaluation Criteria: The proposed methods and evaluation look sound Theoretical Claims: There are no issues regarding the theoretical claims Experimental Designs Or Analyses: Yes, the experimental design and analysis are sound Supplementary Material: I have reviewed all parts Relation To Broader Scientific Literature: It is a very interesting topic to achieve interpretable interaction/fusion on multi-modality data. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your encouraging feedback. Point-to-point responses below. >Q1. For using triplet margin loss to model uniqueness interactions, what is the margin used in the work, why is the margin loss used here? We use the triplet margin loss to model uniqueness interactions, as it naturally aligns with the modeling of uniqueness: there are both positive and negative examples. For example, for the uniqueness expert for modality 1, we treat the full input as the anchor, the input with modality 1 masked as the negative, and all other perturbed inputs as positives. The triplet margin loss maximizes the distance between the anchor and a negative and minimizes the distance between the anchor and a positive. We set the margin to 1.0, selected based on validation performance. A sweep across three datasets showed that margin = 1.0 performs best in nearly all cases (see table below). | Dataset | ADNI | | MIMIC | | ENRICO | |--------|-------------|--------|-------------|--------|---------------| | **Margin** | **Acc (3)** | **AUROC** | **Acc (2)** | **AUROC** | **Acc (20)** | | 0.2 | 64.05 ± 1.97 | 80.47 ± 0.95 | 68.07 ± 2.07 | 69.42 ± 1.65 | 46.58 ± 1.75 | | 0.5 | 63.49 ± 1.08 | 81.05 ± 0.83 | 69.21 ± 0.61 | 68.53 ± 0.65 | 47.72 ± 1.78 | | **1 (Current)** | **65.08 ± 1.52** | **81.09 ± 0.02** | **69.78 ± 0.91** | **68.81 ± 0.99** | **48.22 ± 1.61** | | 2 | 62.56 ± 2.06 | 79.49 ± 1.82 | 68.76 ± 1.16 | 69.97 ± 0.42 | 46.80 ± 1.71 | | 5 | 63.96 ± 0.95 | 80.47 ± 1.25 | 68.12 ± 1.86 | 68.07 ± 1.34 | 47.95 ± 1.01 | >Q2. The author should discuss the computation overhead for the proposed method, compared with its counterparts, such as I2MoE-MulT vs. MulT While I2MoE introduces additional forward passes (scaling linearly with the number of modalities), the method remains fully end-to-end and weakly supervised. The fusion overhead increases by approximately a factor of (#modalities + 2), corresponding to the number of specialized experts. 
To quantify the overhead, we compare I2MoE-MulT to MulT across training time per epoch, inference latency, and parameter count (see table below). All experiments are conducted on a single A100 GPU. Despite moderate increases, we find the added cost justified by the significant gains in interpretability and predictive performance. | Dataset | ADNI | MIMIC | IMDB | MOSI | ENRICO | |---------|------|--------|----------|-----------|--------| | **Modality** | I,G,C,B | L,N,C | L,I | V,A,T | S,W | | **Train per epoch (s)** | | | | | | | MulT | 8.98 ± 0.04 | 2.24 ± 0.01 | 3.62 ± 0.00 | 0.70 ± 0.00 | 1.38 ± 0.02 | | I2MoE-MulT | 16.82 ± 0.02 | 33.67 ± 0.67 | 44.20 ± 0.59 | 4.47 ± 0.01 | 6.17 ± 0.03 | | **Inference (s)** | | | | | | | MulT | 1.34 ± 0.00 | 0.15 ± 0.00 | 0.53 ± 0.00 | 0.09 ± 0.00 | 0.20 ± 0.00 | | I2MoE-MulT | 2.29 ± 0.00 | 0.91 ± 0.00 | 3.23 ± 0.00 | 0.48 ± 0.00 | 0.44 ± 0.00 | | **# Parameters** | | | | | | | MulT | 1,072,131 | 268,034 | 1,068,567 | 134,402 | 538,644 | | I2MoE-MulT | 6,696,728 | 1,390,095 | 4,423,008 | 673,935 | 2,352,724 | >Q3. MMoE is a very close work to the proposed method, but it is not well compared in the experiment, are there any special concerns about not comparing with it? While both MMoE and I2MoE aim to model heterogeneous interactions, they differ substantially in design and applicability, making direct comparison non-trivial: 1. **End-to-End vs. Preprocessing Dependency**: I2MoE is fully end-to-end, with interaction specialization emerging via weak supervision. MMoE relies on a separate preprocessing step to cluster training data by interaction type, breaking end-to-end training. 2. **Local Interpretability**: I2MoE provides instance-level interpretability by quantifying expert contributions per sample. MMoE lacks this capability, limiting its utility in settings requiring explanation of individual predictions. 3. 
**Generalizability to Higher Modalities and Complex Domains**: MMoE is tailored to vision-language tasks with pretrained LLM/VLM backbones. Its extension to domains like healthcare (where pretrained models for structured data or multi-way modality combinations are lacking) is unclear. In contrast, I2MoE operates without modality-specific pretraining and supports >2 modalities, as demonstrated on the ADNI and MIMIC datasets. >Q4. In section 6.2, how to determine the agreement/disagreement between experts, do the authors use any thresholds? We define expert agreement based on predicted labels. For single-label classification tasks (i.e., ADNI, MIMIC, ENRICO), each expert outputs a logit vector, and predictions are obtained via argmax. For multi-label classification tasks (i.e., MM-IMDB), predictions are thresholded at 0.5 after sigmoid activation. For the regression task (i.e., CMU-MOSI), each expert outputs a real number, and the threshold is set at 0. Agreement occurs when all experts predict the same label(s); disagreement is defined as any mismatch between expert outputs.
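The agreement rule described in Q4 can be sketched as follows. The helper names and plain-list inputs are illustrative stand-ins for the actual expert heads, assuming one output per expert:

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def expert_predictions(expert_outputs, task):
    """Map each expert's raw output to a predicted label, per task type."""
    if task == "single_label":   # e.g. ADNI/MIMIC/ENRICO: argmax over class logits
        return [max(range(len(o)), key=o.__getitem__) for o in expert_outputs]
    if task == "multi_label":    # e.g. MM-IMDB: sigmoid, then threshold at 0.5
        return [tuple(_sigmoid(x) > 0.5 for x in o) for o in expert_outputs]
    return [o > 0 for o in expert_outputs]  # regression (CMU-MOSI): threshold at 0

def experts_agree(expert_outputs, task):
    """Agreement holds when every expert predicts the same label(s)."""
    preds = expert_predictions(expert_outputs, task)
    return all(p == preds[0] for p in preds)

print(experts_agree([[0.1, 2.0], [0.3, 1.5]], "single_label"))  # True
```

Note that no extra threshold beyond the per-task prediction rule is needed: agreement is exact equality of the predicted labels.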
Summary: This paper addresses multimodal learning using mixture-of-experts, where dedicated experts learn distinct information from input modalities. The authors introduce a reweighting model to interpretably assign weights to the experts, facilitating understanding of their individual importance. The proposed approach is evaluated on five multimodal benchmarks covering various modalities and tasks. Claims And Evidence: The related work section (lines 93-94) mentions only one previous work applying mixture-of-expert (MoE) to multimodal learning. However, some related studies might have been overlooked, such as [1][2]. [1] Wu, Mike, and Noah Goodman. "Multimodal generative models for scalable weakly-supervised learning." Advances in neural information processing systems 31 (2018). [2] Shi, Yuge, Brooks Paige, and Philip Torr. "Variational mixture-of-experts autoencoders for multi-modal deep generative models." Advances in neural information processing systems 32 (2019). Methods And Evaluation Criteria: Could the authors clarify their motivation for using random vectors to mask modalities? There is concern that using random vectors might add noise to the fused representations. Would modality mean vectors potentially offer a less noisy alternative? Additional clarification on your design choice here would strengthen the paper. Theoretical Claims: All provided equations appear correct. Experimental Designs Or Analyses: Overall, the experimental design and analyses appear sound, but several aspects could benefit from clarification: + In Table 1, the reported results on MM-IMDB (Micro F1: 61.00; Macro F1: 52.38) seem notably lower compared to previous literature such as MFAS [3] (Macro F1: 55.70), CentralNet [4] (Micro F1: 63.90; Macro F1: 56.10), and ViLT [5] (Micro F1: 64.70; Macro F1: 55.30). 
Given that these methods employ relatively straightforward fusion strategies (e.g., early fusion), could the authors discuss why their proposed MoE approach achieves lower scores? + The reported MM-IMDB result in Figure 5 (49.21) appears different from the value in Table 1 (52.38). Could the authors clarify this inconsistency? + In Figure 3(b), the unique information learned from language modality appears to have lower weights. Does this suggest language is less important for MM-IMDB? It would be valuable to have more insight or discussion on this observation. + The proposed method requires multiple runs during training, which could significantly increase computational costs. It would be helpful if the authors could report metrics related to computational efficiency, such as training time. + Additionally, the studied baselines such as VGG and LSTM seem somewhat outdated. Incorporating more recent baseline methods would strengthen the evaluation. [3] Pérez-Rúa, Juan-Manuel, et al. "MFAS: Multimodal fusion architecture search." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [4] Vielzeuf, Valentin, et al. "Centralnet: a multilayer approach for multimodal fusion." Proceedings of the European Conference on Computer Vision (ECCV) Workshops. 2018. [5] Ma, Mengmeng, et al. "Are multimodal transformers robust to missing modality?." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. Supplementary Material: All the supplementary sections are checked. Relation To Broader Scientific Literature: The idea of interpretable multimodal fusion is valuable, especially in safety-critical domains such as healthcare. Essential References Not Discussed: Please see references noted in the "Claims and Evidence" section above. Other Strengths And Weaknesses: Strengths: + The paper is generally well-organized and clear. 
+ The problem of interpretable multimodal fusion is meaningful and relevant to a broad audience. Weaknesses: - Please refer to the comments provided above for potential improvements. Other Comments Or Suggestions: Some dataset names appear inconsistent (e.g., "MM-IMDB" vs. "MMIMDB"). Questions For Authors: The following clarifications would help strengthen the manuscript: + Could you provide insights on why your approach yields lower performance compared to simpler fusion methods in existing literature? + Can you explain the inconsistency between the MM-IMDB results reported in Table 1 and Figure 5? + Please further elaborate on your insight of using random vector masking for modalities. What are the potential limitations or implications of this design choice? + For more questions, see the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your thoughtful feedback. Point-to-point responses below. **All supplementary on [GitHub](https://anonymous.4open.science/r/I2MoE-rebuttal-8308/README.md)**. >Q1. I2MoE lower scores on MM-IMDB? This is primarily due to **differences in experimental setups**: 1. Evaluation Setup: Our experiments follow the setup in MultiBench [1]. Within this framework, I2MoE achieves Micro-F1: 61.00 and Macro-F1: 52.38, outperforming the SOTA (Micro: 59.3, Macro: 50.2; see Table 23 in [1]). 2. Reimplement: While [2–4] report higher MM-IMDB scores, they adopt different experimental setups and do not release code for exact reproduction. We reimplemented CentralNet and ViLT under the MultiBench setting and observed performance drops compared to their original reports. 3. I2MoE Improvement: We further evaluated I2MoE as a drop-in framework applied to CentralNet and MulT. As shown below, I2MoE consistently improves performance across both Micro and Macro F1 under the MultiBench setup. | Model | **Micro F1** | **Macro F1** | |--------------------|------------------|------------------| | CentralNet | 58.57 ± 0.58 | 49.90 ± 0.24 | | I2MoE-CentralNet | 58.72 ± 0.13 | 50.13 ± 0.21 | | ViLT | 58.38 ± 0.44 | 48.31 ± 0.32 | | I2MoE-ViLT | 59.53 ± 1.84 | 49.66 ± 2.23 | | MulT | 59.68 ± 0.19 | 51.41 ± 0.04 | | **I2MoE-MulT** | **61.00 ± 0.44** | **52.38 ± 0.48** | >Q2. MM-IMDB result in Figure 5 appears different from Table 1 Thanks for catching this–we forgot to update the MM-IMDB result in Figure 5. Updated figures can be found on GitHub. >Q3. …motivation for using random vectors to mask modalities? ..modality mean vectors..? We use random vectors to completely remove information from the masked modality so that any observed uniqueness and synergistic interaction from the masked modality cannot be attributed to leakage. While mean vectors could be less noisy, they are also dynamic in our setting. 
Since I2MoE is trained end-to-end with modality-specific encoders, the mean vectors evolve across training epochs. To mitigate this instability, we maintain a running average of mean vectors (see GitHub), but this introduces additional complexity and still does not guarantee full information removal. Further, we compare random / mean / zero vectors across five datasets. As shown below, random vectors consistently outperform mean vectors in nearly all settings. | | ADNI | | MIMIC | | IMDB | | MOSI | ENRICO | |--------------|---------------|--------------|---------------|--------------|----------------|--------------|---------------|----------------| | | **Acc** | **AUROC** | **Acc** | **AUROC** | **Micro F1** | **Macro F1** | **Acc** | **Acc** | | **Random** | **65.08 ± 1.52** | **81.09 ± 0.02** | 69.78 ± 0.91 | **68.81 ± 0.99** | **61.00 ± 0.44** | **52.38 ± 0.48** | **71.91 ± 2.20** | 48.22 ± 1.61 | | **Mean** | 59.85 ± 3.52 | 76.40 ± 2.84 | **70.00 ± 1.27** | 67.96 ± 1.43 | 59.36 ± 0.14 | 50.82 ± 0.46 | 68.95 ± 2.37 | **50.00 ± 1.94** | | **Zero** | 59.48 ± 1.61 | 77.06 ± 0.60 | 69.80 ± 0.97 | 64.62 ± 1.39 | 60.57 ± 0.07 | 51.16 ± 0.76 | 70.41 ± 0.66 | 48.63 ± 1.28 | >Q4. Interpretation of Figure 3b Figure 3b shows a local (sample-level) decomposition for a specific MM-IMDB example, where the language modality contributes less unique information for that instance. This does not reflect global trends in the MM-IMDB dataset. For dataset-level insights, we refer the reviewer to Figure 4, which summarizes expert weights across the entire test set. >Q5. (I2MoE) could significantly increase computational costs Please see our response to Reviewer aVYo Q2. >Q6. The studied baselines such as VGG and LSTM seem somewhat outdated We clarify that VGG and LSTM are used only as modality-specific encoders, not as fusion architectures. 
This setup follows widely adopted practice in multimodal learning benchmarks such as [1], where lightweight encoders are used to ensure a fair comparison of fusion methods. We agree that powerful modality-specific encoders could further boost performance. Due to time constraints, we plan to incorporate them in the final version. >C1. Some related studies might have been overlooked We note that the two works mentioned focus on generative modeling, but we focus on interpretable discriminative modeling. We consider these lines of research orthogonal but will clarify this distinction and cite these works in our related work section. >C2. Some dataset names appear inconsistent (e.g., "MM-IMDB" vs. "MMIMDB"). We will standardize all occurrences to “IMDB” in the final version. [1] Liang et al., 2021. *MultiBench: Multiscale Benchmarks for Multimodal Representation Learning*. NeurIPS. [2] Pérez-Rúa et al., 2019. *MFAS: Multimodal Fusion Architecture Search*. CVPR. [3] Vielzeuf et al., 2018. *CentralNet: A Multilayer Approach for Multimodal Fusion*. ECCV Workshops. [4] Ma et al., 2022. *Are Multimodal Transformers Robust to Missing Modality?*. CVPR.
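To make the comparison in the Q3 response above concrete, here is a minimal NumPy sketch of the three replacement strategies (random vs. mean vs. zero); the function name and shapes are hypothetical illustrations, not the authors' code.

```python
import numpy as np

def mask_modality(batch, mode="random", rng=None):
    """Replace a modality's embeddings so little task information survives.

    batch: (B, d) array of modality embeddings.
    mode:  'random' (fresh noise), 'mean' (per-dimension batch mean), or 'zero'.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    if mode == "random":
        # Fresh Gaussian noise: no dependence on the masked modality at all.
        return rng.normal(size=batch.shape)
    if mode == "mean":
        # Every sample gets the batch-mean embedding; note this statistic
        # drifts as the encoders train, which is the instability discussed above.
        return np.broadcast_to(batch.mean(axis=0), batch.shape).copy()
    if mode == "zero":
        return np.zeros_like(batch)
    raise ValueError(f"unknown mode: {mode}")

# Toy (batch=3, dim=2) modality embeddings.
x = np.arange(6.0).reshape(3, 2)
masked = mask_modality(x, "random")
```

The design difference is that random noise is stationary across training, while the mean vector changes whenever the encoders update.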
Summary: In this paper, the authors introduce I2MoE, a novel multimodal model that trains a different set of experts to model each type of multimodal interaction between modalities. Each expert is trained with a different interaction loss specifically designed for the type of interaction it has to deal with, in addition to the main task objective. The model is evaluated on 5 different multimodal tasks, where the model achieves top performance on all of them. The authors also performed additional analysis or demonstrations to show that I2MoE can generalize to different fusion backbones, offers local and global interpretability, and works with more than 2 modalities. There are also ablation studies to justify the design choices of I2MoE. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence from the experiments. The only minor problem is that the support for "local interpretation" is only backed by qualitative samples from one task (mm-imdb) in both the main text and the appendix. The support for local interpretability would be stronger with either a human evaluation of interpretability (i.e., whether the interaction attributions generated by I2MoE make sense) or by including more qualitative samples from tasks other than mm-imdb. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: No theoretical claim. Experimental Designs Or Analyses: I have checked the soundness and validity of the experimental designs, including the main experiment with 5 tasks, and additional analysis and ablations. The experimental designs look valid and sound to me. Supplementary Material: I have reviewed all of the supplemental materials, including experiment details and additional qualitative examples. 
Relation To Broader Scientific Literature: Compared to existing works in multimodal machine learning and multimodal interaction quantification/interpretability, the key contribution of this paper is that the proposed method creates one single end-to-end model that (1) achieves high performance in tasks, (2) inherently offers multimodal interaction quantification and interpretability, (3) applies to tasks with more than 2 modalities, and (4) generalizes well to different types of multimodal fusion. While previous works have proposed models or methods that can achieve some of the above, the proposed method seems to be the first that can achieve all of them within one single end-to-end model. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Additional Strengths: - The presentation quality of the paper is good. The methodology is easy to follow and the intuition behind each decision is clearly explained. - There are ablation studies that clearly demonstrate the need for each design choice of the proposed method. Additional Weaknesses: - The font sizes of the tables and figures are tiny, making them hard to read. - The paper did not specify the complete objective (i.e., the final combination of all task and interaction losses). Writing down the complete objective as a mathematical expression (or maybe an algorithm block for the entire training objective) would make things clearer. For example, it is currently not clear how the different losses are combined, or whether they are weighted during the combination. Other Comments Or Suggestions: N/A Questions For Authors: Can you clarify how the different interaction losses are combined with the main task objective? If they are added together, how is each loss weighted? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your positive feedback. Point-to-point responses below. > Q1.The support for "local interpretation" is only backed by qualitative samples from one task Thanks for suggesting to strengthen the evidence for local interpretability. We conducted a human evaluation with 15 participants on 20 movie examples (300 total ratings), asking how reasonable the model’s assigned interaction expert weights were. Participants chose from five options, ranging from “Makes no sense at all” to “Completely makes sense.” 70.4% of responses were positive (Mostly or Completely makes sense), while only 9% were negative, and just 0.7% selected the lowest rating. These results suggest that the model’s expert weights are broadly viewed as reasonable and interpretable by human evaluators. | **Response** | **Percentage of all responses (n=300)** | |-----------------------------|--------------------------------------| | 'Completely makes sense' | 19.7% | | 'Mostly makes sense' | 51% | | 'Neutral' | 19.7% | | 'Makes little sense' | 9% | | 'Makes no sense at all' | 0.7% | Link to the questionnaire: [Link](https://anonymous.4open.science/r/I2MoE-rebuttal-8308/AR-zoTH/Q1_human_eval/Human%20Evaluation%20of%20I2MoE%20Interpretability.pdf) Link to deidentified response: [Link](https://anonymous.4open.science/r/I2MoE-rebuttal-8308/AR-zoTH/Q1_human_eval/human_eval_deid.csv) >Q2. The font size of the tables and figures are tiny, making them hard to read. Thank you for the helpful suggestion. We have increased the font sizes in all figures and tables and will include them in the final version. Updated tables and figures: [Link](https://anonymous.4open.science/r/I2MoE-rebuttal-8308/AR-zoTH/Q2_font_size/table_4.png) >Q3. 
Writing down the complete objective as a mathematical expression (or maybe an algorithm block for the entire training objective)

**Algorithm block** for I2MoE training and inference forward pass: [Link](https://anonymous.4open.science/r/I2MoE-rebuttal-8308/AR-zoTH/Q3_objective/i2moe_algorithm.png)

Below we explain the complete objective as a mathematical expression:

Let \( \{F_i\}_{i=1}^{B} \) denote the \( B = n + 2 \) interaction experts: \( n \) uniqueness experts, one synergy expert, and one redundancy expert. For each expert \( F_i \), we obtain outputs from \( (1 + n) \) forward passes (one full input and one for each modality replaced):

\[ [\hat{y}_i^{(0)}, \hat{y}_i^{(1)}, \dots, \hat{y}_i^{(n)}] = F_i.\mathrm{forward\_multiple}(X_1, \dots, X_n) \]

The main prediction is computed as:

\[ \hat{y} = \sum_{i=1}^{B} w_i \cdot \hat{y}_i^{(0)}, \quad \text{where } [w_1, \dots, w_B] = \text{MLPReWeight}(X_1, \dots, X_n) \]

The task loss is defined as:

\[ \mathcal{L}_{\text{task}} = \ell(\hat{y}, T) \]

We define the expert-specific interaction losses as follows:

**Uniqueness loss** for each \( F_i \) (\( i = 1, \dots, n \)):

\[ \mathcal{L}_{\text{int}}^{(i)} = \frac{1}{n - 1} \sum_{j \ne i} \text{TripletLoss}\left( \hat{y}_i^{(0)},\; \hat{y}_i^{(j)},\; \hat{y}_i^{(i)} \right) \]

**Synergy loss** (\( F_{n+1} \)):

\[ \mathcal{L}_{\text{int}}^{(n+1)} = \frac{1}{n} \sum_{j=1}^{n} \text{CosSim}\left( \text{normalize}(\hat{y}_{n+1}^{(0)}),\; \text{normalize}(\hat{y}_{n+1}^{(j)}) \right) \]

**Redundancy loss** (\( F_{n+2} \)):

\[ \mathcal{L}_{\text{int}}^{(n+2)} = \frac{1}{n} \sum_{j=1}^{n} \left( 1 - \text{CosSim}\left( \text{normalize}(\hat{y}_{n+2}^{(0)}),\; \text{normalize}(\hat{y}_{n+2}^{(j)}) \right) \right) \]

We then average the interaction loss over all experts:

\[ \mathcal{L}_{\text{int}} = \frac{1}{B} \sum_{i=1}^{B} \mathcal{L}_{\text{int}}^{(i)} \]

The final training objective is:

\[ \mathcal{L}_{\text{total}} = \mathcal{L}_{\text{task}} + \lambda_{\text{int}} \cdot \mathcal{L}_{\text{int}} \]

Model parameters are updated to minimize \( \mathcal{L}_{\text{total}} \).

---

Rebuttal Comment 1.1: Comment: Thanks for your response. My review remains positive.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer zoTN,

Thank you for your response and continued positive evaluation. We're grateful for your constructive feedback, which helped us improve both the clarity and quality of our paper. We truly appreciate the time and effort you dedicated to reviewing our work.

Best regards,
 Authors
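The loss combination laid out in the Q3 response above can be illustrated numerically. The following is a minimal NumPy sketch, not the authors' implementation: the function names, the triplet margin, and the weight `lam_int` are assumptions for the example, and the toy vectors stand in for expert outputs with `n = 2` modalities.

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity between two 1-D prediction vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge-style triplet loss on expert outputs (margin is illustrative).
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

def i2moe_total_loss(task_loss, uni_losses, syn_loss, red_loss, lam_int=0.1):
    # L_total = L_task + lam_int * mean over the B = n + 2 experts.
    interaction = list(uni_losses) + [syn_loss, red_loss]
    return task_loss + lam_int * float(np.mean(interaction))

# Toy example: 3-dim outputs of one expert under three input views.
rng = np.random.default_rng(0)
y_full = rng.normal(size=3)   # y^(0): full input
y_mask1 = rng.normal(size=3)  # modality 1 replaced
y_mask2 = rng.normal(size=3)  # modality 2 replaced

# Uniqueness expert for modality 1: anchor = full output,
# positive = other modality masked, negative = own modality masked.
l_uni1 = triplet_loss(y_full, y_mask2, y_mask1)

# Synergy expert: similarity to any partial view is penalized.
l_syn = 0.5 * (cos_sim(y_full, y_mask1) + cos_sim(y_full, y_mask2))

# Redundancy expert: dissimilarity to partial views is penalized.
l_red = 0.5 * ((1 - cos_sim(y_full, y_mask1)) + (1 - cos_sim(y_full, y_mask2)))

total = i2moe_total_loss(task_loss=0.7, uni_losses=[l_uni1],
                         syn_loss=l_syn, red_loss=l_red)
```

Note the deliberate opposition between the synergy and redundancy objectives: on the same pair of cosine similarities they sum to one, so an output cannot minimize both at once.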
Summary: The paper introduces $I^2MoE$, an end-to-end mixture-of-experts framework that explicitly models heterogeneous interactions between input modalities. By deploying specialized interaction experts (e.g., uniqueness, synergy, redundancy) and a reweighting module, $I^2MoE$ not only improves task performance—demonstrated by accuracy gains on datasets like ADNI, CMU-MOSI, and MM-IMDB—but also provides both local and global interpretability of multimodal fusion. The method leverages weakly supervised interaction losses based on modality perturbation to guide expert specialization. Claims And Evidence: The authors claim that: - Modeling distinct multimodal interactions via dedicated experts improves predictive performance and interpretability. - The reweighting mechanism provides sample-level and dataset-level insights. Ablation studies support the necessity of each design component. These claims are backed by extensive experiments (including ablations) and visualizations (e.g., Figure 3 and Table 4). However, the rationale behind the specific random vector perturbation strategy is more heuristic than theoretically grounded. Methods And Evaluation Criteria: The dual-objective loss (task loss plus interaction loss) and the use of a reweighting model are central to the design. Experiments across five datasets—with comparisons to both vanilla fusion and advanced baselines (e.g., SwitchGate, MoE++)—demonstrate consistent performance gains. One concern is the sensitivity of the method to hyperparameters, particularly in imbalanced datasets (e.g., MIMIC), where accuracy drops occur despite AUROC improvements. Theoretical Claims: The paper is motivated by the Partial Information Decomposition (PID) framework; however, the theoretical connection between the proposed weakly supervised interaction loss and formal PID measures is only loosely established. 
Experimental Designs Or Analyses: The experimental setup is comprehensive: - **Datasets:** Experiments span diverse domains (medical imaging, sentiment analysis, movie genre classification) demonstrating the model’s generality. - **Ablation Studies:** Detailed ablations confirm the contribution of the interaction loss, reweighting module, and perturbation strategy. - **Interpretability Analysis:** Visualizations of local expert contributions and global weight distributions validate the interpretability claims. Supplementary Material: The supplementary material (appendices on preprocessing, encoder configurations, hyperparameter settings, and additional qualitative examples) is detailed and supports reproducibility. Relation To Broader Scientific Literature: The paper is well situated within the multimodal fusion literature. It contrasts with conventional fusion methods and recent MoE-based models. For instance: - **MMoE (Yu et al., 2024):** Similar in spirit to I2MoE, this work also decomposes multimodal interactions into specialized experts. - **MoMa (Jiang et al., 2024):** Introduces modality-aware expert routing for early fusion efficiency, offering insights into modality-specific parameter allocation - **Interpretable Mixture of Experts for Structured Data (Ismail et al., 2022):** Although targeting tabular and time-series data, its inherently interpretable design may offer complementary perspectives for multimodal settings. - **Dividing and Conquering a BlackBox (Ghosh et al., 2023):** Presents a divide-and-conquer approach for extracting interpretable models, highlighting expert specialization strategies . - **LIMoE (Mustafa et al., 2022):** Uses contrastive learning in a MoE framework for vision–language tasks, emphasizing organic emergence of modality-specific experts . Integrating these discussions would further contextualize I2MoE’s contributions. 
Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths:** - **Novel Architecture:** Explicitly modeling heterogeneous interactions via specialized experts. - **Interpretability:** Provides both local and global explanations, supported by qualitative and quantitative analyses. - **Comprehensive Evaluation:** Extensive experiments and ablation studies across diverse datasets substantiate the claims. **Weaknesses:** - **Theoretical Grounding:** The connection to PID is mainly heuristic; more formal theoretical analysis would be beneficial. - **Perturbation Method:** The use of random vector replacement for modality dropout is ad hoc and might affect stability. Other Comments Or Suggestions: A discussion on computational overhead and scalability with an increased number of modalities is encouraged. Questions For Authors: 1. Could you elaborate on strategies to mitigate accuracy degradation in imbalanced datasets like MIMIC? 2. How does I2MoE scale when extending to more than two modalities or when applied to other multimodal tasks (e.g., video–audio fusion)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your constructive feedback. Point-to-point responses below.

>Q1. The connection between interaction loss and PID

We would appreciate any insights from the reviewer on this point. Below, we attempt to connect each expert trained on the perturbed input views to a distinct PID component. This might form a contrastive approximation to the constrained information projections discussed in [1, 2]:

\[ I(T; X_1, X_2) = \mathrm{Red}(T; X_1, X_2) + \mathrm{Unq}(T; X_1 \setminus X_2) + \mathrm{Unq}(T; X_2 \setminus X_1) + \mathrm{Syn}(T; X_1, X_2) \]

In the two-modality scenario, our model learns four experts, each trained to specialize in a PID component using perturbed modality inputs.

**Unique Information [1, 3].** Experts \( F_{\text{uni}1} \) and \( F_{\text{uni}2} \) are trained on inputs where the other modality is replaced with noise:

\[ \mathcal{L}_{\text{uni}1} = \| F_{\text{uni}1}(X_1, \tilde{X}_2) - T \|, \quad \mathcal{L}_{\text{uni}2} = \| F_{\text{uni}2}(\tilde{X}_1, X_2) - T \| \]

Assuming \( \tilde{X}_i \) contains no task-relevant information, these losses approximate:

\[ \mathcal{L}_{\text{uni}1} \propto \mathrm{Unq}(T; X_1 \setminus X_2), \quad \mathcal{L}_{\text{uni}2} \propto \mathrm{Unq}(T; X_2 \setminus X_1) \]

**Redundant Information [2, 3].** Expert \( F_{\text{red}} \) is trained to match predictions from either single-modality input:

\[ \mathcal{L}_{\text{red}} = \frac{1}{2} \left( \| F_{\text{red}}(X_1, \tilde{X}_2) - T \| + \| F_{\text{red}}(\tilde{X}_1, X_2) - T \| \right) \]

This loss encourages \( F_{\text{red}} \) to extract information shared by both \( X_1 \) and \( X_2 \), approximating:

\[ \mathcal{L}_{\text{red}} \propto \mathrm{Red}(T; X_1, X_2) \]

**Synergistic Information [2, 4].** Expert \( F_{\text{syn}} \) is trained to rely on both modalities jointly.
It is penalized for performing well on any partial view:

\[ \mathcal{L}_{\text{syn}} = \frac{1}{2} \left( \| F_{\text{syn}}(X_1, X_2) - T \| - \| F_{\text{syn}}(\tilde{X}_1, X_2) - T \| - \| F_{\text{syn}}(X_1, \tilde{X}_2) - T \| \right) \]

This loss isolates information that emerges only through joint modality interaction:

\[ \mathcal{L}_{\text{syn}} \propto \mathrm{Syn}(T; X_1, X_2) \]

>Q2. Random vector replacement for modality dropout

Please see our response to Reviewer LfMU Q3.

>Q3. Strategies to mitigate accuracy degradation in imbalanced datasets like MIMIC?

In imbalanced settings like MIMIC, threshold-independent metrics such as AUROC provide a more reliable measure of performance. We mitigate class imbalance by applying class weighting (0.25 for negative, 0.75 for positive) during training across all models. To clarify performance across classes, we report per-class accuracy, balanced accuracy, and AUROC below. I2MoE-MulT achieves higher balanced accuracy and AUROC than MulT, indicating improved positive class recognition without sacrificing majority class performance.

| Metrics | Acc (+) | Acc (-) | Bal. Acc | Avg. Acc | AUROC |
|---------------|---------|---------|----------|----------|--------|
| **MulT** | 20.87 | **90.31** | 55.59 | **74.64** | 65.61 |
| **I2MoE-MulT** | **50.49** | 75.43 | **62.96** | 69.80 | **69.44** |

>Q4. A discussion on computational overhead and scalability is encouraged

Please see our response to Reviewer aVYo Q2.

>Q5. How does I2MoE scale when extending to more than two modalities or when applied to other multimodal tasks (e.g., video–audio fusion)?

Most real-world multimodal datasets ([5, 6]) involve fewer than four modalities. I2MoE performs well in such settings—for example, ADNI includes imaging, genetics, clinical tests, and biospecimens.
We also evaluated I2MoE on video–audio fusion using CMU-MOSI (text, video, audio), where I2MoE improves the performance of four different baseline fusion methods (Table 1 and Table 2). >C1. I2MoE contrasts with conventional fusion methods and recent MoE-based models Thanks for the comment. We will expand Section 2 to position I2MoE within the broader MoE literature you mentioned more explicitly. [1] Bertschinger et al., 2014. *Quantifying Unique Information*. Entropy. [2] Williams and Beer, 2010. *Nonnegative Decomposition of Multivariate Information*. arXiv. [3] Wollstadt et al., 2023. *A Rigorous Information-Theoretic Definition of Redundancy and Relevancy in Feature Selection Based on (Partial) Information Decomposition*. JMLR. [4] Wibral et al., 2017. *Partial information decomposition as a unified approach to the specification of neural goal functions*. Brain and Cognition. [5] Liang et al., 2021. *MultiBench: Multiscale Benchmarks for Multimodal Representation Learning*. NeurIPS. [6] Liang et al., 2022. *High-Modality Multimodal Transformer: Quantifying Modality & Interaction Heterogeneity for High-Modality Representation Learning*. arXiv:2203.01311.
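The four norm-based losses in the Q1 response above can be written as a short numerical sketch. This is an illustrative NumPy rendering under the rebuttal's assumption that noise views carry no task information; the function names and toy vectors are ours, not the authors' code.

```python
import numpy as np

def err(y, t):
    # Prediction error ||y - t|| used in all four losses.
    return float(np.linalg.norm(y - t))

def unique_loss(pred_with_other_masked, target):
    # L_uni_k = || F_uni_k(X_k, noise) - T ||: how well modality k alone predicts T.
    return err(pred_with_other_masked, target)

def redundancy_loss(pred_x1_view, pred_x2_view, target):
    # L_red = 0.5 (|| F_red(X1, ~X2) - T || + || F_red(~X1, X2) - T ||):
    # the redundancy expert must predict T from either modality alone.
    return 0.5 * (err(pred_x1_view, target) + err(pred_x2_view, target))

def synergy_loss(pred_joint, pred_x1_view, pred_x2_view, target):
    # L_syn = 0.5 (|| F_syn(X1, X2) - T || - || F_syn(~X1, X2) - T ||
    #              - || F_syn(X1, ~X2) - T ||):
    # rewarded for succeeding on the joint view while failing on partial views.
    return 0.5 * (err(pred_joint, target) - err(pred_x1_view, target)
                  - err(pred_x2_view, target))

# Toy target and predictions: the joint view is perfect, partial views fail.
t = np.array([1.0, 0.0])
l_syn = synergy_loss(t, np.zeros(2), np.zeros(2), t)  # negative: purely synergistic
```

The contrastive structure is visible in the toy values: the same partial-view errors that make the synergy loss negative (good, for that expert) make the redundancy loss large (bad, for that expert).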
Emotional Face-to-Speech
Accept (poster)
Summary: This paper describes an approach for mapping silent video of a talking face to the synthesized voice. The approach is based on a discrete diffusion transformer that is conditioned on the (visual) speaker identity and a learned representation of the facial expression of emotion. Together these help to preserve speaker identity and improve the expressiveness of the speech. In addition, residual vector quantization is used to learn a coarse-to-fine tokenization to better capture voice characteristics at different levels of granularity. The approach is evaluated using both objective and subjective assessment against a number of baselines. ### Update I will maintain my score and rate this paper weak accept. The approach used makes sense, but I am disappointed in the evaluation in that the baselines the authors are comparing against are not strictly the true baselines. The authors of the original baselines used different datasets, and the authors here can claim their approach beats the baselines on their experimental configuration, but the "standard" baselines do much better than the examples presented here. Claims And Evidence: I believe the claims are backed by the experiments in the paper. Methods And Evaluation Criteria: The datasets used for training and evaluation are standard. Reasonable baselines have been used to benchmark against too, including face-driven methods and speech-driven methods. Theoretical Claims: No, I did not check the derivations in the appendix beyond skimming them for information. Experimental Designs Or Analyses: I do not have specific concerns about the experimental design. It is good to see a combination of objective and subjective assessment of the approach. I did wonder about asking viewers to rate the identity and expressiveness separately. How much does each individual attribute contribute to the goodness/degradation in the perceived consistency? 
Supplementary Material: I read the appendices where needed to get additional context for the paper. I also viewed the example video sequences that were provided as supplementary material. Relation To Broader Scientific Literature: There are two main contributions here: 1) conditioning the discrete diffusion transformer on the facial expression representation in addition to the identity representation, and 2) using residual vector quantization, which has been shown to be important in learning speaker representations. Essential References Not Discussed: A potentially relevant reference that is missing: Lu, J.; Sisman, B.; Liu, R.; Zhang, M.; and Li, H. VisualTTS: TTS with Accurate Lip-Speech Synchronization for Automatic Voice Over. In Proceedings of ICASSP. 2022. Most of the baselines (e.g., from Table 1) are not discussed in the related work section. It would be useful to have the work situated within that broader literature. Explain what their individual limitations are, etc. Other Strengths And Weaknesses: Strengths: + The use of RVQ/curriculum learning and conditioning on the representation of the facial expression clearly improve the quality of the generated speech. + The approach is significantly better than the baseline, as evidenced by the results of the objective and subjective tests and by listening to the provided samples (albeit with the concern I have about the provided baseline samples, which I highlight elsewhere). Weaknesses: - It would have been nice to have more examples to see how other factors affect the approach. For example, varying the degree of expressiveness from mid-expressive to very expressive. How does this affect generation quality, and does the relative degree affect the contribution of the identity/expression conditioning on the network? Other Comments Or Suggestions: There are places where the word choice or the use of incomplete sentences makes the paper difficult to follow without re-reading. 
For example, the first sentence of the description of the forward diffusion process (line 141) is not a sentence, the sentence beginning at line 152 is poorly written, and the last sentence before the description of the training objective in Section 3 is also not a sentence. Expand the caption for Figure 4 to more completely describe the figure. In Figure 5, the relative training cost is provided. Does this mean that "baseline" and "ours" are using a different number of steps, or are they using the same number of steps within a sub-figure, but a different number between the WER and emotion/speaker similarity figures? In the introduction you mention: "Considering that facial expressions are the most direct indicators of emotion" — it might be worth qualifying this and saying facial expressions are the most VISUAL indicator — tone of voice and other acoustic cues are equally indicative of emotional state too. In the Datasets section of Section 5, what does "Additionally, these datasets lack sufficient semantic units in real-world environments, making it challenging to train a TTS model." mean? What are the "sufficient semantic units"? Do you mean these audiovisual datasets are too small in terms of sample size (hours of speech) to train a high-quality TTS system? Questions For Authors: Q1: In the introduction, the wording refers to "... the one-to-many issues inherent in continuous speech features." — what are these one-to-many issues? Given the context of the work (lip motion to speech mapping), I was wondering if it is the mapping of a visual lip gesture, e.g., lip closure of a bilabial plosive, mapping to many speech sounds, e.g., /b/, /p/, /m/. This does not sit with "continuous speech features," though, so I am unclear. Q2: For Figure 4 — these are all data points from synthetic speech. How do data points for synthetic vs. real speech align in this low-dimensional projection? 
For example, are all of the data points for the synthetic voice of a speaker aligned with/distinct from the data points for the corresponding real voice? What is presented here shows that the performance of DEmoFace aligns better than the baseline, but that is only part of the story. Q3: I have a concern over the examples used in the baselines that are provided by way of comparison. The voice quality in some of the baselines is considerably worse than the quality suggested on the original demo page(s). Why is this the case? From what I understand, you are using the original implementations provided by the authors, and in many cases the same datasets (which suggests it is not a distribution shift that might require different hyper-parameters). Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your kind words and for appreciating the significance of our contributions, and we try our best to address your questions as follows.

**Q1: Impact of expressiveness variation**

Thank you for the insightful suggestion. In this paper, we use one-hot emotion labels to learn identity-agnostic emotional embeddings, ensuring that variations in expression do not significantly affect the generation results. In the future, we plan to incorporate emotional intensity as an additional condition to achieve more natural speech synthesis.

**Q2: Extra user study**

Thank you for the valuable suggestion. Per your suggestion, we have supplemented an extra subjective evaluation with more participants and new MOS metrics for both attributes. Please refer to the Q5 response for Reviewer 7YBQ for the results, which show that DEmoFace outperforms in naturalness, identity timbre, and emotional prosody consistency. We will revise our manuscript accordingly.

**Q3: Clarification on the one-to-many mapping issue in continuous speech features**

The one-to-many mapping in speech generation means that multiple speech sequences can correspond to the same text sequence, with diverse pitch, duration, and prosody, making the synthesized speech distribution multimodal rather than unimodal. The issue arises when generating continuous speech features, such as mel-spectrograms, which are highly correlated over time and frequency, leading to over-smoothing during frame-level prosody or linguistic predictions [1]. In addition, in this paper we focus on extracting identity styles and emotions from facial features, rather than semantics from lip motion. We will certainly clarify this in a future version.
[1]: FoundationTTS: Text-to-Speech for ASR Customization with Generative Language Model **Q4: Clarification on t-SNE experiments** To further compare our method with real speech data and ablation methods, we have added relevant visualizations referring to Figures 1 and 2 in the anonymous link https://anonymous.4open.science/r/demoface. The results demonstrate that our method shows a similar clustering distribution to that of real speech. **Q5: Implementation of baselines** To ensure a fair comparison, we have included detailed implementation details in the appendix. The speech quality differences in some baselines, compared to original demos, stem from their limited dataset size and lenient evaluation, which may lead to overfitting. For example, they relied on the limited-vocabulary GRID dataset or the speaker-limited V2C dataset. In contrast, our method introduces a mixed dataset and is evaluated on unseen speakers, reflecting more challenging real-world scenarios. **Q6: Clarification on related works, references, and writing** We appreciate your valuable feedback, which has helped us refine our work. 1. We will revise the related work section in detail: Compared with acoustic-guided methods, these non-autoregressive (NAR) methods suffer from over-smoothing, less diversity, and complex alignment issues, despite fast inference. Compared with visual-guided methods, they still pose the above issues by introducing NAR for coarse mel-spectrogram generation and diffusion for refinement and lacking an efficient conditioning paradigm. In contrast, our DEmoFace can flexibly leverage both visual and acoustic conditions while dynamically aligning text for higher-quality and more diverse speech generation, offering practical guidance for optimizing the promising modeling paradigm. 2. We will add the suggested references and compare them with ours. 3. We will refine our writing to improve clarity and enhance readability. 
**Q7: Training cost comparison in Figure 5**

Figure 5 of the main text compares metrics at the same model checkpoint, under the same number of training steps or the same training cost, which demonstrates that curriculum learning can improve training efficiency.

**Q8: Should facial expressions be called the "most visual indicator"?**

Our proposed eF2S problem aims to infer timbre and emotional prosody solely from visual cues rather than acoustic cues, with facial expressions naturally serving as the most relevant source of emotional information.

**Q9: Why an extra LRS3 subset is needed for learning more semantic units**

The "semantic unit" refers to text-rich units containing contextualized linguistic details. The audiovisual datasets used in this work exhibit limited linguistic diversity in acted emotional speech, which limits the model's ability to accurately generate speech for unseen text. Therefore, we incorporate a subset of LRS3 to enhance semantic learning from real-world scenarios.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed responses.

For Q5: I am still a little unsure why, if you are using the code provided by the authors of the baselines and have used the same data, the example outputs from their system that you have created sound worse (sometimes significantly so) than the examples on the original demo pages. I appreciate what you are saying about limited data, e.g., GRID, being problematic, but if the original authors used the same data would one not expect the equivalent quality when you recreate samples?

For Q9: are you referring to "limited phonetic coverage"? This is what the phenomenon used to be called in the speech community when a model has not seen/heard phonemes in a sufficient number of contexts, so the model cannot produce the sound with appropriate coarticulation effects taken into account.

---

Reply to Comment 1.1.1: Comment: Thank you for your additional questions.
We sincerely apologize for any lack of clarity in our previous responses, partly due to space constraints. We appreciate this extra opportunity to address your remaining concerns as thoroughly as possible. **Responses to Q5**: The performance decline of baselines can be attributed to two key factors. 1. **Differences in Training Data:** We fully agree with you that baselines trained on the same data with the same code should yield consistent results. However, this study introduces a new task and a corresponding dataset, which differs from those used in the original baselines. To ensure a fair comparison, we re-trained the baselines using the same training dataset as the proposed DEmoFace while strictly adhering to their original configurations. Here, we would like to highlight that the change of training data can significantly impact generation performance. For example, as shown on the StyleDubber demo page (https://acl2024x.github.io/StyleDubber/#Setting2), one can observe that models trained on the smaller Grid dataset outperform those trained on the larger V2C dataset in generation quality. A possible reason is that the distribution shift between training and test sets in Grid dataset is smaller than that in V2C dataset. Additionally, in our response to Q2 for Reviewer kkEx, our dataset-wise experiments further demonstrate significant performance variations of the same model across different training datasets. Therefore, the performance of these baselines on our more diverse and realistic dataset may differ from the originally reported results. 2. **Differences in Experimental Setup:** In this study, we focus on evaluating the generalization ability of all methods in real-world scenarios by ensuring no speaker overlap between the training and testing sets, which enforces a stricter constraint and hence, makes speech generation more challenging. 
Here, we would like to highlight that the performance degradation across distinct setups can also be observed on the StyleDubber demo page (https://acl2024x.github.io/StyleDubber/). In the Dub 1.0 and 2.0 settings, where the driven speech comes from speakers seen during training, all methods produced high-quality speech due to speaker information leakage, which nevertheless limits their applicability to real-world scenarios. However, in the Dub 3.0 setting, where the driven speech comes from unseen speakers (**aligning with our setup**), the generated speech quality of all methods declines significantly due to insufficient generalization, exhibiting unclear pronunciation and audio distortion. Therefore, these differences in experimental setups could explain the performance degradation observed in the baselines when tested on unseen speakers from a more diverse and realistic dataset in this study. We believe the two factors outlined above are the primary reasons for the observed discrepancies in generation quality. To facilitate the community in fully reproducing our results, we will open-source all the DEmoFace code as well as the re-training code for the other baselines. We hope this will further address your concerns. **Responses to Q9**: Thank you for requesting clarification on limited phonetic coverage. Yes, we are referring to “limited phonetic coverage”, which occurs when a model struggles to generate accurate speech because it was trained on a dataset with a restricted set of phonemes. We appreciate your feedback and will update our manuscript to further clarify this concept.
Summary: This paper argues that extracting and applying both emotional expressions and identities when generating speech from a face prompt is effective in resolving face-speech mismatch. To this end, the authors propose the Emotional Face-to-Speech (eF2S) task, which goes beyond existing Face-to-Speech (F2S) methods by applying emotions extracted from faces to the generated speech. The proposed framework for eF2S (DEmoFace) generates speech by directly integrating both identity and emotional expressions from the face input. Claims And Evidence: I agree that generating speech synchronized with emotion is essential for natural speech synthesis. However, is it necessary to extract emotion from a single video frame? For generating speech that aligns with emotional intent, it is reasonable to assume that text-based emotion should have a more significant influence than facial emotion. An emotion mismatch between face and text can lead to inconsistencies in speech generation. For example, if the image shows a smiling face but the text expresses anger (e.g., “I am angry”), this would represent a perceptual conflict between the generated speech expression and the text expression. The paper does not provide a detailed discussion on how to handle such conflicts, specifically which modality (face or text) should take precedence in determining the speech’s emotion. Methods And Evaluation Criteria: In this paper, the authors did not provide a comparative evaluation across the datasets used in training. While it employs multiple datasets (RAVDESS, MEAD, MELD-FAIR, and LRS3), there is no analysis of how the model performs differently across these datasets. Specifically, it is unclear whether emotion-rich datasets like RAVDESS lead to better emotion modeling compared to conversational datasets like MELD-FAIR. Additionally, there is no discussion of whether training on one dataset generalizes well to others.
Theoretical Claims: Appendix B outlines the preliminaries of the discrete diffusion model, offering relevant definitions and concepts. Appendix C presents the full derivation of the Enhanced Predictor-Free Guidance (EPFG) equations (Equations 4-5). However, the explicit mathematical derivations for Equations (1-3) are not detailed in the supplementary materials. Experimental Designs Or Analyses: 1. Lack of experiments on the emotion condition: The paper does not experimentally verify which factor—text emotion or face emotion—plays a more dominant role in generating speech emotion when they conflict. 2. Inconsistency in the interpretation of Figure 4 (t-SNE visualization): The paper claims that emotion regulation leads to more natural speech synthesis, but Figure 4 (t-SNE visualization) primarily shows a distribution based on gender differences. 3. Lack of verification of the contribution of facial emotion to speech synthesis: There is no evaluation of performance when generating speech using only identity $c_{id}$ and text $c_{text}$, without the emotion condition $c_{emo}$. Supplementary Material: G. User Evaluation: The subjective evaluation in this paper has major limitations due to the small number of participants and test samples. The number of evaluators (n=15) is relatively low compared to prior speech synthesis studies, which typically involve at least 30–50 participants for reliable MOS evaluation. The test sample size mentioned in the supplementary material is only 10 samples. This is too few to generalize the model’s performance. Relation To Broader Scientific Literature: This paper builds on prior work in Face-to-Speech (F2S) and expressive TTS by focusing on visual emotion conditioning, which has been underexplored in previous works. Unlike previous TTS work that primarily focused on identity-based speech generation, this work explicitly decouples identity and emotion to synthesize speech whose emotion is consistent with the face.
Essential References Not Discussed: In this paper, the generated speech incorporates facial emotion through extracted emotional features. However, the paper lacks a discussion of how Facial Emotion Recognition has evolved and its limitations in speech emotion modeling. Other Strengths And Weaknesses: - Strengths: The paper proposes a novel speech generation framework incorporating emotion conditioning from facial features, which has been insufficiently explored in previous works. - Weaknesses: The lack of dataset-specific experiments makes it difficult to evaluate whether this model generalizes well. The t-SNE visualization (Fig. 4) only shows gender-based clustering, which does not directly validate the effectiveness of emotional conditioning. There is no ablation study removing the emotion condition, which is critical to justify the importance of facial emotion in speech generation. Other Comments Or Suggestions: In the introduction section, when explaining the limitations of existing methods compared to the authors’ approach, the explanation is difficult to follow. (L26~45) Questions For Authors: 1. How does the model resolve conflicts between text emotion and face emotion when they are contradictory? 2. Why is dataset-wise comparison performance not measured? 3. Why does Figure 4 focus on gender-based clustering instead of emotional clustering? 4. What would be the impact of removing the facial emotion condition in ablation studies? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your insightful comments and suggestions, motivating us to conduct a more comprehensive experimental evaluation. We try our best to address your questions as follows. **Q1: Emotion ambiguity between text and face** Thank you for your insightful comment. Determining the dominant and subordinate modalities is crucial for resolving ambiguities in multimodal emotion generation. However, in our eF2S task, we focus on inferring timbre and emotional prosody solely from visual cues, with facial expressions serving as the dominant modality rather than textual cues. In the future, we will explore integrating face, text, and speech for more consistent emotion generation by dynamically adjusting modality impacts through cross-modal shift estimation. **Q2: Dataset-wise experiment for out-of-domain robustness** We appreciate your call for a more thorough evaluation of our method. To address your concern, we have supplemented dataset-wise experiments on the out-of-domain LRS2 benchmark. Specifically, due to the limited dataset size, we conducted full fine-tuning of the base model for 20 epochs on each of these four datasets to learn dataset-specific knowledge and tested them on LRS2. Please refer to the Q2 response for Reviewer kkEx for quantitative results; we will include this evaluation and discussion in the future version. **Q3: Clarification on t-SNE experiment** To further compare our method with real speech data and ablation methods, we have added the requested visualizations, which can be found in Figures 1 and 2 at the anonymous link https://anonymous.4open.science/r/demoface. The results demonstrate that our method exhibits a clustering distribution similar to that of real speech. **Q4: Extra ablation study on emotion conditioning** Thank you for helping improve the clarity of our paper. We have conducted an ablation study on the $\boldsymbol{c} _\text{emo}$ condition.
The results below show that $\boldsymbol{c} _\text{emo}$ enhances speech naturalness and expressiveness without significantly affecting SpkSim or WER, confirming that DEmoFace effectively decouples different conditions. The associated t-SNE visualizations can be found in the anonymous link provided above.

| Methods | EmoSim$\uparrow$ | SpkSim$\uparrow$ | RMSE$\downarrow$ | MCD$\downarrow$ | WER$\downarrow$ |
| ------------------------------ | ---------------- | ---------------- | ---------------- | --------------- | --------------- |
| w/o $\boldsymbol{c} _\text{emo}$ | 0.64 | 0.65 | 104.92 | 7.29 | 21.35 |
| **DEmoFace** | **0.70** | **0.67** | **101.18** | **6.86** | **20.78** |

**Q5: Limited subjective evaluation** Following the setup of Face-TTS [1] using 17 evaluators, we initially conducted similar evaluations, and we acknowledge that a larger number of evaluators ensures more reliable results. Per your suggestions, we have expanded the evaluation to 50 evaluators with 15 samples each, and we also introduce new MOS metrics for both timbre (MOS$ _\text{id}$) and prosody (MOS$ _\text{emo}$) evaluation as Reviewer 4KNc suggested. The new evaluation results with 95% confidence intervals are as follows:

| Methods | MOS$ _\text{nat}\uparrow$ | MOS$ _\text{id}\uparrow$ | MOS$ _\text{emo}\uparrow$ |
| --------- | ------------------------ | ----------------------- | ------------------------ |
| EmoSpeech | 2.30±0.19 | 2.93±0.09 | 2.78±0.13 |
| Face-TTS | 2.28±0.09 | 2.67±0.12 | 2.75±0.09 |
| **DEmoFace** | **3.17**±0.18 | **3.20**±0.11 | **3.26**±0.12 |

The results show that DEmoFace outperforms EmoSpeech and Face-TTS in naturalness, identity timbre, and emotional prosody consistency. We will revise our manuscript accordingly.
\[1\] Imaginary Voice: Face-styled Diffusion Model for Text-to-Speech **Q6: Clarification on existing method limitations** The limitations of existing methods can be categorized into two aspects: 1) **task-level**, where they cannot jointly model speaker identity and emotion solely from visual cues, and 2) **method-level**, which involves issues like one-to-many mapping with limited diversity or inefficiencies in continuous NAR or discrete AR frameworks. Building on these insights, we propose a novel discrete diffusion framework for a novel task—emotional face-to-speech generation. **Q7: References to be discussed** Thank you for your suggestion. We will certainly incorporate the relevant discussion and citations [2-3] in the future version. \[2\]: Deep facial expression recognition: A survey \ \[3\]: Emotion Recognition and Generation: A Comprehensive Review of Face, Speech, and Text Modalities
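The 95% confidence intervals reported with the MOS scores above follow the standard normal-approximation interval (mean ± 1.96 × standard error). A minimal sketch of that computation, using illustrative toy ratings rather than the study's raw data:

```python
import math

def mos_with_ci(scores, z=1.96):
    """Mean opinion score with a normal-approximation 95% confidence half-width."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)                   # z * standard error
    return mean, half_width

# Toy ratings on a 1-5 scale (illustrative only, not the study's data)
ratings = [3, 4, 3, 3, 4, 3, 2, 4, 3, 3]
mean, hw = mos_with_ci(ratings)
print(f"MOS = {mean:.2f} +/- {hw:.2f}")
```

With 50 evaluators rather than 10, the standard error shrinks by roughly a factor of √5, which is why the expanded study yields tighter intervals.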
Summary: The paper introduces a task named Emotional Face-to-Speech (eF2S), which aims to synthesize emotional speech directly from expressive facial cues. The proposed DEmoFace leverages a discrete diffusion transformer with curriculum learning to achieve the SOTA eF2S performance. Claims And Evidence: The claims made in the paper are generally well-supported by clear and convincing evidence. The authors provide extensive experimental results, including both quantitative and qualitative evaluations. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the task. One potential limitation is the dataset size and diversity. While the authors use a combination of RAVDESS, MEAD, and MELD-FAIR, these datasets are relatively small and may not fully capture the variability in real-world scenarios. Theoretical Claims: It seems the theoretical claims are correct in this paper. Experimental Designs Or Analyses: I have checked the experimental designs. It seems the authors have provided comprehensive experiments in the main text and appendix. Supplementary Material: I have reviewed the appendix including the webpage with abundant demos. Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: Weaknesses: 1. Limited Novelty and Efficiency: The paper builds on existing techniques (e.g., discrete diffusion models, curriculum learning) without introducing fundamentally new algorithms. While the combination of these methods is creative, the lack of novel theoretical or algorithmic contributions limits the paper's originality. Additionally, the authors do not discuss the efficiency of the proposed framework, such as computational cost or inference speed, which is critical for real-world applications. 2. 
Limited Dataset Size and Diversity: The experiments are conducted on relatively small datasets (RAVDESS, MEAD, and MELD-FAIR), which primarily focus on English speakers and Western facial expressions. This limits the generalizability of the results and raises concerns about the model's performance in more diverse cultural and linguistic contexts. The lack of evaluation on larger or more varied datasets hinders the paper's ability to demonstrate the framework's robustness and applicability to real-world scenarios. Other Comments Or Suggestions: None Questions For Authors: Please see the weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your positive comments and efforts in reviewing our manuscript. We try our best to address your questions as follows. **Q1: Limited novelty** Thank you for the opportunity to clarify the distinctions from previous methods. Although DEmoFace builds on existing discrete diffusion models (DDMs), our work makes key contributions. 1. While DDMs have shown great promise in text generation, their potential for speech generation—particularly with RVQ codec tokens—remains underexplored. 2. Existing DDMs struggle with effective guidance for multi-conditional generation. Our EPFG addresses this challenge by providing more efficient guidance, supported by strong empirical evidence, while also introducing theoretical insights that enhance the understanding of multi-conditional generation in DDMs. 3. DEmoFace is a unified framework for both acoustic- and visual-guided speech generation, with extensive results demonstrating its efficiency. **Q2: Limited data size** To address concerns about model generalization, we randomly sample 1,500 utterances from the LRS2 benchmark to evaluate DEmoFace's performance on real-world out-of-domain data. Results demonstrate that DEmoFace consistently achieves high-quality generation and outperforms Face-TTS, even for unseen speakers and content.

| Methods | EmoSim$\uparrow$ | SpkSim$\uparrow$ | RMSE$\downarrow$ | MCD$\downarrow$ | WER$\downarrow$ |
| ------------ | ---------------- | ---------------- | ---------------- | --------------- | --------------- |
| Face-TTS | 0.64 | 0.13 | 104.39 | 14.29 | **16.60** |
| **DEmoFace** | **0.75** | **0.64** | **96.50** | **12.75** | 20.26 |

Furthermore, as suggested by Reviewer 7YBQ, we have conducted dataset-wise experiments on LRS2, performing full fine-tuning of the base model for 20 epochs on four datasets to capture dataset-specific knowledge.
The results are as follows:

| Methods | $\text{Num}_\text{utterance}$ | $\text{Num}_\text{word}$ | $\text{Num}_\text{speaker}$ | EmoSim$\uparrow$ | SpkSim$\uparrow$ | RMSE$\downarrow$ | MCD$\downarrow$ | WER$\downarrow$ |
| ---------------- | ----------------------------- | ------------------------ | --------------------------- | ---------------- | ---------------- | ---------------- | --------------- | --------------- |
| Finetune-RAVDESS | 1,140 | 25 | 19 | 0.57 | 0.56 | 112.68 | 13.74 | 92.05 |
| Finetune-MELD | 2,150 | 2,996 | 143 | 0.72 | 0.53 | 105.82 | 13.08 | 38.16 |
| Finetune-MEAD | 8,876 | 6,504 | 36 | 0.70 | 0.50 | 102.47 | 13.06 | 33.64 |
| Finetune-LRS3 | 14,601 | 15,401 | 719 | 0.72 | 0.63 | 101.58 | 12.86 | 20.73 |
| **DEmoFace** | 26,767 | 15,545 | 917 | **0.75** | **0.64** | **96.50** | **12.75** | **20.26** |

We have four key findings: 1) limited semantic content (low utterance and word counts) leads to higher WER; 2) limited speaker diversity (a small number of speakers) negatively affects SpkSim; 3) emotion-rich but small datasets like RAVDESS may not accurately reflect real-world distributions, as acted emotions tend to be exaggerated; and 4) mixed-dataset training improves generalization across all aspects on out-of-domain real-world data. In the future, we will expand the dataset to enhance diversity and real-world applicability. **Q3: Clarification on efficiency analysis** We measure latency on a 4090 GPU with mini-batch sizes of 1 and 32 utterances, and numbers of function evaluations (NFE) of 32 and 64. Latency is averaged over the test set utterances, and we report the Real-Time Factor (RTF), which indicates the time (in seconds) required to synthesize one second of waveform. The results show that our method with NFE=32 and batch size=1 has the potential to build a real-time TTS system, running 2 times faster than real time. In the future, we will optimize inference efficiency by developing accelerated sampling techniques.
| | RTF$\downarrow$ |
| -------------------------------- | --------------- |
| VoiceCraft (batch size=1, NFE=1) | 1.92 |
| ChatTTS (batch size=1, NFE=1) | 0.30 |
| Ours (batch size=1, NFE=32) | 0.49 |
| Ours (batch size=1, NFE=64) | 0.96 |
| Ours (batch size=32, NFE=32) | 0.13 |
| Ours (batch size=32, NFE=64) | 0.23 |
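As a reading aid for the table above: RTF is simply synthesis wall-clock time divided by the duration of the generated audio, so values below 1 mean faster-than-real-time synthesis. A minimal sketch with illustrative timings (not the measured numbers above):

```python
def real_time_factor(synthesis_seconds, audio_seconds):
    """Seconds of compute per second of generated audio; RTF < 1 is faster than real time."""
    return synthesis_seconds / audio_seconds

# Illustrative timing: 4.9 s of compute to synthesize a 10 s utterance
rtf = real_time_factor(4.9, 10.0)
print(round(rtf, 2))
```

Batching amortizes per-step cost across utterances, which is why the batch-size-32 rows report lower RTF than the batch-size-1 rows at the same NFE.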
Summary: The paper introduces Emotional Face-to-Speech (eF2S), a novel task that synthesizes emotional speech solely from expressive facial cues. The authors propose DEmoFace, a generative framework leveraging a discrete diffusion transformer (DiT) with curriculum learning, integrated with a multi-level neural audio codec. Key contributions include a multimodal DiT block, a coarse-to-fine curriculum learning strategy, and an enhanced predictor-free guidance mechanism. Experimental results demonstrate improved naturalness and emotional consistency compared to baselines, even surpassing speech-driven methods. ## update after rebuttal Thanks for the clarification. I maintain my score. Claims And Evidence: The claim that eF2S generates emotional speech purely from facial expressions is well-supported by experiments. The introduction of a discrete diffusion model and curriculum learning is novel and backed by strong empirical evidence. However, the claim that DEmoFace surpasses speech-driven models needs further justification—some results (e.g., WER) still favor speech-guided methods. Methods And Evaluation Criteria: The proposed method is well-structured, with clear pipeline descriptions. The use of curriculum learning to gradually introduce high-level tokens is innovative. The evaluation metrics (WER, MCD, EmoSim, SpkSim) are appropriate. Theoretical Claims: The paper extends diffusion models to multimodal emotional speech generation, which is a promising direction. The enhanced predictor-free guidance (EPFG) mechanism is theoretically well-motivated, but the justification for its superiority over standard PFG needs further clarity. Experimental Designs Or Analyses: The dataset selection is reasonable, covering RAVDESS, MEAD, MELD-FAIR, and a subset of LRS3. The comparison with state-of-the-art models (e.g., Face-TTS, EmoSpeech) is thorough. Supplementary Material: The appendix provides useful details on hyperparameters, training settings, and loss functions. 
The examples in the supplementary material demonstrate the effects of the proposed method. Relation To Broader Scientific Literature: The paper builds upon previous face-driven TTS, emotional TTS, and discrete diffusion models. Essential References Not Discussed: The references are sufficient. Other Strengths And Weaknesses: Strengths: • Novel problem formulation (eF2S). • Strong empirical results with extensive benchmarks. • Open-source potential for future research. Other Comments Or Suggestions: Add ablation studies on dataset quality and robustness. Questions For Authors: 1. How does the model handle ambiguous facial expressions (e.g., neutral vs. mild happiness)? 2. How would DEmoFace perform on out-of-domain data (e.g., unseen speakers, languages)? 3. What are the training costs and computational efficiency compared to standard TTS models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your positive feedback and constructive suggestions, and try our best to address your concerns as follows. **Q1: Facial expression ambiguity** Thank you for your insightful comment. In this paper, we leverage a pre-trained facial expression recognition model to generate one-hot emotion labels for learning emotional embeddings, serving as a plug-and-play component. This allows for the integration of stronger models, such as micro-expression recognition, to enhance expression precision and reduce expression ambiguity. We will incorporate this discussion in the future version. **Q2: Limited data size and multilingual extension** Thank you for your insightful comment. To address concerns about model generalization, our submitted paper ensures that test set speakers are unseen for RAVDESS and MEAD, demonstrating strong generalizability. In addition, per Reviewer kkEx's suggestion, we have supplemented an out-of-domain evaluation on LRS2 by randomly sampling 1,500 utterances, with results provided in the Q2 response to Reviewer kkEx. Experimental results show that DEmoFace achieves better generalizability and outperforms Face-TTS even with unseen speakers and semantic content. Expanding to unseen languages is an interesting challenge that requires multilingual datasets and handling linguistic and phonetic differences. However, this is beyond the scope of our eF2S task, which focuses on generating speech aligned with facial identity and emotional expression. In the future, we plan to explore multilingual speech generation using International Phonetic Alphabet (IPA) embeddings. **Q3: Training computational efficiency compared with standard TTS models** Thank you for your valuable suggestion. Following DiTAR [1], which evaluates the FLOPs of standard Non-AutoRegressive (NAR) TTS models, we compare DEmoFace with both Continuous (Cont.) and Discrete (Disc.) NAR methods at Number of Function Evaluations (NFE = 32).
As shown in the following table, DEmoFace achieves comparable training efficiency to other NAR TTS models. In the future, we will further optimize its efficiency and extend DEmoFace from cross-modal face-to-speech to standard TTS to explore its broader impact.

| Type | Methods | Params | TFLOPS |
| --------- | ----------------------- | ------ | ------ |
| Cont. NAR | E2-TTS (NFE=32) | 0.3B | ~56.5 |
| Cont. NAR | F5-TTS (NFE=32) | 0.3B | ~37.4 |
| Disc. NAR | MaskGCT (NFE=50) | 1.1B | ~116.7 |
| Disc. NAR | NaturalSpeech 3 (NFE=1) | 0.5B | ~8.9 |
| Disc. NAR | **DEmoFace** (NFE=32) | 0.2B | ~12.9 |

[1] DiTAR: Diffusion Transformer Autoregressive Modeling for Speech Generation **Q4: Comparison with acoustic-driven methods** In Table 1 of the main paper, the key difference between acoustic-guided and visual-guided methods lies in the modality used to model timbre and emotional prosody, as reflected in SpkSim and EmoSim. However, both approaches rely on textual modality to guide semantic content generation, as indicated by WER. Our acoustic-guided DEmoFace\* outperforms the visual-guided DEmoFace due to a smaller distribution shift from ground-truth (GT) speech. This is because DEmoFace\* is trained and tested with speech modality, while DEmoFace is trained with GT speech but tested with vision modality. **Q5: Superiority of EPFG** The key difference between EPFG and vanilla PFG in discrete diffusion lies in multi-condition disentanglement versus aggregation. As stated in the main text (lines 401 and 432), EPFG significantly enhances multi-conditional generation quality, mitigating semantic confusion caused by aggregation. We will clarify this more explicitly in the future version.
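The authors' EPFG derivation is in Appendix C of the paper; purely to illustrate the disentanglement-versus-aggregation distinction drawn in Q5, the sketch below shows a generic composable (predictor-free-style) guidance rule that weights each condition's guidance term independently relative to the unconditional prediction. This is a standard construction, not the authors' EPFG, and the condition names are illustrative:

```python
def composed_guidance(uncond, conds, weights):
    """Disentangled multi-condition guidance: each condition adds its own
    independently weighted guidance term relative to the unconditional logits,
    instead of guiding with a single aggregated condition."""
    guided = list(uncond)
    for cond, w in zip(conds, weights):
        guided = [g + w * (c - u) for g, c, u in zip(guided, cond, uncond)]
    return guided

uncond = [0.0, 0.0, 0.0]
cond_id = [1.0, 0.0, 0.0]   # e.g. an identity condition (illustrative values)
cond_emo = [0.0, 2.0, 0.0]  # e.g. an emotion condition (illustrative values)
print(composed_guidance(uncond, [cond_id, cond_emo], [0.5, 1.0]))
```

Because each condition carries its own weight, the emotion term can be strengthened or weakened without disturbing the identity term, which is the property an aggregated single-condition guidance cannot offer.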
Reinforcement Learning for Quantum Control under Physical Constraints
Accept (poster)
Summary: The authors address the problem of optimal quantum control using reinforcement learning (RL). Specifically, they define an RL framework that incorporates several real-world physical constraints to enhance performance. First, they limit the agent’s possible actions to those that require a small number of simulation steps. This, they argue, has two key advantages:
 (i) It improves training efficiency, as the fidelity component of the reward can be computed more efficiently.
 (ii) It biases the agent toward more physically meaningful solutions due to a connection with adiabatic theory. Second, they employ reward shaping to encourage the agent to generate smoother control signals, achieving similar benefits as (i) and (ii) above. Finally, they benchmark their approach on several experimental settings and demonstrate strong performance. Claims And Evidence: Yes, the claims are clearly stated, and the provided evidence is sufficiently convincing. Methods And Evaluation Criteria: The experiments appear to be reasonable benchmarks for the problem. However, as I am not an expert in optimal quantum control, I cannot accurately assess the difficulty of these benchmarks or whether the achieved performance represents the state of the art. Nonetheless, they do beat rather involved schemes by a significant margin. Theoretical Claims: There are no critical theoretical claims in the paper that I was unable to verify. Experimental Designs Or Analyses: The experiments appear to be reasonable benchmarks for the problem. However, as I am not an expert in optimal quantum control, I cannot accurately assess the difficulty of these benchmarks or whether the achieved performance represents the state of the art. Supplementary Material: No. Relation To Broader Scientific Literature: The questions are central to control, and extensively investigated; the work is well connected and the connections are listed. The authors highlight the novelty of their work in two key aspects: (i) restricting the RL agent’s action space and (ii) defining a novel reward function. Through these mechanisms, they effectively incorporate physical constraints into the RL framework. Essential References Not Discussed: Not that I am aware of. 
Other Strengths And Weaknesses: From a technical standpoint, the contributions are somewhat limited, as the approach builds on a standard RL framework for quantum optimal control, incorporating two novel techniques (limiting the agent’s action space and shaping the rewards). However, if these modifications lead to significantly better experimental results compared to the state of the art, this would still constitute a valuable contribution. Unfortunately, I am not able to assess how impressive the experimental results are due to my limited familiarity with the field. A second concern is also honestly raised by the authors in the Limitations and Impact statement. Due to the data- (interaction-)driven nature of the approach, this method will require huge numbers of uses of a quantum device and/or unrealistically strong simulators. It seems the only way this would really work would be in the far future, or given access to very good models of the system. However, the latter should be impossible by virtue of the hardness of simulating quantum systems. So it is not entirely clear what the more immediate impact of this line of approaches will be. There would clearly be value in pushing ML/RL machinery, but I don't think the strengths of this contribution are there. We do learn interesting facts about the relevance of restricting to some simpler-to-simulate actions in the search space, but it is not clear to me to what extent this necessarily implies we will be far from globally optimal solutions. Other Comments Or Suggestions: None aside from issues mentioned in Other strengths and weaknesses. Questions For Authors: None aside from issues mentioned in Other strengths and weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:
## Reviewer K5by
Thank you for recognising our incorporation of physical constraints, which significantly advances the state of the art, as a valuable contribution.
### Weaknesses
> "contributions are somewhat limited, as the approach builds on a standard RL framework for quantum optimal control, incorporating two novel techniques (limiting the agent’s action space and shaping the rewards)."

A4.1 While we agree that our work lies within applied machine learning, our implementation of physics-informed constraints enables a highly parallelisable learning algorithm for arbitrary quantum systems, achieving state-of-the-art fidelities. This facilitates easy application of RL to pulse-controlled quantum systems, aiding in the discovery of optimal and robust solutions and significantly surpassing other RL and non-RL methods. To highlight our contribution, we have conducted an ablation study in the noise-free Lambda system explained in Sec. 5.1. We replace PPO with two alternative algorithms, DDPG [1] and TD3 [2]. Additionally, we compare to a vanilla version of each algorithm, where the reward function includes only a linear fidelity term (as done in many previous works, e.g. [3]). The ablation results (see the table below) highlight the efficiency and effectiveness of our approach in achieving high-fidelity solutions. Notably, none of the vanilla algorithms achieve a fidelity above 0.99. Our PPO implementation significantly outperforms both DDPG and TD3, reaching fidelity >0.99 in less than 1% of the time. In our revised manuscript, we will include this baseline comparison along with a plot illustrating the learning dynamics.
|Algorithm|Time to reach mean fidelity >0.99 (Nvidia P100-GPU)|Mean Fidelity (over 256 batches) after convergence|
|-|-|-|
|Constrained PPO (Ours)|39 s|0.9997|
|Vanilla PPO|never|0.989|
|Constrained DDPG|4945 s|0.995|
|Vanilla DDPG|never|0.985|
|Constrained TD3|4517 s|0.992|
|Vanilla TD3|never|0.625|

> "Due to data (interaction) driven nature of the approach this method will require huge numbers of uses of a quantum device and/or unrealistically strong simulators."

A4.2 While our method requires a large number of quantum device interactions, it has been demonstrated in [4] that this is feasible in real-world systems, given their fast gate times. We also want to highlight that most quantum systems have extremely well understood theoretical models; once the model parameters are correctly identified, our method can be applied to real physical devices. This is demonstrated in prior work, where good agreement between experimental data and simulation results is shown [5, 6, 7].
> I am not able to assess how impressive the experimental results are due to my limited familiarity with the field

See A4.1 for RL baselines. We also wish to highlight that we outperform alternative optimisation methods on the respective problem settings in Sec. 5, with superior fidelities and noise robustness. Moreover, the methods we introduce for pulse control of physical systems can find application in various fields of physical science, such as optimising chemical reactions [9] and high-energy physics [8].
### References:
[1] Lillicrap et al. 2016, 'Continuous Control with Deep Reinforcement Learning', ICLR 2016 [2] Fujimoto et al. 2018, 'Addressing Function Approximation Error in Actor-Critic Methods', ICML 2018 [3] Bukov, Marin, et al. Physical Review X 8.3 (2018): 031086. [4] Baum, Yuval, et al. PRX Quantum, vol. 2, no. 4, 2021, p. 040324. [5] Magnard, P., et al. Physical Review Letters, 121(6), 060502. [6] Willsch, Dennis. arXiv preprint arXiv:2008.13490 (2020). [7] Zhang, X. L.
et al. Physical Review A, 85(4), 042310.
[8] Capuano, F. et al. arXiv preprint arXiv:2503.00499 (2025).
[9] Zhou, Z. et al. ACS Central Science (Vol. 3, Issue 12, pp. 1337–1344). American Chemical Society (ACS) (2017).

---

Rebuttal Comment 1.1: Comment: I thank the authors for their comments and explanations. The authors responded to three of my points: 1) unclear innovative step; 2) efficacy; and 3) importance of improvement. Regarding 1), I appreciate the explication of the innovations; they are in line with what I originally understood. Regarding 2), this helps, but could the authors explicitly give the number of measurements needed for a more ambitious control problem? Regarding 3), the level of improvement, the table helps, but is this a realistic representation of what one could expect in real devices? How many differing settings were the comparisons done in? Is "time to reach fidelity" a critical parameter? Is getting an improvement in the 3rd digit of precision a game changer? Could it be? I feel the points explained do improve my take on the paper, but not to the point that I would increase the grade.

---

Reply to Comment 1.1.1: Comment:

> could the authors give explicitly the number of measurements needed for a more ambitious control problem?

The number of measurements is a function of the number of RL steps required until convergence; specifically, for the Lambda system and Transmon system it is *num_rl_updates* * *batch_size*. For the noise-free Lambda and Transmon systems, this number is 5100 x 256 and 578 x 256, respectively. For the Rydberg system, the physical number of measurements is *m* * *num_rl_updates* * *batch_size* (where m = 4 -- an overhead associated with the quantum state reconstruction); for our Rydberg environment this number is 4 x 6800 x 256. In a more complex setting, we expect this number to increase; for example, in Ref. [1] the authors show that they generate complete gate sets on a real device with $\mathcal{O}(10^6)$ measurements (i.e.
individual steps) per gate. The reported experimental runtime in [1] is on the order of hours, showing the feasibility of scaling to larger measurement numbers in real devices with fast gate times.

> "Is this a realistic representation of what one could expect in real devices?"

While experimental imperfections may slightly alter absolute fidelities, we expect to have a realistic representation of what one could expect in real devices, and refer to papers that found close agreement between simulations and experimental results for the environment we consider, such as [2].

> How many differing settings were the comparisons done in?

We have run the full comparison with optimised hyperparameters for every algorithm in the Lambda system environment (Sec. 5.1) so far. We remark that the rebuttal time did not allow us to run such extensive benchmarks in all environments. However, we observed similar trends between the different algorithms in the other environments and will add the full results to the final paper.

> Is "time to reach fidelity" a critical parameter?

Yes, because we show that our method allows training to 0.99 fidelity 100x faster than the other baselines. Effectively, this allows adjusting the control pulse to newly measured system parameters 100x faster than with the other baselines. This is significant as it minimises device time when doing real experimental control. It is also significant for more complex simulated environments with longer update steps, where - with limited compute time - baseline algorithms might never converge.

> Is getting an improvement in the 3rd digit of precision a game changer?

Thank you for this great remark. We now clarify in the manuscript that an increase in fidelity from 0.99 to 0.999 is of great significance, as it can enable error-free quantum computing.
We refer to seminal works in Quantum Computing [3,4,5], which find that for error-free quantum computations, with error correction, the physical error rate must be below a certain threshold value, often estimated to be around 10⁻³ or lower [3,4]. Thus, increasing fidelity from 0.99 to 0.999 surpasses this critical threshold, making error-corrected quantum computing feasible.

[1] Baum et al. PRX Quantum 2, 040324 (2021)
[2] Xu, H. et al. Nat Commun 7, 11018 (2016)
[3] Fowler et al. Phys. Rev. A 86, 032324 (2012)
[4] Gottesman, D. (2002). An introduction to quantum error correction. In Proceedings of Symposia in Applied Mathematics (Vol. 58, pp. 221-236)
[5] Preskill et al. Quantum 2, 79 (2018)
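The measurement totals quoted in the reply above follow directly from the stated formula; a minimal sketch that reproduces them (the update and batch counts are taken from the reply itself, the helper name is ours):

```python
def total_measurements(num_rl_updates, batch_size=256, m=1):
    """Physical measurements = m * num_rl_updates * batch_size,
    where m is the state-reconstruction overhead (m = 1 unless noted)."""
    return m * num_rl_updates * batch_size

print(total_measurements(5100))        # noise-free Lambda system: 5100 x 256
print(total_measurements(578))         # noise-free Transmon system: 578 x 256
print(total_measurements(6800, m=4))   # Rydberg system, m = 4 reconstruction overhead
```

All three totals are of the same order as the $\mathcal{O}(10^6)$ per-gate measurements reported on real hardware in [1].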
Summary: This paper introduces an RL approach for quantum control under physical constraints, aiming to improve the fidelity and robustness of quantum control tasks in real-world scenarios. Main Findings and Results: 1) The proposed physics-constrained RL algorithm achieves high-fidelity quantum control solutions across three different quantum systems. The fidelities exceed 0.999 across all tested systems, demonstrating superior performance compared to previous methods. 2) The method shows significant robustness to time-dependent perturbations and experimental imperfections. 3) By constraining the solution space to exclude control signals that induce overly fast quantum state dynamics, the algorithm improves computational scalability. Main Algorithmic/Conceptual Ideas: 1) Physics-Constrained RL: The core idea is to incorporate physical constraints (e.g., maximum number of simulation steps) directly into the RL framework. 2) Reward Shaping: The reward function is designed to incentivize high fidelity while penalizing solutions with large pulse areas and non-smooth control signals, which helps in discovering control policies that are easier to implement experimentally and less prone to errors due to signal imperfections. Claims And Evidence: The claims made in the submission are not well supported by clear and convincing evidence. Specifically, the notion of "physics-constrained" is not sufficiently developed or explained. This lack of clarity undermines the overall persuasiveness of the paper. (1) The term "physics-constrained" is used throughout the paper but is not adequately defined or justified. The authors mention incorporating physical constraints into the RL algorithm but fail to provide a detailed explanation of what these constraints entail and how they are specifically applied.
Without a clear understanding of the constraints, it is difficult to assess their impact on the results and whether they truly enhance the robustness and efficiency of the quantum control solutions. (2) While the authors present numerical simulations for three quantum systems, the results are not compared to a wide range of existing methods, especially existing RL methods for quantum control. (3) The computational efficiency claims are not convincingly demonstrated. The authors assert that their method improves computational efficiency by enabling parallel optimization, but they do not provide detailed comparisons with other methods in terms of computational resources and time. Without such comparisons, it is challenging to evaluate the actual benefits of their approach in terms of scalability and practical applicability. Methods And Evaluation Criteria: The proposed methods and evaluation criteria need more clarity, justification, and comprehensive validation to be considered appropriate for quantum control tasks. For example, the "physics-constrained" approach is vaguely defined, making it unclear how it specifically addresses quantum control challenges. The reward function's penalties lack clear justification, calling into question their effectiveness in guiding the RL agent. Computational efficiency claims are unconvincing due to the lack of detailed benchmarking against other existing methods. Theoretical Claims: Not applicable. The claims made are largely based on the results of simulations and the design of the algorithm, rather than on formal mathematical proofs. Experimental Designs Or Analyses: Not exactly. The experimental design and analysis in the paper are based on numerical simulations, which are appropriate for the initial validation of the proposed method. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper's key contributions are well aligned with the broader scientific literature on quantum control and RL.
Essential References Not Discussed: There are several related works that are essential for understanding the context and significance of the key contributions, but are not currently discussed in the paper. For example, Universal quantum control through deep reinforcement learning (npj Quantum Inf. 2019) and Curriculum-based deep reinforcement learning for quantum control (IEEE TNNLS 2023). Other Strengths And Weaknesses: Strengths: (1) The integration of physical constraints into the RL framework is a novel approach that enhances the practicality and robustness of quantum control solutions. (2) It is a significant advancement in achieving high fidelity and robustness to noise. Weaknesses: (1) The paper relies solely on simulations without experimental validation. How to assess its practical applicability? (2) More comprehensive benchmarking against existing methods, especially existing RL methods for quantum control, would strengthen the paper's claims and demonstrate its advantages. Other Comments Or Suggestions: No. Questions For Authors: 1. How does the proposed method scale to larger quantum systems with more qubits or higher-dimensional state spaces? 2. How does the proposed RL approach compare with other existing RL methods for quantum control tasks? 3. How to verify the performance of the proposed approach on real physical systems? 4. How well does the proposed RL method generalize to other types of quantum systems? 5. How sensitive is the method to the choice of physical constraints such as maximum solver steps? How to choose proper physical constraints? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal:

## Reviewer 8yQf

We thank the reviewer for recognising our integration of physical constraints into RL, which achieves high-fidelity, noise-resilient solutions.

### Weaknesses

> "The term "physics-constrained" is used throughout the paper but is not adequately defined or justified."

A3.1: Thank you for raising this point; we refer to A1.3 in the response to sD3o and provide additional detail in A3.7.

> "The computational efficiency claims are not convincingly demonstrated."

A3.2: We acknowledge the lack of a timing comparison in our work and provide one here for the results in Table 2 (Sec. 5.1), with one additional comparison, for the noise-free λ system optimisation. On a Mac M1 2020 CPU, our highly parallelisable implementation achieves up to 30x faster runtimes for a single run, while also yielding the highest fidelity. We further show speedups over the baseline time by up to two orders of magnitude with GPU-based parallelisation in Appendix Fig. 11.

|Algorithm|Time to convergence|Mean fidelity|
|-|-|-|
|Our RL Algo|239.1 s|0.999|
|Reinforce RL Algo [T.1]|2284.8 s|0.93|
|OC (BFGS) [T.2]|274.08 s|0.89|
|OC Krotov [T.3]|7216 s|0.99|

### Questions

> How does the proposed method scale to larger quantum systems with more qubits or higher-dimensional state spaces?

A3.3 Thank you for bringing this up; see response A1.4 to sD3o.

> How does the proposed RL approach compare with other existing RL methods for quantum control tasks? [...] "the results are not compared to a wide range of existing methods, especially the existing RL methods for quantum control." [...] "For example, [2], [3]"

A3.4: [2] uses discrete actions, unsuitable for our real-world quantum control problems requiring continuous actions. Hence we cannot benchmark against it, but have added it to our related work section.
Unfortunately, [3] does not provide an open-source implementation, and the paper does not provide sufficient detail to allow reimplementation; hence, we are unable to benchmark against it. We want to highlight that we already included [3] in our related work section. We now include a more comprehensive discussion, highlighting its computationally less efficient approach of learning control signal parameters individually at each signal time step. We are confident that our work includes empirical comparisons to all relevant baselines, and we kindly ask the reviewer to specify if any comparisons were missing. We now also include benchmarks against DDPG and TD3 and refer the reviewer to A4.1 in the response to K5by.

> How to verify the performance of the proposed approach on real physical systems?

A3.5 Most quantum systems have well-established theoretical models. Given the system parameters, learned pulses can be directly applied to real devices; many prior works show good agreement between simulation and experiment [4,5,6]. In future work, we are planning to extend our experiments to real-world devices. Unfortunately, IBM has discontinued pulse-level access, so we are pursuing Rigetti [7] as an alternative.

> How well does the proposed RL method generalise to other types of quantum systems?

A3.6 Our model-free approach generalises to any quantum system that can be simulated. Our method can also be extended to black-box sampling from real physical devices; the feasibility of this has been shown in [1].

> How sensitive is the method to the choice of physical constraints such as maximum solver steps? How to choose proper physical constraints?

A3.7 Our ablation study (App. Fig. 12 and 13) shows that larger smoothing kernel standard deviations and smoothing penalties boost fidelity while keeping signals within the bandwidth limits (typically a few hundred MHz) of standard electronics. In the revised manuscript, we will expand on App. Sec.
F.3 to explain how we choose physical smoothing constraints that are consistent with electronic bandwidth limitations. Limiting solver steps improves compute time without affecting the optimal dynamics, provided the number of steps is sufficient. We set N_max based on the adiabatic condition, determined by the maximal effective Rabi frequency (N_max ≳ 1/Ω_eff), and increase it until the infidelity drops significantly, as outlined in lines 206–214.

### Concluding remarks

We kindly ask the reviewer whether we have addressed their concerns by (1) adding RL benchmarks and distinguishing our work from existing RL work, (2) including the requested timing comparison, and (3) explaining the physics-driven constraints and their effects. With these updates and new results, we politely ask the reviewer to reassess their score or highlight remaining issues.

### References:

[1] Baum, Yuval, et al. PRX Quantum, vol. 2, no. 4, 2021, p. 040324.
[2] Niu et al. 2019, npj Quantum Information
[3] Ma et al. 2023, IEEE TNNLS
[4] Magnard, P., et al. Physical Review Letters, 121(6), 060502.
[5] Willsch, Dennis. arXiv:2008.13490 (2020).
[6] Zhang, X. L. et al. Physical Review A, 85(4), 042310.
[7] Rigetti Computing. (2025). pyQuil 4.16.1.
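The penalty structure described in A3.7 (fidelity rewarded, pulse area and roughness penalised) can be sketched in a few lines. This is a hedged illustration, not the paper's Eq. 2: the weights, the signal, and the use of squared step-to-step differences as a bandwidth proxy are all placeholder assumptions:

```python
import numpy as np

def shaped_reward(fidelity, signal, lam_area=0.01, lam_smooth=0.1):
    """Illustrative shaped reward: reward high fidelity, penalise total
    pulse area and step-to-step roughness (a rough proxy for bandwidth).
    All weights here are hypothetical."""
    area_penalty = lam_area * np.sum(np.abs(signal))
    roughness_penalty = lam_smooth * np.sum(np.diff(signal) ** 2)
    return fidelity - area_penalty - roughness_penalty

smooth_pulse = np.sin(np.linspace(0.0, np.pi, 50))
noisy_pulse = smooth_pulse + 0.3 * np.random.default_rng(0).standard_normal(50)
# At equal fidelity, the smoother pulse receives the higher reward.
```

At equal fidelity the smoother signal is preferred, which is the mechanism that steers the agent towards low-bandwidth, experimentally realisable pulses.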
Summary: This paper explores the application of reinforcement learning (RL) for quantum control, introducing constraints aimed at improving learning efficiency. The authors present a rigorous approach to adapting RL for quantum applications and provide detailed reasoning behind the necessary modifications. The study focuses on a specific use case within quantum control, integrating a tailored set of constraints and optimizations to enhance RL performance. ### Update after rebuttal The authors have provided a detailed and constructive response that addresses all previously raised concerns. They clarified the distinction between their pulse-level quantum control setting and prior circuit-level RL work, appropriately situating their contributions within the literature. Code has been made anonymously available and demonstrates compatibility with standard RL libraries. Additionally, they conducted new experiments benchmarking PPO against DDPG and TD3, showing clear advantages in both performance and sample efficiency. The authors also elaborated on their notion of “interpretable quantum state dynamics,” adding further clarity. In light of these clarifications and additions, I updated the score accordingly. Claims And Evidence: The authors make several claims regarding the effectiveness of their constrained RL approach for quantum control. While the paper provides strong theoretical and empirical reasoning for these adaptations, there are notable gaps in supporting evidence: - The paper does not sufficiently discuss prior work on RL applications in quantum computing. - Details on the specific RL algorithm employed and justification for its suitability to this problem are lacking. - The absence of released code raises concerns about reproducibility and the ability to benchmark against alternative methods. 
Methods And Evaluation Criteria: The proposed methods appear well suited for the specific quantum control application, with clear justifications for the introduced constraints. However, broader ML contributions are limited, as the study primarily focuses on improving RL’s applicability to quantum systems rather than advancing RL methodologies themselves. The evaluation could be strengthened by benchmarking against standard RL methods for quantum applications, incorporating comparisons to prior approaches referenced in the literature. Theoretical Claims: The paper does not primarily focus on new theoretical advancements in RL or quantum computing. However, the mathematical formulations appear sound. Experimental Designs Or Analyses: The experimental setup is well explained for its intended quantum control application. However: - It is unclear whether the implementation supports standard RL frameworks, which would facilitate broader testing. - The paper would benefit from additional benchmarking against existing RL-based quantum approaches to contextualize performance improvements. Supplementary Material: Unfortunately, no code appendix was provided. I did not further check the written appendix. Relation To Broader Scientific Literature: While the paper provides a rigorous adaptation of RL for quantum control, it does not sufficiently situate its contributions within the broader field of RL for quantum applications. Prior work such as [1-4] should be discussed to highlight differences and advancements beyond existing methods.
Essential References Not Discussed: e.g., the following work on RL for Quantum Computing could have been considered: - van der Linde et al., 2023: RL-based quantum compilation benchmarking - Altmann et al., 2024: Challenges of RL in quantum circuit design - Kölle et al., 2024: RL environment for quantum circuit synthesis - Rietsch et al., 2024: RL for unitary synthesis of Clifford+T circuits Other Strengths And Weaknesses: Strengths: - A rigorous and well-explained adaptation of RL for quantum control. - Provides strong reasoning behind constraints and modifications. - Addresses a specific and well-motivated quantum control problem. Weaknesses: - The paper is heavily skewed towards quantum-specific adaptations, limiting its ML relevance for ICML. - Missing discussion of prior RL-based quantum computing research. - No code provided, reducing reproducibility and practical applicability. Other Comments Or Suggestions: If targeting ICML, consider emphasizing broader ML contributions beyond quantum-specific constraints. Questions For Authors: What do the authors mean by "interpretable quantum state dynamics" in this context? Do the authors plan to release their implementation, and if so, will it support standard RL libraries for benchmarking? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

## Reviewer vWkD

Thank you for recognising our rigorous adaptation of RL for quantum control and well-reasoned introduction of constraints.

### Weaknesses

> "Missing discussion of prior RL-based quantum computing research". [...] "... Prior work such as [1-4] should be discussed..."

A2.1: We thank the reviewer for highlighting these works; we will discuss them in the related work of the updated manuscript. We remark that these works do not address the same problem setting but instead focus on circuit-level quantum *compilation* (in a space of logical quantum gates), which is fundamentally different from the pulse-level *control* problem (in a space of physical control signals) that our work addresses. Hence, these works do not constitute approaches that we could benchmark against. We mention in the conclusion that expanding our pulse-level control to multiple sequential pulses (i.e. a circuit) would be an interesting extension.

> "No code provided, reducing reproducibility and practical applicability." [...] "The absence of released code raises concerns about reproducibility and the ability to benchmark against alternative methods."

A2.2: We acknowledge the reviewer's concern and are committed to reproducibility. To allow for a better review process, we have anonymously published the code under https://anonymous.4open.science/r/RL4qcWpc/README.md. We will release all code publicly upon acceptance, as stated in Section 5.0.1.

> "The paper would benefit from additional benchmarking against existing RL-based quantum approaches..."

A2.3: As pointed out in A2.1 above, the highlighted related works [1-4] do not address the same problem setting as our work. We are confident that our work includes empirical comparisons to all relevant baselines and politely ask the reviewer to specify if there were any missing comparisons.

> "It is unclear whether the implementation supports standard RL frameworks, which would facilitate broader testing."
A2.4: Our implementation supports any RL framework, as can be seen in the published code. To emphasise this, and to further motivate the choice of PPO, we have now conducted an ablation study on replacing PPO by DDPG [5] or TD3 [6]. Our implementation of PPO achieves fidelities of over 0.999 for the experimental setup described in Sec. 5.1, whereas both DDPG and TD3 achieve maximum fidelities of 0.995 and 0.994 and take 100x longer to reach >0.99 fidelity on the same GPU. See A4.1 in response to K5by for more details. > "Details on the specific RL algorithm employed and justification for its suitability to this problem are lacking." A2.5: Thank you for this feedback, we have added motivation for using PPO, an on-policy algorithm chosen for stability (less hyperparameter sensitivity) over DDPG [5] or TD3 [6]. Furthermore, we show that in our quantum control setting, PPO significantly outperforms alternatives, see A2.4 and A4.1 for more details. > "The paper is heavily skewed towards quantum-specific adaptations, limiting its ML relevance for ICML". A2.6: We believe that the paper's focus on physics-constrained RL for Quantum Control exactly fits the scope of ICML defined in the call for papers. Specifically, ICML calls for “Application-Driven Machine Learning [...] driven by needs of end-users in applications [...] such as physical sciences.” Also, see A4.1 for an ML specific benchmark. ### Questions > "What do the authors mean by "interpretable quantum state dynamics" in this context?" A2.7: It refers to the clear, interpretable time evolution of a quantum system under optimised control (e.g. minimal excited state population in a Lambda system), unlike complex, fast dynamics induced by many unphysical optimal control solutions. A plot contrasting these will be added to the revised manuscript. > "Do the authors plan to release their implementation... will it support standard RL libraries for benchmarking?" 
A2.8: As noted in A2.2, the code is now anonymously available and supports standard RL libraries, shown via ablations with DDPG [5] and TD3 [6] vs. PPO (see A2.4).

### Concluding remarks

We kindly ask the reviewer if we were able to address their concerns by (1) clarifying the difference to [1-4], (2) making our code available, (3) demonstrating adaptability through comparisons with other RL algorithms. In light of these clarifications and additional results, we politely ask the reviewer to consider updating their score, or to point out further concerns.

### References:

[1] van der Linde et al., 2023: RL-based quantum compilation benchmarking
[2] Altmann et al., 2024: Challenges of RL in quantum circuit design
[3] Kölle et al., 2024: RL environment for quantum circuit synthesis
[4] Rietsch et al., 2024: RL for unitary synthesis of Clifford+T circuits
[5] Lillicrap et al. 2016, 'Continuous Control with Deep Reinforcement Learning', ICLR 2016
[6] Fujimoto et al. 2018, 'Addressing Function Approximation Error in Actor-Critic Methods', ICML 2018
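The framework-agnostic claim in A2.4 amounts to the environment exposing the standard `reset`/`step` interface that PPO, DDPG, and TD3 implementations all consume. A minimal numpy-only illustration of such an interface (a toy two-level system driven to |1⟩; this is our generic sketch, not one of the paper's environments):

```python
import numpy as np

SIGMA_X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

class TwoLevelPulseEnv:
    """Toy environment: the agent picks a bounded Rabi amplitude per step;
    the terminal reward is the fidelity with the target state |1>.
    Any continuous-action RL algorithm can interact with this interface."""

    def __init__(self, n_steps=20, dt=0.2):
        self.n_steps, self.dt = n_steps, dt

    def reset(self):
        self.psi = np.array([1.0, 0.0], dtype=complex)  # start in |0>
        self.t = 0
        return np.concatenate([self.psi.real, self.psi.imag])

    def step(self, action):
        omega = float(np.clip(action, -1.0, 1.0))  # bounded control amplitude
        # Exact propagator for H = (omega/2) * sigma_x over one time step
        theta = 0.5 * omega * self.dt
        U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * SIGMA_X
        self.psi = U @ self.psi
        self.t += 1
        done = self.t >= self.n_steps
        reward = abs(self.psi[1]) ** 2 if done else 0.0  # terminal fidelity
        obs = np.concatenate([self.psi.real, self.psi.imag])
        return obs, reward, done, {}

# A constant pi-pulse (omega * n_steps * dt = pi) achieves unit fidelity:
env = TwoLevelPulseEnv()
env.reset()
for _ in range(env.n_steps):
    obs, reward, done, _ = env.step(np.pi / 4)
```

Because the environment only assumes this interface, swapping the learning algorithm (as in the PPO vs. DDPG vs. TD3 ablation above) requires no changes on the environment side.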
Summary: The paper proposes a physics-constrained reinforcement learning algorithm to explore physically realizable pulses for quantum control tasks. The constraint on the pulses ensures smooth transitions and low energies, which result in noise-robust pulses that may achieve higher fidelities. Comprehensive experiments are conducted on three quantum control tasks and three quantum architectures, demonstrating the effectiveness of the proposed method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There is no theoretical claim in the paper. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: The paper proposes a novel RL framework to generate more robust quantum control pulses, which is a hard task for quantum control and little discussed in the literature of quantum + AI. Essential References Not Discussed: No. Other Strengths And Weaknesses: Pros: 1. The framework considers a hard balance in quantum control theory: how to find robust pulses that drive the quantum system to a desired state with high precision. This is tackled by a simple but clever combination of constraint design and RL methods to achieve high robustness and high fidelity at the same time. 2. The experiments include many interesting cases with multiple architectures, demonstrating the general feasibility of the proposed method. The resulting pulses look impressive in terms of smoothness and pulse duration, both of which are essential in real experiments. 3. The training environment includes a simulation of the Lindbladian of the quantum system, which is precise but computationally demanding. The framework is optimized on the engineering side to maximize parallelism and speed up the training procedure. Cons: 1. The proposed method requires precise calibration of the target system, and after each calibration the RL algorithm needs to be rerun to obtain high-quality pulses.
I wonder, when integrated in a realistic system where calibrations are conducted routinely, how will the proposed RL algorithm perform (in the sense of run time and stability) in real devices. 2. The experiments in the paper are conducted with numerical simulations. For a technique targeting quantum control, it is essential to validate the method on real quantum devices. I understand that it might be hard for the authors to access the limited hardware resource, while I still want to know if the proposed method can be applied to accessible commercial quantum devices, i.e., IBM’s devices. 3. Following point 2, the constraint design in the framework is not convincingly justified in real device experiments. In this sense, the paper lacks an important discussion on the criteria for selecting these constraints. Though, I do not doubt that the included constraints represent important features of robust pulses. 4. Since the RL algorithm relies on a full simulation of the system, it is hard in its current form to scale up, i.e. to more than 10 qubits. The above weaknesses are a bit nitpicking and may be out of scope of the current paper. Overall, I enjoyed reading the paper and like its idea. It is well-executed research in the direction of AI for quantum science. Other Comments Or Suggestions: Minor comments: The second and third terms in (2) have the same structure, which I believe is a typo. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

## Reviewer sD3o

We appreciate your positive feedback on our work and are glad that our research in AI for quantum science was well received.

### Weaknesses

> "The proposed method requires precise calibration of the target system, and after each calibration the RL algorithm needs to be rerun to obtain high-quality pulses. I wonder, when integrated in a realistic system where calibrations are conducted routinely, how will the proposed RL algorithm perform (in the sense of run time and stability) in real devices."

A1.1: We thank the reviewer for this insightful question regarding the handling of system parameter changes after re-calibration. For small drifts, the inherent robustness of our learned policy (achieved via training in noisy environments) can avoid requiring retraining. For large deviations in system parameters, our method can be adapted by training an RL policy that conditions on the current system parameters. This policy is trained by sampling environments with varying system parameters, enabling it to adapt to novel system parameters when deployed. We would like to point out an additional scenario that we observed in our experiments, where the learned policy can be distilled into symbolic equations (Appendix Section E.1), as is the case for the learned transmon reset pulse. Requiring only four parameters, the learned pulse can be easily tuned in a physical experiment and does not require re-training.

> "The experiments in the paper are conducted with numerical simulations. For a technique targeting quantum control, it is essential to validate the method on real quantum devices... hard for the authors to access the limited hardware resource, while I still want to know if the proposed method can be applied to accessible commercial quantum devices, i.e., IBM’s devices."

A1.2: Unfortunately, IBM no longer allows pulse-level access to its devices.
For future work, we are investigating alternative hardware providers which offer pulse-level device access, including Rigetti [1] and IQM [2], to verify our solutions. Furthermore, we want to highlight that prior work found good agreement between numerical simulations and experiments for transmon qubits [3,4].

> "Following point 2, the constraint design in the framework is not convincingly justified in real device experiments. In this sense, the paper lacks an important discussion on the criteria for selecting these constraints..."

A1.3: Thank you for this remark. We refer to the introduced method as "physics-constrained", as all constraints are derived from real, physical limitations. First, we constrain signal bandwidth (smoothness) and signal area, reflecting the limited instantaneous bandwidth of electronics and signal components in experiments, and limitations in available signal power and duration. We implement these constraints via the reward function (Eq. 2), similar to the Lagrange multiplier technique introduced in [5]. Second, we constrain the policy to solutions that can be simulated within a predefined maximum number of solver steps. This constraint incorporates priors about the physical solution time scales into the algorithm (see lines 209-213), which incentivises adiabatic quantum state dynamics, yielding robust and interpretable solutions. Additionally, this constraint incentivises smoothness (requiring lower bandwidth) of signals, as smooth signals generally require fewer solver steps. Furthermore, the constraint on maximum solver steps significantly reduces computational demand. We will include this explanation in Sec. 4.1 of our revised manuscript to make it clearer for the reader. We will also add a figure in Appendix Sec. F.3 which shows how smoothing constraints translate to real signal bandwidth limitations using an FFT analysis.
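The planned FFT analysis can be illustrated in a few lines. In this hedged sketch, the white-noise signal, the Gaussian kernel width, and the 200 MHz cutoff are hypothetical stand-ins for the actual pulses and electronics limits discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-9                              # 1 ns sampling step (illustrative)
raw = rng.standard_normal(512)         # unconstrained, broadband control signal

# Gaussian smoothing kernel: the mechanism behind a smoothness constraint
sigma = 5.0                            # kernel std in samples (hypothetical)
taps = np.arange(-25, 26)
kernel = np.exp(-0.5 * (taps / sigma) ** 2)
kernel /= kernel.sum()
smoothed = np.convolve(raw, kernel, mode="same")

freqs = np.fft.rfftfreq(raw.size, d=dt)

def power_fraction_above(signal, f_cut=200e6):
    """Fraction of spectral power above a cutoff (200 MHz stands in for a
    typical instantaneous-bandwidth limit of control electronics)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum[freqs > f_cut].sum() / spectrum.sum()
```

Smoothing pushes essentially all spectral power below the cutoff: `power_fraction_above(smoothed)` is orders of magnitude smaller than `power_fraction_above(raw)`, which is the sense in which a smoothing constraint translates into a bandwidth limitation.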
> "Since the RL algorithm relies on a full simulation of the system, it is hard in its current form to scale up, i.e. to more than 10 qubits." A1.4: We agree with this observation. However, one could replace a full system simulation with an ML based emulator, or direct physical device sampling [6]. Given such improvements, our proposed method could scale to larger systems. We also want to highlight that control of smaller dimensional systems is an important and relevant research direction for advancing quantum technologies. > Minor comment Thanks for spotting this, the third term intentionally has the same structure but there is a typo $S(\Omega_i)$ should read $S(\Delta_i)$. ### References: [1] Rigetti Computing. (2025). Pulses and waveforms. pyQuil 4.16.1. https://pyquil-docs.rigetti.com/en/stable/quilt_waveforms.html [2] IQM. (2025). IQM Pulla 6.15. https://docs.meetiqm.com/iqm-pulla/ [3] Magnard, P., et. al. Physical Review Letters, 121(6), 060502. [4] Willsch, Dennis. arXiv preprint arXiv:2008.13490 (2020). [5] Bhatnagar, S., Lakshmanan, K. J Optim Theory Appl 153, 688–708 (2012). [6] Baum, Yuval, et al. PRX Quantum, vol. 2, no. 4, 2021, p. 040324. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. I have carefully read the other reviews and authors' responses. I believe most issues pointed by reviewers are addressed adequately. My score remains the same. The other reviews remind me of a paper [1] that is missing but worth mentioning in related work, where they use an RL framework to find control schemes for a transmon-cavity system for error correcting. [1] Sivak, Volodymyr V., et al. "Real-time quantum error correction beyond break-even." Nature 616.7955 (2023): 50-55. --- Reply to Comment 1.1.1: Comment: > The other reviews remind me of a paper [1] that is missing but worth mentioning in related work, where they use an RL framework to find control schemes for a transmon-cavity system for error correcting. 
We thank the reviewer for the comment and for highlighting this paper. It is related to some of the papers mentioned by reviewer Vwkd, and optimises parameters at the circuit level, whereas our work addresses pulse-level control; the real-time feedback aspect is related to the results discussed in Sec. 5.4. We now discuss this paper in the related work section. Furthermore, in the conclusion, we state that extending our work to several concatenated pulses (i.e. circuit-level control) constitutes an interesting direction for future work.
Online Clustering of Dueling Bandits
Accept (poster)
Summary: The work studies the integrated setting of clustering and dueling bandits. The clustering bandit setting assumes the arms are clustered, where each cluster shares the same reward function. The dueling bandit setting assumes that the learner chooses two arms in each iteration and obtains the preference feedback as reward. The paper proposes algorithms for two variants of the problem, the linear and neural contextual settings, together with regret bound analysis. Experiments under both synthetic and real-world data are conducted to verify the empirical effectiveness of the proposed approach. Claims And Evidence: - Originality: It is claimed that the work is the first to integrate the settings of clustering and dueling bandits. To my knowledge, the claim is true. - Theoretical guarantee: The proposed approach is accompanied by regret bound analysis with complete proofs. On the other hand, the optimality of the algorithm is not sufficiently justified, in particular w.r.t. the information on the clusters, such as the cluster number. - Experiments: In my view, the experimental results are solid. Methods And Evaluation Criteria: - The proposed algorithm utilizes relatively standard techniques to handle clustering and dueling bandits. In my view, the method makes sense and is technically sound. - The major performance metric is the regret bound, which is a standard one for online learning. Theoretical Claims: I have checked the proofs, which are correct in my view. Experimental Designs Or Analyses: The experimental part is solid. Supplementary Material: I have checked the proofs in the appendix, while the attached code is not reviewed. Relation To Broader Scientific Literature: The work involves the clustering and dueling bandit learning scenarios, which have both received thorough study. The work mainly studies the integration of both settings.
From a technical perspective, the paper utilizes relatively standard techniques in algorithm design and theoretical analysis from both scenarios. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - The paper is technically sound. The proposed algorithm builds upon mature techniques. The technical proofs are complete. - The paper is clearly written. Weaknesses: - The technical challenge of integrating the clustering and dueling bandits lacks clear explanations. It would not be satisfactory to integrate the two settings without discovering more in-depth technical insights. - From a practical perspective, it remains unclear why the proposed scenario is important in practice. In the introduction part, the example of LLMs is discussed, while it remains unclear whether the clustering assumption widely appears in real situations. Other Comments Or Suggestions: - It would be better to further discuss the optimality of the order of $m$ in Theorems 4.1 and 4.2. In the current results, if $m$ is very large, such as achieving the order of $O(T)$, the regret bound would be significantly worse, which can be worse than without considering the clustering assumption and only assuming the adversarial setting. So I wonder whether there is room for improvement on the term of $m$. - It would be better to introduce real-world datasets that natively follow the clustering and dueling assumptions. Questions For Authors: 1. It would be desirable to better highlight the technical challenges in algorithm design and theoretical analysis. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the valuable advice. Our responses are as follows. We will add the discussions. **Q1:** Technical challenges: **A1:** **Novel Algorithm Design:** - Cluster Estimation under Dueling Feedback: Our setting does not allow direct use of prior cluster vector estimation methods in classical CB with numerical feedback. With numerical feedback, closed-form ridge regression enables simple updates of user-level covariance matrices and regressors, followed by cluster vector computation using aggregated statistics (e.g., Lines 6–8 in Algo.1 of Wang et al., 2024a). In contrast, dueling feedback lacks closed-form MLE, making direct adaptation infeasible. To address this, we maintain a history buffer $\mathcal{D}_t$ (e.g., Line 10, Algo.1) recording user indices, arm choices, and dueling feedback. We then filter data for users in inferred clusters and estimate cluster vectors via MLE (e.g., Line 6, Algo.1). - Carefully Designed Edge Deletion Thresholds: Threshold design under dueling feedback is more intricate than in absolute feedback settings and relies on refined calculations (Lemmas B.2 and C.5) to ensure correctness. In the neural setting, this is further complicated by the non-linear reward function. The required Cluster Separation assumption (Assumption 2.7) deviates from those in prior CB works, making its integration into the edge deletion rule for CONDB particularly challenging. **Novel Theoretical Analysis**: We highlight the main technical challenges for CONDB as an example: - Incorporating the novel Cluster Separation assumption (Assumption 2.7) for CONDB into the analytical framework of linear CB poses substantial difficulties. We address this by leveraging a linear approximation of the neural net as the analytical basis (Lemmas C.1 and C.2). - Deriving $T_0$ after which all users are correctly clustered (Lemma C.5), is particularly challenging due to the **non-linear and dueling nature of the reward**. 
Classical CB analysis bounds the L2 distance between true and estimated **linear parameters**, which is not applicable here. To overcome this, we use neural network linearization and rigorously analyze the resulting approximation error (Lemmas C.3, C.4, and C.5). - A key challenge in proving a sub-linear bound lies in the proper use of the effective dimension $\widetilde d$ (Eq.(16)). Direct extensions of Lemma B.5 from the COLDB (linear rewards) analysis would yield a bound linearly dependent on $p=O(m_{\text{NN}}^2)$, which becomes vacuous as $m_{\text{NN}}$ grows polynomially with $T$. To address this, we adopt techniques from neural bandits to reformulate Lemma B.5 for CONDB, allowing us to express the regret in terms of $\widetilde d$ (Lemma C.9). **Q2:** Setting Importance and LLM applications: **A2:** - Motivation: Leveraging user relations to accelerate learning is well-established in CB literature (e.g., Gentile et al., 2014), particularly in recommender systems where users naturally form groups [1]. Preference-based feedback is also more aligned with human behavior and common in applications like RLHF, motivating the use of user relations under dueling feedback. - Extension to More General Settings: Some works have explored similar-but-not-identical user preferences under numerical feedback (Ban & He, 2021b; Wang et al., 2024b). As the first to study preference feedback in CB, we focus on standard assumptions to address core challenges. Extending to those settings is left for future work. - LLM Applications: Our methods can directly extend recent work on neural dueling bandits for prompt optimization [2] to the multi-user setting. **Q3:** Optimality in $m$ and lower bounds: **A3:** - Following Wang et al. (2024a) and Saha (2021), we can derive a lower bound of $O(\sqrt{dmT})$ for the linear case. Algo.1 achieves an upper bound of $O(d\sqrt{mT})$, which is tight up to $\sqrt d$—a common gap in linear bandits (e.g., LinUCB).
Thus, our upper bound is optimal in $m$ for the linear case. For the neural case, lower bounds remain open even in the single-user setting. However, we hypothesize that our bound is also optimal in $m$ in the neural setting, as clustering naturally introduces a $\sqrt m$ scaling over the single-user case. - While adversarial bandit algorithms can achieve $O(\sqrt T)$ regret, **the regret definition differs**: adversarial regret compares to the best fixed arm, while stochastic regret compares to the optimal dynamic policy. Direct comparisons are not meaningful. **Q4:** Real-world data natively following clustering and dueling assumptions: **A4:** Our dataset and clustering setup follow standard practice in prior CB works using common benchmarks. We added a real-world dataset (Yelp), where COLDB continues to outperform baselines (see [figure](https://postimg.cc/QHS7rsbP)). We'll explore other datasets in future work. Reference [1] ClustKNN: a highly scalable hybrid model- & memory-based CF algorithm, KDD 2006. [2] Prompt Optimization with Human Feedback, 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses, especially for the detailed explanations on the technical contributions in the theoretical analysis. I will further verify and take them into consideration for final evaluations. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and for taking the time to acknowledge our detailed responses. We deeply appreciate your willingness to further verify our explanations and consider them in the final evaluation. We hope that our responses could strengthen your confidence in our work.
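To illustrate the MLE-based cluster-vector estimation under dueling feedback described in A1 above, here is a minimal sketch under a Bradley-Terry preference model (the function names, the plain gradient-ascent solver, and all hyperparameters are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def estimate_cluster_vector(duels, d, lr=0.1, steps=500, lam=0.01):
    """Regularized MLE of a shared preference vector from dueling feedback.

    duels: list of (x1, x2, y) with y = 1 if arm x1 was preferred over x2.
    Bradley-Terry model: P(x1 preferred) = sigmoid(theta @ (x1 - x2)).
    Since no closed form exists (unlike ridge regression with numerical
    feedback), the MLE is found iteratively by gradient ascent.
    """
    theta = np.zeros(d)
    for _ in range(steps):
        grad = -lam * theta  # L2 regularization keeps the MLE bounded
        for x1, x2, y in duels:
            z = x1 - x2
            grad += (y - sigmoid(theta @ z)) * z  # log-likelihood gradient
        theta += lr * grad / len(duels)
    return theta
```

In a quick synthetic check, the estimate aligns closely with the true preference direction; such per-user or per-cluster estimates are the quantities an edge deletion rule would compare across users.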
Summary: This paper primarily investigates dueling bandits in an online clustering setting, where clusters of arms share a reward function. It examines cases where the reward function is linear or modeled by a deep neural network. The authors propose algorithms with sub-linear regret bounds and demonstrate their effectiveness through experiments. Claims And Evidence: The claim of achieving sub-linear regret bounds is substantiated by theoretical proofs. Methods And Evaluation Criteria: The proposed method maintains the clustering structure of users and estimates the reward function using information from users within the same cluster, which is appropriate for the clustering bandits problem. Theoretical Claims: I have reviewed the proof, which appears to be correct. Experimental Designs Or Analyses: The experimental designs are reasonable, and the experimental results are promising. Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: Clustering bandits have been extensively studied in the literature, while this paper is the first to explore clustering bandits with dueling feedback. Essential References Not Discussed: The references sufficiently cover the relevant literature. Other Strengths And Weaknesses: Strengths: The paper is well-organized, and the proposed algorithm is straightforward to comprehend. The authors demonstrate that the algorithm can achieve sub-linear regret bounds and provide clear intuition for the two terms in the regret bound. The experimental results further confirm the algorithm's effectiveness. Weaknesses: It is unclear whether the regret bounds are nearly optimal, as no lower bounds have been established. Other Comments Or Suggestions: I would suggest elaborating on the technical challenges involved in the algorithm design and its analysis beyond the application of known results from dueling bandits and clustering bandits. 
Questions For Authors: Is it necessary for the algorithm to know the number of clusters in advance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your positive feedback and valuable suggestions, our responses are as follows. **Q1:** Lower bounds: **A1:** Thanks for the helpful suggestion. Based on prior techniques (Wang et al., 2024a; Liu et al., 2022) and the single-user dueling bandit lower bound (Saha, 2021), we can derive a lower bound of $O(\sqrt{dmT})$ for the linear setting. Our Algo.1 achieves an upper bound of $O(d\sqrt{mT})$, which is tight up to a $\sqrt d$ factor—a common gap in linear bandits (e.g., LinUCB). Thus, our upper bound is tight and optimal in $m$ for the linear case, and we will include this lower bound in the final version. For the neural case, even the single-user non-dueling setting lacks a known lower bound, making it an open problem. However, we hypothesize that our upper bound is also optimal in $m$ for the neural setting, as clustering naturally introduces a $\sqrt m$ scaling compared to the single-user case. **Q2:** Technical challenges involved in the algorithm design and its analysis: **A2:** **Novel Algorithm Design:** - Cluster Estimation under Dueling Feedback: Our setting does not allow direct use of prior cluster vector estimation methods in classical CB with numerical feedback. With numerical feedback, closed-form ridge regression enables simple updates of user-level covariance matrices and regressors, followed by cluster vector computation using aggregated statistics (e.g., Lines 6–8 in Algo.1 of Wang et al., 2024a). In contrast, dueling feedback lacks closed-form MLE, making direct adaptation infeasible. To address this, we maintain a history buffer $\mathcal{D}_t$ (e.g., Line 10, Algo.1) recording user indices, arm choices, and dueling feedback. We then filter data for users in inferred clusters and estimate cluster vectors via MLE (e.g., Line 6, Algo.1). 
- Carefully Designed Edge Deletion Thresholds: Threshold design under dueling feedback is more intricate than in absolute feedback settings and relies on refined calculations (Lemmas B.2 and C.5) to ensure correctness. In the neural setting, this is further complicated by the non-linear reward function. The required Cluster Separation assumption (Assumption 2.7) deviates from those in prior CB works, making its integration into the edge deletion rule for CONDB particularly challenging. **Novel Theoretical Analysis**: We highlight the main technical challenges for CONDB as an example: - Incorporating the novel Cluster Separation assumption (Assumption 2.7) for CONDB into the analytical framework of linear CB poses substantial difficulties. We address this by leveraging a linear approximation of the neural net as the analytical basis (Lemmas C.1 and C.2). - Deriving $T_0$ after which all users are correctly clustered (Lemma C.5), is particularly challenging due to the **non-linear and dueling nature of the reward**. Classical CB analysis bounds the L2 distance between true and estimated **linear parameters**, which is not applicable here. To overcome this, we use neural network linearization and rigorously analyze the resulting approximation error (Lemmas C.3, C.4, and C.5). - A key challenge in proving a sub-linear bound lies in the proper use of the effective dimension $\widetilde d$ (Eq.(16)). Direct extensions of Lemma B.5 from the COLDB (linear rewards) analysis would yield a bound linearly dependent on $p=O(m_{\text{NN}}^2)$, which becomes vacuous as $m_{\text{NN}}$ grows polynomially with $T$. To address this, we adopt techniques from neural bandits to reformulate Lemma B.5 for CONDB, allowing us to express the regret in terms of $\widetilde d$ (Lemma C.9). **Q3:** Is it necessary for the algorithm to know the number of clusters in advance? 
**A3:** No, our algorithms can adaptively cluster the users during the learning process; the number of clusters $m$ does not need to be known.
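The adaptive clustering in A3 can be sketched as a CLUB-style graph procedure (the names and the specific deletion threshold are illustrative assumptions, not the exact edge deletion rule of Algo.1): start from a complete graph over users, delete an edge whenever the gap between two users' estimated preference vectors exceeds their combined confidence widths, and read the inferred clusters off the connected components, so the number of clusters emerges on its own.

```python
import numpy as np

def adaptive_clusters(theta_hat, conf):
    """Infer clusters from per-user estimates without knowing their number.

    theta_hat: list of per-user estimated preference vectors.
    conf: per-user confidence widths (shrinking as more feedback arrives).
    """
    n = len(theta_hat)
    adj = [[i != j for j in range(n)] for i in range(n)]  # complete graph
    for i in range(n):
        for j in range(i + 1, n):
            # Delete the edge once the parameter gap is statistically significant.
            if np.linalg.norm(theta_hat[i] - theta_hat[j]) > conf[i] + conf[j]:
                adj[i][j] = adj[j][i] = False
    clusters, seen = [], set()
    for s in range(n):  # connected components via DFS
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in range(n):
                if adj[u][v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        clusters.append(sorted(comp))
    return clusters
```

Early on, large confidence widths keep most edges alive (one big cluster); as estimates tighten, spurious edges are removed and the true clusters separate.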
Summary: The paper introduces the first algorithms for clustering users in dueling bandit settings where feedback is based on preferences between pairs of items rather than absolute numerical rewards. The authors propose two novel approaches: - Clustering of Linear Dueling Bandits (COLDB) for linear reward functions - Clustering of Neural Dueling Bandits (CONDB) for non-linear reward functions modeled by neural networks. Both algorithms adaptively group users with same preferences into clusters, allowing collaboration by sharing data within clusters. The authors provide theoretical analysis showing that both algorithms achieve sub-linear regret bounds that improve when more users belong to the same cluster. Their analysis quantifies the benefits of cross-user collaboration in preference-based feedback scenarios. Experimental results on both synthetic and real-world (MovieLens) datasets demonstrate that the proposed methods outperform independent dueling bandit baselines, validating the theoretical findings. Claims And Evidence: Most claims are supported by theoretical analysis and empirical evaluation, but with some limitations: The theoretical regret bounds for both algorithms are well-established with detailed proofs. The authors show that when clustering works correctly, the regret scales with the number of clusters (m) rather than the number of users (u), providing a formal justification for clustering benefits. However, the empirical evidence is somewhat limited. While the experiments show improved performance over independent baselines, the paper only evaluates two datasets with a small number of simulation settings. The experimental section lacks ablation studies to understand the contribution of different algorithm components, sensitivity analyses for key parameters, and comprehensive evaluations across diverse scenarios. 
The claim of correct clustering convergence is supported theoretically (Lemma B.2 for COLDB and Lemma C.5 for CONDB), but these critical lemmas are barely mentioned in the main paper. Additionally, there is confusion around the edge removal mechanism and its notation, as the function 'f' is not clearly defined when discussing step 8 in both algorithms yet plays a central role in the proofs. Not sure if this is the right place to mention: - The problem setup of same preferences seems quite restricted and may limit real-world applicability. A setup with similar preferences (same preferences with some error tolerance) might be more lucrative. - The paper repeatedly refers to "cross-user collaboration," "user collaboration," and similar terms, but this terminology seems slightly misleading. What the paper actually describes is centralized information pooling and model sharing managed by a single algorithm, not active communication between users. Users are passive participants, and all "collaboration" is implicitly facilitated through the central algorithm. The authors should clarify this distinction to avoid misrepresenting the nature of their contribution, especially since true cross-user communication would involve different theoretical and practical challenges. - The algorithms assume a central entity with access to all user data, uniform user arrivals, discrete and fixed clusters of users, and the ability to run potentially expensive computations for every interaction. Many practical applications, especially those involving privacy concerns or distributed systems, would violate these assumptions. The paper would be strengthened by discussing these limitations and how the approach might be adapted to more realistic constraints. Methods And Evaluation Criteria: The proposed methods -- CONDB and COLDB -- build upon established techniques in the bandits literature and make sensible extensions to the preference-feedback setting.
The graph-based clustering approach with adaptive edge deletion is reasonable for identifying user groups with the ***same*** preferences. However, there are substantial concerns about practicality. The paper's motivation mentions scenarios with large numbers of options, but CONDB requires training neural networks twice in every round of the for-loop, which is computationally expensive and not scalable for real-world applications. This mismatch between motivation and implementation raises questions about real-world applicability. The evaluation criterion of cumulative regret is standard and appropriate for the bandit setting, but the experimental evaluation is limited in several ways: - Only two cluster configurations (m=2 and m=5) are tested, which doesn't fully validate the algorithms' performance across varying clustering structures. - The uniform user arrival assumption in the theoretical analysis is unlikely to hold in real applications. At least for the simulations, a broader user arrival strategy could be showcased. - The paper lacks comparison against additional baselines beyond independent dueling bandits. - No evaluation of computational efficiency is provided, which is particularly concerning for the neural network approach. (minor point) Theoretical Claims: I skimmed over the main theoretical results (Theorems 4.1 and 4.2) and their supporting lemmas. I gave more focus to Lemmas B.2 and C.5 since they seem central to the paper. The proofs appear technically sound in deriving the regret bounds, but there are notable limitations: - The paper provides upper bounds on regret but no lower bounds. Without lower bounds, it's unclear whether the derived regret rates are optimal in terms of dimensionality dependence. - The edge deletion mechanism's theoretical guarantees (Lemmas B.2 and C.5) don't fully address how the algorithms recover from incorrect edge deletions that may occur early in the process when estimates have high uncertainty.
In the worst case of incorrect deletions (early in the algorithmic run or outside the high-probability zone), what is the worst-case behavior to expect? - The notation for the function 'f' used in the edge deletion criteria (step 8 in both algorithms) is inconsistently defined between the main text and proofs, creating confusion about this critical component. - For CONDB, the NTK approximation in Lemma C.1 requires neural networks to be extremely wide (equation 66 specifies polynomial dependencies on multiple parameters), raising concerns about theoretical vs. practical guarantees. - The regret decomposition in equation (82) assumes correct clustering but doesn't fully account for potential cascading effects from early misclassifications. Experimental Designs Or Analyses: Potential Concerns: While the simulations serve as a great PoC with a limited setup, I would add the following critique: - The experiments only test two cluster configurations (m=2 and m=5) in the synthetic setting, which doesn't sufficiently validate the algorithms' performance across varying clustering structures. - The neural network experiments use only a simple square reward function (f(x) = (θᵀx)²). Testing more complex non-linear functions would better demonstrate CONDB's capabilities. At this point, the simulations seem to serve as a PoC rather than to verify that the algorithm is behaving as expected. - No simulation results focusing on the convergence of the clusters. - There's no analysis of how algorithm parameters (like edge deletion thresholds) affect performance or clustering accuracy. - The paper doesn't report confidence intervals or statistical significance of the results, making it difficult to assess the reliability of the performance differences. - There's no evaluation of computational efficiency, which is especially concerning for the neural network-based approach that requires retraining models repeatedly.
- The setup may be unrealistic for practical applications - particularly CONDB's requirement to run neural networks twice per round, which would be computationally prohibitive in many real-world settings. Addressing any of the above points would be a great help to any reader. Supplementary Material: I skimmed through the math of the supplementary material. Major parts include: - Complete algorithm details for CONDB (Algorithm 2) - Proofs of all lemmas and theorems mentioned in the main paper (Gave particular focus to B.2 and C.5) - Auxiliary definitions needed for understanding the neural network analysis - Detailed explanation of the NTK-based analysis for neural networks Relation To Broader Scientific Literature: The paper appears to largely combine existing approaches from different areas rather than introducing fundamentally new techniques. It bridges previously separate research areas: - The clustering of bandits approach builds on the work of Gentile et al. (2014) -- the CLUB algorithm, which introduced graph-based user clustering for standard (non-dueling) contextual bandits. - The dueling bandits aspect draws from recent work on contextual dueling bandits, which established theoretical foundations for linear dueling bandits. - The CONDB algorithm integrates ideas from the neural bandits literature and recent work on neural dueling bandits (Verma et al., 2024). The paper cites relevant literature and contributes by combining these existing approaches. Essential References Not Discussed: The paper has a comprehensive literature review covering most relevant work. There are a few pluses and minuses, which can happen, but that is largely an author's choice. Other Strengths And Weaknesses: **Strengths:** - The paper addresses a practically relevant problem of enabling collaboration among users in preference-based feedback settings. - The theoretical analysis provides insight into the benefits of clustering in dueling bandit settings.
- The extension to neural networks for handling non-linear reward functions acknowledges the need for more expressive modeling of application setups. **Weaknesses:** - The paper appears to derive a lot from existing techniques in different areas rather than introducing fundamentally new algorithmic approaches. - The edge deletion mechanism lacks sufficient analysis of its robustness, particularly in early stages when estimates have high uncertainty. - The practical implementation of CONDB might face computational challenges that aren't addressed. Training neural networks twice per round is typically expensive for real-world applications. - The motivation discussing large-scale recommendation systems with many options conflicts with the proposed methodology's computational requirements. - Missing lower bounds make it difficult to assess whether the derived regret rates are optimal. - The experimental section lacks the depth and breadth needed to fully validate the practical performance of the proposed algorithms. - The NTK-based theoretical guarantees require extremely wide neural networks. At least a simulation which shows it works with smaller nets would go a long way. Other Comments Or Suggestions: **Suggestions/ Comments:** - Mentioning Lemmas B.2 and C.5 (on edge deletion) in the main text would be very beneficial. These are critical for algorithm correctness but barely mentioned. - Would it be possible to clarify the notation for the threshold function 'f' used in the edge deletion criteria, as it is inconsistently defined? - The introduction mentions applications to prompt optimization for large language models. This is a very interesting use case but isn't explored further. - Ablation studies (this would include other non-linear reward structures) to understand the contribution of different algorithm components would be a crucial addition. - Please provide confidence intervals or statistical significance for experimental results.
Questions For Authors: Would like the authors to expand on just one point: - Your paper frames the approach as enabling "cross-user collaboration," but all information sharing is mediated through a central algorithm with no direct user-to-user communication. How would your approach change if users could only share limited information, had privacy constraints, or were distributed across different systems without centralized coordination? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable suggestions. Our responses are as follows. We will incorporate them after revision. **Q1**: Lemma B.2, C.5, and more robust deletion: **A1**: We will emphasize these lemmas in the main text. Our edge deletion mechanism and analysis follow prior clustering of bandits (CB) works. For stronger early-stage robustness, the algorithms can be modified to restart with a complete graph each round and re-check edges involving previously served users. This ensures correct clustering w.h.p. after $T_0$, mitigating early-stage misclusterings. **Q2:** Definition of f: **A2**: We have defined f in the Input of Algo.1 (top of page 6); it is used in the proof without being restated. We will redefine f in the proof. **Q3:** Setting of similar preference in clusters: **A3:** As the first to study preference feedback in CB, we focus on standard assumptions to address its core challenges. While similar-preference settings with numerical feedback have been studied (Ban & He, 2021b; Wang et al., 2024b), extending our results to that setting is an independent direction which we leave for future work. **Q4:** Terminology of "user collaboration" and the question about the limited shared information setting: **A4**: - We will replace “user collaboration” with clearer terms like “user relation” or “user similarity” as per your advice. - For scenarios without centralized coordination or with limited shared information, these fall within the scope of federated bandits. Since our work follows the standard CB setting, such extensions are beyond our current scope, but we view this as a promising future direction. **Q5**: Assumptions (particularly uniform arrival): **A5**: Our assumptions follow standard practice in CB and are used solely for theoretical analysis. The uniform arrival assumption can be relaxed to general distributions (Lines 161–164).
**Q6:** Computation of CONDB: **A6:** Even in the baseline where each user runs NDB independently, an NN must still be trained per iteration—yet this incurs higher regret (scaling with $\sqrt u$). Our CONDB trades off computation for better performance. To reduce computational overhead, we can adopt batched training strategies (e.g., "Batched Neural Bandits" (Gu et al., 2024)), updating the NN only every few iterations. **Q7:** Technical challenges: **A7:** The dueling (and neural) feedback introduces new technical challenges to the classic CB study. Please kindly refer to our **A1 in response to Reviewer AU83** for details. **Q8:** Lower bounds: **A8:** Following Wang et al. (2024a) and Saha (2021), we can derive a lower bound of $O(\sqrt{dmT})$ for the linear case. Algo.1 achieves an upper bound of $O(d\sqrt{mT})$, which is tight up to $\sqrt d$—a common gap in linear bandits (e.g., LinUCB). So our upper bound is tight and optimal in $m$ for the linear case. For the neural case, lower bounds remain open even in the single-user setting. However, we hypothesize that our bound is also optimal in $m$ in the neural setting, as clustering naturally introduces a $\sqrt m$ scaling over the single-user case. **Q9:** Experiments: **A9**: Our main contribution is theoretical, but we have added experiments as per your advice to support our findings. Due to time constraints, we leave more extensive experiments as future work. **(1) Only two datasets**: We added an experiment with the Yelp dataset (see [figure](https://postimg.cc/QHS7rsbP)). **(2) Ablation for m**: We added an additional value of m=4 (see [figure](https://postimg.cc/qg74j037)); the results are consistent. **(3) Non-uniform arrival**: We included experiments under non-uniform arrivals, which show consistent performance gains (see [figure](https://postimg.cc/pyr3wNjd)). The user arrival probabilities are randomly sampled from a Dirichlet distribution with a concentration parameter of 10.
**(4) Baselines beyond independent dueling bandits**: Prior CB methods do not handle preference feedback, so direct comparisons are not applicable. Second, following standard CB practice, it is common to use independent per-user algorithms as baselines when introducing a new setting. **(5) More complex non-linear functions**: We tested a more complex function: $(\theta^T x)^4 + \cos(\theta^T x)$. CONDB still consistently outperforms the baseline (see [figure](https://postimg.cc/1gpyVkBS)). **(6) Confidence intervals**: We have indeed included confidence intervals (Lines 369-371 and the figures). **Q10:** NTK: **A10:** The NTK assumption follows standard practice in neural bandit theory and is used only for analysis; in practice, very wide networks are not needed. Our experiments indeed use small NNs—1 hidden layer with 32 nodes—which already achieve strong performance. **Q11:** LLM application: **A11:** The recent work "Prompt Optimization with Human Feedback" uses neural dueling bandits for prompt optimization. Our methods can be directly applied to extend their method to the multi-user setting. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their comments. I have increased my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for increasing your score! We are pleased to see that our responses addressed your concerns, and we greatly appreciate the constructive feedback you provided. We will incorporate your suggestions to further strengthen and improve our paper.
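As a small sketch of the synthetic setup in (5) (only the reward formula comes from the rebuttal; the Bernoulli feedback-generation step and all names are our illustrative assumptions), dueling feedback under the more complex non-linear reward could be generated as:

```python
import numpy as np

def latent_reward(theta, x):
    """The more complex synthetic reward from (5): (theta^T x)^4 + cos(theta^T x)."""
    z = theta @ x
    return z ** 4 + np.cos(z)

def dueling_feedback(theta, x1, x2, rng):
    """Bernoulli preference: P(x1 preferred) = sigmoid(r(x1) - r(x2))."""
    p = 1.0 / (1.0 + np.exp(-(latent_reward(theta, x1) - latent_reward(theta, x2))))
    return int(rng.random() < p)
```

Feeding such preference bits (rather than numerical rewards) to the learner is what distinguishes the dueling setting evaluated here from classical clustering of bandits.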
Modeling Multi-Task Model Merging as Adaptive Projective Gradient Descent
Accept (poster)
Summary: This paper views model merging from a multi-task learning angle. It designs an adaptive projective gradient descent method that tries to minimize the gap between the merged model and individual models, subject to the constraint of retaining shared knowledge. Specifically, the method only uses gradients in the orthogonal direction of the shared space of task vectors. Experiments show performance improvement compared to baseline methods. ## update after rebuttal As discussed during the rebuttal, I generally support the acceptance of this paper despite its performance gap issue. The proposal should be helpful for some other researchers working in this field. Claims And Evidence: The main goal of the paper, which is ambitious, is "ensuring the merged model performs comparably to task-specific models on respective tasks". This is not well supported by the experimental results. Based on Tables 1, 2, and 3, the model merging proposal does not maintain the same level of performance as the individually trained ones, and is even worse than multi-task learning in many cases. Theoretically, it is also unclear why "only take gradient steps in the direction orthogonal to the shared space" (Line 68) can help us achieve this ambitious goal. The argument is not convincing. Methods And Evaluation Criteria: The proposed method needs more justification to help me understand why it can help keep task-specific information while model merging. The experimental evaluations are diverse and the proposal shows better performance compared to previous model merging methods. Theoretical Claims: The paper does not provide much theoretical evidence. Experimental Designs Or Analyses: The experimental designs are satisfactory, comparing the proposal with SOTA methods and analyzing different modules of the method. Supplementary Material: No. Relation To Broader Scientific Literature: This paper is related to many works on model merging and multi-task learning. 
Essential References Not Discussed: I am not aware of closely related works that were not discussed in the paper. Other Strengths And Weaknesses: **Other strengths:** - This paper is interesting and shows promising performance compared to previous methods based on the experimental results. - The manuscript is well-written, with a proper discussion of previous works. **Other weakness:** - The connection between the motivation (keeping task-specific information) and the method is not very clear. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review and detailed comments. We hope the following discussion can address your concerns! ___ > Q1: Based on Tables 1, 2, and 3, the model merging proposal does not maintain the same level of performance as the individually trained ones, and is even worse than multi-task learning in many cases. A1: Transfer learning has driven the proliferation of fine-tuned models, **but deploying separate models for each task creates significant storage and computational burdens.** While multi-task learning could address this, it involves costly training and simultaneous access to all tasks. Additionally, determining the optimal data mixture for effective multi-task training can be complex and resource-intensive. Model merging addresses these challenges by **compressing** task-specific models without requiring access to training data (due to privacy or copyright constraints). The performance gap between the merged model and individual models or multi-task learning is an inherent constraint, as merging multiple trained models into a single model occurs without the benefit of additional costly training. While previous methods focus on alleviating conflicts between tasks, our approach takes a more direct path by establishing the minimization of the gap between the merged model and individual models as our explicit optimization objective (Line 24). By formulating and effectively solving this as a data-free constrained optimization problem, we achieve significant performance improvements. On ViT-L/14, our method reaches 92.6% performance, approaching the 93.5% achieved by multi-task learning—a substantial narrowing of the gap. We have revised the description of model merging requirements in Line 18, and greatly appreciate your suggestion. ___ > Q2: Theoretically, it is also unclear why "only take gradient steps in the direction orthogonal to the shared space" (Line 68) can help us achieve this ambitious goal. The argument is not convincing.
A2: The optimization objective in Eq. (5) promotes orthogonality between task vectors to mitigate conflicts, **while multi-task learning similarly emphasizes shared representations**. Parameters between similar tasks can be shared (e.g., applying the MNIST task vector improves accuracy on SVHN). Therefore, we propose constructing a shared subspace $S_{share}$ to preserve common representations. By constraining task vector optimization to reduce updates along $S_{share}$, we maintain shared knowledge while minimizing the gap for each task as defined in Eq. (5). Ablation studies demonstrate a 3.5% improvement with $S_{share}$. Table 7 presents a comparison of different gradient directions, revealing dataset-specific performance variations. Our method achieves significant improvements on the DTD dataset while showing decreased performance on SVHN. This pattern stems from DTD's reliance on rich textural features that are preserved in $S_{share}$. In contrast, SVHN's visual representations differ substantially from other tasks, making the primary components in $S_{share}$ less suitable. This observation is further validated by examining the performance gap between pre-trained and fine-tuned models: SVHN exhibits the lowest pre-trained performance (31.4%) but achieves remarkable results after fine-tuning (97.5%), indicating its strong dependence on task-specific features. In summary, our approach effectively preserves shared knowledge across tasks while achieving optimal overall performance. ___ > Q3: The connection between the motivation (keeping task-specific information) and the method is not very clear. A3: To isolate task-specific information, the task vector is defined as $\tau_i = \theta_i - \theta_0$ to capture unique characteristics for each task. While preserving task-specific information through simple vector addition is straightforward, the challenge in model merging lies in managing conflicts between multiple tasks. 
This challenge becomes evident in Figure 1, which demonstrates how performance consistently declines across all merging methods as the number of tasks increases, directly reflecting increased task conflicts. As shown in Eq. (3), we measure the gap between the merged model and individual models **in terms of task-specific losses**. To alleviate conflicts, we introduce a modification vector $\Delta$ for each task vector. This leads to our optimization objective in Eq. (4), which aims to achieve optimal cross-task performance by optimizing $\Delta$. Through this optimization process, the merged model approximates the behavior of task-specific models while effectively resolving conflicts. In short, by minimizing our proposed loss function, we ensure the merged model preserves essential task-specific information. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing a detailed rebuttal. However, my concerns mentioned above were not solved, including the performance gap and the theoretical advantage of the proposal. Therefore, I decided to maintain my original ratings. --- Reply to Comment 1.1.1: Comment: Thank you again for your thorough review. We incorporate your constructive suggestions to better explain our method. Regarding concerns about the performance gap, please refer to our recent discussion with Reviewer `QBR6`. We acknowledge that theoretical advantage is not our primary contribution. Our paper directly models the multi-task model merging problem and empirically validates our motivation through experimental evidence.
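The gradient projection described in A2 above (building $S_{share}$ via SVD and keeping only the gradient component orthogonal to it) can be sketched in a few lines. This is a minimal NumPy illustration under assumed names and an assumed subspace rank $k$, not the authors' implementation:

```python
import numpy as np

def project_out_shared(grad, task_vectors, k):
    """Remove the component of `grad` lying in the shared subspace S_share.

    S_share is spanned by the top-k right singular vectors of the stacked
    (flattened) task vectors; updating only with the orthogonal remainder
    leaves the shared directions of the task vectors untouched.
    """
    T = np.stack(task_vectors)               # (num_tasks, dim)
    _, _, Vt = np.linalg.svd(T, full_matrices=False)
    S = Vt[:k]                               # (k, dim) orthonormal basis
    return grad - S.T @ (S @ grad)           # g - proj_{S_share}(g)
```

By construction the returned vector has zero inner product with every basis direction of $S_{share}$, so gradient steps along it cannot erode the shared representations.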
Summary: The authors introduced an approach to merging tasks for a multi-task learning purpose while maintaining performance comparable to task-specific models. They formulated the problem as a constrained optimization task, solved using adaptive projected gradient descent. To facilitate task merging, they introduced a modification vector for each task, acting as a correction mechanism. To achieve this, they constructed a shared subspace using SVD to capture common features, optimizing within this space to minimize task conflicts. The gradient of the modification vector is decomposed into two components: one projected onto the shared subspace and the other orthogonal to it. Additionally, they introduced merging coefficients based on the norm of task vectors to mitigate the dominance of any single task’s gradient influence. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: No Supplementary Material: Yes, sections A, B, C, and D Relation To Broader Scientific Literature: The authors are trying to tackle the challenge of merging model parameters while achieving performance comparable to task-specific models by reducing conflicts between tasks. They discuss most of the prior work addressing this issue. Essential References Not Discussed: Yes, the Weight-Ensembling Mixture of Experts (WEMoE) method for multi-task model merging was introduced in the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts", published at ICML 2024. Additionally, an extended arxiv version, "Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging" (E-WEMoE), further refines this approach. Both papers should be included in the related work section for a comprehensive discussion. Other Strengths And Weaknesses: Strengths: They applied their approach to both vision and NLP tasks. Weakness: Each dataset should have a brief description.
Most of the included datasets focus on a single task, primarily classification. Can this approach be applied to heterogeneous MTL? SVD is computationally expensive. Can this approach be applied to Llama 2 or Llama 3? Traditional MTL needs to be clarified more. For instance, what is its architecture? The results were not compared against the WEMoE published in this paper “Merging Multi-Task Models via Weight-Ensembling Mixture of Experts” and E-WEMoE frameworks presented in the paper "Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging". Additionally, Figure 3 is similar to one in the paper "Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging”. For vision tasks, your results fall short compared to those reported in the paper “Merging Multi-Task Models via Weight-Ensembling Mixture of Experts” and the paper "Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging". Other Comments Or Suggestions: None Questions For Authors: None Ethical Review Concerns: Figure 3 closely resembles the one presented in the paper "Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging", sharing the same representation and color scheme. However, there is no proper citation to this arXiv paper. Notably, this arXiv paper is an extension of the ICML 2024 accepted paper, "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts", and both papers report better results than the paper currently under review. Given that the authors reproduced a highly similar image without citing the original work—regardless of whether they are the same authors—the omission appears intentional, particularly since the prior work (in both papers) demonstrates superior performance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1: The omission appears intentional, since the prior work demonstrates superior performance.

A1: Our approach differs fundamentally from [1,2] in both setting and methodology. Our objective is to close the performance gap between model merging and multi-task learning **without introducing additional computation and memory requirements—a core and previously unresolved challenge in model merging research**.

- ### Parameters

  The MoE architecture preserves MLP layers from each fine-tuned task-specific model and the pre-trained model, while additionally training a router module. In contrast, our merged model maintains a standard model size. The parameter comparison for ViT-B/32 (8 tasks) is as follows:

  | | Total Parameters |
  |-|:-:|
  | Individual | 113.45M |
  | Ours | 113.45M |
  | WEMoE | 573.96M |

  The primary motivation for model merging is parameter reduction. If performance were the sole consideration, retaining each task-specific model would be the trivial solution. **Our method aims to compress multiple models (whether 8 or even 20) into a single standard-sized model, which aligns with the typical settings in model merging and multi-task learning**. As MoE methods (89.4%) exceed the performance upper bound of multi-task learning (88.9%), comparing our approach directly with MoE would be inappropriate.

- ### Data Requirements

  MoE approaches employ unlabelled test datasets to train the router module, whereas our optimization of task vectors is data-free. The performance benefits from test-time adaptation are self-evident. **Merging based solely on model parameters is more practical and represents the focus of most model merging methods**.

- ### Computational Overhead

  Static merging maintains **inference costs** equivalent to standard models, while MoE dynamic merging consumes more memory and computational resources (router + $k$ activated experts).
The inference-phase memory usage comparison is as follows:

| | ViT-B/32 (8 tasks) | ViT-B/32 (20 tasks) | ViT-L/14 (8 tasks) |
|:-:|:-:|:-:|:-:|
| Ours | 963.42MB | 963.42MB | 3772.63MB |
| WEMoE | 2750.65MB | 5346.00MB | 10063.64MB |

Similarly, test-time adaptation incurs additional **training costs**, while our method requires only lightweight training overhead (as shown in Table 10 of our paper):

| | Memory, ViT-B/32 (8 tasks) | Memory, ViT-L/14 (8 tasks) | Time, ViT-B/32 (8 tasks) | Time, ViT-L/14 (8 tasks) |
|:-:|:-:|:-:|:-:|:-:|
| Ours | 729MB | 2448MB | 2.02min | 5.18min |
| WEMoE | 3744.19MB | 24535.53MB | 7.07min | 56.84min |

Notably, our method can be trained layer by layer, enabling model merging for large models with minimal memory requirements.

- ### Regarding Figure 3

  Figure 3 visualizes task vector magnitudes, highlighting a phenomenon inherently observable across domain benchmarks. E-WEMoE and DOGE propose different approaches to address this phenomenon. Figure 3 was drawn with assistance from the E-WEMoE authors to create a new version. **Associating performance gaps with non-citation introduces a conceptual misunderstanding, as fair comparison is impossible due to differing settings.** Meanwhile, we compare our approach with state-of-the-art methods in both data-free and test-time adaptation scenarios (described in Lines 314-328). We appreciate your feedback and will introduce MoE-like methods and clearly describe the differences.

___

> Q2: Each dataset should have a description. Most of the included datasets focus on classification. Can this approach be applied to heterogeneous MTL?

A2: We will add descriptions for each dataset. Model merging in CV indeed focuses primarily on classification tasks, following common experimental settings (as acknowledged by Reviewer `97J9`). Research on heterogeneous model merging remains limited, with existing work mainly centered on VGG and ResNet architectures using CIFAR datasets. We would welcome suggestions for appropriate benchmarks to explore this direction.

___

> Q3: SVD is computationally expensive.
Can this approach be applied to Llama 2 or Llama 3?

A3: SVD computation only needs to be performed once at the beginning. As shown in Table 10, which details the computation overhead, **our approach requires minimal memory and time**. We conducted experiments following standard LLM settings, completing the merging in 58 min on a single A100 GPU. We report normalized scores on merging WizardLM-13B (Instruction-Following), WizardMath-13B (Math), and llama-2-13b-code-alpaca (Code). Our method achieves optimal average performance across tasks.

| | AlpacaEval | GSM8K | MATH | HumanEval | MBPP | Avg. |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Individual | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| TA | 102.7 | 91.0 | 70.5 | 50.0 | 87.7 | 80.4 |
| TIES | 98.1 | 97.4 | 68.1 | 60.0 | 89.4 | 82.6 |
| TA + DARE | 103.1 | 88.0 | 72.5 | 63.3 | 92.9 | 84.0 |
| TIES + DARE | 107.9 | 90.3 | 65.6 | 80.0 | 92.4 | 87.2 |
| Ours | 107.5 | 105.0 | 94.4 | 56.7 | 86.5 | **90.0** |

___

[1] Merging Multi-Task Models via Weight-Ensembling Mixture of Experts. ICML 2024.

[2] Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging. ArXiv 2024.

--- Rebuttal Comment 1.1: Comment: 1. The current response is contradictory. The authors mentioned "If performance were the sole consideration, retaining each task-specific model would be the trivial solution", yet the paper’s primary comparison focuses only on accuracy. Furthermore, the authors do not provide any direct comparison regarding computation or memory efficiency against the state-of-the-art, which undermines their claim. Regarding the comparison to WEMoE, the authors argue that it is unfair to compare their approach against WEMoE (MoE dynamic merging) due to differences in merging strategies. However, both methods fundamentally merge parameters, where WEMoE does so dynamically based on test data, while the proposed approach employs a static merging method. Given that both tackle the same problem using the same datasets, the comparison appears valid.
Moreover, the prior work, WEMoE, was evaluated against the same baselines, such as AdaMerging (test adaptation), Ties-Merging, and Task Arithmetic (data-free methods), which the authors also use in this paper to evaluate their approach. Therefore, the justification for claiming unfairness in comparison to WEMoE is unconvincing. The authors should explicitly include a discussion of these prior works in their manuscript, clearly outlining the pros and cons of their approach relative to them. In particular, while their method may offer improvements in computational and memory efficiency, it is important to address the fact that WEMoE surpasses their approach in accuracy. A balanced discussion of these trade-offs—accuracy versus resource efficiency—would provide a more comprehensive evaluation of the contributions of this work. 2. The computational overhead comparison supports the claim that the static merging approach offers significant savings compared to dynamic MoE methods (i.e., WEMoE). However, the analysis would be stronger if it quantified these benefits—for example, by stating the percentage reduction in memory usage and computation overhead relative to WEMoE, and reporting any corresponding percentage loss in accuracy. Detailed discussion of the trade-offs between resource savings and potential accuracy impacts should be included, as it would provide a more comprehensive evaluation of the method's overall effectiveness. Other comments: 3. For Figure 3, it appears similar to one presented in a previous paper. The authors stated explicitly that they created this version with assistance from the E-WEMoE authors, indicating that it is derived from prior work (including their code). Therefore, it is essential that they provide a proper citation to the original source in the figure caption. 4. The authors did not respond to the question "Traditional MTL needs to be clarified more. For instance, what is its architecture?".
--- Reply to Comment 1.1.1: Comment: Thanks for your time and feedback. Please find point-by-point responses to your concerns below: ___ > Q1: The current response is contradictory, yet the paper’s primary comparison focuses only on accuracy. A1: Our response is not contradictory. The target of model merging is to merge multiple models into a single model that approaches the accuracy of task-specific models. Model merging has developed rapidly, leading to inconsistencies across many works. **This is a current issue in the field, as there is no clear distinction based on parameters, data requirements, and computational costs.** For example, SOTA dynamic merging methods like EMR merging and Twin merging, which function as lightweight WEMoE, also did not compare with WEMoE in their evaluations, instead comparing against AdaMerging (test adaptation) and Ties-Merging (data-free). As stated in the paper, **DOGE is a plug-and-play method**—we incorporate it into classic methods from both test adaptation (AdaMerging) and data-free (Task Arithmetic) categories, achieving SOTA performance in static merging. DOGE can similarly enhance dynamic methods by replacing their weighted averaging components. ___ > Q2: However, the analysis would be stronger if it quantified these benefits—for example, by stating the percentage reduction in memory usage and computation overhead relative to WEMoE.
A2: We will provide a comprehensive comparison table and include a detailed discussion of previous works in the manuscript, offering readers a thorough evaluation:

| Method | Parameters | Router | Data | Parallel | Performance |
|:-|:-:|:-|:-|:-:|:-:|
| TA [1] | 1$\times$ | - | - | static | 69.1 |
| AdaMerging [2] | 1$\times$ | - | unlabeled test dataset | static | 80.1 |
| TA+DOGE | 1$\times$ | - | - | static | 81.0 (**$\uparrow$ 11.6**) |
| AdaMerging+DOGE | 1$\times$ | - | unlabeled test dataset | static | 85.9 (**$\uparrow$ 5.8**) |
| Surgery [3] | >1$\times$ | - | unlabeled test dataset | static | 80.9 |
|--|--|--|--|--|--|
| WEMoE [4] | 5$\times$ | trained router | unlabeled test dataset | dynamic | 89.4 |
| EMR merging [5] | 4$\times$ | perfect router | - | dynamic | 88.7 |
| Twin merging [6] | 2.25$\times$ | trained router | labeled validation dataset | dynamic | 86.1 |
|--|--|--|--|--|--|
| Traditional MTL | 1$\times$ | - | - | - | 88.9 |
| Multiple Models | 8$\times$ | - | - | - | 90.8 |

As shown, merging multiple models into a single model presents significant challenges. DOGE, as a plug-and-play method, substantially improves accuracy. **Dynamic merging faces parallelization issues during inference**, requiring either dynamic I/O loading of task-specific modules or storing all modules in GPU memory. EMR merging needs priors during inference to load corresponding modules, while WEMoE and Twin merging train routers to select modules. We believe methods should be classified before conducting fair comparisons within each category. **Otherwise, according to the no free lunch theorem, MoE methods will always outperform any static merging methods simply due to their larger parameter count.**

___

> Q3: It is essential that they provide proper citation to the original source in the figure caption.

A3: Thank you for pointing out this oversight. We will provide a proper citation in the figure caption.

___

> Q4: Traditional MTL needs to be clarified more. For instance, what is its architecture?
A4: We apologize for the previous omission. As explained in Appendix C (Lines 582-583), Traditional MTL trains a single base model on all tasks simultaneously. The architecture is the standard base model. ___ To summarize our contribution again: We frame model merging as a constrained optimization problem, propose projective gradient descent that optimizes a data-free objective, and design task-aware merging coefficients. Comprehensive experiments validate our plug-and-play capability. Your discussion regarding MoE methods has helped us provide a more comprehensive evaluation in our paper. **We believe that clearer categorization and comparison will benefit the model merging community as a whole.** Thank you sincerely, and we wish you a pleasant day. ___ [1] Editing Models with Task Arithmetic. ICLR 2023. [2] AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR 2024. [3] Representation Surgery for Multi-Task Model Merging. ICML 2024. [4] Merging Multi-Task Models via Weight-Ensembling Mixture of Experts. ICML 2024. [5] EMR-Merging: Tuning-Free High-Performance Model Merging. NeurIPS 2024. [6] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging. NeurIPS 2024.
Summary: This paper addresses the challenge of merging multiple task-specific models into a unified model without accessing their original training data. The authors identify critical limitations in existing methods, such as discarding task-specific information during conflict resolution and over-enforcing orthogonality, which erodes shared knowledge. They propose DOGE, a constrained optimization framework that minimizes performance gaps via data-free gradient descent, projects updates orthogonally to a shared subspace (preserving common representations), and employs task-aware merging coefficients derived from task vector norms. Claims And Evidence: - Correct me if wrong: to use the Taylor expansion, the expansion point and the pretrained model should be very close. This may need to be pointed out and justified. - In addition, did the authors evaluate the performance of using this first-order Taylor expansion to approximate the loss, to validate this choice? Methods And Evaluation Criteria: Methods: - In Algorithm 1, the authors should define $\Delta$, whether it is an input or how it is initialized. Evaluation: - The benchmark datasets are commonly used in task-vector-based model merging. - However, I would like to see a comparison between DOGE and other strong baseline methods such as EMR merging and Twin merging. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: Yes, I checked the experiments. As I mentioned before, it would be great to add comparisons between DOGE and other strong baseline methods such as EMR merging and Twin merging. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This paper is related to prior ideas including Twin-merging (modulating shared and exclusive knowledge) and representation surgery (trying to make the representation of the merged model close to each individual task).
It uses Taylor expansion to approximate the loss without using any data (similar to the idea in MAP, which uses Taylor expansion to approximate the loss function). Essential References Not Discussed: For the Taylor expansion part, it would be helpful to cite a related work (MAP: https://arxiv.org/pdf/2406.07529) which also uses Taylor expansion to approximate the loss function / evaluation metric. Other Strengths And Weaknesses: Strengths: - Empirical results (performance gains, robustness to task scaling, cross-domain generalization) convincingly demonstrate DOGE’s practical efficacy. - The plug-and-play design and compatibility with architectures like ViT/LoRA are validated experimentally. Weaknesses: - Since the method requires additional optimization and additional modification vectors for each task, I would like the authors to present the additional time/space that DOGE requires. Other Comments Or Suggestions: - In the methodology section, $\lVert\cdot \lVert_{Gap}$ and $\lVert\cdot \lVert_{S_{share}}$ make it seem like you are defining some new norms. I would avoid using them as subscripts of the norm symbol. - Tables 5 and 6 are not numbered in the order they appear in the paper. - Table 5: it is interesting that the selected tasks, MNIST and EuroSAT, are relatively easier tasks for the ViT models. It would be interesting to see the generalization performance on SUN397, DTD, and Cars. Questions For Authors: - Is $\Delta$ a task-specific vector? Since you mentioned $\Delta$ is a modification vector for each task vector, and it is not indexed by the task, it was a bit confusing to me at the beginning. Maybe rephrase it to "a universal modification vector applied to each task vector". Code Of Conduct: Affirmed. Overall Recommendation: 3
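For reference, the expansion the review refers to is the standard first-order one around the pre-trained weights $\theta_0$; its validity hinges on the merged update $\Delta\theta$ being small, which is exactly the closeness condition the review asks the authors to justify:

```latex
\mathcal{L}_j(\theta_0 + \Delta\theta)
  \;=\; \mathcal{L}_j(\theta_0)
  \;+\; \nabla_{\theta}\mathcal{L}_j(\theta_0)^{\top} \Delta\theta
  \;+\; O\!\left(\lVert \Delta\theta \rVert^{2}\right)
```

When fine-tuned weights stay close to $\theta_0$, the quadratic remainder is negligible, which is the informal justification that both the rebuttal below and the cited MAP paper rely on.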
Rebuttal 1: Rebuttal: > Q1: Taylor expansion may need to point this out and justify. A1: During fine-tuning, parameter evolution in pre-trained models is frequently minimal, indicating that training remains within the tangent space where Taylor expansion closely approximates network behavior. This aligns with MAP, which examines task vector magnitudes and employs a second-order Taylor expansion to approximate metrics. It provides a formal proof regarding the negligibility of the remainder in the Taylor series, and interestingly proposes using linear regression to estimate the Hessian. Thanks for suggesting this related work! It strengthens our theoretical foundation, and we include this reference to further substantiate the rationale behind our approach. We examined the difference between the first-order Taylor expansion and the original loss, finding them to be within the same order of magnitude, confirming the accuracy of the estimation. Since calculating the gradient $\nabla_{\theta}\mathcal{L}_j(\theta_0)$ requires specific data $\mathcal{D}_j$, we used the task vector $\tau_j$ as an approximation. Interestingly, when we attempted to optimize using actual gradients computed from specific data, we observed performance degradation. We attribute this to highly unstable gradients at initialization, which complicated the optimization process. Thus, approximating the original loss using task vectors appears to be the better approach. ___ > Q2: Whether $\Delta$ is input or how it is initialized. A2: $\Delta$ is initialized as a zero tensor with the same shape as the task vector. ___ > Q3: Strong baseline methods such as EMR merging, Twin merging. A3: As discussed in Twin merging [2], they all belong to dynamic merging, which requires **additional storage for task-specific modules**. Such methods face **parallelization** challenges during inference, necessitating either dynamic I/O loading of task-specific modules or storing all modules in GPU memory.
EMR merging requires priors during inference to load corresponding modules, while Twin merging trains a router using validation datasets to select modules. Both EMR merging and Twin merging can be viewed as lightweight WEMoE [3], yet they still impose storage demands (**2.25× our approach**). For instance, EMR merging's proposed mask implementation still uses 8-bit Bool types, and Twin merging's module reconstruction $U\Sigma V$ requires matrix operations that may not reduce peak GPU consumption. Notably, these approaches avoid direct comparison with WEMoE, which is unsurprising. According to the no free lunch theorem, performance increases with the number of retained parameters, with complete task-specific models representing the upper performance bound. Our approach, by contrast, is a static merging plug-and-play method (like TA and Ties merging) that maintains standard model size and enables parallelized inference. We compare our method with SOTA static merging approaches such as AdaMerging and PCB-Merging. **We believe methods should first be classified before conducting fair comparisons within each category. Otherwise, MoE methods will always outperform others simply due to larger parameter count.** ___ > Q4: Present the time/space that DOGE requires. A4: We have reported training time and memory usage in Table 10 of the Appendix, demonstrating remarkably efficient performance with only 121 seconds total training time and a memory usage of 729MB. We will relocate this information to the main text. ___ > Q5: Generalization performance on SUN397, DTD, and Cars. A5: Based on your request, we conducted experiments evaluating generalization on three unseen tasks when merging five other tasks. The results reveal that SUN397, DTD, and Cars datasets pose challenges for ViT models, while MNIST/EuroSAT show limited generalization to these complex tasks. Despite this, our method consistently outperformed other model merging approaches by a significant margin. 
|Method|Seen||||||Unseen||||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||RESISC45|SVHN|GTSRB|MNIST|EuroSAT|Avg.|SUN397|Cars|DTD|Avg.|
|Pre-trained|60.6|23.5|30.4|47.6|45.6|41.5|63.2|59.9|43.9|55.6|
|Task Arithmetic|52.8|83.9|71.1|97.7|61.9|73.5|27.9|25.0|26.4|26.4|
|Ties-Merging|74.6|89.1|81.8|97.7|73.7|83.4|57.5|51.9|38.7|49.4|
|AdaMerging|73.5|76.0|81.5|97.4|69.4|79.6|42.3|37.8|32.0|37.4|
|DOGE TA|82.6|89.4|89.0|98.6|92.3|**90.4**|58.7|54.3|41.4|**51.5**|

___

> Q6: Is $\Delta$ a task-specific vector?

A6: Thanks for the suggestion. $\Delta$ is a universal modification vector applied to each task vector. In our experiments, using a universal modification vector yields performance nearly identical to that of task-specific modification vectors, as they are mathematically equivalent when optimizing Eq. (5).

___

[1] EMR-Merging: Tuning-Free High-Performance Model Merging. NeurIPS 2024.
[2] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging. NeurIPS 2024.
[3] Merging Multi-Task Models via Weight-Ensembling Mixture of Experts. ICML 2024.
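The first-order Taylor check described in A1 can be illustrated on a toy quadratic loss. This is a hedged sketch with invented dimensions, not the paper's actual experiment: for a quadratic loss, the gap between the true loss change and its linearization is exactly the second-order remainder, which stays small when the parameter modification is small.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
A = rng.normal(size=(d, d))
H = A @ A.T / d                      # toy SPD "Hessian" (illustrative)

def loss(theta):
    return 0.5 * theta @ H @ theta   # toy quadratic surrogate loss

theta0 = rng.normal(size=d)
grad0 = H @ theta0                   # gradient at the pre-trained point
delta = 1e-2 * rng.normal(size=d)    # small parameter modification

exact_change = loss(theta0 + delta) - loss(theta0)
linear_change = grad0 @ delta        # first-order Taylor estimate

# for a quadratic loss the gap equals the second-order remainder exactly
remainder = exact_change - linear_change
assert np.isclose(remainder, 0.5 * delta @ H @ delta)
assert abs(remainder) < 0.05         # negligible for small modifications
```

For a general smooth loss the remainder is not exact but still scales quadratically in the size of the modification, which is the regime the rebuttal appeals to.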
Summary: This paper proposes a new perspective on model merging—treating it as a multi-task learning problem rather than merely a parameter-level combination of multiple expert models. The main idea is to preserve each task's strong performance while reconciling the potential conflicts that arise when unifying several task-specific models into a single, merged network. To that end, the authors introduce an approach they call "Adaptive Projective Gradient Descent" (DOGE). This method formulates the model-merging goal as a constrained optimization problem that minimizes the "performance gap" between the merged model and each of the individual expert models, while explicitly retaining cross-task shared representations.

The procedure has three core steps. First, it refines (or "modifies") each task-specific vector so that merging doesn't simply discard conflict-ridden parameters that might actually be performance-critical. Second, it projects the gradients of these task modifications onto a shared subspace to maintain overlapping knowledge across tasks rather than forcing all task vectors into near-orthogonality. Third, it adapts the merging coefficients in a "training-free" way that is reminiscent of how adaptive optimizers dynamically adjust the learning rate; effectively, the magnitude of each merging coefficient is scaled inversely by the norm of the corresponding task vector. The overall pipeline is then shown to achieve strong performance across diverse architectures (vision and language) and tasks (classification, generation) without requiring access to the original datasets.

Claims And Evidence:

- Improved performance on merged models: DOGE is claimed to achieve higher accuracy than previous data-free merging approaches by better preserving task-specific information while retaining shared representations.
- Effectiveness of gradient projection: By projecting gradient updates orthogonally to the shared subspace, the method aims to resolve task conflicts without sacrificing common knowledge.
- Task-aware coefficient design: The adaptive (training-free) merging coefficients based on task vector norms provide a natural way to balance gradient contributions across tasks.

The evidence supporting these claims comes from extensive experimental results on multiple benchmarks in both vision and NLP domains.

- Quantitative comparisons across a variety of baselines (non-merging methods, data-free approaches, and test-time adaptation techniques) and detailed ablation studies show significant improvements.

Although the experimental evidence is robust, one might wish for more discussion regarding statistical variability (e.g., more error bar analysis or significance testing) to further solidify the claims.

Methods And Evaluation Criteria: The proposed method is well-motivated and methodologically sound. Key components include:

- Data-Free Objective: Derived via a first-order Taylor expansion, the objective minimizes the loss gap between the merged model and each individual model by approximating the unavailable gradient with the task vector.
- Shared Subspace Construction: SVD is used to extract task-specific subspaces, which are then combined and refined to form a shared subspace that guides the gradient projection.
- Adaptive Merging Coefficients: Interpreting task vectors as cumulative gradients leads to a natural formulation where merging coefficients play a role akin to adaptive learning rates.
- The evaluation criteria—such as average accuracy across tasks (or Spearman's ρ for STSB in NLP) and performance on out-of-distribution or unseen tasks—are appropriate for demonstrating the effectiveness and generalization of the method. The comprehensive experiments across different architectures and task modalities further validate the method's practicality.
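The "Shared Subspace Construction" component described above can be made concrete with a small sketch. This is an illustrative reconstruction with invented shapes (stacking flattened task vectors and taking the top-k right singular vectors as a shared basis), not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
num_tasks, d, k = 5, 200, 3
taus = rng.normal(size=(num_tasks, d))        # flattened task vectors, one per row

# top-k right singular vectors give an orthonormal basis of a shared subspace
_, _, Vt = np.linalg.svd(taus, full_matrices=False)
B = Vt[:k]                                     # (k, d) orthonormal basis

g = rng.normal(size=d)                         # a gradient on the task modification
g_shared = B.T @ (B @ g)                       # component inside the shared subspace
g_orth = g - g_shared                          # orthogonal component

# the orthogonal update leaves the shared subspace untouched
assert np.allclose(B @ g_orth, 0.0, atol=1e-8)
assert np.allclose(g_shared + g_orth, g)
```

Updating only with the orthogonal component is one way to "resolve task conflicts without sacrificing common knowledge," as the review phrases it.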
Theoretical Claims: The paper provides heuristic derivations rather than fully formal proofs. Notably:

- The use of a first-order Taylor expansion to derive a data-free objective is a reasonable approximation given the unavailability of task data.
- Approximating the gradient of the task loss at the pre-trained model using the task vector (i.e., $-\tau_j$) is intuitively justified by interpreting the task vector as an accumulation of gradients.
- The decomposition of the gradient into components within and orthogonal to the shared subspace is well-motivated, though the justification remains somewhat heuristic.

While these theoretical insights are plausible and backed by empirical evidence, a more rigorous treatment or further formal analysis would help strengthen the theoretical claims.

Experimental Designs Or Analyses: The experimental setup is comprehensive:

- The authors evaluate on eight-task vision benchmarks using CLIP-based ViT-B/32 and ViT-L/14 models, as well as on eight-task language benchmarks using LoRA fine-tuned Flan-T5 models.
- Multiple baselines, including both data-free and test-time adaptation methods, are used for comparison.
- Detailed ablations assess the contribution of each module (∆ optimization, shared subspace projection, and adaptive λ), lending credibility to the claims about each component's effectiveness.
- Additional experiments on unseen tasks and corrupted test sets reinforce the method's robustness.

One minor suggestion is to include more explicit details on computational overhead (I found some recent work that also updates ∆ by gradient descent, but I am not sure how expensive this procedure is) and convergence behavior across varying numbers of tasks.

Supplementary Material: The supplementary material (including appendices) appears to provide:

- Additional experimental details (e.g., dataset specifics, hyperparameter settings, implementation details).
- Extended ablation studies and discussions on sensitivity analyses (e.g., effect of varying the subspace rank).
- Further comparisons with baselines and additional visualizations that support the claims in the main text.

Relation To Broader Scientific Literature: The paper is well-situated within the broader context of multi-task learning and model merging:

- It builds upon previous work in data-free model merging (e.g., Task Arithmetic, Ties-Merging) and test-time adaptation (e.g., AdaMerging).
- It draws connections to multi-task learning strategies that emphasize gradient alignment and modular architectures.

Essential References Not Discussed: Some relevant work that tackles model merging on subspaces needs to be discussed:

- Gargiulo, A. A., Crisostomi, D., Bucarelli, M. S., Scardapane, S., Silvestri, F., and Rodolà, E. Task Singular Vectors: Reducing Task Interference in Model Merging. arXiv preprint arXiv:2412.00081, 2024.
- Stoica, G., Ramesh, P., Ecsedi, B., Choshen, L., and Hoffman, J. Model Merging with SVD to Tie the KnOTS. arXiv preprint arXiv:2410.19735, 2024.

Other Strengths And Weaknesses:

Strengths:
- Comprehensive Evaluation: The extensive experimental validation across both vision and language domains, along with detailed ablation studies, convincingly demonstrates the method's effectiveness.
- Practical Relevance: The approach is designed to work in data-free scenarios, which is particularly appealing in settings where access to original training data is restricted due to privacy or logistical concerns.

Weaknesses:
- Theoretical Rigor: Some derivations, particularly the gradient approximations and the rationale behind using $-\tau_j$ as a proxy for the gradient, could benefit from a more rigorous treatment.
- Hyperparameter Sensitivity: The method involves several hyperparameters (e.g., the subspace basis size and global scaling factor η) whose selection may critically affect performance. More discussion on sensitivity analysis would be helpful.
- Computational Overhead: A deeper analysis of the additional computational costs (e.g., due to SVD and projection operations) would enhance understanding of the method's scalability.

Other Comments Or Suggestions:

- Clarity in Derivations: Some steps in the derivation of the data-free objective could be elaborated further. A step-by-step explanation with more intuition would improve readability. For example, $\|\theta^* - \theta_i\|_{Gap}$ appears in Eq. (2) without introduction.
- Limitations and Future Work: It would be beneficial for the authors to include a discussion on potential limitations (e.g., cases where tasks are highly heterogeneous) and directions for future research.

Questions For Authors:

- How sensitive is the overall performance to the choice of the subspace basis size and the global scaling factor η?
- Could you provide further empirical or theoretical justification for approximating $\nabla_{\theta}\mathcal{L}_j(\theta_0)$ by $-\tau_j$? Under what conditions might this approximation break down?
- Should I expect gradient-based MTL methods (https://github.com/thuml/awesome-multi-task-learning) to outperform task arithmetic (e.g., MGDA, CAGRAD, PCGRAD, IMTL...)?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for your detailed comments. We hope the following discussion can address your concerns!

___

> Q1: Some relevant work that tackles model merging on subspaces needs to be discussed.

A1: Thanks for suggesting additional relevant work. We will discuss them in related work: *TSV [1] aggregates task vectors within their subspaces via low-rank approximation and whitens matrices to minimize interference. KnOTS [2] aligns representation spaces between LoRA models using SVD, enabling the application of merging methods. Both these methods and ours recognize parameter low-rankness and implement merging within subspaces.*

___

> Q2: Theoretical Rigor: The rationale behind using $-\tau_j$ as a proxy for the gradient could benefit from a more rigorous treatment.

A2: Under the Neural Tangent Kernel assumption (i.e., fine-tuning often occurs in a linear regime), which has been validated in prior work [3,4], $\nabla_{\theta}\mathcal{L}_j(\theta_0)$ can be estimated as $k\tau_j$ where $k < 0$. Here, $\tau_j = \theta_T - \theta_0 = -\sum_{t=1}^T \alpha_t \nabla_{\theta_t}\mathcal{L}_j(\theta_t)$, where $\alpha_t$ represents the learning rate and $T$ denotes the number of update iterations. Given the linearity of the loss in the vicinity of $\theta_0$, we have $\nabla_{\theta_t}\mathcal{L}_j(\theta_t) = \nabla_{\theta_0}\mathcal{L}_j(\theta_0)$. Therefore, we derive $\nabla_{\theta}\mathcal{L}_j(\theta_0) = -\frac{\tau_j}{\sum_{t=1}^T \alpha_t}$.

___

> Q3: Hyperparameter Sensitivity: The method involves several hyperparameters (e.g., the subspace basis size $k$ and global scaling factor $\eta$) whose selection may critically affect performance.

A3: We have conducted experiments on the subspace basis size $k$ in Figure 4, which displays performance with varying rank ratios alongside the explained standard deviation. We also investigated the relationship between different projection directions and basis sizes.
Additional sensitivity analysis for the global scaling factor $\eta$ is supplemented as follows:

| η |0.01|0.02|0.03|0.04|0.05|0.06|0.07|0.08|0.09|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ViT-B/32| 79.5 | 80.3 | 80.6 | 80.9 | **81.0** | 80.8 | 80.7 | 80.2 | 79.8 |

The evaluation across values from 0.01 to 0.09 demonstrates that performance remains stable and even achieves higher results. (We did not conduct a specialized grid search; this range was chosen because the calculated $\lambda$ was close to 0.3.) This consistency across different $\eta$ values verifies the robustness of our approach and highlights the practicality of applying task-aware coefficients.

___

> Q4: Computational Overhead: A deeper analysis of the additional computational costs (e.g., due to SVD and projection operations) would enhance understanding of the method's scalability.

A4: We have reported training time and memory usage in Table 10 of the Appendix, showing an efficient total training time of only 121 seconds and memory usage of 729MB. The SVD operation only needs to be executed once at the beginning, with a computational complexity of $O(\min(mn^2, m^2n))$. We appreciate your reminder and will relocate this to the main text. Moreover, we supplement the final version with convergence loss curves for 8 and 20 tasks, showing that convergence is typically achieved within 100 to 200 iterations.

___

> Q5: Clarity in Derivations: A step-by-step explanation with more intuition would improve readability. For example, $\|\theta^* - \theta_i\|_{Gap}$ appears in Eq. (2) without introduction.

A5: We apologize for any confusion caused. Eq. (2) is a brief mathematical summary presented before the detailed methodology. We have revised it and explained each symbol's meaning. Combined with the proof presented above, this will enhance the overall clarity of the derivation.

___

> Q6: It would be beneficial for the authors to include a discussion on potential limitations and directions for future research.
A6: Thanks for your suggestion. A potential limitation is the lack of consideration for heterogeneous model merging, which requires transformation when task vectors have inconsistent shapes or layer numbers. Regarding future research, we are extending our work to LLMs by merging WizardLM-13B, WizardMath-13B, and llama-2-13b-code-alpaca, achieving SOTA performance. For detailed table results, please refer to our response to Reviewer `QBR6`.

___

> Q7: Should I expect gradient-based MTL methods to outperform task arithmetic?

A7: Yes. Task arithmetic implements MTL in a training-free manner and can be viewed as a post-transfer approach for existing models, while MTL methods typically serve as performance upper bounds that we aim to approach.

___

[1] Task Singular Vectors: Reducing Task Interference in Model Merging. CVPR 2025.
[2] Model Merging with SVD to Tie the KnOTS. ICLR 2025.
[3] Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models. NeurIPS 2023.
[4] A Linearized Framework and A New Benchmark for Model Selection for Fine-Tuning. ArXiv 2021.
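The derivation in A2 (the task vector as accumulated negative gradients, so that $\nabla_{\theta}\mathcal{L}_j(\theta_0) \approx -\tau_j / \sum_t \alpha_t$ in a near-linear regime) can be sanity-checked numerically. This is a toy sketch with an invented quadratic loss and small learning rate, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
A = rng.normal(size=(d, d))
H = A @ A.T / d                      # SPD Hessian of a toy quadratic loss (illustrative)
b = rng.normal(size=d)

def grad(theta):                     # gradient of L(theta) = 0.5 theta^T H theta - b^T theta
    return H @ theta - b

theta0 = rng.normal(size=d)
theta, alphas = theta0.copy(), []
for _ in range(20):                  # short fine-tuning with a small learning rate
    alpha = 1e-3
    theta -= alpha * grad(theta)
    alphas.append(alpha)

tau = theta - theta0                 # task vector
approx_grad = -tau / sum(alphas)     # predicted gradient at theta0

rel_err = np.linalg.norm(approx_grad - grad(theta0)) / np.linalg.norm(grad(theta0))
assert rel_err < 0.1                 # close while updates stay in the near-linear regime
```

The approximation degrades as the number of steps or the learning rate grows, since the per-step gradients then drift away from $\nabla_{\theta}\mathcal{L}_j(\theta_0)$, which is the "conditions under which it breaks down" the reviewer asked about.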
Learnable Spatial-Temporal Positional Encoding for Link Prediction
Accept (poster)
Summary: This paper introduces a framework, L-STEP, for learning representations on discrete temporal dynamic graphs, which comprises two key components: 1) LPE (Learnable Positional Encoding): a learnable spatial-temporal positional encoding designed to capture evolving graph topology from a spectral viewpoint. Concretely, LPE approximates positional encodings by applying the Discrete Fourier Transform (DFT) over past positional encodings. 2) A Node-Link-Positional Encoder for computing temporal node representations. Experiments are conducted to validate the effectiveness of L-STEP.

Claims And Evidence: In line 241, the binary cross-entropy loss function in Equation (11) seems incorrect. The second part should be $\log(1 - \hat{y})$ instead of $(1 - \log(\hat{y}))$.

Methods And Evaluation Criteria: Please kindly refer to Other Strengths And Weaknesses.

Theoretical Claims: Please kindly refer to Other Strengths And Weaknesses.

Experimental Designs Or Analyses: Please kindly refer to Other Strengths And Weaknesses.

Supplementary Material: I went through most of the appendix sections.

Relation To Broader Scientific Literature: None.

Essential References Not Discussed: None.

Other Strengths And Weaknesses:

Strengths:
1. The design of LPE is clear. The authors provide a rationale for using the DFT in positional encoding computation: positional encodings can be regarded as signals on graphs, and the DFT can be applied to filter noise caused by randomness or entity activities. Experimental results show that LPE significantly improves the performance of downstream tasks.
2. L-STEP is efficient. The authors show in Section 4 that, in certain scenarios, the proposed encoder is more efficient than methods that rely on neighborhood co-occurrence for node embedding computation. Besides, L-STEP contains a novel loss function for learning representations on discrete temporal dynamic graphs, as outlined in Equation (13).
3. The paper is well written and easy to follow.
Weaknesses: L-STEP is thoroughly evaluated on recent and widely used datasets collected by [1], using evaluation metrics that are commonly adopted in prior studies [1, 2]. However, I do have some concerns regarding the use of negative links in the training and evaluation process. Specifically, in Section 3.3, the authors seem to employ three different negative sampling strategies—random, historical, and inductive—during the loss computation. In contrast, the code implementation from [2] trains models using randomly selected negative edges and tests them on random, historical, and inductive negative edges, respectively. The experiment settings seem to be different. Since the results for the historical and inductive negative edge settings, where L-STEP significantly outperforms the previous state-of-the-art [2], are directly taken from the result tables in [2], the evaluation and subsequent conclusions could potentially be affected.

[1]: Poursafaei, Farimah, et al. "Towards better evaluation for dynamic link prediction." Advances in Neural Information Processing Systems (2022)
[2]: Yu, Le, et al. "Towards better dynamic graph learning: New architecture and unified library." Advances in Neural Information Processing Systems (2023)

If this concern can be addressed with thorough analysis and concrete evidence, my rating can be raised.

Other Comments Or Suggestions: Please kindly refer to Other Strengths And Weaknesses.

Questions For Authors: Please kindly refer to Other Strengths And Weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks very much for your review! We are excited to learn of your appreciation of our model's design, effectiveness and efficiency, and the paper's writing. Your suggestions are actionable and helpful. First of all, we have corrected the typo in Eq. (11) and promise to update it in our camera-ready version. Then, we would like to further discuss your concern about the experimental setting.

[**Data Source**] The datasets used in our paper were, according to [2], first collected by [1]. However, as a library, [2] provides all dataset processing and baseline implementations in detail (https://github.com/yule-BUAA/DyGLib), and we used all the datasets from [2].

[**Implementation**] We strictly followed the implementation of [2] and reported the performance. For clarification, we follow the training and evaluation settings of [2] as follows.
- Specifically, during training, we compute the loss function and perform back propagation for the model only using randomly sampled negative links.
- Then, with this trained model, we test the model on random, historical, and inductive negative edges.
- The corresponding configurations are listed in Appendix I of the paper.

[**Proof from Other Sources**] Beyond the 13 classic datasets in [2], we also tested our method on the open-source worldwide large-scale benchmark leaderboard, TGB [3] (https://tgb.complexdatalab.com/docs/leader_linkprop/), which also provides a standard dataset split and training paradigm. On TGB, our method also achieves very competitive performance in temporal link prediction tasks, as shown in Table 2 of the paper.

[**Code Release**] We promise to release our code upon publication.

---

Reference

[1]: Poursafaei, Farimah, et al. "Towards better evaluation for dynamic link prediction." Advances in Neural Information Processing Systems (2022)
[2]: Yu, Le, et al. "Towards better dynamic graph learning: New architecture and unified library."
Advances in Neural Information Processing Systems (2023)
[3]: Gastinger, J., Huang, S., Galkin, M., Loghmani, E., Parviz, A., Poursafaei, F., Danovitch, J., Rossi, E., Koutis, I., Stuckenschmidt, H., Rabbany, R., and Rabusseau, G. "TGB 2.0: A benchmark for learning on temporal knowledge graphs and heterogeneous graphs." Advances in Neural Information Processing Systems (2024)

---

Rebuttal Comment 1.1:

Comment: Thanks for the rebuttal. Please share your anonymous code link.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer uGHA,

Thanks very much for your reply and your satisfaction with our rebuttal answer. Regarding your remaining concern about our code release, we prepared a detailed ReadMe file along with the source code; the link is **https://anonymous.4open.science/r/L-STEP-9ED2**. Again, we promise to attach the code repository in our camera-ready version.

For your extra information, to verify our performance, Reviewer U1pJ asked for an extra baseline comparison, HTGN (Yang et al., Hyperbolic temporal network embedding, TKDE 2022). The experimental results are now ready as follows. We adopt the official implementation of HTGN and evaluate the model on CanParl. We provide the link prediction performance, i.e., Average Precision (AP) and Area Under the Receiver Operating Characteristic Curve (AUC-ROC), of HTGN and our L-STEP, along with other SOTAs such as FreeDyG, DyGFormer, and GraphMixer, in different settings.
| Dataset | Method | Transductive (AP) | Transductive (AUC-ROC) | Inductive (AP) | Inductive (AUC-ROC) |
| - | - | - | - | - | - |
| CanParl | HTGN | 64.22 ± 1.40 | 59.95 ± 3.52 | 71.01 ± 1.74 | 67.03 ± 2.82 |
| | FreeDyG | 72.22 ± 2.47 | 81.09 ± 2.2 | 52.96 ± 1.05 | 52.89 ± 1.61 |
| | DyGFormer | 97.36 ± 0.45 | 97.76 ± 0.41 | 87.74 ± 0.71 | 89.33 ± 0.48 |
| | GraphMixer | 77.04 ± 0.46 | 83.17 ± 0.53 | 55.91 ± 0.82 | 58.32 ± 1.08 |
| | L-STEP (Ours) | **98.24 ± 0.11** | **98.97 ± 0.06** | **92.25 ± 0.24** | **95.06 ± 0.11** |

Thanks again for your review; please feel free to let us know if you have any further questions.

Best,
#8546 Authors
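The protocol described in the rebuttal (train only on randomly sampled negative links; evaluate additionally under historical negatives) can be sketched on a toy edge stream. This is a simplified illustration of the setting, with invented sizes, not DyGLib's actual implementation:

```python
import random

random.seed(0)
num_nodes = 50
# toy stream of (src, dst, timestamp) interactions
edges = [(random.randrange(num_nodes), random.randrange(num_nodes), t)
         for t in range(500)]
train, test = edges[:400], edges[400:]

def random_negative(src):
    return random.randrange(num_nodes)              # uniform destination

def historical_negative(src, t):
    # a destination that src interacted with strictly before time t
    past = [d for (s, d, ts) in edges if s == src and ts < t]
    return random.choice(past) if past else random_negative(src)

# the training loss uses only random negatives
train_negs = [(s, random_negative(s)) for (s, d, t) in train]
# evaluation is repeated under random AND historical negative samplers
test_negs_hist = [(s, historical_negative(s, t)) for (s, d, t) in test]

for (s, neg), (_, _, t) in zip(test_negs_hist, test):
    past = [d for (s2, d, ts) in edges if s2 == s and ts < t]
    if past:
        assert neg in past   # historical negatives come from past interactions
```

Historical negatives are harder than random ones because the model must distinguish links that recur from links that merely existed in the past, which is why results under the two samplers can diverge sharply.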
Summary: This work explores the problem of temporal link prediction and proposes a semi-non-graph model that shows quite good performance across a variety of datasets. Their model, L-STEP, works by introducing a learnable time-dependent positional encoding, where the time dependence is parameterized by a learnable Fourier kernel. This idea is simple but seems to provide reasonable performance.

Claims And Evidence: The claims are reasonably supported, with good performance across many datasets. I appreciate the ablation experiments that compare LPE with the full model as well as the exploration of parameter sensitivity. It's interesting that L-STEP performs comparatively worse on both TGBL datasets, and that in most settings the lifts aren't statistically significant. Would you care to comment?

Methods And Evaluation Criteria: Yes, the experiments absolutely make sense, and the benchmark datasets can be viewed as being as close to exhaustive as is likely possible.

Theoretical Claims: The theoretical claims in Section 4 seem ancillary to the point of the paper and don't seem to meaningfully explain why these positional encodings work. Furthermore, the theoretical analysis makes an adiabaticity assumption, which is probably sufficient for the setting where you have a relatively mature graph at time t1 that then evolves without changing the community structure too much. I wonder how this theorem holds during the initial growth phase of a preferential-attachment-style generative process. It would seem that the adiabaticity of the graph isn't there, and thus the model wouldn't really hold, but I'd love the authors' thoughts on the matter.

Experimental Designs Or Analyses: I reviewed the experiments presented in Section 5 as well as the appendices. As presented, nothing seemed unexpected, but I did not reimplement the model.

Supplementary Material: Yes.
Appendices C, E, H

Relation To Broader Scientific Literature: As mentioned above, the ideas in this paper aren't particularly novel. For example, the use of a DFT to compute evolving PEs isn't new. While the theoretical and methodological contributions seem to be limited, they are combined in a way that leads to a simple-to-understand model with relatively significant performance gains. Because ML is an empirical science, I believe it's important to make space for papers which don't maximize novelty if they refine these ideas in a way that leads to significant empirical gains.

Essential References Not Discussed: Not to my knowledge.

Other Strengths And Weaknesses:

**Other Weaknesses**
- The paper is poorly written, which can make it hard to follow in spots. For example, the first paragraph of Section 3.1 is a large run-on sentence. I would recommend that the authors give the paper a thorough grammatical review. Language modeling tools like ChatGPT or Claude can be quite effective for this.

Other Comments Or Suggestions: See Other Strengths And Weaknesses.

Questions For Authors:
1. What is the spectrum of filtered noise in your experience? Do you
2. By using the DFT to model temporal propagation, you're essentially assuming wave-like dynamics because the wave equation's time-propagator can be expanded in terms of the Fourier basis. Have you explored other possible bases? For example, cosh/sinh to model diffusion?
3. How does this model work in an inductive setting? Specifically, how is p0 computed for a new vertex?
4. How does the DFT presented here relate to the traditional graph Fourier transform that's defined by the eigensystem of the Laplacian? Is the filter that's learned in L-STEP expected to be in any way similar to the filters learned by traditional MPNNs? If not, could you comment on why this DFT appears to perform better?
5. How did you implement the complex-valued gradient of W? This can be quite a nuanced topic, so I'm curious as to how it works.
Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Thanks for your appreciation of our method's soundness and experimental design! Addressing your concerns improved the quality of our paper, and we prepared the answers below.

> How do learnable positional encodings work in a dynamic environment?

Due to the length limit of the response, we sincerely invite you to check our first answer to Reviewer U1pJ.

> W1: The first paragraph of Section 3.1 is a large run-on sentence; use an LLM to refine.

Thanks, we will.

> Q1: What is the spectrum of filtered noise in your experience?

We would like to elaborate on what kind of information our learnable filter (used in Eq. (1)) learns. Theoretically, we first apply the DFT on $u$'s sequence of learned positional encodings at previous timestamps {$\mathbf{p}^{t'}_u$}; then the learnable filter filters out high-frequency noise across dimensions of the learned positional encodings in the spectral domain; and finally, the inverse DFT transforms the filtered positional encodings back to the original domain. Practically, the filtered noises are those that cannot contribute to link prediction decisions.

> Q2: By using the DFT to model ...

First, to our understanding, the sequence does not need to be wave-like to be decomposed by the DFT, so we do not constrain our positional encoding to follow any wave-like dynamics. In addition, we did not consider applying cosine or sine bases, since the DFT's bases already contain cosine and sine components.

> Q3: How is p0 computed for a new vertex?

If a new vertex emerges, we first initialize this vertex's positional encoding to a zero-valued vector, then proceed with the model's computations to make link predictions involving this vertex.

> Q4: How does the DFT relate to ...

We clarify the difference between the DFT and the traditional graph Fourier transform, then elaborate on why employing the DFT is more suitable for the temporal link prediction problem as follows.
The DFT is leveraged for processing time-series-like data, while the graph Fourier transform analyzes signals associated with nodes of a graph. Intuitively, the signals in a time series are defined along a timeline, i.e., each timestamp on the timeline produces a signal, while the graph Fourier transform processes signals defined on nodes of a graph, i.e., each node is associated with a signal; for example, each node of the graph has a node feature vector.

- In our framework, we are trying to "approximate" a node $u$'s positional encoding at time $t$ based on its positional encodings at the previous $L$ most recent distinct timestamps, $\{\mathbf{p}\_u^{t'\_j}\}\_{j = 1}^L (t'\_j < t, \forall j)$, so through the lens of the DFT, we can consider $\{\mathbf{p}\_u^{t'\_j}\}\_{j = 1}^L$ as a sequence of signals defined on a timeline, where each timestamp $t'\_j$ produces the signal $\mathbf{p}\_{u}^{t'\_j}$.
- The graph Fourier transform (GFT) acts as a convolution on the graph, i.e., the GFT computes a node's representation based on other nodes' information, and the GFT is defined for static graphs, while we only want to obtain a node's positional information based on its positional information at previous timestamps, so we think that the DFT is a better fit for our goal.
- Due to the difference in definition and setting between the DFT (for sequences and temporal data) and the GFT (for static graphs), we do not expect the filter of L-STEP to be similar to the filters learned by traditional MPNNs.

In short, we aim to analyze the temporal dependencies in the sequence of node $u$'s positional encodings at different timestamps, so we employ the DFT, which is a better fit for sequences and time-series-like data, while the GFT operates as a convolution on static graphs.

> Q5: How did you implement the complex-valued gradient ...
In our implementation, we first initialize $\mathbf{W}\_{filter}$ using the nn.Linear() module of the PyTorch library and then convert the parameter to the torch.complex64 data type, i.e., turning $\mathbf{W}\_{filter}$ into a complex-valued tensor. Now, we present our implementation of Eq. (1) as follows. Suppose `batch_pe` is $\{\mathbf{p}\_{u}^{t'\_j}\}\_{j = 1}^L$ and `filter` is $\mathbf{W}\_{filter}$. The pseudocode for implementing Eq. (1) can be described as follows:

```python
import torch
from torch import fft, nn

# step 1: initialization of W_{filter} as a complex-valued parameter
filter = nn.Linear(d_P, L, bias=False).to(torch.complex64)

for epoch in range(num_epochs):  # model training phase
    # step 2: convert batch_pe to a complex-valued tensor
    batch_pe = batch_pe.to(torch.complex64)
    # step 3: apply the FFT transformation on batch_pe
    batch_pe = fft.fftn(batch_pe, dim=1)
    # step 4: element-wise filtering in the spectral domain
    batch_pe = filter.weight.unsqueeze(0) * batch_pe
    # step 5: apply the inverse FFT
    batch_pe = fft.ifftn(batch_pe, dim=1)
    # step 6: convert back to the float32 data type
    batch_pe = batch_pe.to(torch.float32)
```

Here, step 4 of the pseudocode represents $\mathbf{W}\_{filter} \odot \mathcal{F}(\{\mathbf{p}\_{u}^{t'\_j}\}\_{j = 1}^L)$, where $\mathcal{F}$ denotes the DFT, and step 5 is equivalent to applying the inverse DFT. Finally, we convert `batch_pe` back to real-valued tensors.
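The same filtering step can be checked end-to-end in NumPy, whose `np.fft` routines mirror the FFT/inverse-FFT pair in the pseudocode. The shapes below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
B, L, d_P = 4, 8, 16                        # batch, history length, encoding dim (illustrative)
batch_pe = rng.normal(size=(B, L, d_P))     # stacked past positional encodings
W_filter = rng.normal(size=(L, d_P)) + 1j * rng.normal(size=(L, d_P))  # spectral filter

spec = np.fft.fft(batch_pe, axis=1)         # DFT over the temporal axis
filtered = W_filter[None, :, :] * spec      # element-wise spectral filtering
out = np.fft.ifft(filtered, axis=1).real    # inverse DFT, keep the real part

# sanity check: an all-ones filter makes the round trip the identity
ident = np.fft.ifft(np.fft.fft(batch_pe, axis=1), axis=1).real
assert np.allclose(ident, batch_pe)
assert out.shape == (B, L, d_P)
```

The identity check makes the role of the filter explicit: everything the model does to the history beyond a pass-through is encoded in the learned complex weights.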
Summary: The paper introduces a new method for predicting connections in networks that change over time. Instead of using fixed rules or complex models that require a lot of computing power, L-STEP learns how positions in the network change over time using the Fourier transform. The authors show that their approach keeps important information about the network's structure and performs as well as more complicated models. They test it on 13 datasets and two large benchmark datasets and find that it predicts connections better than existing methods.

Claims And Evidence: Yes, the results are supported by experiments on datasets.

Methods And Evaluation Criteria: Yes, the methods and criteria are appropriate.

Theoretical Claims: I checked the theorem in the appendix.

Experimental Designs Or Analyses: I checked Theorem 4.1, and it depends on the graph's slow change. For this claim to be dependable, the authors need to show that slow change is a phenomenon in real graphs.

Supplementary Material: Yes, I checked the additional experimental results and sensitivity analysis.

Relation To Broader Scientific Literature: The article studies the long and well-studied problem of predicting edges that have already appeared in the past. In this sense, I do not see much novelty. However, they also propose a learnable positional encoder, and this is novel.

Essential References Not Discussed: I would like to see an HTGN comparison (not our article) to see how it compares to hyperbolic methods, but there are two recent and A+ articles. They seem enough.

Yang, M., Zhou, M., Xiong, H. and King, I., 2022. Hyperbolic temporal network embedding. IEEE Transactions on Knowledge and Data Engineering, 35(11), pp.11489-11502.

Other Strengths And Weaknesses:

Strengths:
- The article is quite detailed and has covered almost all potential analyses in the appendix. The baselines are strong and the studied datasets are many. I appreciate the sensitivity analysis in the appendix.
Weaknesses: - The article has quite a few components that all need to work as expected. The low changing aspect needs to be tested. Many models break down in sparse graphs or highly dynamic environments, but the authors do not analyze cases where the graph changes too quickly for their positional encoding estimation to remain accurate. - The code is not shared. I would raise a bigger issue about this, but the detailed results in the appendix are the saving grace, and I will assume that the authors will share the code soon. - The hyperparameter analysis suggests that L-STEP performs well within a certain range, but does not discuss how sensitive the model is to bad choices. Other Comments Or Suggestions: - The main article does not contain dataset descriptions and most of the reported results. - L-STEP results are bolded in the right panel of table 2, but these are not the best results. It is misleading to bold them. Questions For Authors: - Why are FreeDyG results missing in table 11? - An interesting assumption is that you try to learn a function to estimate a positional encoding of u at time t. However, predicting an exact positional encoding assumes a high degree of regularity in graph evolution, which may not hold in real-world networks. That part is not quantified by using synthetic graphs. You also do not try to quantify it in datasets and compare your performance as a function of that change. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks very much for your review! We are excited to learn of your appreciation of our extensive experiments and the corresponding outperformance. Your suggestions are quite actionable, and we have carefully prepared answers in Q&A format below. > W1: The low changing aspect needs to be tested. First, the outperformance of our method across all 15 real-world datasets has proven our theoretical design is valid to some extent. Of course, we understand your concern, and we prepared the illustration of the slowly changing pattern in real-world graphs below, taking the UNVote dataset as an instance. This further supports the assumption, and we promise to add this new content to our camera-ready version. [**Positional Encoding Patterns with Time**] - In the 1st plot of [UNvote](https://docs.google.com/document/d/e/2PACX-1vTPt6EJ-gnmGwZgW9Lth0UgAukqk4pPqSFPFQhJBIy6QRMe4e6Az6kIo5eGRQ1dYOBl_PGI0IHBJkBL/pub), the x-axis represents timestamps, and the y-axis represents the distance between each consecutive pair of positional encodings, i.e., $\mathbf{p}^t$ and $\mathbf{p}^{t + 1}, \forall t$. Here $\mathbf{p}^t \in \mathbb{R}^{|V| \times d_P}$ is a matrix whose $i$-th row is the positional encoding of node $i$. - We compute the distance between $\mathbf{p}^t$ and $\mathbf{p}^{t + 1}$ by first computing the Euclidean distance between $\mathbf{p}\_u^t$ and $\mathbf{p}^{t + 1}\_u$ for each node, and then taking the mean over all nodes, i.e., $\frac{1}{|V|} \sum_{u \in V} d(\mathbf{p}\_u^{t}, \mathbf{p}\_u^{t + 1})$, as the distance between $\mathbf{p}^t$ and $\mathbf{p}^{t + 1}$. - We discover that the distance between each consecutive pair of positional encodings, $\mathbf{p}^{t}$ and $\mathbf{p}^{t + 1}$, is consistently small across all timestamps. 
- In the 2nd plot, we illustrate the distance between our learned positional encoding and the true positional encoding for each graph snapshot. - Overall, the distance between our learned positional encoding and the true positional encoding of each snapshot is consistently small across all timestamps. [**Theoretical Explanation**] To further address your concern, we would like to go through Theorem 1 intuitively for you. In short, it shows that when the temporal graph is slowly evolving with respect to time, the positional encoding learned at the previous timestamp can well approximate the positional encodings at the near-future timestamp. Theoretically, the significance of Theorem 1 is to demonstrate the effectiveness of our proposed LPEs in the inductive setting, where future graph structural information is hard to observe, such that the “slowly changing” assumption for the coming snapshots is needed for this theoretical derivation. Moreover, defining a degree of change over temporal snapshots is an interesting but largely open question, which may involve random matrix theory and related knowledge [1], and, to the best of our knowledge, there is so far no clear definition. Therefore, **assuming temporal smoothness [1], i.e., that the next snapshot will not change dramatically from the previous snapshot,** is a quick-win solution for the theoretical derivation, and we are more than happy to dive into this topic as a future research direction. [1] Chi et al., Evolutionary Spectral Clustering by Incorporating Temporal Smoothness, KDD 2007 > W2: Available code. We promise to publish our code upon the paper’s publication. Also, we are preparing an anonymous link, so we can share the implementation shortly (before the deadline for final response). > W3: Hyperparameter sensitivity. Based on our parameter analysis, our model is comparatively robust to bad choices, i.e., we experience drops in performance while varying the hyperparameters, but the drops are not significant. 
For more details and intuition for choosing the hyperparameters, please refer to Appendix H.5. > Q1: Why are FreeDyG results missing in table 11? The original FreeDyG was not run on the UNTrade dataset. We follow FreeDyG’s official implementation, but the corresponding performance is around 0.500. We will add this explanation to our camera-ready version. > Comment 1: The main article does not contain dataset descriptions and most of the reported results. Due to the page limit, the descriptions are in Appendix G. Results of other experimental settings (historical and inductive negative sampling), the ablation study, and the parameter analysis are in Appendix H. > Comment 2: L-step results are bolded in table 2 right panel but these are not the best results. Table 2 is ranked from top to bottom with a ranking index; we bold our method only to highlight its position. > Suggestion: I would like to see an HTGN comparison to see how it compares to hyperbolic methods, but there are two recent and A+ articles. They seem enough. We understand, and we are adapting HTGN to this paper's setting. If finished early, we will post HTGN’s results before the final-response deadline. If not, we will add it to our camera-ready version.
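The consecutive-snapshot distance used in the W1 answer above, $\frac{1}{|V|} \sum_{u \in V} d(\mathbf{p}\_u^{t}, \mathbf{p}\_u^{t+1})$, can be sketched as follows; the snapshot tensor shape and the drift magnitude are hypothetical:

```python
import numpy as np

def consecutive_snapshot_distances(P):
    """Mean per-node Euclidean distance between consecutive positional encodings.

    P: array of shape (T, V, d) -- T snapshots, V nodes, d-dim encodings.
    Returns T-1 distances, one per consecutive pair (p^t, p^{t+1}).
    """
    diffs = P[1:] - P[:-1]                      # (T-1, V, d)
    per_node = np.linalg.norm(diffs, axis=-1)   # (T-1, V) Euclidean distances
    return per_node.mean(axis=-1)               # average over all nodes

# toy check: a slowly drifting encoding yields uniformly small distances
T, V, d = 5, 10, 3
base = np.zeros((V, d))
P = np.stack([base + 0.01 * t for t in range(T)])   # each step moves 0.01 per coordinate
dists = consecutive_snapshot_distances(P)
assert dists.shape == (T - 1,)
assert np.allclose(dists, 0.01 * np.sqrt(d))
```

Plotting `dists` against timestamps reproduces the kind of curve described for the first UNVote plot.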
On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists
Accept (poster)
Summary: The paper introduces a method for training language models collaboratively between multiple devices/clients while personalizing them to their on-device data at the same time. This is done by introducing a mixture of generalist experts and specialist experts, where the generalists are trained collaboratively (e.g., through federated averaging) and the specialists are trained locally (without aggregation). The routers are trained locally as well. Thus, only the generalist experts are aggregated and learned collaboratively across clients, which reduces communication overhead. The problem is formulated as a bi-level optimization problem with a training algorithm that optimizes the experts on a training dataset and then optimizes the routers on a validation dataset, done alternately. This method addresses both data heterogeneity and computational resource heterogeneity, and is purportedly the first to accomplish that; it also separates model heterogeneity from data quantity, as shown in the extensive experiments section. The authors also provide a convergence analysis under some standard assumptions. ## update after rebuttal The authors have addressed most of my concerns and I'm satisfied with the rebuttal. I will keep my original positive rating. Claims And Evidence: The authors claim that the bi-level formulation of the MoE learning objective and the alternating minimization algorithm are new. The formulation itself might be, but an alternating minimization algorithm over the router and the experts is quite general, so it is difficult to claim that it is "new", especially since it resembles expectation maximization (which the authors themselves mention). For example, [this work](https://arxiv.org/abs/2410.03497) trains a mixture of "generalists" with local (input-independent) routers, which resembles a simplified version of the proposed method. The theoretical analysis is interesting and shows linear convergence. The assumptions are claimed to be "suitable". 
I agree that the claims are standard. However, I would argue that the existence of a minimizer might not necessarily be a suitable assumption in language modeling. In general, the loss doesn't go to 0 on these problems. The claim regarding not overfitting even with many experts and little data might only be applicable because this is a fine-tuning problem, but it is still an interesting result. The other claims are true and are well corroborated in the experiments, though I'm not sure whether this work is, indeed, the first that addresses both data heterogeneity and computational resource heterogeneity. Methods And Evaluation Criteria: The proposed method is very sound and the authors provide theoretical and experimental evidence that it works. The authors provide extensive experimentation and ablation studies to evaluate the method, all of which show positive evidence that the proposed method achieves good performance, robustness, and efficiency with respect to the baselines. Theoretical Claims: I checked the correctness of the proofs in the Appendix. The analysis is straightforward and clear. I did not find any significant errors. Experimental Designs Or Analyses: The experimental design does capture interesting practical scenarios. For example, the out-of-distribution experiments use a validation and test set that are from a different distribution than the training dataset, so that Assumption 1 is violated. However, the method still performs favorably, which shows its robustness. Other experiments also provide ample evidence for the claims in the contributions list. The authors run experiments on various datasets and compare to many recent baselines. The analysis is also valid for the considered setup. Supplementary Material: I reviewed all of the supplementary material. Relation To Broader Scientific Literature: The contributions of this paper are important to the literature. They tackle multiple issues of interest. 
For example, the data heterogeneity and systems heterogeneity in federated learning. The proposed method is also relevant to practical applications of LLMs in the real world. Essential References Not Discussed: I do not know of any essential references related to this work that were not discussed. Other Strengths And Weaknesses: This work provides a practical and theoretically well-grounded method for an important application: collaborative training of personalized language models. The solution is sound and the experiments show good performance. A great strength of this paper is that the authors share well-written code for reproducing the experimental results. One weakness might be that this method mainly works for fine-tuning. It would be interesting to see whether the same benefits still hold for full training. Another could be that this procedure is only applicable to mixtures of experts that route tokens, i.e., transformer-based models and tokenized data. Other Comments Or Suggestions: In line 1435, the authors mention that the goal is to show that (26) is strongly convex, but the next line closes the section by saying that direct computation of the Hessian shows the joint strong convexity. Perhaps the authors can write out the details to make it clearer to the reader how it holds for sufficiently large $\alpha$ and $\mu$. Minor comment: the simplex defined in line 1361 should actually be $\Delta^N$ because there are $N$ degrees of freedom/dimensions for $N+1$ values (the last value is completely determined by the rest). Typos: There is a little 'z' hanging on the right side of Table 2. Also, in line 1406, there is a typo: "srongly". Questions For Authors: When you have many generalists, and one device can only use a few, how do you choose the generalists which will be assigned to those low-resource devices? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your time and expertise in reviewing our work, as well as for your encouraging and positive feedback. Below, we address all your questions and concerns. If any issues remain, please let us know, and we’ll be happy to provide further clarification. **Novelty of the algorithm** We agree that alternating minimization methods and their various formulations, such as the EM algorithm, have a long and rich history, which we also mention in our paper. At the same time, we believe that our optimization formulation for a particular MoE problem, along with a particular open-sourced implementation (Algorithm 1 on page 11), is substantially novel. We support it with extensive experiments, showing an effective balance between general and personalized knowledge. Thank you for providing an additional reference, which we are happy to add to our work. **Violation of Assumption 1** We would like to note that our theoretical analysis demonstrates that the method is expected to converge quickly, at least under favorable conditions, a property that we believe is naturally expected from a well-designed method. However, we do not claim that this theory accounts for all possible practical scenarios. The primary validation of our algorithm for language modelling is provided through extensive experiments in Section 4. For over-parameterized models (which may include LLMs), it is possible that different $f_{valid}$ and $f_{train}$ share the same optima, which we also note in our paper (Lines 209-212, right). At the same time, as you also highlighted in your review, we see experimentally that the method performs favorably even for distributions different from the training datasets. We leave it as an interesting open question for further research: what are the general and joint conditions on $f_{valid}$ and $f_{train}$ that imply Assumption 1? **Proof of strong convexity** Thank you for pointing this out. 
In the final version of our paper, we include the formal proof of strong convexity for our decoupled objective from Section F.3. Below, we provide a brief overview of our reasoning. Our goal is to show that the following function (see eq. (26) on page 27), $F(\Theta, \Phi, \Lambda) = l( \langle \Lambda, \Theta^{\top} x \rangle) - \mu \langle \Lambda, \Theta^{\top}x \rangle + \frac{\alpha}{2}( \| \Theta \|_F^2 + \| \Phi \|_F^2 ) + \mu d(\Lambda) + \mu s(\Phi^{\top} x )$, is strongly convex for sufficiently large $\alpha > 0$ and $\mu > 0$. We notice that we can separate our objective in $\Phi$ and $\Theta$, thus constructing $g_1(\Theta, \Lambda) = l(\langle \Lambda, \Theta^\top x\rangle) + \frac{\alpha}{4}\|\Theta\|_F^2 +\frac{\mu}{2}d(\Lambda)$ and $g_2(\Phi, \Lambda) = -\mu \langle \Lambda, \Phi^\top x\rangle + \frac{\alpha}{4}\|\Phi\|_F^2 +\frac{\mu}{2}d(\Lambda)$, so that $F(\Theta, \Phi, \Lambda) = g_1(\Theta, \Lambda) + g_2(\Phi, \Lambda) + [\frac{\alpha}{4} (\|\Phi\|_F^2+\|\Theta\|_F^2) + \frac{\mu}{2} d(\Lambda) + \mu s(\Phi^\top x)]$. Since the log-sum-exp function $s(\cdot)$ is strictly convex, while the negative entropy $d(\cdot)$ is strongly convex, as is the Frobenius norm, the term in square brackets is strongly convex. Therefore, it suffices to prove the convexity of $g_1$ and $g_2$. For that, we compute the Hessian directly and show that it is positive semidefinite for sufficiently large regularization parameters. For the formal proof, please also see Proposition 1 at the following link, which we will include in our updated appendix: https://anonymous.4open.science/r/CoMiGS/Strongly_Convex_Proof.pdf **Minor typos** Thanks for pointing out the typos and the dimensions of the simplex; we will make sure these are fixed.
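The convexity facts invoked in the proof sketch above (log-sum-exp is convex; negative entropy is convex on the simplex, as is the squared norm) can be spot-checked numerically via midpoint inequalities; the dimension and the number of random trials below are arbitrary choices for illustration:

```python
import numpy as np

def logsumexp(z):
    # numerically stable log-sum-exp, the s(.) above
    m = z.max()
    return m + np.log(np.sum(np.exp(z - m)))

def neg_entropy(p):
    # sum_i p_i log p_i, the d(.) above, defined on the simplex interior
    return np.sum(p * np.log(p))

rng = np.random.default_rng(3)
# midpoint-convexity spot checks: f((u+v)/2) <= (f(u)+f(v))/2
for _ in range(100):
    u, v = rng.normal(size=5), rng.normal(size=5)
    assert logsumexp((u + v) / 2) <= (logsumexp(u) + logsumexp(v)) / 2 + 1e-12
    p, q = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))
    assert neg_entropy((p + q) / 2) <= (neg_entropy(p) + neg_entropy(q)) / 2 + 1e-12
```

Midpoint checks cannot establish strict or strong convexity, of course; they only confirm that no convexity violation occurs at the sampled points.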
Summary: The authors focus on the problem of on-device collaborative fine-tuning of LLMs to address both computational resource heterogeneity and data heterogeneity among users. The authors try to develop a framework that can balance general and personalized knowledge for each token generation while being robust against overfitting. The experimental results show the proposed method improves the inference performance and computational efficiency. Claims And Evidence: Yes. The claims are easy to follow and the evidence is clearly supported. Methods And Evaluation Criteria: Yes. The authors use typical LLM models and tasks. Theoretical Claims: Yes. The formulation of distributed model training is clear. Experimental Designs Or Analyses: Yes. The experiments are correctly configured and the insights obtained from the experiments are clearly explained. Supplementary Material: Yes. I have read the appendix in the main submission. Relation To Broader Scientific Literature: This paper is strongly related to the on-device LLM model design and deployment. Essential References Not Discussed: No. I think the references are adequately covered. Other Strengths And Weaknesses: The authors propose separating the experts into generalists and specialists, where generalists are shared across users to provide general knowledge and specialists are localized to provide personalized knowledge. This is a practical idea to guarantee the model robustness. Other Comments Or Suggestions: Overall, this paper is interesting and the technical depth is fine in most aspects. Questions For Authors: The authors propose to use a learnable router to determine the aggregation weights of the experts based on the input tokens. Could you please give more details on how to train this router, especially when the data is unbalanced among the users? Ethical Review Concerns: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your time and expertise in reviewing our work, as well as for your encouraging and positive feedback. Below, we address all your questions and concerns. If any issues remain, please let us know, and we’ll be happy to provide further clarification. Regarding your question about **router training**, we make the following clarifications: Within each user, the router training is the same as in standard MoE training, i.e. the router is updated using gradient methods, apart from the following changes: 1) instead of updating router and expert parameters at the same time, we update the two sets of parameters in an alternating fashion. 2) router parameters are updated less frequently than expert parameters. Our method seems to handle **unbalanced data** very well. For example, in Section 4.4, to simulate high and low local data quantities, we assigned 10x as many tokens to French and Italian users as to German and Dutch users (Lines 387-389). Our results show that our method is robust to local data imbalance, no matter whether local resource abundance is positively (Figure 5) or negatively (Figure 6) correlated with local data quantities.
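The alternating schedule described above (expert updates on the training split every step, router updates on a validation split less frequently) can be sketched on a toy mixture; the two scalar "experts", the input-independent router logits, and all hyperparameter values are hypothetical simplifications, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
x_tr = rng.normal(size=50); y_tr = 3.0 * x_tr   # training split, target slope 3
x_va = rng.normal(size=20); y_va = 3.0 * x_va   # validation split (for the router)

w = np.array([0.0, 0.0])   # two scalar "expert" slopes
r = np.array([0.0, 0.0])   # input-independent router logits (toy simplification)

def mix(r):
    e = np.exp(r - r.max())
    return e / e.sum()        # softmax mixture weights

def loss(w, r, x, y):
    return np.mean(((mix(r) @ w) * x - y) ** 2)

lr, router_every = 0.05, 5
for step in range(400):
    lam = mix(r)
    resid = (lam @ w) * x_tr - y_tr
    # inner step: expert parameters on the training split (every step)
    grad_w = np.array([np.mean(2.0 * resid * lam[k] * x_tr) for k in range(2)])
    w = w - lr * grad_w
    # outer step: router parameters on the validation split (less frequently)
    if step % router_every == 0:
        eps, grad_r = 1e-5, np.zeros(2)
        for k in range(2):
            rp = r.copy(); rp[k] += eps
            grad_r[k] = (loss(w, rp, x_va, y_va) - loss(w, r, x_va, y_va)) / eps
        r = r - lr * grad_r

assert loss(w, r, x_va, y_va) < 1e-2   # the mixed slope approaches the target
```

The `router_every` counter is the toy analogue of updating routers less frequently than experts; in the real system the router gradient would come from backpropagation rather than finite differences.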
Summary: This paper introduces CoMiGS, a modular federated learning framework for adapting LLMs using a mixture of generalist and specialist LoRA experts. CoMiGS employs a bi-level optimization strategy, alternating between routing and expert parameter updates. Experimental results on GPT-125M and Llama-3.2-1B demonstrate its superior performance over both local and federated baselines, while also offering theoretical and empirical insights into the framework’s behavior. Claims And Evidence: The paper makes a compelling case for the separation of global and user-specific parameters in a modular architecture that is federatedly trained. This enables specializing experts towards specific sub-distributions and efficient learning under non-IID data and heterogeneous target devices. However, while the authors claim to enable on-device training, which is a typical scenario of cross-device federated settings, they do not evaluate in either case. The evaluation measures cost in a device-independent way, while the number of participating clients and the participation scheme do not resemble cross-device settings. Methods And Evaluation Criteria: The proposed methods seem well motivated and the evaluation is reasonable. If anything, I would propose some additional areas of exploration to showcase the generality, scalability and applicability of the method. Concretely: * Although perplexity gives a feel for the quality of the output, the number is not definitive regarding the downstream quality of the model. To this end, it might be beneficial to also have LLM-as-judge reports on the models, or incorporate tasks like QA to understand the question-answering capabilities of the models. * How would CoMiGS work with other PEFT methods or adapters (e.g. DORA, VERA)? * How would CoMiGS work under different aggregation methods? 
* Another interesting avenue for exploration, especially for on-device deployment, would be the interplay of the technique with quantization methods (or other compression schemes), where the router and adapters may operate on a lossy pretrained model. * Since the paper inherits a federated setup, an interesting question arises wrt the tradeoff of utility and privacy when training the generalists under DP. Theoretical Claims: I went over the theoretical alternating minimization convergence proofs in the appendix at a high level. They look reasonable. Experimental Designs Or Analyses: For the experimental evaluation, I have the following comments: * It is unclear from the paper how the authors have federated the datasets and whether the distribution is non-IID among clients. * Are the expert scores in Figure 4 the results of Eq.1? * In §4.3, where the authors evaluate the adaptation to system heterogeneity in clients, an alternative could be not to have experts (or the same number of experts) across layers, especially given the dynamics of expert selection over the depth of the network. Supplementary Material: I reviewed the extra experimental details and additional experiments in the Appendix, as well as the proof of convergence. Relation To Broader Scientific Literature: CoMiGS builds on top of the fields of federated learning, LLM adaptation and modular networks, by providing a framework that enables the personalization of on-device experts (specialists), while sharing knowledge across users via federated experts (generalists). It does so by taking into consideration both data and system heterogeneity, typical features of federated setups, thus adapting to non-IID multi-device federated environments. Essential References Not Discussed: - Other Strengths And Weaknesses: ### Strengths * The approach of having two sets of experts that are federatedly trained to specialize and route between global vs. 
local objectives is well motivated and works competitively with the baselines. * The mixture of generalists and specialists allows the model to simultaneously capture individual user preferences and linguistic styles while still allowing the exchange of globally shared knowledge. I like the interplay between generalists and specialists, especially wrt regularization. * The evaluation explores various dimensions of the learning dynamics and modular behavior during inference, which provides interpretability into the operating dynamics of the model. I particularly like the fact that the authors have evaluated on both in- and out-of-distribution datasets. ### Weaknesses * The federated paradigm put forward does not seem to focus on a cross-device setup, but rather assumes small client counts and full participation. Furthermore, the models have not been deployed "on-device". * The dependence on validation sets may cause a lack of robustness under distribution shifts. * The results seem quite sensitive to the number of experts and the data size. Although this is evaluated, there seems to be no proposed way to pick them, given a specific learning scenario. Other Comments Or Suggestions: * The figures in the evaluation are barely legible. The authors should probably use a larger font size to increase legibility. * In Table 2, the numbers in the parentheses should be explained in the caption. Questions For Authors: * I am not sure I have fully understood the reasoning behind the size of the trainable parameters (i.e. in the router) and the ability to update less frequently. Also, is the only way to achieve this by skipping updates in the outer loop, or can it be effectively achieved by adjusting the learning rate of the outer optimization? * Have the authors tested also specializing the router function per user instead of federating it? * Does Figure 3 suggest that we might not need the same number of specialists across layers? 
If so, is there potential for further optimization? * Following up on the results of Figure 5, what is a good way for practitioners to select how many specialists to dedicate to their learning model? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your time and expertise in reviewing our work, as well as for your encouraging and positive feedback. Below, we address all your questions and concerns. If any issues remain, please let us know, and we’ll be happy to provide further clarification. **Methods And Evaluation Criteria** _Choice of perplexity as the metric_: First, we focus on next-word prediction for end users, and perplexity is a well-established metric for evaluating the quality of such predictions. Second, we believe that LLM-as-judge evaluations and existing QA tasks may not be the most appropriate metrics for assessing personalization, as the benchmarks we reviewed are primarily designed to test knowledge understanding. To properly evaluate personalization, we would need to curate custom personalized QA questions. However, since we are not experts in benchmark creation, we have chosen to rely on more standard evaluation metrics. **Experimental Designs Or Analyses** _Dataset distribution across users_: We briefly mentioned our user-specific dataset creation in Lines 259-260: a distinct category is assigned to each user, as this simulates the most challenging scenario for collaboration. For example, with the Multi-lingual Wikipedia dataset, we assign each category (in this case, a language) to each of the four users. The tokens per user are specified in Table 4 in the appendix. The data distribution is thus non-IID, in the most extreme case. _Expert scores_: Yes, you are right. Expert scores in Figure 4 are the results of Eq.1; we will make this clear in the updated manuscript. Eq.1 gives expert scores for each single token, and what we report in Figure 4 has been averaged across all tokens and multiple batches. _Number of specialists per layer_: This is a great question. Indeed, we investigated the scenario where some users may not have specialists, i.e. they are only equipped with generalists. Please check Figure 7 for this setting. 
Our experimental results show that those low-resource users can still benefit from collaborating with high-resource users. **Weakness** _Cross-device setting?_ Due to academic budget constraints, we were unable to conduct cross-device experiments. Our current setup involves a single user equipped with a model containing hundreds of millions to billions of parameters, and we were unable to scale testing to a large number of users. Nevertheless, we believe our framework remains applicable in cross-device scenarios. One can envision a hierarchical structure in which neighboring devices form local clusters that apply our method independently. The same approach can then be extended across clusters to enable broader coordination. Importantly, our method does not rely on full user participation: while full participation can accelerate the learning of a generalist model, it is not a requirement for the method to function effectively. _Dependence on valid set._ “The dependence on validation sets may cause lack of robustness in the technique under distribution shifts.” – Quite the opposite: for any OOD target distribution, as long as there is a small valid set, our router can learn a set of task-specific weights to combine generalists and specialists. Thus, CoMiGS gives great flexibility in tackling OOD target scenarios. _Sensitivity of CoMiGS_. Such sensitivity is expected; it would be surprising if a more capable device performed identically to a less capable one. We would like to emphasize that our method effectively disentangles resource availability from data quantity. As demonstrated in Figures 5 and 6, we evaluate two contrasting scenarios: one where low-data-quantity users are assigned more experts and high-data-quantity users fewer, and another with the reverse setup. In both cases, our CoMiGS method exhibits strong robustness. 
**Questions** _Router update and personalization:_ We followed the standard MoE architecture, so the router is simply a one-layer MLP that gives weights for each expert. You are right that router overfitting can also be avoided with a smaller learning rate for the router update. In our approach, we went for less frequent updates, as this is more resource-efficient. Sorry for not making it clear, but our routers are localized per user, instead of being federated, as illustrated in Figure 2. _Further optimization regarding #specialists per layer_: Indeed, Figure 3 suggests that different layers may not require the same number of experts. However, since each user exhibits a unique pattern of expert utilization across layers, it becomes challenging to perform further optimization at the level of individual users. _Suggestions to practitioners._ For practitioners, when there is no prior information on the local task complexity, we would suggest they select as many specialists as their devices allow. Our framework can effectively mitigate overfitting. In case the local task is complex, having more specialists can help achieve better performance, as illustrated in Figure 5.
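The score reporting discussed in the rebuttal (a one-layer softmax router producing per-token expert scores, which Figure 4 averages over tokens and batches) can be sketched as follows; all shapes and the function names are hypothetical:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def average_expert_scores(hidden, router_w):
    """Per-token expert scores from a one-layer router, averaged over tokens/batches.

    hidden:   (B, T, d) token representations -- hypothetical shapes
    router_w: (d, E) router weights, E = number of experts
    """
    scores = softmax(hidden @ router_w, axis=-1)   # (B, T, E), per-token mixture weights
    return scores.mean(axis=(0, 1))                # (E,) averaged scores, as in Figure 4

rng = np.random.default_rng(2)
h = rng.normal(size=(4, 16, 8))
W = rng.normal(size=(8, 3))
avg = average_expert_scores(h, W)
assert avg.shape == (3,)
assert np.isclose(avg.sum(), 1.0)   # averaging distributions preserves the sum of 1
```

Since each token's scores form a probability distribution, the averaged vector still sums to one, which makes per-expert bars directly comparable across users.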
Predicting mutational effects on protein binding from folding energy
Accept (poster)
Summary: The authors propose a deep learning method for predicting binding energy differences (i.e. ΔΔGs) for closely related pairs of protein-protein complexes. To do so the authors rely on a well-known identity that relates binding energy differences to folding free energies. The authors then proceed to fine-tune a pre-trained inverse folding model--a proxy for a folding energy predictor--on both folding and binding energy data. In their empirical evaluation the authors demonstrate that the resulting predictor, STAB-DDG, is competitive with, and may indeed outperform, the state-of-the-art Rosetta-based predictor Flex ddG. Claims And Evidence: Yes the claim that STAB-DDG provides reasonably good ΔΔG predictions (this remains a difficult problem, despite the author's contributions) is well supported by the empirical evaluation. In particular the authors seem to have taken care to minimize possible issues with data leakage, and have taken the time to curate an additional test set for consideration (TCR mimics). (Though I should note that I have not examined the information in the supplementary materials about the data splitting strategy). Granted, the limited size of test sets limits the ability to do fine-grained evaluation; nevertheless, it would be great if the authors made some attempt to do so. For example, does STAB-DDG tend to do worse on larger complexes? On certain classes of protein? Aggregate metrics are great, but it would be great if the authors could expand their results to give a better understanding of their method's common failure modes. Methods And Evaluation Criteria: The experimental method and reported methods appear to follow best practices in the field. As stated above, it would be great if the authors could offer more nuanced results that go beyond aggregate metrics. Theoretical Claims: The (straightforward) theoretical results included in the submission, specifically Proposition 3.1, look sound. 
In this context I note that the authors comment that "the linear model introduces asymmetry". As I understand this is entirely driven by $\phi_0 \ne 0$. Can you please comment and extend your discussion on this point? What value of $\phi_0$ do you learn in practice? Does keeping this term actually make any appreciable difference to the empirical performance? Experimental Designs Or Analyses: As far as I can tell the experimental designs/analyses look sound. Supplementary Material: I did not review the supplementary material in detail. However, I did look to see if the authors report the values of the learned $\phi$ parameters and did not find them. These seem like very natural parameters to report. I'm also curious if the authors can post-rationalize any of the values learned (e.g. their signs or relative magnitudes). Relation To Broader Scientific Literature: The discussion of related work in Sec 4 is pretty thorough, including a discussion of both classical and ML-based approaches. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is generally well-written and easy to follow. While there is nothing particularly surprising about the approach taken, the authors do a good job of motivating the rationale behind their approach and demonstrating that it can perform well in practice. As such I think submission could be of interest to the ICML community and recommend acceptance. Other Comments Or Suggestions: - In Eqn. 5 $M_n$ can presumably be quite variable in practice, with the consequence that some data points will contribute much less to the loss than others. Have you considered using a loss that doesn't normalize by $M_n$? For example, one could normalize by $\sqrt{M_n}$. - You state that "We fit the linear parameters first with the zero-shot predictor Δfθ(s, s′) before fine-tuning θ.". Can you please explain your rationale for doing so in more detail? Why not fit jointly? Why do you prefer a two-stage approach? 
Have you tried both approaches? - What do you think is the driving factor explaining the (smallish) performance differences between your results and those of Dieckhaus 2024? Is it their particular architecture? Is it something else? - You state that "Importantly, the imbalance in the numbers of mutants across different structures introduces bias to the “Per-Structure” metrics". I guess this would more accurately be described as variance? typos: - double period in prop 3.1 last bullet point - line 200: "denote the a" Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review. We appreciate your thoughtful suggestions and have conducted additional experiments that provide further insight into our method.

Fine-grained evaluation: We examine StaB-ddG’s performance on different subsets of SKEMPI.

Complex size: total number of residues of a PPI. (PS RMSE: Per Structure RMSE)

| Complex Size | PS RMSE |
|---------------------|---------|
| < 200 residues | 1.05 |
| < 400 residues | 1.33 |
| < 600 residues | 1.48 |
| < 800 residues | 1.42 |
| < 1000 residues | 1.51 |

Interface structural rigidity: percent of near-interface residues (defined as within 10A of another chain) with secondary structure type = loop.

| Interface Rigidity | PS RMSE |
|--------------------|---------|
| < 30% loops | 1.254 |
| < 40% loops | 1.414 |
| < 50% loops | 1.465 |
| < 60% loops | 1.508 |
| < 70% loops | 1.492 |
| < 80% loops | 1.496 |

We observe that StaB-ddG does worse on bigger complexes and more flexible interfaces. We believe that this information provides valuable additional insight into our method and will include it in the appendix.

Learned linear parameters (negative means destabilizing): $\alpha = 0.24, \phi_0 = -0.19$

Amino acid offsets

| ALA | CYS | ASP | GLU | PHE | GLY | HIS | ILE | LYS | LEU | MET | ASN | PRO | GLN | ARG | SER | THR | VAL | TRP | TYR |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 0.00 | -0.80 | 0.04 | 0.10 | -0.55 | 0.13 | -0.20 | -0.40 | 0.12 | -0.33 | -0.47 | 0.01 | 0.33 | -0.01 | -0.23 | 0.00 | -0.07 | -0.27 | -0.68 | -0.50 |

Since both the fine-tuning losses and ddG predictions are invariant to additive shifts applied to all per-amino-acid offsets, we report the differences relative to Alanine. We see that the learned $\phi_0$ is negative, indicating that most mutations in the dataset are destabilizing.
We note that the value of the bias does not affect the correlation metrics, and find that adding the bias term lowers the overall RMSE from 1.88 to 1.79. We also note that the learned offset for TRP, a bulky hydrophobic residue expected to have larger effects on stability, is the second largest in magnitude and is negative (destabilizing).

Two-stage finetuning: Our rationale for two-stage finetuning is that first fitting the linear parameters can correct for any initial scale mismatch in our predictions that might inflate initial gradients and destabilize training. When we view the linear model as the final layer of a deep predictor, this choice coincides with standard practice in transfer learning [1]. In our revision we will elaborate on this choice.

Comparison to ThermoMPNN: We believe the driving factors of the performance difference between our model and ThermoMPNN are (1) the architectural difference, and (2) we train on multiple mutations as well, while ThermoMPNN is trained only on single mutations.

$\sqrt{M_n}$: Thank you for your suggestion to try our loss with $\sqrt{M_n}$. We re-trained the model with this loss and did not observe a meaningful difference from our original loss; we will include the results below in the appendix:

| | $M_n$ | $\sqrt{M_n}$ |
|------------------|-------|-------|
| Overall Pearson | 0.553 | 0.548 |
| Overall Spearman | 0.515 | 0.515 |

Bias and variance in per-structure metrics: For structures with a small number of variants, computed correlations are subject to both significant bias and high variance (see e.g. [2]). Though non-exhaustive, we feel figure 5 in our submission gives some indication of these effects, as we have no reason to expect performance to be different on structures with few mutants.

Typos: Thanks for pointing these out! We will fix them accordingly in the revision.

[1] Kumar et al. Fine-tuning can Distort Pre-trained Features and Underperform Out-of-Distribution. ICLR 2022.
[2] Bishara, Anthony J.
and Hittner, James B. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality. Educational and Psychological Measurement.
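The bucketed evaluation reported in this rebuttal (per-structure RMSE restricted to complexes below a size threshold) can be reproduced with a short script. The following is a minimal sketch, not the authors' code: the record layout and field names (`pdb_id`, `n_residues`, `pred`, `true`) are hypothetical stand-ins for however the SKEMPI predictions are stored.

```python
import math
from collections import defaultdict

def per_structure_rmse(records, max_size):
    """RMSE of ddG predictions computed within each structure, then
    averaged across structures, restricted to complexes with fewer
    than max_size residues (as in the rebuttal's subset tables)."""
    by_structure = defaultdict(list)
    for r in records:
        if r["n_residues"] < max_size:
            by_structure[r["pdb_id"]].append((r["pred"] - r["true"]) ** 2)
    if not by_structure:
        return float("nan")
    rmses = [math.sqrt(sum(sq) / len(sq)) for sq in by_structure.values()]
    return sum(rmses) / len(rmses)

# Toy data with hypothetical values, for illustration only.
records = [
    {"pdb_id": "1AK4", "n_residues": 150, "pred": 1.0, "true": 0.5},
    {"pdb_id": "1AK4", "n_residues": 150, "pred": 0.0, "true": 0.5},
    {"pdb_id": "1BRS", "n_residues": 350, "pred": 2.0, "true": 1.0},
]
print(per_structure_rmse(records, 200))  # only 1AK4 qualifies: 0.5
```

Sweeping `max_size` over 200, 400, ..., 1000 yields the cumulative buckets shown in the rebuttal's first table.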
Summary: This work introduces Stab-DDG, a deep-learning method for DDG prediction that leverages both folding energy and binding energy data during pre-training. The authors relate folding energy to binding energy in a principled manner, which leads to a loss function for pre-training on folding energy data specifically. The authors show that Stab-DDG and its linear variant outperform existing methods on the SKEMPI v2.0 dataset and a TCR mimic case study. The authors also contribute theoretical criteria for DDG predictors overall and show that their proposed method fits these criteria well.
Claims And Evidence: The claims of the paper are generally well-supported by the experiments and theoretical analysis. The authors show that Stab-DDG outperforms existing methods on the SKEMPI v2.0 dataset and a TCR mimic case study. The authors also show that Stab-DDG fits the theoretical criteria for DDG predictors well, and the connection between folding and binding energy is fairly straightforward.
Methods And Evaluation Criteria:
**Strengths**
- The proposed model and evaluation metrics for SKEMPI v2.0 and the TCR mimic case are all sensible and customary for the DDG prediction task.
- Leveraging folding energy data for pre-training is a novel and interesting approach that is well-motivated.
**Weaknesses**
- The authors mention that a scalar $\alpha$ is used to scale the inverse folding model's output, and that averaging over noise terms during inference is used to reduce variance, but the exact values of $\alpha$ and the number of noise terms $\epsilon$ are not provided. Is $\alpha$ learned?
- If I'm understanding correctly, Stab-DDG requires estimating log-probabilities of both bound complexes and unbound monomers. However, to my knowledge, SKEMPI v2.0 only contains data for bound complexes. It would be helpful to clarify how unbound monomer data is obtained or if it is used at all for SKEMPI v2.0.
Theoretical Claims: All proofs referenced in Appendix B are straightforward to check and are sound. Experimental Designs Or Analyses: **Strengths** - The overall experimental designs are sound. - The authors correctly point out issues in dataset splitting in SKEMPI v2.0 from previous works and provide a solution with documented filtering steps. - The authors include standard errors in their tables with statistical significance tests for per-structure Pearson/Spearman coefficients, which is good practice. **Weaknesses** - It's not clear why similar significance testing is not done for the overall metrics in Table 4. - There is no mention of specific hyperparameters such as hidden state sizes, epoch numbers, learning rates, initializations, optimizers, etc. for each step of the pre-training and fine-tuning procedures, which are crucial for reproducibility. - Similar to the Methods section, there are no parameter sensitivity studies for $\alpha$ and the number of noise terms $\epsilon$. - The authors include comparisons with RDE-PPI and PPIFormer, which are strong baselines. However, there is a wealth of other deep learning-based methods like DiffAffinity [1], Prompt-DDG [2], Surface-VQMAE [3], Boltzmann Alignment [4], ProMIM [5] etc. that are not compared against. I think this is especially important since the authors are offering not only a new method but also a new split and filtering method for SKEMPI v2.0, and it would be helpful to see how Stab-DDG compares to these other methods under this new setting. [1] Liu et al. Predicting mutational effects on protein-protein binding via a side-chain diffusion probabilistic model. NeurIPS 2023. [2] Wu et al. Learning to Predict Mutational Effects of Protein-Protein Interactions by Microenvironment-aware Hierarchical Prompt Learning. ICML 2024. [3] Wu, Fang and Li, Stan Z. Surface-VQMAE: Vector-quantized Masked Auto-encoders on Molecular Surfaces. ICML 2024. [4] Jiao et al. 
Boltzmann-Aligned Inverse Folding Model as a Predictor of Mutational Effects on Protein-Protein Interactions. ICLR 2025. [5] Mo et al. Multi-level Interaction Modeling for Protein Mutational Effect Prediction. Preprint. Supplementary Material: I read all of the supplementary material. Relation To Broader Scientific Literature: To my knowledge, this is the first work to directly connect folding energy to binding energy for deep learning-based DDG prediction. Essential References Not Discussed: Besides the missing baselines in the experimental section, I think all relevant references are already discussed. Other Strengths And Weaknesses: **Weaknesses** - To my knowledge, the core method and architecture, using the log odds-ratios of wild-type and mutant sequences, is not novel. Other works like Boltzmann Alignment [1] and ProteinMPNN-DDG [2] have used similar approaches using the same backbone inverse folding model to predict DDG. [1] Jiao et al. Boltzmann-aligned inverse folding model as a predictor of mutational effects on protein-protein interactions. ICLR 2025. [2] Dutton et al. Improving Inverse Folding models at Protein Stability Prediction without additional Training or Data. MLSB @ NeurIPS 2024. Other Comments Or Suggestions: I think this work is well-motivated and a valuable contribution. However, I think the main weaknesses are more about completeness in the experimental section, e.g. lack of hyperparameter and optimization details and missing deep learning baselines. Questions For Authors: - Do the authors observe any interesting per-residue patterns in model predictions on the SKEMPI v2.0 dataset? For example, are there certain residues that the model consistently underestimates or overestimates the $\Delta \Delta G$ for? - While Stab-DDG is used for DDG prediction, what kind of performance does it achieve on just "DG", or binding affinity prediction on datasets like SAbDab? Code Of Conduct: Affirmed. Overall Recommendation: 4
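The parameterization these reviews discuss — binding ΔΔG obtained as the folding ΔΔG of the complex minus the folding ΔΔGs of the separated chains, with each folding ΔΔG estimated from an inverse-folding log odds-ratio — can be sketched in a few lines. This is a schematic illustration of the identity, not the authors' implementation; `toy_ll` is a made-up stand-in for a real model's conditional log-likelihood.

```python
def folding_ddg(log_likelihood, structure, wt_seq, mut_seq, alpha=1.0, offset=0.0):
    """Folding-energy proxy: scaled log odds-ratio of the mutant vs.
    wild-type sequence under an inverse folding model."""
    return alpha * (log_likelihood(structure, mut_seq)
                    - log_likelihood(structure, wt_seq)) + offset

def binding_ddg(log_likelihood, complex_struct, monomer_structs, wt_seqs, mut_seqs):
    """Binding ddG from folding ddGs via the thermodynamic identity:
    ddG_bind = ddG_fold(complex) - sum_i ddG_fold(monomer_i)."""
    wt_full, mut_full = "".join(wt_seqs), "".join(mut_seqs)
    complex_term = folding_ddg(log_likelihood, complex_struct, wt_full, mut_full)
    monomer_terms = sum(
        folding_ddg(log_likelihood, s, wt, mut)
        for s, wt, mut in zip(monomer_structs, wt_seqs, mut_seqs)
    )
    return complex_term - monomer_terms

# Toy stand-in for an inverse-folding model's log-likelihood.
def toy_ll(structure, seq):
    weight = {"complex": 1.0, "chain_A": 0.4, "chain_B": 0.3}[structure]
    return -weight * seq.count("G")

ddg = binding_ddg(toy_ll, "complex", ["chain_A", "chain_B"],
                  ["AG", "GG"], ["AA", "GG"])
```

Note that with the default zero offset, swapping wild-type and mutant negates the prediction — the antisymmetry property the reviews raise; a nonzero $\phi_0$ is what breaks it.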
Rebuttal 1: Rebuttal: Thank you for your review. We appreciate your recognition that our use of folding energy data to improve binding prediction is novel and well-motivated. We found your suggestions helpful and address them below.

Baselines: We agree with the reviewer’s comment that the lack of deep learning baselines was a limitation of our submission. This concern was shared by reviewer yFSX, and we address it by including four new baselines. The results show that on our stricter split these additional methods provide similarly poor generalization performance. Please see our reply to yFSX for details.

Reproducibility and hyperparameters: Thank you for pointing out these missing details, which will be included in the revision. In brief, the network architecture and initialization are inherited exactly from ProteinMPNN (1.6M parameters, 3 layers for both encoder and decoder with hidden state size = 128) [1]. We fine-tune on the megascale stability dataset using the ADAM optimizer with learning rate 3e-5 for 150 epochs with a batch size of 50,000 tokens. We fine-tune on SKEMPI using the ADAM optimizer with learning rate 1e-6 for 50 epochs with a batch size of 50,000 tokens. To further ensure reproducibility, our final version will include training and inference code.

Value of alpha: alpha is a scaling term learned from folding stability data, and we will include it in the appendix (negative means destabilizing): $\alpha = 0.24, \phi_0 = -0.19$. We see that the learned bias ($\phi_0$) is negative, indicating that most mutations in the dataset are destabilizing.

Per-residue patterns: we report the RMSE by mutant residue type for single mutants on SKEMPI below.
| ALA | CYS | ASP | GLU | PHE | GLY | HIS | ILE | LYS | LEU | MET | ASN | PRO | GLN | ARG | SER | THR | VAL | TRP | TYR |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 1.33 | 3.54 | 2.96 | 1.89 | 0.99 | 1.52 | 1.37 | 1.83 | 1.52 | 1.92 | 1.45 | 1.45 | 1.09 | 2.07 | 1.73 | 1.01 | 1.20 | 0.96 | 1.00 | 1.22 |

We note that StaB-ddG achieves lower RMSEs for many bulky hydrophobic residues (F, W, Y, M).

Size of ensemble over noise: we performed an ablation experiment and will include a figure for the impact of the ensemble size in the appendix. We report it partially here:

| Ensemble Size | Overall Pearson |
|---------------|-----------------|
| 1 | 0.524 |
| 3 | 0.528 |
| 5 | 0.536 |
| 10 | 0.546 |
| 15 | 0.549 |
| 20 | 0.544 |
| 40 | 0.548 |

We find that ensembling over 10 predictions with random permutation orders and backbone noise leads to near optimal performance. The results in table 4 are obtained with ensemble size = 20.

Monomers for SKEMPI: These are obtained by splitting the PDB files by chain (holo structures). We will update the text to clarify this.

Standard errors and significance testing on per-structure and overall metrics: we’re glad that the reviewer appreciates that we reported standard errors on the per-structure metrics. For the revision we will include cluster-bootstrap standard errors for overall metrics, and (because their interpretation is non-trivial) a discussion of them in the appendix. In brief, we initially chose not to report intervals for overall metrics because it was not clear what the standard error should represent. For each per-structure metric, the standard error represents uncertainty in the expected value of that metric for a collection of ddG measurements for mutants of some new structure typical of those in the test set (i.e. sampled iid from the same distribution).
For each overall metric, we can compute an analogous standard error as the standard deviation of that metric on cluster-bootstrap resample of the test set [2] where on each bootstrap sample we draw full clusters from the test-set clusters with replacement. These standard errors approximate the variability in the overall metrics owing to the choice of structures included in the test set. We will include these standard errors in our revision. Fine-tuning on dG: we thank the reviewer for this suggestion. We have not yet explored predicting direct dG’s because we expect this task to introduce additional complications; for example, the log-likelihood initialization is biased by the length of a sequence such that longer sequences will be predicted to be more stable than shorter sequences. However, we hope to explore this direction in future work. Boltzmann-Alignment and ProteinMPNN-DDG: we will include a discussion in the relevant works section. We hope that addressing these concerns helped strengthen our submission! [1] Dauparas et al. Robust deep learning-based protein sequence design using ProteinMPNN. Science. [2] Cameron, A. Colin and Miller, Douglas L. A Practitioner’s Guide to Cluster-Robust Inference. Journal of Human Resources. --- Rebuttal Comment 1.1: Comment: Thank you for answering my concerns. I think adding some discussion on these per-residue patterns would be worth including in the final version. I think the additional experiments with the other deep learning-based baselines on SKEMPI are also crucial for this work to be convincing. I would also like to point out that Boltzmann-Alignment just made their [code](https://github.com/aim-uofa/BA-DDG) available recently, and comparing with with them would also be beneficial. Overall, I think this is a strong work, and I have raised my score accordingly.
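The cluster-bootstrap procedure described in this rebuttal — resampling whole structure clusters with replacement and recomputing the overall metric on each resample — can be sketched as follows. This is an illustrative implementation under simplifying assumptions (plain overall RMSE as the metric, clusters stored as a dict of (prediction, label) pairs), not the authors' evaluation code.

```python
import math
import random

def overall_rmse(pairs):
    """Overall RMSE over a pooled list of (prediction, label) pairs."""
    return math.sqrt(sum((p - y) ** 2 for p, y in pairs) / len(pairs))

def cluster_bootstrap_se(clusters, metric, n_boot=1000, seed=0):
    """Std. dev. of a metric over bootstrap resamples that draw whole
    clusters (structures) with replacement, per Cameron & Miller."""
    rng = random.Random(seed)
    ids = list(clusters)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(ids) for _ in ids]          # resample clusters
        pooled = [pair for cid in sample for pair in clusters[cid]]
        stats.append(metric(pooled))
    mean = sum(stats) / n_boot
    return math.sqrt(sum((s - mean) ** 2 for s in stats) / (n_boot - 1))

# Toy clusters with hypothetical (prediction, label) pairs.
clusters = {
    "1AK4": [(1.0, 0.8), (0.2, 0.5)],
    "1BRS": [(2.0, 1.0)],
    "3HFM": [(0.0, 0.1), (1.5, 1.2)],
}
se = cluster_bootstrap_se(clusters, overall_rmse)
```

Drawing clusters rather than individual mutants keeps correlated measurements from the same structure together, so the standard error reflects variability in which structures end up in the test set.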
Summary: This paper proposes a novel approach to modeling binding energy by leveraging folding energy and fine-tuning a protein inverse folding model. The proposed STAB-DDG model demonstrates improved performance in predicting binding energy, an area that has often been lacking in experimental results. This method effectively utilizes folding energy data to model binding energy, resulting in better performance compared to baseline models.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, the paper proves the essential equations in context, such as the properties of the proposed model.
Experimental Designs Or Analyses: The paper includes the necessary experimental comparisons with existing models, though the metrics in this field are rather limited.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: See comment part.
Other Strengths And Weaknesses:
1. The paper is well-written, with clear problem definitions.
2. The proposed model can be naturally generalized to double or multiple mutations; however, the authors stated that data on multiple mutations were discarded. I suggest that the authors consider including comparisons with multiple mutation sites.
3. There are additional models, such as MutateEverything (https://arxiv.org/pdf/2310.12979), that are used for predicting stability. Did the authors attempt to compare their model's performance with these models or evaluate its performance in predicting folding energy?
4. The model's performance appears to be less impressive than that of ThermoMPNN. What advantages does fine-tuning ProteinMPNN provide? Did the authors also consider using sequence-based models for predicting folding energy, as the problem is fundamentally a stability prediction task?
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. We appreciate your recognition of the novelty of using folding energy data for binding ddG prediction. We address your comments and questions below.

Including multiple mutation sites: We have now evaluated folding ddG performance on multi-mutants in the megascale test set and report the results below.

| | Pearson | Spearman | RMSE |
|--------------|---------|----------|------|
| Single | 0.73 | 0.71 | 0.74 |
| Multiple | 0.37 | 0.41 | 1.35 |

We find that StaB-ddG performs worse on multi-mutants. We suspect that this is because of the limited number of multi-mutants in the training data. We will include these results in the appendix. We clarify that the results on SKEMPI in our submission include multiple mutations.

Why ProteinMPNN for stability prediction: The reviewer observes that since the problem is fundamentally a stability prediction task, other models that can predict stability (e.g. ThermoMPNN and sequence models) could be used instead of ProteinMPNN. This is indeed the case. In our submission we chose ProteinMPNN for the following reasons:
1. Simplicity of applying to complexes: ProteinMPNN accommodates multi-chain complexes natively, whereas the other methods described are implemented only for monomers. Adapting these alternative methods would require heuristics such as adding a glycine linker or a residue gap that might negatively impact performance.
2. ProteinMPNN is light-weight: Compared to ProteinMPNN, ThermoMPNN includes an additional transfer-learning module, and MutateEverything is built on either ESM2 or AF2, which have >10X more parameters and longer runtime compared to ProteinMPNN for a forward pass.
3. The thermodynamic properties of the resulting predictor: Unlike StaB-ddG, a predictor that uses ThermoMPNN or MutateEverything would be inherently asymmetric and so would not satisfy properties 1 and 2 in our Proposition 1.
4.
Strong zero-shot performance: Inverse folding models (including ProteinMPNN) provide stronger zero-shot stability performance than sequence models, and so provide a stronger starting point for fine-tuning [1]. In our revision we will make clear in the text the reasons for this design choice. Additionally we will include an evaluation of StaB-ddG with ESM-IF model as the stability predictor in our revision to confirm our result that including folding stability data improves binding prediction is not specific to our choice of ProteinMPNN. [1] Notin et al. ProteinGym: large-scale benchmarks for protein fitness prediction and design. Neurips 2023. --- Rebuttal Comment 1.1: Comment: The model's performance on the multiple mutation prediction task is not very good, but given that the authors used a dataset with a limited number of multiple mutations, I find this reason acceptable. Considering the novelty of using folding predictions to estimate binding affinity, I have revised my score.
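The thermodynamic properties invoked in this rebuttal (Proposition 1's antisymmetry and path independence) are easy to state as a unit test on any ΔΔG predictor. The sketch below is hypothetical — both toy predictors are made up for illustration — but the check itself mirrors the stated properties: reversing a mutation should flip the sign, and mutating a→c should equal mutating a→b then b→c.

```python
def check_thermodynamic_consistency(ddg, a, b, c, tol=1e-6):
    """Check two properties expected of a ddG predictor:
    antisymmetry:      ddg(a, b) == -ddg(b, a)
    path independence: ddg(a, c) ==  ddg(a, b) + ddg(b, c)"""
    antisymmetric = abs(ddg(a, b) + ddg(b, a)) < tol
    path_independent = abs(ddg(a, c) - (ddg(a, b) + ddg(b, c))) < tol
    return antisymmetric and path_independent

# Toy predictor built as a difference of per-sequence scores; any
# predictor of this form satisfies both properties by construction.
score = lambda seq: sum(ord(ch) for ch in seq) * 0.01
additive_ddg = lambda wt, mut: score(mut) - score(wt)

# A predictor with a constant offset (analogous to a nonzero phi_0)
# breaks both properties.
biased_ddg = lambda wt, mut: additive_ddg(wt, mut) - 0.19
```

A predictor parameterized directly as a difference of sequence scores (as in StaB-ddG's log odds-ratio form with zero offset) passes this check automatically; bolt-on regression heads generally do not.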
Summary: This paper presents StaB-ddG, a fine-tuning method for predicting mutational effects on protein binding. Specifically, it uses ProteinMPNN, an inverse folding model, to calculate the folding energy of a protein and the binding energy of a protein complex. It then fine-tunes ProteinMPNN on experimental folding and binding DDG data so that the likelihood of ProteinMPNN aligns with experimental binding/folding energy. It also includes a consistency training method to make sure StaB-ddG satisfies symmetry and transitivity. The method is evaluated on the standard SKEMPI benchmark and a case study on TCR mimics.
Claims And Evidence: The claims made in the submission are clear, but they could be more convincing if the authors included more baselines on the SKEMPI benchmark, as there are many papers in this area.
Methods And Evaluation Criteria: The benchmark datasets make sense for the problem. The case study on TCR mimics is particularly interesting. However, the authors should compare with more baselines on the SKEMPI benchmark, including DiffAffinity, Prompt-DDG, ProMIM, Surface-VQMAE, and Light-DDG.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: Results on the SKEMPI benchmark would benefit from comparison to additional baselines. The comparison to existing baselines may not be fair because the proposed method is trained on additional folding energy data.
Supplementary Material: Yes
Relation To Broader Scientific Literature: The key contribution of this paper is including additional training data from folding DDG experiments. It shows that including folding DDG data is helpful for binding DDG prediction. This is an interesting finding.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This paper lacks technical innovation. It is a simple fine-tuning of the ProteinMPNN model, with antisymmetry and path independence constraints.
Other Comments Or Suggestions: All suggestions are included above.
Questions For Authors: Can you upload the filtered and clustered SKEMPI dataset for review? The proposed split is substantially different from standard practice.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments and appreciate that they find it interesting that StaB-ddG allows folding ddG data to improve binding ddG prediction. We hope addressing the comments has helped strengthen our submission.

Baselines: We agree with your comment that the paper will be made more convincing by including more baselines. To address this we have re-trained DiffAffinity, Prompt-DDG, ProMIM, and VQ-MAE on our SKEMPI splits, and will include the results below in table 4:

| Method | PS Pearson | PS Spearman | PS RMSE | Pearson | Spearman | RMSE | AUROC |
|----------------|-----------------|----------------|---------------|--------------|---------------|-----------|-------------|
| DiffAffinity | 0.262 ± 0.039 | 0.247 ± 0.037 | 1.55 ± 0.13 | 0.309 | 0.326 | 1.88 | 0.64 |
| Prompt-DDG | 0.319 ± 0.045 | 0.267 ± 0.044 | 1.41 ± 0.12 | 0.331 | 0.353 | 1.81 | 0.57 |
| ProMIM | 0.191 ± 0.055 | 0.153 ± 0.052 | 1.57 ± 0.12 | 0.345 | 0.347 | 1.85 | 0.60 |
| Surface-VQMAE | 0.371 ± 0.044 | 0.357 ± 0.039 | 1.40 ± 0.10 | 0.445 | 0.446 | 1.59 | 0.67 |
| StaB-ddG | 0.473 ± 0.035 | 0.433 ± 0.037 | 1.52 ± 0.14 | 0.542 | 0.489 | 1.79 | 0.72 |

PS: per structure

We find these methods perform worse than both StaB-ddG and StaB-ddG zero-shot on 5 of the 7 metrics (all but RMSE). We suspect this better performance by StaB-ddG is due to its folding energy-based parameterization, which provides improved generalization. Out-of-distribution generalization is particularly important for our stricter data splitting.

Filtered and clustered SKEMPI splits: Thank you for pointing out that these clusters were not specified in our submission.
We include these below, with each cluster on a separate line: Train: 1AK4 1B2S 1B2U 1B3S 1BRS 1C4Z 1E50 1H9D 1F47 1FFW 1IAR 1KBH 1QAB 1YCS 2AW2 2B42 2C5D 2C5D 4RA0 1EMV 2WPT 2HRK 2J0T 2KSO 2O3B 2VN5 3BP8 3BT1 3EG5 3EQS 3EQY 3F1S 1ACB 1AHW 1BJ1 1CBW 1CHO 1CSE 1CZ8 1DQJ 1DVF 1EAW 1FC2 1FCC 1GC1 1JRH 1MHP 1MLC 1N8O 1N8Z 1NCA 1NMB 1PPF 1R0R 1SMF 1TM1 1UUZ 1VFB 1XGP 1XGQ 1XGR 1XGT 1XGU 1YQV 1YY9 2B2X 2BDN 2FTL 2NY7 2NYY 2NZ9 2SIC 3BDY 3BE1 3BN9 3BX1 3G6D 3HFM 3L5X 3MZW 3N85 3NGB 3NPS 3SE8 3SE9 3SGB 3W2D 4GXU 4JPK 4KRL 4NM8 5C6T 1A22 1BP3 3MZG 3Q8D 3SE3 3SE4 3SZK 3VR6 4HFK 2DVW 3AAA 4HRN 4J2L 4JEU 4K71 4OFY 4PWX 4RS1 1OHZ 4UYP 4UYQ 5M2O 4Y61 5CXB 5CYK 5E6P 5F4E 5K39 Test: 1B41 1FSS 1MAH 1EFN 1GCQ 1C1Y 1GUA 1HE8 1K8R 1LFD 3KUD 4G0N 5TAR 5UFE 5XCO 1KTZ 1REW 2QJ9 2QJA 2QJB 3B4V 3BK3 3HH2 3SEK 1S1Q 1XD3 2OOB 3M62 3M63 1A4Y 1Z7X 4CPA 1JTD 1JTG 2G2U 3QHY 2PCB 2PCC 2AJF 3KBH 3S9D 3SE4 3WWN 4B0M 4CVW 4E6K 1AO7 1BD2 1JCK 1LP9 1MI5 1OGA 1SBB 2AK4 2BNR 2P5E 2PYE 3C60 3HG1 3QDG 3QDJ 3QIB 4FTV 4JFD 4JFE 4JFF 4L3E 4MNQ 4N8V 4OZG 4P23 4P5T 5E9D 4FZA 4NZW 4O27 3SF4 4WND 4X4M 4YFD 4YH7 Our final version will include dataset filtering/splitting scripts.
UGPhysics: A Comprehensive Benchmark for Undergraduate Physics Reasoning with Large Language Models
Accept (poster)
Summary: The paper proposes a new benchmark for physics reasoning by LLMs. They evaluate 31 LLMs on the proposed benchmark and introduce a new method, MARJ, to evaluate the outputs of these 31 LLMs on the benchmark. Overall, they show that OpenAI o1-mini gives the best performance on this benchmark.
Claims And Evidence:
**Claim:** Introduction of UGPhysics, a benchmark to evaluate physics reasoning with LLMs. **Evidence:** Though they do introduce a benchmark, it is not solely about reasoning; it is also about problem-solving abilities.
**Claim:** Introduction of MARJ for evaluating the outputs given by different LLMs on the benchmark. **Evidence:** They certainly do introduce MARJ and give details of the method in Section 3.3.
**Claim:** OpenAI o1 achieves the best score on this benchmark. **Evidence:** Table 5 shows the results of all the models, and OpenAI o1 scores the highest.
Methods And Evaluation Criteria: The evaluation method used doesn’t really support this work: the human evaluation of the proposed method was only checked with 100 examples, which I feel is a very small sample. Apart from this, the model used for evaluation is OpenAI GPT-4o, which is from the same family of models that shows the best performance.
Theoretical Claims: There aren’t any theoretical claims.
Experimental Designs Or Analyses: The authors have considered 31 leading models, but the choice of models is not clear: among closed-source LLMs, only one family of models is considered, and there are other families of closed-source models which I feel should have been considered for better evaluation. Also, models of the Phi family, which are specifically trained on textbook data, are not considered. The selection of models according to the task should be considered, even if it means a smaller number of models.
Supplementary Material: No supplementary material provided.
Relation To Broader Scientific Literature: If the mentioned weaknesses are corrected, the benchmark can be of use for fine-tuning and working with LLMs for science.
Essential References Not Discussed: The related work is well discussed; they mention the existing work related to physics, existing benchmarks related to physics, and also reasoning.
Other Strengths And Weaknesses:
**Strengths**:
- New benchmark for evaluating how well LLMs can work on physics problems.
- Evaluating the performance of 31 different models on the proposed benchmark.
**Weaknesses**:
- The evaluation method is not robust enough, as mentioned above in Methods And Evaluation Criteria.
- The paper writing has a good amount of redundancy, specifically Section 3.1, which is mentioned multiple times, as is the evaluation method.
- Until line 81 there is no mention of the language of the data, and then there is a mention of translating into English.
- In line 87 they claim rigorous data leakage checks, but the methods used to check data leakage were not that rigorous. More details on checking data leakage should be provided; for example, report the number of times each problem was run, and only if the model never reproduced the question's content in its output could one say there is no data leakage.
- In line 176, there is no reference to the books used for data creation.
- In the appendix, more examples would be better, including one complete example from textbook, to translation, to LLM output, to its evaluation against the ground truth.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear wqeU, Thank you for your time and effort in reviewing our work! We will reply to your questions one by one as follows:

> the human evaluation of the proposed method was only checked with 100 examples, which I feel is a very small sample, apart from this the model used for evaluation is Open AI-4o which in turn is the same family of models that shows the best performance.

Previous studies [1, 2, 3] have shown that using similar or even smaller sample sizes (100 [1], 80 [2], and 50 [3]) is sufficient for human evaluation tasks, even for more subjective tasks such as text summarization [3]. The second-best LLMs (QWQ and DS-Distill) are not within the same family as GPT-4o. Additionally, OpenAI o1-mini is a Long CoT LLM, which is quite different from GPT-4o. [1] has also shown that LLMs-as-judge is a valid approach even if the LLMs are within the same family.

> The authors have considered 31 leading models, but the choice of models is not clear, as in the closed source LLM only one family of models are considered, there are other families of closed source models which I feel should have been considered for better evaluation. Also models of Phi family which are specifically trained on text book data are not considered. The selection of models according to the task should be considered even if it is a lesser number of models.

Thank you for your suggestion to include Phi. We will **add the results of Phi-4 as follows**:

| Mec. and Ther. (EN) | Mec. and Ther. (ZH) | Elec. (EN) | Elec. (ZH) | Modern Physics (EN) | Modern Physics (ZH) | Overall (EN) | Overall (ZH) | Average |
|---------------------|---------------------|------------|------------|----------------------|---------------------|--------------|--------------|----------|
| 0.3413 | 0.3248 | 0.3651 | 0.2987 | 0.4045 | 0.3586 | 0.3716 | 0.3344 | 0.3530 |

From the results, Phi-4 is a very strong fast-thinking LLM. We have listed all the details of the chosen LLMs in Appendix B.1.
We have covered OpenAI, Qwen, Llama, DeepSeek, Mistral, Skywork, Yi, Numina, and OpenMath2, which we believe are very diverse. It is quite expensive to cover more closed-source LLMs, especially the Claude series, and we could not afford to do so. > The paper's writing has a good amount of redundancy, specifically Section 3.1, which was mentioned multiple times, and also the description of the evaluation method. Thank you for your suggestion; we will **change the name of Section 3.1** to “UGPhysics and MARJ Overview”. > In line 87 they say rigorous data leakage detection, but the methods to check data leakage were not that rigorous. More details on checking the data leakage should be provided, such as the number of times each problem was run and whether, in every case, the model failed to reproduce the question's content; only then could one say there is no data leakage. Thank you for pointing this out. We will **change our wording about "rigorous"**. This data leakage detection [4] is widely adopted [1, 5] and is believed to be useful to some extent. All our settings align with [5]. Although this method is not perfect, we believe that **conducting such detection is a merit rather than a shortcoming**. > In line 176, there is no reference to the books used for data creation. Thank you for your question! Listing the books risks revealing the institution of several authors, so we will provide the links to these books at a later stage (if possible). > In the appendix, more examples, plus one complete worked example from textbook through translation, LLM output, and evaluation against the ground truth, would be better. Thank you for your comments. We will consider adding more examples to the appendix and add a complete worked example as well. Thank you again for your effort and suggestions. We hope our rebuttal has addressed your concerns. Feel free to discuss if you have any further questions or comments. 
Sincerely, Authors [1] Gao et al., 2024; Omni-Math: A Universal Olympiad-Level Mathematics Benchmark for Large Language Models. [2] Shaib et al., 2024; How Much Annotation is Needed to Compare Summarization Models. [3] Zheng et al., 2023; Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. [4] Xu et al., 2024; Benchmarking Benchmark Leakage in Large Language Models. [5] Huang et al., 2024; OlympicArena: Benchmarking Multi-Discipline Cognitive Reasoning for Superintelligent AI. --- Rebuttal Comment 1.1: Comment: Thank you for the considerations, these additions would definitely help strengthen the paper, after going through the rebuttal, I would like to increase my score.
Summary: The paper introduces a comprehensive bilingual benchmark UGPhysics for evaluating undergraduate physics reasoning, featuring 5520 questions across 13 subjects. The benchmark also comes with a proposed evaluation pipeline that combines rule-based and model-based methods for improved accuracy. Notably, the study finds that even top-performing LLMs achieve less than 50% accuracy on this proposed benchmark, highlighting a critical need for improvement in LLM capabilities for physics reasoning. ## update after rebuttal I have read the author response. I'll keep my score. Claims And Evidence: The proposed benchmark is well-curated and manually reviewed. Methods And Evaluation Criteria: The proposed evaluation pipeline MARJ makes sense to me. However, one concern I have is that in 5.2 Reliability of Evaluation, only 100 random test examples are being examined and it's not clear what are the answer types of those questions. Do they cover all the answer types, or only some of the seven answer types? Theoretical Claims: The paper does not have theoretical claims. Experimental Designs Or Analyses: The experiments that examine 31 leading LLMs' performance on UGPhysics are well executed. Supplementary Material: Yes, I have reviewed the Appendix. Relation To Broader Scientific Literature: While mathematical reasoning has numerous benchmarks, AI for physics remains underexplored, lacking challenging evaluations and diverse question types. This paper addresses this gap by proposing a comprehensive physics benchmark that surpasses previous ones in size, difficulty, and subject coverage. Essential References Not Discussed: The paper covers related work well. Other Strengths And Weaknesses: Please see other sections. Other Comments Or Suggestions: It would be nice to also have a table that lists the information of how many questions are there for each answer type. 
Questions For Authors: - In lines 372-382: "LLMs show varying performance across different subjects, although the disparity is relatively small..." I don't know where the numbers mentioned are coming from (e.g., 27.0%, 22.9%, 13.8%, etc.); I don't see those numbers in Figure 2(a). Could you clarify this? - For o1-like models, I would like to know what percentage of the inference generations didn't terminate due to the length limit (i.e., 8192, as mentioned in the Appendix). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer SPQT, Thank you for your valuable suggestions! We will reply to your questions one by one as follows: > However, one concern I have is that in 5.2 Reliability of Evaluation, only 100 random test examples are being examined and it's not clear what are the answer types of those questions. Do they cover all the answer types, or only some of the seven answer types? Thank you for your question! Previous studies [1, 2, 3] have shown that using similar or even smaller sample sizes (100 [1], 80 [2], and 50 [3]) is sufficient for human evaluation tasks, even for more subjective tasks such as text summarization [3]. Regarding the types of answers, after examining 100 randomly selected test examples, we observed that six of the seven answer types are represented; only True/False (TF) is absent. We believe this is acceptable because the evaluation of TF questions is relatively straightforward. > In lines 372-382: "LLMs show varying performance across different subjects, although the disparity is relatively small..." I don't know where the numbers mentioned are coming from (e.g., 27.0%, 22.9%, 13.8%, etc.); I don't see those numbers in Figure 2(a). Could you clarify this? Thank you for pointing this out! We apologize that we mistakenly put the wrong numbers after the update of Figure 2(a). We will **correct these numbers accordingly**: "As shown in Figure 2a, the average overall accuracy of eight strong LLMs reveals that they perform particularly well in Semiconductor Physics (31.0%) and Atomic Physics (26.7%). In contrast, their performance is slightly lower in Theoretical Mechanics (16.5%). Additionally, LLMs show minor performance variation across six out of 13 subjects, with accuracies hovering around 20%." > For o1-like models, I would like to know what percentage of the inference generations didn't terminate due to the length limit (i.e., 8192, as mentioned in the Appendix). Thank you for your insightful question! 
In our experiments, we found that most o1-mini generations end within 8192 tokens. From the analysis in Section 5.3, the error incurred by the length limit is around 5% of all failure cases (approximately 2.5% = 5% * 50% in total). After checking the other open-source o1-like LLMs, we find the percentage is much higher than for o1-mini. We will **add the following table of this percentage** (in %): | Models | 8192 | 16384 | |-------------------------------|-------|--------| | DeepSeek-R1-Distill-Qwen-32B | 38.55 | 34.47 | | DeepSeek-R1-Distill-Qwen-7B | 44.40 | 38.90 | | DeepSeek-R1-Distill-Llama-70B | 19.16 | 12.25 | | DeepSeek-R1-Distill-Llama-8B | 52.37 | 43.80 | | o1-mini-2024-09-12 | 2.01 | - | | QwQ-32B-Preview | 19.01 | 8.54 | We believe this is also a gap between open-source o1-like LLMs and the OpenAI o1 series. We will **add a paragraph in Section 5.1 to discuss this as follows**: "Open-source o1-like LLMs typically consume more tokens compared to OpenAI's o1-mini when solving problems in UGPhysics. When the maximum length of generation is set to 8192 tokens, only around 2% of OpenAI o1-mini’s generations exceed this length limit. In contrast, a significantly higher proportion of inference generations for other open-source o1-like LLMs fail to terminate within the specified limit, as shown in the previous table. To assess whether increasing the maximum generation length improves the performance of these o1-like LLMs, we conducted additional experiments by extending the token limit to 16384. The results, presented in the following table, demonstrate that doubling the maximum number of generation tokens only slightly improves the performance of o1-like LLMs. Additionally, we report the proportion of cases where the generation did not terminate due to the extended length limit of 16384 tokens. These findings suggest that addressing the redundancy in token consumption of o1-like LLMs [4] during reasoning remains an important direction for further research." 
| Models /Acc (in %) | 8192 | 16384 | |-------------------------------|-------|-------| | DeepSeek-R1-Distill-Qwen-7B | 24.64 | 24.86 | | DeepSeek-R1-Distill-Llama-8B | 13.11 | 14.51 | | QwQ-32B-Preview | 37.34 | 38.90 | | DeepSeek-R1-Distill-Qwen-32B | 31.93 | 32.21 | | DeepSeek-R1-Distill-Llama-70B | 40.17 | 41.77 | Thank you once again for your insightful comments to improve the quality of our work. Feel free to discuss if you have any further questions or comments. Sincerely, Authors [1] Gao et al., 2024; Omni-Math: A Universal Olympiad-Level Mathematics Benchmark for Large Language Models. [2] Shaib et al., 2024; How Much Annotation is Needed to Compare Summarization Models. [3] Zheng et al., 2023; Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. [4] Chen et al., 2024; Do Not Think That Much for 2+3=? On the Overthinking of o1-like LLMs. --- Rebuttal Comment 1.1: Comment: Thanks for your response.
Summary: This paper proposes a new benchmark that targets undergraduate-level physics prompts. The prompts are mined from physics textbooks via a rigorous processing pipeline. A two-stage eval protocol is designed for this benchmark, in which a rule-based metric is used first, followed by an LLM (GPT-4o) to double-check those marked as wrong. The authors compare many LLMs on this benchmark, and the best one's score is less than 50 out of 100, so it could be a good benchmark for reasoning models for some time. Claims And Evidence: This is a new benchmark paper, so I have less concern on this. Methods And Evaluation Criteria: - The method of creating this benchmark is reasonable. All questions are grounded in physics textbooks and the extracted LaTeX format goes through a manual check, so the quality should be good. - Eval for STEM questions is not easy since the gold answer could be freeform (unlike math, which is more formalized), so the proposed two-stage eval protocol is a reasonable approach, though I'd hope there is a "STEM sympy" someday. Theoretical Claims: No theoretical claims in this paper. Experimental Designs Or Analyses: The experiments mainly compare LLMs on the proposed benchmark, where I don't find any clear concern. Supplementary Material: I mainly checked the eval protocol (B.4. MARJ Details) since this is a key part of a benchmark. Relation To Broader Scientific Literature: - This paper mainly targets pushing the frontier of LLM reasoning models. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Weaknesses: - It would be great if the authors could provide an analysis of how robust the MARJ eval method is, e.g., how often does it make a wrong judgment? In what scenarios can LLMs not correctly compare the given solution with the gold answer? - It would be great if the authors could run some stats on the complexity / difficulty of this benchmark, e.g., for o1-like reasoning models, how many tokens do they need to solve a problem on average? 
- I'm curious how the most frontier model, e.g., o3-mini, performs on this benchmark, since this basically measures the lifecycle of this benchmark. (I understand there is lots of overhead to run this, especially if the authors are from academia, so I totally understand if the authors don't give this in the rebuttal.) Other Comments Or Suggestions: Please check my previous section. Questions For Authors: - Regarding "The UGPhysics is sourced from several undergraduate-level physics exercise books." What exercise books are used as the data source? - Why use math-specialized LLMs for this physics benchmark? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 3YX8, Thank you for your helpful comments! We will reply to your questions one by one as follows: > It would be great if the authors could provide an analysis of how robust the MARJ eval method is, e.g., how often does it make a wrong judgment? In what scenarios can LLMs not correctly compare the given solution with the gold answer? Thank you for your suggestion! We **have conducted such an analysis in Section 5.2**: “We find that our MARJ evaluation achieves an accuracy of 98% when compared to human annotations.” During our manual inspection, we observed that our MARJ sometimes still fails to correctly evaluate answers that are equivalent in physics but require several steps of conversion. For instance, consider the ground-truth answer $RT/\mu$ and the model-generated answer $p/\rho$. While both are physically equivalent ($p/\rho = pV/m = nRT/m = RT/\mu$, using the formula $PV = nRT$ and the definition $\mu = m/n$), our MARJ fails to recognize the equivalence due to the need for multi-step conversion. > It would be great if the authors could run some stats on the complexity / difficulty of this benchmark, e.g., for o1-like reasoning models, how many tokens do they need to solve a problem on average? Thank you for your comment. In fact, we have analyzed the difficulty of UGPhysics through “physics reasoning skills” in Section 5.1 (Figure 2b). As suggested, we will also **add the stats of the tokens** DeepSeek-Distill-Qwen-32B used, on average, to solve the problems: | Dataset | Avg. Tokens | |-------------|--------| | UGPhysics | 4081 | | MATH | 3079 | In this table, we also include the average number of tokens DeepSeek-Distill-Qwen-32B spent to solve MATH [1] for reference. In addition, the average number of tokens that DeepSeek-R1 spent to solve problems in UGPhysics is 5555. > I'm curious how the most frontier model, e.g., o3-mini, performs on this benchmark, since this basically measures the lifecycle of this benchmark. 
(I understand there is lots of overhead to run this, especially if the authors are from academia, so I totally understand if the authors don't give this in the rebuttal.) Thank you for your valuable question and understanding! There is indeed a lot of overhead to evaluate o3-mini. We will **add the results of DeepSeek-R1**, whose performance is catching up with o3-mini high (90.8% vs. 86.9% on MMLU). (DeepSeek-R1 is much cheaper.) | Mec. and Ther. (EN) | Mec. and Ther. (ZH) | Elec. (EN) | Elec. (ZH) | Modern Physics (EN) | Modern Physics (ZH) | Overall (EN) | Overall (ZH) | Average | |---------------------|---------------------|------------|------------|----------------------|---------------------|--------------|--------------|----------| | 0.5549 | 0.5667 | 0.5450 | 0.4839 | 0.5990 | 0.5729 | 0.5716 | 0.5553 | 0.5634 | From the table, the overall accuracy is 56.34%, which is higher than o1-mini, as expected, and there is still much room for improvement. > Regarding "The UGPhysics is sourced from several undergraduate-level physics exercise books." What exercise books are used as the data source? Thank you for your question! Listing the books risks revealing the institution of several authors, so we will provide the links to these books at a later stage (if possible). > Why use math-specialized LLMs for this physics benchmark? As we mentioned in L105-108: “The inclusion of math LLMs aims to assess the extent to which training on specialized math corpus contributes to physics reasoning.” From experiments, we find that “math-specialized LLMs yield only minor improvements over their general-purpose counterparts in UGPhysics, suggesting the compulsion for more high-quality physics corpora.” (L119–L122). We believe Reviewer zXs9 gives an interesting discussion about this in the "Experimental Designs Or Analyses" section of their review. We would like to thank you once again for your useful suggestions to improve the quality of our manuscript. 
Feel free to discuss if you have any further questions or comments. Sincerely, Authors [1] Hendrycks et al., 2021; Measuring Mathematical Problem Solving with the MATH Dataset.
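As a reading aid for this thread, the two-stage evaluation protocol it discusses (a cheap rule-based check first, escalating only answers marked wrong to an LLM judge) can be sketched roughly as follows. This is a hedged illustration, not the authors' implementation: `model_judge` is a hypothetical stand-in for the GPT-4o call, and the rule stage here is just exact/numeric matching.

```python
from typing import Callable


def marj_judge(prediction: str, gold: str,
               model_judge: Callable[[str, str], bool]) -> bool:
    """Two-stage rule-based + model-assisted judgment (illustrative sketch).

    Stage 1: rule-based check (normalized exact match, then numeric match).
    Stage 2: only answers the rules mark as wrong go to the LLM judge.
    """
    norm = lambda s: s.strip().lower().replace(" ", "")
    if norm(prediction) == norm(gold):
        return True  # rule-based exact match
    try:
        # numeric match with a small relative tolerance
        p, g = float(prediction), float(gold)
        if abs(p - g) <= 1e-4 * max(1.0, abs(g)):
            return True
    except ValueError:
        pass
    # Stage 2: free-form or symbolic answers fall back to the model judge,
    # which would handle cases like RT/mu vs. p/rho discussed above.
    return model_judge(prediction, gold)
```

The design keeps the expensive model call off the hot path: most correct answers are accepted by the rules, and the judge only sees the ambiguous remainder.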
Summary: This paper introduces UGPhysics, a large-scale, bilingual benchmark specifically designed for evaluating undergraduate-level physics reasoning with large language models. UGPhysics comprises 5,520 distinct physics problems (11,040 when including both English and Chinese versions) spanning 13 subjects and 59 topics. In addition to the dataset, the paper proposes a novel evaluation framework called Model-Assistant Rule-based Judgment (MARJ) that combines rule-based precision with model-based flexibility to assess complex, multi-step physics problem solutions. Extensive experiments across 31 LLMs reveal that even state-of-the-art models, such as OpenAI-o1-mini, achieve only about 50% accuracy, underscoring the challenges posed by physics reasoning compared to math-focused tasks. Claims And Evidence: The paper provides sufficient empirical evidence of the claims made. The paper's main claim, that the physics reasoning abilities of LLMs have not received sufficient attention and that LLMs therefore struggle on the task, is well substantiated by the observation that the best performance on the proposed benchmark is 49.8%, while several math reasoning benchmarks have been saturated. Methods And Evaluation Criteria: The methodology, as described, is generally speaking quite sound. Prior works on the mathematical reasoning evaluation of LLMs (such as [1]) use evaluation techniques similar to the MARJ method described in the paper (i.e., a combination of rule-based checks + LLM-as-a-judge), without explicitly describing the procedure. Regardless, I believe that stating the use of, and explicitly describing, the procedure is a valuable contribution. 
[1] Didolkar et al., 2024; Metacognitive Capabilities of LLMs: An Exploration in Mathematical Problem Solving Theoretical Claims: The paper does not make any theoretical claims Experimental Designs Or Analyses: The authors provide an elaborate set of experiments and discussion of their results on the benchmarks. The observation that math-specialised LLMs do not necessarily perform better on physics as compared to their general counterparts is interesting, showing that finetuning on specific maths problems does not necessarily lead to an improvement in general reasoning capabilities. At the same time, the fact that o1-like models, which are post-trained predominantly on math / code reasoning data, perform the best suggests that RL-based post-training can lead to improvements in general reasoning capabilities of models. Supplementary Material: I have gone through the Appendix of the paper. No additional supplementary material has been provided. Relation To Broader Scientific Literature: This paper falls within the vast literature on LLM evaluation - specifically evaluating the physics reasoning capabilities of LLMs. While there exist multiple evaluation benchmarks for physics, most of them are either too simple for existing LLMs, do not require elaborate CoTs, are limited in size, or do not cover a wide range of topics. The value of this work stems from its elaborate subject categorization, support for two languages, difficulty level, larger size, and an elaborate evaluation pipeline. Essential References Not Discussed: TheoremQA [2] also contains some physics questions but has not been discussed in the paper. [2] Chen et al., 2024; TheoremQA: A Theorem Driven Question-Answering Dataset Other Strengths And Weaknesses: All strengths and weaknesses of the paper have been discussed in other sections. Other Comments Or Suggestions: Including a comparison of performances of models with some standard physics reasoning benchmarks (such as the MMLU Physics subset, PhysicsQA, etc.) 
to that on UGPhysics would help give the reader a better idea of the overall difficulty of the benchmark as compared to existing benchmarks. Questions For Authors: The authors mention that the initial questions are in Chinese and are then translated to English. How is this translation done? If it is done using LLMs / some other machine translation methods, are any measures undertaken in order to ensure a high quality of the translations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer zXs9, Thank you for your constructive feedback! We will reply to your questions one by one as follows: > Prior works on the mathematical reasoning evaluation of LLMs (such as [1]) use evaluation techniques similar to the MARJ method described in the paper (i.e., a combination of rule-based checks + LLM-as-a-judge), without explicitly describing the procedure. Regardless, I believe that stating the use of, and explicitly describing, the procedure is a valuable contribution. Thank you for pointing out this relevant paper and acknowledging our contribution. After reading [1], we find that they employ model-based evaluation to obtain additional metrics from three angles, which differs slightly from our setting. We will **include the following sentence in the "Related Work" section** (in "Answer Judgment") for discussion: "Additionally, several works [1, 2] utilize model-based evaluation to obtain additional metrics for assessing effectiveness." > TheoremQA also contains some physics questions but has not been discussed in the paper. Thank you for pointing this out. We will **add the following line to Table 1**: | Dataset | Level | # Test | # UG | Subjects | # Ans. Types | Language | Eval. | Leak. Det. | |-------------|-------|--------|------|----------|--------------|----------|-------|------------| | TheoremQA | 5 | 131 | 131 | – | 5 | EN | Rule | No | > Including a comparison of performances of models with some standard physics reasoning benchmarks (such as the MMLU Physics subset, PhysicsQA, etc.) to that on UGPhysics would help give the reader a better idea of the overall difficulty of the benchmark as compared to existing benchmarks. Thank you for your suggestion! 
We will **add this comparison for the GPT-4o model as follows** (we also include MATH for reference) and will include a figure to illustrate this table in our manuscript as well: | Dataset | Performance | |------------------|-------------------| | Ours | 38.67% | | MMLU (college physics) | 68.6% | | MMLU (high school physics) | 72.8% | | MMLU (conceptual physics) | 92.3% | | MMLU-pro | 75.06% | | OlympicArena | 55.92% | | GPQA | 53.6% | | MATH | 76.6% | > The authors mention that the initial questions are in Chinese and are then translated to English. How is this translation done? If it is done using LLMs / some other machine translation methods, are any measures undertaken in order to ensure a high quality of the translations? Following [3, 4], we leverage LLMs (specifically GPT-4o-2024-08-06) for translation. As demonstrated in [3] (using GPT-4 for translation) and [4] (using GPT-3.5-turbo), the quality of translation produced by LLMs is high. Since we utilize a significantly more powerful model, the translation quality is expected to be even higher. Furthermore, during the initial stages of translation, we manually reviewed several examples (typically 5-20) for each subject (particularly checking whether the model can handle physics-specialized terminology). This process confirmed that GPT-4o excels at translating them. Thank you once again for your valuable suggestions to improve the quality of our work. If you have any further questions or feedback, please do not hesitate to reach out to us. Sincerely, Authors [1] Didolkar et al., 2024; Metacognitive Capabilities of LLMs: An Exploration in Mathematical Problem Solving [2] Huang et al., 2024; Olympicarena: Benchmarking Multi-Discipline Cognitive Reasoning for Superintelligent AI. [3] Liu et al., 2024; Mathbench: Evaluating the Theory and Application Proficiency of LLMs with a Hierarchical Mathematics Benchmark. [4] Tang et al., 2024; Mathscale: Scaling Instruction Tuning for Mathematical Reasoning. 
--- Rebuttal Comment 1.1: Comment: Thank you for the reply and clarifications. I would like to maintain my score.
HashAttention: Semantic Sparsity for Faster Inference
Accept (poster)
Summary: This paper proposes a simple, effective, and plug-and-play method for accelerating inference in autoregressive transformers via top-k attention. The authors propose to accelerate the top-k operation by learning mappings that encode queries and keys in Hamming space, in a way that the ranking induced by the negative Hamming distance of encoded queries and keys follows the ranking induced by the original exp(<q, k_i>) * ||v_i||. By efficiently identifying the top keys using bit operations, HashAttention reduces attention computation. The authors demonstrate impressive sparsity levels (up to 32×) with minimal quality degradation, and significant latency improvements (up to 4.3× in GPT-FAST and 2.54× in FlashDecode) with modest auxiliary memory requirements (32 bits per token). Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: The proposed method is sufficiently novel. Essential References Not Discussed: none Other Strengths And Weaknesses: ## Strengths 1. HashAttention consistently outperforms existing sparse attention methods across a wide range of benchmarks. Even when trained on generic data, it achieves 16× sparsity with minimal quality loss on LongBench and RULER benchmarks. With task-specific fine-tuning, sparsity can be pushed to 32× for certain tasks. 2. The use of bit operations for Hamming distance computation bitcount(bitwise_xor(phi(q), phi(k))) is computationally efficient. The approach shows impressive latency improvements in real-world inference systems like GPT-FAST and FlashDecode. 3. At just 32 bits of auxiliary memory per token, HashAttention is significantly more memory-efficient than competitive approaches like Double Sparsity, which require more auxiliary memory to achieve similar quality. 4. 
The paper presents comprehensive evaluations across multiple datasets, models, and metrics, comparing against a range of state-of-the-art baselines. ## Weaknesses 1. Unlike some heuristic-based approaches, HashAttention requires training on task data. While this enables better performance, it introduces additional complexity for deployment. However, this is a minor point. 2. The paper could more thoroughly discuss how hyperparameters like bits per embedding are selected and their impact on performance. It would help to include code in the appendix. 3. The authors acknowledge that for shorter contexts (< 8K tokens), the overhead of computing bit signatures can outweigh the benefits of sparse attention. This limitation should be more prominently discussed - does this overhead become more prominent with quantized KV cache? 4. The method formulation could be cleaner. For example: z = sigmoid(FF(x)), phi(x) = (z.round() - z).detach() + z. Other Comments Or Suggestions: . Questions For Authors: 1. Can you include hyperparameters? How much training was required for learning on OpenWebText? Are the LLM weights kept frozen during the learning of hash functions? 2. What's the performance overhead of computing the Hamming distance across all tokens, and at what point does this become a bottleneck compared to full attention computation? 3. Recent papers report that different layers/heads can benefit from different sparsity levels (e.g., middle layers allow higher sparsity). Have you explored variable sparsity across different layers/heads? 4. How does HashAttention perform with quantized KV cache implementations? Does the additional auxiliary memory requirement become more significant in these settings? Code Of Conduct: Affirmed. Overall Recommendation: 4
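The selection primitive this review highlights, `bitcount(bitwise_xor(phi(q), phi(k)))`, can be illustrated with a minimal pure-Python sketch. This is only a functional model under toy assumptions (8-bit signatures, a naive sort), not the paper's kernel, which uses learned 32-bit signatures and GPU popcount instructions:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two packed bit signatures: XOR then popcount."""
    return bin(a ^ b).count("1")


def topk_by_signature(q_sig: int, key_sigs: list[int], k: int) -> list[int]:
    """Indices of the k keys whose signatures are closest to the query's.

    Mirrors the selection rule described above: rank keys by Hamming
    distance in the learned bit space, then dense attention would run
    only over this selected subset of the KV cache.
    """
    order = sorted(range(len(key_sigs)), key=lambda i: hamming(q_sig, key_sigs[i]))
    return order[:k]


# Toy 8-bit signatures (the paper stores 32 bits per token).
q = 0b10110010
keys = [0b10110011, 0b01001101, 0b10110010, 0b11110000]
print(topk_by_signature(q, keys, 2))  # → [2, 0]: the exact and near-exact matches
```

Because the signatures are packed integers, the distance computation reduces to one XOR and one popcount per key, which is what makes the ranking step cheap relative to full inner products.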
Rebuttal 1: Rebuttal: We thank the reviewer for supporting our paper. Please find responses to the questions and comments below. Kindly let us know if you have any additional follow-up questions. 1. **Hyperparameters for HashAttention Training / Frozen backbone LLM** Yes, the LLM weights are kept frozen during training; only the HashAttention mappings are trained locally and independently for each attention head. Our training setup for 32K-length HashAttention on OpenWebText is as follows: | | | |----------------------------|-------------| | Openwebtext samples | ~128K | | 32K length samples | 3750 | | # queries per sample | 8192 | | # total queries in dataset | 30M | | optimizer | Adam(0.001) | 2. **Performance overhead of computing the Hamming distance vs. full computation** Kindly refer to response 2 (Reviewer HrF1) for the table. We make the following observations: A. (Sequence dimension) The improvement from using Hamming distances over inner products stays consistent with increasing tokens in the KV cache, as expected. B. (Bitwidth dimension) We see that we can go up to 512-bit signatures while maintaining advantageous latency for Hamming distance computation. In our experiments, we use 32 bits for HashAttention. With such an efficient distance computation, the Hamming distance computation is far from being the bottleneck of HashAttention. 3. **Different sparsity levels for different layers / heads** We have not explored this dimension. However, we expect such improvements to orthogonally improve all the methods. We leave this for future work. 4. **HashAttention performance with quantized KV cache implementations** Quantization of KV caches is a way to improve the overall memory footprint of the cache. Since, in such a setup, HashAttention would still act upon full-precision vectors (we have access to those while computing signatures), the top-k accuracy/sparsity tradeoff would remain the same. 
Quantization of KV caches adds another layer of approximation to attention computation. We believe it affects all the sparse attention methods in the same manner. The absolute memory footprint of HashAttention remains the same. Auxiliary memory would, of course, be higher in relative terms. However, this is true for all sparse attention baselines. **Other discussion:** 1. **Code**: The anonymous repository for our code used for running long benchmarks is here: https://anonymous.4open.science/r/HashAttention_ICML2025/README.md We plan to release the full code upon acceptance. 2. **Choosing #bits**. In our experiments, we use 32 bits since it works reasonably well. As expected, we find that using more bits helps with the quality of top-k prediction. | Dimension | Cross Entropy Loss @ 250 x 32K samples | |-----------|--------------------| | 16 | 0.243 | | 32 | 0.212 | | 64 | 0.204 | We will add more results and discussion on these hyperparameters in the final version. 3. **Overhead of signatures for < 8K contexts; does overhead become more prominent with quantized KV?** We will highlight this overhead in the limitations section of the paper for clarity. We do not believe the overhead increases with quantized KV -- first, nothing changes in the signature computation even with quantized KV since we have access to the full KV when signatures are created. Second, using quantized KV adds a dequantization step to the attention computation, which would reduce the relative overhead of signature computation. 4. **Formulation**: We will improve the formulation in the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the response - I am maintaining my score of Accept.
Summary: Dynamic sparse attention has been widely explored in long-context scenarios. This paper proposes a learned hash function-based token-level dynamic sparse loading method. Specifically, it formulates the sparse attention top-K problem as a recommendation task, utilizing a learnable hash function to predict top-K tokens. The learned hash function is implemented as a three-layer MLP, generating a 32-bit hash score. During decoding, the method computes the query hash score, selects top-K tokens, and performs token-level gather sparse attention. The approach is evaluated on LLaMA-3.1-8B and Mistral-v0.3-7B using LongBench and RULER, with end-to-end and kernel-level latency comparisons against FlashDecoding and GPT-Fast. Results show that in long-context scenarios, the proposed method achieves up to a 4.3× speedup over GPT-Fast. Claims And Evidence: The paper's claims are fairly accurate, as it is the first work to leverage a learnable hash function to address the sparse attention top-K problem. Methods And Evaluation Criteria: The method design is reasonable. Unlike Quest-based approaches, which rely on block-wise min-max values as centroids, this method learns a hash index, which is likely to provide better representations and mitigate OOD issues. The main concern is the generalization capability of the learned hash index. Additionally, the choice of a hash index is justified: the hashing process is GPU-friendly, and storing hash results incurs minimal cost. Theoretical Claims: The paper also provides an error analysis for sparse attention during decoding and reformulates the MIPS problem using cos-sin transformations with an addition operation. However, Lemma 4.2 has already been widely adopted in the vector retrieval community. Experimental Designs Or Analyses: 1. The paper lacks a comparison with training-free hash function methods, such as MagicPIG [1], which also employs LSH for sparse attention. 
Evaluating against such baselines would help clarify the advantages and limitations of the proposed approach.

2. Additionally, the token-level approach may not be GPU-friendly, yet there is no latency breakdown provided for the gather sparse attention stage. An ablation study on the performance gain from token-level hash retrieval is also missing.

3. The study lacks OOD experiments, as it is trained on a retrieval dataset but is not tested on reasoning or other domain-specific benchmarks. Evaluating generalization across different domains would strengthen the analysis.

4. Finally, there is no discussion or analysis on whether the learned hash index helps mitigate query-key distribution shift issues in sparse attention top-K retrieval. Would you be able to provide additional experiments or insights on this aspect?

[1] MagicPIG: LSH Sampling for Efficient LLM Generation. ICLR 2025.

Supplementary Material: The paper provides baseline details, theoretical derivations, and additional experiments in the appendix, which strengthen its analysis.

Relation To Broader Scientific Literature: Prior work has explored sparse attention methods for long-context LLMs, leveraging attention sparsity through approaches such as StreamingLLM and H2O. More recently, RetrievalAttention has introduced vector retrieval to optimize sparse attention computation. Building on these efforts, this paper proposes a learnable hash index to make sparse attention top-K selection more GPU-friendly.

Essential References Not Discussed: MagicPIG, a concurrent study, also utilizes LSH for sparse attention. Could you provide a discussion comparing the advantages and limitations of these two approaches?
Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: Additionally, I noticed an issue in #43, where 128GB should be corrected to 32GB, since LLaMA-3.1-8B uses GQA with a group number of 8.

Questions For Authors:
1. Do you have an ablation study using only a training-free hash index?
2. Do you provide a latency breakdown for gather sparse attention?
3. Have you conducted any OOD analysis between the training and test domains?
4. Have you analyzed whether the learned hash index mitigates OOD issues in sparse attention top-K retrieval?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
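The pipeline this review summarizes — hash keys and queries into short bit signatures, rank cached tokens by Hamming distance to the query's signature, then attend only over the top-K — can be sketched in a few lines. This is an illustrative NumPy mock-up, not the paper's implementation: the fixed random projection stands in for the learned three-layer MLP, and all names (`sign_bits`, `topk_by_hamming`) are hypothetical.

```python
import numpy as np

def sign_bits(x, proj):
    """Map vectors to {0, 1} bit signatures. HashAttention learns this
    mapping (a three-layer MLP); a fixed random projection is used here
    purely as a stand-in."""
    return (x @ proj > 0).astype(np.uint8)

def topk_by_hamming(q_sig, key_sigs, k):
    """SCORE + TOP-K: rank cached tokens by Hamming distance between
    their bit signatures and the query's signature."""
    dists = np.count_nonzero(key_sigs != q_sig, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
d, n_bits, n_keys = 16, 32, 100
proj = rng.standard_normal((d, n_bits))

keys = rng.standard_normal((n_keys, d))
query = keys[7] + 0.01 * rng.standard_normal(d)  # query close to key 7

key_sigs = sign_bits(keys, proj)        # 32 bits of auxiliary memory per token
q_sig = sign_bits(query[None, :], proj)[0]

pivotal = topk_by_hamming(q_sig, key_sigs, k=5)
# attention would then be computed over keys[pivotal] only
```

Because the query nearly coincides with key 7, their signatures agree on almost every bit, so token 7 lands in the selected set while unrelated tokens average 16 of 32 differing bits.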
Rebuttal 1:

Rebuttal: We thank the reviewer for supporting our paper. Please find responses to the questions and comments below. Kindly let us know if you have any additional follow-up questions.

1. **Training-free hashing.** To compare HashAttention to training-free methods, we measure the quality of sparse attention while using LSH signatures (random signed projections) in place of learned mapping-based HashAttention signatures. As expected, the data-agnostic LSH needs longer signatures to start giving reasonable quality.

| Sparsity | Method | bits | passage_retrieval_en | multifieldqa_en | hotpotqa | triviaqa | Avg |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 16x | HashAttention | 32 | 99.43 | 55.18 | 52.34 | 93.32 | 75.0675 |
| 16x | LSH | 32 | 48.57 | 32.62 | 34.38 | 80.02 | 48.8975 |
| | LSH | 256 | 89.14 | 47.97 | 53.97 | 88.07 | 69.7875 |
| | LSH | 512 | 85.14 | 50.38 | 51.42 | 87.6 | 68.635 |
| | LSH | 1024 | 91.43 | 49.75 | 55.25 | OOM | |
| 8x | LSH | 32 | 69.14 | 42.71 | 46.63 | 81.78 | 60.065 |
| | LSH | 256 | 82.86 | 50.18 | 52.06 | 90.3 | 68.85 |
| | LSH | 512 | 95.43 | 52.64 | 54.85 | 87.51 | 72.6075 |
| | LSH | 1024 | 98.86 | 51.84 | 54.67 | OOM | |

2. **Token-level hashing / gathering in sparse attention / performance efficiency.** Our implementation does not involve gathering pivotal tokens into a contiguous memory space. The implementation is built upon the vLLM page attention framework, where each token corresponds exactly to one page (i.e., page size = 1). After identifying top-k tokens, we use their indices without physically moving or gathering these tokens into a separate memory buffer. The page attention kernel explicitly utilizes these indices to selectively compute attention only for the specified tokens, efficiently ignoring irrelevant tokens. The page attention kernel is highly optimized for GPU memory access patterns.
Due to the GPU cache line size of 128 bytes, optimal memory bandwidth utilization is achieved as long as contiguous data access meets or exceeds this cache line size. Each token's head representation has 128 fp16 elements, equivalent to 256 bytes. This naturally exceeds the GPU cache line size, allowing our attention kernel to leverage GPU memory bandwidth effectively. Many state-of-the-art inference frameworks implement attention using a paged-attention backbone with page size 1. They find that with correct optimization, the efficiency does not depend on page size. Quoting from the official FlashInfer documentation: *"Some recent work such as LightLLM and sglang uses a special form of PageAttention where page size equals one, for easy management of KV-Cache in complicated serving scenarios such as structured generation. FlashInfer optimizes PageAttention kernels by pre-fetching page indices in GPU shared memory, so that kernel performance is not affected by the page size."* We will explicitly clarify this and use better naming for the different stages in the paper to avoid potential confusion.

3. **OOD test sets / generalization of learned mappings across tasks.** We show two sets of results in our experiments. HashAttention is trained on completely unrelated data from the OpenWebText dataset and tested on OOD LongBench and RULER. The good performance of HashAttention on these datasets is a testament to its strong generalization. Additionally, fine-tuning for a specific benchmark, which gives us HashAttention*, further improves the quality of results.

4. **OOD query.** Kindly refer to Response 4 to reviewer HrF1.

5. **MagicPIG vs. HashAttention:** MagicPIG can be understood and compared with HashAttention in two respects. The first is identifying important tokens. HashAttention uses succinct bit signatures (32 bits) obtained via learned mappings to compute important tokens. MagicPIG uses data-agnostic LSH to obtain bit signatures.
As expected, MagicPIG needs much longer bit signatures to obtain reasonable results (from their paper, 1500-2200 bits). MagicPIG builds an index on these signatures, and due to irregular bucket sizes, these indices need to be stored on CPUs -- a reasonable solution when the KV cache is stored on CPUs. The second is that MagicPIG proposes sampling-based estimation instead of top-k estimation. Sampling is performed using LSH tables. The idea of sampling-based estimation is an interesting direction to explore with respect to its application to other sparse attention methods, including HashAttention. We plan to thoroughly compare HashAttention with many concurrent works such as MagicPIG, PQCache, and SqueezeAttention in our future work. Comparison against these is out of the scope of the rebuttal.

6. **Lemma 4.2**: We would be happy to cite the correct reference in prior literature. Kindly direct us to the same.

Please let us know if there are any additional queries.

---

Rebuttal Comment 1.1:

Comment: Thank you for your detailed response. I have no further questions and maintain my recommendation for acceptance of this paper.
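The rebuttal's observation — that data-agnostic signed random projections (SimHash-style LSH) need far more bits than a learned mapping — follows from the fact that each random bit is an unbiased but noisy probe of the angle between two vectors, so the estimator's variance shrinks only as 1/#bits. A small self-contained sketch (illustrative only; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64
a = rng.standard_normal(d)
b = rng.standard_normal(d)

# ground-truth angle between the two vectors
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
angle = float(np.arccos(cos))

def srp_angle_estimate(a, b, n_bits, rng):
    """Signed random projections (SimHash): each bit disagrees with
    probability angle/pi, so the normalized Hamming distance between the
    two signatures estimates the angle -- with variance ~ 1/n_bits,
    which is why a data-agnostic code needs many bits."""
    proj = rng.standard_normal((len(a), n_bits))
    sa = (a @ proj > 0)
    sb = (b @ proj > 0)
    return float(np.pi * np.count_nonzero(sa != sb) / n_bits)

est_short = srp_angle_estimate(a, b, 32, rng)    # 32-bit code: noisy
est_long = srp_angle_estimate(a, b, 4096, rng)   # long code: much tighter
```

This mirrors the table above: 32-bit random LSH trails 32-bit learned signatures, and only approaches them once hundreds of bits are used.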
Summary: The authors propose a method to identify relevant tokens in the attention computation by framing it as a MIPS search, using the relationship between MIPS and cosine similarity plus the approximation of cosine similarity in terms of the Hamming distance of the corresponding Hamming embeddings. The authors propose a framework, into which other sparse attention approaches can be fit, which consists of scoring, top-k, and gather attention steps. HashAttention is first trained offline with generic data. The authors learn independent mappings for key-value pairs and queries and then use the Hamming distance between these mappings to identify the pivotal tokens.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The methods and/or evaluation criteria chosen make sense for the problem or application at hand.

Theoretical Claims: Lemma 4.2 presents a true score for token relevance that has the same ordering as using the inner product between queries and key-value pairs, plus states the equivalence with respect to cosine similarity. The lemma includes a proof in the appendix and the relevant references to support it.

Experimental Designs Or Analyses: Yes, Tables 1-2 and Figure 3.

Supplementary Material: Baselines

Relation To Broader Scientific Literature: The authors include a wide overview of the works in the area and classify them into fixed and dynamic sparsity plus token eviction methods.

Essential References Not Discussed: They cite up-to-date papers such as RetrievalAttention, MagicPIG and SqueezeAttention, among others.

Other Strengths And Weaknesses: The paper includes a variety of experiments and evaluations that showcase the different strengths of the method.

Other Comments Or Suggestions: It is not clear why retrieval augmented generation approaches are discussed in related work but these are not compared against in the experiments section.
Even though RAG is a competitive approach to reduce the context length, it doesn't immediately relate to dynamic sparse attention computation, so I suggest removing this from the related work section.

Questions For Authors:
- What is the dimension of the Hamming codes needed to guarantee that it is indeed cheaper than computing the cosine similarity?
- How do you solve the unbalanced buckets issue in LSH, given that you use a single hash table?
- Could you provide further details of how the learning of independent mappings for key-value pairs and queries remedies the OOD problem?
- Would it be feasible to include the top-k with k-means approach in the baselines to understand how it does w.r.t. LSH partitioning?
- Could you provide further details about the extra fine-tuning on datasets for the experiments?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for supporting our paper. Please find responses to the questions and comments below. Kindly let us know if you have any additional follow-up questions.

1. **Discussion on RAG** – We will remove this from related work.

2. **Dimension of Hamming codes:** We can answer this question by comparing (1) the latency of inner product computation (dim = 128, which is the standard dimension in most models) and (2) the Hamming distance computation based on our kernel implementation.

Latency of SCORE computation (inner product over 128 fp16 dimensions vs. Hamming distance at various bit widths):

| #tokens | Inner product (128 x fp16) | 64-bit | 128-bit | 256-bit | 512-bit |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 262144 | 0.13 | 0.085 | 0.084 | 0.087 | 0.112 |
| 524288 | 0.266 | 0.085 | 0.086 | 0.117 | 0.218 |
| 1048576 | 0.619 | 0.087 | 0.129 | 0.226 | 0.429 |
| 2097152 | 1.698 | 0.144 | 0.247 | 0.445 | 0.85 |
| 4194304 | 3.272 | 0.28 | 0.485 | 0.88 | 1.694 |
| 8388608 | 6.733 | 0.552 | 0.96 | 1.755 | 3.378 |
| 16777216 | 8.306 | 1.1 | 1.911 | 3.504 | 6.746 |
| 33554432 | 16.61 | 2.188 | 3.817 | 7.005 | 13.48 |

We can see that up to 512 bits, the Hamming distance is cheaper than the inner product computation. In practice, we do not need such large bit widths. This excludes the cost of running the MLP on a query vector, which does not scale with #tokens.

3. **Unbalanced buckets issue in LSH.** Since HashAttention compares the query signature with all token signatures, it is not directly affected by the unbalanced bucket issue. However, it can affect top-k selection when many tokens share the same Hamming distance. In such cases, we randomly choose $k$ tokens in our implementation.

4. **Query OOD issue.** The mappings learned in HashAttention transform queries and key-values into a low-dimensional semantic space where the smaller the Hamming distance, the better the relevance of the key-value to the query.
While training the mappings, this Hamming distance is used to classify the key-value as relevant or irrelevant to the query. The setup naturally promotes the transformed query distribution to be closer to the key-value distribution, as shown in the table below. We can see that the average cosine similarity significantly improves after the transformation. We provide results for both soft embeddings and hard embeddings for HashAttention.

Average cosine similarity of the top-32 tokens:

| Layer number | Original embeddings | HashAttention (tanh) | HashAttention (sign) |
|:---:|:---:|:---:|:---:|
| 0 | -0.111818 | 0.1777 | 0.180198 |
| 4 | -0.119951 | 0.4922 | 0.179186 |
| 8 | -0.109657 | 0.2676 | 0.1802 |
| 16 | -0.110954 | 0.2422 | 0.182292 |
| 24 | -0.114441 | 0.293 | 0.179346 |

5. **K-means based top-k vs. vanilla LSH vs. HashAttention:** HashAttention uses learned mappings to obtain bit signatures for retrieval (while motivated by LSH, it does not perform LSH). Due to rising interest in this topic, many concurrent works have explored different approaches from information retrieval. These include MagicPIG (which uses LSH tables), SqueezeAttention and PQCache (which use clustering), and RetrievalAttention (which employs graph-based retrieval). We plan to thoroughly compare HashAttention with these works in future research. However, such a comparison is beyond the scope of this rebuttal.

6. **Extra fine-tuning details.** To fine-tune HashAttention mappings for downstream LongBench, we use 25 samples from each LongBench task to create a fine-tuning dataset. Then we further train HashAttention on this dataset. The evaluation is performed on LongBench excluding the samples included in the fine-tuning dataset.

---

Rebuttal Comment 1.1:

Comment: Thank you for your detailed explanations. I encourage you to add these to the appendix in the camera-ready version if possible.
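The cost argument in the rebuttal — XOR plus popcount on a few 64-bit words versus a 128-dim float inner product — can be made concrete with a tiny sketch. This is a plain-Python illustration of the bitwise SCORE idea only, not the CUDA kernel; the helper names are hypothetical.

```python
import numpy as np

def pack_signature(bits):
    """Pack a {0, 1} bit vector into 64-bit words."""
    words = []
    for i in range(0, len(bits), 64):
        w = 0
        for b in bits[i:i + 64]:
            w = (w << 1) | int(b)
        words.append(w)
    return words

def hamming(sig_a, sig_b):
    """Hamming distance as XOR + popcount over the packed words -- the
    bitwise form of the SCORE step; even a 512-bit signature needs only
    eight such word operations."""
    return sum(bin(wa ^ wb).count("1") for wa, wb in zip(sig_a, sig_b))

rng = np.random.default_rng(2)
bits_a = rng.integers(0, 2, size=128)
bits_b = bits_a.copy()
bits_b[:5] ^= 1  # flip exactly 5 bits

dist = hamming(pack_signature(bits_a), pack_signature(bits_b))
# dist == 5
```

On hardware the same idea maps to vectorized XOR and popcount instructions, which is what makes the Hamming SCORE stage cheaper than the float inner product in the latency table above.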
Summary: This paper introduces HashAttention, framing pivotal token identification as a recommendation problem. Given a query, HashAttention encodes keys and queries in Hamming space, capturing the required semantic similarity, using learned mapping functions. HashAttention efficiently identifies pivotal tokens for a given query using bitwise operations and computes attention using only these tokens, improving the overall attention efficiency. Trained on generic data, HashAttention reduces tokens used by up to 16× with minimal quality loss, requiring only 32 bits of auxiliary memory per token.

Claims And Evidence: From the theoretical analysis, HashAttention transforms the problem into a maximum inner product search problem and approximates it in Hamming space by learning mapping functions, which has a reasonable theoretical basis. The experimental results support the effectiveness. On multiple datasets and models, HashAttention performs better than baselines under the same auxiliary memory budget.

Methods And Evaluation Criteria: Compared with existing methods, progress has been made in reducing the use of the KV cache and improving the efficiency of attention computation.

Theoretical Claims: From the theoretical analysis, HashAttention transforms the problem into a maximum inner product search problem and approximates it in Hamming space by learning mapping functions, which has a reasonable theoretical basis.

Experimental Designs Or Analyses: The experimental design was comprehensive, comparing multiple baselines, covering different models and datasets, and considering different assessment indicators, such as quality, efficiency, and recall rate.

Supplementary Material: Yes, all of them.

Relation To Broader Scientific Literature: HashAttention transforms the problem into a maximum inner product search problem and approximates it in Hamming space by learning mapping functions, which has a reasonable theoretical basis and good application value.
Essential References Not Discussed: Not yet.

Other Strengths And Weaknesses:

Strengths
1. The idea of the paper is interesting, e.g., transforming key token recognition into a recommendation problem and realizing efficient attention calculation by coding in Hamming space.
2. Well-written paper with a clear process; the English writing is easy to follow.
3. Extensive experiments demonstrate the validity of the model.

Limitations
1. Supplement the experiments with a long-context setting where the KV cache is located in CPU RAM, and compare the performance of HashAttention with other baselines.
2. Expand the experiments of HashAttention to complex inference and multimodal tasks.
3. Elaborate the formula derivation process and explain the key terms in detail.

Other Comments Or Suggestions: No.

Questions For Authors: No.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for supporting our paper. We are working on extending Hash Attention to scenarios involving KV cache offloading and reasoning tasks, and we will present these in our future work. Please let us know if you have any other questions; we would be happy to clarify.
Mitigating Plasticity Loss in Continual Reinforcement Learning by Reducing Churn
Accept (poster)
Summary: A recent line of research has highlighted a problem where standard deep-learning methods gradually lose plasticity (i.e., the ability to learn new things) in continual learning settings (Lyle et al., 2022; Dohare et al., 2024). This paper examines plasticity loss in deep continual reinforcement learning (RL) through the lens of churn—network output variability for out-of-batch data caused by mini-batch training (Schaul et al., 2022; Tang & Berseth, 2024). The authors identify correlations between plasticity loss and increased churn by studying the rank collapse of the Neural Tangent Kernel (NTK) matrix $N_\theta$, which is defined as the matrix of gradient dot products between all data points (Lyle et al., 2024): $N_\theta(i,j) = \nabla_\theta f_\theta(x_i)^\top \nabla_\theta f_\theta(x_j)$ for $x_i, x_j$. They empirically demonstrate that a rank collapse in the NTK matrix is a symptom of disrupted learning dynamics and hence leads to poor performance. To address this, they propose Continual Churn Approximated Reduction (C-CHAIN), a method to mitigate plasticity loss in settings where tasks switch every $N$ steps. They validate their approach with empirical results on four OpenAI Gym environments and 16 ProcGen tasks.

**References**
1. Lyle, C., Rowland, M., and Dabney, W. Understanding and preventing capacity loss in reinforcement learning. In ICLR, 2022.
2. Dohare, S., Hernandez-Garcia, J. F., Lan, Q., Rahman, P., Mahmood, A. R., and Sutton, R. S. Loss of plasticity in deep continual learning. Nature, 632(8026):768–774, 2024.
3. Schaul, T., Barreto, A., Quan, J., and Ostrovski, G. The phenomenon of policy churn. arXiv preprint, arXiv:2206.00730, 2022.
4. Tang, H. and Berseth, G. Improving deep reinforcement learning by reducing the chain effect of value and policy churn. In NeurIPS, 2024.
5. Lyle, C., Zheng, Z., Khetarpal, K., van Hasselt, H., Pascanu, R., Martens, J., & Dabney, W. (2024). Disentangling the causes of plasticity loss in neural networks.
arXiv preprint arXiv:2402.18762.

Claims And Evidence: Here, I present some claims from the paper verbatim and explain why I agree or disagree with them, while also questioning their soundness.

> We demonstrate the connection between plasticity loss and increased churn, and show the pathological learning dynamics this connection induces.

Yes, they demonstrate this connection for a specific algorithm—PPO—in a particular continual learning setting where tasks switch every few million steps. They do this by showcasing rank collapse in the neural tangent kernel matrix with prolonged training over a sequence of tasks.

> We unbox the efficacy of reducing churn in continual RL by identifying a gradient decorrelation effect and a step-size adjustment effect.

Honestly, I am unsure what "unboxing the efficacy of reducing churn" means in this context. I would appreciate it if the authors could clarify this in plain language. In addition, I'd like further clarification on the step-size adjustment effect. How does it differ from momentum-based optimizers like Adam or RMSprop, which also implicitly affect the step size?

> We propose C-CHAIN and demonstrate it effectively mitigates the loss of plasticity and outperforms prior methods in a range of continual RL settings.

This is supported by their experiments. However, I have concerns about the experiments and analyses, which I will discuss later.

> We demonstrate that under the continual changes in the data distribution and objective function, the agent gradually loses the rank information of its NTK matrix, leading to highly correlated gradients and eventually the exacerbation of churn.

The authors seem to be making an important point here, but it is not entirely clear to me. What exactly do they mean by highly correlated gradients? How does their method prevent this from occurring? Is it by leveraging the gradient information of the reference batch $B_{ref}$ in their churn regularizer loss?
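The NTK rank-collapse diagnostic the review describes can be mocked up in a few lines. The sketch below is illustrative NumPy only — `approx_rank` and its tolerance are assumptions, not the paper's exact metric — and it shows how highly correlated per-sample gradients collapse the approximate rank of $N_\theta(i,j) = \nabla_\theta f_\theta(x_i)^\top \nabla_\theta f_\theta(x_j)$:

```python
import numpy as np

def approx_rank(mat, tol=0.01):
    """Approximate rank: number of singular values within a factor `tol`
    of the largest one (an illustrative threshold, not the paper's)."""
    s = np.linalg.svd(mat, compute_uv=False)
    return int(np.sum(s >= tol * s[0]))

rng = np.random.default_rng(3)
n, p = 32, 256  # n data points, p parameters

# Healthy dynamics: per-sample gradients point in diverse directions.
G_diverse = rng.standard_normal((n, p))       # row i stands in for grad f(x_i)
ntk_diverse = G_diverse @ G_diverse.T         # N(i, j) = grad_i . grad_j

# Pathological dynamics: gradients nearly share a single direction.
shared = rng.standard_normal(p)
G_corr = shared[None, :] + 0.01 * rng.standard_normal((n, p))
ntk_corr = G_corr @ G_corr.T
# rank collapse of the NTK matrix shows up as a sharp drop in approx_rank
```

In the diverse case every one of the 32 gradient directions survives the threshold; in the correlated case a single direction dominates, which is the "highly correlated gradients" picture the review is asking about.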
Methods And Evaluation Criteria: The proposed method C-CHAIN is well-suited for the continual learning setting in consideration, especially one where tasks switch every N timesteps. I'm not particularly convinced by some of the design choices, but I want to assure the authors that I'd keep an open mind and re-evaluate my reviews based on their rebuttal response.

1. On the usage of domain randomization to Gym classic control environments:

> For Gym Control, we use four environments: CartPole-v1, Acrobot-v1, LunarLander-v2 and MountainCar-v0. For each environment, a task sequence $T$ is built by chaining $k$ instances of the environment with a unique Gaussian observation noise $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$ sampled once for each.

Isn't this domain randomization (Tobin et al. 2017), but with the parameters changed every few million timesteps? Is this only to induce non-stationarity in the environment? Adding noise arbitrarily to the observations makes it seem a little contrived. Have you considered more natural tasks such as varying factors like gravity or surface slipperiness or even the terrain in MuJoCo? For inspiration, you can refer to robotics works such as Kumar et al. (2021).

2. In several environments, including Fruitbot, Jumper, and Plunder, the standard errors overlap significantly with the second-best reported method. Is this due to the use of only 6 seeds? Are the authors confident that the method would prove superior with more seeds?

3. Patterson et al. (2024) suggest the following: *"In general, we advocate that you do not report standard errors. They are like a low-confidence confidence interval, and it is more sensible to decide on the confidence interval you want to report."* Could the authors clarify why they chose to report standard errors, especially given that only 6 seeds are used? Would it be possible to use a different, more appropriate metric, such as a bootstrap confidence interval?

**References**
1.
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., & Abbeel, P. (2017, September). Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 23-30). IEEE.
2. Kumar, A., Fu, Z., Pathak, D., & Malik, J. (2021). RMA: Rapid Motor Adaptation for Legged Robots. Robotics: Science and Systems XVII.
3. Patterson, A., Neumann, S., White, M., & White, A. (2024). Empirical design in reinforcement learning. Journal of Machine Learning Research, 25(318), 1-63.

Theoretical Claims: N/A

Experimental Designs Or Analyses:

1. Question on C-CHAIN for Continual Supervised Learning

> Besides, Permuted-MNIST and RandomLabel-MNIST are simple, where the agent can find a good solution near the initialization.

- This is a strange argument—it's as if the authors are saying, "Our method is too strong, which is why it failed..."
- Continual Mountain Car seems even simpler than Permuted-MNIST to me. Could the authors propose an alternative plausible explanation for why their method may not be performing well in this case?

2. In Fig. 3, for Mountain Car, why does C-CHAIN (in blue) exhibit a flat curve in the first phase? What changes in the later stages that allow for successful learning?

3. It's unclear to me why some methods were not included in the experimental comparison, especially when they are cited as references. For example, Dohare et al. (2024) proposed continual backprop, and Lyle et al. (2024) suggest that using layer normalization and L2 regularization is effective for maintaining plasticity. Why were these methods not considered? This is a significant concern for me. I would be willing to re-evaluate my score if the authors provide additional empirical comparisons with these two methods.

**References**
1. Dohare, S., Hernandez-Garcia, J. F., Lan, Q., Rahman, P., Mahmood, A. R., and Sutton, R. S. Loss of plasticity in deep continual learning.
Nature, 632(8026):768–774, 2024.
2. Lyle, C., Zheng, Z., Khetarpal, K., van Hasselt, H., Pascanu, R., Martens, J., & Dabney, W. (2024). Disentangling the causes of plasticity loss in neural networks. arXiv preprint arXiv:2402.18762.

Supplementary Material: I looked at the learning curves in Appendix C and the list of hyper-parameters in Appendix A.

Relation To Broader Scientific Literature: The paper examines the loss of plasticity in deep neural networks through the lens of churn. While loss of plasticity has been demonstrated in several works, including Lyle et al. (2022), Sokar et al. (2023), and Dohare et al. (2024), its cause is not well understood. The authors make a novel contribution by studying this phenomenon from the perspective of churn and provide interesting insights.

Essential References Not Discussed: The authors overlook some relevant references on step-size adaptation. I recommend considering the following citations.

*Step-size adaptation*
- Sutton, R. S. (1992, July). Adapting bias by gradient descent: An incremental version of delta-bar-delta. In AAAI (Vol. 92, pp. 171-176).
- Dabney, W., & Barto, A. (2012). Adaptive step-size for online temporal difference learning. AAAI Conference on Artificial Intelligence (pp. 872-878).
- Martens, J., & Grosse, R. (2015). Optimizing neural networks with kronecker-factored approximate curvature. International Conference on Machine Learning (pp. 2408-2417).
- Elsayed, M., Vasan, G., & Mahmood, A. R. (2024). Streaming Deep Reinforcement Learning Finally Works. arXiv preprint arXiv:2410.14606.

Other Strengths And Weaknesses:

**Strengths**
- The introduction was well-written, making it easy to understand the problem, the solution, and the authors' contributions.
- The visualizations and analysis using the NTK are interesting and insightful.

**Weaknesses**
- I've listed the weaknesses in the previous sections. I'm hopeful the authors will address at least some of these in the rebuttal.
- The figures could be clearer. For example, in Fig. 4, consider placing the legend outside the figure, as its current position occludes large portions of the figure.
- C-CHAIN is tested only with PPO. It's unclear whether it is necessary for other methods.

Other Comments Or Suggestions:

> Reinforcement learning (RL), when coupled with non-linear function approximators, suffers from optimization challenges due to the non-stationarity of the data and the learning objectives, i.e. the deadly triad

- Why mention the deadly triad here? What is off-policy about your learning problem? I believe the introductory line doesn't need to reference the deadly triad. Perhaps the authors mention it implicitly to acknowledge PPO's clip surrogate loss?

I like the motivation and approach of the paper and believe it could be much stronger in its current form. If the authors provide additional experimental evidence and address my concerns, I would be happy to increase my score.

Questions For Authors:
1. Why aren't continual backprop (Dohare et al. 2024) and ReDo (Sokar et al. 2023) included as baselines?
2. Is this method specific to PPO? Could you include a small experiment with SAC or another algorithm to demonstrate its generality? I believe a small experiment like this would significantly strengthen the case for the method's generality.
3. How does it perform in continuous action spaces? It seems all experiments here use discrete action spaces.
4. We sample $B_{ref}$ and $B_{train}$ from the buffer $D$. Does the size of $B_{ref}$ have an impact on C-CHAIN's performance?
5. PPO is known to suffer from policy collapse (Dohare et al. 2024). Does this also occur in off-policy algorithms like SAC or TD3? Would using a large replay buffer help in this setting? Intuitively, it seems like it would mitigate catastrophic forgetting rather than plasticity loss.
6. Should the buffer be flushed after each task switch? In other words, does the agent need to be aware of task switches?
7.
Out of curiosity, have the authors considered the streaming RL setting (Elsayed, Vasan & Mahmood 2024)? Is it possible to extend this method to the streaming RL setting?
8. Is layer norm + L2 regularization effective in maintaining plasticity (Lyle et al. 2024)?
9. In Fig. 4, the approximate rank appears to be slowly decreasing for C-CHAIN over time. Is this the case? If so, is there an explanation for this behavior?

**References**
1. Dohare, S., Hernandez-Garcia, J. F., Lan, Q., Rahman, P., Mahmood, A. R., and Sutton, R. S. Loss of plasticity in deep continual learning. Nature, 632(8026):768–774, 2024.
2. Sokar, G., Agarwal, R., Castro, P. S., and Evci, U. The dormant neuron phenomenon in deep reinforcement learning. In ICML, pp. 32145–32168, 2023.
3. Elsayed, M., Vasan, G., & Mahmood, A. R. (2024). Streaming Deep Reinforcement Learning Finally Works. arXiv preprint arXiv:2410.14606.
4. Lyle, C., Zheng, Z., Khetarpal, K., van Hasselt, H., Pascanu, R., Martens, J., & Dabney, W. (2024). Disentangling the causes of plasticity loss in neural networks. arXiv preprint arXiv:2402.18762.

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

> On other possible baseline methods, e.g., L2 regularization, LayerNorm (Lyle et al., 2024), continual backprop (Dohare et al., 2024) and ReDo (Sokar et al. 2023)

In addition to existing empirical evidence for LayerNorm, L2 regularization and ReDo, we additionally implemented and ran LayerNorm, ReDo and AdamRel in our continual ProcGen experiments.

**[Existing Evidence]** The comparison between LayerNorm and TRAC (i.e., our major baseline and codebase) in continual Gym control tasks can be found in Appendix D, Figure 15 of the TRAC paper (Muppidi et al., NeurIPS 2024). Their results show TRAC outperforms LayerNorm (i.e., LayerNorm Adam) in their Figure 15. For L2 regularization, Lyle et al. (2024) mentioned "We found that batch normalization and L2 regularization interfere with learning" in their Section 4.2, RL evaluation. In our work, we ran L2 regularization and we found it performed very similarly (slightly worse) to L2 Init (i.e., Regenerative Regularization). In another paper (Juliani and Ash, NeurIPS 2024), their Figure 5 compares ReDo, LayerNorm and other related methods with L2 Init (i.e., Regen Reg in the figure) and L2 norm in ProcGen CoinRun. The results show that L2 Init performs comparably with ReDo and LayerNorm and slightly outperforms L2 norm (which is consistent with our findings as mentioned above). Continual backpropagation is similar to ReDo but uses a different recycling metric, and in Appendix C.4, Figure 25 of the ReDo paper, they achieved similar results. Therefore, we only consider ReDo here.

**[Additional Experiments]** Please refer to the response to Reviewer cU5H and Reviewer mKWp.

> Questions on the buffer

**[On buffer flush and awareness of task switch]** In our method, i.e., C-CHAIN PPO, both $B_{ref}$ and $B_{train}$ are sampled from the online interaction data collected by the policy in the current iteration (e.g., every 2048 interactions).
Thus, it does not matter for C-CHAIN PPO whether the buffer is flushed after each task switch, and the agent does not need to be aware of task switches. **[The impact of the size of $B_{ref}$]** We ran experiments with different batch sizes for $B_{ref}$ on continual Gym control tasks. Our finding was that using a 2x, 4x or 8x batch size for $B_{ref}$ sometimes improved the learning performance, but not consistently. To some degree, increasing the batch size of $B_{ref}$ acted similarly to increasing the regularization coefficient, as both reduce more churn. Thus, to alleviate the hyperparameter-tuning burden, we did not search for the best batch size for $B_{ref}$. > Questions on concrete experimental results in continual RL **[Explanation for the slow decrease of the approximate rank for C-CHAIN in Fig. 4]** For C-CHAIN, the approximate rank decreases very slowly, from around 85 to 80 over 10M steps (in contrast, the approximate rank decreases from 75 to 30 for vanilla PPO). We think this is reasonable, as the agent continually consumes plasticity to learn new tasks and our method is not perfect. **[The performance in MountainCar]** We think MountainCar is a bit more difficult than the other three Gym control tasks, as it needs more exploration, and TRAC almost totally failed on it (note that MountainCar is not included in the TRAC paper). One possible reason for C-CHAIN’s low early-stage performance might also be the exploration feature of MountainCar: reducing churn can also be viewed as preventing the generalization of exploration behavior learned by the policy in some states to similar states. Therefore, C-CHAIN learns slowly but improves steadily, while vanilla PPO learns quicker and collapses. **[The performance on Fruitbot, Jumper, and Plunder]** By checking the task details in the official ProcGen documentation, tasks like Leaper and Jumper are collection-style tasks with sparse rewards, and Leaper in particular is an episodic-reward environment. 
To gain more understanding, we provided the average scores of the **max/mean/min** curves for Leaper, Fruitbot, Plunder, Jumper below. We can observe that C-CHAIN improves the average Max scores over PPO Vanilla and the average Mean scores (by improving Max or by reducing failures when Max is similar); while it does not fully address the (near-)zero performance due to the limited exploration ability of the PPO base agent. | Method/Task | Leaper | Fruitbot | Plunder | |--------------|--------------|--------------|--------------| | PPO (Oracle) | 2.020 / 0.350 / 0.000 | 6.398 / 1.613 / -0.079 | 11.858 / 7.113 / 2.731 | | PPO (Vanilla) | 2.006 / 0.347 / 0.000 | 5.882 / 1.549 / -0.068 | 8.304 / 4.801 / 1.944 | | C-CHAIN | 4.004 / 0.668 / 0.000 | 6.153 / 1.689 / -0.043 | 16.916 / 10.340 / 4.949 | | TRAC | 2.002 / 0.334 / 0.000 | 3.624 / 0.855 / -0.034 | 17.104 / 9.719 / 2.724 | > **Other remaining questions** **Due to the space limitation, we will provide them during the discussion stage.** --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response, which adequately addressed my questions and concerns. I also appreciate the additional results provided. Accordingly, I have increased my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s very careful and constructive comments. We are glad that our responses addressed the reviewer’s concerns. --- > **More experimental results** Aside from three additional baselines and the Reliable Metrics, we provided more results as suggested by Reviewer mKWp: - We **added three continual learning settings in DMC**, (1) Walker-stand -> Walker-walk -> Walker-run, (2) Quadruped-walk -> Quadruped-run -> Quadruped-walk, (3) Dog-stand -> Dog-walk -> Dog-run -> Dog-trot. The results show that **C-CHAIN PPO outperforms PPO across the three continual DMC settings** (12 seeds). - We added **a continual learning setting in MinAtar**: Space_Invaders -> Asterix -> Seaquest. 
The results show that **C-CHAIN DoubleDQN outperforms DoubleDQN** (12 seeds). **For concrete scores, please refer to our response to Reviewer mKWp or cU5H**. --- As mentioned, we were not able to post all the responses due to the space limitation. We requested the AC to relay our remaining responses. **To avoid any unexpected circumstances where the reviewer may not have seen the remaining responses, we provide them here**: > Questions on the expression of claims **[The meaning of “highly correlated gradients” and how C-CHAIN prevents this from occurring]** “Highly correlated gradients” refers to the case where, for two inputs $x_i, x_j$, the NTK between them, $\nabla_{\theta} f_{\theta}(x_i)^{\top} \nabla_{\theta} f_{\theta}(x_j)$ (which characterizes both the norm of the gradients and the cosine similarity of the gradients), has a large absolute value. The gradient correlation is suppressed by reducing the churn on the reference batch caused by the gradient update on the training batch, which has the effect of suppressing the off-diagonal entries of the NTK matrix as described by Equation 11. **[On "unboxing the efficacy of reducing churn"]** We meant to express that we decomposed the effect of C-CHAIN into two parts, as presented in Section 4.3. We will rephrase this sentence using plainer wording. **[Further clarification on the step-size adjustment effect]** Compared with momentum-based optimizers, the step-size adjustment effect we present in this work differs in two respects: - Momentum-based optimizers use the first-order and second-order moments of the historical gradients (which are temporally correlated) to adjust the step size, while the step-size adjustment effect of reducing churn uses the projected gradient of an independently sampled reference batch (i.e., off-training-batch information). 
- Adam and RMSProp also change the gradient direction, while the step-size adjustment effect of reducing churn alone only changes the scale of the gradient, as it has the same (or reverse) direction (the direction is changed by the first effect of reducing churn, i.e., term 1 in Equation 10). **[On the “deadly triad”]** We agree with the reviewer’s comment. Our aim was to emphasize non-stationarity (a problem feature of both RL and continual learning). We will rephrase the sentence and remove the term. > Question on C-CHAIN for Continual Supervised Learning We provided two possible explanations in Sec. 5.3. First, **a significant difference to note** is that **RL suffers from the chain of churn**, formally characterized in (Tang and Berseth, 2024), which stems from the iterative policy improvement and policy evaluation nature (i.e., GPI), **while SL does not**, as it learns from static labels. This means that the exacerbation of churn on both the policy and the value side can further (negatively) affect each other. This explains to some extent why C-CHAIN is not very effective for CSL. Second, there is a connection between the performance of L2 Init, WC and C-CHAIN across our CRL and CSL experiments: - L2 Init and WC perform better in continual Gym control but relatively worse in ProcGen. Both L2 Init and WC prevent the agent from moving too far away from its initialization; we believe this is a possible explanation for their performance in Gym control and ProcGen. - Similarly, the good performance of L2 Init and WC could indicate that the agent finds a good solution near initialization. > On domain randomization We adopted the continual Gym control setting from the TRAC paper (Muppidi et al., 2024). We agree this can be viewed as domain randomization in general. One thing to note is that one of the major factors of non-stationarity in continual learning is the distribution change of input data. 
For a representative experimental setting, Permuted-MNIST (Goodfellow et al., 2013; Dohare et al., 2024) applies fixed random permutations to input pixels. The continual Gym used in the TRAC paper is built on the same logic of constructing input distribution non-stationarity. --- **We believe these additional results and responses can further strengthen our work**. Since this is our last opportunity to respond, **we sincerely hope the reviewer could re-evaluate our work accordingly**.
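As a supplementary illustration of the NTK quantities discussed in this rebuttal (the entry $N_{\theta}(x_i, x_j) = \nabla_{\theta} f_{\theta}(x_i)^{\top} \nabla_{\theta} f_{\theta}(x_j)$ and the approximate rank), here is a minimal numerical sketch. The toy random gradients and the variance-threshold notion of approximate rank are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def empirical_ntk(G):
    """NTK matrix N = G^T G from per-sample gradients G (params x samples)."""
    return G.T @ G

def approx_rank(N, threshold=0.99):
    """Number of singular values needed to explain `threshold` of the total
    spectral mass (one common proxy for the approximate rank of a matrix)."""
    s = np.linalg.svd(N, compute_uv=False)
    cum = np.cumsum(s) / np.sum(s)
    return int(np.searchsorted(cum, threshold) + 1)

rng = np.random.default_rng(0)
# Toy per-sample gradients: columns are gradient vectors for 8 inputs.
G = rng.normal(size=(32, 8))
N = empirical_ntk(G)

# "Highly correlated gradients" corresponds to large off-diagonal entries of N;
# reducing churn is argued to suppress these off-diagonal entries.
off_diag = N - np.diag(np.diag(N))
print(approx_rank(N), np.abs(off_diag).max())
```

For well-spread gradients the approximate rank stays near the batch size; a collapse of this quantity is the NTK rank loss discussed above.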
Summary: The paper investigates the loss of plasticity issue in the continual deep reinforcement learning setting from the lens of churn ("undesirable generalization", empirical excess out-of-training-batch variation). They claim that 1) loss of plasticity and high churn are connected: the decrease of the NTK matrix rank indicates high churn and had previously been shown to correlate with plasticity loss. 2) addressing churn has a positive effect on the decorrelation of the NTK, and serves as an implicit adaptive step size 3) their proposed C-CHAIN effectively addresses churn in the continual RL setting, shows the benefits of addressing churn on the NTK empirically, and validates the connection between churn and plasticity. ## update after rebuttal I have read the other reviews and comments from the authors, and I maintain the score of my review. The authors have agreed to discuss the two additional papers I have cited in the references section, to make the presentation modifications about the figure illustrating the connection between NTK and churn, and added new baselines and experiments. Claims And Evidence: All claims are backed by enough theoretical or empirical justification. 1) Connection between churn and plasticity through the NTK: The connection between churn and the NTK is clearly presented and backed by a theoretical derivation. It is also shown empirically in the experiments. The connection between the NTK and plasticity heavily relies on the results from Lyle et al. (2024), which has not been accepted to a peer-reviewed venue yet, which can be a limitation. However, the paper provides evidence of low-rank NTK matrices in the presence of plasticity loss, so the connection does hold independently of the motivation from Lyle et al. (2024). 2) The effects of reducing churn on the NTK and the optimization: This is well justified theoretically in Section 4. 
3) The proposed C-CHAIN algorithm: The algorithm is competitive with recently proposed methods and empirically shows the benefits of reducing churn on the NTK. Methods And Evaluation Criteria: Overall, the choice of environments and baselines is good. The ProcGen environments have been used in several previous papers on plasticity and are a valid benchmark for this paper. To my knowledge, the Gym environments have not been used in work on plasticity before, but the non-stationary version introduced by the paper seems to provide a good testbed to validate their claims. The use of mean performance is a valid evaluation metric for this work. The baselines considered give a good understanding of the performance of the proposed C-CHAIN. Theoretical Claims: I checked all the derivations in the main paper. In Eq. 5, there does not seem to be a term corresponding to the data distribution (as there was an expectation in Eq. 4). I think it can easily be plugged into $S_x$. The authors can clarify this or say that for illustrative purposes, they assume a uniform distribution. Experimental Designs Or Analyses: There is enough statistical significance, and the hyperparameter optimization seems reasonable. The analysis of the results does not make claims beyond what the results show, and the provided analysis backs up the claims made. Showing the limitations of C-CHAIN in the supervised learning setting is appreciated. Supplementary Material: I did not check the supplementary material. Relation To Broader Scientific Literature: Plasticity is an important problem in continual learning. This work proposes a solution that addresses the dynamics of the NTK matrix. The paper discusses other works addressing plasticity loss with different methods; however, addressing the NTK explicitly can have a broader connection to optimizing neural networks, and these broader connections are not discussed. Essential References Not Discussed: The essential literature has been discussed. 
Nevertheless, the paper would benefit the community more by referencing/discussing the following works to give a broader presentation of the methods addressing plasticity loss in PPO, with which the main results of the paper have been presented. I believe [1] has been discussed in the paper, although [2] and [3], which came earlier, have not. 1. Ellis, Benjamin, et al. "Adam on Local Time: Addressing Nonstationarity in RL with Relative Adam Timesteps." 2. Moalla, Skander, et al. "No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO." 3. Juliani, Arthur, and Jordan Ash. "A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning." All in Advances in Neural Information Processing Systems 37 (2024): 113884-113910. Other Strengths And Weaknesses: Everything is mentioned in the previous sections. Other Comments Or Suggestions: Figure 1 was critical for me to understand the equations using the NTK to relate to churn, as one is a symmetric matrix, and the other contrasts two different batches. I was confused by eq 5 until I saw the figure. I think it would make the paper much easier to grasp if the figure were introduced with eq 5. Questions For Authors: No additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s valuable comments and recognition. Our response aims to add more discussion and clarification on the points mentioned in the inspiring comments. In addition, we provide additional experimental results for AdamRel [1], ReDo and LayerNorm, along with Reliable Metrics (Agarwal et al., NeurIPS 2021). Besides, we provide two additional continual RL experimental setups in DMC. > On the data distribution regarding the expectation in Eq. 4 and $S_x$ We appreciate the reviewer for pointing this out. Yes, $S_x$ denotes the stochastic variable of sampling $B_{train}$, which should be defined along with a sampling distribution. For illustrative purposes, using a uniform distribution is sufficient. We plan to incorporate the concrete form of the sampling distribution (intuitively, it should depend on factors like on-policy/off-policy learning, the exploration policy, the experience replay strategy, etc.) into the analysis in the future. We will clarify this point as suggested. > On the mentioned related [1], [2], [3] **[Additional Discussions]** Regarding [2] and [3], the Proximal Feature Optimization (PFO) regularization proposed in [2] is closely related to DR3 (Kumar et al., ICLR 2022) and CHAIN (Tang and Berseth, NeurIPS 2024). This is because the feature difference can be viewed as the gradient difference when the network is viewed as a linear approximation, i.e., $\pi(s) = \phi(s) W$ as in DR3; from another view, regularizing the feature difference should share overlapping effects with regularizing the network-output difference as in CHAIN. For [3], we found that the experimental results for ProcGen CoinRun in Figure 5 of their paper provide a useful reference on the learning performance of other related methods not included in our submission version. For example, ReDo and LayerNorm perform comparably with Regen reg (i.e., the L2 Init baseline method adopted in our paper). 
**[Additional Experimental Results]** We added **AdamRel [1], ReDo and LayerNorm** to our experimental comparison in 16 ProcGen tasks. The aggregate comparison is shown below. We find that ReDo and LayerNorm perform comparably with L2 Init, which is largely consistent with the evidence in existing papers. | Method | PPO Oracle | PPO Vanilla | TRAC | **C-CHAIN** | LayerNorm | ReDo | AdamRel | Weight Clipping | L2 Init | |--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------| | ProcGen Agg. | 70.789 | 55.049 | 77.289 | **101.792** | 75.164 | 61.180 | 61.443 | 57.092 | 72.961 | Moreover, we provide an aggregate evaluation using **Reliable Metrics** (Agarwal et al., NeurIPS 2021). The evaluation uses Mean, Median, Interquartile Mean (IQM) and Optimality Gap, with **95% confidence intervals and 50000 bootstrap replications** (recommended by the official Reliable Metrics implementation). It aggregates over **9 methods, 6 seeds for each of 16 ProcGen tasks**, i.e., **864 runs** with 10M steps each. The results are shown in the table below (the corresponding plot will be added to the revision). We can observe that **C-CHAIN performs the best on all four metrics** and **outperforms the second-best with no overlap of confidence intervals for Median and IQM, and only a minor overlap for Mean**. 
| Method | Median | IQM | Mean | Optimality Gap (lower is better) | |--------------|--------------|--------------|--------------|--------------| | PPO (Oracle) | 1.000 (0.960, 1.036) | 0.969 (0.933, 1.010) | 1.000 (0.908, 1.130) | 0.128 (0.098, 0.155) | | PPO (Vanilla) | 0.835 (0.728, 0.925) | 0.781 (0.722, 0.841) | 0.811 (0.704, 0.949) | 0.300 (0.252, 0.348) | | TRAC | 0.977 (0.894, 1.130) | 0.999 (0.887, 1.103) | 1.017 (0.891, 1.166) | 0.255 (0.205, 0.305) | | **C-CHAIN** | 1.452 (1.287, 1.522) | 1.388 (1.298, 1.472) | 1.461 (1.291, 1.719) | 0.086 (0.058, 0.110) | | Weight Clipping | 0.889 (0.759, 0.948) | 0.822 (0.752, 0.889) | 0.859 (0.744, 0.996) | 0.275 (0.223, 0.328) | | L2 Init | 1.029 (0.981, 1.103) | 1.035 (0.987, 1.092) | 1.116 (1.016, 1.237) | 0.098 (0.067, 0.130) | | LayerNorm | 1.057 (0.996, 1.121) | 1.018 (0.952, 1.086) | 1.131 (0.976, 1.311) | 0.146 (0.107, 0.187) | | ReDo | 0.945 (0.815, 0.974) | 0.862 (0.821, 0.904) | 0.912 (0.810, 1.039) | 0.209 (0.169, 0.249) | | AdamRel | 0.905 (0.804, 0.957) | 0.860 (0.792, 0.925) | 0.899 (0.781, 1.037) | 0.250 (0.199, 0.301) | > On other advice **[The position of Figure 1]** We are glad to know that Figure 1 helps to understand the NTK equations. We will move Figure 1 close to Equations 2 and 5, with additional explanations, to improve the smoothness of the introduction of the NTK equations. **[The broader connection to optimizing neural networks]** We appreciate the reviewer’s inspiring comments. We will discuss these broader connections in our revision. --- Rebuttal Comment 1.1: Comment: After looking at the other reviews and the comments from the authors, I acknowledge some of the limitations mentioned by the other reviewers, but I maintain the score of my review. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s constructive feedback and positive support! 
--- Aside from the three additional baseline methods and the Reliable Metrics we mentioned above, we provided more additional experiments as suggested by Reviewer mKWp: - **We added three continual learning settings in DMC**, (1) Walker-stand -> Walker-walk -> Walker-run, (2) Quadruped-walk -> Quadruped-run -> Quadruped-walk, (3) Dog-stand -> Dog-walk -> Dog-run -> Dog-trot. The results show that **C-CHAIN PPO outperforms PPO across the three continual DMC settings**. - **We added a continual learning setting in MinAtar**: Space_Invaders -> Asterix -> Seaquest. The results show that **C-CHAIN DoubleDQN outperforms DoubleDQN** (we applied C-CHAIN to the value network learning, with no modification to the hyperparameters of DoubleDQN). The mean scores and standard errors across 12 seeds are shown below. | Method/Task | Walker/Stand-Walk-Run | Quadruped/Walk-Run-Walk | Dog/Stand-Walk-Run-Trot | |--------------|--------------|--------------|--------------| | PPO (Oracle) | 395.971 $\pm$ 8.116 | 234.529 $\pm$ 19.193 | 137.080 $\pm$ 3.133 | | PPO (Vanilla) | 305.199 $\pm$ 18.519 | 250.153 $\pm$ 26.385 | 129.744 $\pm$ 5.914 | | C-CHAIN | 472.828 $\pm$ 17.865 | 314.510 $\pm$ 44.321 | 174.098 $\pm$ 9.169 | | Method/Task | Space_Invaders/Asterix/Seaquest | |--------------|--------------| | DoubleDQN (Vanilla) | 22.044 $\pm$ 0.733 | | C-CHAIN | 29.513 $\pm$ 0.682 | We will also include these results in our paper to strengthen our experiments further and provide a potentially useful reference for future study.
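As a supplementary illustration of the aggregate statistics used in this rebuttal, here is a simplified sketch of the Interquartile Mean (IQM) with a percentile bootstrap. Note that the official Reliable Metrics (rliable) library uses a stratified bootstrap with 50000 replications; the plain bootstrap, reduced replication count, and toy scores below are simplifying assumptions for illustration only:

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: mean of the scores between the 25th and 75th percentiles."""
    s = np.sort(np.asarray(scores).ravel())
    n = len(s)
    return s[n // 4 : n - n // 4].mean()

def bootstrap_ci(scores, stat=iqm, reps=2000, alpha=0.05, seed=0):
    """Point estimate and percentile-bootstrap confidence interval for `stat`."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores).ravel()
    boots = [stat(rng.choice(scores, size=len(scores), replace=True))
             for _ in range(reps)]
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return stat(scores), (lo, hi)

# Toy normalized scores: 6 seeds x 16 tasks (hypothetical values).
rng = np.random.default_rng(1)
scores = rng.normal(loc=1.0, scale=0.3, size=(6, 16))
point, (lo, hi) = bootstrap_ci(scores)
print(point, lo, hi)
```

The IQM discards the top and bottom quartiles before averaging, which makes the aggregate less sensitive to outlier runs than a plain mean.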
Summary: This paper studies the loss of plasticity in the continual reinforcement learning problem. The authors present a method based on reducing churn to help prevent the collapse of the NTK rank. Through a series of experiments, the paper shows the effectiveness of the proposed method against other baselines. \ \ \ **Update after rebuttal:** I thank the authors for their effort to improve their empirical evaluation. The new results with more runs and also with more baselines increase my confidence in the paper's conclusion. Thus, I raised my score to reflect that change. Claims And Evidence: - The explanation of why C-CHAIN performs poorly in the simpler setting of continual supervised learning is not convincing. - The experimental design, where the tasks are not on the same level of complexity (shown by the inconsistent performance of Oracle PPO), makes empirical evaluation hard. - The authors compare their method against other baselines and show its effectiveness. However, the improvement is not consistent across environments and domains, and little insight is provided to explain it. - Most learning curves have overlapping confidence intervals, which compromises the statistical significance of the results. Methods And Evaluation Criteria: The authors selected standard benchmarking tasks accepted by the community to study loss of plasticity. In addition, they used the empirical NTK, which gained popularity when investigating plasticity. Theoretical Claims: I haven’t closely checked the theoretical claims or the math. Experimental Designs Or Analyses: The empirical evaluation doesn't show conclusive results since, in many experiments except for a few, there are overlapping confidence intervals. I suggest the authors increase the number of independent runs to convince the reader of the validity of their approach. Supplementary Material: I didn’t check the supplementary material. 
Relation To Broader Scientific Literature: The paper studies continual RL, which is a very important problem, and the results are relevant to a large number of researchers, especially since it’s building on the concept of churn and its relationship to plasticity, which has been studied before. The empirical evaluation itself doesn’t provide conclusive results, but the analysis given might be helpful for future research. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: There are missing steps between Equation 4 and Equation 5, namely about the diagonal $S$ matrix. Also, how did both $\nabla_\theta f_\theta(\bar{x})$ and $\nabla_\theta f_\theta(x)$ become $G_\theta$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s valuable comments and the recognition of the importance of the topic studied in our paper. > On the reviewer’s comments including “the empirical evaluation doesn't show conclusive results”, “the improvement is not consistent”, and “little insight is provided to explain it” **[Additional reliable aggregate evaluation]** To obtain a conclusive evaluation, we provide an aggregate evaluation using Reliable Metrics (Agarwal et al., NeurIPS 2021). The evaluation uses Mean, Median, Interquartile Mean (IQM) and Optimality Gap, with **95% confidence intervals and 50000 bootstrap replications** (recommended by the official Reliable Metrics GitHub implementation). It aggregates over **9 methods, 6 seeds for each of 16 ProcGen tasks**, i.e., **864 runs** with 10M steps each. We can observe that **C-CHAIN performs the best on all four metrics** and **outperforms the second-best with no overlap of confidence intervals for Median and IQM, and only a minor overlap for Mean**. Due to the space limitation, **please refer to the concrete markdown table we will upload in the common response or the discussion response after the rebuttal deadline** (the corresponding plot will be added to the revision). **[On the consistency of improvement]** As summarized in Table 1, our method **C-CHAIN (PPO) consistently outperforms vanilla PPO in all 20 environments**. And **C-CHAIN performs the best in 15 out of 16 tasks** (and the second-best in the remaining one) on the continual ProcGen benchmark. In this sense, our method achieves overall consistent improvement. **[On the insights and explanations]** For continual CartPole and continual LunarLander, where L2 Init outperforms C-CHAIN, we provided a possible explanation in Lines 328-365. 
Our insight is that for simple tasks like CartPole and LunarLander, where the agent can find a good policy near parameter initialization, L2 Init performs the best; in contrast, it limits policy learning in settings such as continual ProcGen, where L2 Init is outperformed by C-CHAIN in most tasks. > On the experimental design We **followed the experimental settings used in TRAC** (as also mentioned by the reviewer’s comment “The authors selected standard benchmarking tasks accepted by the community to study loss of plasticity”), and we further **extended the ProcGen tasks from the four used in TRAC to all 16 tasks, as well as continual MountainCar, where TRAC collapses**. **[On the task diversity and the varying difficulties]** Since the tasks differ in game logic, reward density, visual complexity, etc., they provide a range of environments of diverse and varying difficulty, which is a commonly adopted principle in benchmark design. Therefore, we think it is natural for Oracle PPO or vanilla PPO to have different levels of performance across different tasks. > On the math derivation **[How did both $\nabla_{\theta} f_{\theta}(\bar x)$ and $\nabla_{\theta} f_{\theta}(x)$ become $G_{\theta}$?]** We refer the reviewer to the definition of the NTK matrix (i.e., the first line for the entry definition and the second line for the matrix definition in vector form) in Equation 2. As in the second line of Equation 4, $\nabla_{\theta} f_{\theta}(\bar x)^{\top} \nabla_{\theta} f_{\theta}(x)$ becomes $N_{\theta}(\bar x, x)$ by the definition in Equation 2. Then, Equation 5 is the vector form of Equation 4; correspondingly, the entry definition $N_{\theta}(\bar x, x)$ becomes $N_{\theta} = G_{\theta}^{\top}G_{\theta}$ as in Equation 2. **[The transition from Equation 4 to Equation 5, about the diagonal $S$ matrix]** As mentioned above, Equation 5 is the vector form of Equation 4. 
Therefore, the sampling $x \sim B_{train}$ is replaced by the $S$ matrix in Equation 5. More specifically, $S$ has the same size as $N_{\theta}$, i.e., $|X|$ by $|X|$. As mentioned in Lines 155-157, $S$ is a diagonal matrix, and its diagonal is $\{0,1\}$-binary, with 1 for the sampled data and 0 for the non-sampled. We appreciate the reviewer for pointing this out. We will make this clearer as suggested by the reviewer. > On the evaluation of C-CHAIN in continual supervised learning We provided two explanations in Section 5.3. **A significant difference** to note is that **RL suffers from the chain of churn**, formally characterized in (Tang and Berseth, NeurIPS 2024), which stems from the iterative policy improvement and policy evaluation nature (i.e., the Generalized Policy Iteration paradigm), **while SL does not**, as it learns from static labels. This means that the exacerbation of churn during the process of NTK rank loss on both the policy network side and the value network side can further (negatively) affect each other due to the chain effect. Since C-CHAIN is proposed from the perspective of churn, this explains to some extent why C-CHAIN is not as effective for CSL.
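As a supplementary numerical check of the entry-form to vector-form transition discussed in this rebuttal (toy gradients; any per-sample weighting carried by the actual Equations 4 and 5 is omitted, so this only illustrates how the binary diagonal $S$ replaces explicit sampling from $B_{train}$):

```python
import numpy as np

rng = np.random.default_rng(2)
num_params, num_data = 16, 6
G = rng.normal(size=(num_params, num_data))   # columns: per-sample gradients
N = G.T @ G                                   # NTK matrix, N = G^T G

# Diagonal selection matrix S: 1 for data sampled into B_train, 0 otherwise.
sampled = np.array([1, 0, 1, 0, 0, 1])
S = np.diag(sampled)

# Entry form: for each x_bar, sum N(x_bar, x) over the sampled x in B_train.
entry_form = np.array([sum(N[i, j] for j in range(num_data) if sampled[j])
                       for i in range(num_data)])
# Vector form: N S zeroes the non-sampled columns; summing columns matches.
vector_form = (N @ S).sum(axis=1)

print(np.allclose(entry_form, vector_form))  # True
```

The binary diagonal of $S$ simply masks the columns of $N_{\theta}$ corresponding to data outside $B_{train}$, which is why the explicit sampling can be absorbed into the matrix product.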
Summary: The manuscript investigates the loss of plasticity in continual reinforcement learning (CRL) from the perspective of churn. The authors establish a connection between plasticity loss and churn through the Neural Tangent Kernel (NTK) framework, demonstrating that churn exacerbation correlates with the rank decrease of the NTK matrix. To address this, the authors propose the Continual Churn Approximated Reduction (C-CHAIN) method, which reduces churn during training, mitigating plasticity loss. Empirical results on OpenAI Gym Control and ProcGen benchmarks show that C-CHAIN outperforms baseline methods in most environments. The manuscript also includes theoretical analyses, experimental evaluations, and discussions on the broader implications of churn reduction in continual learning. Claims And Evidence: The manuscript provides empirical evidence supporting its claims, particularly in demonstrating the efficacy of C-CHAIN in mitigating plasticity loss and improving performance in CRL tasks. However, the connection between plasticity loss, churn, and the NTK matrix rank is not established with sufficient mathematical rigor. For example, while Section 4.2 discusses the interplay between these factors in CRL, the arguments lack precise mathematical formulations. Additionally, the manuscript does not clearly delineate whether the theoretical insights are specific to reinforcement learning or applicable to other continual learning paradigms. Methods And Evaluation Criteria: The proposed C-CHAIN method is well-motivated and aligns with the problem of mitigating plasticity loss in CRL. However, the experiments focus on environments with relatively small task differences, limiting the generalizability of the findings. A broader evaluation across task sequences with greater variability would strengthen the conclusions Theoretical Claims: The manuscript provides theoretical insights into the relationship between churn, NTK rank, and plasticity loss. 
However, the theoretical claims are not fully substantiated. For instance, the discussion in Section 4.2 is somewhat vague, and additional mathematical clarity—such as explicit equations or proofs—would enhance the credibility of the theoretical framework. Experimental Designs Or Analyses: The experimental design is generally sound, with a focus on evaluating C-CHAIN on 20 environments in Gym Control and ProcGen benchmarks. The results demonstrate the method's effectiveness in most environments, particularly those with dynamic task difficulties. However, the experiments do not include task sequences composed of diverse environments, which would better illustrate the method's ability to handle significant plasticity loss. The authors should also clarify the rationale behind the choice of baseline methods. Supplementary Material: The supplementary material is comprehensive, providing implementation details, additional experimental results, and NTK analyses. However, the manuscript does not adequately discuss certain discrepancies observed in the supplementary figures, such as the divergent results in specific environments (e.g. Leaper in Figure 11). Addressing these inconsistencies would strengthen the overall presentation. Relation To Broader Scientific Literature: The manuscript is well-positioned within the broader literature on continual reinforcement learning and plasticity loss. It builds upon prior work on nonstationarity, catastrophic forgetting, and the loss of plasticity in neural networks. The discussion of related work is thorough. Essential References Not Discussed: The manuscript adequately cites most key references in the field. Other Strengths And Weaknesses: Strengths: The manuscript explores an important problem in CRL, providing both theoretical insights and practical solutions. The proposed method is novel and demonstrates strong empirical performance in most tested environments. 
The experimental evaluation is extensive, covering a wide range of environments and including analysis studies. Weaknesses: The theoretical analysis lacks mathematical rigor, particularly in establishing the connection between churn, NTK rank, and plasticity loss of CRL. The experimental results are limited to environments with relatively small task differences, reducing the generalizability of the findings. Certain figures, such as Figure 1, lack clear explanations, which detracts from the clarity of the manuscript. Other Comments Or Suggestions: Improve the clarity of Figure 1 by providing a detailed explanation of its purpose and implications. Introduce a direct metric for quantifying plasticity loss to better evaluate the effectiveness of C-CHAIN. Expand the experimental evaluation to include task sequences with greater variability and diversity. Clarify whether the theoretical insights are specific to CRL or applicable to other continual learning paradigms. Questions For Authors: Can the authors provide a more precise mathematical formulation of the connection between churn, NTK rank, and plasticity loss? How does this connection specifically apply to reinforcement learning? Are the theoretical insights on churn reduction in CRL applicable to supervised continual learning? If so, can the authors provide evidence or discussion to support this claim? How does C-CHAIN address the trade-off between plasticity and stability in continual learning? Does it mitigate catastrophic forgetting while improving plasticity? Can the authors evaluate C-CHAIN on task sequences composed of diverse environments, similar to those used in "Loss of Plasticity in Continual Deep Reinforcement Learning" (2023)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s valuable comments and the recognition of our method and experiments. Our response aims to address these aspects in detail. > On the experimental setups, task difference, and additional CRL setups **[Clarification on environment choice and task difference]** We **followed the experimental setting of the TRAC paper** (Muppidi et al., NeurIPS 2024) and took it as a major baseline. For a more comprehensive experimental comparison, we **added MountainCar (where TRAC fails almost entirely)** to the continual Gym control suite, and we **extended the 4 ProcGen tasks used in TRAC to all 16 tasks** provided by the ProcGen suite. These tasks are representative CRL scenarios with **semantically related tasks**, and they are challenging for the vanilla PPO agent in the sense that apparent degradation occurs as learning proceeds (for most of them). **[Additional experimental setups in DMC]** As suggested by the reviewer, instead of chaining totally different Atari games (Abbas et al., 2023), we **additionally established two continual RL settings in the DeepMind Control (DMC) suite**: - **Continual Walker**: chain Walker-stand, Walker-walk, Walker-run - **Continual Quadruped**: chain Quadruped-walk, Quadruped-run, Quadruped-walk (repeated, because only two Quadruped tasks are available in DMC). Similarly, we compare PPO Oracle, PPO Vanilla, and C-CHAIN in both settings. We run 1M steps for each task, i.e., 3M in total for each setting. The results are averaged over 12 random seeds for each configuration, as shown below (the corresponding learning curves will be added to our revision).
| Method/Task | Walker/Stand-Walk-Run | Quadruped/Walk-Run-Walk | |--------------|--------------|--------------| | PPO (Oracle) | 395.971 $\pm$ 8.116 | 234.529 $\pm$ 19.193 | | PPO (Vanilla) | 305.199 $\pm$ 18.519 | 250.153 $\pm$ 26.385 | | C-CHAIN | 472.828 $\pm$ 17.865 | 314.510 $\pm$ 44.321 | We can observe that **C-CHAIN improves PPO in continual DMC**. Actually, from the learning curves (will be added in the revision), we found PPO learns more slowly with a decreased slope in the second and the third task, and C-CHAIN learns faster and achieves higher scores. > On the choice of baseline methods **[Existing evidence in prior work]** For the choice of other baseline methods, as we mentioned in Line 315-325, TRAC outperforms Concatenated ReLU (Abbas et al., 2023), EWC (Schwarz et al., 2018), Modulating Masks (Nath et al., 2023) in their paper, as well as LayerNorm (Lyle et al., 2024) in Figure 15 of TRAC paper (i.e., LayerNorm Adam). Besides, more related methods (e.g., ReDo) are compared with L2 Init (i.e., Regen Reg) in ProcGen CoinRun in Figure 5 of (Juliani and Ash, NeurIPS 2024). The results show that L2 Init performs comparably with ReDo and LayerNorm. **[Additional results for three more baselines and reliable aggregate evaluation]** Please refer to the response to Reviewer cU5H, due to the space limitation. > On a more precise mathematical formulation of the connection between churn, NTK rank, and plasticity loss In this paper, we are more focused on using formal expressions to describe, analyze and dig out intuitive insights. To give a thorough and rigorous theory, we need (at least) definitions and assumptions from three aspects: - **[RL-oriented definition of plasticity]** To our knowledge, the existing formal definition of plasticity (i.e., Target-fitting capacity, Definition 1 in Clare et al., 2022) is based on a general family of objective functions, which is too vague and broad for RL. 
This is why a direct plasticity loss metric was not used in our paper; instead, we used the rank loss of the NTK, since it is the best empirical practice in the literature (Clare Lyle et al., 2024). One possible RL-oriented definition could be built upon the Value Improvement Path (Definition 1, Dabney et al., 2021). - **[Proper assumptions on deep RL]** A rigorous theoretical analysis of the deep RL learning process is challenging, as it requires concrete assumptions on the network structure, optimization, and objective function. Theoretical analysis methods and results under practical assumptions are still lacking in deep RL. - **[Proper assumptions on the continual learning setting]** In the continual RL literature, the task sequence and switch scheme are usually manually designed for experimentation; few formal definitions or assumptions about the task distribution and task switches have been established so far. Due to the lack of these theoretical foundations, providing a thorough and rigorous theory is out of the scope of this paper. But as suggested, we will update the discussion in Section 4 to provide more formal support and additional discussion of the connections between churn, the NTK, and plasticity.
However, incorporating task sequences composed of completely different Atari games could significantly demonstrate the robustness of the proposed approach. This would provide a clearer picture of C-CHAIN's effectiveness across a broader spectrum of task variability. This can also serve as a direction for further improvement in the future. Regarding the updates in Section 4, could you please provide a brief overview of the additional discussions and formal support you plan to include? I look forward to further discussions on the remaining questions, particularly on the broader applicability of the theoretical insights and the trade-off between plasticity and stability in continual learning. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s very careful and constructive comments. We will provide additional experimental results and the responses to the remaining questions below. --- > **Additional experiment on Continual MinAtar** As suggested by the reviewer, we built **a continual learning setting based on MinAtar** (Young and Tian, 2019), which has the same game features as Atari but allows faster training/evaluation. We chained three tasks: **Space_Invaders -> Asterix -> Seaquest**, by padding the observation space to be $[10, 10, 10]$. We ran 1.5M steps for each task, i.e., 4.5M in total. **We used DoubleDQN as the base agent, and applied C-CHAIN to the training of the value network**, with no modification to the hyperparameters of DoubleDQN. The results over 12 seeds are shown below. | Method/Task | Space_Invaders/Asterix/Seaquest | |--------------|--------------| | DoubleDQN (Vanilla) | 22.044 $\pm$ 0.733 | | **C-CHAIN** | **29.513 $\pm$ 0.682** | The results show that **C-CHAIN also improves the continual learning performance upon DoubleDQN in a sequence of totally different tasks**. We will include these results in our paper to further strengthen our experiments. 
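The observation-space padding used to chain the MinAtar tasks can be sketched as below; the target shape $[10, 10, 10]$ is from the rebuttal, while the per-task channel counts are hypothetical:

```python
import numpy as np

TARGET_SHAPE = (10, 10, 10)  # common (H, W, C) shape used to chain the tasks

def pad_observation(obs, target_shape=TARGET_SHAPE):
    """Zero-pad an (H, W, C) observation to a fixed target shape so a single
    network can be trained across tasks with different channel counts."""
    assert all(o <= t for o, t in zip(obs.shape, target_shape))
    padded = np.zeros(target_shape, dtype=obs.dtype)
    padded[:obs.shape[0], :obs.shape[1], :obs.shape[2]] = obs
    return padded

# hypothetical per-task channel counts; MinAtar grids are 10x10
obs_a = np.ones((10, 10, 6))
obs_b = np.ones((10, 10, 10))
padded_a = pad_observation(obs_a)
padded_b = pad_observation(obs_b)
```

The zero channels are simply ignored by tasks that do not use them, which lets one value network be carried across the whole task sequence.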
> **Additional experiment on Continual DMC-Dog** In addition to the two continual DMC settings, we provide the results for **Dog-stand -> Dog-walk -> Dog-run -> Dog-trot**. Similarly, we ran 1M steps for each task, i.e., 4M in total for Continual Dog, with 12 seeds. The results show **C-CHAIN PPO also outperforms PPO in continual Dog**. | Method/Task | Dog/Stand-Walk-Run-Trot | |--------------|--------------| | PPO (Oracle) | 137.080 $\pm$ 3.133 | | PPO (Vanilla) | 129.744 $\pm$ 5.914 | | **C-CHAIN** | **174.098 $\pm$ 9.169** | > **Regarding the updates in Section 4** In addition to our discussion on the missing theoretical foundations, we will consider adding additional formal analysis based on the theoretical tool in recent linear approximation transfer theory (Springer et al., 2025; Gidel et al., 2019). The brief plan is to extend the (one-shot) transfer setting to continual learning (i.e., continual transfer), where the tasks can be assumed to differ mainly at the singular values under the SVD viewpoint. Then, it would be possible to discuss the rank decrease of the learned features under the continual learning dynamics. Reference: - Springer et al. Overtrained Language Models Are Harder to Fine-Tune. 2025 - Gidel et al. Implicit regularization of discrete gradient dynamics in linear neural networks. 2019 > On the applicability of the theoretical insights to continual supervised learning Our formal analysis in Section 4 uses a general loss function form (or the common MSE form). Thus, the derivations **should apply to both continual RL and continual Supervised Learning**. However, **a significant difference** to note is that **RL suffers from the chain of churn**, formally characterized in (Tang and Berseth, NeurIPS 2024), which stems from the iterative policy improvement and policy evaluation nature (i.e., the Generalized Policy Iteration paradigm), **while SL does not** as it learns from static labels. 
This means that, as NTK rank loss progresses, the exacerbated churn on the policy network side and on the value network side can further (negatively) affect each other due to the chain effect. We need to **note that the focus of this work is on continual RL**, which is more non-stationary and less studied compared to continual SL. In Section 5.3, we provided the evaluation of C-CHAIN on continual SL tasks **as a potentially useful reference for readers who care about continual SL**. The phenomenon that C-CHAIN (which is proposed from the perspective of churn) does not show the same superiority as in continual RL can be explained, to some extent, by the difference between RL and SL mentioned above. We will clarify this as suggested. > On the trade-off between plasticity and stability It is non-trivial to balance the trade-off between plasticity and stability when considering a model with finite capacity. We think that different learning scenarios place more or less emphasis on one side or the other. In this work, we follow the previous literature and focus more on plasticity loss. We do not think that C-CHAIN addresses the trade-off between plasticity and stability (sufficiently well), as it is proposed for plasticity. In principle, according to the continual churn reduction regularization objective and the NTK analysis, we think that C-CHAIN improves stability in the sense that it decorrelates the gradients of different data batches, suppresses passive function changes caused by churn, and mitigates learning degradation and even collapse in continual RL. --- **We believe these additional results and responses can help to address the remaining concerns**. Since this is our last opportunity to respond, **we sincerely hope the reviewer could re-evaluate our work accordingly**.
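To make the churn-reduction idea in this rebuttal concrete, here is a minimal sketch under our own simplifying assumptions (a linear value model and plain gradient descent; this is not the authors' C-CHAIN implementation): before fitting a new batch, record the predictions on a held-out reference batch, then penalize drift away from them during the update.

```python
import numpy as np

def fit_with_churn_reg(w, x_train, y_train, x_ref, lam, steps=50, lr=0.01):
    """Minimize ||X w - y||^2 + lam * ||X_ref w - X_ref w_0||^2 by gradient
    descent; the second term suppresses churn, i.e., prediction changes on
    data outside the current batch."""
    y_ref_old = x_ref @ w                      # predictions before the update
    for _ in range(steps):
        grad = 2 * x_train.T @ (x_train @ w - y_train)
        grad += 2 * lam * x_ref.T @ (x_ref @ w - y_ref_old)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w0 = rng.normal(size=3)
x_train, y_train = rng.normal(size=(8, 3)), rng.normal(size=8)
x_ref = rng.normal(size=(8, 3))

w_plain = fit_with_churn_reg(w0, x_train, y_train, x_ref, lam=0.0)
w_reg = fit_with_churn_reg(w0, x_train, y_train, x_ref, lam=1.0)
churn_plain = np.linalg.norm(x_ref @ w_plain - x_ref @ w0)
churn_reg = np.linalg.norm(x_ref @ w_reg - x_ref @ w0)
```

The regularized update changes reference-batch predictions less, which is the mechanism the rebuttal credits for suppressing passive function changes caused by churn.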
Integrating Intermediate Layer Optimization and Projected Gradient Descent for Solving Inverse Problems with Diffusion Models
Accept (poster)
Summary: This paper proposes a novel algorithm for zero-shot inverse problem solving using diffusion models. The method builds on the recent DMPlug model, which optimizes the input to conform with partial measurements. The authors highlight a key insight: the optimization through the diffusion sampling process can be done more efficiently when applied to each step separately. The paper validates the results of their method, and offers another variant based on PGD. Claims And Evidence: The claims in the paper are valid, but I believe some supporting evidence is missing; see Methods and Evaluation Criteria. Methods And Evaluation Criteria: Overall, the metrics do fit the problem; yet, I believe some issues remain: - The metrics shown do not include image perceptual quality metrics, like FID or KID. Without such metrics, it is hard to assess the realism of the images, or whether they are closer to MMSE estimates given the observed data $\mathbf{y}$. The example in Fig. 5 does in fact show blurred outputs, as is typical of MMSE estimators. - Evaluation datasets are limited in scope. Can the method operate on general images (such as ImageNet)? - How does the method compare to alternative approaches in terms of NFEs? A method that utilizes multiple derivatives of each diffusion step may be computationally prohibitive. Theoretical Claims: I found the theoretical analysis sufficient. Experimental Designs Or Analyses: The experimental design seems valid to me, notwithstanding the concerns raised in ``Methods And Evaluation Criteria''. Supplementary Material: I reviewed the supplementary material. Relation To Broader Scientific Literature: - The extension to DMPlug is relevant and interesting. - While many algorithms for solving inverse problems with diffusion models exist, the optimization-based ones are underrepresented in the field.
Essential References Not Discussed: The authors do not mention the Perception-Distortion Tradeoff [1], which is key in analyzing inverse problem solutions using both distortion and perception metrics. [1] Blau, Yochai, and Tomer Michaeli. "The perception-distortion tradeoff." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. Other Strengths And Weaknesses: The paper is well written and easy to follow. Other Comments Or Suggestions: No other comments. Questions For Authors: Could the authors explain why it is insufficient to only optimize through the last timestep of the diffusion process? Code Of Conduct: Affirmed. Overall Recommendation: 3
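The reviewer's summary, that optimizing through each sampling step separately is cheaper than optimizing through the whole chain, can be illustrated on a linear toy model in the spirit of ILO. This is our own sketch, not the paper's DMILO algorithm: stage 1 optimizes the input through the full composition; stage 2 re-optimizes the intermediate representation through only the last map, a smaller gradient graph and a strictly larger search space than the range of the first map.

```python
import numpy as np

def lsq_gd(M, y, v0, lr, steps):
    """Plain gradient descent on ||M v - y||^2 (the factor 2 is folded into lr)."""
    v = v0.copy()
    for _ in range(steps):
        v -= lr * M.T @ (M @ v - y)
    return v

rng = np.random.default_rng(1)
d_z, d_mid, d_x, d_y = 4, 6, 8, 5
W1 = rng.normal(size=(d_mid, d_z)) / np.sqrt(d_z)    # "g1": input -> intermediate
W2 = rng.normal(size=(d_x, d_mid)) / np.sqrt(d_mid)  # "g2": intermediate -> image
A = rng.normal(size=(d_y, d_x)) / np.sqrt(d_x)       # measurement operator
y = A @ (W2 @ W1 @ rng.normal(size=d_z))             # measurements of a reachable image

# Stage 1: optimize the input through the full composition A g2 g1.
z = lsq_gd(A @ W2 @ W1, y, np.zeros(d_z), lr=0.05, steps=300)
loss_full = np.linalg.norm(A @ W2 @ W1 @ z - y)

# Stage 2: warm-start the intermediate representation at g1(z) and
# optimize it through A g2 only.
m = lsq_gd(A @ W2, y, W1 @ z, lr=0.05, steps=300)
loss_intermediate = np.linalg.norm(A @ W2 @ m - y)
```

Since stage 2 starts from the stage-1 solution and searches a superset, its measurement loss can only stay equal or improve, while each gradient step touches a much smaller computation graph; DMILO additionally allows sparse deviations at each step.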
Rebuttal 1: Rebuttal: Thanks for your recognition of this paper and the valuable comments and suggestions. Our responses to the main concerns are given as follows. (**The metrics shown do not include image perceptual quality metrics, like FID or KID. & Evaluation datasets are limited in scope. Can the method operate on general images (such as ImageNet)?**) Thanks for the comments. We perform the experiments for linear motion deblurring on the ImageNet dataset, and report the FID metric. Due to the time constraint of the rebuttal period, we follow DiffPIR (https://arxiv.org/abs/2305.08995) to calculate the FID on 100 validation images. The following results show that our DMILO method performs well in terms of both realism and reconstruction metrics, demonstrating its ability to effectively balance perceptual quality and distortion. We will present relevant results for more tasks and more datasets in the revised version. **Table D1: Experimental results for the linear motion deblurring task on 100 validation images from ImageNet.** ||LPIPS|PSNR|SSIM|FID| |:---:|:---:|:---:|:---:|:---:| |DiffPIR|0.282|24.79|0.608|115.74| |DMPlug|0.285|25.49|0.696|99.87| |DMILO|0.098|29.67|0.841|53.77| |DMILO-PGD|0.183|27.60|0.755|85.51| (**How does the method compare to alternative approaches in terms of NFEs? As a method that utilizes multiple derivatives of each diffusion step may be computationally prohibitive.**) Our methods build on DMPlug and reduce its computational overhead. For instance, in the inpainting task, our methods outperform DMPlug while using significantly fewer NFEs. Specifically, our methods need only 3,000 NFEs in total, compared to the 15,000 NFEs required by DMPlug. Even when the number of NFEs is the same, our methods are computationally more efficient. This is because our methods employ a smaller gradient graph, which lessens the burden of gradient computation.
In Table D2 below, we present the computational cost of reconstructing a validation image from the CelebA dataset for different methods for inpainting using an NVIDIA RTX 4090 GPU. The results demonstrate that our methods require less computational time than DMPlug. **Table D2: Computation cost for different approaches.** ||DDRM|DPS|$\Pi$GDM|RED-diff|DMPlug|**DMILO**|**DMILO-PGD**| |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |NFE|20|1000|50|50|15000|3000|3000| |Time (s)|1|40|2|1|925|150|151| (**The authors do not mention the Perception-Distortion Tradeoff [1], which is key is analyzing inverse problem solutions using both distortion and perception metrics.**) Thank you for pointing this out. In the revised version, we will cite the mentioned paper [1] for the Perception-Distortion Tradeoff, and comprehensively explain the performance of our methods in terms of both distortion and perceptual quality. (**Could the authors explain why it is insufficient to only optimize through the last timestep of the diffusion process?**) Thank you for the insightful question. We conduct an ablation study on only optimizing through the last timestep of the diffusion process and find that it also works, although with slightly degraded reconstruction performance (see Table D3 below). We conjecture that the degraded reconstruction performance is primarily attributed to improper initialization. Specifically, when optimizing only through the last timestep of the diffusion process, in principle, the procedure should be initialized with a vector that lies within the range of the composition of functions corresponding to all timesteps except the last one. However, identifying such an initial vector seems to be a challenging task. We plan to delve deeper into this aspect in future research. We believe this insufficiency arises because a single sampling step may introduce errors in detail, which are difficult to correct with only sparse deviations and accumulate during optimization. 
**Table D3: Experimental results for the inpainting task on 100 validation images from CelebA.** ||LPIPS|PSNR|SSIM|FID| |:---:|:---:|:---:|:---:|:---:| |DMPlug|0.066|35.51|0.935|49.98| |DMILO|0.025|36.07|0.951|19.34| |DMILO-LTS|0.041|34.22|0.934|25.54| |DMILO-PGD|0.023|36.42|0.952|19.08| |DMILO-PGD-LTS|0.031|34.32|0.937|19.46| ("LTS" denotes methods that optimize only through the last timestep of the diffusion process.) --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. I do not find that the concerns I have raised have been resolved by the authors' answers. I maintain the original recommendation, as I lean towards accepting the paper. --- Reply to Comment 1.1.1: Comment: Thank you for the update. Although we are unsure about which specific concerns remain unresolved, we are truly grateful for your positive attitude regarding the acceptance of our paper.
Summary: This paper introduces two novel methods, DMILO and DMILO-PGD, to address computational and convergence challenges in solving inverse problems (IPs) using diffusion models (DMs). Claims And Evidence: The core claims, such as memory efficiency and improved convergence, are not supported by experiments. Thus, the claims are not convincing. Methods And Evaluation Criteria: Yes. Theoretical Claims: ### 1. Simplified Composition - The theorem assumes **G = g₁ ∘ g₂**, whereas DMs involve **multi-step compositions** (*g₁ ∘ g₂ ∘ ⋯ ∘ gₙ*). The analysis does not explicitly address whether the bound generalizes to *N > 2*, leaving open questions about scalability. ### 2. Practical Relevance of Assumptions - The **Lipschitz continuity of g₁** may not hold strictly for real-world DMs due to non-linearities in neural networks. However, this is a common simplification in theoretical analyses of DMs (Chen et al., 2023; Li & Yan, 2024). ### 3. Sparse Deviation Regularization - The theorem uses **ℓ₁-regularization** for *ν*, but the paper’s experiments employ Adam optimization without explicit guarantees of sparse recovery. This creates a gap between theory (exact **ℓ₁** minimization) and practice (approximate optimization). Experimental Designs Or Analyses: The paper addresses the solution of inverse problems. While both linear and nonlinear inverse problems are considered, the selection of problems is rather limited and primarily confined to natural images. It is recommended that the authors expand their scope to include inverse problems in other modalities, such as linear sparse-view CT reconstruction and nonlinear metal artifact reduction in CT imaging. Such an expansion would enhance the study's credibility and practical applicability. Supplementary Material: Yes. All. Relation To Broader Scientific Literature: The authors proposed DMILO and DMILO-PGD, integrating ILO without and with PGD, respectively.
The authors offered an intuitive theoretical analysis of the proposed approach. Essential References Not Discussed: No. Other Strengths And Weaknesses: 1. The abstract is overly lengthy and contains redundant information. It should be more concise and focused on the key contributions and findings. Additionally, the inclusion of citations within the abstract is unconventional and should be removed. A well-structured abstract should briefly introduce the problem, describe the proposed methods, and summarize the main results without excessive detail or references. 2. The experimental setup is limited and does not comprehensively address the claimed issues of heavy computational demands and suboptimal convergence in DM-based methods. While the paper demonstrates reduced memory usage, it lacks detailed analysis of computational efficiency and convergence behavior. More rigorous experiments are needed to validate these claims. 3. The selected inverse problems (super-resolution, inpainting, nonlinear deblurring, and blind image deblurring) are limited in scope and lack diversity in modalities (e.g., medical imaging, audio, or 3D data). Including a broader range of tasks and modalities would strengthen the generalizability of the proposed methods and better demonstrate their applicability to real-world scenarios. 4. The proposed methods, DMILO and DMILO-PGD, exhibit noticeable performance fluctuations across different tasks. The authors should provide a detailed analysis of why these fluctuations occur and whether they are related to specific properties of the tasks. 5. The paper does not adequately discuss the limitations of the proposed methods. For example: Can the methods handle highly ill-posed problems or tasks with significant noise? Other Comments Or Suggestions: None. Questions For Authors: None. Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your useful comments and questions. Our responses to the main concerns are given as follows. (**The analysis does not explicitly address whether the bound generalizes to N > 2, leaving open questions about scalability.**) We follow the work for ILO (Daras et al., 2021) to set $N = 2$. We believe that this is sufficient for an intuitive theoretical analysis of the effectiveness of our methods (e.g., Reviewer ve9x found that our claims are “well-supported by extensive empirical experiments and intuitive theoretical analysis”, Reviewer D1CM found that our theoretical analysis “justifies the effectiveness of the proposed methods”, and Reviewer oTx5 found “the theoretical analysis sufficient”). (**The theorem uses ℓ₁-regularization for ν, but the paper’s experiments employ Adam optimization without explicit guarantees of sparse recovery. This creates a gap between theory (exact ℓ₁ minimization) and practice (approximate optimization).**) Firstly, it should be noted that even the globally optimal solutions to the $\ell_1$ minimization problem might not exhibit sparsity. Secondly, given that the objective function of the $\ell_1$ minimization problem is highly non-convex and obtaining its globally optimal solutions is not feasible, in practical applications, we employ the Adam optimizer to approximately solve the $\ell_1$ minimization problem. (**It is recommended that the authors expand their scope to include inverse problems in other modalities, such as linear sparse view CT reconstruction and nonlinear metal artifact reduction in CT imaging. The selected inverse problems (super-resolution, inpainting, nonlinear deblurring, and blind image deblurring) are limited in scope and lack diversity in modalities (e.g., medical imaging, audio, or 3D data).**) We agree that tasks such as sparse-view CT reconstruction and metal artifact reduction in CT imaging are of practical importance. 
However, prior works closely related to our study, such as DDRM, DPS, $\Pi$GDM, DiffPIR, and DMPlug, have concentrated their experiments on natural images for tasks including super-resolution, inpainting, nonlinear deblurring, and blind image deblurring. As is evident from the fact that these recent papers have been published in the topmost venues in ML or CV, and/or have been highly cited, this line of investigation has been widely accepted in the community and is a highly active area of research. We believe that extending the experiments to tasks like linear sparse-view CT reconstruction and multi-modal data such as medical imaging, audio, or 3D data is beyond the scope of the current work. (**The abstract is overly lengthy and contains redundant information. It should be more concise and focused on the key contributions and findings. Additionally, the inclusion of citations within the abstract is unconventional and should be removed.**) Thank you for the comment. In the revised version, we will shorten the abstract to enhance its conciseness, highlight key contributions, and remove the citation. (**The experimental setup is limited and does not comprehensively address the claimed issues of heavy computational demands and suboptimal convergence in DM-based methods.**) Our methods build on DMPlug and reduce its computational overhead. Due to the character limit, please refer to Table D2 in our responses to Reviewer oTx5 for an illustration. As PGD is known for its capacity to alleviate the problem of suboptimal convergence (Shah & Hegde, 2018), the effectiveness of addressing suboptimal convergence in DM-based methods can be observed from the experimental results presented in Table 2 in the main document. Specifically, for super-resolution and inpainting tasks, DMILO-PGD yields the best reconstructions, thereby demonstrating this advantage. (**The proposed methods, DMILO and DMILO-PGD, exhibit noticeable performance fluctuations across different tasks. 
The authors should provide a detailed analysis of why these fluctuations occur and whether they are related to specific properties of the tasks.**) We found that for linear deblurring and BID tasks, DMILO achieves the best reconstruction performance, whereas DMILO-PGD gives comparatively inferior results. As mentioned in Section 5.3, performance fluctuations may arise from the naive gradient update, which may not be well-suited for all tasks. Nevertheless, across most tasks, both DMILO and DMILO-PGD show competitive performance. We leave the detailed analysis of why these fluctuations occur and whether they are related to specific properties of the tasks to future work. (**The paper does not adequately discuss the limitations of the proposed methods. For example: Can the methods handle highly ill-posed problems or tasks with significant noise?**) We primarily follow the experimental settings of DMPlug, which does not deal with highly ill-posed problems or tasks with significant noise. We leave the research on these aspects to future work. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns by placing them in the Future Work. However, these issues are fundamental to the completeness of the current study rather than merely potential future extensions. The manuscript in its present version lacks key components necessary for acceptance. I keep my previous score. --- Reply to Comment 1.1.1: Comment: Thank you for the update. We appreciate that the evaluation of a paper's value can vary among readers. For the convenience of the whole reviewing team, regarding your comment “these issues are fundamental to the completeness of the current study rather than merely potential future extensions,” our responses are as follows: In our previous responses, we stated that: (1) Extending our work to handle tasks related to CT and multi-modal data such as medical imaging, audio, or 3D data is beyond the scope of the current work. 
(2) The intuitive theoretical analysis for the case $N=2$ is sufficient. (3) Handling highly ill-posed problems or tasks with significant noise, as well as conducting a detailed analysis of the performance fluctuations of DMILO and DMILO-PGD across different tasks, will be left to future work. We disagree that “these issues are fundamental to the completeness of the current study,” as detailed below. (1) **Task and modality extension**: The corresponding criticisms are not unique to our paper. They apply to most literature in this field, including DDRM (Kawar et al., 2022), DPS (Chung et al., 2022), ​ $\Pi$GDM (Song et al., 2023), DiffPIR (Zhu et al., 2023), and the most relevant work DMPlug (Wang et al., 2024). These recent papers have been published in topmost ML or CV venues and/or have high citation counts (also shown on the list below), indicating that this line of investigation has been widely accepted in the community and is a highly active area of research. Thus, we believe these extensions are not fundamental to the completeness of the current study and are better addressed in a separate dedicated work. (2) **Extension of the intuitive theoretical analysis**: We follow the nice work for ILO (Daras et al., 2021) to set $N=2$. We believe this suffices for an intuitive theoretical analysis of our methods, and other reviewers concurred. (3) **Extension to highly ill-posed problems or tasks with significant noise & Detailed analysis of the performance fluctuations**: Our experimental setup closely follows that of DMPlug (Wang et al., 2024), which was recently accepted by NeurIPS 2024 without considering highly ill-posed problems or tasks with significant noise. Additionally, through extensive experiments on various tasks (super-resolution, inpainting, linear Gaussian and motion deblurring, nonlinear deblurring, and BID) and datasets (CelebA, FFHQ, LSUN-bedroom, and ImageNet), we have shown that both DMILO and DMILO-PGD perform competitively across most tasks. 
These results sufficiently demonstrate the effectiveness of our proposed methods. In summary, we believe our experimental and theoretical settings are widely accepted in the active research area of using diffusion models to solve imaging inverse problems, and the issues raised by the reviewer are not fundamental to the completeness of our current study. We sincerely hope that the final editorial decision on our submission would be based on the main contributions of this particular paper, rather than on the general limitations in this broad and popular line of works. References: [1] Kawar et al. "Denoising diffusion restoration models." NeurIPS, 2022. [853 citations] [2] Chung et al. "Diffusion posterior sampling for general noisy inverse problems." ICLR, 2023. [741 citations] [3] Song et al. "Pseudoinverse-guided diffusion models for inverse problems." ICLR, 2023. [272 citations] [4] Zhu et al. "Denoising diffusion models for plug-and-play image restoration." CVPR, 2023. [198 citations] [5] Wang et al. "DMPlug: A plug-in method for solving inverse problems with diffusion models." NeurIPS, 2024. [new paper] [6] Daras et al. "Intermediate layer optimization for inverse problems using deep generative models." ICML, 2021. [102 citations]
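The theory–practice gap discussed above concerns the objective $\min_\nu \|\mathcal{A}(g(z)+\nu) - y\|^2 + \lambda \|\nu\|_1$, which the paper optimizes approximately with Adam. As a self-contained illustration (our own toy problem, with a linear stand-in for the generator output and proximal gradient / ISTA soft-thresholding in place of Adam), sparse-deviation recovery looks like this:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_deviation(A, base, y, lam=0.1, lr=0.1, steps=1000):
    """Approximately solve min_nu ||A (base + nu) - y||^2 + lam * ||nu||_1
    with proximal gradient descent (ISTA)."""
    nu = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2 * A.T @ (A @ (base + nu) - y)
        nu = soft_threshold(nu - lr * grad, lr * lam)
    return nu

rng = np.random.default_rng(0)
d, m = 12, 60
A = rng.normal(size=(m, d)) / np.sqrt(m)
base = rng.normal(size=d)                  # stands in for g(z), the generator output
true_dev = np.zeros(d)
true_dev[3] = 1.5                          # a genuinely sparse deviation
y = A @ (base + true_dev)
nu = sparse_deviation(A, base, y)
```

Even with exact $\ell_1$ proximal steps, the recovered deviation carries a shrinkage bias on the active coordinate, consistent with the rebuttal's point that global minimizers of the penalized problem need not be exactly sparse.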
Summary: The paper proposes DMILO and DMILO-PGD, two novel methods for solving inverse problems using diffusion models. DMILO introduces Intermediate Layer Optimization (ILO) to reduce memory burden while improving reconstruction by allowing model variations. DMILO-PGD further integrates Projected Gradient Descent (PGD) to mitigate the lack of measurement fidelity in DMILO. The authors provide a theoretical analysis under certain conditions, demonstrating the effectiveness of their methods. Experiments across multiple linear and nonlinear inverse problems show significant improvements over state-of-the-art approaches in terms of memory efficiency and reconstruction quality. Claims And Evidence: The main claims and evidence are the following: **Claim1:** Theoretical analysis justifies the effectiveness of the proposed methods. **Evidence1:** The authors provide a low-dimensional manifold assumption, a Set-Restricted Eigenvalue Condition (S-REC), and a theorem (Theorem 4.4) proving that the learned measurement optimum is close to the true optimum under certain conditions (Section 4). **Claim2:** The proposed method improves over state-of-the-art methods for solving inverse problems. **Evidence2:** Extensive experiments are performed over a wide range of tasks and confirm the authors' claims. Methods And Evaluation Criteria: Methods and evaluation criteria (e.g. metrics) are appropriate and convincing. Theoretical Claims: I did check the proof of Theorem 4.4 and did not spot any mistake. Experimental Designs Or Analyses: I did check the experimental designs and analyses; they are carried out carefully. However, some comparisons are not entirely fair for linear IPs. In particular, Table 2 should be completed with linear deblurring (Gaussian or motion), and baselines such as DPIR https://arxiv.org/abs/2008.13751 and/or DiffPIR https://arxiv.org/abs/2305.08995, which are stronger baselines than e.g. RED-diff or DPS (in my experience).
Supplementary Material: I did read the supplementary material, in particular the proof of Theorem 4.4. Relation To Broader Scientific Literature: Relation to the scientific literature is well done for comparable methods, but related works on other approaches (non-diffusion, e.g. PnP algorithms, despite being relevant, in particular in the context of implicitly learned priors) are missing; see the landmark algorithms https://arxiv.org/abs/2008.13751, https://arxiv.org/abs/2305.08995. Essential References Not Discussed: https://arxiv.org/abs/2008.13751 https://arxiv.org/abs/2305.08995. Other Strengths And Weaknesses: Overall, this is a good paper with convincing results. Its main strength is its theoretical side. Its main limitation is its incremental aspect, as well as missing baselines that I think would make the work even more convincing. **Strengths:** 1. The approach, although incremental, is interesting 2. The theoretical claims are rigorously justified 3. Experimental results are convincing 4. The paper is well written **Weaknesses:** 1. This is an incremental work 2. Some important linear inverse problems are missing, e.g. motion or Gaussian deblurring 3. Comparisons with other relevant methods could be included, in particular in the relatively easy case considered by the authors (i.e. deblurring with low noise level), where other baselines work particularly well (e.g. PnP; see the DPIR/DiffPIR mentioned above). Other Comments Or Suggestions: None Questions For Authors: - Would the authors be able to add comparisons to DPIR/DiffPIR? This is the reason for my rating, which I'd be happy to reconsider. EDIT: this has been addressed by the authors. - I disagree with the authors in Paragraph 3.2 when they state: "A key difference from conventional PGD is that we minimize $\|\|\mathcal{A}(\mathcal{G}(x))-\mathcal{A}(x)\|\|^2$ (notations simplified) rather than $\|\|\mathcal{G}(x) - x\|\|^2$ (again, notations simplified)." 
I think that this sentence is not very accurate: PGD minimizes the sum of two terms (here the authors mention only one), and PGD can be used to minimize any type of function - with or without $\mathcal{A}$. Furthermore, the difference between DMILO and DMILO-PGD lies (mainly) in step 4 of the PGD version, which is the usual gradient term from PGD... If I understand what the authors mean, it is probably wrt the proximal step, which would be written differently (e.g. $\text{argmin} \mathcal{G}(x) + \frac{1}{2}\|x-u\|_2^2$). All in all, what the authors mean in this sentence is a bit vague and could be made more precise. - (optional) Would the authors be able to precisely state the functional that Algorithm 2 minimizes (which is not precisely (12))? If so, this would be an interesting discussion in the appendix. Code Of Conduct: Affirmed. Overall Recommendation: 4
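For reference, the generic PGD structure the reviewer alludes to, a gradient step on a smooth fidelity term followed by a proximal/projection step, can be sketched as follows. This is a minimal illustration of standard PGD, not the paper's Algorithm 2; the operator, step size, and constraint are illustrative.

```python
import numpy as np

def pgd(grad_f, prox_g, x0, step, iters):
    """Generic proximal/projected gradient descent:
    x <- prox_g(x - step * grad_f(x)), i.e. minimize f(x) + g(x)."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x))
    return x

# Illustrative problem: minimize ||A x - y||^2 subject to x >= 0
A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([4.0, 3.0])
grad_f = lambda x: 2.0 * A.T @ (A @ x - y)   # gradient of the fidelity term
prox_g = lambda x: np.clip(x, 0.0, None)     # projection onto {x >= 0}
x_hat = pgd(grad_f, prox_g, np.zeros(2), step=0.05, iters=500)
# x_hat converges to [2, 3], the constrained least-squares solution
```

When $g$ is a regularizer rather than an indicator function, `prox_g` becomes the proximal operator the reviewer writes in closed form above.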
Rebuttal 1: Rebuttal: Thanks for your positive assessment of this paper and the valuable comments and suggestions. Our responses to the main concerns are given as follows. (**Table 2 should be completed with linear deblurring (Gaussian or Motion), and baselines such as DPIR and/or DiffPIR. Would the authors be able to add comparisons to DPIR/DiffPIR?**) We conduct additional evaluations on linear Gaussian and motion deblurring tasks, comparing our methods with DPIR and DiffPIR on CelebA and FFHQ. The results are shown in Tables B1-B4, which reveal that while DiffPIR demonstrates outstanding performance in terms of the perceptual quality metric LPIPS and DPIR shows superiority in image distortion metrics such as PSNR and SSIM for linear Gaussian deblurring, our DMILO method generally attains the best performance across nearly all metrics, especially for linear motion deblurring. Note that we follow DMPlug to set the kernel size to $64 \times 64$, which differs from the $61 \times 61$ kernel size initially employed in DPIR and DiffPIR. Such a difference might impact the results. In the revised version, we will further incorporate comparisons with DPIR and DiffPIR for other applicable tasks. We sincerely hope that these additional comparisons have appropriately addressed your comments, and that you can kindly consider increasing your initial rating accordingly. 
**Table B1: Comparisons of different methods for linear Gaussian deblurring on 100 validation images from CelebA.** ||LPIPS|PSNR|SSIM| |:---:|:---:|:---:|:---:| |DPS|0.109|27.65|0.752| |RED-diff|0.221|29.59|0.808| |DPIR|0.256|**31.30**|**0.861**| |DiffPIR|**0.092**|28.91|0.791| |DMPlug|0.172|29.70|0.776| |DMILO|**0.092**|30.89|0.816| |DMILO-PGD|0.157|30.74|0.811| **Table B2: Comparisons of different methods for the linear Gaussian deblurring task on 100 validation images from FFHQ.** ||LPIPS|PSNR|SSIM| |:---:|:---:|:---:|:---:| |DPS|0.150|25.56|0.717| |RED-diff|0.272|27.15|0.778| |DPIR|0.271|29.06|0.844| |DiffPIR|0.119|26.88|0.769| |DMPlug|0.181|28.27|0.806| |DMILO|**0.110**|**29.60**|**0.852**| |DMILO-PGD|0.176|28.65|0.799| **Table B3: Comparisons of different methods for the linear motion deblurring task on 100 validation images from CelebA.** ||LPIPS|PSNR|SSIM| |:---:|:---:|:---:|:---:| |DPS|0.126|26.62|0.730| |RED-diff|0.229|27.32|0.758| |DPIR|0.192|31.09|0.826| |DiffPIR|0.117|28.35|0.773| |DMPlug|0.164|30.25|0.824| |DMILO|**0.044**|**34.15**|**0.908**| |DMILO-PGD|0.067|33.41|0.884| **Table B4: Comparisons of different methods for the linear motion deblurring task on 100 validation images from FFHQ.** ||LPIPS|PSNR|SSIM| |:---:|:---:|:---:|:---:| |DPS|0.167|24.34|0.676| |RED-diff|0.272|25.40|0.730| |DPIR|0.181|29.67|0.820| |DiffPIR|0.137|26.41|0.740| |DMPlug|0.173|28.58|0.812| |DMILO|**0.044**|**33.21**|**0.909**| |DMILO-PGD|0.079|31.66|0.857| (**I disagree with the authors in Paragraph 3.2 when they state: "A key difference from conventional PGD is that…"**) Thank you for the detailed comment. We will remove the phrase “a key difference” and reword the sentence to enhance its precision. (**Would the authors be able to precisely state the functional Algorithm 2 minimizes (which is not precisely (12))?**) Thank you for the question. In (12), we use $\hat{\mathbf{x}} _ {t _ 0}$ to represent the estimated signal. 
As Algorithm 1 does not provide an explicit form for $\hat{\mathbf{x}} _ {t _ 0}$, we substitute the inaccessible $\mathcal{A}(\hat{\mathbf{x}} _ {t _ 0})$ with the observed vector $\mathbf{y}$. In Algorithm 2, we first calculate $\mathbf{x} _ {t _ 0}^{(e)}$ via simple gradient descent. This $\mathbf{x} _ {t _ 0}^{(e)}$ serves as an explicit approximation of $\hat{\mathbf{x}} _ {t _ 0}$. We then fix $\mathbf{x} _ {t _ 0}^{(e)}$ and use $\mathcal{A}(\mathbf{x} _ {t _ 0}^{(e)})$ in place of $\mathcal{A}(\hat{\mathbf{x}} _ {t _ 0})$ in (12). --- Rebuttal Comment 1.1: Comment: I thank the authors for replying to my points, in particular for the addition of the suggested baselines DPIR and DiffPIR. While the work is incremental, I do not see any reason for rejecting it and therefore have increased my rating to 4 - Accept. --- Reply to Comment 1.1.1: Comment: Thank you for the update. We are truly grateful for the improved rating and your strong endorsement.
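The two-stage substitution the authors describe can be illustrated in the simplest possible setting: a generic linear map standing in for $\mathcal{A}$, and plain gradient descent for the first stage. All names and values below are illustrative stand-ins, not the paper's actual algorithm.

```python
import numpy as np

# Stage 1: obtain an explicit estimate x_e by simple gradient descent
# on ||A x - y||^2 (a linear toy stand-in for the forward operator).
A = np.array([[1.0, 2.0], [3.0, 1.0]])
y = np.array([5.0, 4.0])
x_e = np.zeros(2)
for _ in range(2000):
    x_e -= 0.05 * A.T @ (A @ x_e - y)

# Stage 2: fix the explicit estimate and reuse A(x_e) as a computable
# surrogate for the inaccessible A(x_hat) in the later objective.
surrogate = A @ x_e
# For this invertible toy system, x_e -> A^{-1} y = [0.6, 2.2], so surrogate -> y
```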
Summary: This paper proposes a novel approach for solving inverse problems using diffusion models through an iterative intermediate layer optimization strategy (DMILO). The optimization process is enhanced by introducing sparse deviations off the manifold of the diffusion trajectory, which allows the model to generalize beyond the range of the pre-trained diffusion model. Additionally, the method is further refined using projected gradient descent (PGD) to mitigate the risk of suboptimal convergence. The method achieves superior performance on linear tasks, nonlinear tasks, and blind image deblurring tasks. Claims And Evidence: The paper makes three primary claims: 1. By replacing optimization over the entire deterministic diffusion sampling process, the proposed intermediate-layer optimization reduces memory burden. 2. By leveraging sparse deviations, the approach gains additional flexibility to recover signals outside the range of the diffusion model. 3. The proposed method achieves improved empirical performance compared to baseline approaches. These claims are well-supported by extensive empirical experiments and intuitive theoretical analysis. Methods And Evaluation Criteria: The method is evaluated on several linear inverse problems, nonlinear deblurring, and blind image deblurring tasks using the CelebA and FFHQ datasets. The evaluation metrics include LPIPS, PSNR, and SSIM. Theoretical Claims: A convergence bound is provided in Theorem 4.4. Experimental Designs Or Analyses: The experiment design is sound and valid. Supplementary Material: I have briefly reviewed the supplementary material. Relation To Broader Scientific Literature: This work contributes to the broader field by presenting a memory-efficient and empirically stronger approach for solving inverse problems through iterative diffusion inversion over the deterministic diffusion sampling process. Essential References Not Discussed: I did not identify any critical missing references. 
Other Strengths And Weaknesses: Strengths: 1. The paper is well-motivated, addressing key challenges such as memory burden and suboptimal convergence in prior methods like DMPlug for solving inverse problems with diffusion models. 2. The writing is clear and provides strong intuition behind each design choice. 3. The proposed method consistently improves empirical performance across selected linear and nonlinear tasks. Weaknesses: 1. My main concern is the computational cost of the proposed method. As described in Algorithms 1 and 2, DMILO and DMILO-PGD require solving $J\cdot N$ and $E\cdot N$ optimization problems via backward gradient passes through a **one-step** diffusion model. In contrast, the baseline DMPlug only requires solving a single optimization problem over a **three-step** diffusion model. If the total number of function evaluations (NFEs) is fixed, does the proposed method still achieve better performance? Could the authors provide additional insights into the computational efficiency of their approach? 2. The paper states that the proposed method addresses the memory burden in DMPlug. However, an alternative strategy commonly used in practice is gradient checkpointing, which reduces memory consumption at a slight computational overhead. Did the author compare in terms of memory and computation cost by applying checkpointing to DMPlug? 3. A minor issue is that DMILO-PGD performs worse on the blind image deblurring task, though a possible reason is provided. 4. There is no ablation study on the effect of sparse deviations, which is a major component of the proposed method. Could the authors provide insights into how adding sparse deviations influences performance? Other Comments Or Suggestions: Figure 1 appears inconsistent with its caption. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your recognition of this paper and the valuable feedback and suggestions. Our responses to the main concerns are given as follows. (**My main concern is the computational cost of the proposed method. If the total number of function evaluations (NFEs) is fixed, does the proposed method still achieve better performance? Could the authors provide additional insights into the computational efficiency of their approach?**) Our methods build on DMPlug and reduce its computational overhead. For instance, in the inpainting task, our methods outperform DMPlug while using significantly fewer NFEs. Specifically, our methods need only 3,000 NFEs in total, compared to the 15,000 NFEs required by DMPlug. Even when the number of NFEs is the same, our methods are computationally more efficient. This is because our methods employ a smaller gradient graph, which lessens the burden of gradient computation. In Table A1 below, we present the computational cost of reconstructing a validation image from the CelebA dataset for different methods for inpainting using an NVIDIA RTX 4090 GPU. The results demonstrate that our methods require less computational time than DMPlug. **Table A1: Computation cost for different approaches.** ||DDRM|DPS|$\Pi$GDM|RED-diff|DMPlug|**DMILO**|**DMILO-PGD**| |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |NFE|20|1000|50|50|15000|3000|3000| |Time (s)|1|40|2|1|925|150|151| (**Did the author compare in terms of memory and computation cost by applying checkpointing to DMPlug?**) Following your suggestion, we compare in terms of memory and computation cost by applying gradient checkpointing to DMPlug (see Table A2 below). The gradient checkpointing strategy effectively reduces the memory burden significantly. However, it simultaneously increases the computation cost because of the overhead associated with saving and loading gradients. 
In contrast, our methods manage to decrease both the memory and computation costs compared to DMPlug. **Table A2: Memory and computation cost for different approaches.** ||DMPlug|DMPlug-Ckpt|**DMILO**|**DMILO-PGD**| |:---:|:---:|:---:|:---:|:---:| |NFE|15000|15000|3000|3000| |Time (s)|925|1256|150|151| |Memory (GB)|6.94|3.01|3.33|3.34| ("DMPlug-Ckpt" denotes DMPlug with gradient checkpointing) (**There is no ablation study on the effect of sparse deviations, which is a major component of the proposed method. Could the authors provide insights into how adding sparse deviations influences performance?**) Thank you for your insightful comment. We perform an ablation study on the impact of sparse deviations in the super-resolution task on the CelebA dataset (see Table A3 below). The results demonstrate the effectiveness of adding sparse deviations. In our understanding, adding sparse deviations not only broadens the range of diffusion models but also alleviates error accumulation resulting from inaccurate intermediate calculations, thus leading to improved reconstruction performance. **Table A3: Ablation study on the effect of sparse deviations for the super-resolution task on 100 validation images from CelebA.** ||LPIPS|PSNR|SSIM| |:---:|:---:|:---:|:---:| |DMPlug|0.127|32.38|0.875| |DMILO (w/)|0.133|30.81|0.785| |DMILO (w/o)|0.202|29.23|0.699| |DMILO-PGD (w/)|0.056|33.58|0.906| |DMILO-PGD (w/o)|0.173|32.07|0.870| ("w/" denotes methods employing sparse deviations, and "w/o" denotes methods without adding sparse deviations.) (**Figure 1 appears inconsistent with its caption.**) Thank you for pointing this out. In the revised version, we will correct this to make Figure 1 consistent with its caption.
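For intuition on why an additive sparse deviation can extend a generator's range: with an $\ell_1$ penalty on the deviation (as in ILO-style formulations), the optimal deviation for a fixed generator output has a closed form given by soft-thresholding the residual. The toy sketch below is purely illustrative; `base`, `target`, and `lam` are made-up stand-ins, not values from the paper.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1: zeroes small entries, keeping the result sparse."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

base = np.array([2.0, 2.0, 2.0, 2.0])      # stand-in for a generator output G(z)
target = np.array([2.1, 1.9, 2.0, 5.0])    # true signal: small noise plus one large spike

# Minimizer of 0.5 * ||base + nu - target||^2 + lam * ||nu||_1:
nu = soft_threshold(target - base, lam=0.2)
# nu = [0, 0, 0, 2.8]: only the large localized error survives, so base + nu
# corrects the spike even though it lies outside the generator's range
```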
Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options
Accept (poster)
Summary: The paper introduces the Flow-of-Options (FoO) framework, a structured reasoning approach for large language models (LLMs) that systematically generates and evaluates multiple decision options at each step. Instead of following a single reasoning path, FoO constructs a directed acyclic graph (DAG) where each node represents an option, and edges capture transitions between different decisions. The framework is applied to a range of tasks, including machine learning automation, therapeutic chemistry, reinforcement learning, and symbolic reasoning, aiming to improve decision-making by diversifying the exploration process. Additionally, FoO integrates case-based reasoning (CBR) to reuse past solutions for efficiency. The proposed approach is tested across multiple domains, comparing its performance against existing LLM-based agent systems and AutoML frameworks. Claims And Evidence: This paper makes several claims about the FoO framework, covering performance, structured reasoning, generalization, and computational cost. The authors use a variety of tasks and other frameworks for comparison, showing a robust advantage in performance and in the capacity for structured reasoning. However, regarding computational cost, simply comparing LLM costs may be oversimplistic. Though the paper claims each task costs less than $1, implementing and running solutions in parallel can sometimes be very computationally intensive as well. Fine-tuning a language model is also expensive, but it may directly or quickly provide a good enough solution without testing all possible candidates. Therefore, it's hard to say which one is more computationally efficient. The authors also claim that such a framework has stronger generalizability. However, this framework does not generate new insights. By asking the LLM to list out options, this framework essentially implements these options and validates their performance to choose the best one. 
This will largely depend on the capacity of the base LLM. If the LLM does not work creatively or faces a poorly represented or out-of-distribution task scenario, this would largely limit the performance of the framework on more challenging, unsolved, and even unseen tasks. Though other frameworks may not overcome this difficulty either, an agent workflow should not only validate its actions but also learn from a closed feedback loop to refine its policy, which is lacking in the current paper. Methods And Evaluation Criteria: The paper adopts a wide range of tasks, and the evaluation criteria are consistent with the task properties. The workflow shows generalizability across task domains. However, as I mentioned, working on an even more challenging and creative task may limit the task performance. Theoretical Claims: The paper is mainly empirical. The limited theoretical claims concern the definition of FoO as well as the DAG nodes. I don't see any problems there. Experimental Designs Or Analyses: The experimental design and analyses are robust across a variety of tasks as well as model/agent comparisons. However, the tasks could be more challenging (for new knowledge/solution discovery) to emphasize the value of this framework. Supplementary Material: I read all parts of the supplementary materials; they mainly contain detailed information about tasks and results, as well as some example cases. Relation To Broader Scientific Literature: This paper proposes the FoO (Flow-of-Options) agent framework, which improves performance across multiple task domains compared to previous traditional frameworks. Though the agent workflow shows promising advantages in performance on these tasks, it does not essentially implement a learning and actively interacting agent, which is important for stronger AI development. 
The current pipeline still primarily relies on a known structure of knowledge to solve problems of known structure, which amounts to picking the best known solution rather than coming up with an even better one. In this sense, the impact is limited. Essential References Not Discussed: Some discussion of optimal algorithms for each task (not simply comparisons with existing agent frameworks), as well as human cognitive findings, would be necessary. For example, when varying tasks, what if reasoning models with tool use, like o3-mini-high or DeepSeek-R1, were added? For some specific tasks, what about using Bayesian Optimization in ML tasks rather than asking the model to propose different options? There are other known model pipelines and algorithms that achieve SOTA performance on relevant tasks. The paper may need to consider how this agent framework compares with those domain-expert solutions, not only with other agent frameworks. On the other hand, a discussion of how humans solve these tasks, potentially introducing a human baseline, would also be meaningful. What if humans could guide the model better with step-by-step feedback? What kinds of heuristics do humans have that may be useful or harmful to agent execution? Example references: Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica. Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science. Other Strengths And Weaknesses: Weakness: The paper covers a really wide range of tasks, especially across domains. If the short task names were spelled out in the main figures' captions, it would be much easier for readers to understand what the tasks mean. Other Comments Or Suggestions: I have no other comments or suggestions. Questions For Authors: 1. 
I noticed that in the supplementary information, there is one baseline about human coding on the TDC leaderboard (Figure 14). I don't quite get what the figure tries to reveal: is it saying this agent framework is better or worse than the human baseline? 2. I was wondering what would happen if humans were asked to guide the model on similar tasks. For example, give LLMs compositional problems, ask the LLMs to list out options and implement them in parallel; would that be better or worse than the current case? This question asks how well the agent framework works on real practical problems compared with humans, given the same core LLM that implements code or proposes options. Since we can always replace the core model with stronger ones, we need to figure out how much of a role the workflow currently plays in these problems. Code Of Conduct: Affirmed. Overall Recommendation: 4
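For concreteness, the DAG-of-options structure described in the review summary can be pictured as a layered graph: each step of a task plan holds several options, every option in one step connects to every option in the next, and a walk through the layers is one candidate solution to implement and score. The sketch below is a hypothetical minimal illustration (option names are invented), not the paper's implementation.

```python
from itertools import product

# Layered options DAG: one list of options per step of the task plan.
steps = [
    ["rdkit_features", "chemberta_embeddings"],   # step 1: featurization options
    ["random_forest", "feedforward_nn"],          # step 2: model options
]

# With full connectivity between consecutive layers, the candidate
# solutions are exactly the walks through the layers:
walks = list(product(*steps))
# 2 options per step over 2 steps -> 4 candidate pipelines to run and score
```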
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We have consolidated the reviewer concerns into the following topics and will include them in our updated paper. # 1. Computational cost compared to fine-tuning Fine-tuning introduces two additional implicit costs that do not impact our approach: - Fine-tuning requires a dataset that is reasonably large and diverse. Ensuring quality, quantity, and diversity of the dataset can present a high cost. - Fine-tuning also requires the model weights to be accessible, which is not always the case. This enables a broader applicability of our approach. # 2. Tool use **Tool use is synergistic with Flow-of-Options**. In our paper, **FoO seeks to improve the base reasoning capabilities of the LLMs _even in the absence of tools_**. However, tools can be a *force multiplier* in our work. We conducted a small experiment to illustrate this further. We incorporate external research paper retrieval as a tool with FoO. The tool retrieves and provides two papers as context to the Option Generator LLM, for the example task of fine-tuning protein language models. We note the two retrieved papers incorporated into the option generator’s context: ProtBERT (Elnaggar et al. 2021) and ESM (Rives et al. 2021). Following is the resultant FoO produced: https://ibb.co/GQL0P06N. We see that nodes 1 and 2 in the FoO incorporate the information in the papers when proposing options. In this way, tool use is synergistic with our work. # 3. Creativity and Discovery This is indeed an interesting point. We believe Flow-of-Options incorporates “combinational creativity” as described in Boden 1998. For instance, in the case of the Drug-target Interaction problem (DTI task from Table 4), our approach proposes the following ML model architecture option: Linear → Swish → dropout → Linear → GeLU → Linear (for regression). This is not an existing model per se. 
Although the individual components, such as the linear and activation layers, exist, the specific combination of these layers can be considered novel and performs well on the task. The individual nodes can also be combinations of existing methods. For instance, in the Drug Combination problem (DC task from Table 4), the existing human baseline on the leaderboard computes features of the drug molecules using packages such as RDKit, combined with a feed-forward neural network for prediction. In contrast, our approach proposes a novel ML pipeline that combines feature extraction using ChemBERTa (an existing language model for computing the embeddings of drug molecules) with a feed-forward neural network for prediction. Although the individual components exist, this combination can be considered novel. In this sense, Flow-of-Options can support combinational creativity and discovery. # 4. Synergy with humans ## Q1 Figure 14 compares the performance of the ML approaches produced by FoO vs. human-designed baselines on the TDC leaderboard. The figure shows that our approach mostly achieves > 80% of an expert human’s performance. In other words, on average, it is comparable to (though not always better than) a human expert. In some cases, it outperforms the human baseline. For instance, in the drug combination (DC) task from Table 4, our approach outperforms the human baseline on the TDC leaderboard. For naive users, however, FoO can significantly democratize ML. ## Q2 Currently, the LLM proposes the options. However, these options could be informed through human guidance. Similar to our demonstration on tool use incorporating external references, the option generation can be conditioned on user inputs. In this way, the Flow-of-Options data structure can be a synergy of LLM and human knowledge leading to discovery of novel combinations. # 5. FoO with learning Our current implementation of Flow-of-Options seeks to improve the base reasoning capabilities of LLMs. 
However, it may be complementary to approaches that incorporate closed feedback learning. For instance, it may be possible to learn over walks produced by the Flow-of-Options data structure. Since each walk is associated with a corresponding metric, it can be treated as a supervised learning problem, or incorporated with reinforcement learning. This would make for an interesting future exploration. # 6. Domain expertise solutions We explore two such algorithms (neither use LLM-based agents): AutoGluon (Erickson et al. 2020) for data science tasks in Section 4.1, and DeepMol (Correia et al. 2024) for TDC ADME-Tox tasks in section 4.2. DeepMol is a specialized framework for ADME-Tox domain, and AutoGluon is a specialized framework for typical Data Science domain. Both DeepMol and AutoGluon optimize for ML models without using any LLM agents. AutoGluon has demonstrated improvements over Bayesian Optimization based AutoML such as Auto-Weka and other AutoML optimization frameworks. Hence we chose AutoGluon as the sota model. DeepMol is currently one of the sota AutoML methods for TDC tasks. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my concerns. Most of my concerns are well addressed. I will update my score to 4 to support the acceptance of the paper. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for raising the score and for their insightful feedback that has helped improve our paper.
Summary: The paper introduces Flow-of-Options (FOO), an agentic system designed for auto-ML. The core contribution is a framework based on fully-connected network structures of step-by-step solution paths generated by LLMs. The framework is evaluated comprehensively on multiple domains including standard data science tasks, therapeutic chemistry tasks, reinforcement learning, and image generation. Claims And Evidence: The paper presents several compelling results but some key claims lack sufficient supporting evidence: * The claim about the framework's effectiveness would be strengthened by including more comprehensive ablation studies and hyperparameter analysis in experiments. For example, it is important to include comparisons across different LLM backbones, as well as sensitivity analysis of the hyperparameters for both the baseline methods and the proposed approach. Without these analyses, it's difficult to assess the robustness and generalizability of the performance improvements. * Some claims in the paper would be better-supported by control studies, such as directly comparing the proposed fully-connected network structure against alternative architectures (trees, standard DAGs). * Figures 2 and 7, which aim to use word clouds to show improved solution diversity, are weak and unconvincing. A more systematic quantitative analysis of solution diversity would be necessary to support this central claim of the paper. Methods And Evaluation Criteria: * The method is sound and presented in detail. The network structure design choices are well-justified in comparison to previous works. * However, it is not clear if the paper properly tunes hyperparameters for the baseline methods, which raises concerns about the fairness of comparisons. Theoretical Claims: NA Experimental Designs Or Analyses: * The evaluation is primarily conducted with GPT-4o as the underlying LLM. More comprehensive evaluation across different LLM architectures would be needed. 
(There are some very rough comparisons with GPT-3.5 in the supplementary material, but they do not compare against other baselines and lack sufficient detail.) * There are more ablation studies that can be done, such as evaluating the performance of the planner component and the consistency checker. This would give readers a better idea of which aspects of the framework are most critical to its success. * Overall, I do not feel like I learned enough insights from the results beyond performance numbers. The paper should design more experiments that provide deeper understanding of the system. Supplementary Material: As mentioned above, it should include more detailed experiment results and comparisons. Relation To Broader Scientific Literature: The paper proposes a fully-connected network to build agentic systems for auto-ML. It improves upon existing approaches in several specific ways: * Compared to SELA, FoO offers greater expressivity by using a fully-connected network structure instead of a tree structure and replaces SELA's computationally expensive Monte Carlo Tree Search with a more efficient traversal mechanism. * Compared with Data Interpreter, FoO provides a guarantee of acyclicity in its network structure since LLMs are only used to generate options, not to construct the network itself. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: * Can you design experiments to show the effectiveness of the planner and consistency checker and their impact on the performance? * How does the performance scale with the depth and width of the network? * How do you deal with the scenario when there are multiple ways to decompose the problem? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. We will incorporate the suggestions into our paper. # Q1 We measure the reduction in execution time when consistency checker is added to identify invalid paths (as opposed to w/o it), and planner when the adapter is added (as opposed to w/o it). We also measure costs for each component when it is added. We measured these on average for tasks from Table 1. The results are noted here: https://ibb.co/7d4vmw7f. Adding these elements offer performance benefits at minor additional cost. We also performed a cost ablation for all components: https://ibb.co/LXgW5wBs # Q2 Please note our scaling performance response to Reviewer utUi (Q1) # Q3 This is a good question. We believe there are two cases in this scenario: - Case 1: A single task plan suffices, but has conceptually different option decompositions: For instance, a string denoting a drug molecule can be processed using the specialized RDKit package which computes chemical properties of the molecule such as molecular weight. Alternatively, we can convert the strings to vectors via NLP methods. These are conceptually different. However, the task plan constructed by **the planner is sufficiently high-level that conceptually distinct options are supported via the option generator LLM**. See the example FoO: https://ibb.co/GXXgj5p. Options 1 and 2 are feature processing, denoting the RDKit method and "NLP style" vector embedding respectively. - Case 2: A single task plan cannot denote the different decompositions: The experiments in our paper currently do not fall into this category. However, it is possible to envision a *nested* Flow-of-Options in our future work, with FoO data structure over task plans as well. In this case, the depth will be $n=1$ and width $k$ denotes the different task decompositions. Internally, each task plan node can be captured similar to our work in this paper. 
Hence, a nested version of FoO can be explored for such problems. # Expanded evaluation on LLM backbones We provide additional results here: https://ibb.co/NRMNTw4. Please see the response to Reviewer 4BTz (W4) for more details. We will expand further details around this in our paper as well. # Comparison to the tree and DAG The current implementation of Data Interpreter does not appear to save the produced data structure. SELA produces a text-formatted tree, shown in the excerpt below: ``` [Node 0-2] Engineer features if necessary to improve model performance. Additionally, generate a correlation matrix heatmap to identify highly correlated features, which might be candidates for removal or transformation. ``` We convert this to a visualization of the tree for a data processing task and demonstrate it alongside FoO for the same task. Tree: https://ibb.co/NdzGtFYW. FoO: https://ibb.co/mVqnMpk4. We note the qualitative option diversity and connectivity improvements of FoO here. # WordClouds We chose WordClouds as a compact representation of the frequency and diversity of the model choices made by our approach. We'd be happy to replace the wordclouds with bar charts (for proportion of options, i.e., frequency), which are more quantitative: https://ibb.co/3yVLzv0d (for Fig 2), https://ibb.co/tTS4Zwmz (Fig 5) # Hyperparams of baselines We have performed hyperparameter tuning on the key *number of iterations* parameter for SELA, DS-Agent, and Data Interpreter prior to running them (the other frameworks do not include hyperparameters apart from the LLM backbone to use). - *SELA*: Increasing the number of iterations would lead to a significant explosion in terms of time (going from 5 to 6 iterations increased time from ~21 mins to ~57 mins on average), but we did not note a corresponding improvement in accuracy beyond 5 iterations. This could be related to the complexity of the problems in our experiments.
More complex problems may require more iterations of SELA (albeit at a significant time cost). - *Data Interpreter*: Increasing the number of iterations increases the amount of time (to a much lesser degree than SELA), but like SELA, it did not impact accuracy beyond 5 iterations. It is also worth noting that the execution failures in Data Interpreter did not correlate with the number of iterations (possibly because failures are related to the presence of cycles in the LLM-built DAG, noted as one of the shortcomings of DI). - *DS-Agent*: The number of iterations increases the amount of time, and we also noted an improvement in accuracy; however, it stabilized at about 4 to 5 iterations. We set this to 5 to be consistent in our cost/time comparisons across all methods. - *Our approach*: For our approach, increasing the number of iterations increases the amount of time along with an improvement in accuracy. For consistency with all the baselines, we fix the number of iterations to 5. We generally chose 5 iterations to maintain consistency across all our comparisons on time and cost assessments. We note the relative advantages of the different methods in Appendix F. --- Rebuttal Comment 1.1: Comment: Thank you for your comment. I have increased my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for raising the score and for their insightful feedback that has helped improve our paper.
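The consistency checker's role discussed in Q1 of this thread (identifying invalid FoO paths so they are never executed) can be illustrated with a minimal sketch. The option names and the pairwise-incompatibility representation below are hypothetical stand-ins, not taken from the framework's actual implementation:

```python
def is_consistent(path, incompatible_pairs):
    """Reject paths containing any known-incompatible option pair."""
    return not any({a, b} <= set(path) for a, b in incompatible_pairs)

def filter_paths(paths, incompatible_pairs):
    """Keep only paths that pass the consistency check."""
    return [p for p in paths if is_consistent(p, incompatible_pairs)]

# Toy example: text-based features are incompatible with an image-only model.
paths = [
    ["text_features", "linear_model"],
    ["text_features", "cnn_image_model"],
    ["numeric_features", "linear_model"],
]
valid = filter_paths(paths, incompatible_pairs=[("text_features", "cnn_image_model")])
assert len(valid) == 2  # the one invalid path is pruned before any execution
```

Filtering paths up front in this way is what reduces the effective number of walks that need to be executed, as the authors report in their Q1 measurements.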
Summary: This paper proposes the FoO (Flow-of-Options) approach to diversify the LLM's reasoning paths. An FoO-based agentic system is developed for solving traditional machine learning tasks including regression, classification, reinforcement learning, and image generation tasks. The authors show that their framework outperforms the existing methods by a large margin with lower cost. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There are no theoretical claims in this submission. Experimental Designs Or Analyses: This paper evaluates the framework on many traditional machine learning tasks against other baseline methods. I have some questions as below: 1. For Table 1, it seems that zero-shot is better than DS-Agent, AutoGluon, and SELA. This is a bit confusing, raises concerns that these frameworks may not be suitable for the considered tasks, and makes me question the fairness of the comparison among different approaches. 2. Fig 6 seems irrelevant, as the FoO-based approach should by design improve over iterations. However, other approaches are largely independent across trials (please correct me if I am wrong here). Supplementary Material: Yes, I read the LLM prompts part. Relation To Broader Scientific Literature: I believe the proposed methods can be related to broader applications other than traditional ML tasks. Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths**: 1. This work uses LLMs to generate diverse options to explore. 2. The proposed approach can effectively explore the optimal path for a given task. 3. The authors explore tasks beyond classification and regression, although just two cases. **Weaknesses**: 1. The development part might not be efficient, as the proposed method explores every path in the graph. When the width of the tree is larger, some bad options will also be visited many times. 2. The evaluation needs to be more rigorous and can be further improved.
The comparison may not be fair enough based on the current results. 3. More RL or other ML tasks should be explored to demonstrate the generalization of the proposed method. 4. Only GPT-4o is evaluated; how about other open-source models? Other Comments Or Suggestions: NA Questions For Authors: In Table 3, I am a bit confused by the cost of SELA; it seems that SELA takes the longest time but its cost is not the highest? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. We will incorporate the suggestions into our updated paper. # Experimental Designs or Analyses ## Point 1 The documentation for DS-Agent, AutoGluon, and SELA notes that they are indeed suited to tabular tasks similar to the ones explored in our experiments. This is noted in their corresponding papers. Our dataset is not specifically curated by us, but is rather an existing dataset obtained from (Guo et al. 2024). We note the following potential reasons for the Table 1 observations: - *AutoGluon*: AutoGluon explores a fixed set of models, which is well suited for some, but not all, of the data science tasks. Indeed, some of the problems in the dataset, such as language-based tasks, are currently not supported by AutoGluon (which is a shortcoming of this method). Nevertheless, AutoGluon is quite a popular framework and is also an example of a non-LLM based AutoML framework. Hence we felt it would be useful to incorporate it as one of the baselines in our experiments. - *SELA and DS-Agent*: Both these frameworks incorporate *self-correction* over the LLM-proposed methods. The LLM proposes a method and then repeatedly reflects on it to modify the method and improve the result. In “Large Language Models Cannot Self-Correct Reasoning Yet” by Huang et al., ICLR 2024, the authors note that LLM performance can degrade after self-correction. Accuracy drops as the number of repeated self-correction calls increases (Table 3 of the cited paper). Both SELA and DS-Agent perform $>1$ self-correction calls. Like zero-shot, Data Interpreter (DI) does not incorporate this form of self-correction, and it tends to perform better among the baselines (although it has other shortcomings as noted in the paper). **LLM self-correction is not needed with Flow-of-Options, where the different options are already encapsulated and systematically explored via the FoO data structure.
However, this does not mean that SELA and DS-Agent are not suited for the types of tasks in our experiments, but rather that there is room for improvement here, to which we believe Flow-of-Options contributes.** It is possible that if self-correction were removed from DS-Agent and SELA, their performances *could* improve. However, we do not change the baseline implementations available on GitHub for the purposes of comparison. ## Point 2 This is correct, with the exception that DS-Agent (alongside FoO) also incorporates mechanisms to improve upon past iterations. The other approaches do not have this design element. Our intention with Figure 6 is to: - Add experimental and *quantitative* evidence to our claims of being able to improve over subsequent iterations. - Demonstrate that some of the baseline approaches fail in certain iterations (e.g., DeepMol and Data Interpreter). We wanted Figure 6 to demonstrate the benefits of our approach at the agentic design level in comparison to the baselines. # W1 We seek to mitigate this with the beam search (Section 2: beam traversal, page 2), which selects the top $b$ options for exploration, thereby limiting the explored paths to the more promising options. **This prevents the bad options from being revisited**. We note additional computational improvements in Section 3.2, Page 5 (also note the response to Reviewer utUi Q1). # W2 We hope that our response to Point 1 above regarding baselines in Table 1 offers additional context around our compared baselines. Please also note the additional experiments reported for W4 that strengthen our evaluation. # W3 Currently, in addition to the classification/regression tasks and the RL and image generation tasks, we explored the following tasks in Appendices B.2 and B.3 to further demonstrate generalizability: - Clustering - Machine Translation - Traveling Salesman task - Case study on a math problem We hope that these additional tasks help demonstrate the broader generalizability of our framework.
# W4 We provide additional results for LLMs (including an open-source LLM) over a subset of the TDC tasks from Table 2: https://ibb.co/NRMNTw4. Arrows indicate whether lower or higher metrics are preferred. Our approach consistently helps improve performance across the newly added LLMs and also outperforms the baselines for the tasks. Note that in some cases "--" indicates that the model failed to produce working code within three attempts. Hence, we see that **in addition to improving the overall task performance compared to baselines, FoO can also help mitigate the failure rates in code generation**. # Q. SELA cost A lot of the time consumed in SELA is for the execution of the code from the MCTS rollouts. Each code execution is performed sequentially (it does not support parallelization like our framework -- parallelization is discussed in Section 3.2, Page 5) and therefore takes a significant amount of time. However, the code execution does not involve LLMs per se; therefore it is not reflected in the cost, which is associated with querying LLMs.
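The sequential-vs-parallel contrast drawn above (SELA executes each rollout's code one at a time, whereas the FoO framework parallelizes walk executions) can be sketched minimally. `run_walk` and the toy walks below are hypothetical stand-ins for executing the code generated for a walk:

```python
from concurrent.futures import ThreadPoolExecutor

def run_walk(walk):
    # Placeholder for executing the code produced for one walk through the
    # FoO; here we simply "score" a walk by summing its toy option values.
    return sum(walk)

walks = [[1, 2], [3, 1], [2, 2], [4, 0]]

# Sequential execution (SELA-style): one walk at a time.
sequential_scores = [run_walk(w) for w in walks]

# Parallel execution (FoO-style): walks dispatched concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_scores = list(pool.map(run_walk, walks))

assert sequential_scores == parallel_scores  # same results, less wall-clock time
```

Because `pool.map` preserves input order, the parallel variant is a drop-in replacement for the sequential loop; only the wall-clock time changes.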
Summary: This paper proposes Flow-of-Options, a planning method for LLM agents that can effectively track an optimal path over the combinations of possible options. More formally, Flow-of-Options can be represented as a directed acyclic graph (DAG) of depth n, where a node is an option and an edge is a path between options in a sequence. Flow-of-Options finds an optimal path by evaluating possible paths and updating values (the return of the path) on edges. This paper evaluates Flow-of-Options on 16 Data Science (DS) tasks and 17 Therapeutic Data Commons (TDC) tasks. Experiment results show that Flow-of-Options can achieve better scores than SELA (utilizing an MCTS-based planner) and Data Interpreter (utilizing a DAG-based planner) on DS tasks. Claims And Evidence: This paper proposes Flow-of-Options, a planning method for LLM agents. It also provides comprehensive evaluation results on 16 Data Science (DS) tasks and 17 Therapeutic Data Commons (TDC) tasks. The evaluation results support that Flow-of-Options can achieve better scores than other baselines such as DS-Agent, AutoGluon, SELA, Data Interpreter, and AutoGen on DS tasks. Methods And Evaluation Criteria: ### Strengths of Methods: S1. Flow-of-Options has a structure that can effectively track an optimal path over possible combinations. ### Weaknesses of Methods: W1. Flow-of-Options seems to be overly customized for data science tasks, whose goal is to find an optimal path consisting of feature engineering and model selection steps. I am not sure if Flow-of-Options generally works well across more complex, diverse tasks. W2. I am not sure what the key advantage of Flow-of-Options is over an exhaustive search over all combinations. Theoretical Claims: This paper mainly proposes a method for effective reasoning of LLMs. It does not provide any theoretical claims.
Experimental Designs Or Analyses: This paper provides comprehensive experiment results on 16 Data Science (DS) tasks and 17 Therapeutic Data Commons (TDC) tasks. It also provides the details of the experiments in Section B of the Appendix. It seems that the experiment design is sound and valid. However, the experiments mainly deal with data science and chemical reaction tasks. Supplementary Material: This paper provides supplementary material that includes the details on experiments, additional discussions, etc. Relation To Broader Scientific Literature: This paper introduces Flow-of-Options, a DAG-based planning method for effective LLM agents. Designing an effective planning algorithm for LLM agents is an important research area, since LLM agents are widely applied in diverse domains. Essential References Not Discussed: This paper properly discusses related works. Other Strengths And Weaknesses: ### Other Strengths: N/A ### Other Weaknesses: N/A Other Comments Or Suggestions: ### Other Comments: N/A Questions For Authors: ### Questions: Q1. Can you provide the computational complexity of finding an optimal solution in Flow-of-Options? If the number of options (k) is large and the number of steps (n) is long, the computational complexity can be high, since the number of walks increases exponentially (k^n). Can you provide the average number of options in some example tasks? How do you control the number of options in each step? Q2. In the Development phase of the FoO-based agent framework, how long does it take for the values on edges to converge? Q3. Can you provide some details on beam search over Flow-of-Options? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and will incorporate these suggestions into our updated paper. # W1 While our paper is on *Application-Driven Machine Learning*, **FoO does not explicitly specify steps such as model selection or feature engineering, but rather adapts to the task plan that is produced**. We show this generalizability (beyond classification/regression of sections 4.1 and 4.2) as follows: - RL and Image Generation (Section 4.3) - uses a different ML pipeline than tasks in 4.1 and 4.2 (e.g., reward formulation) - Unsupervised clustering (Appendix B.2.1) – is a different model than the data science tasks - Machine Translation (Appendix B.2.2) – does not involve a specific feature engineering like the data science tasks (only tokenization) - Traveling Salesman Problem (Appendix B.2.3) – involves neither ML model selection nor feature engineering - Case study on a math problem for solving the complex solutions to $x^2 + 2x = i$ using FoO (Appendix B.3) - does not involve an ML model selection nor any feature engineering. In particular, the TSP and math problems are larger deviations from the typical data science pipeline. # W2 **FoO enforces diversity in LLM solutions through compressed, interpretable representations that support memory of past explorations when combined with case-based reasoning**. Specifically, FoO offers the following advantages over exhaustive combinatorial search: - FoO acts as “memory” of key info on previously explored task solutions - can be saved, reused and adapted from one task to another via deployment (Section 3.2). Reusing FoO from task $T_1$ to $T_2$ is fast and achieves good results (Tables 1, 2 and 3). Hence, prior knowledge encapsulated in the FoO can be effectively reused for $T_2$, whereas an exhaustive search would have to be repeated for each task. - Exhaustive combinatorial search w/o FoO assumes expert knowledge to set up the combinations themselves. 
An expert ML scientist can enumerate $k$ options for each step in the task, but a naive user may lack this knowledge. FoO encapsulates the knowledge of LLMs, enabling even naive users to specify their problem in natural language, without demanding expertise. Even for experts, FoO can serve as a "force multiplier", potentially enumerating options that the expert may not have thought of. - The formulation of FoO also supports integration with tool use (please see Reviewer cGuq, Point 2). # Q1 The average number of options is $k = 4$ and the average number of steps is $n = 3$. These are specified as hyperparameters to the framework. The computational complexity of FoO is indeed dependent on $n$ and $k$, and we have currently implemented the following solutions to mitigate it (from Section 3.2, Page 5): - *Parallelization*: We parallelize the executions of walks through the FoO so that even if there are a large number of walks, the computational time taken is reduced. - *Pruning*: We prune some of the low-scoring edges of the FoO (so that they are not explored), which also helps reduce the computational complexity. Lastly, the consistency checker (Section 2, page 3) identifies invalid paths in the FoO. Although the total number of paths is $k^n$, not all paths are valid (as $n$ increases, the proportion of invalid paths is also higher). Examples of invalid paths are shown in Fig 4 (Page 3). The consistency checker empirically results in an $\approx 22.8$% reduction in the number of paths to explore (empirical results are noted in the response to Reviewer zYfD Q1). Hence, in practice, the computational complexity scales quite differently. Please see the measured time performance with $n$ and $k$ (averaged across three runs on the CW TDC task of Table 2): https://ibb.co/0VvqpBR5. From $n=1, k=3$ to $n=3, k=3$, the number of paths is $9\times$ more.
However, corresponding time only scales by $\approx 7\times$ (this includes parallelization and consistency checking, but excludes pruning which can further improve the efficiency). # Q2 The development phase took about 13.29 mins on average in our work for the data science tasks. The development time is dependent on the complexity of the task based on $n$ and $k$ as noted in Q1. # Q3 In beam search, our goal is to narrow the search to the most effective set of options, by selecting the top $b$ options at each level, and exploring paths between them to discover potentially improved combinations of the top options. This can be visualized as exploration over a reduced FoO where the nodes are just the top performing options found thus far. In our experiments, we start with a full beam width of 100% (exploring all the options), and reduce it to the top 50% of the options in the last two iterations of development. In Appendix C (Figure 13), we demonstrate that beam search can discover new combinations of top performing options, resulting in improvements in the final performance of the methods.
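The beam traversal described in Q3 above can be sketched minimally. This illustration scores each option independently with a toy `score_fn` and keeps a fixed fraction per level, rather than using the FoO's learned edge values, so the names and scores are hypothetical:

```python
def beam_traverse(options_per_level, score_fn, beam_frac=0.5):
    """Keep the top fraction of options at each level and chain them into paths."""
    paths = [[]]
    for level in options_per_level:
        # Rank this level's options and retain the top `beam_frac` of them.
        ranked = sorted(level, key=score_fn, reverse=True)
        kept = ranked[:max(1, int(len(ranked) * beam_frac))]
        # Extend every surviving path with every retained option.
        paths = [p + [opt] for p in paths for opt in kept]
    return paths

# Three levels with four toy options each; the score is just the option value.
levels = [[4, 2, 3, 1], [8, 5, 7, 6], [9, 12, 10, 11]]
paths = beam_traverse(levels, score_fn=lambda o: o, beam_frac=0.5)
# 2 options kept per level -> 2**3 = 8 candidate paths instead of 4**3 = 64
```

Shrinking the beam fraction mirrors the paper's schedule of starting at a 100% beam width and tightening to the top 50% in later development iterations.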
CFP-Gen: Combinatorial Functional Protein Generation via Diffusion Language Models
Accept (poster)
Summary: The paper proposes introducing multiple conditions into the protein sequence design process based on DPLM using a method similar to ControlNet. It achieves the integration of various conditions through the designed RCFE and AGFM modules. The performance on protein design tasks with multiple conditions shows a significant improvement compared to baseline models. Claims And Evidence: In my view, this paper is more focused on algorithmic improvements for a specific application, and therefore does not include many new claims. Instead, it represents a natural extension of existing models and methods. Overall, the claims in this paper are reasonable and supported. Methods And Evaluation Criteria: The proposed methods in this paper and the Evaluation Criteria used during the assessment are reasonable. The core method of the paper is to integrate different condition information into DPLM using a ControlNet-like approach. Since ControlNet has already been thoroughly validated as an effective method for introducing generation conditions into diffusion models, this approach is reasonable. The Evaluation Criteria in this paper also follow the evaluation methods used in previous work. Theoretical Claims: This paper is more "application-oriented," and therefore does not include many theoretical claims. Experimental Designs Or Analyses: The structure of this paper is clear and easy to follow. However, the experimental section needs further improvement, including additional analysis and more thorough discussion. Overall, the experimental design and analysis in this paper are reasonable, as the paper attempts to validate the impact of different conditions on model performance and demonstrates that the introduction of multiple conditions improves the generation performance of Diffusion Language Models. However, there are a few issues that need further validation: 1.
The DPLM model may encounter some mode collapse issues, such as generating sequences with many repeated segments. However, this paper does not discuss the impact of this issue, such as whether mode collapse is mitigated after introducing additional conditions. I believe this discussion is necessary. 2. More case studies are needed. For example, visualizations of the model's generated results when a motif is given as a constraint could further validate the model's ability to adhere to different types of conditions. 3. The results in Table 2 need more explanation. CFP-GEN achieves excellent performance on three indicators, but on scTM and pLDDT there is a significant performance gap compared to the best baseline model. What is the reason for this phenomenon? Could it be due to overfitting to the data? Supplementary Material: This paper does not provide supplementary materials. Relation To Broader Scientific Literature: This paper is an extension and elaboration of the existing DPLM (Diffusion Protein Language Model), incorporating methods such as condition integration. Compared to ESM3, this model supports more types of conditional inputs. Essential References Not Discussed: The related work discussed in the paper is reasonable. Other Strengths And Weaknesses: The structure of this paper is clear and easy to follow. However, the experimental section needs further improvement, including additional analysis and more thorough discussion. Other Comments Or Suggestions: I don't have other comments. Questions For Authors: I don't have other questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Reviewer s6DK** We appreciate your recognition of the novelty and strong performance of our method. Your questions raise important points, and we provide detailed clarifications and new quantitative results below. We would be happy to receive any additional constructive feedback. --- **Q1.** **Mitigation of sequence pattern collapse over DPLM.** **A1.** Thank you for the careful review. To assess whether CFP-GEN mitigates the mode collapse issue observed in DPLM, we analyzed the frequency of repeated ***n*-gram patterns** (*n* = 2, 3, 4, 5, 6) in the generated sequences from the GO-conditioned generation results in the original Table 1. Real protein sequences from the validation set were used as a **positive control**. | **Methods** | **2-gram** | **3-gram** | **4-gram** | **5-gram** | **6-gram** | | --- | --- | --- | --- | --- | --- | | Positive Control | 404 | 164 | 0 | 0 | 0 | | DPLM | 315 | 462 | 104 | 46 | 26 | | CFP-GEN (*w/* GO) | 363 | 351 | 15 | 9 | 8 | | CFP-GEN (*w/* GO and IPR) | 365 | 332 | 9 | 5 | 4 | | CFP-GEN (*w/* GO, IPR and Motif) | 377 | 336 | 4 | 1 | 1 | As shown, CFP-GEN produces a similar number of 2-grams to real proteins, while **significantly reducing the number of longer repetitive n-grams, especially 4-gram to 6-gram patterns.** Notably, the more functional conditions (e.g., GO, IPR, Motif) are provided, the fewer repetitive patterns appear in the output, indicating better sequence quality and reduced mode collapse. These results provide strong evidence that CFP-GEN effectively alleviates the mode collapse issue observed in DPLM. These discussions will be included in the revised supplementary material. Thanks! --- **Q2. More visualization results given motif as constraints.** **A2.** Thank you for the helpful suggestion. We agree that visualizing protein structures designed under specific motif constraints can provide valuable insights into the model's design choices under different functional conditions.
However, due to the limitations of the rebuttal format, we are unable to include full visualizations here. We will provide these results in the revised supplementary material. We appreciate your understanding. --- **Q3.** **Discussion on scTM and pLDDT in Table 2.** **A3.** Thank you for your careful and insightful observation. We have conducted an in-depth analysis to better understand the structural performance gap between CFP-GEN and the DPLM baseline model. We found that the slight drop in scTM and pLDDT scores is mainly due to CFP-GEN's tendency to **generate more novel structural segments**. Specifically, we analyzed the distribution and transformation of secondary structure elements—namely alpha helices (H), beta strands (E), and coils (C)—within the designed proteins in the original Table 2. Using structural alignments between the designed and real target proteins, we categorized the transitions of secondary structure elements. For example, H→H represents a correctly preserved alpha helix, while C→H indicates a region originally a coil being redesigned as a helix. The results are shown below (format: **local average pLDDT / number of secondary structure elements**): | **Method** | **H→H** | **E→E** | **C→C** | **C→H** | **C→E** | | --- | --- | --- | --- | --- | --- | | DPLM | 91.34 / 758,185 | 94.10 / 307,397 | 86.13 / 584,587 | 84.03 / 85,871 | 92.42 / 61,861 | | CFP-GEN | 90.02 / 763,497 | 92.50 / 309,725 | 83.46 / 581,072 | 82.23 / 88,565 | 90.56 / 62,409 | | Δ Difference | –1.32 / +5,312 | –1.60 / +2,328 | –2.67 / –3,515 | –1.81 / +2,694 | –1.86 / +548 | We observe that **CFP-GEN produces more H (helix) and E (strand) segments**, while the number of C (coil) regions is reduced. Many coil regions are transformed into more structured elements (C→H or C→E).
This behavior reflects CFP-GEN’s design preference: **to replace non-functional, flexible regions (coils) with more functionally relevant secondary structures (helices and strands).** While this results in a slight drop in local confidence scores (e.g., pLDDT), likely due to the absence of global conformational energy optimization, it reflects a **function-oriented design strategy**. We consider this a **reasonable trade-off** in the context of novel functional protein generation, and view it as a promising direction for future improvement. In particular, we plan to explore energy-based reinforcement learning frameworks to further optimize this aspect. The above analysis and discussion will be included in the revised supplementary material. We appreciate the reviewer’s valuable observation. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal! It addressed all of my concerns, so I've raised my score to 4. I think it's a solid paper — best of luck with the final decision! --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback and for raising your score. Your suggestions have been invaluable in improving the clarity and completeness of our work, and we will incorporate them into the revised version. We sincerely thank you again for your support and wish you all the best!
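The repeated n-gram analysis reported in A1 earlier in this thread can be reproduced with a short counting routine. This is a minimal sketch on a toy sequence; the exact counting protocol behind the reported numbers may differ:

```python
from collections import Counter

def count_repeated_ngrams(sequence, n):
    """Count the n-gram types that occur more than once within a sequence."""
    grams = Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))
    return sum(1 for count in grams.values() if count > 1)

# Toy amino-acid sequence with one repeated 6-residue segment ("MKVLAA").
seq = "MKVLAAMKVLAAGT"
assert count_repeated_ngrams(seq, 6) == 1  # "MKVLAA" appears twice
```

Applied over a batch of generated sequences, higher counts at larger n (e.g., 4-gram to 6-gram) signal the repetitive-segment mode collapse the rebuttal measures.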
Summary: This paper presents CFP-GEN, a large-scale diffusion language model developed for Combinatorial Functional Protein Generation under multiple constraints from diverse modalities. CFP-GEN facilitates de novo protein design by jointly incorporating functional, sequence, and structural constraints. It employs an iterative denoising process to refine protein sequences while conditioning on various functional annotations (such as GO terms, IPR domains, and EC numbers), sequence motifs, and 3D structural features. To achieve this, the model introduces two key modules: (1) Annotation-Guided Feature Modulation (AGFM), which dynamically adjusts sequence representations based on composable functional annotations, and (2) Residue-Controlled Functional Encoding (RCFE), which explicitly encodes critical residues and captures residue interactions and evolutionary relationships. Additionally, CFP-GEN supports the integration of 3D structural constraints through off-the-shelf structure encoders. Experimental results show that CFP-GEN can generate novel proteins with functionality comparable to natural proteins and achieves a high success rate in designing multifunctional proteins. Claims And Evidence: See Experimental Designs Or Analyses Methods And Evaluation Criteria: Yes, the proposed method is well designed for the task of protein generation. Theoretical Claims: Not applicable, as there are no proofs or theoretical claims. Experimental Designs Or Analyses: The paper claims that CFP-GEN enables combinatorial functional protein generation under multiple constraints from diverse modalities through the introduction of the AGFM and RCFE modules. However, the experimental evidence provided does not fully support these claims.
In the Benchmarking Protein Functional Performance experiments, the influence of training data is not sufficiently controlled, leaving open the possibility that performance gains may stem from data memorization rather than the effective use of composable functional annotations. Furthermore, existing models such as ProGen2 and ZymCTRL could, in principle, incorporate multiple annotation types via prompt engineering by extending their vocabularies. The absence of comparative experiments with such baselines raises concerns about whether AGFM provides a meaningful advantage for handling multimodal constraints. Similarly, in the Functional Protein Inverse Folding experiments, CFP-GEN utilizes additional functional labels during generation. Since these labels are not equally available to baseline methods, this experimental setup does not provide clear evidence isolating the effectiveness of RCFE in controlling functional sites or capturing residue-level interactions. Furthermore, the paper lacks ablation studies on the model architecture, which are necessary to isolate the contributions of AGFM and RCFE. Without these analyses, it remains unclear how much each component contributes to the overall performance. Supplementary Material: Yes, all of the supplementary material was reviewed. Relation To Broader Scientific Literature: The proposed CFP-GEN method is built upon prior diffusion protein language models, and extends these models by dynamically adjusting representations. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Please refer to the Summary Other Comments Or Suggestions: I strongly advise the authors to conduct additional experiments to make the paper stronger. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **Reviewer p2fD** We sincerely thank the reviewer for the positive feedback. We have carefully addressed the concerns below with new analyses and additional experiments, which will be incorporated into the final version. We welcome any further suggestions. --- **Q1. In-depth analysis of performance gain.** **A1.** We appreciate the reviewer's insightful concern. Here, we provide evidence that the performance gains are not a result of memorizing known sequences: - Novelty and Diversity (please see **Q1 in Reviewer 1Xgk**): Our generated sequences exhibit much higher novelty and diversity compared to real proteins. This suggests that CFP-GEN does not simply replicate patterns of real proteins, but instead learns to **generate truly novel sequences**. - Mutation Analysis (please see **Q2 in Reviewer 1Xgk**): We observe plausible mutations even within conserved regions. These mutations tend to preserve functionally critical motifs while introducing reasonable variation, learning **generalizable design principles**. - Secondary Structure Distribution (please see **Q3 in Reviewer s6DK**): CFP-GEN tends to transform non-functional and flexible **coil regions** into functional **alpha helices and beta strands**. It reflects a **function-oriented redesign strategy.** Taken together, these findings indicate that CFP-GEN's improvements stem from its ability to learn function-guided design principles, rather than overfitting to the training data. --- **Q2.** **Discussion on annotation-guided generation with ProGen2 and ZymCTRL.** **A2.** We agree that existing autoregressive (AR)-based PLMs such as ProGen2 and ZymCTRL can potentially support more annotations. However, we found that **diffusion models offer several advantages**: **1.** AR models generate strictly left-to-right, limiting their flexibility for tasks like motif scaffolding, which require conditioning at specific positions.
In contrast, diffusion models **allow arbitrary conditioning and precise position control**, essential for realistic protein design.

**2.** Diffusion models transfer their sequence representations more readily to **discriminative tasks**, while AR-based models are primarily designed for generation-only tasks.

**3.** Aligning multimodal prompts into a unified token space is challenging for AR models. Our diffusion framework supports **flexible multimodal fusion**, enabling separate encoding and seamless integration of each modality.

Since ProGen2 lacks public training code, we extended ZymCTRL by enlarging its vocabulary to include GO and IPR classes, but this did not improve performance. We attribute this to inconsistent annotation formats: for instance, *EC:xxxx* is split into multiple tokens in ZymCTRL instead of a single semantic unit, making it hard to integrate with *GO:xxxx* or *IPR:xxxx*, which are typically treated as discrete categories. In contrast, CFP-GEN treats each EC/GO/IPR label as an independent class, and integrates them through **learned embeddings with additive fusion**. We believe more work is needed to enable effective prompt engineering for AR models across diverse annotations, and we welcome future comparisons as such approaches become available. We appreciate the reviewer’s understanding.

---

**Q3.** **Inverse folding with additional functional labels.**

**A3.** Since none of the existing inverse folding methods support functional labels (e.g., GO terms) as input, we implemented a heuristic baseline (**DPLM+DeepGO**).
Specifically, we used DPLM to generate 20 candidate sequences per backbone, then applied DeepGO to predict GO terms and selected **the sequence with the highest overlap with the labeled GO**:

| **Methods** | **AAR** | **MRR** | **Fmax** | **scTM** | **pLDDT** |
| --- | --- | --- | --- | --- | --- |
| DPLM | 66.94 | 0.721 | 0.552 | 0.883 | 85.33 |
| DPLM+DeepGO | 67.29 | 0.726 | 0.559 | 0.886 | 85.49 |
| CFP-GEN (w/ GO) | 72.05 | 0.866 | 0.571 | 0.887 | 83.28 |

The marginal gains of this pipeline suggest that functional labels are hard to integrate into existing inverse folding models, motivating our development of CFP-GEN, **an end-to-end solution that jointly reasons over structure and function.** We will include this discussion in the final version. Thank you!

---

**Q4.** **The contributions of AGFM and RCFE.**

**A4**. Sorry for the confusion. The ablation studies are actually presented in the **GO-conditioned generation results in Table 1 of the manuscript:**

- **CFP-GEN (w/ GO and IPR)** corresponds to using **AGFM only**, with an MRR of **0.779**.
- **CFP-GEN (w/ Motif)** corresponds to using **RCFE only**, with an MRR of **0.839**.
- **CFP-GEN (w/ GO, IPR and Motif)** combines **AGFM and RCFE**, achieving the best performance with an MRR of **0.870**.

These results indicate that both modules independently contribute to performance, and their combination yields additive benefits. We will make this clearer in the final version. Thanks!
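The DPLM+DeepGO reranking heuristic can be sketched in a few lines; `generate_candidates` and `predict_go_terms` below are hypothetical stand-ins for DPLM sampling and DeepGO inference, used only to illustrate the selection rule:

```python
def select_by_go_overlap(backbone, labeled_go, generate_candidates, predict_go_terms, n=20):
    """Rerank generated sequences by overlap between predicted and labeled GO terms."""
    candidates = generate_candidates(backbone, n)  # e.g., 20 DPLM samples per backbone

    def overlap(seq):
        # Size of the intersection between predicted GO terms and the target labels.
        return len(predict_go_terms(seq) & set(labeled_go))

    # Keep the candidate whose predicted annotations best match the labeled GO set.
    return max(candidates, key=overlap)
```

Passing the generator and predictor as callables keeps the selection rule independent of any particular model, which is why this kind of post-hoc pipeline is easy to build but, as the table shows, yields only marginal gains.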
Summary: This paper introduces CFP-GEN, a diffusion-based language model for combinatorial functional protein generation that integrates multimodal constraints. The proposed Annotation-Guided Feature Modulation and Residue-Controlled Functional Encoding modules enable flexible conditioning across diverse modalities. The model demonstrates superior performance in functional sequence generation, inverse folding, and multi-objective protein design.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes. While the paper’s empirical focus is reasonable, a brief theoretical discussion on why AGFM and RCFE improve convergence or functional control would add value.

Experimental Designs Or Analyses: The authors should conduct additional analysis of failure cases to provide insights into the model's limitations.

Supplementary Material: Yes, I have read all the sections.

Relation To Broader Scientific Literature: Expanding the discussion of CFP-GEN's relation to inverse folding techniques and highlighting distinctions from concurrent multimodal PLM designs would improve the paper's clarity.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

**Strengths:**
1. The integration of multimodal constraints into a unified framework addresses a critical gap in controllable protein generation. The composable conditioning mechanism (AGFM/RCFE) is a meaningful advance over single-modality approaches.
2. Comprehensive experiments across three tasks (functional generation, inverse folding, multi-functional design) validate the method’s superiority. The use of state-of-the-art function predictors and structural metrics strengthens credibility.
3. High success rates in designing multi-functional enzymes and improved sequence recovery in inverse folding suggest tangible applications in enzyme engineering and drug discovery.

**Weaknesses:**
1.
The dataset filters out low-frequency functional annotations (e.g., GO/IPR terms with <100 sequences), potentially limiting generalizability to rare functions. While results on held-out validation sets are strong, long-tail performance remains unverified.
2. The use of a frozen, off-the-shelf structure encoder (GVP-Transformer) without fine-tuning may restrict structural optimization. While the authors claim that the pretrained cross-attention layer from DPLM can be used directly without fine-tuning, this assertion lacks experimental validation. Ablation studies on structure-conditioned generation are lacking.

Other Comments Or Suggestions: N/A.

Questions For Authors: Can you provide examples of generated sequences that failed to achieve desired functional properties to better understand the model’s limitations?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

### **Reviewer JkEt**

We appreciate your thoughtful comments and have addressed your concerns in detail below. The corresponding clarifications and expanded discussion will be reflected in the final version. We welcome any valuable suggestions you may have.

---

**Q1.** **Generalizability on rare functional annotations.**

**A1**. To evaluate the generalizability of CFP-GEN on long-tail functions, we constructed an **expanded dataset** by relaxing the filtering threshold: we included all GO terms with **≥30 sequences in SwissProt**, resulting in a total of **726 GO terms** (a significant increase from 375 in the original dataset). This updated dataset exhibits a **typical long-tail distribution**:

- The **head 20%** of GO terms (146 classes) cover **72.7%** of the sequences,
- The **tail 20%** (145 classes) cover only **2.4%** of the sequences.

We report generation results conditioned on GO/IPR labels, stratified by class frequency:

| **Frequency Segment** | **#GO** | **% Seq.** | **MRR↑** | **MMD↓** | **MMD-G↓** | **mic. F1↑** | **mac. F1↑** | **AUPR↑** | **AUC↑** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Head (Top 20%) | 146 | 72.7% | 0.725 | 0.072 | 0.043 | 0.598 | 0.529 | 0.409 | 0.763 |
| Medium (60%) | 435 | 24.9% | 0.702 | 0.074 | 0.044 | 0.601 | 0.497 | 0.370 | 0.745 |
| Tail (Bottom 20%) | 145 | 2.4% | 0.687 | 0.114 | 0.072 | 0.565 | 0.496 | 0.362 | 0.744 |

These results demonstrate that **CFP-GEN maintains robust performance even in tail categories**. While a slight performance drop is observed in the tail segment (e.g., MRR of 0.687 *vs.* 0.725 in head), the overall scores remain strong. This suggests that **CFP-GEN has learned generalizable design principles** that extend beyond well-represented functions, enabling it to generate proteins even for underrepresented functional categories. These discussions will be added to the supplementary material. Thanks!
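The head/medium/tail stratification used in A1 can be sketched as follows; this is an illustrative reconstruction of the protocol, not the authors' code, and the exact rounding at the 20%/80% boundaries is an assumption (the reported split is 146/435/145):

```python
def stratify_by_frequency(class_counts):
    """Split classes into head (top 20% by count), medium (middle 60%), tail (bottom 20%)."""
    # Rank classes from most to least frequent.
    ranked = sorted(class_counts, key=class_counts.get, reverse=True)
    k = len(ranked)
    cut = round(0.2 * k)          # boundary size; real rounding may differ
    head = ranked[:cut]
    tail = ranked[-cut:]
    medium = ranked[cut:k - cut]
    return head, medium, tail
```

Per-segment metrics (MRR, MMD, F1, etc.) are then computed separately over the sequences whose GO terms fall in each segment.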
---

**Q2.** **Structure-conditioned generation with fine-tuned weights.**

**A2**. Thank you for the insightful comment. To address this, we conducted additional experiments to compare **frozen vs. fine-tuned structure encoder and cross-attention layers** within CFP-GEN.

| **Setting** | **AAR ↑** | **MRR ↑** | **Fmax ↑** | **scTM ↑** | **pLDDT ↑** |
| --- | --- | --- | --- | --- | --- |
| CFP-GEN (Frozen) | 73.53 | 0.875 | 0.575 | 0.888 | 83.48 |
| CFP-GEN (Fine-tuned) | 76.39 | 0.882 | 0.581 | 0.889 | 83.53 |

As shown, fine-tuning further improves performance, while the frozen variant already performs strongly—especially in low-resource settings with limited structural labels. These results highlight CFP-GEN’s flexibility, allowing full fine-tuning when data is sufficient. We will include these results in Table 2 of the manuscript for clarification. Thank you!

---

**Q3.** **CFP-GEN's relation to other inverse folding and multimodal PLM works.**

**A3.**

- ***Multimodal PLMs:*** As discussed in the related work section, concurrent efforts such as DPLM support both sequence and structure modalities. However, in practice, inference is performed using **only one modality at a time, without effective multimodal fusion**. Furthermore, DPLM does not support function labels. While **ESM-3** enables multimodal inputs, its function conditioning relies on a limited set of **free-text keywords**, which cannot adequately capture complex functions (GO/IPR/EC descriptions). In contrast, CFP-GEN supports true multimodal fusion of sequence, structure, and functional labels, and demonstrates **superior performance over both DPLM and ESM-3 in original Table 1.**
- ***Inverse Folding:*** Existing works, such as ProteinMPNN and ESM-IF, generate sequences **based only on protein backbone structures**. By comparison, CFP-GEN introduces **a new design paradigm that incorporates functional labels as additional input**. This enables function-aware inverse folding, improving the AAR, as shown in Table 2.
We believe this is a practical and meaningful extension to traditional inverse folding. These discussions will be added to the revised related work section.

---

**Q4.** **Failed examples of the generated sequences.**

**A4**. Since CFP-GEN partially builds upon DPLM, it occasionally inherits the **mode collapse issues observed in DPLM**, leading to highly repetitive segments. As discussed in **Q1 in** **Reviewer s6DK**, we demonstrated that **CFP-GEN largely mitigates** this. However, we acknowledge that mode collapse can still occasionally occur:

```jsx
UniprotID: Q56217
...GLALLLLLLLLLLLLPLPPPPPPPPPPPPPPPPPP...
UniprotID: D4GWC8
...KEMKEAEEAEAEAEKKAEAEAEKKAEAEAEKKAEEKKEE...
```

This type of failure can be **mitigated by introducing more diverse conditions and adjusting diffusion hyperparameters**. We also plan to incorporate reinforcement learning feedback to further reduce such failure modes in future work. Thanks!
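For illustration, repetitive segments like those above can be flagged automatically with a simple run-length check; this sketch is ours and is not part of CFP-GEN's pipeline:

```python
def max_repeat_run(seq, k=1):
    """Longest tandem run of a k-mer, counted in repeats (e.g., 'LLLL' -> 4 for k=1)."""
    best = 1
    run = 1
    # Walk the sequence in steps of k, comparing each k-mer to the previous one.
    for i in range(k, len(seq) - k + 1, k):
        if seq[i:i + k] == seq[i - k:i]:
            run += 1
            best = max(best, run)
        else:
            run = 1
    return best
```

Sequences whose longest run exceeds some threshold (say, six identical residues) could then be filtered or resampled before downstream evaluation.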
Summary: This paper proposes a novel protein language model, CFP-GEN, which leverages discrete diffusion generation to design functional proteins. The key innovation lies in incorporating annotated protein labels, such as Gene Ontology (GO) terms, InterPro (IPR) domains, and Enzyme Commission (EC) numbers, during diffusion training, similar to classifier-free guidance in diffusion models. Additionally, CFP-GEN allows conditioning on protein structure. Comprehensive experiments demonstrate that CFP-GEN outperforms previous protein sequence generative models in generating proteins with accurate GO terms, IPR domains, and EC numbers.

Claims And Evidence: The authors state that previous PLMs typically generate protein candidates based on a single-condition input from a specific modality. However, providing more references here would strengthen the argument by situating CFP-GEN more clearly within the broader landscape of protein generation models. The remaining claims, such as CFP-GEN’s ability to design multi-functional proteins and its improved performance in inverse folding, are well-supported by experimental evidence. The reported results in Tables 1 and 2 demonstrate that CFP-GEN outperforms previous models in functional protein generation, structural fidelity, and multi-objective optimization, validating the effectiveness of its multimodal conditioning approach.

Methods And Evaluation Criteria: The method is based on the diffusion protein language model DPLM, embedding all annotations into the diffusion conditioning module. The authors also propose the Annotation-Guided Feature Modulation (AGFM) module, which effectively adjusts the intermediate representations, and the Residue-Controlled Functional Encoder (RCFE), which enhances controllability over the generated sequences compared to previous approaches. The authors evaluate the model on a protein sequence generation task using different annotations as prompts.
For evaluation, they use DeepGO-SE for predicting Gene Ontology (GO) terms, InterProScan for homology-based annotation, and CLEAN for catalytic function prediction. Comprehensive experiments demonstrate that CFP-GEN, by leveraging annotation-based conditioning, outperforms other protein language models in generating functionally relevant sequences.

Theoretical Claims: The paper does not focus on formal theoretical claims.

Experimental Designs Or Analyses: As mentioned above, the authors evaluate CFP-GEN on a protein sequence generation task using different annotations as prompts. For evaluation, they utilize DeepGO-SE for predicting Gene Ontology (GO) terms, InterProScan for homology-based annotation, and CLEAN for catalytic function prediction. The experimental results demonstrate that CFP-GEN, by incorporating annotation-based conditioning, outperforms other protein language models in generating functionally relevant sequences. Additionally, the authors assess the model on the inverse folding task, showing that incorporating more conditioning information reduces the uncertainty in sequence generation and leads to a higher sequence recovery rate. This highlights CFP-GEN’s ability to generate sequences that are both structurally and functionally consistent.

Supplementary Material: The authors provide comprehensive supplementary material, including detailed descriptions of the datasets, evaluation metrics, hyperparameter settings, implementation details of existing PLMs, and an introduction to multi-catalytic enzymes.

Relation To Broader Scientific Literature: This paper presents a novel and creative approach to protein sequence generation by incorporating protein label annotations into a diffusion-based language model. Given the increasing importance of functional protein design in biotechnology and drug discovery, this approach has the potential to influence future research and applications in protein engineering.
Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths: The authors successfully integrate the three most common annotation labels—GO terms, IPR domains, and EC numbers—into a single protein language model. This multimodal conditioning approach makes protein design more controllable, allowing for more precise functional protein generation compared to previous single-condition models.

Weaknesses: Although the authors conduct comprehensive experiments demonstrating that conditioning on annotations leads to the generation of functionally relevant proteins, it is unclear whether the model truly generates de novo proteins or primarily memorizes sequences from the training database. Instead of relying on extensive conditioning to generate highly specific proteins that closely resemble known sequences, the authors could explore using fewer conditions to assess whether the model can still generate novel yet functional proteins. This would provide stronger evidence of the model’s ability to innovate beyond known protein sequences.

Other Comments Or Suggestions: line 355 natural sequence -> protein sequence

Questions For Authors: As mentioned above, one of the most exciting aspects of protein design is creating novel proteins that do not exist in nature but can perform specific functions. In this paper, the proposed method demonstrates that the generated sequences closely match the properties of the given condition labels. However, have you tested CFP-GEN’s ability to generate truly de novo proteins? For instance, if conditioned only on a specific EC number, can CFP-GEN generate sequences with novel structural motifs rather than sequences closely resembling known proteins? Additionally, have you analyzed the diversity of the generated sequences and structures? Do they all contain conserved regions, or does CFP-GEN exhibit variation in its outputs?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

### **Reviewer 1Xgk**

Thanks so much for acknowledging the novelty of our work and for providing thoughtful and constructive comments. We provide clarifications to your concerns below, which we will incorporate into the final version. Please let us know if you have any further valuable comments or suggestions.

---

**Q1. The novelty and diversity of the generated proteins.**

**A1.** Thank you for your insightful advice. To examine whether our model generates truly de novo proteins, we selected 7 diverse EC numbers from different top-level categories: Oxidoreductases, Transferases, Hydrolases, Lyases, Isomerases, Ligases, and Translocases. For each EC number, CFP-GEN generates 30 sequences **conditioned only on the EC number**. We then compared these generated sequences with 30 real proteins from the enzyme validation set with the corresponding EC number. Novelty is computed by measuring how different each generated sequence is from its **most similar real protein** in the training set, while diversity is computed by capturing how different the generated sequences are from the **overall training set**. To ensure both metrics are interpretable in the same direction, we subtract the scores from 1 (i.e., **higher is better**).

*R1-Table 1. Novelty comparison between real and designed proteins.*

| Method | EC:1.5.1.5 | EC:2.7.11.1 | EC:3.6.4.13 | EC:4.2.1.33 | EC:5.2.1.8 | EC:6.1.1.20 | EC:7.1.2.2 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Real Proteins | 0.254 | 0.234 | 0.334 | 0.252 | 0.296 | 0.215 | 0.221 |
| CFP-GEN | 0.379 | 0.390 | 0.412 | 0.303 | 0.302 | 0.265 | 0.449 |

*R1-Table 2. Diversity comparison between real and designed proteins.*

| Method | EC:1.5.1.5 | EC:2.7.11.1 | EC:3.6.4.13 | EC:4.2.1.33 | EC:5.2.1.8 | EC:6.1.1.20 | EC:7.1.2.2 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Real Proteins | 0.698 | 0.764 | 0.676 | 0.654 | 0.745 | 0.612 | 0.589 |
| CFP-GEN | 0.748 | 0.766 | 0.725 | 0.565 | 0.714 | 0.677 | 0.760 |

We observe that CFP-GEN consistently achieves higher sequence novelty across all 7 EC numbers, demonstrating its strong potential for ***de novo* protein design beyond simply replicating known sequences**. Moreover, the generated sequences exhibit **high intra-class diversity** in 5 out of the 7 EC categories. These results suggest that the model has learned a more generalized representation of functional proteins, rather than overfitting to training examples, and effectively avoids mode collapse. We will update the final version to reflect these discussions.

---

**Q2. Examples of conserved regions with mutations.**

**A2.** Thank you for raising this important point. We selected two representative case studies (e.g., EC:1.5.1.5 and EC:4.2.1.33) and performed **sequence alignments between CFP-GEN-generated sequences and known proteins from the same EC class**. We observed that CFP-GEN introduces **mutations at specific positions within conserved regions**, rather than simply copying them. The alignment results are presented below:

```jsx
CFP-GEN(EC:1.5.1.5): ...GTPVFVHAGPFANINHGANS...
Real Proteins:       ...GTPAFVHGGPFANIAHGNSS...
                     ...GTPLVVHAGPFANIAHGNSS...
                     ...GTPVFVHAGPFANIAHGNSS...
                     ...GTPVLVHAGPFANIAHGNSS...
                         ↑↑   ↑       ↑   ↑↑
```

```jsx
CFP-GEN(EC:4.2.1.33): ...MTIVCGDSHTSTHGAFGALA...
Real Proteins:        ...MTVVCGDSHTSTHGAFGCLA...
                      ...MTIACGDSHTSTHGAFGAIA...
                      ...TTIVCGDSHTSTHGAFGALA...
                      ...MTIACGDSHTSTHGAFGNIA...
                          ↑ ↑↑            ↑↑
```

The mutated positions are indicated **by arrows**.
In the first case, the CFP-GEN-designed sequence preserves the core motif (e.g., **GTP…GPFANI**), while introducing mutations such as A→V, L→F, or A→N at semi-conserved sites. In the second case, mutations (e.g., T→M, V→I, C→A) occur at non-critical positions, while maintaining key motifs (**CGDSHTSTHGAFG**), supporting the model’s ability to retain functional cores while exploring novel sequence variations. This discussion and the corresponding examples will be included in the revised supplementary material. Thank you!

---

Rebuttal Comment 1.1: Comment: Thank you for the additional experiments. Regarding the generated proteins, did you identify any novel motifs? For instance, in the case of the EC:1.5.1.5 enzyme, the GTP…GPFANI region is expected to contain a conserved structural motif. Did you observe any newly generated proteins that **don't have** this specific motif? I understand that structural comparison can be quite time-consuming, but it would be valuable to include some structural comparisons in the final version of the paper. Overall, I believe this is a solid paper, and I will maintain my current score.

---

Reply to Comment 1.1.1: Comment: Thank you for your insightful suggestion and support for our work. Following your recommendation, we conducted a detailed structural comparison between the generated proteins and known structures. Below, we provide representative examples, and additional results will be included in the revised supplementary material. To better characterize these novel motifs, we also provide their secondary structure (SS) annotations (e.g., helix (H), strand (E), coil (C)). These results demonstrate that CFP-GEN is capable of generating entirely new structural motifs, while still maintaining functional viability.
```
Uniprot ID: A3M4Z0
CFP-Gen:       …VSLLQEYVTWEMGKLEKLES…
SS Annotation: …HHHHHHHHCCCCHHHHHHHH…
Real Protein:  …AGFIRRYVSWQPSPLEHIE…
SS Annotation: …HHHHHHHHHCCCCHHHHHH…
```

```
Uniprot ID: A3M9Y1
CFP-Gen:       …RPLNQTMPQALALLAPEQRPTVWHQ…
SS Annotation: …HHHHHHHHHHHHHCCHHHCCEEEEE…
Real Protein:  …AKALNERLPPALKQLEVPLNIFHQ…
SS Annotation: …HHHHHHHHHHHHHCCCCCEEEEEE…
```

```
Uniprot ID: A6NJ78
CFP-Gen:       …IRIYVNSELEEIEQALKSAERVLAPGGRLSIIS…
SS Annotation: …HHHHHHHHHHHHHHHHHHHHHHHCCCCCEEEEE…
Real Protein:  …LRIFVNNELNELYTGLKTAQKFLRPGGRLVALS…
SS Annotation: …HHHHHHHHHHHHHHHHHHHHHHEEEEEEEEEEE…
```

```
Uniprot ID: Q8T9Z7
CFP-Gen:       …AAERQTTFNDMIKIALESVLLGDASGPEGQ…
SS Annotation: …HHHHHHHHHHHHHHHHHHHHHHHHCCHHHC…
Real Protein:  …VPHQLENMIKIALGACAKLATKYA…
SS Annotation: …HHHHHHHHHHHHHHHHHHHHHHCC…
```

```
Uniprot ID: Q9HW26
CFP-Gen:       …PVAQALDALESKLVDFSALT…
SS Annotation: …HHHHHHHHHHHHHHHHHHHH…
Real Protein:  …TVEQARERLQEKFDWLRREASAEELAGF…
SS Annotation: …HHHHHHHHHHHHHHHHHHHHHHHHHHHH…
```

```
Uniprot ID: Q9KLJ3
CFP-Gen:       …LRIISATAKKLGMSMDN…
SS Annotation: …HHHHHHHHHHHHHHHHH…
Real Protein:  …NIRIIQTLCDLAGIAQDKA…
SS Annotation: …HHHHHHHHHHHHHHHHHHH…
```

```
Uniprot ID: Q9R4E4
CFP-Gen:       …GTTMRLMAGVLAGQPFFSVL…
SS Annotation: …HHHHHHHHHHHHHCCCCEEE…
Real Protein:  …AATGCRLTMGLVGVYDFDSTFI…
SS Annotation: …HHHHHHHHHHHHHHCCCEEEEE…
```

```
Uniprot ID: Q9A874
CFP-Gen:       …YTRHEYFRRILCQMIGRWVEAGEAPPADIPLLGEMVKNICFNNARDYF…
SS Annotation: …HHHHHHHHHHHHHHHHHHHHHCCCCCHHHHHHHHHHHHHHHHHHHHHH…
Real Protein:  …IPARHDVARRVDSAFLARMVAEHRMDLVEAEELIVDLTYNLPKKAY…
SS Annotation: …HHHHHHHHHHHHHHHHHHHHHHCCCHHHHHHHHHHHHHHHHHHHHH…
```
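The novelty and diversity scores reported in R1-Tables 1 and 2 earlier in this thread can be sketched as follows; `difflib` similarity is our stand-in for the (unspecified) sequence-similarity measure, so this illustrates the 1-minus-similarity construction rather than reproducing the exact numbers:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Ratio in [0, 1]; 1.0 means identical sequences.
    return SequenceMatcher(None, a, b).ratio()

def novelty(generated, training_set):
    """1 - similarity to the closest training sequence (higher = more novel)."""
    return 1.0 - max(similarity(generated, t) for t in training_set)

def diversity(generated_set, training_set):
    """1 - mean similarity between generated sequences and the training set (higher = more diverse)."""
    sims = [similarity(g, t) for g in generated_set for t in training_set]
    return 1.0 - sum(sims) / len(sims)
```

A sequence copied verbatim from the training set scores a novelty of 0, so high novelty directly supports the de novo design claim.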
MindLLM: A Subject-Agnostic and Versatile Model for fMRI-to-text Decoding
Accept (poster)
Summary: A large body of research in visual image processing has focused on either brain encoding or decoding, aiming to understand how the brain processes natural scenes or reconstructs images from human brain activity. Recent fMRI brain decoding studies have specifically targeted advanced brain-computer interfaces, where incorporating natural language instructions and fMRI encoder latent vectors into an LLM decoder provides deeper insights into brain mechanisms. However, previous studies, where models were trained independently for each subject, have resulted in poor generalization across subjects. To address this, the current study introduces a novel approach called MindLLM. MindLLM develops a subject-agnostic fMRI encoder capable of accommodating subjects with varying input shapes, thus achieving high-performance subject-agnostic decoding. Additionally, the authors introduce brain instruction tuning, which enhances the model’s ability to capture diverse semantic representations from fMRI signals and generalize to new tasks. Experimental results on the popular NSD (Natural Scenes Dataset) and comprehensive fMRI-to-text benchmarks demonstrate that MindLLM outperforms baselines in downstream tasks, unseen subject generalization, and novel task adaptation.

Claims And Evidence: The submission makes several claims about the effectiveness and innovation of the MindLLM approach, particularly in its subject-agnostic encoding and the ability to generalize across subjects. However, some of these claims are not fully supported by clear and convincing evidence:

* The authors claim that the encoder is subject-agnostic, but the evidence provided is not sufficiently detailed to demonstrate how the shared and unique information across subjects is learned. There is a need for more explicit evidence on how the encoder generalizes across subjects, especially given that different subjects have varying numbers of voxels.
The absence of a thorough analysis of this aspect makes the claim less convincing.

* While the paper presents results on multiple benchmarks, there is no clear distinction between subject-generalizable benchmarks and those that show subject-specific variations. It would be beneficial to see a more detailed analysis of the performance on individual subjects, and whether the model performs similarly across all subjects on certain tasks or whether it varies significantly. This would help substantiate the claim that MindLLM generalizes effectively.

* The authors claim that brain instruction-tuning enhances the model’s performance, but the details on how this is performed (e.g., whether the loss is backpropagated, and how it affects the LLM decoder’s weights) are not provided. The lack of a comparison between results before and after brain instruction-tuning weakens the claim, as it is not clear how much improvement the tuning provides or what specific changes occur in the model's performance due to this process.

* The authors' claim that the brain flatmaps illustrate important insights from the model is undermined by the difficulty in interpreting the figure due to the colormap choice. Without a clearer visualization and better explanation of what each flatmap represents, particularly in relation to specific subjects and query tokens, the evidence provided in Figure 6 does not strongly support the claims about the model’s effectiveness.

Methods And Evaluation Criteria: The proposed methods, including the subject-agnostic encoder and brain instruction-tuning, are suitable for the problem of brain decoding and the application of multimodal language models. However, additional clarifications on how the methods are implemented and evaluated (particularly in terms of how they handle varying voxel counts and the impact of brain instruction-tuning) would enhance the robustness of the approach.
The evaluation criteria, including the use of fMRI-to-text benchmarks, are appropriate, but further breakdowns of subject-specific performance would provide a more comprehensive understanding of the model's generalizability.

Theoretical Claims: Since the paper does not present formal mathematical proofs to substantiate its theoretical claims, the correctness of any such proofs does not apply.

Experimental Designs Or Analyses: The experimental design of the study is generally sound, but there are several areas that need further clarity and validation. These issues are highlighted in the weaknesses section.

Supplementary Material: Yes, the supplementary material provides a link to the code, and the link is anonymous.

Relation To Broader Scientific Literature: The key contributions of the MindLLM approach build upon several important areas of scientific literature, including brain encoding and decoding, brain-computer interfaces, multimodal LLMs, and cross-subject generalization.

Essential References Not Discussed: Yes, all the relevant works are clearly discussed in the current version, and the related work section is adequate.

Other Strengths And Weaknesses:

Strengths:
1. The concept of learning a query weight matrix to transform each subject's fMRI data into tokens is intriguing, as this approach can be generalized for new subjects with varying input shapes.
2. Position encoding for key vectors, particularly the use of Fourier transformers for voxel coordinates, is well handled.
3. The proposed MindLLM is interesting, as it leverages fMRI-to-text benchmarks and utilizes fMRI encoding vectors passed to an LLM decoder, making MindLLM an fMRI instruction-tuned multimodal large language model.

Weaknesses:
1. There are several significant weaknesses in this work, particularly regarding the methodology and experimental results:

* The majority of the results presented in this study focus on fMRI-text decoding benchmarks.
However, the underlying methodology—the subject-agnostic fMRI encoder—has not been sufficiently explained. Specifically, how was this approach implemented, and was the latent space verified for each subject? Given that fMRI recordings have low signal-to-noise ratio (SNR), it is crucial to understand how the common latent space functions. Furthermore, whether the transition from the latent space to the subject space is accurately reconstructing the voxels and regions is a key issue. Without proper validation of this, it is difficult to trust the results of the fMRI-text decoding.

* Additionally, the authors should clarify their subject-agnostic fMRI encoder approach in more detail. With the current explanation, it is unclear how the same query weight matrix is learned across subjects. For example, if one subject has 12,000+ voxels and another has 15,000 voxels, how is the same weight matrix used to project data into a common space? If I misunderstood, the authors should provide a clearer explanation of this methodology.

2. The concept of MindLLM with a subject-agnostic encoder is not entirely novel. It seems quite similar to the approach presented in the MindEye-2 paper, where the authors also learn a subject-agnostic encoder and use it to generalize on held-out subjects. Therefore, the authors of the current study could provide a clearer explanation of how the MindLLM approach differs from MindEye-2, especially considering that both papers use the same NSD dataset.

3. There are no details provided on how the authors perform brain instruction-tuning. Specifically, it is unclear whether the loss is backpropagated during brain instruction-tuning. If so, how are the weight parameters in the LLM decoder affected or updated? Additionally, there is no comparison of the results before and after brain instruction-tuning to demonstrate its impact.

4.
Since the authors have learned a subject-agnostic encoder, it would be helpful to understand what specific shared information is learned across subjects and what unique information is learned for each subject. Additionally, as the authors perform various benchmarks, it would be insightful to know if any benchmarks exhibit similar performance across subjects, while others show subject-specific variations in results.

5. What are the key conclusions from MindLLM? The brain flatmaps in Figure 6 are difficult to interpret due to the choice of colormap. Additionally, there are six flatmaps, and it is unclear whether each subfigure corresponds to a specific subject. The authors mention that each subfigure is related to a query token, but this makes Figure 6 difficult to read and interpret. As a result, drawing meaningful conclusions from this figure is challenging.

6. The paper lacks a discussion of the findings and implications of the current work.

Other Comments Or Suggestions:

* Clarity of Figures: Some figures, especially Figure 6 (brain flatmaps), could benefit from clearer labeling, improved colormap choices, and better explanations to enhance readability and interpretation.
* The authors could consider including subject-specific and shared variance brain maps to further illustrate the model's ability to generalize across subjects and capture common brain activity patterns.

Questions For Authors:

* The questions raised in the weaknesses section regarding the subject-agnostic encoder, brain instruction-tuning, and performance across subjects would benefit from further clarification. Please refer to the details discussed in that section for more context.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: **C1** It is unclear how the fMRI encoder handles different numbers of voxels.

**R1** Each voxel is treated as a token in our model, and the attention layer learns to map sequences of varying lengths into a fixed-dimensional representation. This is similar to using multiple [CLS] tokens in a BERT model to capture the semantics of sentences of various lengths. This works because, in attention, the output dimension is solely determined by the number of queries. We also provide a [mathematical clarification](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/math_clarification.png).

**C2** The idea is not entirely novel—see MindEye2.

**R2** We respectfully disagree with this statement and would like to emphasize the **fundamental differences** between our approach and MindEye2:
1. MindEye2 requires a separate projector layer trained on each subject to map their features into a shared representation space.
2. The architecture of MindEye2 cannot handle subjects with varying numbers of voxels.

Therefore, MindEye2 cannot generalize to unseen subjects without further tuning on data from the unseen subject. In contrast, our model does not need subject-specific parameters or architectures and can generalize to held-out subjects in a zero-shot manner.

**C3** The brain instruction tuning lacks details of backpropagation.

**R3** Brain instruction tuning trains the model through the loss function in lines 254-255. The gradients are backpropagated from the predicted tokens to the LLM, and all the way back to the fMRI encoder to update its parameters.

**C4** No comparison to show the results w/ and w/o the brain instruction tuning.

**R4** We would like to politely point out that we did include a comprehensive experiment showing the effects of brain instruction tuning (BIT). As shown in Table 2's caption, models marked with $\circ$ are the versions w/o BIT. Models w/ BIT significantly outperform their corresponding models w/o BIT.
On average, BIT brings a 28.0% improvement, as stated in line 340, right column.

**C5** Figure 6 is difficult to interpret.

**R5** We improved [Figure 6](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/figure6.png)'s clarity by selecting a more perceptually friendly colormap. We also improved the caption to clarify that each subfigure corresponds to a query token on subject 1. The findings are discussed in section 4.7.

**C6** Brain maps across subjects should be included.

**R6** We thank the reviewer for the suggestion. We now present the query token from Figure 6(a) applied to additional subjects (Subjects 2–7): [Figure](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/brainmaps.png). We observe that the attention maps exhibit highly similar spatial patterns across subjects. This consistency supports the model's ability to generalize across subjects and capture common brain patterns.

**C7** The shared vs. subject-specific information in the latent space should be verified.

**R7** We do not encourage the encoding of subject-specific information in the method, as our model is designed to be *subject-agnostic*. This enables scalable deployment across diverse populations and eliminates the need for costly, subject-specific calibration. To assess how subject-specific and subject-agnostic information evolve through the model, we visualize the latent embeddings at various stages of the encoder: [Figure](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/latent.png)

**C8** Performances on individual subjects should be included.

**R8** We appreciate the suggestion. The results in Table 2 are based on subject 1. We have now extended our experiments to include subjects 2 and 5: [Table](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/breakdown.png). Across subjects, our model outperforms the baselines in most cases, demonstrating robustness to inter-subject variability.
We note that performance varies slightly more on ST-VQA and TallyQA. Models trained on subject 5 outperform those trained on subjects 1 and 2, suggesting that subject 5 provides higher-quality data. We will include results for all subjects in the revised version.

**C9** What are the key conclusions from MindLLM? The paper lacks a discussion of the findings and implications of the current work.

**R9** While the interpretation results (Figure 6) support our model's motivation, our primary focus is **improving decoding performance**. Key findings:
- **Brain instruction tuning** significantly improves performance.
- **Neuroscience-informed attention** significantly outperforms vanilla cross-attention (Section 4.6), offering architectural insights for fMRI decoding.

Impact: Our method enables subject-agnostic decoding and easy task adaptation, which unlocks out-of-the-box applications like prosthetic control without subject-specific finetuning. We will add detailed discussions in the revised version.

---

Rebuttal Comment 1.1: Comment: I thank the authors for addressing several of my questions, particularly clarifying how MindLLM differs from the MindEye-2 paper. The explanation of how each voxel is mapped into an embedding vector by treating it as a token is also very clear. I appreciate the adjustments made to the figures, which have improved readability. However, further clarification regarding the encoder architectures and training paradigms—ideally through a brief tabular or side-by-side architectural comparison—would help highlight the distinctions between MindEye-2 and MindLLM more clearly. Additionally, an analysis of shared versus subject-specific variance would strengthen the paper's claims. Therefore, I am raising my score in recognition of the improved clarity on several important points.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for recognizing the contributions of our work and for providing additional thoughtful suggestions.
**CC1** The distinction between the proposed model and MindEye2 should be highlighted through a brief tabular or side-by-side architectural comparison.

**RR1** We have 1) added a side-by-side comparison between our model and MindEye2, and 2) provided a table summarizing the distinctions between our model and all important baselines: [Figure & Table](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/comparison.png)

**CC2** There should be additional analysis of shared versus subject-specific variance.

**RR2** We further updated the [flatbrain maps](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/brainmaps.png) to include quantitative measurements. Specifically, given that voxels differ across subjects, we propose [spatially weighted cosine similarity (SWCS)](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/SWCS.png), a metric in $[-1, 1]$ to assess similarities between the attention maps of different subjects. As a baseline for comparison, we also generate uniform random attention maps for each subject and compute their pairwise similarities. We further analyze the results below based on the updated flatbrain maps.
1. We observe that the attention maps exhibit highly similar spatial patterns across subjects, indicating that the model captures shared structural-functional correspondences across human brains, which is expected given the overall anatomical and functional similarity among individuals.
2. At the same time, we do observe moderate subject-specific variations in the attention maps. These reflect differences in voxel-level functional signals and individual variability in precise voxel locations. The model is able to accommodate these differences through flexible attention mechanisms.
3. Based on 1 & 2, we argue that the neuroscience-informed attention layer in our model learns both a global understanding of brain organization and performs finer-grained alignment across subjects—for example, voxel $i$ in subject 2 may correspond functionally to voxel $j$ in subject 3. However, the actual fMRI signals at corresponding locations may still vary in distribution across subjects.
4. To address this, our model includes subsequent MLP blocks that further transform the attention-aggregated voxel representations into a shared space, eliminating subject-specific variance, as illustrated in the [Figure](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/latent.png).

We also plan to conduct more comprehensive analyses on this topic in the revised version and in future work.
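For concreteness, the mechanism defended in R1 above—a fixed set of learnable queries attending over a variable-length voxel sequence, so that the output shape depends only on the number of queries—can be sketched as follows. This is a minimal illustration, not the authors' implementation; all names and dimensions here are hypothetical.

```python
import numpy as np

def fixed_query_attention(voxel_tokens, queries, Wk, Wv):
    """Cross-attention with fixed learnable queries: the output shape
    depends only on the number of queries, not the number of voxels."""
    K = voxel_tokens @ Wk                        # (n_voxels, d)
    V = voxel_tokens @ Wv                        # (n_voxels, d)
    scores = queries @ K.T / np.sqrt(K.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # softmax over voxels
    return A @ V                                 # (n_queries, d)

rng = np.random.default_rng(0)
d_in, d, n_queries = 8, 16, 4
queries = rng.normal(size=(n_queries, d))        # learnable, shared across subjects
Wk, Wv = rng.normal(size=(d_in, d)), rng.normal(size=(d_in, d))

# Two "subjects" with different voxel counts map to the same output shape.
out_a = fixed_query_attention(rng.normal(size=(120, d_in)), queries, Wk, Wv)
out_b = fixed_query_attention(rng.normal(size=(150, d_in)), queries, Wk, Wv)
print(out_a.shape, out_b.shape)  # (4, 16) (4, 16)
```

Because the queries are the only shape-determining input, no subject-specific projector is needed, which is the property contrasted with MindEye2 in R2.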
Summary: This work presents a versatile and subject-agnostic model for fMRI-to-text decoding. It demonstrates a brain-instruction-tuning approach inspired by the visual instruction tuning framework. The model is specifically designed for application across subjects with varying numbers of recorded voxels, a common challenge in BCI applications. Compared to existing models using an average-pooling or sampling strategy, this model proposes a neuroscience-informed attention structure that allows effective information extraction. In the encoder, the query embeddings are learnable, and the key embeddings integrate positional information. Meanwhile, it utilizes the NSD dataset (MSCOCO images) to build the link between visual images and text inputs, which allows diverse and versatile downstream tasks spanning perception, memory, language, and reasoning.

Claims And Evidence: This framework is subject-agnostic, as the designed attention modules allow flexible information extraction regardless of mismatches in recorded voxels; the claim is clearly supported. As demonstrated on comprehensive benchmarks, MindLLM achieves state-of-the-art accuracy on different downstream tasks compared to existing methods. It has also been evaluated in a held-out-subject setting, achieving the best accuracy.

Methods And Evaluation Criteria: The NSD dataset is a well-established benchmark in brain decoding, and the MSCOCO dataset is popular in computer vision and has been extended to other diverse task settings, including captioning, QA, etc. The evaluation metrics, including BLEU and METEOR, are properly utilized. Multiple baselines in this field, including MindBridge, UniBrain, and BrainChat, are compared against.

Theoretical Claims: This work does not include any theoretical proofs or claims.
Experimental Designs Or Analyses: The experimental designs are solid, as the method has been demonstrated on the largest-scale brain decoding dataset recently released, although including other datasets such as BOLD5000 (cross-dataset generalization) could further increase the soundness of the evaluation. Multiple established baselines are compared against. An ablation study of critical components of the model has also been carried out.

Supplementary Material: The supplementary material includes more details of the datasets to improve reproducibility.

Relation To Broader Scientific Literature: Brain decoding is a challenging field; recent studies have started to integrate brain decoding with existing large language model frameworks to enhance text-decoding capability. Subject-agnostic decoding is one of the major challenges in this field, and this work is well motivated to address it. The work covers a sufficient literature review, is comprehensively grounded with comparisons against multiple existing baselines, and the results demonstrate a certain level of improvement.

Essential References Not Discussed: Although this work focuses only on text decoding, it lacks comparisons with or references to brain-machine alignment and image/video decoding literature, for example [1][2][3].
[1] Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain).
[2] Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity.
[3] Seeing beyond the brain: Conditional diffusion model with sparse masked modeling for vision decoding.

Other Strengths And Weaknesses: This work demonstrates an impactful framework for brain instruction tuning, enables subject-agnostic decoding, and achieves state-of-the-art accuracy, though it does not outperform existing baselines significantly. Meanwhile, the framework itself is not novel, as it directly mimics visual instruction tuning.
The framework lacks the capability to integrate image information, which could be a future extension.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. Computational cost of the model's training and inference, compared to other baselines.
2. Are data scaling laws observed in the results? Does training on more subjects increase accuracy?
3. How is subject-specific information encoded in this framework?

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: **C1** The manuscript lacks comparisons or references on brain-machine alignment or image/video decoding [1][2][3].

**R1** We thank the reviewer for pointing out these relevant references. [1] and [2] deal with fMRI time series, while our method focuses on static fMRI (i.e., fMRI signals at a moment). Therefore, they are not directly comparable to our work. We will create a new section on multimodal brain decoding in the related work and discuss these models in the revised version. For [3], we have included it in the baselines. The results are shown below and will be added in the revised version (the MindVis column is the new baseline): [Table](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/table2.png)

**C2** The framework is not very novel, as it directly mimics visual instruction tuning.

**R2** The training paradigm of brain instruction tuning is similar to visual instruction tuning. However, the construction of the brain instruction tuning datasets is a nontrivial contribution. As discussed in section 3.3 (lines 177-184, right column), we select datasets to capture diverse aspects of the semantic information embedded in fMRI signals, which are considered among the most fundamental and essential properties of human brains.

**Q1** The computational cost of the model's training and inference, compared to other baselines.

**A1** We thank the reviewer for this valuable question. We summarize the computational complexity and runtime analysis below: [Table](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/cost.png)

Experiments here were run on an A40. All timings are measured during inference with `batch_size=1`. When comparing with other encoders, we only measure the encoder part (the LLM is excluded); we time the LLM separately in the last row. Despite its superior performance, our model introduces only a marginal increase in encoder-side computational cost compared to other baselines.
Notably, the majority of inference time and complexity arises from the LLM component, which is shared across all models. This highlights that the design choices in the encoder—while crucial for performance—do not significantly affect runtime.

**Q2** Are data scaling laws observed in the results? Does training on more subjects increase accuracy?

**A2** We appreciate the reviewer's valuable question and suggestion. We conducted experiments to evaluate how model performance scales with the number of subjects and report the performance on the COCO caption task. We examined both in-distribution (seen subjects) and out-of-distribution (held-out subjects) settings: [Table](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/scaling.png)

Our results show significant performance improvements as the number of training subjects increases, demonstrating that the model benefits from exposure to more subjects. We will include more results on other subjects and datasets in the revised version.

**Q3** How is subject-specific information encoded in this framework?

**A3** We do not encourage the encoding of subject-specific information, as our model is designed to be *subject-agnostic*—capable of generalizing to individuals not seen during training without requiring additional fine-tuning. This "out-of-the-box" generalization has significant practical benefits: it supports scalable deployment across diverse populations and eliminates the need for costly, subject-specific personalization or calibration. To assess how subject-specific and subject-agnostic information evolve through the model, we visualize the latent embeddings at various stages of the encoder using t-SNE: [Figure](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/latent.png)

This figure illustrates the transition from subject-specific to subject-agnostic representations.
Initially, the embeddings exhibit distinct subject-wise clusters, indicating strong subject-specific information. After passing through the neuroscience-informed attention layer, these clusters are still visible. However, as the embeddings propagate through successive MLP layers, the subject-specific patterns gradually dissolve. By the final layer, the latent space forms a well-mixed representation where subject identity is no longer distinguishable. This transformation is crucial for enabling generalization to unseen subjects in a zero-shot manner.
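The visual check described above can also be made quantitative. One simple proxy (our suggestion for illustration, not a metric from the paper) is the ratio of mean within-subject to mean between-subject pairwise distance in the embedding space: a ratio near 1 indicates a well-mixed latent space where subject identity is indistinguishable, while a ratio well below 1 indicates persisting subject clusters.

```python
import numpy as np

def subject_mixing_score(embeddings, subject_ids):
    """Mean within-subject distance divided by mean between-subject
    distance; ~1 means subject identity is no longer distinguishable."""
    D = np.linalg.norm(embeddings[:, None] - embeddings[None], axis=-1)
    same = subject_ids[:, None] == subject_ids[None]
    off_diag = ~np.eye(len(embeddings), dtype=bool)
    return D[same & off_diag].mean() / D[~same].mean()

rng = np.random.default_rng(0)
ids = np.repeat([0, 1], 50)

# Clustered embeddings (early layers): strong subject-specific structure.
clustered = rng.normal(size=(100, 2)) + np.where(ids[:, None] == 0, 0.0, 10.0)
# Mixed embeddings (final layer): both subjects drawn from one distribution.
mixed = rng.normal(size=(100, 2))

print(subject_mixing_score(clustered, ids))  # well below 1
print(subject_mixing_score(mixed, ids))      # close to 1
```

Such a score could complement the t-SNE plots by tracking subject mixing layer by layer with a single number per layer.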
Summary: The paper proposes a subject-agnostic encoding of fMRI recordings into an LLM space to enable text decoding from brain data. The paper claims this approach generalizes across subjects with different numbers of voxel measurements, and that it outperforms existing baselines.

Claims And Evidence: As written in Table 1, the proposed MindLLM is the only model that is subject-agnostic. But only a few lines above in the text, it is written that UniBrain (Mai & Zhang, 2023) is capable of doing so: "models for fMRI decoding can not handle varying input shapes and are not subject-agnostic, with only a few exceptions (Mai & Zhang, 2023)". Which is correct? Model performance indeed seems superior to baseline methods, with consistent gains across datasets (Tables 1 and 2). The generalization across subjects is what is most convincing to me (Table 3). The model also seems to adapt well to new tasks (Table 4). Overall, this seems like a good contribution.

Methods And Evaluation Criteria: Evaluation methods are fair as far as I can tell, but I am not an expert in brain-to-text decoding.

Theoretical Claims: N/A.

Experimental Designs Or Analyses: Standard benchmarks are used as far as I can tell. I cannot judge whether the choice of baselines is comprehensive.

Supplementary Material: Skimmed.

Relation To Broader Scientific Literature: Decoding text from brain data is a broadly studied problem, and the use of LLMs for brain data has gained a lot of popularity in recent years. This work adds to that body of literature by proposing a novel subject-agnostic encoder to adapt individual brain recordings into a common LLM space.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses: -

Other Comments Or Suggestions: Line 32 "results" should be upper-case at the beginning of the sentence. Something is also off about the grammaticality of the sentence.

Questions For Authors: How well does your model scale with the number of subjects? I.e.
how much better does it perform with pre-training on e.g. 3 subjects compared to 1, etc.?

Edit after rebuttal: the authors have run this and the results look very promising. Increased score from 3 to 4.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: **C1** The text claims UniBrain is subject-agnostic, but Table 1 lists MindLLM as the only subject-agnostic model—this inconsistency needs clarification.

**R1** We apologize for the mistake and thank the reviewer for bringing this to our attention.

> with only a few exceptions (Mai & Zhang, 2023).

should actually be

> with only a few exceptions (Wang et al., 2024b)

The models in the two references share the same name, which led to the citation error. We will correct it in the revised version.

**C2** Line 32 "results" should be upper-case at the beginning of the sentence. Something is also off about the grammaticality of the sentence.

**R2** We thank the reviewer for pointing this out. We have corrected the capitalization and revised the sentence for clarity and grammatical correctness. The updated version reads:

> The results demonstrate that our model outperforms the baselines in downstream tasks, generalization to unseen subjects, and adaptation to novel tasks.

**Q1** How well does your model scale with the number of subjects? I.e. how much better does it perform with pre-training on e.g. 3 subjects compared to 1, etc.?

**A1** We appreciate the reviewer's valuable question and suggestion. We conducted experiments to evaluate how model performance scales with the number of subjects and report the performances on the COCO caption task. We examined both in-distribution (seen subjects) and out-of-distribution (held-out subjects) settings: [Table](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/scaling.png)

Our results show significant performance improvements as the number of training subjects increases, demonstrating that the model benefits from exposure to more subjects during pre-training. We will include more results on other subjects and datasets in the revised manuscript.

---

Rebuttal Comment 1.1: Comment: Thank you for your responses.
> As written in Table 1, the proposed MindLLM is the only model that is subject-agnostic. But only a few lines above in the text, it is written that UniBrain (Mai & Zhang, 2023) is capable of doing so: "models for fMRI decoding can not handle varying input shapes and are not subject-agnostic, with only a few exceptions (Mai & Zhang, 2023)".

> "with only a few exceptions (Mai & Zhang, 2023)." Should actually be "with only a few exceptions (Wang et al., 2024b)"

I'm still confused; I was more pointing to the inconsistency between Table 1 and the text, less so the citation. It seems that MindLLM is then **not** the only model that is subject-agnostic. So is Table 1 incorrect?

> Our results show significant performance improvements as the number of training subjects increases, demonstrating that the model benefits from exposure to more subjects during pre-training. We will include more results on other subjects and datasets in the revised manuscript.

I honestly find this the most interesting result of the paper. I'll increase my score; I think this is worth sharing with the community.

---

Reply to Comment 1.1.1: Comment: Thanks for the follow-up. We now better understand the source of confusion and appreciate the opportunity to clarify further.

UniBrain (Mai & Zhang, 2023) is not subject-agnostic and is included in Table 1. UniBrain (Wang et al., 2024b) is subject-agnostic and is included in **Tables 2, 3 & 4**, but not in Table 1. *It is worth noting that although it is subject-agnostic, it exhibits limitations as discussed in lines 98-105 (right column) and illustrated in Figure 2.*

The reason UniBrain (Wang et al., 2024b) was not originally included in Table 1 is that Table 1 is a widely used public captioning benchmark, which has been adopted by many prior works. We only included baselines that reported results on this benchmark in their **original publications**.
Since UniBrain (Wang et al., 2024b) was not designed for captioning and did not report results on this benchmark, it was excluded. However, to provide a more complete comparison, we have now adapted UniBrain (Wang et al., 2024b) to this benchmark (as we did in Tables 2–4) and present the updated Table 1 below: [Table 1](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/caption.png) In conclusion, both the text (with corrected citations) and Table 1 are correct. It’s just that (Wang et al., 2024b) (i.e., the *exception*) was not in Table 1.
Summary: This paper proposes MindLLM for subject-agnostic and versatile fMRI-to-text decoding. MindLLM consists of an fMRI encoder and an off-the-shelf LLM. The paper evaluates MindLLM on several fMRI-to-text benchmarks.

Claims And Evidence: The paper claims "a voxel's position alone can theoretically serve as effective keys for attention weight computation", but no evidence is provided. It seems more like a guess than a fact. It is important to verify the rationale for this operation.

Methods And Evaluation Criteria: See the above.

Theoretical Claims: Not applicable.

Experimental Designs Or Analyses:
- It is not clear how much of the improvement is due to the proposed model versus the use of superior LLMs. It is unfair to compare their performances with other state-of-the-art baselines that do not use the same LLM in Table 1. The authors should report the performance of their runs using the same backbones as the compared methods for a fair comparison.
- Tables 2/3 have issues similar to Table 1.
- In the captioning task in Table 1, the METEOR, CIDEr and SPICE metrics are often considered more important. However, the results for these metrics of the proposed method are only slightly superior to or even inferior to the baselines, which raises doubts about the effectiveness of the proposed method.
- The ablation study in section 4.6 is not sufficiently convincing. There is no explanation of the task being experimented on, and the authors should provide the final results, not just the loss.

Supplementary Material: No.

Relation To Broader Scientific Literature: No.

Essential References Not Discussed: None.

Other Strengths And Weaknesses: See the above problems.

Other Comments Or Suggestions: None.

Questions For Authors: None.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: **C1** The paper claims "a voxel's position alone can theoretically serve as effective keys for attention weight computation", but no evidence is provided.

**R1** We would like to politely point out that it is not a guess—it is strongly supported by the ablation study (note the blue line) in section 4.6. As pointed out in lines 403-406, left column,

> The vanilla cross attention (Pos Enc. + fMRI) leads to poor performance, while removing fMRI values from the key embeddings (Pos Enc.) yields a significant improvement.

which validates the hypothesis of using a voxel's position alone.

**C2** For Tables 1, 2 & 3, it is not clear how much of the improvement is due to the proposed model versus the use of superior LLMs. It is unfair to compare their performances with other state-of-the-art baselines that do not use the same LLM in Table 1.

**R2** We argue that the comparison is fair across Tables 1-3, as pointed out in lines 140-142:

> In practice, we use Vicuna-7b (Zheng et al., 2023) as our LLM to maintain consistency with our baseline (Xia et al., 2024).

Specifically,
- **Table 1**: The strongest baseline, UMBRAE, uses Shikra-7b, a vision-language model built on Vicuna-7b—consistent with our choice of LLM.
- **Table 2**: Besides UMBRAE, for other competitive baselines (MindBridge, UniBrain), we use the same LLM backbone as ours.
- **Table 3**: All baselines share the same LLM backbone as our model.

Furthermore, as discussed in Section 4.2, we observe substantial performance gains attributable to the **brain instruction tuning** and the **encoder design**. This supports our claim that the improvements stem from our proposed approach rather than solely from the choice of LLM.

**C3** The results for METEOR, CIDEr and SPICE of the proposed method are only slightly superior to or even inferior to the baselines.
**R3** While our model does not significantly outperform the baselines in terms of METEOR, CIDEr, and SPICE, it offers several key advantages that the baselines lack:
- As shown in Table 1, our model is the only one that is subject-agnostic. In other words, it can generalize to unseen subjects in a zero-shot manner. This is particularly valuable in BCI applications, where users often expect the device to work out of the box without requiring subject-specific training data.
- As noted in lines 82–83 (right column), our model can handle tasks beyond simple image-stimulus associations—such as answering memory-related questions—while the strongest baseline, UMBRAE, cannot.

We would also like to emphasize that our method demonstrates consistent and broad improvements across a wide range of downstream tasks. For example, beyond brain captioning, our model outperforms all baselines on 33 out of 38 evaluation metrics reported in Table 2.

**C4** The ablation study is not sufficiently convincing. There is no explanation of the task being experimented on, and the authors should provide the final results, not just the loss.

**R4** We appreciate the reviewer's feedback and apologize for the lack of clarity regarding the experimental setup. The ablation study is conducted across all tasks during brain instruction tuning on subject 1, and the reported loss is the average loss across these tasks. We will update the details of the settings in the revised version.

Our decision to present loss curves was intentional: as pointed out in lines 405–411, left column, the comparison of the convergence speeds of the orange and green lines helps illustrate the impact of both region and positional encodings. (We do not plot metrics during training at each step since generation is time-consuming.)

However, we agree that final performance metrics are essential for a complete picture. We have now computed these metrics for most downstream tasks, and the results are shown below.
The results are consistent with our original claim based on the loss curves. Note that it also validates our response to the reviewer's concern 1. [Table](https://anonymous.4open.science/api/repo/MindLLM-FAA4/file/rebuttal/ablation.png) We will include the final results of all tasks in the revised version. --- Rebuttal Comment 1.1: Comment: While I remain skeptical about the theoretical basis for using voxel positions in attention computation (which seems experimentally motivated), I acknowledge the authors' efforts in addressing some concerns. Accordingly, I have adjusted my score from 1 to 2. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their feedback and for reconsidering their score. Given that the reviewer still remains skeptical about the theoretical basis for using voxel positions alone, we would like to politely further clarify that our design choices are not merely *experimentally* motivated, but are motivated by both neuroscientific intuition and established literature, and ultimately validated by empirical results (Section 4.6). Specifically, as noted in lines 124–126 (right column), it is generally accepted that the human brain functions broadly exhibit spatial consistency across subjects. For example, the motion of the right body is mainly related to the left hemisphere, while the right hemisphere handles most of the left body's movement [1]. Furthermore, prior work has shown a coupling between a voxel's cognitive function and the anatomical role of the voxel in MRI, which is to some extent reflected by its spatial location in the brain [2, 3]. Hence, we theoretically infer that voxel positions are related to brain function characterization. Therefore, we argue that our design aligns well with the unique properties of fMRI data, in contrast to images or text. The empirical findings further support this modeling choice. 
Finally, to address potential variability across subjects (e.g., anatomical and functional shifts), we introduce **region encodings** (lines 147, right column – 203, left column), which provide additional neuroscientific grounding and act as a calibration mechanism to complement the raw positional information. [1] McManus, I. C. (2002). Right hand, left hand: The origins of asymmetry in brains, bodies, atoms, and cultures. Harvard University Press. [2] Zhang, X., Liang, C., Wang, N., Wang, Y., Gao, Y., Sui, C., ... & Wen, H. (2023). Abnormal whole-brain voxelwise structure-function coupling and its association with cognitive dysfunction in patients with different cerebral small vessel disease burdens. *Frontiers in Aging Neuroscience*, *15*, 1148738. [3] Liu, C., Jing, J., Jiang, J., Wen, W., Zhu, W., Li, Z., ... & Wang, Y. (2024). Relationships between brain structure-function coupling in normal aging and cognition: A cross-ethnicity population-based study. *NeuroImage*, *299*, 120847.
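The design defended in this thread—attention weights computed from voxel positions alone, with the fMRI signal entering only through the value pathway—can be sketched minimally as below. This is a generic illustration of the idea, not the paper's code; the projection names and dimensions are hypothetical.

```python
import numpy as np

def position_keyed_attention(voxel_pos, voxel_vals, queries, Wk, Wv):
    """Attention where keys come solely from voxel coordinates, so the
    weights depend on *where* a voxel is, not on its signal; the fMRI
    values enter only through the value pathway."""
    K = voxel_pos @ Wk                           # keys from 3D positions only
    V = voxel_vals[:, None] @ Wv                 # values from scalar fMRI signal
    scores = queries @ K.T / np.sqrt(K.shape[1])
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)            # softmax over voxels
    return A @ V

rng = np.random.default_rng(0)
n_vox, d = 200, 16
pos = rng.normal(size=(n_vox, 3))                # voxel coordinates
vals = rng.normal(size=n_vox)                    # fMRI value per voxel
queries = rng.normal(size=(8, d))
Wk, Wv = rng.normal(size=(3, d)), rng.normal(size=(1, d))

out = position_keyed_attention(pos, vals, queries, Wk, Wv)
print(out.shape)  # (8, 16)
```

Because the attention pattern is a fixed function of anatomy rather than of the noisy signal, it directly encodes the spatial-consistency assumption cited above; the region encodings discussed in the thread would add a further term to the keys.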
Wrapped Gaussian on the manifold of Symmetric Positive Definite Matrices
Accept (poster)
Summary: This paper studies the non-isotropic wrapped Gaussian distribution on the manifold of positive definite (PD) matrices. Specifically, the authors derive theoretical properties of the non-isotropic wrapped Gaussian distribution and propose maximum likelihood estimators for its parameters. They also define an equivalence relation between the sets of parameters of two wrapped Gaussians and resolve the non-identifiability issue of the wrapped Gaussian for PD matrices. Finally, the authors provide new interpretations of several known classifiers on PD matrices through the lens of wrapped Gaussian distributions. Claims And Evidence: Overall, all the claims and results in this paper are supported by rigorous proofs and/or simulation results. However, I am not quite convinced by the claim in the second column of Page 8 (Line 403) that "we do not observe a clear dominance of the He-WDA over the Ho-WDA". The major issue with this claim is that the Monte Carlo experiments are repeated only 5 times, which is clearly not enough. It should be at least 100 times. Moreover, it is not intuitive why the He-WDA behaves worse than the Ho-WDA on many of the data examples. Shouldn't Ho-WDA be a special case of He-WDA when all the covariance matrices for the different classes are the same? There are also some minor issues in the paper that I pointed out below. Methods And Evaluation Criteria: The proposed methods and evaluation criteria basically make sense for the wrapped Gaussian distribution problem and its related classification problems. As a side note, since the authors mentioned that they can sample from the wrapped Gaussian distribution, it would be better to outline the sampling procedure in the main paper or Appendix. Specifically, a procedure without rejection sampling is expected. Theoretical Claims: I have checked all the proofs and results in both the main paper and supplementary materials.
Experimental Designs Or Analyses: Yes, I have checked the validity of the experimental analyses. The only concern is the Monte Carlo repetition count that I mentioned above. Supplementary Material: Yes, I have reviewed all parts of the supplementary materials. Relation To Broader Scientific Literature: To my knowledge, wrapped distributions have been studied in directional statistics dating back to at least the 1970s. This paper extends this wrapping technique for the Gaussian distribution, or more generally, elliptically contoured distributions, to the manifold of positive definite matrices. As the authors pointed out, wrapped distributions have been studied on homogeneous Riemannian manifolds, but using a different technique from the one proposed in this paper. More relatedly, some exponential-wrapped distributions on symmetric spaces were studied in the literature as well, but as the authors of this paper pointed out, these related works consider the distribution on the tangent space to always be centered. In this paper, the authors consider a slightly more general setting with the wrapped Gaussian distribution on the manifold of positive definite matrices not necessarily being centered. Essential References Not Discussed: To the best of my knowledge, the paper did a good job in discussing the related works. The only main concern is the novelty of this paper when compared with the prior works that consider the centered distribution on the tangent space. Intuitively, if we know the tangent space of a manifold, it does not seem to be very difficult to center the data and/or distribution with respect to the origin of the tangent space. I encourage the authors to address this concern in more detail. Appendix H could be one example, but still not convincing enough. Other Strengths And Weaknesses: The writing and proofs of this paper are of good standard.
Other Comments Or Suggestions: 1. In the Abstract (Line 21-25), the sentence "We introduced a non-isotropic wrapped Gaussian by leveraging the exponential map, we derive theoretical properties of this distribution and propose a maximum likelihood framework for parameter estimation." is not correct in grammar. 2. Second line in Section 4.2: a typo "pus-forward". 3. Proposition 4.9: it should be clearly stated how many ones there are in $\nu=(1,...,1,0,...,0)$. The same applies to Proposition 4.14. 4. I am confused about Remark 4.12: can we just set $\mu_{\alpha}= -t \mathrm{Vect}_{p_{\alpha}}(p_{\alpha})$ so that the equivalence class can contain $\mu=0$? 5. Figure 3: For the case $d=10$, why is it almost consistently better than the case $d=5$ for the MLE to estimate $p^*$? It seems that $d=10$ is a more challenging problem. Does it mean that the Riemannian conjugate gradient algorithm doesn't fit this problem? 6. The first column of Page 6 (Line 308-310): can we center the data and then apply the method of moments? What difficulties prevent this straightforward adaptation? 7. The first column of Page 7 (Line 366): typo "arex pulled back.." 8. In the conclusion part, it seems to me that it may not be feasible to extend all the classical machine learning models that rely on Gaussian distributions to the manifold of SPD matrices. The computational issues incurred by calculating the exponential and logarithmic maps are huge barriers. 9. Why did the authors present all the proofs in tiny font?
It is hard for readers to read them. 10. Line 793 on Page 15: typo: "does dot imply...". 11. Line 812: "$Id$" at the end of the equation should be $I_d$. 12. Second point in Proposition D.1: the random variable $X$ is missing. It should be $\mathrm{Log}_p X$. 13. Line 860 on Page 16: typo "diffeomorphisme". 14. Line 1061 on Page 20: If $||\nu||_2^2 = n d$, then where does the "n" go in the expression of $p_{\min}$? Questions For Authors: Please see my concerns and comments above. I am happy to increase my scores if the authors can carefully address my comments and concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their work, their valuable comments and interesting questions. ## Regarding the "Claims And Evidence": We agree that the Ho-WDA is a special case of the He-WDA when the covariance matrices for each class are the same. In order to evaluate the performance of the He-WDA and Ho-WDA, we used a 5-fold cross-validation on each dataset. This is classically done in ML to evaluate pipelines. We were limited by the amount of data in some datasets. Moreover, as the He-WDA estimates one covariance matrix per class, it has more parameters to estimate and thus requires a lot of data per class. If the number of samples per class is low, the estimation of the covariance matrix can be very noisy and the performance of the He-WDA can be worse than that of the Ho-WDA. For example, for the Salinas dataset, some classes have only a few hundred samples, which is not enough to estimate the covariance matrix accurately. The Ho-WDA, on the other hand, estimates a single covariance matrix for all the classes and is less sensitive to the number of samples per class. We will add this explanation in the final version of the paper. ## Regarding the "Methods And Evaluation Criteria": The sampling procedure of a wrapped Gaussian is very simple and does not rely on any sophisticated rejection sampling algorithm. Indeed, to sample from $WG(p;\mu,\Sigma)$, one can simply sample a point $x$ from the Euclidean Gaussian $\mathcal{N}(\mu,\Sigma)$ and then map the sample onto the manifold $P_d$ by $Exp_p(Vect^{-1}_p(x))$. The exponential map $Exp_p$ as well as the vectorization $Vect_p$ are both simple to compute. We will add this algorithm in the final version of the paper.  ## Regarding the "Other Strengths And Weaknesses":  We would like to emphasize the fact that allowing the distribution to be non-centered in the tangent space is not just a “slightly more general setting''.
For more details on this point, please refer to our answer to point 3 of reviewer 9rYP.  ## Regarding the "Other comments or suggestions": We will correct the different typos highlighted in this review. 3. If the SPD matrices are of size $d \times d$ (as we consider in the paper), then $\nu$ is the concatenation of $d$ ones and $d(d+1)/2-d = d(d-1)/2$ zeros. We will clarify this point in the final version. 4. This is an important point. Let us reformulate Remark 4.12: Let us consider a wrapped Gaussian $WG(p;\mu, \Sigma)$. Then, the equivalent wrapped Gaussians are of the form $WG(e^t p; \mu + tVect_p(p), \Sigma)$ for $t \in R$. If $\mu$ and $Vect_p(p)$ are aligned, i.e., there exists $\tilde{t}$ such that $\mu = -\tilde{t}Vect_p(p)$, then the equivalence class contains a wrapped Gaussian with $\mu = 0$. However, if $\mu$ and $Vect_p(p) = (1,\cdots,1,0,\cdots,0)$ (the concatenation of $d$ ones and $d(d-1)/2$ zeros, see the previous point) are not aligned (for example, take $\mu = \nu + (1,\cdots,0) = (2,1,\cdots,1,0,\cdots,0)$), then there exists no $t$ such that $\mu = -tVect_p(p)$ and the equivalence class does not contain a wrapped Gaussian with $\mu = 0$. We will clarify this point in the final version. 5. We agree with you that one could expect the problem with $d=10$ to be more challenging than the problem with $d=5$. Accordingly, we would expect the error on the estimation of $p^\star$ to be higher for $d=10$ than for $d=5$. However, the results of Figure 3 show the contrary. Maybe we need to repeat the experiment more than 5 times to have a more accurate estimation of the error. We will try to increase the number of repetitions in the final version of the paper. 6. In order to center the data on the tangent space (to have $\mu^\star = 0$), one needs to know $p^\star$ and $\mu^\star$ (more details on how to center the data are given in Appendix D). Indeed, we need to know the tangent plane in which we want to center the data.
As our goal here is to estimate the parameters, we do not know $p^\star$ and $\mu^\star$ and thus cannot center the data. Moreover, if one simply “centers'' the data using the Riemannian mean $\mathfrak{G}(x_1,...,x_N)$, one does not have any guarantee that the “centered” data will have $\mu = 0$.  8. Computing exponential and logarithmic maps remains a bottleneck in SPD matrix geometry. However, a trade-off may exist between computational cost and performance gains. Theoretically, Euclidean Gaussian-based methods extend to $P_d$ via our wrapped Gaussian, though practical challenges will arise in applications, requiring careful choices. 9. We will modify the font size of the proofs to make them more readable. 12. Indeed, the random variable $X$ is missing in the second point of Proposition D.1. We will correct this error. 14. The $n$ in $||\nu||_2^2 = nd$ is a typo. Since $\nu$ consists of $d$ ones and $d(d-1)/2$ zeros, its squared norm is simply $d$. Therefore, the expression for $p_{\min}$ is correct. This error will be fixed. --- Rebuttal Comment 1.1: Comment: The paper has its merit, but the contributions are not extremely groundbreaking. I will keep my score at somewhere between 3-4.
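The rebuttal's sampling recipe (draw $x \sim \mathcal{N}(\mu,\Sigma)$ in the tangent space, then map it onto the manifold via $Exp_p(Vect^{-1}_p(x))$) can be sketched in a few lines. This is a hedged illustration, not the authors' code: it assumes the affine-invariant geometry and a standard orthonormal vectorization (diagonal entries first, then $\sqrt{2}$-scaled off-diagonal entries); the helper names `unvect` and `sample_wrapped_gaussian` are ours.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def unvect(v, d):
    """Inverse of a standard orthonormal vectorization: the first d entries
    are the diagonal, the remaining d(d-1)/2 entries are sqrt(2)-scaled
    upper off-diagonal entries of a symmetric d x d matrix."""
    S = np.diag(np.asarray(v[:d], dtype=float))
    iu = np.triu_indices(d, k=1)
    S[iu] = v[d:] / np.sqrt(2.0)
    return S + S.T - np.diag(np.diag(S))

def sample_wrapped_gaussian(p, mu, Sigma, n_samples, rng=None):
    """Sample from WG(p; mu, Sigma): draw x ~ N(mu, Sigma) in R^{d(d+1)/2},
    map it to a tangent vector at p, and push it forward with the
    affine-invariant exponential map."""
    rng = np.random.default_rng(rng)
    d = p.shape[0]
    p_half = sqrtm(p).real
    p_half_inv = np.linalg.inv(p_half)
    xs = rng.multivariate_normal(mu, Sigma, size=n_samples)
    samples = []
    for x in xs:
        V = p_half @ unvect(x, d) @ p_half  # Vect_p^{-1}(x): tangent vector at p
        # Exp_p(V) = p^{1/2} expm(p^{-1/2} V p^{-1/2}) p^{1/2}
        samples.append(p_half @ expm(p_half_inv @ V @ p_half_inv) @ p_half)
    return np.array(samples)
```

Every returned matrix is symmetric positive definite by construction, since it is the image of a symmetric tangent vector under the exponential map, so no rejection step is needed.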
Summary: The authors propose a new version of the Gaussian on the SPD manifold (with the affine metric) by wrapping a Gaussian from a tangent space onto the manifold. Their method has two main differences from previously proposed methods: 1. the distribution need not be isotropic, and 2. the footpoint of the distribution on the manifold need not be the mean of the distribution. The authors further extend the ideas of statistical classification using their wrapped Gaussian distribution. This review has been updated after the rebuttal period. Claims And Evidence: Almost all their claims are supported. The only claims of which I am not entirely sure are the Section 6.1 claims about the MLE. These claims do seem intuitive, but I think a more thorough justification is necessary. Methods And Evaluation Criteria: Yes, the authors test their methods on both simulated data and real data. The simulated data is somewhat low dimensional, with a max $d$ of 10; however, the real data is of the same order, so it seems appropriate. Theoretical Claims: Only glanced at most proofs but took a longer look at the proof in Section H of the appendix. Experimental Designs Or Analyses: Yes, all were checked. Supplementary Material: Yes, all parts were checked. Relation To Broader Scientific Literature: The SPD manifold is often used, but its structure is often ignored as it can be difficult to handle; the authors propose a method on the manifold leveraging the structure. Many authors have considered other distributions on SPD, such as Hajri et al., who proposed the Laplace distribution, and this paper seems like a direct comparison in terms of impact. Essential References Not Discussed: NA Other Strengths And Weaknesses: See comments below. Other Comments Or Suggestions: In the introduction, I think the authors should mention earlier that they are considering the affine metric, as there are two natural options for a metric on this space. Second page: I think there is a hanging sentence "the authors work on symmetric spaces."
The sentence seems out of place or incomplete. "In the sequel" seems an odd choice of wording. Typos and errors: 4.1 "tangent plan" -> "tangent plane" 4.2 starts with "pus-forward" 5.1 the theta star parameters missing comma 6.1 "They rely on metric that arex pulled" -> "They rely on metrics that are pulled" All paper: Inconsistent notation of $WG(p,\mu,\Sigma)$ vs. $WG(p;\mu,\Sigma)$ Figure 3, the third panel has the star in the wrong location Questions For Authors: In Section 5 it's unclear when it is assumed that $\Sigma$ is diagonal. It seems in 5.1 that it always will be, but then in 5.2 it seems as if only sometimes. Is the normalizing constant in MDM dependent on the mean of the distribution? When describing LDA and QDA, the authors mention that "all training points are sent to the tangent space via the exponential map"; should this not be the log map? In Section 5.1, is the $p^2$ a typo? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their work, their valuable comments and interesting questions. ## Regarding the Claims and Evidence: You say that you have doubts about the claims on the MLE made in Section 6.1. The only claim on the MLE made in this section is that the LDA uses an MLE on the training data to learn the parameters of each class. Are you referring to this claim? If so, we refer to Section 4.3 of [1] (page 109), where the formulas for the estimated parameters are given for the Euclidean LDA. One can check that they coincide with the formulas of the MLE in the Euclidean Gaussian case. ## Regarding the Comments Or Suggestions: 1. We will mention earlier in the introduction that we focus on the affine-invariant metric on the manifold of SPD matrices and mention that other choices could be made (for example, the Log-Euclidean metric). 2. We will check and correct the different typos and errors throughout the paper. ## Regarding the Questions For Authors: 1. In Section 5, $\Sigma$ is never assumed to be diagonal. The experiments in that section were always run with a full $\Sigma$. However, we mention that one can assume that $\Sigma$ is diagonal to simplify the estimation problem (reducing the number of parameters to estimate from $O(d^4)$ to $O(d^2)$, where $d$ is the size of the SPD matrices). This assumption of a diagonal $\Sigma$ is made in the experiments of Section 6.1 and Appendix J. We will clarify this point in the final version. 2. The normalizing constant in the MDM does not depend on the mean of the distributions. Indeed, as shown in Proposition 1 of [2], the normalizing constant of the isotropic Gaussian distribution depends only on $\sigma$, not on the mean. 3. Absolutely, in the description of the Tangent Space LDA and QDA, one should read “logarithm map” instead of “exponential map”. We will correct this error. 4.
In section 5.1, the $p^2$ is not meant to be $p$ squared but the $^2$ refers to a footnote. This can indeed be confusing and will be corrected. Finally, you mention paper [3] that completely fits in the scope of our paper, and we will make sure to add this reference to our section 2 on related works. [1] Hastie, T., Tibshirani, R., and Friedman, J. The Elements of Statistical Learning. Springer Series in Statistics. Springer, New York, NY, 2009. [2] Said, S., Hajri, H., Bombrun, L., and Vemuri, B. C. Gaussian Distributions on Riemannian Symmetric Spaces: Statistical Learning With Structured Covariance Matrices. IEEE Transactions on Information Theory, 64 (2):752–772, February 2018. [3] Hatem Hajri, Ioana Ilea, Salem Said, Lionel Bombrun, Yannick Berthoumieu. Riemannian Laplace distribution on the space of symmetric positive definite matrices. 2015
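As a quick numeric check of the $O(d^4)$ vs. $O(d^2)$ parameter-count claim in answer 1 above: on $d \times d$ SPD matrices the tangent space has dimension $m = d(d+1)/2$, so a full symmetric covariance $\Sigma$ has $m(m+1)/2$ free parameters while a diagonal one has only $m$. The function name below is illustrative, not from the paper.

```python
def wg_param_counts(d):
    """Free covariance parameters of a wrapped Gaussian on d x d SPD matrices.

    The tangent space has dimension m = d(d+1)/2, so a full symmetric
    covariance has m(m+1)/2 ~ O(d^4) parameters, a diagonal one only m ~ O(d^2).
    """
    m = d * (d + 1) // 2
    return {"full": m * (m + 1) // 2, "diagonal": m}

# For d = 10: m = 55, so a full covariance has 1540 parameters vs. 55 for a diagonal one.
```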
Summary: The authors proposed a wrapped Gaussian formalism and give ML estimators for the mean and covariance. The authors showed the usefulness of the proposed formulation in LDA and QDA on several datasets. Claims And Evidence: 1. My biggest concern is regarding the intrinsic nature of the formalism as claimed; the formalism is essentially an extrinsic formulation on the tangent space: define a Gaussian on the tangent space and push it forward onto the manifold Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, the claims hold. Experimental Designs Or Analyses: The experiments are sound, although it is not convincing how much benefit we get using the wrapped Gaussian Supplementary Material: No, Relation To Broader Scientific Literature: The ideas are relevant for the community. Essential References Not Discussed: 1. Gaussian distributions on Riemannian symmetric spaces Other Strengths And Weaknesses: Minor comments: The notations are uncommon; for example, why denote both vectors and matrices using lower case?! Under all metrics on SPD, it has non-positive curvature, not just the affine-invariant metric: “Once endowed with this metric, Pd is a complete connected Riemannian manifold of non-positive curvature” — this statement can be clarified. Also, all Riemannian metrics give the same sign of curvature, as the metrics are equivalent in that sense. 3. Section 3.1 reads as a dense laundry list of things; better to clarify why (if possible) (and where) you need these tools. All of these are well-known and well-studied among geometers, so the unfamiliar reader should know in which section you will use these tools. 4. In Section 4.2, “pus-forward” should be “push-forward”. 5. In Section 4.2, “The Jacobian determinant” is better called the determinant of the Jacobian. Major Comments: 1. Whenever one defines something on Euclidean space and pulls it over to the manifold, the intrinsic nature is gone.
The authors argued in the Introduction that classical tools are non-intrinsic: “However, classical Euclidean probability distributions fail to capture the intrinsic geometry of the underlying manifold.” So I am confused why the authors' definition of the wrapped Gaussian is non-intrinsic, as in Section 4.1. 2. The entire argument of being intrinsic doesn't hold, as everything defined in relation to the wrapped Gaussian is extrinsic, including Theorem 4.2 and Proposition 4.3. 3. In Section 4.2, the authors said they allow usage of $\mu$, which is a very trivial extension of the centered Gaussian. 4. Why would the CLT in Section 4.3 be something novel to prove? It depends on the CLT on the tangent space. Am I missing something here? 5. The estimation of the ML estimators of the mean and std follows from Euclidean Gaussians, so again Section 5 follows naturally from Euclidean MLEs. 6. The performance of LDA and QDA using the wrapped Gaussian is not clearly superior to the competition, as in Table 2. So I am not sure why we would use such a formalism, even if we ignore its extrinsic nature. Other Comments Or Suggestions: Please see the weakness section Questions For Authors: Please see weakness. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: ## Regarding the “Essential References Not Discussed” The reference [1] you mention is actually the first reference given in Section 2, devoted to related works, on the first page of our paper. It was a key reference for the development of our theory, as its authors proposed an __isotropic__ Gaussian distribution on Riemannian symmetric spaces (e.g. SPD). Our goal was to study a __non-isotropic__ Gaussian distribution on SPD matrices. This is clearly stated in the first 6 lines of Section 2. ## Regarding the "Other Strengths and weaknesses": ### Minor comments: Regarding the choice of geometry, not all metrics behave the same on the manifold of SPD matrices. We could have chosen a flat metric leading to a Euclidean space. ### Major comments: 1. Although we only used the word "intrinsic" twice in our paper, it may have confused our reader. There is a trade-off between working with an anisotropic distribution and having a fully intrinsic distribution. Indeed, as we detail in Section 2, Pennec defined in [2] a purely anisotropic and intrinsic Gaussian distribution on a Riemannian manifold. However, as detailed in Section 2: - The normalizing constant is expressed by an integral over the whole manifold $P_d$ (intractable in practice). - It uses a concentration matrix rather than a covariance matrix, and the relation between those two matrices requires the computation of an integral over $P_d$. - No sampling method has been studied for it. Hence, we tried to take the best of both worlds in our work by tackling the anisotropy and providing an easy-to-sample distribution, and we achieved these objectives. 2. The Gaussian distribution defined in [1] also relies (indirectly) on the choice of a particular tangent space. Indeed, in the p.d.f. of the Gaussian, the difference between a given point $y$ and the mean $\bar{x}$ is computed using $Log_{\bar{x}}y$ (a difference in the tangent space). However, this distribution is called “intrinsic”. 3.
Allowing the distribution to be non-centered in the tangent space raises several theoretical and practical issues that are not present when $\mu = 0$: - The non-identifiability issue raised by the choice of $\mu \neq 0$ is solved in Section 4.4. - As stated in Remark 4.12, the equivalence class of a given wrapped Gaussian (WG) does not necessarily contain a centered WG. Therefore, the choice of $\mu$ has a real impact on the expressivity of our distribution. - The estimation of the parameters is another issue: if one assumes $\mu = 0$, then the estimation of the parameters is straightforward (by the method of moments, as has been done by Chevallier et al. (2022)). However, if $\mu \neq 0$, then the estimation of the parameters is more complex and requires the use of a full MLE. - We would like to recall that, unlike in the Euclidean case, $\mu$ does not model the mean of the distribution, as we introduce a new parameter $p$. Maybe you got confused between those two parameters. Given these precisions, could you argue what specifically is trivial and why? 4. The CLT on the tangent space is not novel to prove. However, the CLT on the manifold was (to the best of our knowledge) never stated in the literature. We believe that this result is interesting, as it shows the interest of considering WGs on SPD matrices: a WG appears as the asymptotic distribution of a logarithmic product of i.i.d. random variables on the manifold. If this result is known, could you give a reference? 5. The estimation of the parameters of a WG is not a trivial extension of the Euclidean Gaussian case. As we explain in Proposition 5.1, __in the special case where $p^\star$ is known a priori__, the MLEs of $\mu$ and $\Sigma$ are the same as in the Euclidean case. However, in the general case, the MLE of $p$ does not have a closed-form solution and is dependent on the MLE of ($\mu$, $\Sigma$).
Thus, the estimation of the three parameters $p$, $\mu$ and $\Sigma$ must rely on the full maximum likelihood optimization problem stated in Section 5.1. Once again, could you specify what is trivial here? 6. Using the WG for classification with LDA and QDA primarily served to illustrate its applications. Our goal was to demonstrate how the theory developed earlier enables the construction of machine learning tools directly on the manifold of SPD matrices. The main value of the WG lies in offering a novel way to model data on SPD matrices. Additionally, in Section 6.2, we showed how classical SPD classifiers can be reinterpreted through the WG framework, providing a unifying probabilistic perspective. We believe that introducing such a perspective is valuable, and this paper represents a first step in that direction. [1] Said, S. et al, Gaussian Distributions on Riemannian Symmetric Spaces: Statistical Learning With Structured Covariance Matrices. IEEE Trans. Inf. Theory, 2018. [2] Pennec, X. Intrinsic Statistics on Riemannian Manifolds: Basic Tools for Geometric Measurements. J. Math. Imaging Vis, 2006.
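The known-footpoint case described in point 5 above (when $p^\star$ is known a priori, the MLE of $(\mu, \Sigma)$ coincides with the Euclidean one on the pulled-back data) can be sketched as follows. This is an assumed illustration under the affine-invariant geometry, with an orthonormal vectorization `vect` and function names of our choosing, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import logm, sqrtm

def vect(S):
    """Orthonormal vectorization of a symmetric matrix: diagonal entries
    first, then sqrt(2)-scaled upper off-diagonal entries."""
    d = S.shape[0]
    iu = np.triu_indices(d, k=1)
    return np.concatenate([np.diag(S), np.sqrt(2.0) * S[iu]])

def mle_known_footpoint(p, ys):
    """MLE of (mu, Sigma) for WG(p; mu, Sigma) when the footpoint p is known:
    pull each sample back to the tangent space at p via the affine-invariant
    log map, then take the Euclidean sample mean and covariance."""
    p_half_inv = np.linalg.inv(sqrtm(p).real)
    # Vect_p(Log_p y) = vect(logm(p^{-1/2} y p^{-1/2}))
    xs = np.array([vect(logm(p_half_inv @ y @ p_half_inv).real) for y in ys])
    mu_hat = xs.mean(axis=0)
    Sigma_hat = np.cov(xs, rowvar=False, bias=True)  # MLE uses the 1/N normalization
    return mu_hat, Sigma_hat
```

In the general case, where $p$ must be estimated jointly with $(\mu, \Sigma)$, no such closed form exists and the full MLE optimization discussed in the rebuttal is needed.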
Universal Approximation Theorem of Deep Q-Networks
Accept (poster)
Summary: The paper purports to provide approximation guarantees for Deep Q-networks (Theorem 3.1) for the optimal action-value function of certain control problems. The paper is imprecise in various parts. For instance, in the statement of Theorem 3.1, $L$ is lower-bounded by $C(\epsilon,T)/\epsilon$. What is the point of the denominator explicitly depending on $\epsilon$ if the "constant" is not a constant but actually also depends on $\epsilon$? Most troubling of all is the (apparent) claim of the authors that the dimension $(n)$ has absolutely no effect on the number of parameters required for their model... The paper is not rigorous and the claims are hard to believe. Further, it seems that large stretches are written by LLMs. Claims And Evidence: Non-rigorous and hard to believe as is. Methods And Evaluation Criteria: Questionable Theoretical Claims: Proofs are not very rigorous, wordy, and possibly LLM generated in long stretches. Experimental Designs Or Analyses: NA Supplementary Material: Not well-polished, wordy, and hardly rigorous. Relation To Broader Scientific Literature: Very few connections to well-known approximation theory papers for deep learning. Essential References Not Discussed: NA Other Strengths And Weaknesses: Very poorly written and non-rigorous. Other Comments Or Suggestions: NA Questions For Authors: Why should the number of network parameters not depend on the dimension? Especially, I would expect an approximation rate of $\mathcal{O}((T/\varepsilon)^{n})$ or so; but we do not see such a thing. How can this be? Ethical Review Concerns: I think various stretches of the text are written carelessly with LLMs. For instance, line 971 reads "the *unique* fixed"; these asterisks are typical outputs from GPT where italics should be; so the authors probably not only used LLMs but likely copy-pasted directly.... Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We certainly acknowledge that the reviewer poured time into this. That being said, honesty compels the expression of profound disappointment regarding both the fundamental rigor and the overall professional conduct reflected in the assessment provided. Frankly, the feedback gives off a strong vibe of having resulted from merely flicking through the pages of our manuscript. The dependability of the LLM detection tool employed by the reviewer is, broadly speaking, something we find questionable. Consider this scenario: feed a mathematics or statistics paper from arXiv, one published pre-2010, into such a detector --- isn't it probable that a high `AI-generated' score would result? These tools currently exhibit difficulty differentiating human versus machine authorship in mathematical contexts, precisely because mathematical expression possesses an intrinsically organized and predictable nature. 1. We address the reviewer's comment that the condition $L > C(\epsilon, T)/\epsilon$ in Theorem 3.1 is imprecise due to $C$'s dependence on $\epsilon$. This critique, frankly, misinterprets established conventions in approximation theory. It's entirely standard for constants, such as the $C$ here, within convergence rate expressions to depend on parameters governing the approximation setup -- accuracy ($\epsilon$), problem characteristics (like smoothness bounds, not explicitly mentioned but relevant), or domain properties (here, $T$). The notation $C(\epsilon, T)$ is used precisely to make this dependence explicit, which is a sign of rigor, not imprecision. The core information conveyed is the scaling behavior with respect to the primary parameter of interest, $\epsilon$. In our case, the inequality $L > C(\epsilon, T)/\epsilon$ clearly indicates that the necessary network depth $L$ scales as $O(1/\epsilon)$, once dependencies on other parameters ($T$ and potentially $\epsilon$ itself, within $C$) are factored into the "constant" term.
Questioning the "point" of this notation suggests unfamiliarity with how convergence rates are typically expressed and analyzed in the field. The statement accurately isolates the $1/\epsilon$ scaling while properly acknowledging that the proportionality factor $C$ isn't universal. 2, Turning to the second issue raised -- the dimensions and parameter count: (1) Implicit Dependence: It's true that universal approximation theorems fundamentally promise that \emph{some} network exists for any desired accuracy $\epsilon > 0$, at least on compact sets. Our rate statement back in Theorem 3.1 does focus squarely on this $\epsilon$. However, we haven't ignored the fact that the `constants` involved in these guarantees aren't just numbers pulled out of thin air -- they absolutely depend on other factors like the problem's dimension, Lipschitz constants, and so on. We actually dig into this directly in the paper. If you look at Appendix A.4 (specifically the proof of Lemma 2.4, check out lines 549-614), you'll see how we derive the constants $C_1, C_2, C_3$ (around lines 570-580). These constants are key because they help define the compact set $K_R$ where our approximation works with high probability. And as the derivation clearly shows, they rely on the dimension $n$, various Lipschitz constants ($L_h, L_\sigma, K$), and the time horizon $T$. So, naturally, the main constant $C(\epsilon, T)$ that appears in Theorem 3.1 inherits these dependencies from the underlying setup. It's not just about $\epsilon$ and $T$ in isolation. (2) Focus of Theorem 3.1: Let's be clear about what Theorem 3.1 does: it guarantees we can build a DQN that gets within $\epsilon$ of the target function, provided we stick to a specific important region, the compact set $K_R$. Yes, high dimensions make approximation tough---the curse of dimensionality is real---but it's pretty standard to state the main result in terms of the error $\epsilon$. 
Figuring out exactly how the network size needs to grow exponentially with the dimension $n$ is important, sure, but it's a separate, much harder problem than just showing existence and getting the $\epsilon$ rate right. To prove our point, we rely on a few building blocks: ResNets' power to approximate basically anything (Lemma 2.2, thanks to Li et al. 2022), the trick of approximating several things at once (Lemma 2.3), and large deviation results to keep our focus on the relevant $K_R$ (Lemma 2.4). The dimensional dependence isn't explicitly spelled out in the main rate, but it's hidden in the complexity required by those first two lemmas and the size of the region defined by the third. (3) No Claim of Dimension-Free Parameters: The reviewer's expectation of an explicit $\epsilon^{-n/d}$ term in the theorem statement misinterprets the typical scope of such existence results versus quantitative complexity bounds. This review is deeply flawed. It misinterprets standard theoretical notation, overlooks details presented in the paper, and resorts to unprofessional accusations.
Summary: 1. This paper studies a universal approximation theorem for deep Q-networks using residual blocks. The paper first connects the SDE representation of the viscosity solution of the HJB equation, which can then be approximated by a residual network. 2. The authors consider a continuous-time Markov decision process. The state space is an open set and the action space is a compact set (Assumption 1). 3. The main argument relies on the universal approximation theorem using residual blocks by Li et al. **Update after rebuttal** I increase my score from weak reject to weak accept. The UAT in the context of RL is new and is a clear contribution in the field. Nonetheless, due to the lack of technical novelty -- the main results rely heavily on existing literature -- I only recommend weak acceptance. Claims And Evidence: 1. Assumption 3.1 (ii): Boundedness of the target is not usually a common assumption. It is often the problem of boundedness that makes it difficult to apply the ODE argument. Methods And Evaluation Criteria: This is a theoretical paper. Theoretical Claims: The theoretical claims seem to be solid. Experimental Designs Or Analyses: This is a theoretical paper. Supplementary Material: I have checked the proofs in the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Some related works have not been discussed. For example, the Q-function being a unique viscosity solution has been studied in [1]. [1] Kim, Jeongho, and Insoon Yang. "Hamilton-Jacobi-Bellman equations for Q-learning in continuous time." Learning for Dynamics and Control. PMLR, 2020. Other Strengths And Weaknesses: **Strength:** Universal approximation theorems in the field of RL have not been explored in detail, and the paper proves a universal approximation theorem of DQN using a residual block. **Weakness:** The main universal approximation theorem (UAT) argument follows from the UAT of Li et al. 
and the SDE analysis of Fleming et al., which makes the novelty of the analysis questionable. Other Comments Or Suggestions: I would recommend adding some background on forward-backward SDEs. Questions For Authors: 1. The context around the assumptions seems to be insufficient. For example, can the authors provide more context on Assumptions 2.2-2.4? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Here's how we'll handle the points you raised, including the specific manuscript changes. We'll add clarification on this in the revision. 1, Boundedness assumptions are frequent in stochastic approximation theory. They are vital for ODE-based convergence proofs (see Kushner \& Yin, 2003). Such assumptions simplify stability analysis needed for convergence. This might seem limiting. However, for our problem, it is well-grounded. \assumption{2.2}(i) ensures bounded rewards \(r\). The discount \(\gamma\) is less than 1. Also, \assumption{2.4}(i) keeps parameters \(\theta_k\) within a compact set \(\Theta\). Therefore, the term maximizing \(Q^{\theta_k}\) is bounded. This leads to bounded DQN outputs on the relevant compact state-action spaces. 2, Okay, so why do we need this assumption? It's pretty important. It basically makes sure the update steps in the stochastic scheme from \equationref{eq:25} don't just blow up. Plus, it lets us use the standard toolbox of convergence theorems (like that ODE method the reviewer mentioned) to actually prove \theorem{thm:3.2} (check out \appendixref{app:A.8} for the proof). We'll dig into why our specific proof approach really needs this, and how it connects back to things like \(r\) being bounded, \(\gamma\) staying below 1, and \(\Theta\) being a compact set. Regarding the absent Kim \& Yang (2020) citation: we'll put it in the Introduction's related work (\sectionref{1}\seclabel{1}) and the main reference list. We will also add a short note explaining that our study's unique angle involves deep function approximation and establishing corresponding approximation/convergence guarantees for DQNs. Regarding Novelty: We aren't claiming new fundamental theory like UAT or SDEs. The value we add comes from \textit{putting these pieces together in a new way}. Our effort centered on creating a robust framework designed for \textit{continuous-time Deep Q-Networks}. 
The novel insight is the bridge we built connecting ideas from continuous-time control (think FBSDEs, viscosity solutions) with the nuts and bolts of today's deep RL systems (like ResNet-based DQNs). Making this connection allows the following results: 1, Formally interpret the discrete layer updates of a ResNet-based DQN as an Euler-Maruyama type discretization of an underlying continuous-time process (\remark{2.1}, \sectionref{2.2}). 2, Prove that such DQNs can universally approximate the optimal Q-function (itself potentially non-smooth, hence the use of viscosity solutions) on relevant compact sets with high probability, leveraging ResNet UAT and large deviation bounds (\theorem{3.1}, \lemma{2.1}, \lemma{2.3}). The simultaneous approximation result (\lemma{2.3}) is a necessary technical step tailored to our SDE context. 3, Analyze the convergence of a continuous-time Q-learning algorithm for training these DQNs, adapting stochastic approximation theory to this specific setting (\theorem{3.2}, \appendixref{A.8}). Regarding Questions: 1, Assumption 2.2 (MDP Regularity): Essentially, Assumption 2.2 brings in some standard technical machinery from SDEs and optimal control \citeplaceholder{Fleming & Soner, 2006, Mao, 2007}. The reason for the Lipschitz and linear growth conditions on $h$ and $\sigma$ is simply to make sure the state equation (1) behaves predictably (i.e., has a unique solution). Bounding the reward $r$ is also pretty standard fare in RL work. Putting these together lets us confirm the whole control setup makes sense, guarantees the HJB equation behaves correctly (has viscosity solutions, see Remark 2.3), and allows us to use tools like large deviations (Lemma 2.4). 2, Assumption 2.3 (Q-function Regularity): Assuming the optimal Q-function $Q^*$ is continuous and a viscosity solution to the HJB equation (Eq. 
22) is standard when dealing with HJB equations in optimal control, particularly when classical $C^{1,2}$ differentiability may not hold (\citeplaceholder{Bardi & Capuzzo-Dolcetta, 2008}, \citeplaceholder{Fleming & Soner, 2006}). This allows us to rigorously analyze the properties of $Q^*$ and compare it with our DQN approximation $Q^\theta$. Lipschitz continuity of the terminal condition $g$ is also a standard technical assumption. 3, Assumption 2.4 (DQN Parameters): 1, let's talk about the parameter space $\Theta$. We usually assume this space is compact. That's pretty standard stuff in theory proofs because it basically keeps the network's weights and biases from going off the rails. Having these bounds is handy for stability, and it also makes sure the $Q^\theta$ values and their gradients stay contained, which you need when proving things converge. 2, the activation function $\eta$: it's normally assumed to be Lipschitz continuous and non-linear. Again, this is typical for neural net theory. The Lipschitz part stops the outputs or gradients from blowing up. The non-linear part is key – without it, the network can't learn complex stuff (that's the whole universal approximation idea). --- Rebuttal Comment 1.1: Comment: Thank you for the detailed comments. In terms of non-linear architecture, the boundedness assumption in the neural network (NN) setting may be frequent. But this is not the case for the general SA literature. It has been one of the key assumptions that recent research has focused on, and it makes it difficult to ensure convergence of Q-learning and TD-learning [1,2]. Therefore, I would recommend adding some comments on whether the boundedness assumption (Theta being compact) has been used in other literature for proving convergence of Q-learning with NNs. [1] Borkar, Vivek S., and Sean P. Meyn. "The ODE method for convergence of stochastic approximation and reinforcement learning." SIAM Journal on Control and Optimization 38.2 (2000): 447-469. [2] Meyn, Sean. 
"The projected Bellman equation in reinforcement learning." IEEE Transactions on Automatic Control (2024). --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer raising this important point regarding Assumption 2.4(i), which states that the DQN parameter space $\Theta$ is a compact subset of $\mathbb{R}^p$. We agree that in the general theory of Stochastic Approximation (SA), as established in seminal works like \citep{Borkar2000} (Ref [1] provided by the reviewer) and discussed in recent analyses \citep{Meyn2024} (Ref [2]), assuming parameter boundedness is a strong condition, and significant research effort focuses on relaxing it or analyzing projection algorithms. However, when analyzing the convergence of Reinforcement Learning algorithms, particularly Q-learning, combined with powerful non-linear function approximators like Deep Neural Networks (DNNs), the compactness assumption on the parameter space (or closely related assumptions like bounded parameter norms, bounded gradients, or the use of explicit projection mechanisms) becomes a common, often necessary, simplification in the theoretical literature. Here's why this assumption is frequently adopted in the context of DQNs and why we included it: 1, Deep networks introduce tricky non-linear behaviors. Without limits, parameters could fly off during training, causing instability. The compactness idea keeps parameters penned in (bounded). This bounding really helps nail down the stability analysis needed to prove convergence. Many theories rely on showing things like gradients stay controlled, which compactness readily helps guarantee. 2, Compactness really helps with the theoretical side. It unlocks strong analytical results. For example, if a function is continuous (like Q-values depending on parameters $\theta$ in a compact set $\Theta$), we know it must hit a maximum and minimum value, and it's uniformly continuous. 
This makes proving things like uniform convergence much simpler because terms that show up (like gradients or ensuring uniform Lipschitz properties) are easier to bound across the parameter space. 3, The idea that parameters should stay within a bounded, compact set is pretty common in theoretical work on Q-learning or TD learning using neural networks. Sometimes this is stated directly, but often it's a side effect of other assumptions. For instance, limits on network design, adding weight decay, using gradient clipping, or analyzing algorithms that use projection steps all help keep parameters from growing too large. Looking at specific examples, even when a study like Fan et al. (2020) (which we cite) isn't focused primarily on parameter bounds, the stability their analysis requires often implicitly depends on parameters not exploding. Likewise, much of the foundational theory for RL with non-linear function approximation, building on work like \citep{Tsitsiklis1996}, typically needs parameters to be bounded to make the proofs work out rigorously. Using projection operators, as discussed in stochastic approximation literature \citep{Kushner2003}, is a standard way to formally force parameters into a compact set. Our approach doesn't explicitly model projection, but by assuming $\Theta$ is inherently compact, we achieve a similar effect for the analysis by ensuring the parameters are bounded. 4, Our paper's primary focus is establishing the connection between continuous-time DQNs, FBSDEs, and viscosity solutions, and proving approximation and convergence results within this novel framework. 
Retaining the standard assumption of parameter compactness allows us to rigorously handle the complex interplay between the continuous-time stochastic dynamics, the HJB equation (in the viscosity sense), the NN approximation properties, and the SA convergence analysis, without adding the further complexity of analyzing unbounded parameters or explicit projection schemes in this already intricate setting. So, while we know that not requiring boundedness is a big deal in general SA theory, it's very common in deep RL research to just assume the parameters don't go wild (parameter compactness, Assumption 2.4(i)). People do this because it makes the analysis much cleaner: it helps ensure things don't blow up (stability) and lets us use the math we need. We think this assumption is a fair starting point for the continuous-time work we've done. Dealing with the case where parameters aren't bounded (like adding projection steps or finding networks that stay stable anyway) is definitely something important to look into later for continuous-time DQN. We will add a comment in the revised manuscript clarifying this context for Assumption 2.4(i), acknowledging the general SA perspective while justifying its use based on common practice in NN-RL theory and the needs of our specific analysis framework. Kushner, Harold J., and George G. Yin. Stochastic approximation and recursive algorithms and applications. Springer Science \& Business Media, 2003. Tsitsiklis, John N., and Benjamin Van Roy. "Analysis of temporal-difference learning with function approximation." NIPS (1996).
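The projection mechanism discussed above can be sketched in a few lines; this is a generic projected-update sketch under our own illustrative names (`project`, `R_THETA`), not code from the paper or its references.

```python
import numpy as np

R_THETA = 10.0  # radius of the compact parameter set Theta (illustrative)

def project(theta, radius=R_THETA):
    """Project theta onto the Euclidean ball of the given radius,
    a standard way to force iterates back into a compact set."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def projected_update(theta, grad, lr=0.01):
    """One projected stochastic-approximation step:
    gradient update followed by projection onto Theta."""
    return project(theta - lr * grad)

# A deliberately large gradient would push theta out of Theta;
# the projection step keeps the iterate bounded.
theta = np.array([3.0, 4.0])
theta = projected_update(theta, np.array([-2000.0, 0.0]))
```

With an explicit projection like this, boundedness is enforced by the algorithm; assuming $\Theta$ compact, as in Assumption 2.4(i), achieves the same effect at the level of the analysis.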
Summary: This paper introduces a connection between deep Q networks and SDEs. By viewing the forward pass as a continuous-time process and using tools from stochastic control theory, the paper provides results on approximation theorems for deep Q networks. Claims And Evidence: Claim (i) on page 1: Evidence is given in the remainder of the paper through a sequence of results and discussions. Claim (ii) on page 1: The main claim seems to be an approximation theorem for the optimal Q function, which appears sound but rests on fairly strict assumptions A2.1-2.4. Claim (iii) on page 1: Evidence is given at the end of the paper for such convergence properties, analogous to the discrete-time case. Methods And Evaluation Criteria: N/A There do not seem to be any numerical experiments supporting the theory. Even a small toy example could greatly improve the paper's impact, especially by testing the stringency of the assumptions made. Theoretical Claims: I have briefly checked the proofs in the Appendix, which appear sound. Given the time constraints, I will continue to do so more carefully in the following days. However, I must admit that SOC theory is not my expertise. Experimental Designs Or Analyses: N/A Supplementary Material: Yes, please see above. Relation To Broader Scientific Literature: Most prior work seems to approach the problem through the discrete-time lens, but the connection to continuous time can potentially offer new insights based on new tools. The use of viscosity solutions does offer a new approach that relaxes prior assumptions. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The approach to the problem via continuous-time dynamics does seem novel and interesting. Another strength is the use of deep, non-linear networks. However, I am unsure about how realistic the setup is: How many layers are required to achieve a decent approximation? 
Other Comments Or Suggestions: There seems to be very little discussion between intermediate results (Lemma 2.X). Providing some discussion for the relevance of each result and its connection to the bigger picture/main results of the paper would be helpful. Missing page numbers: was the correct style file used? Questions For Authors: The paper seems to focus on DQN setups, but discusses continuous control problems (continuous action spaces), which does not seem to accord with a standard DQN architecture. Could you explain this discrepancy? Why finite-time horizon as opposed to an infinite-time discounted formulation? It seems you jump to this setting later on. Assumption 2.2: $r(t,s,a)$: Why consider time-dependent rewards? I'm not sure this is a common assumption, even in the finite-time case. Does it induce any non-stationarity? I'm a bit confused. Does the stochastic process (corresponding to the pass through a Q network) also correspond to the dynamics of the MDP? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 5CGE for their constructive feedback and positive assessment. We address the comments below. Lack of Numerical Experiments: We thank the reviewer. While the paper's focus is theoretical, we agree an illustrative example is valuable. We have implemented a discrete-time simulation using a DQN with the proposed residual block architecture (similar to Definition 2.1) learning a continuous-state control task (approximating Eq. 1). This experiment demonstrates the feasibility of the architecture and provides context for the theoretical results (e.g., Theorem 3.1 on approximation, Theorem 3.2 on convergence). We will add a concise summary of this experiment (including comparative runs varying parameters like residual blocks) to the final version or appendix to bridge theory and practice. Specifically, our experiment (as demonstrated in the illustrative code discussed previously) features: 1, A Continuous-State Environment: A 1D control task where the state evolves according to discrete-time dynamics simulating an Euler-Maruyama approximation of an SDE ($ds_t = h(s_t, a_t)dt + \sigma dW_t$, similar to Eq. 1). 2, DQN with Residual Blocks: The Q-network architecture explicitly incorporates residual blocks, directly reflecting the structure analyzed in our work (Definition 2.1, Eq. 2-3, Lemma 2.2/A.3). 3, Q-Learning Algorithm: The agent learns using standard DQN mechanisms (Experience Replay, target network), providing a practical counterpart to the continuous-time convergence analysis (Theorem 3.2, Eq. 25). Please find the code at an anonymous link: https://github.com/ContinuousTimeDQN/continuous_time_DQN.git Assumptions A2.1-2.4: We appreciate the reviewer finding the approximation theorem sound. 
Assumptions 2.1-2.4 are largely standard for rigorous analysis in continuous-time control and function approximation: A2.1 (spaces) for well-posedness; A2.2 (MDP regularity: Lipschitz/growth) for SDE/HJB well-posedness; A2.4 (DQN params: compact $\Theta$, Lipschitz $\eta$) for stability/approximation. Notably, A2.3 (viscosity solution for Q*) is a \textit{relaxation} compared to requiring classical smoothness, broadening applicability. Practicality and Layer Requirements: Theorem 3.1 shows a deep enough network exists ($L \propto 1/\epsilon$) for any target accuracy $\epsilon$. But finding the \textit{exact} $L$ needed? That's typically done by experimenting or requires problem-specific math beyond what we cover here. The key takeaway is that depth ($L$, controlling the time step $\Delta t=T/L$) is what makes approximation possible in our continuous-time framework. Discussion Between Lemmas: Thank you for the suggestion. We will add brief transitional text after Lemmas 2.1-2.4 in the revision to clarify how they logically connect and build towards the main approximation result (Theorem 3.1), improving readability. Missing Page Numbers: Page numbers corrected in final version per ICML style. Continuous Actions vs. DQN Focus: Think about what a DQN can fundamentally learn. We're exploring its ability to represent the target value function $Q^*(t, s, a)$, even if actions $a$ come from a continuous set $A \subseteq \mathbb{R}^m$ (like in HJB/FBSDE setups). Our network architecture uses $(s, a)$ as inputs. Don't mix this up with how typical DQNs find the best action---they often chop the action space into pieces ($\max_{a'} Q^\theta(s, a')$). Our interest is purely in how well the network approximates the underlying continuous $Q^*$. Finite vs. Infinite Time Horizon: The framework consistently uses a finite horizon $[0, T]$, driven by the standard formulation of time-dependent HJB equations and FBSDEs which involve terminal conditions (at $T$). 
The discount factor $\gamma$ acts as a continuous-time rate within this finite horizon. We do not switch to an infinite-horizon setting. Time-Dependent Rewards Assumption: Allowing time-dependence in $r(t,s,a), h(t,s,a), \sigma(t,s,a)$ offers generality, is standard in finite-horizon optimal control theory (e.g., Fleming & Soner, 2006), and allows our framework to handle non-stationary problems where the optimal policy $\pi^*(t,s)$ depends explicitly on time. Time-invariant settings are a special case. Let's clarify the roles here. MDP dynamics, detailed in Eq.~1, map out how the environment's state $s_t$ evolves -- think of it as the rules of the game world. The DQN's forward pass (Eq.~2), on the other hand, is the network's calculation process: starting with an input, it propagates activations $x_k^{(l)}$ layer-by-layer to arrive at a $Q^\theta$ value. These aren't the same process. The vital link (explained in Lemma 2.1) is that the network's learned functions ($h_{\theta_l}$) can effectively mimic the environment's transition and observation functions ($h, \sigma$). This mimicry allows the computed $Q^\theta$ to be a good estimate of the ideal $Q^*$. So, remember: the forward pass outputs a Q-value; it doesn't simulate environment steps.
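The layer-by-layer propagation described above can be sketched as follows; the drift `h` and diffusion `sigma` here are toy stand-ins for the learned layer maps ($h_{\theta_l}$), chosen by us purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 1.0, 100          # time horizon and network depth; dt = T / L
dt = T / L

def h(x):
    """Toy drift standing in for a learned residual-layer map."""
    return -x            # simple mean-reverting drift, illustrative only

def sigma(x):
    """Toy (constant) diffusion coefficient."""
    return 0.5

def forward_pass(x0):
    """Propagate activations layer by layer: each residual block acts as
    one Euler-Maruyama step of dx = h(x) dt + sigma(x) dW."""
    x = x0
    for _ in range(L):
        dW = rng.normal(0.0, np.sqrt(dt))
        x = x + h(x) * dt + sigma(x) * dW
    return x

x_T = forward_pass(2.0)  # the final activation, not a simulated environment state
```

Increasing the depth $L$ shrinks the step $\Delta t = T/L$, which is the sense in which depth controls approximation accuracy in this framework.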
Summary: This paper develops a theoretical framework for Deep Q Networks (DQNs) in continuous time, by establishing connections among DQN, residual neural networks, stochastic control, and forward-backward SDEs. It links the neural network output at each layer to the Euler discretization of an SDE that models the state evolution in a continuous-time Markov Decision Process (MDP). Also, using the Hamilton-Jacobi-Bellman (HJB) equation and viscosity solutions, the Q function is shown to be the solution to a backward SDE. Leveraging the universal approximation property of deep residual networks, the analysis shows that DQN can approximate the optimal Q function arbitrarily well under mild assumptions. The framework further establishes that a DQN trained by Q-learning asymptotically converges to the optimal Q function by adapting stochastic approximation results. Overall, the paper provides a rigorous theoretical framework for understanding the representational power and the convergence of DQN in the continuous-time setting. Claims And Evidence: It’s a theory paper and the claims are well supported by standard assumptions, established results, and proofs. Methods And Evaluation Criteria: Analyzing DQN in a continuous-time framework makes sense. Theoretical Claims: Yes, I checked all proofs and did not identify issues. Experimental Designs Or Analyses: The analysis is sound and appears to be a general framework as the assumptions are standard in RL, stochastic control, and stochastic approximation. Supplementary Material: I did not find any supplementary material. Relation To Broader Scientific Literature: It bridges reinforcement learning, deep neural networks and stochastic control. It then draws on the tools and classical analysis from these fields to prove the universal approximation and convergence property of DQNs. This connection advances an understanding of DQNs that is otherwise difficult to gain. 
Essential References Not Discussed: Modelling the transformation between NN layers as an SDE has been done before but not discussed in the paper. For instance: Kong, L., Sun, J., & Zhang, C. (2020). Sde-net: Equipping deep neural networks with uncertainty estimates. *arXiv preprint arXiv:2008.10546*. I suggest that the authors add a discussion of the relevant works in the neural SDE literature. Other Strengths And Weaknesses: **Strengths:** Viewing deep networks via the lens of stochastic differential equations is not new, but this paper explores its application to deep RL, which is novel to the best of my knowledge. **Weakness:** Although the paper is rich in detail, the writing of this paper sometimes makes the technical content difficult to follow. For instance, the assumptions, equations, etc. are referenced before their definitions: Assumption 2.2 is referenced on line 110 but isn’t defined until line 339. Definition 2.2 refers to HJB Equation (20) on line 177, which is not defined until Line 284. Lemma 2.2 is stated before Lemma 2.3 but its proof in the appendix is stated after that of Lemma 2.3. Additionally, the analysis on the forward and backward process could be stated in distinct sections to improve clarity. Other Comments Or Suggestions: - The term “immediate reward” could be easily confused with the reward in the discrete-time setting. I think that “Instantaneous reward” is a more precise term for the continuous-time setting. - The title in the submission is different from the one in the paper pdf. The former says “Universal Approximation Theorem of Deep Q-Networks” while the latter is “A Continuous-Time Framework of Deep Q-Networks”. **Typos:** 1. Line 382 (Left), “We suggests” → suggest 2. Theorem 3.1, “and 2.1” it is redundant Questions For Authors: 1, Could you discuss the limitations of this work? 2, Maybe relevant to question 1: are there assumptions or techniques that you feel could be improved or relaxed? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the detailed comments regarding clarity, refs, terms, etc. We address each below and will revise the paper accordingly. Writing Clarity / Organization (Forward Referencing): We'll revise so all assumptions, definitions, equations, and lemmas are defined \textit{before} use. The specific cases mentioned will be fixed. We'll also reorder the Appendix A.2/A.3 proofs to match the Lemma 2.3 / 2.2 order in the text. Suggestion: To make things easier to follow in Sections 2 and 3, we plan to split the discussion. We'll start by just covering the forward state dynamics (SDE (1)) and its characteristics. Once that's established, we'll introduce the Q-value aspect and show its connection to the backward elements (BSDE (17), HJB Equation (22)). Breaking these into their own subsections should really help with readability. Missing References (Neural SDE Literature): We will add commentary connecting to Neural SDE research (including Kong et al., 2020). The idea of SDEs modeling transformations offers a conceptual bridge. Yet, our work differs substantially: (1) We concentrate on \textit{controlled} dynamics. (2) We derive and leverage the backward SDE (BSDE) specifically for the Q-function. (3) We utilize viscosity solutions for the relevant HJB equation. (4) Our application target is the analysis of DQN within RL, particularly its approximation and convergence properties. Use code with caution. Response to Other Comments Or Suggestions: Terminology ("immediate reward"): We will revise the manuscript to use "instantaneous reward" consistently when referring to $r(t, s, a)$ (e.g., lines 110-111, line 289). Title Mismatch: The other title was likely an earlier working title or resulted from an entry error in the submission system. We sincerely apologize for the confusion caused by the title mismatch between the submission system and the PDF manuscript. 
The correct and intended title is \textbf{"A Continuous-Time Framework of Deep Q-Networks"} as it appears in the PDF. The other title was likely an earlier working title in the submission system. We will ensure the correct title is used consistently in all future versions and communications. Could you discuss the limitations of this work? Our work has several limitations: 1, Assumptions: The analysis relies on standard but potentially restrictive assumptions, such as Lipschitz continuity for dynamics ($h, \sigma$) and reward ($r$), compactness of the action space $A$ and parameter space $\Theta$, linear growth conditions, and ergodicity of the sampling process for convergence (Assumption 3.1). Real-world problems might violate these. 2, Specific DQN Architecture: While connecting DQN layers to Euler steps is central, our analysis implicitly assumes a ResNet-like architecture (Eq. 2) where layer updates resemble SDE discretizations. The direct applicability to vastly different architectures might need further investigation. 3, Finite Time Horizon: The framework is developed for a finite time horizon $T$. Extending the results rigorously to infinite-horizon or average-reward settings would require different techniques (e.g., ergodic HJB equations, different stability conditions). We will add a dedicated subsection in the Conclusion or Discussion section outlining these limitations and suggesting directions for future research. Are there assumptions or techniques that you feel could be improved or relaxed? Several assumptions offer avenues for future work, although relaxing them presents significant technical challenges: 1, Lipschitz Continuity: Relaxing the Lipschitz assumptions on $h, \sigma, r$ would broaden applicability but requires more advanced SDE/PDE theory (e.g., dealing with potential state explosion, using path-dependent PDEs, or weaker solution concepts). 
2, Ergodicity for Convergence: The ergodicity assumption (Assumption 3.1(i)) for Q-learning convergence is strong. Exploring convergence under weaker mixing conditions or convergence in probability (rather than almost surely) might be possible using different stochastic approximation frameworks, potentially at the cost of stronger assumptions elsewhere or weaker convergence guarantees. 3, Discretization Scheme: We connect DQNs to the Euler-Maruyama scheme. Exploring connections to higher-order SDE discretization schemes could be interesting but would likely require more complex network architectures and significantly more involved analysis. We will briefly touch upon potential relaxations and the associated challenges within the limitations/future work discussion. We again thank Reviewer 7pHN for their valuable feedback. We hope that the revised paper will be considered suitable for publication at ICML. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. "Use code with caution" at the end of your response regarding the missing references appears disconnected from the rest of the paragraph, and confusing to me. If it isn't an editing oversight, could you please clarify what it refers to? --- Reply to Comment 1.1.1: Comment: Please find the code at the anonymous link: https://github.com/ContinuousTimeDQN/continuous_time_DQN.git This code provides an example for illustrating the arguments. It seems that the space limitation has removed the code link.
Automatically Interpreting Millions of Features in Large Language Models
Accept (poster)
Summary: The paper presents an automated pipeline for generating and evaluating natural language interpretations of SAE latents in LLMs. The authors introduce five scoring techniques—Detection, Fuzzing, Surprisal, Embedding, and Intervention Scoring—to assess interpretation quality, with Intervention Scoring evaluating the causal effects of latents on model outputs. They test their framework on SAEs trained on two open-weight LLMs and find that different sampling and scoring strategies impact the specificity and generalizability of interpretations. Claims And Evidence: Claim: "The pipeline improves upon previous interpretability work by producing higher-quality interpretations." The paper does not conduct rigorous human evaluations of interpretation quality. While the scoring metrics provide an automated way to assess interpretations, they are correlation-based and may not align with actual human judgment. The paper references some small-scale human evaluations but does not systematically compare their method to existing ones using expert annotations. Methods And Evaluation Criteria: Yes Theoretical Claims: None Experimental Designs Or Analyses: The experimental design is well-structured. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper extends prior work on SAE-based feature extraction by scaling interpretations to millions of latents. It refines simulation-based scoring with five new metrics. While the pipeline improves scalability and efficiency, it offers incremental advances rather than fundamentally new interpretability methods. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: (1) The paper successfully scales LLM-based interpretability methods to millions of SAE latents, a good engineering contribution. (2) The proposed scoring methods, especially Detection and Fuzzing, offer computationally cheaper alternatives to simulation scoring. (3) code is provided with a detailed readme file. 
Weaknesses: (1) The core methodology builds on existing LLM-based interpretability and SAE approaches, offering mostly incremental improvements. (2) The evaluation relies heavily on automated metrics without systematic human assessment of interpretation quality. (3) Surprisal and Embedding scores have low correlation with human judgment, making their usefulness unclear. Other Comments Or Suggestions: None Questions For Authors: (1) There is no systematic way to verify correctness, as LLM-generated explanations may sound plausible but not reflect actual latent function. (2) Why were Surprisal and Embedding scores included despite their weak correlation with human evaluations? Do you see specific scenarios where these scores are more useful than Fuzzing, Detection, or Simulation? Code Of Conduct: Affirmed. Overall Recommendation: 3
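Detection scoring, which this review credits as a computationally cheaper alternative to simulation scoring, is described as asking a judge model whether an interpretation lets it tell activating from non-activating examples. A minimal toy sketch of that scoring loop follows; the `keyword_judge` is a hypothetical stand-in for an LLM call, and the example data is made up, so this illustrates only the scoring arithmetic, not the paper's actual pipeline:

```python
def detection_score(judge, interpretation, examples):
    """Detection-style scoring: show the judge an interpretation and a mix of
    activating and non-activating examples; the score is the judge's accuracy
    at telling them apart."""
    correct = 0
    for text, truly_activates in examples:
        predicted = judge(interpretation, text)  # judge returns True/False
        if predicted == truly_activates:
            correct += 1
    return correct / len(examples)

# Toy judge: guesses the latent fires whenever a word from the interpretation
# appears in the example text (a crude stand-in for an LLM judge).
def keyword_judge(interpretation, text):
    return any(word in text.lower() for word in interpretation.lower().split())

examples = [
    ("the chair was comfortable", True),
    ("a wooden chair in the corner", True),
    ("the stock market fell today", False),
    ("rain is expected tomorrow", False),
]
print(detection_score(keyword_judge, "mentions of chair", examples))
```

A stronger judge model simply replaces `keyword_judge`; the score stays a plain accuracy in `[0, 1]`, which is what makes it cheap to compute at scale compared with token-level simulation.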
Rebuttal 1: Rebuttal: We agree with this comment by the reviewer. Our generated interpretations only interpret the activations of individual latents, and are far from full explanations of their behaviour and downstream impact. Intervention-based methods, like the one we proposed, are the ideal candidates to probe the causal role of latents. We believe that a combination of both is the most informative. > (2) Why were Surprisal and Embedding scores included despite their weak correlation with human evaluations? Do you see specific scenarios where these scores are more useful than Fuzzing, Detection, or Simulation? We propose surprisal as a seemingly natural method to score interpretations, with the hope that it inspires future work in that direction, and that future work is aware of the limitation of this technique. Although we probably won’t use surprisal scoring much in the future, we believe it to be valuable for the community to know that it was tried. On the other hand, embedding scoring is much faster and cheaper than all the other techniques, even though such comparisons are less apples-to-apples than the comparison of fuzzing/detection with simulation. This is because not only is the number of input tokens lower, but the embedding model used can be as small as 700M parameters. This makes for a very compelling reason to use embedding scoring, if one wants to quickly evaluate the interpretability of latents, potentially on the fly while training. The fact that latents with low embedding scores are generally low scoring in fuzzing/detection can be used to filter out incorrectly interpreted latents or find latents which are generally not interpretable.
Summary: This paper introduces an automated pipeline for interpreting the latent features of sparse autoencoders (SAEs), which decompose large language model (LLM) representations. The authors propose five scoring methods to assess interpretation quality, including detection, fuzzing, surprisal, embedding, and intervention scoring. The framework is tested across different SAE architectures and LLMs, and the results suggest that the proposed scoring methods are computationally cheaper while maintaining strong correlations with human evaluations. Claims And Evidence: The authors claim that the proposed methods provide a more scalable and efficient interpretation method while capturing causal effects through intervention scoring. However, the largest problem is that the paper only shows the results for the proposed method, without a systematic comparison with existing methods that have already been discussed in the related works. Methods And Evaluation Criteria: The proposed scoring methods make sense for the interpretability problem at hand. Again, although the experiments cover different SAEs, LLMs, and datasets, the absence of a detailed baseline comparison weakens the empirical evaluations. Theoretical Claims: The work does not include theoretical guarantees or formal proofs, which is understandable given its empirical focus. Experimental Designs Or Analyses: See *Claims and Evidence* and *Methods and Evaluation Criteria.* Supplementary Material: Yes. The supplementary material provides useful details on experimental setup and implementation. Relation To Broader Scientific Literature: The paper builds on prior work in LLM interpretability. Essential References Not Discussed: The paper covers the most relevant references but lacks experimental comparisons with them. Other Strengths And Weaknesses: Strengths: 1. The paper is well-organized and easy to follow. 2. The proposed method is intuitive and technically sound. Weaknesses: 1. 
Lack of direct numerical comparison with existing methods. 2. Following 1, it is unclear how much interpretability quality improves or is sacrificed for scalability. 3. Missing the analysis of failure cases. It is important to be aware of the safety boundary of the proposed method in implementation. Other Comments Or Suggestions: I strongly recommend the authors provide direct numerical comparisons between new and existing scoring techniques to clarify performance gains. I’m willing to raise my score if such a comparison is provided. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for this feedback. > Lack of direct numerical comparison with existing methods We compare our scoring techniques with the standard scoring technique at the time of writing, simulation scoring. In Table 1, we compare the costs of both scoring techniques, and we perform human score correlations (line 322, left column). We also perform inter-score correlations, Tables A1 and A2. Is there a specific comparison that the reviewer would like to see? > Following 1, it is unclear how much interpretability quality improves or is sacrificed for scalability. Improvements in scalability were mostly on the generation of scores, which we discuss in Table 1. This means that the proposed techniques do not sacrifice interpretability quality for scalability. Indeed we believe that the proposed guidelines for generating and evaluating latents improve interpretability quality instead of sacrificing it. Some of our proposed scores correlate well with human-given scores. > Missing the analysis of failure cases. It is important to be aware of the safety boundary of the proposed method in implementation. We currently provide one failure case in Figure A2 of the appendix, where the explainer model produces an incorrect explanation, one that is more easily gathered from top activations. (As we write this, we notice that the caption of the figure is missing, and that it indeed discussed this.) We discuss this failure mode in the text. It is hard in general to discuss failure cases in interpretability, and even harder to define safety boundaries. We wonder if the reviewer has specific comments on how to improve on this. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the clarifications. After reviewing the rebuttal and the comments from other reviewers, I recognized my earlier misunderstanding of the results in Table 1. I now lean toward acceptance and am happy to raise my score.
Summary: The paper develops an automated pipeline for interpreting latent features identified by sparse autoencoders (SAEs) in LLMs. The authors implement a three-stage approach that first collects latent SAE activations, then generates natural language interpretations using external LLMs, and finally evaluates interpretation quality. Their main contribution is five scoring methods (detection, fuzzing, surprisal, embedding, and intervention) that are more compute-efficient than traditional simulation scoring. Intervention-based evaluation is not commonly used in autointerp pipelines and is a new addition to the mix. It is a causal framework that explains the latents by their effects on model outputs rather than input correlations. The provided evaluation approaches each have specific qualities, and the paper discusses that. The research demonstrates that interpretation quality depends on sampling strategy (stratified sampling across activation distributions produces better interpretations than using only top-activating examples). The practical implication is the most important as it enables scalable assessment of millions of features, providing a foundation for improved model understanding. Claims And Evidence: Yes. The main claims regarding novelty are: "We introduce five new techniques to score the quality of interpretations that are cheaper to run than the previous state of the art. One of these techniques, intervention scoring, evaluates the interpretability of the effects of intervening on a latent, which we find explains latents that are not recalled by existing methods" The paper supports its claim of introducing five more cost-efficient scoring techniques through detailed cost analysis in Table 1, showing methods like fuzzing and detection require 5-30x fewer tokens than simulation scoring. 
The second claim about intervention scoring is evidenced in Figure 3, which demonstrates a negative correlation between fuzzing and intervention scores, proving that latents scoring poorly on traditional methods can be well-interpreted through their causal effects. Methods And Evaluation Criteria: Yes, the proposed five evaluation methods are quite intuitive and make sense for interpreting latents. Theoretical Claims: There are no major theoretical claims or issues. Experimental Designs Or Analyses: Yes, I can validate the soundness of the following experiments: - Cost comparison of scoring methods (Table 1) - Correlation analysis between scoring methods (Tables A1 & A2) - Example sampling strategy comparison (Figure 2) - Testing how different sampling techniques (top activating examples, random sampling, stratified sampling) affect interpretation quality - Intervention scoring analysis (Figure 3) - Demonstrating that some latents with low fuzzing scores have high intervention scores, and comparing trained SAEs against random baselines The number of latents used for correlation analysis could be larger. Supplementary Material: Tables A1 and A2 Relation To Broader Scientific Literature: Considering that the work is more of an empirical study with practical value, it provides useful tools for the broader research community to interpret activations of various models across different domains. 
Essential References Not Discussed: Essential references are discussed Other Strengths And Weaknesses: Strengths: - The authors provide a useful tool for the community, although the automated interpretability pipeline itself is not novel - The novelty lies in the evaluation approaches for the interpretations, which are insightful - The intervention-based approach is useful to have in the stack Weaknesses: - While being a useful tool for a broad community (both interpretability community and people who want to use the interpretability toolsets), the overall insightful findings and novelty in the method are limited. The automated interpretability pipeline is not new. Verdict: I consider this work as a valuable project and tool for the community, rather than being scientifically profound. I would require to see more scientific novelty or rigour for a clear accept. Therefore, I consider this work as weak accept. Other Comments Or Suggestions: None. Questions For Authors: No question. Code Of Conduct: Affirmed. Overall Recommendation: 3
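This review highlights the finding that stratified sampling across the activation distribution produces better interpretations than showing only top-activating examples. A minimal sketch of that sampling idea follows; the equal-size quantile bins, the toy corpus, and the function name are illustrative assumptions, not the paper's exact scheme:

```python
import random

def stratified_sample(examples, n_bins=4, per_bin=2, seed=0):
    """Sample examples across the full activation range instead of only the
    top: rank by activation, split into equal-size bins, draw a few per bin."""
    rng = random.Random(seed)
    ranked = sorted(examples, key=lambda pair: pair[1])
    bin_size = max(1, len(ranked) // n_bins)
    sample = []
    for b in range(n_bins):
        bin_examples = ranked[b * bin_size:(b + 1) * bin_size]
        if bin_examples:
            sample.extend(rng.sample(bin_examples, min(per_bin, len(bin_examples))))
    return sample

# Toy corpus of (text, latent activation) pairs.
corpus = [(f"example {i}", act) for i, act in enumerate([0.1, 0.2, 0.3, 0.9,
                                                         1.5, 2.0, 3.1, 4.8])]
picked = stratified_sample(corpus, n_bins=4, per_bin=1)
print([a for _, a in picked])
```

The point of the strategy, as the review summarizes it, is visible in the output: the sampled examples span low, medium, and high activations rather than clustering at the top of the distribution.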
Rebuttal 1: Rebuttal: Thank you for your helpful comments. > While being a useful tool for a broad community (both interpretability community and people who want to use the interpretability toolsets), the overall insightful findings and novelty in the method are limited. The automated interpretability pipeline is not new. At the time of writing, evaluating SAE interpretability focused on evaluating the top activating examples. We show that this is misleading for multiple reasons. Firstly, this gives an illusion that SAEs are more interpretable and monosemantic than they are (we show this in figure 2), ignoring the full distribution. There was also the impression that the lower activations of SAEs were completely uninterpretable and less valuable, which again we show not to be the case. We think this to be a valuable insight. Our scoring techniques improved on the state of the art, and are now the standard instead of simulation scoring. We think that our discussion on interpretability scoring is also helpful for the community, and such an analysis is not common. Intervention scoring was a type of scoring that was missing in the current framing of SAE interpretability. We would love to be able to produce more rigorous results that would convince the reviewer that the paper strongly merits acceptance. If there are specific examples that the reviewer would like to be done more rigorously we would take those into consideration.
Summary: This paper proposes an automatic concept explanation method based on LLM to address the problem of poor human comprehension of sparse autoencoders. Specifically, the author collects highly responsive sentences and corresponding concepts in SAE and carefully designs LLM prompts to prompt LLM. Then LLM automatically explains other units in SAE. The author also provides 5 evaluation metrics to evaluate the effectiveness of the method. ## update after rebuttal Claims And Evidence: The author claims in the title to explain millions of features in LLM, but it seems that this article explains the massive units in SAE latent space. Methods And Evaluation Criteria: The author provides 5 evaluation indicators, which I think are reasonable. Theoretical Claims: No theoretical analysis in the paper. Experimental Designs Or Analyses: Lack of comparison with some baseline methods. Supplementary Material: Many experimental details are provided in the appendix. Relation To Broader Scientific Literature: It is interesting to interpret the parameters in LLM, and this paper proposes an interesting approach. Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths:** - The idea of using LLM to automatically explain SAE is interesting, although there is similar work, such as using GPT-4 to explain GPT-2. **Weaknesses:** - The authors only used one LLM to interpret SAE in this article. It would be more convincing if the authors could consider more LLMs and perform prompt sensitivity analysis. - The author claims in the title to explain millions of features in LLM, but it seems that this article explains the massive units in SAE latent space. - What if we let an LLM explain its own SAE? What if we use an additional LLM to explain another LLM's SAE? The author can explore this. - Can this method identify some latent units that do not have obvious conceptual information? 
- The method in this paper is very interesting, but the experiments seem to be insufficient, for example, there is a lack of some sensitivity analysis, specific ablation experiments, or more potential downstream applications, etc. Other Comments Or Suggestions: Please see weaknesses, I am willing to raise my score if the author can address my concerns convincingly. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for your helpful comments. >The authors only used one LLM to interpret SAE in this article. It would be more convincing if the authors could consider more LLMs and perform prompt sensitivity analysis. In the current version of the article, we evaluate the explanations given by Llama 3.1 70b, Llama 3.1 8b and Claude 3.5 Sonnet, as shown in Table A8. We discuss prompt sensitivity analysis in the last reply of this rebuttal. > The author claims in the title to explain millions of features in LLM, but it seems that this article explains the massive units in SAE latent space. In this work we use features to mean interpretable directions in both the residual stream and the MLP outputs. SAEs were developed because directions in the residual stream and MLP (by looking at neurons for instance) were often polysemantic and hard to understand. The interpretability community uses feature to mean interpretable concepts, and so we believe the claim of explaining millions of features to be correct, being similar to the one in Templeton et al., 2024. We have added to the introduction the following sentence: "Throughout this work we are going to be using latent to mean a specific pair of encoder-decoder indices in SAE, and feature to mean the concept that this latent is representing." > What if we let an LLM explain its own SAE? What if we use an additional LLM to explain another LLM's SAE? The author can explore this. We did both of these in this work. We used Llama 8b to generate explanations for the Llama 8b SAE, and in general have used other LLMs – Llama 70b – to generate the explanations for the SAEs trained on Llama 8b and Gemma 2 9b. Llama 8b is worse at generating explanations than Llama 70b, which was the smallest model that gave reasonable explanations, and at the time of writing there were no Llama 70b SAEs, so these were the only experiments we could perform. 
> Can this method identify some latent units that do not have obvious conceptual information? Some units have simple conceptual interpretations (they fire on the verb _to be_, or on _chair_) and some have more complicated patterns, like firing in text between parentheses. We are not sure if this is what the reviewer meant to ask. > The method in this paper is very interesting, but the experiments seem to be insufficient, for example, there is a lack of some sensitivity analysis, specific ablation experiments, or more potential downstream applications, etc. We have ablated model size (for both explainer and scorer), number of examples shown, number of tokens per example shown, origin of the examples shown, using chain-of-thought or not, and showing token activations or not. We also investigated the interpretability of different layers, the interpretability of residual-stream SAEs versus SAEs trained on the output of the MLP, as well as the dependence of interpretability on the SAE expansion factors. Is there a specific ablation that the reviewer has in mind? --- Rebuttal Comment 1.1: Comment: Thanks to the author's reply, most of the concerns have been addressed. Considering the article writing and contribution, I decided to raise the score to weak accept.
Statistical Query Hardness of Multiclass Linear Classification with Random Classification Noise
Accept (oral)
Summary: This paper studies the problem of learning multiclass linear classifiers of form $ f_w(x) := \arg\max_{i} \\{\langle w_i, x \rangle\\},$ under random classification noise with a known noise channel. The paper shows that, unlike the binary classification case where an efficient SQ algorithm is possible, the case with even 3 classes is SQ-hard with query complexity superpolynomial w.r.t. the input dimension. The main proof technique is based on reducing the hardness to a composite hypothesis testing problem. The difficulty lies in constructing the hypothesis testing distributions consistent with certain multiclass polynomial classifiers (which further reduce to linear classifiers with a larger input dimension). This is achieved by leveraging the hidden direction distributions from Diakonikolas & Kane (2022) and Nasser & Tiegel (2022), with a more careful design for the distributions on the "hidden direction". The final SQ lower bound then follows by tuning the degree of the polynomials. Claims And Evidence: The claims are clear, and the proofs are convincing. Methods And Evaluation Criteria: This is a purely theoretical paper. The proof technique is sound and based on prior literature. Theoretical Claims: I checked most of the proofs necessary for the main claims, though I didn't verify every single detail. As far as I can see, the overall proof idea is clean and correct. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: This paper advances the understanding of the computational complexity of linear classification under RCN beyond binary labels. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Other Weaknesses:** 1. Although this is clearly a solid theoretical paper with novel ideas, I'm skeptical about its broad impact on the community. It seems to me that the difficulty mainly arises from the use of 0-1 loss, which is known to be hard in many other settings. 2. 
A more practically relevant setting (as also pointed out by the authors) is to consider softmax regression, $ p_w(i \mid x) = \frac{e^{\langle w_i, x\rangle}}{\sum_{j=1}^{k} e^{\langle w_j, x\rangle}}, $ and optimize under the log-loss. This is known to admit computationally efficient algorithms even in the adversarial online setting (which reduces to the batch setting through online-to-batch conversion); see: [https://arxiv.org/pdf/2110.03960](https://arxiv.org/pdf/2110.03960). 3. The proof techniques largely follow from Diakonikolas & Kane (2022) and Nasser & Tiegel (2022), which is not particularly surprising. Other Comments Or Suggestions: **Writing Suggestions:** 1. The notation $C\text{opt}$ is confusing, consider using $C \cdot \text{opt}$. 2. In Definition 4.1, item 2 should be $x \not\in S_k$. 3. Definition 5.2 is quite confusing: - it would be better to explicitly say that "$J_i$ is a union of $m$ disjoint intervals." I only understood this after reading your construction in Definition 5.5. - did you miss $\textbf{conv}$ in the definition of $I_{\text{in}}$? Moreover, it is better to write $I_{\text{in}}$ instead of $I_{in}$, as one might confuse "in" with $i \cdot n$. 4. The distributions $h_i$s are confusing with the hypothesis, consider using $H_j$s. Questions For Authors: I list some comments in the "Other Strengths and Weaknesses" section, which will not impact my recommendation for acceptance. Code Of Conduct: Affirmed. Overall Recommendation: 4
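The softmax regression model quoted in point 2 above can be illustrated numerically. A toy sketch follows, computing $p_w(i \mid x)$ and the log-loss the reviewer contrasts with the paper's 0-1 loss; the weights and input here are made up for illustration:

```python
import math

def softmax_probs(W, x):
    """p_w(i | x) = exp(<w_i, x>) / sum_j exp(<w_j, x>)."""
    logits = [sum(wi * xi for wi, xi in zip(w, x)) for w in W]
    m = max(logits)  # subtract the max logit for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def log_loss(W, x, y):
    """Log-loss -log p_w(y | x) for the observed label y."""
    return -math.log(softmax_probs(W, x)[y])

W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]  # k = 3 classes, d = 2
x = [2.0, 0.5]
probs = softmax_probs(W, x)
print([round(p, 3) for p in probs], round(log_loss(W, x, 0), 3))
```

Note the point the rebuttal below makes about scale: doubling every row of `W` leaves the 0-1 prediction (the argmax) unchanged but changes every `log_loss` value, which is why the two objectives are not directly comparable.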
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our theoretical results and for providing useful feedback. We next respond to the comments from the reviewer as follows. >Classification vs. Regression: We thank the reviewer for drawing our attention to the multi-class regression setting. In that respect, we want to point out that 0-1 loss is a standard benchmark for studying classification problems and for linear classification in particular. Specifically, the computational difficulty of the problem does not arise from using the 0-1 loss per se, but from the added noise. As we mentioned in the introduction (and also see our response to reviewer Y6KY), in the realizable setting the MLC problem can be written as a linear program and can therefore be solved efficiently (and with an SQ algorithm). Furthermore, if $H$ is well-conditioned, the problem is also efficiently solvable with respect to the 0-1 loss. In particular, the scale of the ground truth W has no effect on the 0-1 loss. That is, the use of the 0-1 loss is not the reason that makes the problem hard. On the other hand, the logistic regression problem studied in https://arxiv.org/pdf/2110.03960 is not comparable to the classification setting studied in our paper. In fact, under the logistic loss, even the ground truth hypothesis W* could have a very large loss in the realizable setting—while under the 0-1 loss in the realizable setting W* has 0-1 loss equal to 0. This means that, without reasonable distributional assumptions, we cannot hope to use logistic regression to learn a hypothesis with a good 0-1 loss, which is what we believe is most desirable in practice. Furthermore, the use of logistic regression depends on the magnitude of W, while in 0-1 loss the magnitude of W does not affect the learnability of the problem. For two W’s with the same prediction under 0-1 loss, their logistic losses can be very different. 
In fact, the regret bound obtained in https://arxiv.org/pdf/2110.03960 actually depends on this magnitude (which can blow up when W is large). Based on these observations, we believe that the results for logistic regression are somewhat orthogonal to our results and their future potential impact. In summary, we believe that the hardness results obtained in our paper may motivate subsequent algorithmic work under additional structural assumptions (please also see the conclusions of our paper). >Technical Novelty: Given the efficient learnability of halfspaces with RCN and MLC in the realizable setting, prior to our work, it was considered plausible that MLC with RCN is also efficiently learnable. As mentioned in the introduction, MLC with RCN has been widely studied in prior work both empirically and theoretically (by different communities), and efficient learning algorithms have been developed when $H$ is well-conditioned. Rather surprisingly, we show a strong computational separation between the binary and the ternary cases. Furthermore, we want to emphasize that although parts of our proof are built over the standard NGCA framework developed by [DKS17, DK22] and leverage modifications of the discrete Gaussian inspired by [NT22], formulating MLC to fit in these frameworks requires novel ideas, such as: identifying the correct condition, under which the hardness holds; and mapping MLC to a polynomial classification problem without using polynomials with very high degree. We refer the reviewer to our response to Y6KY’s similar question for more details. >Reference: [BPSTWZ19]Beygelzimer, A., Pal, D., Szorenyi, B., Thiruvenkatachari, D., Wei, C.-Y., and Zhang, C. Bandit multiclass linear classification: Efficient algorithms for the separable case. In International Conference on Machine Learning, pp. 624–633. PMLR, 2019. [DV04]Dunagan J, Vempala S. A simple polynomial-time rescaling algorithm for solving linear programs. 
In Proceedings of the thirty-sixth annual ACM Symposium on Theory of Computing, pp. 315–320, 2004. [DKS17] Diakonikolas, I., Kane, D. M., and Stewart, A. Statistical query lower bounds for robust estimation of high-dimensional Gaussians and Gaussian mixtures. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pp. 73–84, 2017. doi: 10.1109/FOCS.2017.16. [DK22] Diakonikolas, I. and Kane, D. Near-optimal statistical query hardness of learning halfspaces with Massart noise. In Conference on Learning Theory, pp. 4258–4282. PMLR, 2022. [NT22] Nasser, R. and Tiegel, S. Optimal SQ lower bounds for learning halfspaces with Massart noise. In Conference on Learning Theory, pp. 1047–1074. PMLR, 2022.
Summary: This paper is concerned with the task of multiclass linear classification (MLC) with $k$ labels under random classification noise (RCN). The problem parameters are a $k \times k$ row-stochastic noise matrix $H$, and a target linear classifier $f^\star$ that maps $\mathbb{R}^d$ to $[k]$ as $f^\star(x)=\arg\max_{i \in [k]} \langle w_i, x \rangle$ for ground-truth vectors $w_1, \ldots, w_k \in \mathbb{R}^d$. There is a joint distribution $D$ over $\mathbb{R}^d \times [k]$, such that a sample is drawn as follows: first, we draw $x \sim D_x$ where $D_x$ is the marginal of $D$ on $\mathbb{R}^d$, and then, we draw $y$ from the conditional distribution $\Pr[y=j|x]=H_{f^\star(x), j}$. Namely, the label $y$ should ideally have been $f^\star(x)$ with no noise; but now, we perturb the label according to the $f^\star(x)^{\text{th}}$ row of $H$. Given iid draws from this distribution, and $\epsilon \in (0,1)$, the task in MLC with RCN is to output a classifier that has error at most $\epsilon$ larger than the error of the optimal classifier in some class. The sample complexity of this task is known. This paper is concerned with the computational complexity of the task. In particular, for $k=2$, computationally efficient algorithms (that furthermore fit into the SQ model of algorithms) are known. One of the main results in the paper (Theorem 1.2) is an SQ lower bound for the case when $k=3$. This is surprising, and establishes a separation between what can be achieved computationally efficiently for $k=2$ versus $k=3$. The authors also establish certain other results, which rule out SQ-based algorithms to even approximate the optimal error up to a multiplicative constant factor (Theorem 1.3), and also an instance for which it is hard to efficiently output a hypothesis better than random guessing (Theorem 1.4). The results are accomplished by combining prior SQ-lower bound constructions in the literature due to Diakonikolas and others. 
In particular, the authors first show how a certain hypothesis testing problem that distinguishes cases where the labels are independent of $x$, versus the case where the labels are drawn according to a specialized MLC with RCN instance (see Definition 4.1), reduces to the learning problem of MLC with RCN (Lemma 4.4). So, the main task then becomes showing that this testing problem is hard. For this, the authors use a construction based on so-called "hidden direction distributions" (Definition 5.1) inspired by prior work. This construction appears to be standard in the SQ lower bound literature, but massaging it to the present context of MLC with RCN appears to require novel conceptual bridges. Finally, all the aforementioned hardness results for MLC under RCN follow (Section 6) from the hardness of distinguishing in the above testing problem. ## update after rebuttal I thank the authors for the clarifications. It would definitely be useful to include this discussion (as the authors deem appropriate) in the revised version of the paper. I maintain my evaluation of the paper, and my score. Claims And Evidence: The claims and evidence appear convincing to me. Methods And Evaluation Criteria: NA Theoretical Claims: I only glanced over the proofs, and did not verify calculations line-by-line. They appear correct to me, and the progression in the overall analysis checks out to me. Experimental Designs Or Analyses: NA Supplementary Material: NA Relation To Broader Scientific Literature: A lot of statistical learning theory is concerned with analyzing the number of samples that are information-theoretically sufficient and necessary to output an accurate classifier. The computational complexity of generating these accurate classifiers however also demands study. The present result establishes a separation in the computational complexity of linear classification under noise when the number of labels increases from 2 to 3. 
This is conceptually a surprising result, and suggests that standard algorithmic paradigms (namely, SQ algorithms) do not suffice for the seemingly natural extension of learning tasks from binary to multiclass, and more powerful algorithmic primitives provably become necessary. Essential References Not Discussed: NA Other Strengths And Weaknesses: To me, the primary strength of the paper is the conceptual conclusion---within the class of computationally efficient algorithms, SQ algorithms suffice for binary classification with RCN, but provably do not for multiclass problems even with 3 classes. This conclusion is indeed intriguing to me. I believe this conclusion would inspire future inquiry into the specifics of what can be achieved efficiently for MLC under RCN. To establish the result, the authors do rely on heavy-weight machinery on SQ lower bounds from prior literature. I do not necessarily view this as a weakness, since, as I mentioned, massaging prior literature into the present context does appear to require technical work, and the final conclusion is satisfying and important. Other Comments Or Suggestions: I would really encourage the authors to be more specific with particular lemmas/theorems/chapters when you refer to textbooks (e.g. the SSS-SBD textbook). Simply citing a textbook in my opinion is quite lazy and entirely useless. Questions For Authors: 1) Could you, in a brief paragraph, summarize the technical conceptual novelty that is required in massaging the standard hidden distribution SQ lower bounds to the present context? While the overview in section 2 is helpful, I was not able to fully appreciate what parts in the overview were specifically novel and challenging for the present result. 2) What is the reference to the LP result for the realizable setting in lines 35-37? 3) I am a little confused about what the difference even is in non-realizable vs realizable learning under RCN. 
Since the labels are always perturbed by noise, and the error is measured with respect to the best classifier in a class, what does it mean to be realizable? Does the non-realizable case simply refer to the case where $f^\star$ is not in the class, so opt is no longer $f^\star$? If so, for the binary case, what is known about the difference between these two settings? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
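As an editorial illustration of the noise model discussed in this review (my own sketch, not from the paper): under RCN, each clean label is corrupted according to a row-stochastic confusion matrix $H$, and in the positive-separation regime $H_{ii}$ strictly exceeds every off-diagonal entry of row $i$.

```python
# Illustrative sketch of Random Classification Noise (RCN), where the
# row-stochastic matrix H gives H[i][j] = P(observed label = j | clean label = i).
import random

def apply_rcn(clean_labels, H, rng):
    """Corrupt each clean label y by drawing from the y-th row of H."""
    return [rng.choices(range(len(H[y])), weights=H[y])[0]
            for y in clean_labels]
```

With $H$ the identity matrix, no labels are flipped; any row-stochastic $H$ with a dominant diagonal models the separation condition from the summary above.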
Rebuttal 1: Rebuttal: We thank the reviewer for the appreciation and useful feedback. >Technical Contributions: We point out that our proof cannot be viewed as a simple modification of a previously developed SQ lower bound. As explained in the submission, the generic framework follows the moment-matching approach for Non-Gaussian Component analysis of [DKS17] and its generalization [DK22]. More precisely, we require a “relativized” version of that framework that is appropriate for supervised learning. Importantly, in order to be able to use this framework for our learning task, we need to construct novel low-dimensional moment-matching distributions that correspond to instances of MLC with RCN. This is our main contribution that requires significant conceptual and technical work. Specifically, to achieve this, we carefully map an instance of MLC to a polynomial classification problem and derive a new “hardness” condition (Definition 4.1), under which (after the mapping) the conditional distribution of the polynomial classification problem coincides with a modified discrete Gaussian. Prior to our work, it was unclear what a hard instance of MLC with RCN would look like (or whether such a hard instance exists). While the use of the modified version of a discrete Gaussian is inspired by prior works (such as [NT22]), reaching this step requires novel technical insights for MLC with RCN. In particular: 1. A key novelty of our paper is to identify the “correct” condition (Definition 4.1) under which we are able to relate our problem to the existing framework for proving SQ lower bounds. Identifying such a condition is highly non-trivial for the following reasons. Conceptually speaking, algorithms developed in prior works fail when the noise matrix $H$ is non-invertible. However, not all non-invertible matrices $H$ can be used to obtain hard instances.
Specifically, for a generic non-invertible $H$, we can only guarantee that one row of $H$ can be written as a linear combination of the other rows of $H$; and the coefficients of the linear combination could be negative. Natural attempts to construct hard instances for a generic $H$ fail. Specifically, they may require mapping MLC to a polynomial classification problem with very high degree, which will not lead to a super-polynomial lower bound for the original problem (as explained in Section 6). On the other hand, using our SQ hard-to-distinguish condition (Definition 4.1), we are able to map MLC with RCN to a polynomial classification problem in a clean and novel way (Theorem 5.3); together with the modification of a discrete Gaussian (Definition 5.5) these rule out any efficient SQ learning algorithm for this problem. 2. Additionally, recognizing the hardness of MLC with RCN itself is already rather surprising, given the well-known efficient learners for halfspaces with RCN and for MLC in the realizable setting. Before our work, it seemed plausible that a similar result could hold for MLC with RCN, at least when $f^*$ is Bayes optimal. In fact, prior works have shown that when $H$ is well-conditioned, this is indeed possible. Our results settle the complexity of the problem by showing a surprising computational separation between MLC with RCN and binary classification with RCN. >LP formulation for MLC in the realizable setting: This reduction follows the definition of MLC. Instead of considering $k$ vectors in $R^d$, we can consider one vector $w$ in $R^{kd}$ (the concatenation of $w_1, \ldots, w_k$) such that for every example $x$ with label $i$, we have $w_i \cdot x - w_j \cdot x \ge 0$ for every $j \neq i$ (see Definition 1 of [BPSTWZ19] for example). >Realizable vs.
Non-realizable: From the phrasing in the review, we understood that the question is the following: what is the difference between the case where $f^*$ is the Bayes optimal classifier (realizable) and the case where $f^*$ is not the Bayes optimal classifier (non-realizable). In case we misunderstood your question, please let us know and we will respond to the clarification. If the probability that the label of an example is unchanged is strictly larger than the probability that it is flipped to another label, then $f^*$ always achieves 0-1 error opt and is the Bayes optimal classifier. In the binary classification setting, this condition on the error probabilities is equivalent to saying that the probability the label of an example is flipped is less than $1/2$. On the other hand, if this condition does not hold, then $f^*$ may not be the hypothesis in the class that achieves error opt. However, in both cases, we always want to compete with the hypothesis in the class that achieves the optimal error (we do not want to compete with hypotheses not in the class). In binary classification, no matter whether $f^*$ achieves error opt or not, we are always able to learn a hypothesis with an error arbitrarily close to opt in polynomial time. The references are included in the response to Reviewer 4NDD.
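For concreteness, the LP reduction described in the rebuttal above can be written as a small feasibility program. This is my own illustrative sketch, not the authors' code: I replace the inequality $w_i \cdot x - w_j \cdot x \ge 0$ with a margin of 1 (any positive margin works after rescaling) and check feasibility with a zero objective via `scipy.optimize.linprog`.

```python
# Sketch of the realizable-MLC LP: stack w = (w_1, ..., w_k) in R^{kd} and,
# for each example x with label i, require (w_i - w_j) . x >= 1 for j != i.
import numpy as np
from scipy.optimize import linprog

def realizable_mlc_lp(X, y, k):
    n, d = X.shape
    rows, rhs = [], []
    for x, i in zip(X, y):
        for j in range(k):
            if j == i:
                continue
            row = np.zeros(k * d)
            row[i * d:(i + 1) * d] = -x   # contributes -(w_i . x)
            row[j * d:(j + 1) * d] = x    # contributes +(w_j . x)
            rows.append(row)              # encodes (w_j - w_i) . x <= -1
            rhs.append(-1.0)
    res = linprog(np.zeros(k * d), A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=(None, None), method="highs")
    return res.success, res.x
```

On linearly separable data the LP is feasible, and any feasible $w$ classifies every training example correctly via $\arg\max_i w_i \cdot x$.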
Summary: The paper studies linear multiclass classification under random classification noise within the statistical query (SQ) model. It considers a setting with positive noise separation, meaning that for a given labeled pair $(x, f^{\star}(x))$ under the ground truth labeling hypothesis, the probability of correctly observing the label $y = f^{\star}(x)$ is strictly greater than the probability of observing any incorrect label $y \neq f^{\star}(x)$. The paper establishes a superpolynomial lower bound on the number of queries required for any SQ learner. Additionally, when the number of labels is large, it provides a superpolynomial lower bound for approximate SQ learners or SQ learners that outperform random guessing. Claims And Evidence: Possible issue with Theorem 6.2. See theoretical claims below. Methods And Evaluation Criteria: N/A Theoretical Claims: Yes, I went through Theorem 6.2, Corollary 6.3, and Corollary 6.4 at a high level. I may be overlooking something, but I think there might be an issue in Theorem 6.2. Specifically, the sum of the entries in the $k$-th row of the noise matrix $H$ does not seem to add up to $1$. Instead, it sums to $$ 1+ \zeta - \frac{(k-1)}{k} \zeta = 1+\frac{\zeta}{k}. $$ This defines a valid matrix only if $\zeta = 0$. I believe the intended definition of $H$ was $$ H_{kj} = \frac{1}{k} -\frac{\zeta}{k-1} \quad \forall j < k, \quad \text{ and } \quad H_{kk} = \frac{1}{k} + \zeta. $$ I briefly reviewed the proof of Theorem 6.2 and did not see any immediate step where division by $\zeta$ would cause the proof to break. However, given that the proof involves asymptotic bounds with $O(\cdot)$ and $\Omega(\cdot)$ notation, it is unclear whether some of the hidden constants depend on $1/\zeta$. I would appreciate it if the authors could verify whether this issue has a straightforward fix or if it presents a more fundamental problem. 
At the very least, it is evident that Corollaries 6.3 and 6.4 do not hold for $\zeta = 0$, as those proofs explicitly assume $\zeta$ is a nonzero constant. $\textbf{Given this issue, I am recommending a rejection. However, I am happy to engage with the authors during the rebuttal and change my recommendation if a minor fix is provided. }$ Experimental Designs Or Analyses: N/A Supplementary Material: Yes, proofs of Theorem 6.2 and Corollaries 6.3 and 6.4. Relation To Broader Scientific Literature: There is a growing body of research showing that the statistical landscape of multiclass learning can differ significantly from binary classification, especially as the number of labels increases. This paper contributes to this literature by highlighting a key difference that also considers the computational landscape. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I think this is a strong paper that establishes several interesting results. And the paper does a good job of conveying the intuition behind some of the constructions used in their lower bound proofs. As for potential weaknesses, given that this appears to be the first work studying linear multiclass classification with random noise from an efficiency perspective, it would have been valuable to include some positive results as well. See the questions below for further discussion on this point. But this is just a personal opinion, so I won't take this into account in my judgement of the paper. Other Comments Or Suggestions: N/A Questions For Authors: *(This question is only relevant if the concern I raised earlier about the proof has a minor fix.)* In the introduction, the authors mention prior work proposing methods whose time complexity scales inverse polynomially with the minimum singular value of $H$. In their intuitive discussion of the proof, they describe a matrix $H$ in Equation (1) with linearly dependent columns and a separation of $\sigma = 0.1$. 
Based on this example, it seems that such a large separation is feasible only for $k = 3$, and as $k$ increases the separation has to grow progressively smaller for the lower bound to hold. This suggests a possibility of a phase transition. Specifically, for each $k$, there might exist a threshold $\sigma$—a function of $k$—such that if the separation is at least $\sigma$, a polynomial-time bound is achievable. For sufficiently large $\sigma$, it might be possible to establish a fixed lower bound on the smallest singular value in terms of $\sigma$, which, combined with the existing results cited in the introduction, would imply the existence of an efficient algorithm. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for appreciating our theoretical results and for pointing out typos in the manuscript. We next respond to the comments from the reviewer as follows. >Definition of $H$ in the proof of Theorem 6.2: We want to thank the reviewer for pointing out this typo in the submission. The correct definition is as follows: the last row of H is defined as $H_{ki} = 1/k - \zeta/(k-1)$ for $i<k$ and $H_{kk} = 1/k+\zeta$. We will fix this minor issue in the revised version of the manuscript. Importantly, we want to emphasize that this typo does not affect the correctness of Theorem 6.2 and its corollaries for the following reasons. First, for any noise matrix $H$ that satisfies the SQ-hard to distinguish condition (Definition 4.1), we construct a family of hypothesis testing problems in a black-box way. The lower bound on the running time (number of SQ queries) for such a hypothesis-testing problem only depends on the parameters of the hard distributions defined for this hypothesis testing problem (Theorem 5.3). The choice of the parameter $\zeta$ only affects the final error guarantee that can be ruled out by our hardness result—but does not affect the lower bound on running time for the corresponding hypothesis testing problem. So, even setting $\zeta=0$ does not affect any proof; moreover, the big-O notation does not have any hidden dependence on $1/\zeta$. Specifically, in Theorem 5.3, the lower bound is $\Omega_c(N) = C(c) N$, where $C(c)$ is a number that only depends on the parameter $c$ used to calculate $\tau$ in the statement of Theorem 5.3. Using Theorem 5.3 we prove Theorem 6.2. Importantly, in doing this, no matter which $\zeta$ we choose in the definition of $H$, we always choose the constant $c=1/poly(k)$. This gives us an SQ lower bound of $2^{\Omega_k(N)}>2^{\Omega(N^{0.99})}$ when $N^{0.01}$ is larger than $C(k)$ in the hidden constant. In summary, the proof of the lower bound is correct as is. 
Second, as we mentioned earlier, the choice of $\zeta$ only affects the final error guarantee that we can rule out. In particular, either opt or the error guarantee is calculated with respect to $1-H_{ii}$, as we did in line 1031 and line 1043. Luckily, despite the typos for $H_{ki}$ for $i<k$ in the definition of $H$, the values of the diagonal elements $H_{ii}$ are correct. In summary, the final error guarantee we rule out remains correct as is. >Possible positive results on MLC with RCN: We first want to point out that although our paper is the first paper that establishes a computational hardness result for MLC with RCN, as mentioned in the introduction of our paper, there exist many prior works that study this problem both empirically and theoretically from an algorithmic perspective. Specifically, when the noise matrix $H$ is invertible, one can invert the noise matrix $H$ and reduce the problem back to MLC in the realizable setting. These methods work well if $H$ is well-conditioned; for example, when $H_{ii}>2/3$, inverting $H$ is easy and will not affect the learning task by much. However, these methods fail if $H$ is ill-conditioned; in fact, we show that in the distribution-free setting the problem is provably computationally hard. As we point out in the conclusion section, there are several interesting directions that can be explored with respect to positive results. These include MLC with RCN under well-behaved marginal distributions, and MLC with RCN for more structured noise matrices. Positive results in these directions are beyond the scope of this paper. >Phase transition for $\sigma$: We start by pointing out the following: since the choice of $\sigma=0.1$ gives a hard noise matrix for $k=3$, one can easily obtain a hard noise matrix with $\sigma=0.1$ for any $k>3$. Specifically, this can be done by concatenating this $H$ (for $k=3$) with an identity matrix of size $k-3$.
That is, it is not the case that the separation needs to be small in order to make the hardness result work for large $k$. On the other hand, we believe that there is a threshold $\sigma$ such that this phase transition happens. For example, if $\sigma=2/3$, then $H$ is well-conditioned (i.e., has no small eigenvalues), and (as was pointed out in prior work and mentioned above) we can invert $H$ to get a computationally efficient algorithm. However, the actual running time of such algorithms scales polynomially with the inverse of the smallest singular value of $H$ instead of the separation parameter. It would be interesting to understand this phase transition phenomenon more deeply and precisely characterize this threshold–namely, whether $H$ is not invertible for some $\sigma$, but we are still able to solve the problem efficiently. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarification regarding the typo. While I did not verify every detail line by line, the explanation provided seems reasonable. I also appreciate the clarification on the phase transition. Since my concerns have been addressed, I have raised my score from 1 to 4.
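As an editorial aside (not part of the review exchange), the corrected last row of $H$ from the rebuttal above, $H_{kj} = 1/k - \zeta/(k-1)$ for $j < k$ and $H_{kk} = 1/k + \zeta$, and the block-diagonal embedding of a hard $3 \times 3$ noise matrix into $k > 3$ labels are easy to sanity-check numerically:

```python
# Exact-arithmetic checks: the corrected last row is row-stochastic for any
# zeta, and padding a 3x3 matrix with an identity block preserves row sums.
from fractions import Fraction

def corrected_last_row(k, zeta):
    """Last row of the corrected H from the rebuttal."""
    return [Fraction(1, k) - zeta / (k - 1)] * (k - 1) + [Fraction(1, k) + zeta]

def embed_block_diag(H3, k):
    """Pad a 3x3 noise matrix with an identity block to get a k x k matrix."""
    H = [[Fraction(0)] * k for _ in range(k)]
    for i, row in enumerate(H3):
        H[i][:3] = row
    for i in range(3, k):
        H[i][i] = Fraction(1)
    return H
```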
Discrete and Continuous Difference of Submodular Minimization
Accept (poster)
Summary: This paper explores the minimization of the difference of submodular (DS) functions over both discrete and continuous domains, extending prior work that was restricted to set functions. The authors introduce a new variant of the DC Algorithm (DCA) to minimize DS functions, providing theoretical guarantees comparable to previous work in the set function case. The study demonstrates that DS functions naturally arise in applications such as quadratic programming and sparse learning. The proposed method outperforms existing baselines in integer compressive sensing and integer least squares tasks. Experimental results show significant improvements in recovery probability and error rates. Claims And Evidence: The claims in the paper appear well-supported both theoretically and experimentally, with no evident problematic claims. Methods And Evaluation Criteria: I think the proposed methods and evaluation criteria make sense for the problem and application at hand. The problem of minimizing the difference of submodular functions (DS functions) in both discrete and continuous domains is a well-defined and important one in optimization, with practical applications in areas like quadratic programming, sparse learning, and compressive sensing. Theoretical Claims: I have conducted a preliminary review of the theoretical proofs in the paper, and they appear to be correct. However, I should note that my examination was not exhaustive, and there remains a possibility that some details may have been overlooked. Experimental Designs Or Analyses: The authors should include more results regarding the running time of their algorithm to better demonstrate its efficiency. Supplementary Material: I have reviewed all of the Supplementary Material provided with the paper to enable a more comprehensive evaluation of the overall work. 
Relation To Broader Scientific Literature: The problem addressed in this paper may be related to regularized submodular optimization, which involves optimizing a submodular function that is regularized by subtracting a linear or convex function [1][2]. [1] Kazemi, Ehsan, et al. "Regularized submodular maximization at scale." International Conference on Machine Learning. PMLR, 2021. [2] Cui, Shuang, et al. "Constrained subset selection from data streams for profit maximization." Proceedings of the ACM Web Conference 2023. 2023. Essential References Not Discussed: Please refer to the above. Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: N.A. Questions For Authors: Please refer to the above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive review and helpful feedback. We address below your comments and questions. --- **1- Results on the proposed algorithm's running time** We report the average running time of the compared methods on the integer least squares experiment (Section 5.1) in [Figure 4](https://drive.google.com/file/d/1qG4rhu3aBCSAeWfOMx_QJna69vjvDxxM/view). The reported times for DCA and ADMM do not include the time for the RAR initialization, and for OBQ do not include the time for the relaxed solution initialization (which takes a similar time to RAR). We observe that our algorithm is significantly faster than the optimal Gurobi solver, even for the small problem instance ($n=100$). For the larger instance ($n=400$), Gurobi is already too slow to be practical. While our algorithm is slower than heuristic baselines, it remains efficient, with a maximum runtime of $100$ seconds for $m=n=400$. We will include these plots and this discussion in the revision. The primary focus of our experiments was to demonstrate that our algorithm outperforms state-of-the-art baselines on challenging problems, rather than optimizing its computational efficiency. For example, the most computationally intensive part of our algorithm is computing subgradients of the continuous extensions, which requires evaluating successive marginals $F^t(y^{i-1} + e_{p_i}) - F^t(y^{i-1})$. In our current implementation, we compute these marginals by doing separate calls to $F^t$. This can be significantly sped up by reusing computation from evaluating $F^t(y^{i-1})$ when computing $F^t(y^{i-1} + e_{p_i})$. --- **2- Relation to regularized submodular optimization [1][2]** The problem studied in [1][2] is that of maximizing the difference between a submodular set function and a modular set function over constraints. Our problem setup does not consider constraints. But the unconstrained version of that problem is indeed a special case of DS **set** function minimization.
We will cite works on unconstrained regularized submodular set function optimization in the revision.
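As an aside, the speed-up the rebuttal mentions, reusing computation across the successive marginal evaluations $F^t(y^{i-1} + e_{p_i}) - F^t(y^{i-1})$, can be illustrated on a toy coverage function (my own sketch; the paper's $F^t$ is a general submodular component, not a coverage function):

```python
# Computing marginal gains along a permutation, as used for subgradients of
# continuous extensions. The naive version makes separate function calls per
# step; the incremental version reuses the running state and computes the
# same gains in a single pass.

def coverage(sets, chosen):
    """F(S) = size of the union of the sets indexed by `chosen`."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def marginals_naive(sets, perm):
    """Marginal gains via separate evaluations of F at each step."""
    gains, chosen = [], []
    for p in perm:
        before = coverage(sets, chosen)
        chosen.append(p)
        gains.append(coverage(sets, chosen) - before)
    return gains

def marginals_incremental(sets, perm):
    """Same gains, reusing the running union instead of re-evaluating F."""
    gains, covered = [], set()
    for p in perm:
        gains.append(len(sets[p] - covered))
        covered |= sets[p]
    return gains
```

Both functions return identical gains; the incremental version avoids re-scanning the prefix at every step, which is the kind of reuse the rebuttal describes.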
Summary: Submodular functions are commonly studied as set functions, which can be viewed as functions defined on the vertices of the hypercube $ \\{ 0,1 \\}^n$. This paper, however, similar to some prior literature, considers an extension of submodularity, where functions are defined over cartesian products of compact subsets of $\mathbb{R}$. They study the problem of minimizing the difference of these generalized submodular functions. For discrete domains, they propose an algorithm to solve DS via a reduction to integer lattice domains followed by their variant of DCA (difference of convex functions minimization algorithm), which converges to a local minimum at a rate of $O(1/k)$. They claim that with discretization, the same method applies to continuous domains as well. Claims And Evidence: Claims like "The results can be easily extended to unequal $k_i$s" need some explanation. Methods And Evaluation Criteria: Overall, Methods and Evaluation criteria seems fine. This paper builds upon "Difference of Submodular Minimization via DC Programming" by El Halabi et al., so it should include a clear comparison between its DCA variant and those of Halabi et al. in this setting. Theoretical Claims: The main body of the paper includes only proof sketches and high-level ideas. They seemed sound, but I was not able to verify their correctness. Experimental Designs Or Analyses: The experiments compare their proposed algorithm for integer least squares problem (a special case of quadratic programs) and integer compressive sensing problem (a special case of Sparse Learning) with appropriate baselines for the respective problem. Supplementary Material: I have looked through the supplementary material, but I have not examined them closely. Relation To Broader Scientific Literature: This paper extends the result of the paper "Difference of Submodular Minimization via DC Programming" by El Halabi et al. 
(2023) to the setting proposed in the paper "Submodular functions: from discrete to continuous domains" by Bach. Essential References Not Discussed: . Other Strengths And Weaknesses: The paper is hard to follow. This work appears to build on established ideas without introducing a significant departure from prior research. Other Comments Or Suggestions: . Questions For Authors: To my understanding, the paragraph Computational Complexity in Section 4 discusses the complexity of your DCA algorithm. Could you please specify the theoretical guarantees of your DS algorithm as a whole? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address below your comments and questions. --- **1- Explain the claim "The results can be easily extended to unequal $k_i$'s"** The extension to unequal k_i’s follows directly, though the notation becomes more cumbersome. The key modifications are:\ - The relaxed domain (i.e., the domain of the DC program in Eq. (11)) changes to $\prod_{i=1}^n [0,1]\_{\downarrow}^{k_i -1}$.\ - The map $\Theta$ is adjusted to map from $\prod_{i=1}^n \\{0,1\\}\_{\downarrow}^{k_i -1}$ to $\prod_{i=1}^n [0:k_i-1]$, with a similar definition.\ - The continuous extension is now defined on $\prod_{i=1}^n \mathbb{R}^{k_i -1}$, and the summation in Eq. (6) runs from $1$ to $\sum_{i=1}^n (k_i - 1)$. --- **2- Comparison with the DCA variants of El Halabi et al. (2023)** Please refer to our response to **Reviewer 3rHC (3rd item)**, where we include a clear comparison between the DCA variants. --- **3- The paper is hard to follow.** We understand that the technical overhead may be challenging for readers, and we are happy to revise the paper to improve accessibility. Could you clarify which parts were hard to follow and suggest areas where clarity could be improved? --- **4- This work appears to build on established ideas without introducing a significant departure from prior research.** All reviewers seem to agree that the problem we address is interesting and well motivated, and that our results are valuable. We believe that achieving our results by extending existing results in a relatively simple way should not be seen as a weakness. That said, we would like to clarify that while our results build on existing work, they did require novel ideas and proofs. In particular, the main challenges are the following: 1. One factor that makes our results appear “straightforward” is the use of simpler and cleaner notation. 
For example, we represent the input of the continuous extension as a matrix $X\in [0,1]_{\downarrow}^{n \times (k-1)}$, instead of a list of vectors $X = (x_1, \cdots, x_n) \in [0: k-1]^n$ as in Bach and Axelrod. This change significantly simplified the presentation of the results. 2. While the proof that any function on a discrete domain is a DS function (Proposition 3.1) is a straightforward extension of the set function result in (Iyer & Bilmes, 2012), the analogous result for continuous domains (Proposition 3.3) required leveraging an alternative definition of submodularity and relating it to function smoothness. 3. Extending DCA (whether our variant or those in (El Halabi et al., 2023)) from set functions to general discrete functions is non-trivial. It required restricting the non-increasing permutation $(p,q)$ of $X^t$ to be row-stable, to ensure that it's a non-increasing permutation of $\{\Theta^{-1}(y^i)\}\_{i=1}^{(k-1)n}$ too. This is essential to guarantee local minimality (see proof of Theorem 4.5-b). This is not needed in the set function case ($k=2$), where any non-increasing permutation of $X \in \mathbb{R}_\downarrow^{n \times (k-1)}$ is row-stable. 4. Our proposed DCA variant (Algorithm 1) introduces a novel approach for selecting the subgradient using the local search step (lines 3-4), which ensures direct convergence to an approximate local minimum. This improves performance in some settings compared to the variant (extended to general discrete domains) proposed in (El Halabi et al., 2023). For further details, please see our response to **Reviewer 3rHC (3rd item)**. 5. Identifying important applications (Section 3.2) that have natural DS decompositions but are not DC, and do not have easy discrete DC decompositions, was also non-trivial. It required exploiting properties of DC functions (e.g., in Proposition A.1) and discrete DC functions (see Section A.2). 6.
Showing that Lipschitz continuity is not necessary for bounding the discretization error for continuous domains, as in the case of the $\ell_q$-norm (Proposition G.2), is novel and its proof is non-trivial. We believe that this result could be generalized to other similar functions. --- **5- Clarification on Theoretical Guarantees and Computational Complexity of DS Algorithm** Our full DS algorithm is presented in Algorithm 1. Its theoretical guarantees are given in Theorem 4.5, and its computational complexity is discussed in the corresponding paragraph in Section 4.2. The algorithm applies DCA to the DC Problem (11) using the DC decomposition given at the beginning of paragraph "Algorithm", while also maintaining iterates $x^t$ as the solution to the original DS Problem (1). The iterates $X^t \in [0,1]_{\downarrow}^{n \times (k-1)}$ are updated with DCA updates (see Eq. (9)), while iterates $x^t \in [0:k-1]^n$ are obtained by rounding (line 7). As explained, the rounding step can be skipped if the solution of the subproblem $\tilde{X}^t$ is integral.
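As an editorial sketch of the map $\Theta$ between $[0:k-1]^n$ and non-increasing binary matrices that the rebuttals above describe: I assume the standard "thermometer" encoding ($x_i$ leading ones per row), which matches the description given, though the paper's exact definition may differ in details.

```python
# Hedged sketch (not the authors' code) of the encoding between integer
# vectors in [0:k-1]^n and non-increasing binary rows in {0,1}^(k-1).

def theta(rows):
    """Decode each non-increasing binary row to the integer it represents."""
    return [sum(row) for row in rows]

def theta_inv(x, k):
    """Encode x in [0:k-1]^n as rows with x_i leading ones, then zeros."""
    return [[1] * xi + [0] * (k - 1 - xi) for xi in x]
```

Under this encoding, `theta` and `theta_inv` are mutual inverses on their domains, which is the bijection the reduction from discrete domains to the set/lattice setting relies on.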
Summary: This paper considers the minimization of a difference of submodular functions (DS) in both the continuous and discrete domains. Unlike the submodular minimization problem, which can be solved in polynomial time, this problem cannot even be approximated efficiently. This paper accomplishes two main things: (i) It shows how broad the class of DS functions are by proving that many existing functions can be represented this way, although it is computationally hard to find a representation, and (ii) It provides an algorithm and proves that the algorithm returns an approximate local minimum (even finding a true local minimum is computationally hard. This algorithms builds upon the fact that the continuous extension of DS functions via their Lovasz extension is a difference of convex functions for which the DCA algorithm exists. This paper proposes a modified variant of the DCA algorithm. Finally, they include an experimental section where they compare their algorithm to existing alternatives on two different applications. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I did not thoroughly check correctness, but I did not notice any issues. Experimental Designs Or Analyses: Yes, the experiments looked reasonable to me. Supplementary Material: No Relation To Broader Scientific Literature: This paper is related to several existing related works that have studied special cases of this problem, e.g. Narasimhan and Bilmes [2005]. This work is of interest to the submodular optimization community. Essential References Not Discussed: I did not notice any missing essential references. Other Strengths And Weaknesses: Strengths - I think their problem statement is interesting, and even though it's minimization I think it is related to submodular maximization applications as well. 
In particular, I wonder if summarization objectives commonly found in submodular maximization papers where there is a diversity penalty can also be viewed in this sort of problem statement. - The problem is computationally challenging, but they were able to develop an algorithm that finds an approximate local optimum efficiently. This is an interesting type of algorithm, and maybe could be applied more broadly. - Their result is connected with convex optimization, and they build upon the existing DCA algorithm. - They provided an experimental section. Weaknesses - The algorithm may not be extremely novel, but this is fine in my opinion. - Some of the writing, in particular in the introduction, was a bit hard to interpret. E.g. "developed algorithms for this special case, that monotonically decrease the objective value" I did not understand when reading the introduction. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review. We will improve the writing of the introduction.
Summary: The minimization of a difference of submodular functions is studied, in a discrete and continuous setting. The discrete setting is more like a lattice that generalizes set optimization. A variant of the DC algorithm is developed that uses local search ideas. Experiments are performed to validate the algorithm. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check. All proofs are relegated to the appendix or results from other works. Experimental Designs Or Analyses: Yes. See strengths and weaknesses. Supplementary Material: No. Relation To Broader Scientific Literature: Generalizes DS set function optimization to discrete domains of R^n. For continuous domains, there is the work of El Halabi et al. (2023). I'm unsure how the results in this paper are related to the former, although they do cite and discuss. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - Exposition is fairly easy to follow, even for someone unfamiliar with the area. Background and preliminary information goes until page 4. However, see weaknesses. - The problem setup is very general, which makes it difficult. Some natural applications are given, which motivate the work well. - The experimental results are convincing, although perhaps a natural baseline was left out. See weaknesses. Weaknesses: - Although much background information relevant to the problem studied is presented, I found it difficult to get a good idea of what is novel to this paper, and what is challenging about the variant that is studied. Such a good exposition of context is given that it becomes difficult to separate the contribution from the context. I think the algorithm developed (a DCA variant) is interesting. But I don't know enough to really judge its distance from standard DCA. The authors did not explain clearly what is different.
- I didn't understand why the reduction discussed on line 092 for DR submodular functions, could not be extended to the case of DS functions on discrete domains. And then existing techniques from DS set optimization applied. A related issue is the reduction mentioned on line 317. The authors state that this reduction (line 317) is more expensive, but no justification is given in the main text. Also, I don't think the authors compared with this approach experimentally. Other Comments Or Suggestions: - Line 208: funcion -> function - Line 167: us -> use - The writing style is generally fine but it is a little informal. I kind of like that, because it means it wasn't written by an AI. But still, there were multiple examples of incomplete sentences (often starting with "Though"). Though is an informal shortening of although, and starts a subordinate clause that needs to be paired with a main clause. As in: "Although I like cats, I like dogs better." Questions For Authors: - See strengths and weaknesses. - Is the algorithm of El Halabi et al. (2023) compared, either theoretically or empirically? As this is also a variant of DCA, shouldn't it be a natural baseline? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address your questions below. --- **1- Distinguishing novel contributions from background** Our main contributions are outlined in the introduction (lines 61-74, 1st col). The DS minimization problem over general discrete and continuous domains $\mathcal{X}$ (Problem 1) has not been studied before. Prior work only considered special cases, where $F$ is a set function ($\mathcal{X} = \{0,1\}^n$), or is submodular ($H=0$). DS minimization is significantly harder than submodular minimization, which is solvable in polynomial time, whereas even finding a local minimum or a polynomial approximation factor for DS minimization is provably hard. Standard DCA (Eq. 9) is not guaranteed to converge to an approximate local minimum, even for set functions (Section 4.2, lines 324-333). Our DCA variant (Algorithm 1) differs by maintaining iterates for the original problem $x^t \in [0:k-1]^n$, obtained by rounding the iterates of the relaxed problem $X^t \in [0,1]_{\downarrow}^{n \times (k-1)}$, and by carefully choosing a subgradient $Y^t$ which ensures convergence to an approximate local minimum. For further discussion on the technical novelty of our results, see our response to **Reviewer eVr5 (4th item)**. --- **2- Extension of the DR submodular reduction to DS functions & Cost of the submodular set function reduction and empirical comparison** The reduction by Ene & Nguyen (2016) (line 092) applies only to DR-submodular functions, a subclass of submodular functions that are concave along nonnegative directions. For a DS function $F = G - H$, the submodular components $G$ and $H$ are not necessarily DR-submodular. This reduction cannot be readily extended to general submodular functions. It relies on a carefully chosen map $M: \{0,1\}^t \to [0:k-1]^n$, where $t \leq 2 \log k + 1$, to construct an equivalent submodular set function $\tilde{F}(X) = F(M(X))$ whose domain scales with $O(\log k)$.
When $F$ is not DR-submodular, $M$ does not preserve submodularity. We discuss the general reduction for submodular functions (line 317) in Appendix B and compare its empirical performance. Our results show that the reduction approach is indeed slower than the direct approach used by Bach and us. See also our response to **Reviewer Ecvy (6th item)** for a discussion of the theoretical complexity of the two approaches. We will add a brief explanation in the main text. --- **3- Relation to (El Halabi et al., 2023) & Theoretical and empirical comparison with their DCA variants** El Halabi et al. (2023) study a special case of our DS minimization problem where $F$ is a **set** function ($\mathcal{X} = \{0,1\}^n$); they do not consider continuous domains. They proposed two DCA variants: one is infeasible, as it requires trying $O(n)$ subgradients per iteration, while the other (Algorithm 2 therein) restarts from the best neighboring point at convergence if the solution is not an approximate local minimum. Both can be extended to general discrete domains similarly to our DCA variant. The main non-trivial change is the restriction of the permutation $(p,q)$ to be row-stable. Our DCA variant (Algorithm 1) instead selects a single subgradient using a local search step (lines 3-4), ensuring direct convergence to an approximate local minimum without restarts. Our theoretical guarantees (Theorem 4.5) generalize those of (El Halabi et al. 2023) to general discrete domains, recovering the same guarantees in the set function case. We empirically compared our DCA variant (DCA-LS) to an extension of the more efficient DCA variant from El Halabi et al. (2023) (DCA-Restart) on all experiments included in the paper. We report their performance on integer least squares (ILS) in [Figure 5](https://tinyurl.com/Rebuttal-Fig5) and running times in [Figure 6](https://tinyurl.com/Rebuttal-Fig6).
Similarly, [Figure 7](https://tinyurl.com/Rebuttal-Fig7) and [Figure 8](https://tinyurl.com/Rebuttal-Fig8) show their performance and running times on integer compressive sensing (ICS). We plot running times for all $\lambda$ values at $m/n = 0.5$ and $0.2$. DCA-LS matches or outperforms DCA-Restart on all experiments. The two variants perform similarly when initialized with a good solution (LASSO in ICS, RAR in ILS); otherwise, DCA-LS performs better, sometimes by a large margin. In terms of runtime, DCA-Restart is faster on ILS and for some $m/n$ values in ICS, e.g., $m/n=0.5$, but slower for others, e.g., $m/n=0.2$. Thus, the choice between the two variants is problem-dependent. We will revise the paper to include these results and adjust the claim on lines 343-344 (1st col), which originally stated that our DCA variant is more efficient than those in (El Halabi et al. 2023). This is only true for one of them. This claim was based on an earlier, less extensive comparison on an ICS experiment, where DCA-LS was both faster and performed better than DCA-Restart.
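To make the DCA scheme discussed in this thread concrete, here is a toy sketch of DCA for minimizing a DS set function $F = G - H$ (our own illustration, not the paper's Algorithm 1): at each step, $H$ is replaced by the permutation-based modular lower bound induced by a subgradient of its Lovász extension, and the resulting surrogate is minimized by brute force on a tiny ground set (standing in for a proper submodular minimization oracle).

```python
import math
from itertools import combinations

def subsets(n):
    for r in range(n + 1):
        yield from (frozenset(c) for c in combinations(range(n), r))

def modular_lower_bound(H, n, S_t):
    # Greedy/permutation subgradient of the Lovász extension of H at the
    # indicator of S_t (elements of S_t ordered first): gives a modular y
    # with H(S) >= H(S_t) + y(S) - y(S_t) for all S when H is submodular.
    order = sorted(range(n), key=lambda j: j not in S_t)
    y, prev, prev_val = {}, frozenset(), H(frozenset())
    for j in order:
        cur = prev | {j}
        cur_val = H(cur)
        y[j] = cur_val - prev_val
        prev, prev_val = cur, cur_val
    return y

def dca_ds_min(G, H, n, S0, max_iters=50):
    # Toy DCA for min_S G(S) - H(S): linearize H from below at S_t, then
    # minimize the surrogate G(S) - y(S) by brute force over all subsets.
    # The objective value is non-increasing across iterations.
    S = S0
    for _ in range(max_iters):
        y = modular_lower_bound(H, n, S)
        S_next = min(subsets(n), key=lambda T: G(T) - sum(y[j] for j in T))
        if G(S_next) - H(S_next) >= G(S) - H(S):
            break
        S = S_next
    return S

# Toy DS objective: F(S) = |S| - 2*sqrt(|S|), minimized at |S| = 1.
G = lambda T: len(T)
H = lambda T: 2 * math.sqrt(len(T))
S_star = dca_ds_min(G, H, n=4, S0=frozenset())
```

On this toy instance DCA happens to reach the global minimum (a singleton with $F = -1$); in general, as the thread discusses, only convergence to an (approximate) local minimum can be guaranteed.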
Summary: The paper investigates the minimization of difference-of-submodular (DS) functions over both discrete (products of finite sets) and continuous (products of intervals) domains. The authors establish that every function on a discrete domain and every smooth function on a continuous domain admits a DS decomposition. For DS minimization over discrete domains, they show that the problem can be reduced to the case of an integer lattice domain. In the continuous setting, they discretize the problem and then approximately reduce it to the integer lattice case. The key result is that DS minimization on an integer lattice is equivalent to minimizing a continuous extension (generalizing the Lovász extension), enabling the use of a difference-of-convex (DC) algorithm. This algorithm monotonically decreases function values and converges to a local minimum. The authors validate their approach with experiments on integer least squares and integer compressive sensing, demonstrating improved performance over state-of-the-art methods. Claims And Evidence: The main claims of the paper are: - Any function on a discrete domain and any smooth function on a continuous domain can be expressed as a DS function. - One can (approximately, in the continuous case) reduce the problem to an integer lattice domain. - Minimizing a DS decomposition over an integer lattice domain is equivalent to minimizing a continuous extension. - The algorithm computes an approximate local minimum. - Applying a DC algorithm to integer least squares and integer compressive sensing improves performance over existing methods. The first three claims are theoretical and well-supported. The paper also discusses the computational complexity and the theoretical guarantees of the algorithm’s output, which appear to be correct. The experiments further support the empirical claims.
Below are some minor issues: - It is not entirely clear whether the theoretical guarantees extend to a DS function over a continuous domain after discretization. While the single results put together suggest this should be the case, the paper would benefit from a formal theorem stating: The algorithm finds a local optimum with guarantee G in time T for the class of DS functions C. - The paper asserts that finding the best DS decomposition is generally infeasible, but it does not define what constitutes the "best" decomposition, nor does it provide strong support for this claim. The authors could strengthen this point by referencing literature. - After Proposition 3.1, the paper states: "Obtaining tight lower bounds α and β in the above proof requires exponential time in general" and "Note that loose bounds on α and β would degrade performance." Both claims lack supporting argumentation. A brief explanation or reference would help clarify these statements. - On line 241 (right), the sentence "We show in Appendix A.2 that both applications do not have a natural discrete DC decomposition." seems overly strong, as the authors only prove that a specific decomposition is not DC. A more precise formulation would improve accuracy. Methods And Evaluation Criteria: The chosen problems (integer least squares and integer compressive sensing) to evaluate the performance of the proposed algorithm and the selected metrics (recovery probability and estimation error) appear reasonable. The competing algorithms seem to represent the state of the art, but I am not deeply familiar with the field or these specific methods. Theoretical Claims: The theoretical results appear sound and well-supported, and the proof sketches are reasonable. However, I did not verify all the proofs in detail. Experimental Designs Or Analyses: I read the experimental setup in the main paper and it appeared sound to me. 
But I did not validate any technical details, and I am an expert in neither the problems nor the methods. Supplementary Material: I read through the appendix, but did not examine the technical details closely. I reviewed Appendices A and B more carefully. Relation To Broader Scientific Literature: The paper is well-grounded in the existing literature and acknowledges prior results and ideas upon which it builds. Essential References Not Discussed: As discussed in the Claims and Evidence section, the paper asserts that finding the best DS decomposition is generally infeasible, but it does not connect this claim to existing literature. The authors could reference works such as Decomposition Polyhedra of Piecewise Linear Functions (https://arxiv.org/abs/2410.04907), which explores the complexity of DS and DC decompositions. Other Strengths And Weaknesses: I appreciate the idea of relating general DS functions to a DC function, and it is interesting that this approach seems to work better in practice than reducing the problem to the set function case and minimizing the Lovász extensions via a DC algorithm, as illustrated in the appendix. This, together with the experiments showing better performance than existing methods for DS functions, makes the paper a valuable contribution. From a theoretical perspective, the results don’t seem particularly profound, as they appear to be a straightforward extension of existing knowledge. Other Comments Or Suggestions: There are statements in the main paper where the proofs are deferred to the appendix, but no links are provided to direct the reader to the relevant proofs. As a result, readers must search through the appendix to find the proofs, even for key results such as Propositions 4.4 and 4.5. Including hyperlinks would make the paper more reader-friendly. Further suggestions: - It would be helpful to define what a subgradient is.
- For the definition of round, it could be clarified that the goal is to obtain the vector, not the index. This is only clear from the context, but the definition itself is somewhat ambiguous. - Providing some intuition and perhaps including a small illustration of an epsilon-subdifferential would be beneficial. - A sentence or two following Proposition 2.5, explaining the implications of the statement on the theoretical guarantees of the algorithm, would make it easier to understand its significance. Questions For Authors: - Can you think of any theoretic justification why the attempt of translating the DS problem directly to a DC problem works better than reducing first to the set function case? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive review and helpful feedback. We address below your comments and questions. --- **1 - Extension of theoretical guarantees to continuous domains** Theorem 4.5 extends to continuous domains as follows:\ Let $F'$ be defined as in Section 4.1, i.e., $F'(x) = F(x/(k-1))$ where $k = \lceil L/\epsilon' \rceil + 1$ for some $\epsilon'>0$. Let $\tilde{x}^t = x^t/(k-1)$, where $x^t$ are Algorithm 1's iterates for $F'$. Then:\ a) $F(\tilde{x}^{t+1}) \leq F(\tilde{x}^t) + \epsilon_x$\ b) At convergence: $F(\tilde{x}^t) \leq F(y^i/(k-1)) + \epsilon + \epsilon_x$ for all $i$. In particular, $F(\tilde{x}^t) \leq F(\tilde{x}^t \pm e_i/(k-1)) + \epsilon + \epsilon_x$.\ c) The number of iterations is at most $(F(\tilde{x}^0) - F^*)/\epsilon$. \ However, the second guarantee in (b) is meaningful only if $\epsilon' > \epsilon + \epsilon_x$, as Lipschitz continuity of $F$ already implies $F(\tilde{x}^t) \leq F(\tilde{x}^t \pm \tfrac{e_i}{k-1}) + \epsilon'$. We will clarify this in the paper. --- **2- Define "best" DS decomposition and clarify its infeasibility** By the best DS decomposition, we meant the one with the tightest $\alpha$ or $L_F$ in the proofs of Propositions 3.1 and 3.3. On lines 243-245 (1st col), we cite a reference showing that computing a tight $L_F$ is exponentially hard. Computing the tightest $\alpha$ also has exponential complexity, even for set functions. Indeed, we can test if $F$ is submodular, which has exponential complexity (Seshadhri & Vondrák, 2014), by computing $\alpha = \min_{i, j \in V, S \subseteq V \setminus \{i, j\}} F(S \cup i) - F(S) - F(S \cup \{i, j\}) + F(S \cup j)$ and checking if $\alpha \geq 0$. We will clarify this in the revision. Like DC functions, DS functions have infinitely many DS decompositions. Finding the "best" one is an even more difficult question, as it's unclear how to define "best". Thank you for the suggested reference; we will cite it. Seshadhri, C., & Vondrák, J. (2014).
“Is submodularity testable?” Algorithmica, 69(1), 1-25. --- **3- Impact of loose bounds on $\alpha$ and $\beta$** Looser bounds on $\alpha$ and $\beta$ lead to a DS decomposition where the continuous extensions of $G$ and $H$ have larger Lipschitz constants, slowing down optimization. As discussed in Section 4.2 ("Computational complexity"), the runtime of Algorithm 1, particularly for solving the submodular minimization at each iteration $t$, depends on the Lipschitz constant* $L_{f^t_\downarrow}$ of $f^t_\downarrow$, the continuous extension of $F^t = G - H^t = F + \frac{\alpha}{\beta} \tilde{H} - \tilde{H}^t$. Here, $\tilde{H}^t(x) = \frac{\beta}{\alpha} H^t(x)$ is a modular approximation of $\tilde{H}(x)$ satisfying $\tilde{H}(x) \geq \tilde{H}(x^t) + \tilde{H}^t(x) - \tilde{H}^t(x^t)$. We can lower bound $L_{f^t\_\downarrow}$ by the Lipschitz constant of $F^t$: $L\_{f^t_\downarrow} \geq \max\_{x, x'} \frac{|F^t(x) - F^t(x')|}{\\|x - x'\\|\_1}$. For example, for a non-decreasing function $F$ and $x \leq x'$, taking $x' = x^t$ gives $|F^t(x) - F^t(x')| = |F(x) - F(x^t)| + \frac{|\alpha|}{\beta} |(\tilde{H}^t(x) - \tilde{H}^t(x^t)) - (\tilde{H}(x) - \tilde{H}(x^t))|.$ Thus, a larger $\frac{|\alpha|}{\beta}$ yields a larger $L_{f^t_\downarrow}$. We will clarify this in our revision. *Typo on line 373: $L_{F^t_\downarrow} \to L_{f^t_\downarrow}$ --- **4- Revise the sentence on line 241 (right) to be more accurate.** We will revise the sentence to: "We show in Appendix A.2 that the natural DS decomposition in both applications is not a discrete DC decomposition as defined in (Maehara & Murota, 2015) and cannot be easily adapted into one for general discrete domains, even when ignoring the integer-valued restriction." --- **5- Theoretical results seem straightforward extensions of existing work** See our response to **Reviewer eVr5 (4th item)**, where we clarify the novel aspects of our theoretical results.
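As a concrete illustration of the exponential cost noted in item 2 above, the tightest $\alpha$ for a set function can only be obtained by brute force over all pairs $(i, j)$ and all $S \subseteq V \setminus \{i, j\}$. A small sketch with toy set functions of our own choosing (not from the paper):

```python
from itertools import combinations

def tightest_alpha(F, n):
    # alpha = min over i != j and S subset of V \ {i, j} of
    #   F(S + i) - F(S) - F(S + {i, j}) + F(S + j);
    # F is submodular iff alpha >= 0. Cost is exponential in n.
    best = float("inf")
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rest = [k for k in range(n) if k not in (i, j)]
            for r in range(len(rest) + 1):
                for S in combinations(rest, r):
                    S = frozenset(S)
                    val = (F(S | {i}) - F(S)
                           - F(S | {i, j}) + F(S | {j}))
                    best = min(best, val)
    return best

# F(S) = |S|^2 is supermodular: every quadruple evaluates to -2.
alpha_super = tightest_alpha(lambda S: len(S) ** 2, 4)
# F(S) = sqrt(|S|) is submodular, so the tightest alpha is positive here.
alpha_sub = tightest_alpha(lambda S: len(S) ** 0.5, 4)
```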
--- **6- Why is the direct approach better than set function reduction?** As discussed in (Bach, 2019, section 4.4), reducing a submodular function $F$ to a submodular set function $\tilde{F}$ leads to slower optimization due to the larger Lipschitz constant $L_{\tilde{f}\_\downarrow}$ of the continuous extension $\tilde{f}\_\downarrow$ of $\tilde{F}$. Specifically, while we can bound $L_{f\_\downarrow} \leq \sqrt{n k} \max\_i B_i$, the bound for $\tilde{f}\_\downarrow$ is $k$ times larger, i.e., $L_{\tilde{f}\_\downarrow} \leq k \sqrt{n k} \max\_i B_i$, where $B_i$ is defined in Eq. (26) in Appendix B. This affects submodular minimization algorithms in both (Bach, 2019) and (Axelrod et al., 2020), as their complexity scales with the Lipschitz constant. For example, Bach's algorithms have complexity $\tilde{O}((\frac{n k L_{f_\downarrow}}{\epsilon})^2 \textnormal{EO}_F)$ when applied directly to $F$. Using the reduction to $\tilde{F}$, this increases by an $O(k^2)$ factor. --- Thank you for your other suggestions. We will incorporate them in our revision.
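For the set-function special case, the continuous extension $f_\downarrow$ discussed in this thread reduces to the classical Lovász extension, which can be evaluated by sorting coordinates and paying greedy marginal gains. A minimal sketch of that classical construction, assuming $F(\emptyset) = 0$:

```python
def lovasz_extension(F, x):
    # Evaluate the Lovász extension of a set function F at x in [0, 1]^n:
    # visit coordinates in decreasing order of x and accumulate
    # x_j * (F(S + j) - F(S)) along the induced chain of sets.
    n = len(x)
    val, S, prev = 0.0, frozenset(), 0.0  # assumes F(empty set) = 0
    for j in sorted(range(n), key=lambda j: -x[j]):
        S = S | {j}
        cur = F(S)
        val += x[j] * (cur - prev)
        prev = cur
    return val

# On an indicator vector the extension agrees with F itself:
F = lambda S: len(S) ** 0.5
v = lovasz_extension(F, [1.0, 0.0, 1.0])  # equals F({0, 2}) = sqrt(2)
```

The extension is convex exactly when $F$ is submodular, which is what lets DCA treat $G - H$ through its convex extensions.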
Learning-Order Autoregressive Models with Application to Molecular Graph Generation
Accept (poster)
Summary: This paper introduces a method for learning the order in which discrete elements from a masked set should be generated. The authors assume the generation process begins with a set of masked elements and, in addition to predicting the unmasked element, the model learns a policy which dictates the order in which elements should be unmasked from the set. The method uses a variational approach and shows how the previously proposed REINFORCE leave-one-out estimator can be used to backprop gradients to the variational distribution network. The authors focus the application of their method on molecule generation using two commonly used datasets and also show how the method can be applied to generate MNIST images. # Update after rebuttal I thank the authors for answering my questions and providing additional experimental results. I remain unconvinced that molecule generation is the right application for this method, and I think the additional sampling steps required vs. SMILES models remains a problem. However, I think the method is interesting, well-supported and could be useful for other applications, so I will increase my score. Claims And Evidence: The authors' claims are well supported by their evaluation results; however, please see below for shortcomings with the evaluation. Methods And Evaluation Criteria: - For molecule generation tasks the authors should also report the diversity (e.g., often defined using Tanimoto similarity of the fingerprints) and novelty of the generated molecules. This is to help ensure that the model is not sampling only from a restricted part of chemical space. - Since the validity and uniqueness metrics are fairly saturated by existing methods, and the FCD on its own is not a strong enough predictor of molecule quality, I think it would be important to report metrics which attempt to measure distances to the data distribution. For example, comparing the QED, synthetic accessibility, number of aromatic rings, etc.
from a set of generated molecules to the training distribution. Ultimately the goal of the generative model is to sample from the same chemical space that the training data comes from. - In order to make the evaluation more practically meaningful, the authors should report baseline results for a SMILES-based transformer model (e.g., Molformer, Chemformer or Molecular Transformer would all be simple starting points), using both canonical and SMILES augmented orderings. This type of model is very commonly used in practice and is a very important baseline for autoregressive molecule generation. Theoretical Claims: The theoretical claims of the paper are well supported and the method derivation is very clearly presented. Experimental Designs Or Analyses: The experiments are well designed and follow previous work in the area. Supplementary Material: Yes Relation To Broader Scientific Literature: The paper introduces a novel method for generating graph-structured data (or any other data without a specific ordering). The method is related to masked discrete diffusion and flow-matching methods but introduces a variational framework and a different training strategy. The method produces strong results on graph generation and the authors show example generation traces and provide some intuition for the decisions made by the model. However, I think some important baselines have been missed - see above for evaluation and below for additional diffusion and FM graph generation models. Essential References Not Discussed: - DeFoG [1] and GruM [2] have both recently been proposed as graph generative models with strong performance on molecule generation using flow-matching and diffusion, respectively. [1] https://arxiv.org/abs/2410.04263 [2] https://arxiv.org/abs/2302.03596 Other Strengths And Weaknesses: - Is each component of the graph (i.e., nodes and edges) sampled one at a time? Meaning that for a graph with $n$ nodes the method requires $n^2 + n$ sampling steps?
This seems like a major limitation in comparison to existing methods for autoregressive molecule generation where the length of the sequence is usually only slightly larger than $n$. It would also become a significant bottleneck for scaling the LO-ARM method to larger graphs. - I think the overall method for learning the order in which items are generated is an interesting idea and a potentially useful framework for other applications; however, I am not convinced that molecule generation is the right application for this. A number of very strong baselines exist for molecule generation and would likely have significantly faster sampling times than the proposed method. Additionally, once a starting point has been defined, the generation order of molecules could be constrained relatively easily by only unmasking atoms which are connected to already unmasked atoms, significantly reducing the space of permutations and making learning the ordering less important. Other Comments Or Suggestions: - Should the equation for $q_{\theta}$ at the start of page 6 have a $\mathbf{x}$ instead of $\mathbf{x_{z_{\lt i}}}$? Questions For Authors: - I am not really clear on how you achieve single-step generation for $\mathbf{z}$. Is each component of $\mathbf{z}$ sampled autoregressively, or are all sampled at the same time? Code Of Conduct: Affirmed. Overall Recommendation: 3
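For context on the REINFORCE leave-one-out (RLOO) estimator mentioned in the summary: for each of $K$ samples it uses the mean reward of the other $K-1$ samples as a baseline, giving an unbiased, lower-variance score-function gradient. A toy sketch for a categorical policy (hypothetical names and a toy reward, not the paper's implementation):

```python
import math
import random

def rloo_gradient(logits, reward, k=8, rng=random):
    # Score-function gradient of E[reward(x)] for a softmax/categorical
    # policy, with leave-one-out baseline b_i = mean_{j != i} reward(x_j).
    # Requires k >= 2 samples.
    z = [math.exp(l) for l in logits]
    total = sum(z)
    probs = [p / total for p in z]
    xs = rng.choices(range(len(probs)), weights=probs, k=k)
    rs = [reward(x) for x in xs]
    grad = [0.0] * len(logits)
    for i, x in enumerate(xs):
        baseline = (sum(rs) - rs[i]) / (k - 1)
        for c in range(len(logits)):
            # d log p(x) / d logit_c = 1[c == x] - probs[c]
            score = (1.0 if c == x else 0.0) - probs[c]
            grad[c] += (rs[i] - baseline) * score / k
    return grad
```

With a constant reward, every advantage $R_i - b_i$ is exactly zero, so the estimate vanishes identically; this variance-reduction property is what makes RLOO practical for training an order policy.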
Rebuttal 1: Rebuttal: We extend our sincere gratitude to the reviewer for their expert insights and valuable suggestions regarding the incorporation of chemistry-specific metrics to enhance our evaluation. We address their points below. ## 1. Report results against chemistry-specific metrics We evaluated our best-performing models, trained on the ZINC250k dataset, using the Synthetic Accessibility Score (SAS), Quantitative Estimation of Drug-likeness (QED), diversity, and novelty metrics. Diversity was measured by calculating the pairwise Tanimoto similarity within a set of generated molecules. The table below summarizes the results, including baselines and measurements on ground truth data, and more detailed visual comparisons of the metric distributions between our models and the ground truth data are available at [1]. As the results indicate, the distribution of these metrics in the samples generated by LO-ARM closely resembles that of the ground truth data.

| Mean | QED | SAS | Diversity | Novelty |
|--------------|----------|----------|-----------|---------|
| Ground truth | 0.75 | 2.76 | 0.35 | - |
| JT-VAE | **0.76** | 3.37 | - | 100.0 |
| GCPN | 0.61 | 4.62 | - | 100.0 |
| MolecularRNN | 0.68 | 3.59 | - | 100.0 |
| LO-ARM | **0.75** | **3.08** | 0.34 | 100.0 |

## 2. Add baselines of SMILES-based transformers Consistent with our baselines (e.g., DiGress and CatFlow), we focus on the graph representation of molecules in this work. For the reviewer’s interest, we included a SMILES-based baseline in our GuacaMol results (please see our reply to Reviewer Pa4s). Moreover, LO-ARM is agnostic to data representations and could also be applied to SMILES strings. As the reviewer suggests, SMILES would be more practically useful, and adapting LO-ARM to SMILES generation is a promising direction for our future work. ## 3.
Time complexity of inference Regarding the cost of sampling, we first note that we can skip the edge sampling stage if the order policy determines it as a no-edge dimension. Second, the learned ordering within LO-ARM effectively separates the generation of non-existent bonds (no-edges) from atoms and real bonds. This separation allows us to generate all no-edges in a single inference step without compromising the chemical validity of the generated molecules, substantially reducing inference overhead. We present a comparison of generic and ordering-informed sampling approaches evaluated on the ZINC250k dataset below.

| | Validity | Uniqueness | FCD | Avg. sampling steps |
|-------------------|----------|-----------|-------|---------------------|
| Generic sampling | 96.26 | 100. | 3.229 | 330.7 |
| Ordering-informed | 96.13 | 100. | 3.319 | 48.8 |

We plan to further explore this learned ordering to scale up inference in future work. A possible yet more principled direction could be leveraging the equivalence between random-order ARMs and masked diffusion models to enable parallel generation. Our preliminary experiment shows that we can obtain 90.0% validity in half the number of steps using the masked diffusion formulation of our model. ## 4. Suitability of learning ordering for molecular graph generation While generation order constraints reduce the permutation space, our results show a clear benefit to learning a data-dependent order. For example, LO-ARM significantly outperforms GraphARM [2], which unmasks a node and its edges in one step, across all metrics. To ensure a fair comparison with our baseline models, we have only evaluated unconditional generation. To further tailor LO-ARM for molecular graph generation, one could integrate inductive biases into the backward sampling process during training, which we also recognize as a promising avenue for future development of LO-ARM. ## 5.
Should the equation for $q_{\theta}$ at the start of page 6 have a $\mathbf{x}$ instead of $\mathbf{x}_{z_{<i}}$? Yes, this is a typo; thanks for examining our work thoroughly. ## 6. Is each component of $\mathbf{z}$ sampled autoregressively, or are all sampled at the same time? To clarify: $\mathbf{z}$ is sampled autoregressively via the order-policy for generation, but for the variational distribution during training, all its components are generated in one pass by the q-network using the Gumbel top-k trick [3]. ## 7. Enrich essential references We will update the Related Work section with the references you’ve suggested in the final version of the paper. We hope that our responses have addressed your concerns and questions. If so, we would be grateful if you would reconsider your decision in light of our clarifications and update your recommendation score accordingly. We remain available and eager to address any further concerns you may have. [1] Chemistry-specific metrics: https://drive.google.com/file/d/1miSlh2vYYRfqHJ-e5QmYFmht5xMQcqEO/view?usp=sharing [2] https://arxiv.org/pdf/2307.08849 [3] https://arxiv.org/pdf/1903.06059
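For context, the Gumbel top-$k$ trick referenced in item 6 perturbs each log-probability with independent Gumbel(0, 1) noise and keeps the $k$ largest indices, which yields $k$ distinct items sampled without replacement from the corresponding Plackett-Luce distribution in a single parallel pass. A minimal sketch (our own toy version, not the paper's q-network):

```python
import math
import random

def gumbel_topk(log_probs, k, rng=random):
    # Gumbel(0, 1) noise via inverse CDF: g = -log(-log(u)); u is clamped
    # away from {0, 1} to avoid log-of-zero issues.
    keys = [lp - math.log(-math.log(rng.uniform(1e-12, 1.0 - 1e-12)))
            for lp in log_probs]
    # Indices of the k largest perturbed values, in decreasing order:
    # a sample of k items without replacement.
    return sorted(range(len(log_probs)), key=lambda i: -keys[i])[:k]

# With equal log-probabilities this draws a uniformly random ordering:
order = gumbel_topk([0.0] * 5, 5)
```

Taking $k = n$ samples a full generation order in one shot, which is how a one-pass network output can parameterize a distribution over orderings.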
Summary: This paper addresses a fundamental limitation in Autoregressive Models (ARMs): the assumption of a fixed generation order, which may not be optimal for complex data types like graphs. The authors introduce Learning-Order Autoregressive Models (LO-ARMs), a novel approach where the model learns a context-dependent order for sequential data generation rather than relying on a predefined or random order. This paper presents a major advancement in autoregressive modeling, showing that learning an optimal generation order improves sample efficiency and generation quality. Claims And Evidence: The majority of the claims in the paper are supported by well-structured evaluations. Methods And Evaluation Criteria: The proposed methods and evaluation criteria mostly align well with the problem of learning dynamic generation orderings in autoregressive models, especially for molecular graph generation. However, some aspects of the methodology and evaluation could be further discussed. The paper states, "LO-ARM generalizes across different molecular graphs" (p. 7), but only evaluates on QM9 and ZINC250K, both of which contain relatively simple organic molecules. Datasets with larger bioactive molecules (e.g., ChEMBL) would be helpful to check the ability of the model on more complex molecular graphs. Theoretical Claims: No issues. Experimental Designs Or Analyses: All good Supplementary Material: No Relation To Broader Scientific Literature: The key contributions of this paper build upon and extend several established areas in machine learning, particularly autoregressive models (ARMs), variational inference, graph generation, and molecular generative modeling. Essential References Not Discussed: No Other Strengths And Weaknesses: See above Other Comments Or Suggestions: The bolding in the tables is not clear: in Table 1, it looks like the authors bold both the best result among the other methods and the result of their proposed method.
But in Table 2, the best result of the proposed method is not bolded. It would be great to keep them consistent. Also, please consider citing survey papers on molecule/graph generation. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive feedback on the quality of our work. We now address each of your concerns below. ## 1. Preliminary results on larger bioactive molecule dataset (ChEMBL) To address your concern regarding the scalability of LO-ARM to larger datasets, we include a preliminary result on the ChEMBL dataset (also known as the GuacaMol benchmark [1]) without hyperparameter tuning. Specifically, we preprocessed this dataset using the utility provided in [2], a method consistently employed in our benchmarks [2, 3]. We have summarized the statistics of the three datasets used in our evaluation (namely QM9, ZINC250k, and ChEMBL/GuacaMol) below. We believe this inclusion provides a more comprehensive assessment of LO-ARM's capabilities.

| | #samples | #nodes | #node types | Input dims |
|----------|----------|------------------|-------------|------------|
| QM9 | 130k | 1 <= \|V\| <= 9 | 4 | 90 |
| ZINC250k | 250k | 6 <= \|V\| <= 38 | 9 | 1482 |
| GuacaMol | 1.2M | 2 <= \|V\| <= 62 | 12 | 3906 |

Our preliminary results show that LO-ARM still exceeds or matches the performance of the current state-of-the-art models.

| | Input data | Validity | Uniqueness | Novelty | Raw FCD |
|---------------|--------|----------|------------|---------|----------|
| LSTM | SMILES | **95.9** | **100.0** | 91.2 | **0.46** |
| MCTS | Graph | 92.9 | 95.5 | 100.0 | 21.00 |
| DiGress | Graph | 85.2 | 100.0 | 99.9 | 1.92 |
| LO-ARM (ours) | Graph | *94.2* | **100.0** | **100.0** | 3.73 |

To ensure consistency in our evaluation across the QM9 and ZINC250k datasets (as is common in the literature), we have converted the reported FCD scores from prior work, which are often presented on an exponential scale, back to their raw FCD values. The LO-ARM model trained on the GuacaMol dataset exhibits a preference for an atom-first generation order.
This means it tends to generate atoms first, followed by the real bonds connecting them, and lastly it fills in the non-existent or "imaginary" bonds. We illustrate an example of this generation process in the accompanying figure [4]. This learned ordering contrasts with the edge-first ordering observed in models trained on the ZINC250k and QM9 datasets. This difference likely arises because generating node-related dimensions first is potentially simpler for the GuacaMol samples, given that the number of edge dimensions increases quadratically with the number of nodes. Our preliminary findings indicate that LO-ARM surpasses other graph-based methods in terms of validity and uniqueness, achieving performance levels close to the state-of-the-art SMILES-based approaches. However, we have observed a small gap in FCD compared to DiGress. We hypothesize that this difference arises because, for datasets with higher dimensionality, a generation strategy operating at a coarser granularity (e.g., dimension blocks) than individual dimensions might be more effective. For example, tokenizing molecules into fragments, as discussed in our Discussion section, could potentially improve performance. We defer the integration of domain-specific tokenization into LO-ARM to future investigation.

## 2. Some aspects of the methodology and evaluation could be further discussed

In addition to these new results on the GuacaMol experiments, we have provided an in-depth analysis of training stability with REINFORCE in the thread of Reviewer Stjb.

## 3. Improve the presentation of the main tables and enrich related work

Thank you for your thorough review and your advice; we will refine the main tables and reference the surveys of graph generation in the final version. We hope that our responses have addressed your primary concerns regarding the scalability of our algorithm.
If so, we respectfully request that the reviewer reconsider their decision in light of our responses and update their recommendation score accordingly. We remain available and eager to address any further concerns the reviewer may have.

[1] https://arxiv.org/abs/1811.09621
[2] https://arxiv.org/abs/2209.14734
[3] https://arxiv.org/abs/2406.04843
[4] Generation trajectory of GuacaMol sample: https://drive.google.com/file/d/1a5HU4FZ98bqS_9JFK4Eb60QfhXiksaue/view?usp=drive_link

--- Rebuttal Comment 1.1: Comment: Thanks to the authors for adding the dataset and additional experiments. I have no further concerns.
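As an aside for readers, the exponential-to-raw FCD conversion mentioned in the rebuttal above can be sketched in a few lines. The convention S = exp(-0.2 * FCD) is my assumption about how the GuacaMol benchmark rescales FCD into a score in (0, 1]; it should be checked against the benchmark's actual implementation before reuse.

```python
import math

def guacamol_score_from_raw_fcd(fcd: float) -> float:
    """Forward map (assumed GuacaMol convention): raw FCD -> score in (0, 1]."""
    return math.exp(-0.2 * fcd)

def raw_fcd_from_guacamol_score(score: float) -> float:
    """Invert the assumed score S = exp(-0.2 * FCD) back to the raw
    Frechet ChemNet Distance: FCD = -5 * ln(S)."""
    return -5.0 * math.log(score)
```

Under this assumption the two maps round-trip exactly, e.g. a raw FCD of 3.73 maps to a score and back without loss.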
Summary: This paper proposes a method to learn an optimal generation order for autoregressive models in data domains that do not possess a natural canonical ordering (e.g., graphs or images). By framing the ordering itself as a latent variable with a dynamic, learnable distribution (the “order-policy”), the authors unify the ideas of AO-ARMs with variational inference. They demonstrate that a variational bound on the log-likelihood can be optimized by sampling permutations from a “posterior” distribution and matching them to the model’s own “order-policy”. Empirically, the method achieves competitive results on two molecular generation datasets, QM9 and ZINC250k. The paper’s thorough ablations highlight design trade-offs for parameterizing the order-policy, and the final approach shows consistent improvements over both uniform and biased AO-ARMs. Claims And Evidence: The main claims of the paper are: 1. By introducing a trainable policy over generation order, the autoregressive model avoids the limitations of either a single fixed order or a uniformly random permutation of dimensions. 2. The authors propose a variational approach, using REINFORCE with a leave-one-out baseline to reduce gradient variance. They argue that this is practical enough to handle the large combinatorial space of permutations. These claims are generally supported by the quantitative comparisons. My concerns are: 1. the convergence of REINFORCE-based training is not thoroughly studied in the analysis section; 2. the performance gains on the two datasets (QM9 and ZINC250k) are very modest, as the baselines have already achieved very high results. Methods And Evaluation Criteria: The authors follow common practice in generative modeling by: evaluating sample quality on the standard Validity/Uniqueness metrics for molecules, and measuring distributional similarity with FCD, a recognized measure for drug-like molecules.
Although the results are strong, the paper focuses primarily on mid-scale tasks (QM9, ZINC). It would be beneficial to demonstrate that the approach still offers consistent improvements on small-scale benchmarks [1,2,3], as the baseline results on these benchmarks are less optimal compared to QM9 and ZINC. [1] FreeSolv: a database of experimental and calculated hydration free energies. [2] In silico evaluation of logD7.4 and comparison with other prediction methods. [3] The Harvard organic photovoltaic dataset. Theoretical Claims: The derivation of the method as a variational ARM with a learned ordering distribution is a clean extension of the standard AO-ARM framework. The theoretical foundation seems sound: the ordering z serves as a hidden latent variable for the distribution of the data x, so standard variational inference can play a role here in optimizing the joint probability distribution. Experimental Designs Or Analyses: The experiments on molecular generation are thorough, with ablations revealing the roles of shared vs. separate neural networks for the order-policy and the choice of top-p sampling strategies. The authors also visualize generation paths for example molecules, giving a nuanced look at how the dynamic ordering policy emerges in practice. However, concerns remain about REINFORCE’s notorious variance. While the authors do implement a leave-one-out method to reduce variance, readers may question the stability of training at larger scales or on smaller datasets with fewer training points (like FreeSolv or Lipophilicity). An in-depth discussion or demonstration of how many epochs/hyperparameter adjustments were needed would be welcome. Supplementary Material: No supp provided. Relation To Broader Scientific Literature: The paper situates its method well with respect to AO-ARMs, masked discrete diffusion, and existing graph-generation methods. It clarifies that past approaches either fixed or uniformly randomized the ordering.
Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: 1. Have you measured or visualized the gradient variance over the course of training? Are there scenarios (like larger, more complex data) where you suspect REINFORCE might fail to converge easily? 2. Do you have preliminary results on smaller tasks (e.g., FreeSolv, Lipophilicity, HOPV) that confirm the order-policy still yields an advantage? Code Of Conduct: Affirmed. Overall Recommendation: 3
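For concreteness, the top-p (nucleus) sampling strategy that the review's ablation discussion refers to can be sketched as below over a categorical order-policy distribution. This is a generic implementation of the standard technique, not the authors' code.

```python
import numpy as np

def top_p_sample(probs: np.ndarray, p: float, rng: np.random.Generator) -> int:
    """Sample an index from the smallest set of outcomes whose cumulative
    probability reaches p (nucleus sampling), renormalized within that set."""
    order = np.argsort(probs)[::-1]            # indices by descending probability
    cdf = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cdf, p)) + 1  # keep just enough mass to cover p
    kept = order[:cutoff]
    kept_probs = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=kept_probs))
```

With a peaked distribution such as [0.7, 0.2, 0.05, 0.05] and p = 0.5, only the mode survives the cutoff, so the sampler always returns index 0.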
Rebuttal 1: Rebuttal: Thank you for the positive feedback! We are glad that you find our theoretical results sound, our improvement consistent and our ablations thorough. We address your concerns below, especially your concern about training stability with REINFORCE. ### 1. Performance gains on QM9 and ZINC250k We believe our improvements on QM9 and ZINC250k are substantial. Although baselines score well on validity/uniqueness, our results indicated that there is still large room for improvement in distributional similarity metrics such as FCD — highlighted by Reviewer Stjb as a recognized measure for drug-like molecules. Our LO-ARM model achieved new state-of-the-art FCD scores, reducing them to 0.240 (QM9) and 3.229 (ZINC250k) from prior bests of 0.441 and 13.21, respectively. ### 2. Smaller and larger scale benchmarks (e.g., FreeSolv, Lipophilicity, HOPV) Thank you for your suggestion, and we will certainly include the results on smaller benchmarks in the final version of our work. Given the time constraints of this rebuttal period, we made a strategic decision to prioritize the GuacaMol experiment, which is a much larger dataset than ZINC250k, to maximize the efficiency of our response. This is because we anticipate that training stability might pose a greater challenge for larger datasets compared to smaller ones, as their training processes tend to exhibit more stochasticity. Furthermore, the scalability of LO-ARM is a concern shared by several reviewers. The detailed results on the GuacaMol dataset are presented in the response to Reviewer Pa4s. In short, our preliminary results show that LO-ARM learns an atom-first generation ordering [1] and still exceeds or matches the performance of the current state-of-the-art models. ### 3.
Analysis of variance and convergence of REINFORCE-based training

To provide a clearer understanding of the variance and convergence of REINFORCE and the effectiveness of our variance reduction method, we have visualized the following quantities during the training course of the GuacaMol experiments [2]:

* Negative Evidence Lower Bound (ELBO)
* Maximum and minimum q-logits from the variational order policy network

Specifically, we conducted an ablation study on the learning rate in two experiments, keeping all other settings constant. Given that the gradient variance primarily arises from the stochasticity of the variational order policy, the maximum and minimum q-logits can reflect this variance throughout training. As seen in the plot, the RLOO variance reduction keeps the optimization smooth most of the time; still, training instability is occasionally observed with a large learning rate (2e-5), indicated by loss spikes and discontinuities in the maximum q-logits. Fortunately, reducing the learning rate to 1.5e-5 effectively stabilizes training, as demonstrated by the green curves. We hope that our responses have addressed your concerns and questions. If so, we respectfully ask that you reconsider your decision based on our responses and update your recommendation score accordingly. We are also eager to address any further concerns you may have.

[1] Generation trajectory of GuacaMol sample: https://drive.google.com/file/d/1a5HU4FZ98bqS_9JFK4Eb60QfhXiksaue/view?usp=drive_link
[2] Variance visualization: https://drive.google.com/file/d/19b2wQocvLJc82bAm6eUG5qbAcmVAra0V/view?usp=drive_link
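As a reference for readers, the leave-one-out (RLOO) baseline discussed in this thread can be sketched in a few lines. This is the generic estimator, with the reward function left abstract; it is not the authors' implementation.

```python
import numpy as np

def rloo_advantages(rewards: np.ndarray) -> np.ndarray:
    """For K reward samples per input, baseline each sample's reward with the
    mean of the other K-1 rewards: A_k = r_k - mean_{j != k} r_j.
    The REINFORCE gradient weight for sample k is then A_k * grad log q(z_k)."""
    k = rewards.shape[-1]
    total = rewards.sum(axis=-1, keepdims=True)
    loo_mean = (total - rewards) / (k - 1)  # mean of the other K-1 rewards
    return rewards - loo_mean
```

A useful sanity property: the K advantages always sum to zero, which is the centering that reduces gradient variance without biasing the estimator.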
Summary: This paper proposes a new generative modeling framework named Learning-Order Autoregressive Models. The core of this framework is to extend traditional ARMs to learn a dynamic order of sampling; specifically, an order-policy is trained to determine the order. To train such a model, the authors use a variational lower bound on the exact log-likelihood and optimize via stochastic gradient estimation. It achieves quite good results on QM9 and ZINC250k. ## Update after rebuttal I appreciate the authors' thorough response to the concerns raised. I have no further questions and will maintain my original score as positive unchanged. Thank you for your efforts. Claims And Evidence: I agree that the authors are dealing with an important issue in graph autoregressive generation, as it is hard to determine an order for sampling. I think using an order-policy to sample the order seems reasonable. The claim that LO-ARMs learn a meaningful and consistent order for generation is acceptable, while the claim that LO-ARMs can generalize to other high-dimensional data (images, graphs) should be given more evidence. The MNIST experiment is more of a toy example than rigorous proof that the method generalizes to all high-dimensional data, and quantitative metrics (e.g., likelihood scores, FID, etc.) should be provided for image generation, making it easy to measure actual performance compared to baselines. Methods And Evaluation Criteria: Overall, the proposed method is suitable for the problem. The HyperEdge-enhanced EGNN and the docking score features are reasonable. The evaluations are on the commonly used unconditional molecular graph generation task.
Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The experiment settings are from the existing related work, and the authors use the same settings, which is reasonable. Supplementary Material: No. Relation To Broader Scientific Literature: Applying the order policy to other generation modalities like 3D point clouds, pixels, or voxels may be interesting. Essential References Not Discussed: Related works are discussed properly. Other Strengths And Weaknesses: Strengths: The paper presents a novel approach to learning orderings in autoregressive models, which is a non-trivial extension of traditional ARMs. The paper provides a well-structured training algorithm and a clear sampling procedure, making the method well-grounded. Weaknesses: While LO-ARMs are tested on MNIST as a toy example, I think this is not sufficient to claim general applicability to high-dimensional data like images; more image-generation benchmarks would be better. It would also be better if a pipeline illustration figure were provided; currently, only a case-study figure is shown in the main part of the paper. Other Comments Or Suggestions: No. Questions For Authors: See the Strengths And Weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the time you’ve taken to review our work and for the positive and constructive feedback! We are glad that you found the problem we dealt with important and our approach novel and well-grounded. In response to the weaknesses and questions: ### 1. Generalization to other high-dimensional data (images, graphs) We would like to clarify that the focus of this work is on designing the order learning mechanism in ARMs and its application to molecular graph generation as the main testbed. We chose this task because we believe it is more suitable: as the reviewer has pointed out, an important issue in graph autoregressive generation is the difficulty of determining an order for sampling. Indeed, our learning-order ARM was able to discover an autoregressive order that outperforms the fixed or random orders used in prior graph generation work. Our focus is not on images; we included MNIST only as a sanity check to confirm the model can learn a meaningful order that distinguishes between the digits and the background. While there are no practical constraints on applying LO-ARMs to higher-dimensional natural images, it is unclear whether there is a similarly meaningful and interpretable ordering to learn there, and we therefore decided not to pursue it in this work. To demonstrate the scalability of our algorithm to higher-dimensional graph datasets, we conduct an additional experiment on the GuacaMol dataset, a larger molecule dataset with 3906 input dimensions (compared to 1482 for ZINC250k and 90 for QM9). We have provided a comparison of the three molecule datasets in the discussion with Reviewer Pa4s. We believe that these three datasets provide a comprehensive evaluation suite to support our claims.
Performance-wise, our preliminary results show that the order-policy still yields an advantage, exceeding or matching the SOTA performance, as you can see from the details we have provided in the discussion thread with Reviewer Pa4s. ### 2. Application to 3D point clouds, pixels, and voxels Thank you for the suggestion. We are interested in exploring these modalities in future work. ### 3. Pipeline illustration figure We will follow the reviewer’s suggestion to add an illustration figure in the final version. We hope that our responses have addressed your concerns and questions. If so, we would kindly ask the reviewer to reconsider the decision in light of our responses and update their score accordingly. We are also eager to address any additional concerns the reviewer may have.
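For readers unfamiliar with the setup, the sampling procedure of a learning-order ARM can be sketched schematically as follows. `order_logits` and `value_dist` are hypothetical placeholders standing in for the paper's order-policy and conditional networks; this is an illustration of the general recipe, not the authors' code.

```python
import numpy as np

def sample_with_learned_order(d, order_logits, value_dist, rng):
    """Schematic learning-order AR sampling: at each step, pick which dimension
    to generate next from the order policy (restricted to still-masked
    dimensions), then sample its value conditioned on the partial sample."""
    x = np.full(d, np.nan)              # nan marks a still-masked dimension
    masked = np.ones(d, dtype=bool)
    order = []
    for _ in range(d):
        logits = np.where(masked, order_logits(x), -np.inf)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        i = int(rng.choice(d, p=p))     # next dimension chosen by the policy
        x[i] = value_dist(x, i)         # sample its value given the partial x
        masked[i] = False
        order.append(i)
    return x, order
```

With dummy policies (uniform order logits, deterministic values), each dimension is generated exactly once, so the returned order is a permutation of the dimensions.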
Posterior Inference with Diffusion Models for High-dimensional Black-box Optimization
Accept (poster)
Summary: ## Summary * The authors propose a two-stage approach for black-box optimization using diffusion models. The first stage is the training stage: the authors propose to train a weighted unconditional model for density estimation, and an ensemble of proxy models to capture the value and uncertainty of the target. This diffusion model + discriminative approach is commonly adopted, e.g., in classifier-based guidance. * Next, the authors propose to fine-tune the model (amortized inference) using relative trajectory balance. The target is designed to balance exploration and exploitation. To further enhance performance, the authors adopt two post-processing techniques: local search and filtering. The local search is gradient ascent on the fine-tuning target, and the filtering is a selection of candidates. * Empirical results on multiple datasets show the effectiveness of their approach. Claims And Evidence: ## Claims And Evidence * The reweighting scheme is interesting. However, the score matching loss of diffusion models is not a direct maximum-likelihood objective (see [Maximum Likelihood Training of Score-Based Diffusion Models]). Therefore, whether simple reweighted training achieves a weighted likelihood such as Eq. 11 remains questionable. Methods And Evaluation Criteria: ## Methods And Evaluation Criteria * The benchmark datasets look standard for this field and the ablation studies are sufficient to support the claims. Theoretical Claims: ## Theoretical Claims * There are no theoretical claims. Experimental Designs Or Analyses: ## Experimental Designs Or Analyses * The experimental results and analysis are sufficient. Abundant results show the effectiveness of the proposed method in terms of performance and complexity. The effectiveness of the different components proposed by the authors is also verified. Supplementary Material: ## Supplementary Material * I skimmed the additional results and spent some time on the temporal complexity part.
Relation To Broader Scientific Literature: ## Relation To Broader Scientific Literature * The proposed approach is likely to be a strong baseline in black-box optimization. Essential References Not Discussed: ## Essential References Not Discussed * The references to prior works are sufficient. Other Strengths And Weaknesses: * One part that I like about this paper is its empirical evaluation. Through benchmark and real-world problems, the authors successfully show the advantage of their approach over prior works. * One part that I do not like about this paper is that it contains too many sub-parts and tricks. The paper is driven by a clear performance target, but there is no clear technical thread leading it. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our paper's extensive experimental results. We've attempted to answer your questions below. >**Claims And Evidence)** Therefore, whether simple reweighted training achieves a weighted likelihood such as Eq. 11 remains questionable. Thank you for pointing out the question regarding Eq. 11. As you mentioned, we try to maximize the ELBO instead of the marginal likelihood. We will fix Eq. 11 in the final manuscript. > **Other Strengths And Weaknesses)** One part that I do not like about this paper is that it contains too many sub-parts and tricks. The paper is driven by a clear performance target, but there is no clear technical thread leading it. Thank you for your constructive feedback. While there are several sub-parts in our method, please note that we systematically analyze the effect of each component through extensive ablation studies to verify that each component is crucial for improving performance. Furthermore, please note that several ideas we imported in this paper are already considered reasonable choices for effectively training diffusion models as amortized samplers. For example, off-policy training is suggested in various GFlowNets literature [1, 2, 3]. [1] Venkatraman, Siddarth, et al. "Amortizing intractable inference in diffusion models for vision, language, and control." [2] Akhound-Sadegh, Tara, et al. "Iterated denoising energy matching for sampling from boltzmann densities." [3] Rector-Brooks, Jarrid, et al. "Steering masked discrete diffusion models via discrete denoising posterior prediction." Thank you again for your comments. We hope we have addressed them satisfactorily above, but do not hesitate to let us know if you have further questions. We are always ready to engage in further discussion!
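To make the reweighting idea under discussion concrete, a per-sample weighted denoising loss can be sketched as below. The softmax-over-objective weight scheme is illustrative only and is not claimed to be the paper's exact choice; as the exchange above notes, weighting the denoising loss optimizes a weighted ELBO rather than the weighted marginal likelihood itself.

```python
import numpy as np

def weighted_denoising_loss(eps_pred, eps, weights):
    """Per-sample weighted epsilon-prediction loss:
    L = sum_i w_i * ||eps_pred_i - eps_i||^2 / sum_i w_i.
    Upweighting high-value samples biases the learned prior toward them."""
    per_sample = ((eps_pred - eps) ** 2).sum(axis=-1)
    return float((weights * per_sample).sum() / weights.sum())

def value_weights(y, temperature=1.0):
    """Softmax-style weights over objective values y (illustrative scheme)."""
    z = (y - y.max()) / temperature
    w = np.exp(z)
    return w / w.sum()
```

With uniform weights the loss reduces to the ordinary mean over samples, which is a convenient sanity check.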
Summary: This paper proposes a novel high-dimensional black-box optimization method, where the authors train a diffusion model based on weighted data as the prior and perform posterior sampling when combined with an uncertainty-aware function proxy. The authors also use local search and filtering strategies to further refine the posterior samples. Over extensive benchmarks, the proposed DiBO demonstrates improved optimization performance compared to representative high-dimensional optimization methods. Claims And Evidence: The main contributions that the authors claim: 1. The proposed diffusion-based algorithm addresses scalability and efficiency in high-dimensional optimization. 2. Superior performance over a variety of tasks compared to state-of-the-art baselines. I think the proposed method and the experimental results support the claimed contributions. Methods And Evaluation Criteria: I think the method makes sense and the used benchmarks are representative in high-dimensional black-box optimization. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: I think the experiment setting and ablation studies are comprehensive. Supplementary Material: I checked the implementation details and additional ablation studies. Relation To Broader Scientific Literature: I think the idea of incorporating a diffusion model for input prior learning and casting the sampling as posterior inference is novel and a suitable usage of diffusion models to address high-dimensional issues. Essential References Not Discussed: I think essential references are discussed. Other Strengths And Weaknesses: The paper is clear and well-written, and I don't have major concerns in terms of weaknesses. Other Comments Or Suggestions: I think it is a good work which applies diffusion models well to high-dimensional black-box optimization. Given the expressive power of diffusion models, I think some tasks including structured input spaces may further enhance the paper (e.g.
chemical/protein design). Questions For Authors: How to set the number of ensembles to guarantee the uncertainty quantification is reasonable? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive comment and for considering our key idea, incorporating the diffusion model as a prior and casting sampling as posterior inference for solving high-dimensional black-box optimization, as novel. We answer your questions below.

>**Other Comments Or Suggestions)** Given the expressive power of diffusion models, I think some tasks including structured input spaces may further enhance the paper (e.g. chemical/protein design).

As you mentioned, tasks including structured input spaces further enhance the usefulness of our method. To this end, we conduct experiments on molecular optimization following [1]. As shown in the table, our method achieves not only higher performance but also higher sample efficiency compared to recent BO baselines for structured inputs. We promise to add these results to our final manuscript.

**Experiment results on structured inputs. Experiments are conducted with four random seeds.**

| Tasks | # Evaluation Budget | LOL-BO [1] | CoBO [2] | DiBO |
|:-|-|:-|:-|:-:|
| Zaleplon MPO | 20000 | 0.711 ± 0.014 | 0.724 ± 0.004 | 0.739 ± 0.034 |
| | 30000 | 0.723 ± 0.006 | 0.728 ± 0.002 | 0.771 ± 0.002 |
| | 40000 | 0.739 ± 0.000 | 0.738 ± 0.002 | 0.771 ± 0.002 |
| Perindopril MPO | 20000 | 0.734 ± 0.000 | 0.715 ± 0.025 | 0.815 ± 0.004 |
| | 30000 | 0.771 ± 0.014 | 0.788 ± 0.024 | 0.818 ± 0.006 |
| | 40000 | 0.798 ± 0.021 | 0.796 ± 0.018 | 0.825 ± 0.009 |

[1] Maus, Natalie, et al. "Local latent space bayesian optimization over structured inputs." [2] Lee, Seunghun, et al. "Advancing bayesian optimization via learning correlated latent space."

> **Questions For Authors)** How to set the number of ensembles to guarantee the uncertainty quantification is reasonable?

Thank you for your interest in our work. The number of ensembles is crucial to reasonably quantify the uncertainty of the surrogate model. To this end, we conduct experiments varying the number of ensembles, $K$. As shown in the table, there is no big difference in performance when we increase $K$ beyond $K=5$.
However, removing uncertainty quantification or using too small a number of ensembles leads to poor performance, which indicates that uncertainty quantification is crucial for high-dimensional black-box optimization problems.

**Ablation studies on the number of ensembles ($K$). Experiments are conducted with four random seeds.**

| | $K$ | DiBO (Ours) |
|:-|:-|:-|
| HalfCheetah | 1 (None) | 2750.765 |
| | 3 | 2604.994 |
| | 5 (Default) | 3191.215 |
| | 7 | 3131.849 |
| | 9 | 2926.619 |
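The ensemble-based uncertainty that this ablation probes can be sketched generically as below. The UCB-style combination of mean and across-member standard deviation is one common exploration heuristic, not necessarily the paper's exact acquisition.

```python
import numpy as np

def ensemble_ucb(preds: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """preds has shape (K, N): K ensemble members scoring N candidates.
    Epistemic uncertainty is the across-member std; an optimistic score
    adds beta * std to the mean to encourage exploration."""
    mean = preds.mean(axis=0)
    std = preds.std(axis=0)
    return mean + beta * std
```

When all members agree, the std term vanishes and the score reduces to the shared prediction; disagreement between members inflates the score, steering search toward uncertain regions.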
Summary: This paper utilizes the diffusion model for high-dimensional black-box optimization. At each iteration, candidates are sampled from the posterior distribution. The empirical results show that the proposed method outperforms other baselines. Claims And Evidence: The authors claim that by sampling candidates from the posterior distribution, the proposed method can effectively balance exploration and exploitation. However, they only measure the uncertainty of a portion of their model, namely that from the ensemble of proxies. There is no measurement of the uncertainty of the generative model or discussion of the relationship between the two sources of uncertainty. It remains unclear why this sampling approach can effectively balance exploration and exploitation, and there is no theoretical guarantee provided. Methods And Evaluation Criteria: Both synthetic and real-world benchmark datasets have been evaluated. The authors follow the standard problem setting of high-dimensional Bayesian optimization. Since the authors claim their method can effectively capture complex and multi-modal data distributions, it would be helpful to verify that by including black-box optimization benchmarks with structured data in the design space, such as molecular optimization tasks [1]. [1] Maus, Natalie, et al. "Local latent space bayesian optimization over structured inputs." Theoretical Claims: There is no theoretical guarantee provided for the sampling approach. It would be helpful if the authors could include some theoretical analysis of their proposed algorithm. For example, can it be proven that the proposed approach guarantees an optimal or near-optimal solution for black-box optimization under certain assumptions? Experimental Designs Or Analyses: My concern is that many baselines for high-dimensional black-box optimization are missing. I conducted a brief literature search and listed some of them [1-9].
DDOM is not an appropriate baseline as it is designed for offline optimization. For the ablation study in the appendix, it would be helpful if the authors could include other baselines in the analysis of batch size and initial dataset size. I assume the performance of DiBO will degrade when there is insufficient data for the diffusion model to learn or update the data distribution.

[1] Ament, Sebastian, et al. "Unexpected improvements to expected improvement for bayesian optimization."
[2] Eriksson, David, and Martin Jankowiak. "High-dimensional Bayesian optimization with sparse axis-aligned subspaces."
[3] Nayebi, Amin, Alexander Munteanu, and Matthias Poloczek. "A framework for Bayesian optimization in embedded subspaces."
[4] Wang, Zi, et al. "Batched large-scale Bayesian optimization in high-dimensional spaces."
[5] Wang, Linnan, Rodrigo Fonseca, and Yuandong Tian. "Learning search space partition for black-box optimization using monte carlo tree search."
[6] Letham, Ben, et al. "Re-examining linear embeddings for high-dimensional Bayesian optimization."
[7] Song, Lei, et al. "Monte carlo tree search based variable selection for high dimensional bayesian optimization."
[8] Ziomek, Juliusz Krzysztof, and Haitham Bou Ammar. "Are random decompositions all we need in high dimensional Bayesian optimisation?."
[9] Nguyen, Quan, et al. "Local Bayesian optimization via maximizing probability of descent."

Supplementary Material: I have reviewed all sections of the supplementary material. Relation To Broader Scientific Literature: This paper extends the diffusion model from offline to online black-box optimization, which has been studied in Diff-BBO [1]. The posterior sampling approach of the proposed algorithm appears similar to the ones used in offline optimization with diffusion models [2,3]. [1] Wu, Dongxia, et al. "Diff-BBO: Diffusion-Based Inverse Modeling for Black-Box Optimization." [2] Yu, Peiyu, et al.
"Latent energy-based odyssey: Black-box optimization via expanded exploration in the energy-based latent space." [3] Kong, Lingkai, et al. "Diffusion models as constrained samplers for optimization with unknown constraints." Essential References Not Discussed: Please refer to the previous sections to add the references. Other Strengths And Weaknesses: Please refer to the previous sections. Other Comments Or Suggestions: Please refer to the previous sections. Questions For Authors: Please refer to the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your concrete review. Below we answer the questions and concerns you raised. > **Claims and Evidence)** There is no measurement regarding the uncertainty of the generative model or discussion about the relationships between two terms of uncertainty. While utilizing the uncertainty of the diffusion model could be interesting future work, measuring the uncertainty of diffusion models is generally complex [1]. We conducted a brief literature search on this topic [2, 3], but most of these works focus on detecting poor-quality images, which lies outside our research scope. [1] Wu, Dongxia, et al. "Diff-BBO: Diffusion-Based Inverse Modeling for Black-Box Optimization." [2] Kou, Siqi, et al. "Bayesdiff: Estimating pixel-wise uncertainty in diffusion via bayesian inference." [3] Jazbec, Metod, et al. "Generative Uncertainty in Diffusion Models." > **Methods and Evaluation Criteria)** It is helpful to verify that by including the black-box optimization benchmarks with structured data in the design space, such as molecular optimization tasks. Our method can be directly applied to benchmarks with structured data, such as molecular optimization tasks. To this end, we conducted additional experiments on benchmarks with structured inputs, following the standard evaluation pipeline of [4]. As shown in the table, our method achieves both higher performance and better sample efficiency compared to recent BO baselines for structured inputs. We promise to add these results to our final manuscript. **Experiment results on structured inputs.
Experiments are conducted with four random seeds.**

| Tasks | # Evaluation Budget | LOL-BO [4] | CoBO [5] | DiBO |
|:-|-|:-|:-|:-:|
| Zaleplon MPO | 20000 | 0.711 ± 0.014 | 0.724 ± 0.004 | 0.739 ± 0.034 |
| | 30000 | 0.723 ± 0.006 | 0.728 ± 0.002 | 0.771 ± 0.002 |
| | 40000 | 0.739 ± 0.000 | 0.738 ± 0.002 | 0.771 ± 0.002 |
| Perindopril MPO | 20000 | 0.734 ± 0.000 | 0.715 ± 0.025 | 0.815 ± 0.004 |
| | 30000 | 0.771 ± 0.014 | 0.788 ± 0.024 | 0.818 ± 0.006 |
| | 40000 | 0.798 ± 0.021 | 0.796 ± 0.018 | 0.825 ± 0.009 |

[4] Maus, Natalie, et al. "Local latent space bayesian optimization over structured inputs." [5] Lee, Seunghun, et al. "Advancing bayesian optimization via learning correlated latent space."

> **Theoretical Claims)** There is no theoretical guarantee provided for the sampling approach.

While we acknowledge that a theoretical guarantee for our algorithm could further enhance its reliability, guaranteeing that deep learning models find optimal solutions is almost impossible. Our paper makes a methodological and empirical contribution to solving high-dimensional black-box optimization problems effectively. We believe that our method represents a new departure for solving high-dimensional black-box optimization by importing ideas from diffusion models and amortized posterior inference.

>**Experimental Designs or Analyses)** My concern is that many baselines for high-dimensional black-box optimization are missing. For the ablation study in the appendix, it will be helpful if the authors can include other baselines in the analysis of batch size and initial dataset size.

We apologize for missing some crucial baselines in high-dimensional BO. However, we would like to emphasize that we present 4 strong BO-based baselines (we also already include LA-MCTS, which you mentioned in [5]). In particular, MCMC-BO and CMA-BO, which were published last year, outperform most of the baselines listed above on various benchmarks. Nevertheless, we conducted experiments with additional baselines, logEI and MCTS-VS.
As shown in the table, we outperform these baselines in terms of performance. We will conduct experiments on all benchmarks and update the results in our final manuscript.

**Experiment results of DiBO and additional baselines. Experiments are conducted with four random seeds.**

| | TuRBO (LogEI) | MCTS-VS-TuRBO | DiBO (Ours) |
|-|-|:-|:-|
|Rastrigin|-584.09|-1089.62|-560.364|
|HalfCheetah|-511.99|-223.175|3378.353|

Regarding the ablation studies, we also include other baselines in the analysis of batch size and initial dataset size. As shown in the tables, even under different experimental settings, our method consistently outperforms the other baselines by a large margin. Furthermore, as depicted in Figure 9 in the Appendix, our method does not degrade in performance even with a small initial dataset size.

**Ablation studies on other baselines in terms of initial experiment settings. Experiments are conducted with four random seeds.**

||Batch size|TuRBO|Diff-BBO|DiBO (Ours)|
|:-|:-|:-|-|:-|
|Rastrigin|20|-797.520|-1728.317|-573.528|
||50|-812.958|-1702.763|-545.124|
||100 (Default)|-950.376|-1730.651|-560.364|

||Initial Dataset size|TuRBO|Diff-BBO|DiBO (Ours)|
|:-|:-|-|-|:-|
|Rastrigin|10|-1012.730|-1745.659|-586.761|
||50|-952.407|-1700.911|-629.776|
||200 (Default)|-950.376|-1730.651|-560.364|
Understanding and Mitigating Memorization in Generative Models via Sharpness of Probability Landscapes
Accept (spotlight poster)
Summary: The paper presents a geometrical analysis of memorization in generative diffusion models based on the Hessian of their energy function around generated points. The idea follows naturally from recent results on the geometry of generative diffusion, which relate memorization and generalization to the spectrum of eigenvalues of the energy landscape. Based on these ideas, the paper introduces an effective and tractable method to detect memorized samples, which is shown to have high accuracy compared with established baselines. The authors also provide an initialization method that is shown to mitigate the generation of memorized examples. Claims And Evidence: The main claims are well supported both on an intuitive and on an experimental level. However, some of the theoretical considerations are somewhat hand-wavy and should be further elaborated. In particular, it would be important to clarify the relation between the trace of the Jacobian and the norm of the score function in the general non-Gaussian case. Methods And Evaluation Criteria: The experimental analysis is comprehensive and provides robust results in favor of the main claims. Theoretical Claims: The intuitive ideas are very well motivated, but the paper would have benefited from a more rigorous theoretical approach in the non-Gaussian case. I think it should be possible to obtain general formulas that relate the norm of the score to the trace in the general case, possibly using a second-order Tweedie's formula. It would be nice to see if the authors can derive a general formula. I think it should also be possible to provide more rigorous theoretical motivations for the upscaling formula. Experimental Designs Or Analyses: The experiments are well executed and provide solid support to the claims. Supplementary Material: I did not have time to review the supplementary materials.
Relation To Broader Scientific Literature: I do think that the main innovation of the paper is the introduction of the Hessian upscaling method. In general, I appreciated how this work used competently and effectively several ideas that were floating in the geometric analysis of diffusion models through their Jacobian spectra. A very related piece of literature is given in [1], where the onset of (geometric) memorization is identified by the closure of spectral gaps in the Jacobian, which can be connected directly to the onset of sharpness in the trace. It would also be useful to discuss the connections with modern Hopfield networks, which have been shown to be equivalent to diffusion models in the memorization regime [2,3]. These works already highlighted some of the ideas discussed here, for example the fact that memorized states should have sharp energy wells around them. Finally, the analysis should be connected with the similar results in [4], which studies the onset of generalization through a similar analysis of the Jacobian matrix. Essential References Not Discussed: [1] Achilli, Beatrice, et al.
"Losing dimensions: Geometric memorization in generative diffusion." arXiv preprint arXiv:2410.08727 (2024). [2] Ambrogioni, Luca. "In search of dispersed memories: Generative diffusion models are associative memory networks." Entropy 26.5 (2024): 381. [3] Hoover, Benjamin, et al. "Memory in plain sight: A survey of the uncanny resemblances between diffusion models and associative memories." Associative Memory & Hopfield Networks in 2023. 2023. [4] Kadkhodaie, Zahra, et al. "Generalization in diffusion models arises from geometry-adaptive harmonic representations." arXiv preprint arXiv:2310.02557 (2023). Other Strengths And Weaknesses: The main strengths are in the ideas and in the experimental evaluations and results. The main weakness is in the lack of rigor in several theoretical points. However, I am of the opinion that the main ideas can be fully formalized in a very elegant way and I would encourage the authors to work on obtaining more general theoretical results. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> Relation between the trace of the Jacobian and the norm of the score function in the non-Gaussian case

Thank you for the insightful comment regarding the relation between the trace of the Jacobian and the score norm in the non-Gaussian case. We found that Lemma 4.1 is indeed generalizable beyond the Gaussian case under mild boundary conditions [1] (e.g., $\lim_{|\mathbf{x}| \to \infty} p(\mathbf{x}) = 0$ and $\lim_{|\mathbf{x}| \to \infty} p(\mathbf{x}) \nabla \log p(\mathbf{x}) = 0$), under which the identity $\mathbb{E}[\|s(\mathbf{x})\|^2] = -\mathbb{E}[\mathrm{tr}(H(\mathbf{x}))]$ holds in general, where both sides are now taken in expectation. We also appreciate the suggestion to consider Tweedie's formula for Lemma 4.3. We are currently revisiting the proof and will consider incorporating it into the formulation.

Regarding Lemma 4.2, which quantifies the gap between conditional and unconditional distributions, we chose the Gaussian assumption to ensure tractability and interpretability. This choice is supported by prior work [2] and the empirical trends shown in Figure 4, making our approximation both practical and well-justified.

- [1] Hyvärinen, "Estimation of Non-Normalized Statistical Models by Score Matching", JMLR, 2005.
- [2] Wang et al. "The unreasonable effectiveness of gaussian score approximation for diffusion models and its applications." TMLR, 2024.

> Connections with modern Hopfield networks

We sincerely thank the reviewer for highlighting important theoretical connections between our paper and recent work on modern Hopfield networks. As pointed out by the reviewer, [1] demonstrates that a large class of diffusion models asymptotically yields energy landscapes equivalent to modern Hopfield networks, where memorized states correspond to local minima.
[2] similarly presents an insightful perspective by interpreting diffusion models as an extension of associative memory retrieval, showing that the iterative denoising process parallels recurrent energy minimization in Hopfield-like networks. Their framework highlights how the dynamic updating of latent variables in diffusion models can be cast as an attractor-based process, reinforcing the broader view of diffusion as a form of associative memory.

Our paper shares the central theoretical idea that memorized states are sharp minima, but differs in scope and methodology. While these referenced studies focus on establishing theoretical equivalences and analyzing associative memory capacity, our contribution lies in explicitly quantifying sharpness via Hessian eigenvalues and score norms as an early-stage utility tool to detect and mitigate memorization during model inference. We greatly appreciate the reviewer's insightful suggestion to clarify these theoretical connections, as this strengthens the context and clarity of our work.

- [1] Ambrogioni "In search of dispersed memories: …", Entropy, 2024
- [2] Hoover et al. "Memory in Plain Sight:...", NeurIPS Workshop, 2023.

> Connection with the eigenbasis framework

We sincerely appreciate your valuable suggestions, particularly regarding closely related studies that enrich the theoretical context of our work. The concurrent work [1] rigorously characterizes geometric memorization through analysis of the score function's Jacobian eigenstructure. The authors identify the onset of memorization when spectral gaps between singular values close, indicating that the manifold's tangent-space structure has collapsed. This eigenbasis approach aligns with our central idea, that memorization corresponds to increased local curvature, which implies very small local variance in the density.
We find their elegant mathematical formulation very insightful and acknowledge the deep theoretical contributions they've made to understanding generative model memorization. However, while their detailed eigenanalysis can be computationally prohibitive for large-scale diffusion models like Stable Diffusion, our approach simplifies the process by using the Hessian score product as a summary statistic. This streamlined method maintains theoretical coherence while efficiently monitoring and mitigating memorization in high-dimensional generative models. Similarly, [2] demonstrates how strong generalization naturally emerges through geometry-adaptive harmonic representations. Their work shows that optimal denoisers implicitly operate within a geometry-adaptive basis, explaining how diffusion models achieve generalization without exponentially large datasets. While they characterize the adaptive basis underlying generalization, our contribution focuses specifically on sharpness as a direct, practical measure of memorization, complementing their basis-oriented theoretical analysis. We are grateful for these insightful references to the reviewer and will update our manuscript accordingly. - [1] Achilli, Beatrice, et al. "Losing dimensions: ….", arxiv, 2024 - [2] Kadkhodaie et al. "Generalization in diffusion models arises …", ICLR, 2024.
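The generalized identity discussed in this rebuttal, $\mathbb{E}[\|s(\mathbf{x})\|^2] = -\mathbb{E}[\mathrm{tr}(H(\mathbf{x}))]$, can be sanity-checked numerically in the simplest case of a standard Gaussian, where the score is $s(\mathbf{x}) = -\mathbf{x}$ and the Hessian of the log-density is $H(\mathbf{x}) = -I$. This Monte Carlo sketch is purely illustrative and not part of the paper's code:

```python
import numpy as np

# Monte Carlo check of E[||s(x)||^2] = -E[tr(H(x))] for p = N(0, I_d):
# the score is s(x) = -x and the Hessian of log p is H(x) = -I_d,
# so both sides should equal d.
rng = np.random.default_rng(0)
d, n = 8, 200_000
x = rng.standard_normal((n, d))

lhs = np.mean(np.sum(x**2, axis=1))  # E[||s(x)||^2] = E[||x||^2]
rhs = float(d)                       # -E[tr(H(x))] = tr(I_d) = d

print(lhs, rhs)  # both ≈ 8
```

For non-Gaussian densities the right-hand side is no longer constant, which is exactly why the expectation (and the boundary conditions cited above) are needed.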
Summary: This paper proposes to understand and mitigate the memorization of diffusion models from the perspective of the sharpness of probability landscapes. More specifically, it first shows that the large negative eigenvalues of the Hessian matrix, which reflects the sharpness, can indicate the risk of memorization. It then proposes a computationally efficient metric (i.e., Hessian trace and score norm) to measure the sharpness. The authors also show that the popular Wen’s metric can be explained from the aspect of sharpness and enhances Wen’s metric to enable the early-stage detection of memorization. Finally, the authors develop a sharpness-aware initialization method to mitigate the memorization. Experimental results on MNIST and Stable Diffusion reveal that the memorization of the diffusion model can be detected and mitigated by the sharpness-based method provided by the authors. ## Update After Rebuttal I think my concerns are addressed by the rebuttal. Therefore, I prefer to maintain my original rating of 4, showing that I tend to accept this paper. Claims And Evidence: I believe that the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method and evaluation are reasonable for detecting and mitigating the memorization in diffusion models. Theoretical Claims: To the best of my knowledge, I think the proofs for the theoretical claims (Lemma 4.1-4.3) should be correct. Experimental Designs Or Analyses: I have checked the experimental designs and analyses. I think the evaluation is valid. Supplementary Material: I quickly go through Section B, Proofs. Relation To Broader Scientific Literature: I think this paper is highly relevant to Wen’s metric [1] and the LID work [2, 3]. [1]. Wen, Y., Liu, Y., Chen, C., and Lyu, L. Detecting, explaining and mitigating memorization in diffusion models. In ICLR, 2024. [2]. Ross, B. 
L., Kamkari, H., Wu, T., Hosseinzadeh, R., Liu, Z., Stein, G., Cresswell, J. C., and Loaiza-Ganem, G. A geometric framework for understanding memorization in generative models. arXiv preprint arXiv:2411.00113, 2024. [3]. Kamkari, H., Ross, B. L., Hosseinzadeh, R., Cresswell, J. C., and Loaiza-Ganem, G. A geometric view of data complexity: Efficient local intrinsic dimension estimation with diffusion models. In ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling, 2024. Essential References Not Discussed: I think the essential references have been covered by the authors. Other Strengths And Weaknesses: The strengths of the paper are listed as follows. 1. The paper is well-written and well-developed, making it easy to follow. Even if the readers have little background knowledge, they can easily get the key points of the paper. 2. All of the important claims in the paper are supported by both theoretical analysis or proof and empirical evaluation. 3. The idea is novel. To the best of my knowledge, it is the first work to understand and mitigate the memorization of the diffusion model from the perspective of sharpness. 4. The proposed method is practical. The proposed methods to detect and mitigate memorization are computationally affordable and can be used in real applications. The other weaknesses of the paper are listed as follows. 1. It would be better if the authors could evaluate their methods on more datasets. However, given the theoretical analysis from the authors, I think it is just a minor weakness. 2. For section 4.4, the existing empirical results in Table 1 do not show an obvious improvement over Wen’s metric at step 1. It would be better if the authors could find a case where Wen’s metric performs badly to support the importance of upscaling. Other Comments Or Suggestions: N/A Questions For Authors: 1. In Section 5.1, how do you solve the formulated objective problem for sharpness-aware initialization? 2. 
In the right part of Figure 3, did you miss the negative sign for the lower part values in the y-axis? 3. In the left part of Figure 6, for the proposed method, why can the CLIP score not reach larger values (e.g., > 0.26) like the other baselines? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> Evaluate methods on more datasets.

Thank you for the suggestion. For our Stable Diffusion experiments, we adopted the established benchmark of known memorized prompts introduced by [1], which has become a standard dataset in the current literature [2-4]. In line with prior work, we made an effort to comprehensively include all verbatim categories within this benchmark.

- [1] Webster, "A reproducible extraction …" arXiv. 2023.
- [2] Wen et al. "Detecting, explaining, and mitigating …", ICLR. 2024.
- [3] Ren et al. "Unveiling and mitigating memorization …", ECCV, 2024.
- [4] Chen et al. "Exploring local memorization …", ICLR, 2025.

> Empirical results in Table 1 (Comparison with Wen’s metric at step 1)

Thank you for the suggestion. We would like to highlight the advantages of our upscaling method from two perspectives: detection and mitigation.

**Computational Cost** Our metric offers clear computational benefits in detection. As shown in Table 1, our approach consistently achieves comparable or superior performance while requiring substantially fewer sampling steps (“Steps”) and fewer simultaneous generations (“n”) compared to Wen’s metric. This efficiency is especially valuable in practical scenarios such as real-time detection. Below, we present a comparison of computation time (in seconds) between Wen’s metric and ours using Stable Diffusion v1.4. We highlight in **bold** the entries where both methods achieve equivalent AUC performance.

| | Step 1 | Step 5 | Step 50 |
|-|-|-|-|
|Wen et al. (n=1)|0.233|0.431|3.211|
|Wen et al. (n=4)|0.728|1.323|11.25|
|Wen et al. (n=16)|1.326|**1.955**|**16.55**|
|Ours (n=1)|0.412|–|–|
|Ours (n=4)|**0.882**|–|–|

As shown above, our method achieves similar or better performance at a clearly lower computational cost. We also expect further efficiency gains if the JVP operation is optimally integrated into libraries such as Hugging Face.
**Harder Detection Cases in SD v2.0:** Unlike Stable Diffusion v1.4, which includes both Exact and Partial memorized samples, v2.0 contains only Partial memorized prompts, making the detection task more challenging. The more pronounced performance gap between our method and Wen’s metric in this setting empirically demonstrates the effectiveness of our upscaling approach. **Mitigation - SAIL Optimization:** Our metric is also crucial for SAIL. Although Wen's metric works well for detection, using it in the SAIL objective (line 360) resulted in optimization failure. Our approach, however, successfully achieved stable convergence and effective mitigation by amplifying differences in the early stages. > How to solve SAIL in Section 5.1 Thank you for the question. While we provide detailed pseudocode for SAIL in Appendix D.2, we are happy to explain it here as well. The SAIL objective involves two forward passes at the initial sampling step ($t=T-1$). - In the first pass, we compute the score difference $s_{\theta}^\Delta(\mathbf{x}_T) $. - We then perturb the initialization slightly in the direction of this score difference, i.e., $\mathbf{x}_T + \delta \cdot s\_\theta^\Delta (\mathbf{x}_T) $, - and perform a second forward pass to obtain $s_{\theta}^\Delta\bigl(\mathbf{x}_T + \delta \cdot s\_{\theta}^\Delta(\mathbf{x}_T)\bigr) $. With these components, we optimize the initial noise $ \mathbf{x}_T$ sampled from an isotropic Gaussian using the SAIL objective (refer to line 373). This optimization is lightweight and typically converges within two to three iterations on average, depending on the threshold hyperparameter $ \ell_{\text{thres}} $. SAIL also supports batch-wise execution, allowing multiple initializations to be optimized in parallel. Please feel free to let us know if any part requires further clarification. > In Figure 6 (left), for SAIL, why can the CLIP score not reach larger values like the other baselines? We appreciate your valuable question. 
It is indeed possible to achieve higher average CLIP scores with SAIL by adjusting its threshold hyperparameter $\ell_{\text{thres}}$. However, we would like to clarify that, empirically, algorithms achieving high CLIP scores alongside high SSCD scores (e.g., > 0.35) often produce outputs that are only superficially altered. These include blurry or partially contaminated memorized images, which are difficult to consider genuinely mitigated. For all baseline methods, we carefully selected hyperparameters based on their original papers to ensure a fair comparison. In contrast, for SAIL, we prioritized configurations that provide strong memorization mitigation. Notably, across extensive hyperparameter sweeps, SAIL consistently achieved the lowest points on the SSCD -CLIP performance trade-off curves in both Stable Diffusion v1.4 and v2.0. > In the Figure 3 (right), did you miss the negative sign in the y-axis? Thank you for pointing this out. The reviewer is correct and we will revise it accordingly in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response from the authors. I think my concerns are addressed by the rebuttal. Therefore, I prefer to maintain my original rating of 4, showing that I tend to accept this paper. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for positively recognizing the contributions of our work. We greatly appreciate your valuable time, effort, and insightful feedback, which will help us further improve our manuscript. Sincerely,\ The authors
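The two-pass SAIL procedure described in the rebuttal above (compute the score difference, re-evaluate it at a slightly perturbed point, then adjust the initial noise until the sharpness proxy falls below a threshold) can be sketched schematically. The quadratic `score_diff`, the constants, and the update rule below are illustrative stand-ins, not the paper's actual objective or model:

```python
import numpy as np

def score_diff(x):
    # Toy stand-in for the score difference s^Delta(x_T); in the real
    # method this comes from conditional vs. unconditional model passes.
    return -2.0 * x

def sail_init(x, delta=0.1, lr=0.1, thres=1e-3, max_iters=50):
    """Schematic sharpness-aware initialization: two evaluations of the
    score difference per step, stopping once the sharpness proxy is small."""
    for _ in range(max_iters):
        s1 = score_diff(x)               # first forward pass at x
        s2 = score_diff(x + delta * s1)  # second pass at the perturbed point
        sharpness = float(np.dot(s1, s2))
        if sharpness < thres:            # below threshold: accept this init
            break
        x = x + lr * s2                  # nudge x along the perturbed score
    return x

x0 = np.ones(4)                          # stand-in for Gaussian noise x_T
x_opt = sail_init(x0)
```

In the actual method the optimization reportedly converges within two to three iterations and supports batch-wise execution over multiple initializations; the toy landscape here needs more steps only because of the arbitrary constants chosen.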
Summary: To alleviate the memory effect of the diffusion model, this paper proposes a sharpness-based detection metric and develops an effective mitigation strategy based on this metric. The strengths of this paper lie in its clarity and the progressive experiments and theoretical analysis that illustrate the rationale and effectiveness of the proposed method. The proposed mitigation strategy outperforms existing methods, while requiring no additional modifications on texts and model architecture. Claims And Evidence: 1. The metric analyzed in this paper is almost identical to that in [1], with the main difference being whether it analyzes the first-order properties of the score function or the denoiser. The difference between the two lies only in a constant, resulting in nearly no distinction in properties such as the Jacobian. Additionally, conclusions like "smoother regions tend to yield non-memorized images" (line 375) also appear in [1]. However, this paper does not discuss that work, which weakens its contribution. 2. The key motivation of this paper (see Lines 160-164) lacks sufficient evidence. Although the expected phenomenon is observed on toy data such as MNIST, merely observing the phenomenon does not justify the validity of this key motivation. Could additional theoretical explanations or further analysis be provided? [1] Wang, Hanyu, Yujin Han, and Difan Zou. "On the discrepancy and connection between memorization and generation in diffusion models." ICML 2024 Workshop on Foundation Models in the Wild. 2024. Methods And Evaluation Criteria: 1. Works like [1,2] have already used curvature properties to discuss memorization, and this paper seems to upgrade Wen’s metric while introducing the additional assumption that \( x_t \) follows a Gaussian distribution. 2. To facilitate the computation of the proposed metric, the authors introduce several approximations. 
For example, Lemma 4.1, which connects the trace with the norm of the score function, Lemma 4.2, which links Wen’s Metric to the proposed Sharpness Measure, and Lemma 4.3 all assume a Gaussian distribution. These assumptions raise concerns regarding the discrete diffusion model's reverse process and prompt a crucial question: how accurate is the improved metric in real-world scenarios? [2] Kamkari, H., Ross, B. L., Hosseinzadeh, R., Cresswell, J. C., and Loaiza-Ganem, G. A geometric view of data complexity: Efficient local intrinsic dimension estimation with diffusion models. In ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling, 2024. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. The right-side image in Figure 2 shows that the eigenvalues of memorized and non-memorized samples at the initial sampling step are not very large. A similar issue appears in Figure 3, where at \( t = T-1 \), the differences between different memorization categories are not significant enough. This undermines the necessity of monitoring memorization throughout the entire sampling process, making the advantage over works like LID less compelling. 2. Table 1 shows that the proposed method performs very similarly to Wen’s metric. For example, at \( T = 1 \), the AUC difference is only between \( 1e{-3} \) and \( 1e{-2} \). Moreover, the proposed metric requires the additional computation of the Hessian matrix. 3. The proposed mitigation strategy may not be applicable to stochastic sampling methods such as SDE. Supplementary Material: I reviewed and checked the necessary appendices related to the main text, and found no additional issues. Relation To Broader Scientific Literature: This paper proposes a new metric and method to detect the memorization of diffusion models. The underlying principle, such as the use of sharpness, has appeared in previous work.
The authors leverage this property and further improve the existing Wen’s metric, achieving enhancement. Essential References Not Discussed: See Claims And Evidence. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Connection to [1]. We thank the reviewer for highlighting the connection to [1], which we will properly acknowledge. While both works study memorization through geometric properties of the density, there are key differences: - [1] focuses on first-order smoothness via comparisons between trained and oracle scores, whereas our analysis emphasizes second-order geometry through the trained Hessian and explicitly measures sharpness by observing distribution of eigenvalues. - We also demonstrate how specific initial latents lead to memorization by consistently mapping into high-curvature regions, further amplified by text conditioning. We believe these distinctions clearly position our contributions as complementary to [1], enriching the understanding of memorization in generative models. > Evidence of key motivation We appreciate the concern about the generalizability of our observed phenomenon. While we use MNIST for clarity and intuition, our experimental design extends well beyond this toy dataset. As shown in Figure 3, the key phenomenon, sharpness correlating with memorization, is present in modern, large-scale models like Stable Diffusion. This progression from simple to complex domains actually strengthens our motivation by showing that the behavior persists across different datasets. Figure 5 further supports this finding, revealing consistent patterns in the Hessian spectra between memorized and non-memorized samples. > Works like [1,2] already used curvature properties to discuss memorization. We acknowledge that prior works [1, 2] have explored memorization and data complexity through curvature-related concepts. However, our contribution advances beyond these studies in both scope and methodology. While [2] examines curvature only at the final generation stage, we introduce a dynamic framework that analyzes sharpness throughout the entire diffusion process across all timesteps. 
This continuous perspective enables earlier detection and mitigation of memorization, which we believe are novel and practically valuable contributions. > Gaussian Assumptions Empirical evidence from recent work [3] shows that diffusion models exhibit approximately Gaussian score behavior in early and intermediate steps, supporting the validity of our Gaussian-based assumptions. As shown in Figure 4, key metrics like the negative Hessian trace align well with the score norm and Hessian-score product, confirming that our theoretical approximations hold in practice. > Small differences in eigenvalues We understand your concern. While eigenvalue differences at the initial step are small, they become effective when aggregated (e.g., sum or cubed sum). As shown in Table 1, both our metric and Wen’s metric, which measure these statistics, perform well at Step 1. This confirms the usefulness of early-stage eigenvalue signals. Regarding LID, we excluded it from our detection experiments since it requires full sampling steps, making direct comparison unfair. For reference, we provide our LID results at Step 50: - SD v1.4: AUC = 0.974 (n=1), 0.992 (n=4); TPR@1%FPR = 0.184 (n=1), 0.824 (n=4) - SD v2.0: AUC = 0.972 (n=1 and n=4); TPR@1%FPR = 0.470 (n=1), 0.216 (n=4) Compared to our method in Table 1, LID yields significantly lower TPR@1%FPR, especially in SD v2.0. This highlights the effectiveness of our sharpness-based approach, which is essential to the success of our mitigation strategy (Figure 6). > Performance \& Time cost compared to Wen's metric Thank you for pointing this out. While our method involves computing a Hessian-score product, the additional cost is minimal due to efficient JVP implementations in standard libraries. Our metric provides clear computational benefits. As Table 1 demonstrates, it achieves equal or superior AUC compared to Wen's metric with fewer sampling steps ("steps") and generations ("n"). 
Using SD v1.4, we compare the runtime (in seconds) between these metrics below, with **bold** entries indicating where both methods achieve equivalent AUC.

| | Step 1 | Step 5 | Step 50 |
|-|-|-|-|
|Wen et al. (n=1)|0.233|0.431|3.211|
|Wen et al. (n=4)|0.728|1.323|11.25|
|Wen et al. (n=16)|1.326|**1.955**|**16.55**|
|Ours (n=1)|0.412|–|–|
|Ours (n=4)|**0.882**|–|–|

Our metric is also critical for SAIL. While Wen’s metric is effective for detection, we found that using it as the SAIL objective (line 360) led to optimization failure. In our case, the amplified early-stage differences enabled stable convergence and effective mitigation.

> SAIL may not be applicable to SDE.

While most prior works focus on ODE samplers for analysis, we believe SAIL can still be applied to SDE samplers in expectation by selecting good initializations. We agree this is a valuable direction for future work.

[1] Wang et al. "On the discrepancy and connection...", ICML Workshop, 2024\
[2] Kamkari et al. "A geometric view of data...", ICML Workshop, 2024\
[3] Wang et al. "The Unreasonable Effectiveness...", TMLR, 2024.

---

Rebuttal Comment 1.1: Comment: Thank you for the response. Most of the authors' replies addressed my concerns, so I have raised my score to 3. Why not a higher score? Regarding the connection to [1], I partially agree with the authors. Specifically, [1] also analyzes first-order properties of both the oracle model $\epsilon^*$ and the trained model $\epsilon_\theta$, including analysis of eigenvalues (Fig. 3), while this paper analyzes second-order properties of $\log p$ (lines 150–155). These two analyses are essentially equivalent, given the fact that the difference between $\epsilon_{\theta}$ and $\nabla \log p$ is only a scaling factor of $-\frac{1}{\sigma_t}$. I agree with the authors that this work includes additional experimental findings (e.g., discussions on initial latents), but the same proposed metric reduces the contribution of this work.
--- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful and constructive comments, and for recognizing the contributions of our work. We appreciate your point regarding the connection to [1] and will ensure it is properly acknowledged and clarified in the final version. Thank you again for your valuable feedback and for helping improve the quality of our submission.
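On the JVP point raised in this rebuttal: a Hessian-score product can be formed without ever materializing the Hessian, by differentiating the score along the score direction. A minimal finite-difference sketch for an analytic Gaussian score is below; autodiff JVPs in real frameworks play the same role, and every name here is illustrative rather than the paper's implementation:

```python
import numpy as np

# Hessian-score product H(x) @ s(x) for log p(x) = -||x||^2 / 2 (standard
# Gaussian): s(x) = -x and H(x) = -I, so the exact product is H @ s = x.
# A forward-difference JVP of the score recovers it without forming H.
def score(x):
    return -x  # analytic score of N(0, I)

def hessian_score_product(x, eps=1e-6):
    v = score(x)
    # Directional derivative of the score along v approximates H(x) @ v.
    return (score(x + eps * v) - score(x)) / eps

x = np.array([1.0, -2.0, 0.5])
hsp = hessian_score_product(x)
print(hsp)  # ≈ x, since H @ s = (-I)(-x) = x
```

The cost is two score evaluations per point, which is why the rebuttal argues the summary statistic adds little overhead on top of the model's forward passes.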
Summary: This paper studies the memorization phenomenon in diffusion models, which is a crucial task that is well-motivated by its practical significance in privacy preservation in the era of GenAI. This paper discovers a new pattern that can differentiate memorized and non-memorized generations of diffusion models that is based on the sharpness of the log probability density, quantified by the Hessian of the log probability, serving as a new detection strategy for the memorized generations. Also, it shows the relevance of this pattern with the existing pattern found by Wen et al. Armed with such a finding, this paper proposes a mitigation strategy named SAIL that can efficiently mitigate memorization while being more effective (better text alignment under the same privacy level) than existing baselines. Experiments are conducted on a 2D toy dataset, MNIST, and Stable Diffusion’s LAION dataset. ## update after rebuttal I have no further questions, so I keep my original rating of accept. Claims And Evidence: This paper’s claims are well-supported by its analysis, either as visualizations or in the form of math proofs, and its superior experimental results. Methods And Evaluation Criteria: The benchmark datasets follow the existing baselines, which makes their comparisons well-justified for the task that they address. Theoretical Claims: I have carefully checked the equations and claims in the main paper and observed no issues. Nevertheless, I am uncertain about the detailed proofs in the Supplementary Material. Experimental Designs Or Analyses: The design of evaluating the proposed detection and mitigation strategies is sound, which follows the baseline that is shown in Table 1 and Figure 6. Supplementary Material: I have fully reviewed the supplementary material. However, I did not entirely follow all the steps in the proofs and theoretical justifications in Sections A4 and B. 
Relation To Broader Scientific Literature: This paper specifically contributes to the field of understanding and addressing memorization issues in diffusion models. I believe this paper contributes to both theoretical and empirical aspects. Essential References Not Discussed: The paper has provided a comprehensive discussion of the core related papers. Other Strengths And Weaknesses: Please see the previous sections. Other Comments Or Suggestions: N/A Questions For Authors: Overall, this is a solid paper to me and I would like to recommend acceptance. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review and for recognizing our contributions to this topic. We sincerely appreciate the time and effort you dedicated to reviewing our work. Please do not hesitate to reach out with any further questions or suggestions.
SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs
Accept (oral)
Summary: The paper proposes a dataset for visual question answering with external knowledge or for evaluating MLLM + RAG systems. The dataset is constructed by using images from multiple datasets as seed images, and then writing context for those images using GPT-4o or using paired Wikipedia context when available, and then writing questions based on those contexts. Human analysis is done on the generated question-answer pairs. A number of quality control steps are performed. The experiments section describes a series of experiments where transfer is measured from the proposed dataset to other datasets. The proposed dataset is substantially larger than existing datasets, and training on the proposed dataset transfers well to other datasets. The dataset is of comparable difficulty to existing datasets w.r.t. both human and model evaluation. Quality control checks suggest that the level of noise in the dataset is low. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There are no proofs or theoretical claims. Experimental Designs Or Analyses: I checked all of $\S5$. I do not see any issues. Supplementary Material: I reviewed sections F, K, and L. Relation To Broader Scientific Literature: External knowledge visual question answering datasets started off small, with datasets like OK-VQA and A-OKVQA. These datasets have been superseded by more recent, larger datasets that are harder, like InfoSeek and EncyclopedicVQA. The proposed dataset is larger and appears to be higher quality than InfoSeek and EncyclopedicVQA. I'm not sure if it is conceptually any different from EncyclopedicVQA and InfoSeek. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: The main strength is that this is a high-quality dataset with a well-thought-out design that appears to be more effective as a source of training data than previous datasets. 
The main weakness is that, conceptually, it seems to offer nothing new (other than being larger + higher quality) over existing datasets. Additional strengths: - The dataset contains nearly 50% more questions than the next largest dataset. - In addition to being larger, it is substantially more diverse (11x unique questions vs EncyclopedicVQA). - There is a human evaluation performed and all humans perform similarly on the dataset. Additional weaknesses: - Open-ended VQA datasets are known to contain high levels of noise. Specifically, for some questions the annotated answer might not be the only correct answer, or some questions might be ill-posed. I did not see any analysis done on how many model errors are the result of possibly noisy questions vs a genuine error. This could be done by evaluating a model on a subset and looking at a few errors (let's say 50-100). - I don't see any evaluations of frontier models like GPT-4o on this dataset. If this dataset is generated by GPT-4o, does that mean it is already "solved" by GPT-4o and cannot be used to evaluate GPT-4o? This is likely not the case for something like EncVqa, for which AFAICT even frontier models (at the time of evaluation) perform poorly. Other Comments Or Suggestions: The authors should have a "-lite" split of their dataset. This dataset is large, and to make it accessible to the community, you should prepare a smaller evaluation split of the dataset that could be evaluated on in a smaller amount of time. Questions For Authors: 1. What was your motivation for this work? In particular, can you point out the problems with InfoSeek or EncyclopedicVQA that you were trying to solve with this paper? Are there new applications that SK-VQA enables that were nontrivial to do with EncyclopedicVQA or InfoSeek? Concrete examples would be helpful. If answered convincingly, I will raise my rating. 2. I don't see any evaluations of frontier models like GPT-4o on this dataset. 
If this dataset is generated by GPT-4o, does that mean it is already "solved" by GPT-4o and cannot be used to evaluate GPT-4o? This is likely not the case for something like EncVqa, for which AFAICT even frontier models (at the time of writing) perform poorly, though they only evaluate on GPT-3. Should frontier models only be evaluated on the WiT split of the dataset? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful review and your recognition of the dataset’s quality, thoughtful design, large scale, and strong transfer performance. We have carefully addressed your comments and concerns as follows: > **...did not see any analysis done on how many model errors are the result of possibly noisy questions vs a genuine error...** > **...I don't see any evaluations of frontier models like GPT-4o on this dataset...** During the rebuttal period, we evaluated GPT-4o on the SK-VQA test set, and it achieved a score of 58.9%. We also conducted a manual error analysis on 50 randomly sampled examples that were marked incorrect by automatic evaluation. Our findings show that: - 40% were genuine model errors - 30% were partially correct - The remaining 30% were actually correct. We will add these additional GPT-4o results in our final version. > **...you should prepare a smaller evaluation split of the dataset that could be evaluated on in a smaller amount of time.** We fully agree with the reviewer’s suggestion. To improve accessibility and encourage broader adoption, we will release a “-lite” evaluation split alongside the full dataset. This smaller subset will be designed to run efficiently on limited compute while preserving task diversity. > **What was your motivation for this work? In particular, can you point out the problems with InfoSeek or EncyclopedicVQA that you were trying to solve with this paper? Are there new applications that SK-VQA enables that were nontrivial to do with EncyclopedicVQA or InfoSeek? Concrete examples would be helpful...** We appreciate this opportunity to clarify our motivation and contributions. Our goal was to address key limitations of existing KB-VQA datasets such as InfoSeek and Encyclopedic-VQA, which include: - Narrow image coverage: These datasets mostly include images that can be linked to Wikipedia entities (e.g., landmarks, animals), excluding everyday or abstract visuals. 
In contrast, SK-VQA includes images from LAION, Wikipedia, and COCO-Counterfactuals, enabling coverage of open-domain, synthetic, and artistic images. - Low question diversity: InfoSeek uses templated QA generation (e.g., “What is the capital of X?”), leading to <1% unique questions. SK-VQA uses GPT-4 to generate both context and QA pairs together, resulting in ~96% unique questions with richer phrasing (Table 2). - Limited knowledge types: Prior datasets focus on entity-centric facts. SK-VQA includes a broader range of topics (25 identified via topic modeling; Fig. 4), including art, cultural events, sports, and fashion. Some concrete examples: - Figure 1 (main paper): Shows a question about the Golden Globe Awards hosted at the Beverly Hilton — a cultural-event-centric question that would be unlikely in InfoSeek or Encyclopedic-VQA due to lack of coverage. - Figure 2: Demonstrates questions such as “What characteristic helps this breed adapt to cold water?” (about Labrador Retrievers) — combining visual traits with world knowledge, beyond simple object labels. - Appendix Figure 10: Compares a GPT-4 generated context about vineyards to a Wikipedia context about New World wines. The synthetic version better aligns with the image and supports diverse, image-grounded questions. New Applications Enabled by SK-VQA: - Multimodal RAG Training & Evaluation: SK-VQA includes paired image, context, and QA for over 2M examples — a scale not offered in existing datasets — enabling training of models that retrieve and reason over context. - Fully-synthetic training and counterfactual reasoning: The inclusion of COCO-CFs and GPT-4 generated knowledge allows SK-VQA to support training and testing on hypothetical, non-real-world scenarios.
Summary: This paper provides and analyzes a new dataset called SK-VQA, which is a large-scale dataset designed to train multimodal language models for knowledge-based visual question answering with context augmentation. The authors’ motivation is that existing datasets for this specific task do not cover large and diverse enough topics and questions. They leverage GPT-4 to produce synthetic data, resulting in a dataset with greater question diversity and broader knowledge coverage compared to previous resources. Evaluation demonstrates the proposed dataset could serve as a challenging benchmark and an effective training tool. Claims And Evidence: There are three major claims made in this paper. 1. Introduction of SK-VQA as a large-scale and diverse synthetic multimodal dataset for context-augmented KB-VQA: This claim is supported by its size when compared to other datasets. 2. SK-VQA exhibits greater question diversity compared to previous datasets: It is supported by Table 2. It has significantly more questions than other KB-VQA datasets like InfoSeek and Encyclopedic-VQA. 3. SK-VQA presents challenges on existing models, especially for zero-shot: This result should be supported by Figure 5. However, it is not clear why the results of training and testing with SK-VQA are missing in the figure. The author could split the proposed dataset into two splits and provide the results. Methods And Evaluation Criteria: 1. The proposed data synthesis method is targeted at addressing the data scarcity problem. Although the motivation is reasonable, the generation of context documents is questionable. The quality of the generated context documents is not well assessed in terms of correctness, diversity, and completeness. 2. Another question is since both the context documents and QAs are all generated with GPT-4, I am worried that the quality of the dataset might not keep up with the current advanced multimodal LLMs. 
As little human effort is included in the loop during data synthesis, I wonder if the GPT-4 generated dataset is qualified to evaluate other models that already achieve better performance than GPT-4 on most benchmarks. 3. The evaluation results look comprehensive while lacking some recent models, like Qwen-VL2, Ovis, and Molmo, etc. It would be more informative to readers if these methods could be included. Theoretical Claims: The paper is about proposing a dataset. There is no theoretical claim. Experimental Designs Or Analyses: 1. The fine-tuned results of different datasets are not clear. In Table 5, I feel the authors are using split training and test sets of InfoSeek during the training and test stages while using the same proposed SK-VQA both for training and testing. In that case, the authors should make it clear in the paper. Supplementary Material: Yes, I read the supplementary. Relation To Broader Scientific Literature: The dataset would be a choice for evaluating context-aware VQA models and might benefit the multimodal LLM community. Essential References Not Discussed: The paper might lack some recent models, like Qwen VL2, Ovis, InternVL, Molmo, etc. Other Strengths And Weaknesses: Weakness: 1. One major concern is about how the accompanying documents are generated in the proposed dataset. As stated in Section 3.1, the context document is generated together with QA pairs from GPT-4 at the same time. However, as the context documents are generated from a single model using the same prompt, they might lack diversity in both the knowledge and the writing styles. This might make the synthesized dataset 'not real' compared to other multimodal RAG datasets. Similarly, I am not convinced by the argument between L194 - 196. From my point of view, the consistency can also be guaranteed if we first prepare the context document from the web or other resources and then ask GPT-4 to prepare QA pairs based on the document. 2. 
The quality assessment of the generated document is not comprehensive. The authors state 'no obvious cases of hallucination were identified' in L856 of their supplementary. However, the correctness of the generated document is not shown and is hard to evaluate based on the proposed dataset creation pipeline. Besides, it is also important to discuss the knowledge coverage of the generated document. Other Comments Or Suggestions: What is the release plan of the proposed dataset? Questions For Authors: I have no other questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your detailed review and your recognition of the dataset’s scale, diversity, and its potential to benefit context-aware multimodal research. We have addressed your concerns as follows: > **it is not clear why the results of training and testing with SK-VQA are missing in Figure 5** Figure 5 focuses only on out-of-domain generalization, where models are trained on one dataset and evaluated on the test sets of other datasets. This is why we did not include results where both training and testing are done on SK-VQA (i.e., in-domain performance). The in-domain performance—where training and testing are on SK-VQA—is provided in Table 6 (Appendix A) for completeness. We intentionally separated in-domain and out-of-domain results to better highlight the generalization ability of each dataset. > **The quality of the generated context documents is not well assessed in terms of correctness, diversity, and completeness.** > **...the correctness of the generated document is not shown...** During the rebuttal period, we added a new experiment based on LLM-as-judge (GPT-4o) to analyze the quality of the dataset. Specifically, we asked the model to check factuality, question relevancy, question answerability, and answer correctness. - Factuality (0 = Completely inaccurate to 5 = Fully accurate and matches the image): the average score is 4.6, with 87.5% of cases scoring 5. - Question relevancy (0 = not relevant to image and context to 5 = relevant to image and context): the average score is 4.9, with 92.0% receiving a score of 5. - Question answerability (Yes/No): 99.6% of questions are answerable. - Answer correctness (Yes/No): 90.7% of answers are correct. Additionally, we performed an additional human analysis of 100 samples as in Section 4.4; the results are consistent with the analysis in Table 3 of the paper, reinforcing the reliability and representativeness of our evaluation. 
These new experiments, plus the analysis in our paper for correctness (Table 3 for human analysis and Section 4.4.2 for grammar), diversity (Table 2), completeness (the above new LLM-as-judge analysis), and bias and toxicity (Section 4.2.2), show that SK-VQA undergoes richer validation, both automated and human, than most existing benchmarks (see Appendix L for a detailed comparison). > **I wonder if the GPT-4 generated dataset is qualified to evaluate other models...lacking some recent models, like Qwen-VL2, Ovis, and Molmo...** During the rebuttal, we evaluated more recent and powerful VLMs, including Qwen-2.5-VL (3B/7B/32B/72B) and Ovis (1B/2B/4B/8B/16B/34B). On SK-VQA, these models achieved: - Qwen-2.5-VL: 53.74, 49.26, 52.08, 49.09 - Ovis: 32.25, 44.54, 50.55, 50.36, 52.36, 55.2 For comparison, on ViQuAE, the same Ovis models achieved: - Ovis: 39.50, 67.09, 49.38, 57.96, 72.77, 67.03 These results show that SK-VQA is more challenging than the existing datasets. > **lack diversity in both the knowledge and the writing styles ... consistency can also be guaranteed if we first prepare the context document from the web or other resources ...** We address the two points as follows: On diversity of generated contexts: We designed our pipeline and prompts as open-ended to promote both linguistic and knowledge diversity. As shown in Table 2, SK-VQA exhibits significantly higher diversity in POS patterns, vocabulary, and question structure than prior datasets. Figure 4 further shows that our context documents span 25+ knowledge domains, based on unsupervised topic modeling. Additionally, the use of diverse image sources (e.g., LAION, Wikipedia, COCO-CFs) ensures a wide range of visual prompts for generation, resulting in stylistic and content variation. The strong zero-shot difficulty across models (Table 4) also supports the idea that the data is not overly templated or repetitive. 
On generating QA and context together: We agree that it is possible to first retrieve real documents and then generate QA pairs. However, our method generates context and QA jointly in one step, which allows us to explicitly control key constraints — such as ensuring the answer is only in the context (not the image), that object names are avoided, and that reasoning is required. This level of alignment is difficult to achieve when using unstructured web data, where the context may not be tailored to support the desired QA types. > **...it is also important to discuss the knowledge coverage...** We applied topic modeling and identified 25 distinct domains (Figure 4), showing broad topical diversity beyond entity-specific facts. > **What is the release plan of the proposed dataset?** We will publicly release the full SK-VQA dataset, along with the code, upon acceptance.
Summary: This paper presents a large-scale synthetic dataset containing over 2 million visual questions with answers that require information from associated context. The images used in this dataset are a hybrid of synthetic images from COCO-CFs and real images from Wikipedia and LAION, while the context and answers are generated by GPT-4. In the evaluation, they illustrate that this dataset can be used as a challenging benchmark for KB-VQA models and can also be effectively used as training data for multimodal context-augmented generation. Claims And Evidence: 1. One concern I have is regarding the quality of the question-answer pairs. It seems that the only quality control in this paper regarding the validity of question-answer pairs is the human evaluation. However, it is conducted on a very small scale (100 QA pairs). Therefore, it may be questionable whether the dataset can be used as a reliable benchmark to evaluate KB-VQA models. Methods And Evaluation Criteria: The proposed method is, in general, technically sound. By synthetically generating context and QA pairs, it can potentially increase the diversity of data when training KB-VQA models. The evaluation of using different data sources at a similar scale as the training set and testing on other datasets is also legitimate. Another evaluation that is currently missing is the ablation of using different image sources. It is not clear what roles different image sources play in either training or evaluation. Especially in this paper, there are also some synthetic images. It would be meaningful to see some analysis in this direction. Theoretical Claims: N/A Experimental Designs Or Analyses: Most of the experimental designs and analyses are sound to me. One issue I found is related to the impact of using the generated source versus the real source for Wiki-based images (Table 5). 
It is not clear to me whether it is a fair comparison, because Table 1 shows that the number of synthetic contexts is much larger than the number of real contexts from Wikipedia. So it is not clear to me whether the advantage comes from the quality of the synthetic context and QA pairs, or just from the increased number of contexts. Supplementary Material: I reviewed the comparison between generated context and real Wikipedia context. Relation To Broader Scientific Literature: This proposed synthetic data generation method can potentially be adopted by other KB-VQA work to enhance the training data or serve as a challenging benchmark for evaluation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper presents an interesting approach that first generates the context and then the question-answer pairs, as well as using synthetically generated images. Other Comments Or Suggestions: The title of Section 4.4 is *Human Evaluation*, while part of Section 4.4.2 *Additional Dataset Quality Evaluations* is an automatic evaluation using LanguageTool. The authors may consider using other names to avoid confusion. Questions For Authors: The context in the WIT dataset is organized at different levels, for example, caption, paragraph, or the whole Wikipedia page. What kind of contexts are you using to compare with the synthetic contexts? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive review. We're grateful for your recognition of the strengths of our approach — particularly the scale and diversity of the dataset, our use of varied image and context sources, and the overall soundness of our methodology. We have addressed your concerns in detail as follows: > **One concern I have is regarding the quality of the question-answer pairs...** We appreciate the reviewer’s concern regarding the sample size of our human evaluation. We would like to highlight that our initial evaluation of 100 QA pairs already exceeds the human evaluation effort in prior synthetic dataset works (see Appendix L for a detailed comparison). To further address this concern, during the rebuttal period we conducted an additional human evaluation on 100 new QA pairs using the same methodology described in Section 4.4. The results remained consistent with the original analysis, with a mean accuracy of 77.0%, reinforcing the reliability and representativeness of our evaluation. We also conducted an LLM-as-judge (GPT-4o) evaluation of dataset quality: - For factuality, we asked the model to score the factuality of the description from 0 to 5: 0 = Completely inaccurate, 5 = Fully accurate and matches the image. The result shows the average score is 4.6, with 87.5% of cases scoring 5. - For question relevancy, we asked the model to score the relevance of the question to the description and image (0–5). The result shows the average score is 4.9, with 92.0% receiving a score of 5. - For question answerability, we asked the model whether the question is clearly answerable based on the description (Yes/No). The result shows 99.6% of questions are answerable. - For answer correctness, we asked “Is the answer factually correct based on the description? (Yes/No)”. The result shows 90.7% of answers are correct. 
These results, combined with our original and extended human evaluations, automated grammar checks (LanguageTool), fact-checking (via manual validation), bias/toxicity screening, and strong downstream task performance across multiple benchmarks, collectively demonstrate the high quality and utility of our dataset. > **Another evaluation that is currently missing is the ablation of using different image sources...** We thank the reviewer for the suggestion. In fact, we have already included this analysis in Table 5 and Table 8 of our paper. In particular, Table 5 shows that models trained on synthetic images (COCO-CFs) paired with GPT-4 context perform on par with or better than those trained on real images (e.g., from Wikipedia). Table 8 further explores how filtering methods interact with image sources, showing that certain sources (e.g., LAION vs. Wiki) generalize differently across downstream tasks. These results suggest that diverse image sources — including synthetic ones — contribute positively to generalization, and that combining them can be more effective than relying on a single source. We will make this clearer in the final version. > **...it is not clear to me whether the advantage is coming from the quality of the synthetic context and QA pairs, or just that the number of the contexts is increased.** To ensure a fair comparison, all results in Table 5 were obtained using equal-sized subsets (downsampled to 200K training samples per setting). We will make this clarification more explicit in the final version of the paper. > **The title of Section 4.4 is Human Evaluation...consider using other names to avoid confusion.** Thank you for the helpful suggestion. In the final version, we will keep Section 4.4 as Human Evaluation and split Section 4.4.2 into a new section titled “Automatic Evaluation”. > **The context in WIT dataset...What kind of contexts are you using to compare with the synthetic contexts?** Thank you for pointing this out. 
For Wikipedia-based contexts, we use the paragraph-level context associated with each image from the WIT dataset. We will make this clearer in our final version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I would like to further clarify my questions regarding the role of different data sources. Because the proposed data consists of multiple resources, besides training the model with each of the individual resources to demonstrate the differences between resources, I would also like to see what the impact of each resource is on the overall data. One example would be similar to an ablation study, in which we remove each resource from the overall data and train the model from the remaining data to evaluate the impact of this resource on the overall data. Similarly, regarding using the synthetic data for evaluation, it would also be nice to have some analysis that breaks down into each resource. For example, are certain resources in general easier or more difficult than others? These analyses will help us better understand the role of each image and context source. --- Reply to Comment 1.1.1: Comment: Dear reviewer, we sincerely appreciate your insightful suggestion. In response, we have conducted an evaluation of the performance of five models on different subsets of our dataset, divided based on the source and type of image content. The results demonstrate that each subset presents a distinct level of difficulty. Specifically, we observe a consistent increase in difficulty across the following order: WiT (Wiki content), WiT (GPT-4 generated content), LAION, and Coco-CF. We hypothesize that the WiT (Wiki) subset is the easiest because large language models are likely to have been trained on a substantial amount of Wikipedia content, making this subset more familiar and easier to answer. 
In contrast, the Coco-CF subset includes counterfactual image and GPT-4 generated content pairs that are largely out-of-distribution relative to the training data of these models, thus presenting the highest degree of difficulty. These findings highlight the diversity of our dataset and underscore the importance of incorporating varied content sources—especially those beyond Wikipedia-based images, which are predominantly used in many existing knowledge-based VQA datasets—in the construction of SK-VQA. We will include this analysis in the final version of the paper. | Model | LAION | WiT(GPT-4) | WiT(Wiki) | Coco-CF | |------------------|-------|------------|-----------|---------| | LLaVA-v1.5-7B | 40.99 | 44.35 | 50.45 | 41.4 | | LLaVA-v1.6-7B | 46.68 | 48.9 | 54.8 | 46.85 | | Qwen-VL-7B | 42.55 | 42.45 | 47.6 | 41.6 | | LLaVA-v1.5-13B | 40.42 | 41.5 | 50.85 | 39.4 | | LLaVA-v1.6-13B | 45.57 | 46.5 | 56.25 | 43.5 |
Summary: This paper introduces SK-VQA, a dataset with over 2 million question-answer pairs associated with context documents for training multimodal language models in knowledge-based visual question answering. Using GPT-4, the authors generated context documents and diverse QA pairs for images from varied sources, creating a dataset with 11× more unique questions than existing resources. Their experiments show that SK-VQA serves as both a challenging benchmark and effective training resource, with models trained on it demonstrating superior generalization in context-augmented settings compared to models trained on other datasets. This addresses a critical limitation of current multimodal LLMs which aren't designed for context-augmented generation in knowledge-intensive tasks. Claims And Evidence: The paper's central claims about SK-VQA's size, diversity, and performance improvements are well-supported by quantitative evidence. The dataset metrics showing 11× more unique questions than comparable datasets are documented in Table 2, while the performance advantages of models trained on SK-VQA are consistently demonstrated across multiple experiments in Figure 5 and Table 4. However, there is one major limitation: the human evaluation covered only 100 QA pairs (0.005% of the dataset), raising questions about the representativeness of the evaluated samples and therefore overall quality of the dataset. Methods And Evaluation Criteria: Yes. Using GPT-4 to generate synthetic QA pairs and context documents addresses the scarcity of suitable training data, while the filtering techniques (IR and CAP) help ensure data quality. The evaluation metrics, such as BEM and exact match, are standard. The evaluation framework is comprehensive, examining both zero-shot performance on existing benchmarks and fine-tuning outcomes across multiple models, including out-of-domain generalization which is particularly relevant for real-world applications. 
The RAG experiments simulate practical use cases where retrieved knowledge must be integrated with visual information. The comparison against existing datasets (InfoSeek, Enc-VQA, ViQuAE) provides meaningful context. Theoretical Claims: No theoretical claims are provided in the paper. Experimental Designs Or Analyses: The zero-shot and fine-tuning evaluations use appropriate metrics and multiple model sizes, strengthening validity. The comparative analysis against InfoSeek, Enc-VQA, and ViQuAE provides the necessary benchmarking context. However, the RAG experiments use only one model architecture (PaliGemma-3B), limiting generalizability claims across architectures. Also, when testing with LLaMA-3-70b to create a "hard" subset, the authors don't clearly establish what percentage of questions are answerable by looking only at context (only provided the final number of samples), making it difficult to assess the true multimodal reasoning requirements of the dataset. Is it the case that only $2,853$ out of 2 million can be answered just using the context? Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The prior KB-VQA datasets seem to already be included in the paper. Essential References Not Discussed: NA. Other Strengths And Weaknesses: Strength: the diversity of the dataset is much greater than prior work -- over 96% of the questions in SK-VQA are unique, an 11x improvement than Enc-VQA; the questions in this work also have a greater number of unique POS sequences, total vocabulary size, and mean word length. Weakness: No assets were provided in the submission. The reviewer recommends the authors release the datasets and codebase upon acceptance. Other Comments Or Suggestions: Table 6 caption misspelled: "semantic matric". Questions For Authors: From Lines 327 (left) - 287 (right), "Factual accuracy is not a primary concern... 
as its main purpose is to train MLLMs to effectively utilize long contexts for VQA ... specifically, we ask a native speaker to fact-check 50 QA pairs and supporting evidence in context documents using online sources. 86% were verified as factual, 4% were non-factual, 2% were partially factual, and 8% could not be determined due to a lack of available information." Could the authors further explain why factual accuracy is not a primary concern for a QA dataset? Noisy labels may undermine performance when training / fine-tuning the MLLMs, and the 86% factual accuracy may hinder the practical usage of this dataset. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of the strengths of our work, including the dataset's scale and diversity, the quality-controlled generation process, and the robustness of our experimental design. We have carefully addressed all your concerns as follows: > **However, there is one major limitation: the human evaluation covered only 100 QA ...** We appreciate the reviewer's concern regarding the sample size of our human evaluation. We would like to highlight that our initial evaluation of 100 QA pairs already exceeds the human evaluation effort in prior synthetic dataset works (see Appendix L for a detailed comparison). To further address this concern, during the rebuttal period we conducted an additional human evaluation on 100 new QA pairs using the same methodology described in Section 4.4. The results remained consistent with the original analysis, with a mean accuracy of 77.0%, reinforcing the reliability and representativeness of our evaluation. In addition, we applied an LLM-as-judge evaluation: - For factuality, we asked the model to score the factuality of the description from 0 to 5 (0 = completely inaccurate, 5 = fully accurate and matches the image). The average score is 4.6, with 87.5% of cases scored 5. - For question relevancy, we asked the model to score the relevance of the question to the description and image (0–5). The average score is 4.9, with 92.0% scored 5. - For answerability, we asked the model whether the question is clearly answerable based on the description (Yes/No). 99.6% of questions are answerable. - For answer correctness, we asked whether the answer is factually correct based on the description (Yes/No). 90.7% of answers were judged correct. 
These results, combined with our original and extended human evaluations, automated grammar checks (LanguageTool), fact-checking (via manual validation), bias/toxicity screening, and strong downstream task performance across multiple benchmarks, collectively demonstrate the high quality and utility of our dataset. > **Also, when testing with LLaMA-3-70b ... what percentage of questions are answerable ...** Thank you for raising this point. To clarify: we applied LLaMA-3-70B-Instruct to QA pairs from the SK-VQA testing set, providing only the context document and question, without access to the image. Among these, 26.5% are answerable, and we use the remaining 73.5%, which the model answered incorrectly, as the hard subset. We will add this percentage to the paper. > **... The reviewer recommends the authors release the datasets and codebase upon acceptance.** We fully agree, and we confirm that we will release the full SK-VQA dataset, as well as the code, upon acceptance of the paper. > **Table 6 caption misspelled: "semantic matric".** Thank you for pointing this out. We will correct the typo in Table 6 and change "semantic matric" to "semantic metric" in the final version. > **... why factual accuracy is not a primary concern ...** We would like to clarify that the term "factual accuracy" in our paper refers to the alignment of the context document with real-world facts—not the correctness of the answer with respect to the context. This alignment with real-world knowledge is not necessary for our task because we aim to teach models to effectively utilize long multimodal contexts for grounded reasoning, rather than memorizing the context. That said, we still conducted fact-checking via manual verification as described in the paper. 
In addition, during the rebuttal, we applied the LLM-as-judge approach to assess factuality, question relevance, and answer correctness (the results are mentioned in the previous answer), all of which further support the high quality and utility of our dataset. --- Rebuttal Comment 1.1: Comment: The reviewer sincerely thanks the authors for their great efforts. The responses address my concerns, and I am increasing the rating to 4. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's recognition and sincerely thank you for your valuable and insightful comments, which have helped improve our work.
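The hard-subset construction described in the rebuttal (keeping only QA pairs that a text-only model answers incorrectly from the context alone) can be sketched as follows. This is a minimal illustration with hypothetical helper names and a toy stand-in model, not the authors' actual LLaMA-3-70B-Instruct pipeline:

```python
def normalize(text):
    """Lowercase and strip punctuation for a loose exact-match comparison."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def build_hard_subset(qa_pairs, context_only_answer):
    """Keep QA pairs that the context-only model answers incorrectly.

    qa_pairs: iterable of dicts with 'context', 'question', 'answer'.
    context_only_answer: callable(context, question) -> predicted answer,
        standing in for an LLM queried without access to the image.
    """
    return [qa for qa in qa_pairs
            if normalize(context_only_answer(qa["context"], qa["question"]))
            != normalize(qa["answer"])]

# Toy usage with a fake "model" that echoes the first word of the context.
data = [
    {"context": "Paris is the capital of France.",
     "question": "What is the capital of France?", "answer": "Paris"},
    {"context": "The Eiffel Tower is 330 m tall.",
     "question": "Who designed the tower?", "answer": "Gustave Eiffel"},
]
hard = build_hard_subset(data, lambda ctx, q: ctx.split()[0])
print(len(hard))  # only the pair the fake model answers incorrectly remains
```

With a real context-only LLM in place of the lambda, the 26.5% of pairs it answers correctly would be filtered out, leaving the 73.5% hard subset the authors describe.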
COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning
Accept (poster)
Summary: The paper proposes a framework that trains a cost model for performance prediction on general-purpose hardware and then performs few-shot fine-tuning on emerging hardware accelerators. It focuses on optimizing sparse tensor programs on hardware accelerators. The proposed method achieves better hardware performance compared to other approaches like zero-shot and no-transfer learning. For instance, the source model is trained with data from 100 matrices, while the few-shot fine-tuning uses data from only 5 matrices. The paper claims that its few-shot learning approach can fine-tune the cost model with near-optimal accuracy using significantly fewer samples. These claims are generally supported by experimental data. Claims And Evidence: The paper asserts that its few-shot fine-tuning approach leads to near-optimal accuracy with significantly fewer samples. These claims are supported by empirical results, including performance comparisons against zero-shot and no-transfer approaches. The evaluation is rigorous and shows clear improvements in hardware performance across various platforms. Methods And Evaluation Criteria: The evaluation focuses on SpMM and SDDMM computations targeting three platforms: CPU, GPU, and an accelerator for sparse matrices. The baseline is adequate, and the dataset consists of 715 real-world matrices, which is sufficiently large. The proposed framework is compared with WacoNet, making the comparison fair. Theoretical Claims: The paper does not make any theoretical claims. Experimental Designs Or Analyses: The experimental design includes adequate baselines and comparisons against different approaches based on the geometric mean speedup. The study also features good ablation experiments to show the impact of excluding individual components. Supplementary Material: I have reviewed the supplementary material, in particular the cost model performance and data efficiency objectives. 
Relation To Broader Scientific Literature: The paper adequately cites state-of-the-art literature, including WacoNet, which was the previous SOTA method. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: * The paper presents an effective approach that improves upon WacoNet, which was the SOTA. * It introduces a latent encoder to capture heterogeneous components and enhances the configuration mapper to optimize techniques unique to individual platforms, improving prediction accuracy. * The empirical results demonstrate significant improvements in hardware performance. Weaknesses: * The clarity of some sections, particularly the equations in the code optimization section (Section 3.2), is lacking. The notations are not clearly explained and are difficult to follow. * Some figures (e.g., Figures 5 and 6) are too small to read, which hinders the comprehension of the results. Other Comments Or Suggestions: * Clarifying the notation in Section 3.2 would greatly improve readability. * Increasing the size of Figures 5 and 6 would help readers better understand the visualized data. Questions For Authors: When the paper states, "The optimal speedup was determined by running all possible configurations (Section 4.2)," how many configurations are there? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing the significance of the problem we address and the contributions of our work. We are especially grateful for the time and effort you dedicated to providing such a detailed and thoughtful review. We hope that our following response addresses your suggestions and questions. > ***Questions 01*** \ > When the paper states, "The optimal speedup was determined by running all possible configurations (Section 4.2)," how many configurations are there? Thank you for pointing this out. We apologize for this incomplete statement. We will correct this to be read as: > “The optimal speedup was determined by running all possible configurations **within our defined search space**.” and, we will also provide more explanation of how the search space was constructed. To clarify, the search space we considered for the SPADE accelerator consists of a total of 256 configurations, derived from combinations of key tunable parameters related to tiling, barrier, cache bypassing, and matrix reordering. As shown in Table 1, the parameters for barrier, cache bypassing, and matrix reordering are binary (enabled or disabled). For tiling, we had three numerical parameters: row panels, column panels, and split factor. We had to select a constrained set of values for those numerical parameters since testing every single value would explode the size of the search space and would make data collection infeasible, as explained later. We spaced those values to resemble the ones tested in the SPADE paper, as those are expected to show more significant performance deviations for different sparse matrices. In summary, our defined search space included 4 values for row panels [4, 32, 256, 2048], 4 values for column panels [1024, 16384, 65536, NUM_MATRIX_COLS] (where NUM_MATRIX_COLS depends on each matrix), and 2 values for the split factor [32, 256]. 
Combined with the three binary parameters, this resulted in 4 × 4 × 2 × 2 × 2 × 2 = 256 configurations. We opted for a constrained search space to make data collection feasible. Even so, collecting performance data for training, validation, and testing required approximately 4 million CPU hours. Although we parallelized the required experiments across 32 machines, each with 64 CPU cores, data collection for the constrained set still took nearly three months. Hence, data collection for the full theoretical search space would be infeasible. This challenge again reinforces the importance of data-efficient solutions like COGNATE. Once again, we thank the reviewer for raising this point, and we will include this clarification in the final version of the paper to provide additional context around our experimental setup. > ***Weakness 01*** \ > The clarity of some sections, particularly the equations in the code optimization section (Section 3.2), is lacking. Clarifying the notation in Section 3.2 would greatly improve readability. We thank the reviewer for highlighting this valuable point. We will add more explanations in the final version. Further, we will include illustrative examples in the appendix to demonstrate how the equations are applied in practice. We believe these additions will make the technical content more accessible and further strengthen the presentation of our contributions. For instance, the following data point demonstrates how a SpMM schedule configuration in SPADE can be approximately mapped into its corresponding low-level loop configurations. The original high-level configuration specifies the key tunable parameters used in SPADE: > name, row_panels, column_panels, split,barrier, bypass, reorder, time \ > 144, 4, 1024, 1, 0, 0, 0, 38.83592 These values are mapped into the corresponding loop-level configuration by applying the equations defined in our paper. 
Specifically, row_panels, column_panels, and split are used to derive i_split, j_split, and k_split, which represent how the loop indices are partitioned across dimensions. The loop nest structure is encoded by loop_1 through loop_7, which define the execution order of the tiled loops. The binary flags barrier, bypass, and reorder are retained to reflect platform-specific scheduling configurations. The resulting mapped configuration looks like this: > name, i_split, j_split, k_split, loop_1, loop_2, loop_3, loop_4, loop_5, loop_6, loop_7, barrier, bypass, reorder, time \ > 144, 4, 1024, 32, 6, 7, 2, 4, 1, 3, 5, 0, 0, 0, 38.83592 This representation captures tiling structure, loop ordering, and other applicable optimizations. We will include similar examples along with the corresponding equations and code segments in the final version of the appendix. > ***Weakness 02*** \ > Some figures (e.g., Figures 5 and 6) are too small to read, which hinders the comprehension of the results. Thank you for pointing this out. We will update these figures to ensure all elements are legible and better support the interpretation of the results in the final version of the paper.
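The constrained SPADE search space described in this rebuttal (four row-panel values, four column-panel values, two split factors, and three binary flags) can be enumerated directly. `NUM_MATRIX_COLS` depends on each matrix, so a placeholder value is used in this sketch:

```python
from itertools import product

# Tunable parameters of the constrained SPADE search space, as listed in the
# rebuttal. NUM_MATRIX_COLS is matrix-dependent; a placeholder is used here.
NUM_MATRIX_COLS = 131072

row_panels = [4, 32, 256, 2048]
column_panels = [1024, 16384, 65536, NUM_MATRIX_COLS]
split_factors = [32, 256]
binary_flags = [0, 1]  # each of: barrier, cache bypassing, matrix reordering

configs = [
    {"row_panels": r, "column_panels": c, "split": s,
     "barrier": b, "bypass": p, "reorder": o}
    for r, c, s, b, p, o in product(
        row_panels, column_panels, split_factors,
        binary_flags, binary_flags, binary_flags)
]

print(len(configs))  # 4 * 4 * 2 * 2 * 2 * 2 = 256
```

Exhaustively running all 256 configurations per matrix is what made the "optimal speedup" baseline measurable, and it is also why the authors report roughly 4 million CPU hours of data collection even for this constrained space.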
Summary: This paper introduces COGNATE, a framework designed to optimize sparse tensor programs (e.g., SpMM and SDDMM) for emerging hardware accelerators using transfer learning. The key innovation lies in leveraging inexpensive data from general-purpose hardware (e.g., CPUs) to pre-train cost models and then fine-tuning them with minimal data from target accelerators. COGNATE addresses challenges such as heterogeneous program configuration spaces and high sample efficiency requirements by: 1. Approximate mapping of comparable code optimizations: Identifying homogeneous features across hardware platforms. 2. Latent encoding of hardware-specific features: Using autoencoders to compress heterogeneous configurations into a unified latent space. 3. Few-shot fine-tuning: Achieving competitive performance with only 5% of the data required by baseline methods. Experimental results demonstrate significant speedups: $1.47\times$ (SpMM) and $1.39\times$ (SDDMM) on the SPADE accelerator, and $1.17\times$ (SpMM) and $1.15\times$ (SDDMM) on an NVIDIA A100 GPU, compared to existing techniques. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed methods and evaluation criteria, including the use of benchmark datasets, are generally sound and appropriate for the problem at hand. I did not identify any significant issues to address. Theoretical Claims: In Section 3.2, this paper employs a mapping function to illustrate the correlation between certain scheduling parameters across different types of machines, suggesting that they can be transformed into one another. However, in the discussion of loop strip mining, the explanation regarding the barrier on SPADE and the reorder operation on the CPU lacks clarity. From my understanding, reordering on the CPU can involve multiple parameter combinations, whereas the barrier is a boolean parameter, making direct conversion between them infeasible. 
Besides, there is an indexing error in the argument for loop reordering in the next paragraph. Experimental Designs Or Analyses: Since there is currently no cost model that leverages transfer learning to predict the performance of sparse tensor programs across different types of machines, this paper primarily compares the program performance obtained by various methods against the baseline. From an experimental perspective, a comprehensive set of evaluations has been conducted, mainly including: - Performance achieved by different transfer learning methods (Figure 4) - Performance improvements contributed by individual components (Figure 7) - Impact of different component choices on performance (Figure 9) - Experiments on the data cost associated with transfer learning (Figures 10 and 12) Supplementary Material: Yes. Relation To Broader Scientific Literature: The work builds on: - ML-based cost models (WACO, TIR) for sparse tensor optimizations. - Transfer learning techniques (e.g., feature augmentation/mapping), but classifies machine-specific features into homogeneous and heterogeneous categories for heterogeneous hardware. - Hardware-aware optimizations (SPADE, HotTiles) by integrating learned models into accelerator design. The novelty lies in bridging sparse tensor program optimization and heterogeneous transfer learning, addressing the gap in early-stage accelerator development. Essential References Not Discussed: As far as I know, there are no essential references that were not discussed. Other Strengths And Weaknesses: Strengths: - Clarity: Well-structured with clear methodology. Weaknesses: - The types of sparse operations supported in this paper are somewhat limited, covering only SpMM and SDDMM. So the applicability to other sparse operations (e.g., convolution) is unclear. In contrast, WACO, which this paper compares against, also includes SpMV and MTTKRP. 
Other Comments Or Suggestions: Since feature mapping requires manually identifying similar types of scheduling parameters and defining the mapping function, each time a new hardware platform emerges, a new mapping function needs to be designed. In contrast, the autoencoder-based latent representation proposed in this paper can automatically extract key information from the features. Have you considered applying the autoencoder to transform all features, rather than only the heterogeneous ones? If so, what were the results of this approach? Questions For Authors: 1. Have you conducted experiments on a broader range of sparse operations, such as SpMV and MTTKRP? 2. On GPUs, there are explicit data movement scheduling parameters, such as cache_read and cache_write. How did you handle these scheduling parameters in your approach? Did you treat them as homogeneous features that can be mapped across different hardware, or were they categorized as heterogeneous features and learned through the autoencoder? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing the significance of the problem we address and the contributions of our work. We are especially grateful for the time and effort you dedicated to providing such a detailed and thoughtful review. We hope that our following response addresses your suggestions and questions. > ***Weakness 01 & Question 01*** \ > The types of sparse operations supported in this paper are somewhat limited, covering only SpMM and SDDMM. … Have you conducted experiments on a broader range of sparse operations, such as SpMV and MTTKRP? We thank the reviewer for this important question. We agree that our current scope considers only SpMM and SDDMM sparse operations. This limitation stems from the fact that these are the only sparse operations currently natively supported by the SPADE accelerator [1] and the SparseTIR framework [2]. That said, it is possible to indirectly express SpMV in SPADE using a workaround if the operation is expressed as a SpMM with a very skinny zero-padded dense matrix. During our data collection process, we had gathered performance numbers for a split factor of 16 (an SpMM with 256 dense columns is broken down into 16 SpMMs, each with 16 dense columns). Each of these smaller SpMMs is computationally equivalent to a zero-padded SPADE SpMV. Since we had these performance numbers already available, we trained a model for SpMV during the rebuttal period to evaluate COGNATE’s ability to generalize. The results were promising, with COGNATE achieving a 1.25× geometric mean speedup (top-1) prediction, compared to the optimal speedup of 1.36×. These findings demonstrate that COGNATE could potentially generalize to SpMV-style workloads. We will include these findings and the above clarification in the final version of the paper. > ***Other Comments*** \ > ... Have you considered applying the autoencoder to transform all features, rather than only the heterogeneous ones? 
If so, what were the results of this approach? We thank the reviewer for this insightful question. We share your concern about the need to have mapping functions as diverse new hardware platforms emerge. However, our results suggest that the inclusion of these mapping functions enabled us to overcome the limitations of prior work in the domain, which achieved effective knowledge transfer only between hardware platforms of the same architecture. We explored the idea of applying the autoencoder to all features (both homogeneous and heterogeneous), but found that this approach performed poorly in our experiments compared to the solution we propose in COGNATE. Specifically, we observed the following results for SpMM for the SPADE accelerator when using the autoencoder to encode all input features: - Top-1 speedup: 1.118× (compared to 1.40× in COGNATE) - Top-5 speedup: 1.237× (compared to 1.47× in COGNATE) Our results suggest that transforming homogeneous features, which tend to be consistent and interpretable across hardware platforms, via an autoencoder along with the rest of the features could introduce unnecessary complexity, which hinders the model's ability to generalize effectively. By contrast, limiting the autoencoder to heterogeneous features allows us to preserve the generalizability of shared input characteristics. We will include these findings in the final version to clarify our design choices. > ***Question 02*** \ > On GPUs, there are explicit data movement scheduling parameters, such as cache_read and cache_write. How did you handle these scheduling parameters in your approach? ... We thank the reviewer for pointing this out. In our current experiments, we followed the default behavior used in SparseTIR examples, where cache_read and cache_write scheduling optimizations are enabled for SDDMM and disabled for SpMM. As a result, we did not explicitly include these parameters as part of the search space in our evaluation. 
However, if we were to expand the search space to include these scheduling parameters, we agree that they would need to be treated as heterogeneous features, since they represent architecture-specific optimizations primarily exposed in GPU environments. In that case, they would be encoded through the autoencoder, rather than mapped as homogeneous features. We will explore the possibility of expanding the search space to incorporate these optimizations into our experiments in the final version of the paper. [1] Gerogiannis, Gerasimos, et al. "Spade: A flexible and scalable accelerator for spmm and sddmm." Proceedings of the 50th Annual International Symposium on Computer Architecture. 2023. [2] Ye, Zihao, et al. "Sparsetir: Composable abstractions for sparse compilation in deep learning." Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3. 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the efforts and clarification. Overall, I still like this paper and will keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for acknowledging our efforts and clarifications. We appreciate the reviewer’s positive assessment of our work.
Summary: The submission introduces COGNATE, a novel framework designed to optimize sparse tensor programs on emerging hardware accelerators using machine learning-based cost models. It addresses the challenges of optimizing sparse tensor programs, such as Sparse Matrix-Matrix Multiplication (SpMM) and Sampled Dense-Dense Matrix Multiplication (SDDMM), on early-stage accelerators where performance is sensitive to sparse input variations and data collection via simulators is costly. COGNATE leverages transfer learning by pre-training cost models on inexpensive CPU data and fine-tuning them with minimal accelerator-specific data (5% of typical requirements), achieving high sample efficiency. Key contributions include: (1) techniques to segregate program configurations into homogeneous (mapped via approximate code optimization mappings) and heterogeneous (encoded via autoencoders) components, enabling effective knowledge transfer across hardware platforms; (2) a data-frugal cost model framework that modifies the WACO architecture for better transferability, incorporating a configuration mapper, enhanced input featurizer, latent encoder, and predictor; and (3) extensive evaluations demonstrating COGNATE’s effectiveness. Main results show COGNATE achieves average speedups of 1.47× (up to 5.46×) for SpMM and 1.39× (up to 4.22×) for SDDMM on the SPADE accelerator, and 1.17× (up to 1.61×) for SpMM and 1.15× (up to 1.49×) for SDDMM on an NVIDIA A100 GPU, outperforming existing transfer learning methods by 28.44% on SPADE. These findings highlight COGNATE’s ability to deliver near-optimal performance with minimal data overhead, enhancing design space exploration for emerging sparse accelerators. Claims And Evidence: The claims in the submission are generally supported by clear and convincing evidence, including detailed experimental results, figures, and tables comparing COGNATE’s performance against baselines and alternative methods. 
The main findings—speedups of 1.47× for SpMM and 1.39× for SDDMM on SPADE, and 1.17× and 1.15× on A100 GPU—are backed by evaluations on 715 real-world matrices from the SuiteSparse collection, with geomean speedups, per-matrix speedups (Figures 5, 13-15), and accuracy metrics (Figure 6) provided. The claim of data efficiency (using 5% of typical data) is substantiated by comparisons of data collection overhead (Figure 10) and fine-tuning sample analysis (Figure 12). Ablation studies (Figure 7) and component design choices (Figures 8-9) further support the algorithmic innovations. However, two claims could be seen as less robustly evidenced: - Generalizability to Other Accelerators: The paper claims COGNATE is generalizable beyond SPADE and A100, referencing Intel PIUMA and Vesper (Section C). However, no experimental data is provided for these platforms due to proprietary restrictions or unavailable source code, weakening this claim with speculative rather than empirical support. - Near-Optimal Accuracy: The assertion of "near-optimal accuracy" (e.g., 95% of optimal speedup on SPADE) relies on comparing COGNATE’s top-5 predictions to an optimal baseline derived from exhaustive configuration testing. While results are strong, the lack of a statistical significance test or error bounds around the 95% figure undermines the claim. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria in COGNATE are well-suited to the problem of optimizing sparse tensor programs on emerging hardware accelerators. The transfer learning approach—pre-training on CPU data and fine-tuning with minimal accelerator data—addresses the high cost of simulator-based data collection and input sensitivity of sparse operations like SpMM and SDDMM. Key components (configuration mapper, latent encoder, enhanced input featurizer) logically tackle the heterogeneity and data efficiency challenges outlined. 
The evaluation criteria, including geomean speedups, top-1/top-5 predictions, and accuracy metrics (Pairwise Ranking Loss, Ordered Pair Accuracy, Kendall’s Tau), effectively measure performance and ranking quality against baselines (WACO+FA/FM, no-transfer models). The use of the SuiteSparse Matrix Collection (715 real-world matrices) as a benchmark dataset is appropriate, offering a diverse, established standard for sparse tensor research. Testing on SPADE and A100 GPU aligns with the focus on emerging and established accelerators. Overall, the methods and criteria are sensible and relevant to the application. Theoretical Claims: The submission primarily focuses on empirical contributions rather than theoretical proofs, so there are no formal mathematical proofs to check for correctness. Experimental Designs Or Analyses: Dataset, Training, and Evaluation Setup (4.1): - Design: Uses SuiteSparse Matrix Collection (1500 matrices for training, 715 for evaluation) across CPU, SPADE, and A100 GPU, with 100 random configurations per matrix. - Assessment: Sound dataset choice; widely accepted benchmark. Random sampling ensures diversity. Separation of training and evaluation sets avoids overlap. No major issues. Transferability to SPADE (4.2): - Design: Compares COGNATE (Top-1/Top-5) speedups vs. baselines (zero-shot, no-transfer, WACO+FA/FM) using geomean speedups. - Assessment: Valid comparison with clear metrics. Optimal speedup baseline (exhaustive testing) is a strong reference. Fine-tuning with 5 matrices is justified later (4.4). No significant flaws. Transferability to GPU (4.3): - Design: Extends evaluation to A100 GPU, comparing COGNATE to cuSPARSE and modified WACO models. - Assessment: Logical extension to test generalizability. Consistent methodology with SPADE. cuSPARSE as a baseline is relevant. No issues. 
Supplementary Material: N/A Relation To Broader Scientific Literature: - Segregation and Encoding of Program Configurations: Builds on transfer learning concepts from Weiss et al. (2016) and Zhuang et al. (2020), extending homogeneous transfer (e.g., Sasaki et al., 2022) to heterogeneous CPU-to-accelerator settings. Feature reuse and latent encoding draw from Neyshabur et al. (2020), adapting them to sparse tensor optimization, unlike prior feature augmentation (Daumé III, 2009; Duan et al., 2012) that struggled with sparsity. - Data-Frugal Cost Model Framework: Enhances WACO (Won et al., 2023) by refining its SCNN-based architecture (Graham & Van der Maaten, 2017) for transferability, contrasting with data-intensive models like Ansor (Zheng et al., 2020). Aligns with few-shot learning ideas (Shen et al., 2021), reducing data needs compared to traditional ML-based optimization (Chen et al., 2018b; Baghdadi et al., 2021). - Evaluation on Sparse Accelerators: Extends sparse tensor optimization from CPU/GPU systems (Kjolstad et al., 2017; Ye et al., 2023) to emerging accelerators like SPADE (Gerogiannis et al., 2023), complementing hardware-specific efforts (Hegde et al., 2019; Li et al., 2023). Speedup results (1.47× SpMM, 1.39× SDDMM) improve on prior sparse kernel optimizations (Hong et al., 2019; Jiang et al., 2020), offering a data-driven alternative to analytical models (Jin et al., 2024). COGNATE bridges ML-based program optimization and hardware acceleration, advancing sample efficiency and cross-platform applicability beyond prior works. Essential References Not Discussed: - Sparse Accelerator Design Context: Missing: "OuterSPACE" by Parashar et al. (MICRO 2017) introduced a sparse tensor accelerator with configurable tiling, relevant to COGNATE’s mapping of optimizations like tiling and barriers. - Transfer Learning Efficiency: Missing: "MetaTune" by Lee et al. 
(MLSys 2023) proposed a meta-learning approach for tuning tensor programs across platforms with minimal data, akin to COGNATE’s few-shot fine-tuning. - Cost Model Accuracy: Missing: "AutoTVM" by Chen et al. (OSDI 2018) (beyond the cited TVM paper) detailed a tuning framework with cost models for sparse workloads, achieving near-optimal schedules. Other Strengths And Weaknesses: Strengths: - Originality: Creatively combines transfer learning, feature segregation, and latent encoding to address sparse tensor optimization on emerging accelerators, a novel synthesis of ideas from Neyshabur et al. (2020) and Won et al. (2023). - Significance: Tackles a real-world bottleneck in early-stage accelerator design, offering a practical, data-frugal solution with significant speedups (up to 5.46×), impactful for hardware-software co-design. - Clarity: Well-structured with clear figures (e.g., Figure 3) and detailed appendices, making complex concepts accessible. Weaknesses: - Originality: While innovative, it heavily builds on WACO (Won et al., 2023), potentially limiting its perceived novelty in the ML optimization space. - Significance: Generalizability to untested accelerators (e.g., PIUMA, Vesper) is speculative, tempering its broader impact claims. Other Comments Or Suggestions: Could use a brief table summarizing key hyperparameters for autoencoders (like Table 3 for main model) to improve reproducibility. Questions For Authors: How was the 5-matrix fine-tuning sample size chosen beyond empirical observation (e.g., statistical power analysis), and how sensitive are results to slight variations (e.g., 3 or 7 matrices)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing the significance of our contributions. We are especially grateful for the time and effort you dedicated to providing such a detailed and thoughtful review. We also appreciate the additional references you shared, which we'll include in the final version. We hope that our following response addresses your suggestions and questions.

> ***Other Comments*** \
> ... key hyperparameters for autoencoders … to improve reproducibility.

We thank the reviewer for pointing this out. The hyperparameters for the autoencoder training are:
- Learning Rate: 0.001
- Loss: MSE
- Optimizer: Adam
- Batch Size: 32
- Epochs: 1000

We will include this information in a table similar to Table 3.

> ***Question 01*** \
> ... how sensitive are results to slight variations (e.g., 3 or 7 matrices)?

We thank the reviewer for this thoughtful question. The choice of 5 matrices for fine-tuning was primarily guided by empirical observation and the need to balance transfer learning effectiveness with the cost of data collection. As mentioned in the paper, “the matrices were randomly selected from the training set while ensuring a balanced representation of their dimensions and sparsity.” To do this, we first divided the available matrices into five groups based on input size [8192, 32768, 65536, 131072, and > 131072], then randomly sampled a matrix from each group while ensuring the selected subset captured a reasonable range of sparsity. We used the same randomly selected matrices for the non-transfer learning baselines to ensure consistency and fair comparison across experimental settings. In response to the reviewer’s suggestion, we conducted additional experiments for SpMM using 3 and 7 matrices to assess sensitivity to small variations in fine-tuning set size.
The observed top-1 speedups (compared to 1.40× with 5 matrices) were:
- 3 matrices: 1.301×
- 7 matrices: 1.405×

These results suggest that COGNATE is relatively robust to small variations in the size of the fine-tuning dataset. While using only 3 matrices results in a slight drop in performance, the model still achieves meaningful gains over the zero-shot baseline. Finally, we acknowledge that no formal statistical power analysis was used in selecting the fine-tuning size. We view this as a promising direction for future work.

> ***Weakness 01*** \
> While innovative, it heavily builds on WACO …, potentially limiting its perceived novelty in the ML optimization space.

We thank the reviewer for this observation. We agree that our work builds on top of WACO, the current state of the art in ML-based sparse autotuning. However, the key contributions of COGNATE are distinct and extend beyond WACO in several important ways. While we adopt WACO’s search space and program representation due to their relevance to sparse tensor programs, our primary focus is cross-platform generalization and efficient model transfer, which are orthogonal to WACO’s contributions. Specifically, COGNATE’s framework leverages the homogeneity of input features across hardware platforms while mitigating heterogeneity to efficiently fine-tune learned cost models for accelerating sparse operations on emerging hardware. As part of future work, we are actively exploring the application of COGNATE to other domains beyond sparse tensor programs and WACO’s framework.

> ***Weakness 02*** \
> Generalizability to untested accelerators (e.g., PIUMA, Vesper) is speculative, ...

We acknowledge the reviewer’s concern. Our current evaluation focuses on two examples (SPADE and NVIDIA A100) primarily due to practical constraints, including lack of access to closed-source platforms like PIUMA and Vesper.
Even if we had access, the data collection process for these accelerators would be highly time-consuming, likely requiring millions of machine hours to collect sufficient data for training, validation, and testing. For instance, collecting performance data for training, validation, and testing for SPADE required approximately 4 million CPU hours. Although we parallelized the required experiments across 32 machines, each with 64 CPU cores, this process took us nearly three months. Hence, while extending our evaluation to even more hardware platforms would be desirable, it was not feasible given the computational resources and simulators we had access to. However, we emphasize that COGNATE was designed with hardware-agnostic principles in mind. We believe that as a wider range of accelerators becomes accessible to the research community, and as sparse compilation frameworks like SparseTIR evolve, COGNATE can be extended with minimal changes. We will further clarify this point in the final version of the paper. That being said, in the final version, we will tone down our generalization claims and more clearly state that the quantitative demonstration of COGNATE’s even broader applicability (beyond the two hardware accelerators we evaluated) remains an important direction for future work. --- Rebuttal Comment 1.1: Comment: Acknowledged --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for acknowledging our efforts and clarifications.
Summary: The paper introduces COGNATE, a novel framework for developing learned cost models to optimize sparse tensor programs on emerging hardware platforms. COGNATE leverages transfer learning to adapt cost models from general-purpose hardware (e.g., CPUs) to specialized accelerators with minimal fine-tuning data. Main Contributions:
- Transfer Learning Approach: COGNATE uses a pre-trained model on CPU data and fine-tunes it on sparse accelerators, significantly reducing the need for expensive simulation data.
- Homogeneity and Heterogeneity Handling: It maps similar code optimizations across platforms and uses autoencoders to encode heterogeneous components, allowing for efficient knowledge transfer.
- Performance: COGNATE achieves average speedups of 1.47× for Sparse Matrix-Matrix Multiplication (SpMM) and 1.39× for Sampled Dense-Dense Matrix Multiplication (SDDMM) on the SPADE accelerator.

Update after rebuttal: I would like to sincerely thank the authors for taking the time to respond to all the issues I raised. However, as I pointed out in the initial review, the design and evaluation are quite limited, covering only SPADE and the NVIDIA A100 GPU. The authors claimed that focusing on just two examples (SPADE and NVIDIA A100) is primarily due to practical constraints, such as lack of access to closed-source platforms. However, there are plenty of accessible platforms besides closed-source ones, to name a few: the NVIDIA Jetson AGX Orin and the NVIDIA H20 and H100 GPUs. Moreover, from the authors' replies, the effectiveness of the proposed COGNATE method is highly dependent on the hyper-parameter settings. For example, the size of the dataset will significantly influence the transfer effect. This raises concerns about the generalization of the COGNATE method. Finally, the Sparse Tensor Core features may change significantly across accelerator hardware platforms.
For example, the H100 GPU further supports the new FP8 data type and introduces the Tensor Memory Accelerator (TMA). The COGNATE method also fails to address how to fit the 2:4 structured sparse pattern, which is the most important feature introduced in the NVIDIA A100 sparse Tensor Core. With all these concerns, I still hold my initial score. Thus, the current version may not meet the acceptance threshold for ICML. Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence, but there are a few aspects that could be scrutinized further: 1. Performance Comparisons: - Evidence: The paper provides extensive comparisons with existing techniques like WACO+FA and WACO+FM, showing that COGNATE outperforms them by a significant margin (28.44% improvement for SpMM on SPADE) - Potential Issue: While the comparisons are thorough, it would be beneficial to see more detailed analysis on why COGNATE performs better, especially in terms of its ability to handle heterogeneity and its data efficiency 2. Data Efficiency: - Evidence: COGNATE is shown to achieve comparable performance with only 5% of the data required by accelerator-specific models, which is a significant reduction in data collection overhead - Potential Issue: The paper could further elaborate on how this efficiency is achieved, particularly in terms of the autoencoder's role in compressing heterogeneous features and the impact of using a reduced number of layers in the cost model 3. 
Generalizability: - Evidence: COGNATE demonstrates its applicability across different hardware platforms, including SPADE and NVIDIA A100 GPU, with notable speedups in both cases - Potential Issue: While the results are promising, additional evaluations on more diverse hardware platforms would strengthen the claim of generalizability Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper make sense for the problem of optimizing sparse tensor programs on emerging hardware platforms. The paper addresses a critical challenge in optimizing sparse tensor programs, which are essential in deep learning and graph analytics. The use of real-world sparse matrices from the SuiteSparse Matrix Collection provides a comprehensive and realistic evaluation setup. Theoretical Claims: The paper on COGNATE does not present formal proofs for its theoretical claims. Instead, it focuses on empirical evaluations and algorithmic design to support its contributions. However, there are no explicit mathematical proofs provided in the paper for theoretical claims like: - Optimality of the Transfer Learning Approach: The paper demonstrates empirically that COGNATE outperforms other transfer learning techniques but does not provide a theoretical proof of optimality. - Effectiveness of Latent Encoding: The use of autoencoders to handle heterogeneous components is shown to be effective in practice, but there is no formal proof of its theoretical advantages over other methods like feature augmentation. - Data Efficiency: While the paper shows that COGNATE achieves comparable performance with significantly less data, there is no formal proof that this approach is optimal in terms of data efficiency. 
Experimental Designs Or Analyses: The experimental design and analyses in the paper on COGNATE appear to be sound, but there are a few aspects that could be scrutinized further: - The evaluations on more diverse hardware platforms (e.g., other specialized accelerators, FPGAs) could strengthen the claim of generalizability. - Further analysis on how the number of fine-tuning samples affects performance could provide deeper insights into the limits of COGNATE's data efficiency. - Including other metrics (e.g., energy efficiency, memory usage) could offer a more holistic evaluation of COGNATE's benefits. Supplementary Material: Yes. I have reviewed supplementary materials what was included in the main text of the paper. Relation To Broader Scientific Literature: The key contributions of the paper on COGNATE are closely related to the broader scientific literature in several ways: 1. Transfer Learning for Cost Models: Prior Work: Transfer learning has been successfully applied in various domains to reduce data requirements for target tasks (Weiss et al., 2016; Zhuang et al., 2020). In program optimization, transfer learning has been used to adapt cost models across similar hardware platforms (Sasaki et al., 2022; Zheng et al., 2021). 2. Handling Heterogeneity in Transfer Learning: Prior Work: Existing heterogeneous transfer learning techniques, such as feature augmentation and feature mapping, have limitations when dealing with diverse program configurations across different hardware platforms (Daumé III, 2009; Duan et al., 2012). 3. Optimization of Sparse Tensor Programs: Prior Work: Optimizing sparse tensor programs is crucial for deep learning and graph analytics, with techniques like TACO (Kjolstad et al., 2017) and WACO (Won et al., 2023) providing significant performance improvements. 4. 
Data Efficiency in Early-Stage Hardware Development: Prior Work: The high cost of collecting large datasets for emerging hardware platforms is a significant challenge (Gerogiannis et al., 2023). Essential References Not Discussed: No Other Strengths And Weaknesses: The paper could provide more discussion on the broader impact of COGNATE beyond the specific hardware platforms evaluated (SPADE and NVIDIA A100 GPU). Demonstrating its applicability to a wider range of accelerators or scenarios could enhance its significance. Some sections, particularly those detailing the mapping functions and autoencoder training, could benefit from additional illustrations or step-by-step explanations to improve clarity for readers unfamiliar with these techniques. Additionally, the paper assumes a strong background in sparse tensor programs and transfer learning, which might limit accessibility for readers from other domains. The paper could provide more details on the hyperparameter tuning process and how specific hyperparameters were chosen. This would help in reproducing the results and understanding the sensitivity of COGNATE to different hyperparameter settings. Other Comments Or Suggestions: N/A Questions For Authors: 1. How do you envision extending COGNATE to support a broader range of emerging hardware platforms beyond SPADE and NVIDIA A100 GPU? Are there specific challenges or modifications needed for other types of accelerators? 2. You mention that using a moderate-sized dataset for the source model helps mitigate negative transfer. Can you provide more insights into how the size of the source dataset affects the fine-tuning performance on target platforms? 3. As the complexity of sparse tensor programs increases, how does COGNATE's performance scale? Are there any plans to address more complex operations or larger-scale programs? 4. 
Given the rapid evolution of hardware, how does COGNATE adapt to changes in hardware architecture or new optimizations introduced in emerging accelerators? 5. Can you elaborate on the pairwise ranking loss used in training the cost model? How does this objective function contribute to the model's ability to identify optimal program configurations? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing the significance of the problem we address and for acknowledging the contributions of our work. We are especially grateful for the time and effort you invested in providing a detailed and thoughtful summary of the paper’s strengths and areas for improvement. We hope that our following response addresses suggestions and questions. **Q1 (Question) & W1 (Weakness)**: We thank the reviewer for this thoughtful question. In Appendix Section C (Generalizability), we provide an initial qualitative discussion and intuition on how COGNATE can be generalized. Our current evaluation focuses on two examples (SPADE and NVIDIA A100) primarily due to practical constraints, such as lack of access to closed-source platforms (PIUMA and Vesper). Even if we had access, the data collection for these accelerators would be highly time-consuming, requiring millions of machine hours. For instance, collecting performance data for SPADE required approximately 4 million CPU hours. Hence, while extending our evaluation to more hardware platforms would be desirable, it was not feasible given the computational resources and simulators we had access to. We believe that as a wider range of accelerators becomes accessible to the research community, COGNATE can be extended with minimal changes. We will further clarify this point in the final version of the paper. **W2 & W3**: We thank the reviewer for pointing this out. In the final version, we will improve the content to enhance clarity and include more details about hyperparameter tuning. **Q2**: We thank the reviewer for this insightful question. We explored the effects of negative transfer by training the source model using data samples with 5, 20, 500, and 1000 matrices (Figure 11). While a larger dataset improves generalization in the source domain, it may also cause over-specialization to the source platform. 
This overfitting hampers fine-tuning, as specialized features may not transfer well to the target platform. Our empirical findings show that training the source model on 100 matrices strikes a good balance, capturing useful patterns while maintaining generality for transfer. This reduces the effort needed during fine-tuning, leading to faster convergence and better performance. This is especially valuable given the high cost of collecting data for SPADE. **Q3**: We thank the reviewer for this important question. Our current evaluation focuses on SpMM and SDDMM because these are the sparse operations currently natively supported by SPADE and SparseTIR. That said, SpMM and SDDMM serve as foundational blocks for complex programs. To evaluate COGNATE’s scalability, we conducted preliminary experiments during the rebuttal period on an end-to-end GNN workload running on a GPU, using the ‘transient’ sparse matrix from our test set as input (178,866 nodes with 961,368 non-zeros) and the GraphSAGE model. The model was configured with 3 hidden layers and 256 hidden features. COGNATE achieved notable speedups over the default SparseTIR implementation, with a 1.30× speedup for inference and 1.28× for training, demonstrating the scalability of COGNATE. We will include this result in the final version of the paper and highlight it as a key direction moving forward. **Q4**: We appreciate the reviewer raising this point. While changes in emerging accelerators may necessitate updates to configuration mappings or model parameters, COGNATE significantly reduces this burden by relying on lightweight fine-tuning, rather than requiring retraining from scratch. As long as the overall structure of the sparse tensor program remains consistent, COGNATE can quickly adapt by using a small number of new performance samples. In contrast, traditional cost model construction approaches would require re-evaluating a large number of configurations. 
We provided further elaboration on this in under Generalizability (Section C) in the Appendix. **Q5**: We appreciate the reviewer’s interest in the training objective. Our goal during model training is not to predict the absolute runtime of a sparse tensor program configuration, but rather to rank candidate configurations by their relative performance, enabling the selection of the best-performing ones. To this end, we adopted a pairwise ranking loss, which is more aligned with the optimization task than pointwise losses. This objective improves robustness to noise and runtime scale variance, which are common in early-stage accelerator performance data, as the model focuses on preserving relative orderings. Furthermore, prior work (Kaufman et al., 2021) has shown that training with ranking loss significantly improves a model’s ability to identify optimal configurations. We will clarify this motivation and include these details in the final version to make the learning objective and its impact more explicit. Further, if space permits, we will move the loss equation (in Appendix Section A.4) to the main text of the paper.
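The pairwise ranking objective described in this answer can be sketched generically as follows (our own illustration of a logistic pairwise ranking loss over configuration pairs; the function name and exact form are assumptions, since the paper's actual loss appears in its Appendix A.4 and may differ in weighting or margin):

```python
import numpy as np

def pairwise_ranking_loss(scores, runtimes):
    """Logistic pairwise ranking loss over configuration pairs.

    scores:   model predictions (higher = predicted to be faster)
    runtimes: measured runtimes (lower = actually faster)

    For every ordered pair where configuration a is measured faster
    than b, the model is penalized unless scores[a] exceeds scores[b];
    only relative orderings matter, not absolute runtimes.
    """
    loss, pairs = 0.0, 0
    for a in range(len(scores)):
        for b in range(len(scores)):
            if runtimes[a] < runtimes[b]:  # a is the faster configuration
                loss += np.log1p(np.exp(-(scores[a] - scores[b])))
                pairs += 1
    return loss / max(pairs, 1)
```

A model that ranks configurations correctly (e.g., scores `[3, 2, 1]` for runtimes `[1, 2, 3]`) incurs a strictly lower loss than one with the inverted ranking, which is exactly the property the rebuttal highlights for identifying optimal configurations under noisy, scale-varying runtime data.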
am-ELO: A Stable Framework for Arena-based LLM Evaluation
Accept (spotlight poster)
Summary: The paper addresses challenges in ranking consistency and the variety of annotator abilities in arena-based evaluation of LLMs. They develop an enhanced ELO framework that replaces iterative updates with Maximum Likelihood Estimation (m-ELO). They prove theoretically that this MLE approach provides consistent and stable model rankings. The authors further extend their work with am-ELO, which factors in annotator abilities to simultaneously evaluate both model performance and annotator reliability. Experimental results validate their framework's effectiveness in stability and robustness compared to traditional ELO-based evaluation methods. Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. The authors identify instability as a significant issue in the traditional ELO rating system used for evaluating LLMs and propose a new framework, am-ELO, to address this problem. The claims are backed by both theoretical analysis and empirical experiments. The authors provide mathematical proofs for the stability and consistency of their proposed methods and demonstrate through experiments the effectiveness of their methods. Methods And Evaluation Criteria: The authors replace the iterative update method of the traditional ELO system with a MLE approach, which is theoretically sound and shown to be more stable. Additionally, the incorporation of annotator abilities into the evaluation process through am-ELO is a novel and meaningful enhancement. The evaluation criteria used are appropriate for assessing the stability and robustness of the proposed methods. Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims. Specifically, the proof for Theorem 4.1 is sound. This ensures the stability of the ELO scores obtained through the MLE method. Also, the proof for Theorem 4.2 is logically consistent and demonstrates the practical significance of the estimated annotator abilities. 
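For concreteness, the MLE objective behind m-ELO and the concavity property verified in Theorem 4.1 can be sketched in standard Bradley-Terry form (our reconstruction from the review's description; the paper's exact notation and scaling, e.g. the base-10 logistic of classical ELO, may differ):

```latex
% Log-likelihood of pairwise outcomes y_{ij} \in \{0, 1\} under a
% logistic win probability with model scores \theta:
\mathcal{L}(\boldsymbol{\theta})
  = \sum_{(i,j)} \Big[ y_{ij} \log \sigma(\theta_i - \theta_j)
    + (1 - y_{ij}) \log\big(1 - \sigma(\theta_i - \theta_j)\big) \Big],
  \qquad \sigma(x) = \frac{1}{1 + e^{-x}}.
% Each summand is a log-sigmoid of a linear function of \theta, hence
% concave; fixing one model's score removes the translation invariance
% \theta \mapsto \theta + c, leaving at most one maximizer.
```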
Experimental Designs Or Analyses: The experimental designs and analyses are valid and well-executed. The authors conducted experiments on a real-world dataset and compared the performance of the traditional ELO method with their proposed methods. The results show significant improvements in terms of lower loss values and higher consistency in model rankings. Additionally, the stability tests using perturbation strategies (Random, Equal, Flip, Mixed) effectively demonstrate the robustness of the am-ELO method. Supplementary Material: I reviewed the supplementary material, specifically the proofs of Theorem 4.1 and Theorem 4.2. These proofs are detailed and provide additional clarity on the theoretical foundations of the proposed methods. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature on LLM evaluation and ranking systems. The authors build upon the well-established ELO rating system and enhance it using principles from psychometrics and maximum likelihood estimation. The paper cites relevant prior work, such as the use of the ELO system in competitive games and its application to LLM evaluation. The proposed methods address the instability issues in the traditional ELO system and incorporating annotator abilities, which is a novel contribution in this domain. Essential References Not Discussed: The paper appears to cover most relevant prior work. However, it might benefit from a discussion on other ranking systems used in machine learning, such as the Plackett-Luce model [1], to further contextualize the contributions. Additionally, recent work on robust evaluation of LLMs, such as [2, 3], could be cited to provide a more complete picture of the current landscape. [1] Robin L Plackett. The analysis of permutations. Applied Statistics, 1975. [2] Yiqiao Jin, Minje Choi, Gaurav Verma, Jindong Wang, Srijan Kumar. 
MM-Soc: Benchmarking Multimodal Large Language Models in Social Media Platforms. In Proceedings of ACL 2024 [3] Jinjie Ni, Fuzhao Xue, Xiang Yue, Yuntian Deng, Mahir Shah, Kabir Jain, Graham Neubig, Yang You. MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures. In Proceedings of NeurIPS 2024 Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and clearly presents the problem, proposed methods, and experimental results. 2. The incorporation of annotator abilities into the evaluation process is a significant innovation that addresses a critical limitation of existing methods. 3. The experiments are thorough and demonstrate the effectiveness of the proposed framework. Weaknesses: 1. The paper could benefit from a more detailed discussion on the computational complexity of the proposed methods, especially in large-scale scenarios. 2. While the paper demonstrates the robustness of am-ELO through perturbation experiments, it would be useful to see how the method performs in real-world scenarios with varying levels of annotator quality. Other Comments Or Suggestions: The paper could include a section on the potential applications of the proposed framework beyond LLM evaluation, such as in other competitive ranking scenarios. The authors might consider discussing the limitations of their approach, such as the assumptions made about annotator behavior and the potential impact of these assumptions on the evaluation results. Questions For Authors: 1. How does the computational complexity of the proposed am-ELO method compare to the traditional ELO method, especially in scenarios with a large number of models and annotators? 2. Can the authors provide insights into how the proposed framework could be extended to handle more complex evaluation scenarios, such as multi-class or multi-label evaluations? 3. How sensitive are the results to the choice of the learning rate and the number of epochs in the gradient descent process? 
Are there any guidelines for selecting these hyperparameters? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for your high appreciation of the contribution and novelty presented in our paper. Your positive feedback means a lot to us. We also appreciate your valuable suggestions and questions regarding the computational complexity and experimental aspects of the paper. Here are our responses to each of your comments:

> **Q1**: This paper might benefit from a discussion on other related work.

We sincerely appreciate your suggestion. We have reviewed these works, which have been very enlightening for us. We will incorporate these related works into the article in future versions.

> **Q2**: While the paper demonstrates the robustness of am-ELO through perturbation experiments, it would be useful to see how the method performs in real-world scenarios with varying levels of annotator quality.

Thank you very much for your question. We have discussed the motivations behind using perturbations in our experiments and plan to collect real-world datasets and test am-ELO online in future work, **which can be seen in Comment 5 for reviewer Nkad**.

> **Q3**: How does the computational complexity of the proposed am-ELO method compare to the traditional ELO method?

Thank you for your question. The time complexity of am-ELO is equivalent to that of MLE via gradient descent, which is $O(1/\epsilon^2)$, where $\epsilon$ is the computational accuracy of the GD method. However, since the MLE objective of m-ELO is a concave function, its time complexity is sublinear, specifically $O(1/\epsilon)$ [1]. In our experiments, there was no significant difference in efficiency between am-ELO and m-ELO when running on a GPU. Moreover, as tools for LLM evaluation, their inference costs are far lower than those of large-model inference. Hence, these costs are entirely acceptable.
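The gradient-based MLE discussed in this answer can be illustrated with a minimal sketch (our own reconstruction of an m-ELO-style fit as a Bradley-Terry logistic model; the function name, learning rate, and epoch count are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mle_elo(matches, n_models, lr=0.1, epochs=2000):
    """Estimate ELO-style scores by gradient ascent on the
    Bradley-Terry log-likelihood.

    matches: list of (i, j, y) with y = 1 if model i beat model j.
    Because the objective is concave, the fixed point does not depend
    on the order of the matches, unlike iterative ELO updates.
    """
    theta = np.zeros(n_models)
    for _ in range(epochs):
        grad = np.zeros(n_models)
        for i, j, y in matches:
            p = sigmoid(theta[i] - theta[j])  # predicted P(i beats j)
            grad[i] += y - p
            grad[j] -= y - p
        theta += lr * grad / len(matches)
        theta -= theta.mean()  # pin down the translation invariance
    return theta
```

This covers only the annotator-free m-ELO case; am-ELO additionally parameterizes the probability function by annotator, which the sketch does not attempt to reproduce.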
> **Q4**: Can the authors provide insights into how the proposed framework could be extended to handle more complex evaluation scenarios, such as multi-class or multi-label evaluations?

Thank you for your question. In our view, this paradigm can be applied to more complex annotation scenarios, such as multi-label evaluation, as long as **the probability density function can be reasonably defined**.

> **Q5**: The lack of a detailed sensitivity analysis regarding some parameters.

We sincerely appreciate your suggestion. Since m-ELO and am-ELO are trained to convergence, **the results are almost insensitive to the learning rate and the number of epochs**. In addition, we have carried out sensitivity analysis experiments on two parameters. The results of these experiments will be integrated into subsequent versions of our paper. Regarding the scale factor K (learning rate) specifically, both m-ELO and am-ELO necessitate extensive training and exhibit insensitivity to this scale factor. Therefore, we focused our analysis on the consistency of the traditional ELO method across different values of K. The findings are presented in Table 1:

Table 1

| ELO | K=0.5 | K=1 | K=4 (Standard) | K=10 |
| ----------- | ------ | ------ | -------------- | ------ |
| Consistency | 0.9916 | 0.9811 | 0.9637 | 0.9305 |
| MSE | 0.1473 | 0.1368 | 0.1238 | 0.1225 |
| AUC | 0.7426 | 0.7443 | 0.7492 | 0.7505 |

Our analysis reveals that as the scale factor K increases, the model demonstrates enhanced data fit. However, this improvement in fit comes at the cost of reduced consistency. Furthermore, we carried out a hyperparameter sensitivity experiment on the **minimum number of annotations per annotator**.
The results of this experiment are detailed as follows:

Table 2

| Annotation | 10 | 20 | 30 | 40 | 50 |
| ---------- | ------ | ------ | ------ | ------ | ---------- |
| ELO | 0.9695 | 0.9637 | 0.9768 | 0.9726 | 0.9637 |
| m-ELO | 1.0000 | 1.0000 | 0.9979 | 0.9968 | 1.0000 |
| am-ELO | 0.8642 | 0.9305 | 0.9568 | 0.9979 | **1.0000** |

As our research indicates, both the traditional ELO and m-ELO are resilient to variations in the annotation count parameter. However, their performance patterns diverge significantly. The traditional ELO consistently exhibits inconsistent results regardless of parameter fluctuations. In contrast, m-ELO tends to converge towards consistency, underscoring its enhanced stability. Regarding am-ELO, it is true that it is highly sensitive to the annotation count: when faced with sparse annotations, the consistency of am-ELO is indeed compromised. However, we have developed a practical solution. By implementing a screening process for annotators, we can effectively address this issue. This screening process ensures that only reliable annotators with sufficient annotations are included in the analysis, thus improving the consistency of am-ELO.
Summary: The paper focuses on Arena-based LLM evaluation. The main algorithmic ideas include enhancing the ELO Rating System. It replaces the iterative update method with a MLE approach (m-ELO), which is more stable as it is insensitive to sample order. The am-ELO is also proposed, which modifies the ELO probability function to incorporate annotator abilities. The main findings are that the proposed methods can effectively model annotators, identify anomalous annotators, and reduce the inconsistency of ELO scores. Experimental results show that am-ELO outperforms the traditional ELO method in prediction tasks, with a lower loss and higher generalization ability. Claims And Evidence: The claims are generally supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. The m-ELO and am-ELO methods address the instability issues in the traditional ELO method, which is crucial for accurate LLM evaluation. The use of real-world datasets like Chatbot for evaluation is appropriate, as it reflects the practical scenario of LLM comparison. The evaluation metrics such as MSE, AUC, loss, and consistency of rankings are well-chosen to measure different aspects of the methods' performance, including prediction accuracy, goodness-of-fit, and stability. Theoretical Claims: The correctness of the proofs for theoretical claims has been checked. For Theorem 4.1, the authors prove that when fixing the score of one model, the log-likelihood function with respect to is a concave function and has at most one extreme point. This is done by calculating the second-order partial derivatives of the log-likelihood function and showing that the Hessian matrix is negative definite. For Theorem 4.2, the authors prove the properties of annotator abilities. The proofs are logical and based on sound mathematical reasoning. 
Experimental Designs Or Analyses: The soundness/validity of the experimental designs and analyses has been checked. In the experiments, the authors compare the proposed methods with the traditional ELO method. They use appropriate baselines and perform multiple random initializations and repeated experiments (shuffling the dataset 1000 times for the traditional ELO method). The way they record the loss during the gradient descent process and calculate the consistency of rankings is reasonable for analyzing the convergence and efficiency of the methods. The perturbation strategies in the stability experiments are well-designed to simulate real-world annotation anomalies. Supplementary Material: The supplementary material contains only the code for this paper, and I have checked it. Relation To Broader Scientific Literature: The paper improves upon the widely-used ELO rating system, which is the foundation for many existing model arena evaluation systems. The proposed methods address the instability issues and lack of annotator ability consideration in previous works, and the use of MLE and psychometric concepts for annotator modeling is an extension of relevant research in statistics and psychometrics. Essential References Not Discussed: There are no essential references that are not currently cited/discussed in the paper. Other Strengths And Weaknesses: **Strengths:** Originality: The combination of MLE and annotator ability modeling in the ELO-based evaluation framework is novel. It provides new solutions to the long-standing problems of instability and annotator variability in LLM evaluation. Significance: The proposed methods can improve the reliability and accuracy of LLM evaluation, which is of great significance for the development and deployment of LLMs. It helps to make more informed decisions in model selection and research directions. Clarity: The paper is well-written.
The algorithms, theoretical proofs, and experimental results are clearly presented, making it easy for readers to understand the research content. **Weaknesses:** The annotator modeling in the paper is somewhat simplistic. It mainly focuses on the annotator's discriminatory ability and consistency with other annotators, and may not fully capture the broader capabilities of annotators. Other Comments Or Suggestions: See the questions below. Questions For Authors: 1. In the am-ELO method, how do you plan to extend the annotator modeling to better capture the diverse capabilities of annotators? A more comprehensive answer could further strengthen the potential of this research. If the authors have clear plans or ideas, it would enhance the value of this work for future research. 2. Can the am-ELO method still be applied in scenarios where the annotator is not a human but a Judge LLM? 3. This article identifies abnormal annotators by screening those with negative ability. Is there any other baseline method to do this? What are the advantages of am-ELO? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Regarding the questions you raised, we have carefully considered each point and have made the following responses:

> **Q1**: The annotator modeling in the paper is somewhat simplistic. It mainly focuses on the annotator's discriminatory ability and consistency with other annotators and may not fully capture the broader capabilities of annotators.
>
> **Q2**: How do you plan to extend the annotator modeling to better capture the diverse capabilities of annotators?

Thank you for your suggestion. Indeed, our work has primarily focused on proposing a stable framework that can simultaneously model both annotators and models, rather than comprehensively modeling the annotators. In subsequent work, we will investigate how to model annotators more comprehensively while evaluating model capabilities.

> **Q3**: Can the am-ELO method still be applicable in scenarios where the annotator is not a human but a Judge LLM?

This is indeed an issue we plan to research in the future. We believe that this method is applicable to both human annotators and Judge LLMs. In fact, by using a combination of human and Judge LLM annotations, we can potentially reduce annotation costs and evaluate the capabilities of Judge LLMs. However, we have not yet determined how to validate the effectiveness of this method.

> **Q4**: Is there any other baseline method for identifying abnormal annotators?

In Arena systems, historical annotation records are typically used to identify abnormal annotators through hypothesis testing [1]. However, these hypotheses often rely on a rather strong assumption, namely that all annotators in the historical records are normal. Unfortunately, it is extremely difficult to verify this assumption. am-ELO only assumes that most annotators are normal, a much simpler condition to satisfy, which is its advantage over hypothesis-testing methods.
In future work, we plan to evaluate and compare our am-ELO method in other scenarios specifically tailored to the large-scale arena evaluation context [2]. This will enhance the comprehensiveness of our evaluation and more accurately position the contributions of our research. [1] Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference. 2024. [2] Decentralized Arena via Collective LLM Intelligence: Building Automated, Robust, and Transparent LLM Evaluation for Numerous Dimensions. 2024.
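The screening idea in this rebuttal (fit annotator abilities jointly with model scores, then flag annotators whose estimated ability is negative) can be sketched as follows. This is a hypothetical illustration: it assumes a multiplicative ability term, P_k(i beats j) = sigmoid(a_k · (theta_i − theta_j)), which may differ from the paper's exact am-ELO probability function.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_am_elo(logs, n_models, n_annotators, lr=0.3, steps=3000):
    """Jointly estimate model scores theta and annotator abilities a.

    logs: list of (k, i, j, y): annotator k judged that model i beat
    model j (y = 1) or lost (y = 0). Assumed probability model:
    P = sigmoid(a_k * (theta_i - theta_j)); a negative a_k means
    annotator k systematically inverts the consensus preference.
    """
    theta = np.zeros(n_models)
    a = np.ones(n_annotators)
    for _ in range(steps):
        g_t = np.zeros(n_models)
        g_a = np.zeros(n_annotators)
        for k, i, j, y in logs:
            d = theta[i] - theta[j]
            p = sigmoid(a[k] * d)
            g_t[i] += (y - p) * a[k]
            g_t[j] -= (y - p) * a[k]
            g_a[k] += (y - p) * d
        theta += lr * g_t / len(logs)
        a += lr * g_a / len(logs)
        theta -= theta.mean()  # fix the additive gauge of the scores
    return theta, a

def flag_abnormal(a):
    """Screening rule from the rebuttal: negative ability = abnormal."""
    return [k for k, ak in enumerate(a) if ak < 0]
```

With two consistent annotators and one who inverts most preferences, the inverted annotator's fitted ability becomes negative and is flagged, while the model ranking follows the majority.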
Summary: This paper introduces am-ELO, an evaluation framework designed to enhance the ELO rating system for evaluating LLMs through arena-based comparisons. Traditional ELO systems exhibit instability mainly due to their sensitivity to data ordering and their failure to account for variations in annotator expertise, resulting in inconsistent and potentially biased evaluation outcomes. To resolve these issues, m-ELO replaces the traditional iterative ELO method with an MLE-based approach, providing theoretical guarantees for consistency and stability in model rankings. In addition, am-ELO extends m-ELO by explicitly modeling annotator abilities. Claims And Evidence: The paper points out the instability of the existing iterative ELO method in terms of ordering (and unreliable annotations). The proposed am-ELO method effectively reduces this instability by removing the data-ordering issue and leveraging the reliability of each annotator. Both theoretical proofs and empirical evidence across the paper convincingly support these claims. Methods And Evaluation Criteria: The proposed methods (m-ELO and am-ELO) and their evaluation criteria appropriately address the identified problems. The evaluation utilizes Chatbot Arena, extensive perturbation experiments, and robust statistical measures (consistency, MSE, AUC, F1 scores) that effectively assess the methods' stability and reliability. However, the evaluation criteria still lack a detailed sensitivity analysis regarding some parameters, such as annotator counts or the scale factor K. Also, only slightly outdated models are compared. Theoretical Claims: The theoretical proofs seem to sufficiently support the authors' claims, but I am not sure. One limitation is that the analysis does not adequately address the potential impact of noisy or sparse annotation datasets on MLE stability. Experimental Designs Or Analyses: While the experiments seem reasonable, the limitations below could be addressed further.
- Experiments on one dataset (Chatbot Arena) provide limited evidence of method robustness.
- Stability experiments rely on artificial perturbations, lacking clear justification that these perturbation methods accurately represent realistic annotator behavior.
- No baseline or state-of-the-art comparison beyond the traditional ELO was considered, missing an opportunity to compare am-ELO with other advanced ranking or annotator-modeling methods (e.g., advanced crowdsourcing or Bayesian methods).

Supplementary Material: Yes Relation To Broader Scientific Literature: The paper clearly situates itself within the broader literature on model evaluation, annotator reliability, and statistical ranking methods. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper is well-written and easy to follow. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback on our manuscript. We sincerely appreciate your time and effort in evaluating our work, and we address your questions and suggestions one by one below:

> **Q1**: The evaluation criteria still lack a detailed sensitivity analysis.

We have conducted a sensitivity analysis on the **scale factor K** and the **minimum number of annotations**. The results will be added to future versions of the paper.

| ELO | K=0.5 | 1 | 4 (Standard) | 10 |
| ----------- | ------ | ------ | ------------ | ------ |
| Consistency | 0.9916 | 0.9811 | 0.9637 | 0.9305 |
| MSE | 0.1473 | 0.1368 | 0.1238 | 0.1225 |
| AUC | 0.7426 | 0.7443 | 0.7492 | 0.7505 |

As K increases, the model demonstrates an enhanced fit to the data. However, this improvement in fit comes at the cost of reduced consistency.

| Annotation | 10 | 20 | 30 | 40 | 50 |
| ---------- | ---------- | ---------- | ---------- | ---------- | ---------- |
| ELO | 0.9695 | 0.9637 | 0.9768 | 0.9726 | 0.9637 |
| m-ELO | **1.0000** | **1.0000** | **0.9979** | **0.9968** | **1.0000** |
| am-ELO | 0.8642 | 0.9305 | 0.9568 | 0.9979 | **1.0000** |

This shows that traditional ELO gives inconsistent results across parameter changes, while m-ELO converges towards consistency, highlighting its greater stability.

> **Q2**: The analysis fails to adequately address the impact of noisy or sparse annotation datasets on MLE stability.

You are right: our theory does not directly address the impact of noisy or sparse annotation datasets on MLE stability. However, our analysis shows that m-ELO has at most one maximum regardless of the dataset, maintaining the stability seen in the table above. In contrast, am-ELO, which models annotators explicitly, is sensitive to sparse data. In Section 4.3, we proposed selecting annotators with enough annotations. As shown in the table above, this strategy effectively lessens the negative impact of sparse datasets on MLE stability.

> **Q3**: Experiments on one dataset provide limited evidence of method robustness.
Currently, open-source arena datasets are scarce. The Chatbot Arena platform is one of the few that offer public data, and one NeurIPS 2024 paper also used just one real-world dataset [1]. Besides the dataset in our study, there is the MTBench dataset [2]. Its statistics are as follows:

| Dataset | Chatbot/MTBench |
| -------------- | --------------- |
| #Annotators | 42/7 |
| #Models | 20/6 |
| #Response logs | 4321/1044 |

However, after filtering, MTBench had a severely **limited number of annotators and models**. This scarcity made it inadequate for fully validating the stability of our method. Despite this, MTBench is still valuable for demonstrating the superiority of our am-ELO modeling:

| Method | MSE (Chatbot/MTBench) | AUC (Chatbot/MTBench) |
| ------ | --------------------- | --------------------- |
| ELO | 0.1238/0.1120 | 0.7492/0.7738 |
| m-ELO | 0.1234/0.1097 | 0.7503/0.7785 |
| am-ELO | **0.1208**/**0.1088** | **0.7581**/**0.7936** |

We will incorporate this finding in later versions of the paper.

> **Q4**: There is no clear justification that artificial perturbation methods accurately represent realistic annotator behavior.

We designed artificial perturbations to stress-test the ELO system. By setting extreme perturbations, we explored the system's robustness. In LLM evaluation, real annotator behavior is uncertain; extreme perturbations can better expose ELO's weaknesses and help evaluate its stability. Going forward, we will explore more realistic perturbations. First, we will collect a larger real dataset to develop realistic perturbation methods. Second, we will conduct online testing of am-ELO to validate its real-world effectiveness.

> **Q5**: No baseline comparison beyond the traditional ELO.

Currently, ELO algorithms are widely used in arena platforms such as Chatbot Arena. Modified ELO algorithms used in traditional competitive scenarios, such as ELO++ [3], incorporate temporal information, but this conflicts with the static nature of the LLM evaluations in our paper.
Shuffling the dataset would give these methods unreliable temporal information, making them unsuitable. Crowdsourcing and arena scenarios also differ fundamentally: arena scenarios lack ground-truth annotation values, while crowdsourcing methods assume their existence [4]. Therefore, applying annotator-modeling techniques directly to the arena setting is inappropriate. In future work, we will focus on evaluating our am-ELO method in other arena scenarios to enhance the comprehensiveness of the evaluation. Reference: [1] Elo Uncovered: Robustness and Best Practices in Language Model Evaluation. 2023. [2] Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. 2023. [3] How I won the "Chess Ratings - Elo vs the Rest of the World" Competition. 2010. [4] Learning from Crowds with Annotation Reliability. 2023.
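The consistency values quoted throughout this exchange compare rankings produced by repeated or perturbed runs. The authors' exact metric is not specified here, so the following is an assumed Spearman-style rank-correlation definition, shown as a small numpy sketch:

```python
import numpy as np

def ranking(scores):
    """Rank models from best to worst by score (0 = best)."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    ranks = np.empty(len(order), dtype=int)
    ranks[order] = np.arange(len(order))
    return ranks

def spearman_consistency(scores_a, scores_b):
    """Spearman rank correlation between two score vectors.

    Returns 1.0 when two runs (e.g., two shuffles of the match log)
    produce identical model rankings, and -1.0 for fully reversed ones.
    """
    ra = ranking(scores_a) - (len(scores_a) - 1) / 2.0
    rb = ranking(scores_b) - (len(scores_b) - 1) / 2.0
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))
```

Under this definition, the tables above would report such a correlation averaged over repeated fits; a value of 1.0000 (as for m-ELO) means every run yields the same ranking.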
Summary: The paper introduces a novel stable arena framework, am-ELO, for evaluating LLMs using an enhanced ELO rating system. The authors address the instability issues in the traditional ELO method by replacing the iterative update approach with an MLE method, termed m-ELO. They further propose am-ELO, which incorporates annotator abilities into the ELO rating system, allowing for simultaneous estimation of model scores and annotator reliability. The paper provides theoretical proofs of the consistency and stability of the MLE approach and demonstrates through experiments that am-ELO offers a more robust, accurate, and stable evaluation method for LLMs compared to the traditional ELO system. Claims And Evidence: The claims made in the paper are well-supported by clear and convincing evidence. The authors provide theoretical proofs for the stability and consistency of the MLE approach (Theorem 4.1) and demonstrate the practical significance of annotator ability modeling (Theorem 4.2). The experimental results, including the comparison of log-likelihood losses and the stability of ELO scores under different perturbation strategies, further validate the claims. The paper also includes a case study that highlights the differences in model rankings between the proposed methods and the traditional ELO method, reinforcing the superiority of am-ELO. Methods And Evaluation Criteria: The proposed methods, m-ELO and am-ELO, are well-suited for the problem of LLM evaluation in arena-based settings. The use of MLE to replace the iterative ELO update method addresses the instability issue caused by the order of data presentation. The incorporation of annotator abilities into the ELO system is a significant improvement, as it accounts for the variability in human judgment, which is often overlooked in traditional ELO systems. Theoretical Claims: Yes, the authors provided detailed proofs for Theorems 4.1 and 4.2, and upon inspection, no issues were found.
Experimental Designs Or Analyses: Yes, the authors have demonstrated through prediction tasks that am-ELO has good fitting and generalization abilities. Multiple tests have shown that am-ELO can converge and obtain unique results. Simulation experiments have shown that am-ELO can effectively identify disturbances. The design and conclusions of these experiments are very reasonable. Supplementary Material: Yes, I check the code for this paper. Relation To Broader Scientific Literature: The paper is well-situated within the LLM evaluation. In my opinion, this article is an extension of Chatbot Arena. The incorporation of annotator abilities draws from psychometrics and Item Response Theory (IRT), which are well-established in educational assessment. Essential References Not Discussed: Perhaps the current work on annotator modeling in crowdsourcing can provide some ideas for the paper. Other Strengths And Weaknesses: The paper's strengths lie in its originality and significance. The proposed am-ELO framework addresses a critical issue in LLM evaluation by incorporating annotator abilities and providing a stable ranking system. The theoretical proofs and experimental results are convincing and demonstrate the practical utility of the proposed methods. One potential weakness is the simplicity of the annotator ability modeling, which primarily focuses on discriminatory ability and consistency. Future work could explore more nuanced dimensions of annotator capabilities to further enhance the evaluation framework. Other Comments Or Suggestions: The paper is well-written and clearly presents its contributions. However, there are a few minor typos and formatting issues that could be addressed in the final version. For example, the descriptions of some formulas are not particularly clear in Section 4.2 Questions For Authors: 1. Changing the iterative algorithm to the MLE method undoubtedly increases the computation time. 
So I am curious how the proposed am-ELO framework performs in scenarios where the number of annotators is very large, and how scalable the method is in such cases. 2. Could the authors discuss potential limitations of the proposed method when applied to highly imbalanced datasets, where some models are significantly stronger or weaker than others? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. Regarding the questions you raised, we have carefully considered each point and have made the following responses:

> **Q1**: One potential weakness is the simplicity of the annotator ability modeling, which primarily focuses on discriminatory ability and consistency.

Thank you for your suggestion. Indeed, our work has primarily focused on proposing a stable framework that can simultaneously model both annotators and models, rather than comprehensively modeling the annotators. In subsequent work, we will investigate how to model annotators more comprehensively while evaluating model capabilities.

> **Q2**: There are a few minor typos and formatting issues that could be addressed in the final version. For example, the descriptions of some formulas are not particularly clear in Section 4.2.

Thank you for your kind reminder. We will correct these issues and refine the formulas in subsequent versions.

> **Q3**: How does the proposed am-ELO framework perform in scenarios where the number of annotators is very large?

This is a question well worth discussing. In Comment 2 of Review vzb8, the relationship between time complexity and precision was discussed, indicating that this complexity is negligible compared to the inference cost of large-scale models. As the number of annotators gradually increases, since each annotator is associated with only a single parameter, the impact on the total number of parameters of the model is relatively small. Moreover, with the help of a GPU, the results can be easily computed.

> **Q4**: Could the authors discuss potential limitations of the proposed method when applied to highly imbalanced datasets, where some models are significantly stronger or weaker than others?

We believe that the methods we propose are hardly affected by the dataset, especially the m-ELO method. Theorem 4.1 proves that the MLE of this method has at most one extreme point.
When there is a significantly stronger model, there is usually no extreme point, or rather, the extreme point occurs at infinity. Although the MLE does not converge in this case, we can still obtain a stable ranking.
Uncertainty-Based Extensible Codebook for Discrete Federated Learning in Heterogeneous Data Silos
Accept (poster)
Summary: This paper introduces Uncertainty-Based Extensible-Codebook Federated Learning (UEFL), a novel framework designed to address data heterogeneity in federated learning (FL). The key innovation lies in dynamically mapping latent features to trainable discrete vectors (codewords) and extending the codebook for silos with high uncertainty, identified via Monte Carlo Dropout. UEFL demonstrates significant improvements in accuracy and uncertainty reduction across various datasets, including MNIST, CIFAR10, and CIFAR100. The extensible codebook approach, initialized using K-means, ensures efficient adaptation to unseen data distributions while maintaining low computational overhead. Claims And Evidence: The claims of improved accuracy and uncertainty reduction are well-supported by empirical evidence from experiments on multiple datasets. Methods And Evaluation Criteria: The proposed method is well-suited to the problem of feature heterogeneity in FL, and the use of an extensible codebook with uncertainty evaluation is innovative. However, the introduction of an extensible codebook raises potential concerns regarding privacy risks that warrant further discussion. Since the codebook is shared and updated across clients, there may be a risk of leakage of sensitive information embedded in the codewords, especially if adversaries attempt to reverse-engineer the mapping between latent features and codewords. While this paper emphasizes the alignment of codewords with latent features via K-means initialization to improve model performance, it remains unclear how this process is safeguarded against potential attacks, such as model inversion or membership inference attacks. Theoretical Claims: This paper provides theoretical support for the benefits of discretization in reducing noise sensitivity and dimensionality, as outlined in Appendix A. 
While the provided theorems establish the foundational advantages, including enhanced robustness to noise and reduced dimensionality, a more detailed step-by-step proof process would further strengthen the theoretical claims and improve the clarity of the mathematical reasoning. Experimental Designs Or Analyses: The experimental design is robust, with ablation studies and comparisons against strong baselines like FedAvg and DisTrans. The use of domain generalization tasks and large-scale setups (e.g., 50 and 100 clients) further strengthens the evaluation. Supplementary Material: Yes, the experimental results in Appendix B-I. Relation To Broader Scientific Literature: UEFL builds on prior work in FL, such as FedAvg and DisTrans, while introducing a novel mechanism to handle data heterogeneity through extensible codebooks and uncertainty-based adaptation. It also aligns with recent trends in uncertainty modeling (e.g., Monte Carlo Dropout, Deep Ensembles) and discrete representations for robustness. Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: The resolution of the figures in the paper is quite low, and some of the text within the images is difficult to read due to its small size. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
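The Monte Carlo Dropout uncertainty evaluation that UEFL relies on, as described in this review, can be illustrated with a small numpy-only sketch: dropout is kept active at inference and several stochastic forward passes are averaged. The two-layer network, dropout rate, and the predictive-entropy measure here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, W2, drop_p=0.5, train=True):
    """Two-layer MLP with dropout on the hidden layer.

    Keeping dropout active at inference time (train=True) is what turns
    repeated forward passes into Monte Carlo samples of the prediction.
    """
    h = np.maximum(x @ W1, 0.0)
    if train:
        mask = rng.random(h.shape) > drop_p
        h = h * mask / (1.0 - drop_p)  # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_uncertainty(x, W1, W2, n_samples=100):
    """Predictive entropy of the MC-averaged class distribution."""
    probs = np.mean(
        [mlp_forward(x, W1, W2, train=True) for _ in range(n_samples)],
        axis=0,
    )
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)
```

In the UEFL setting, a silo whose inputs fall far from the training distribution would tend to produce a higher entropy, which is the signal used to decide whether that silo's codebook should be extended.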
Rebuttal 1: Rebuttal: We appreciate the recognition of the novelty of our method and the robustness of our experimental design. Additionally, we value the insightful critique regarding the limitations of our work. In response, we address these issues below:

> However, the introduction of an extensible codebook raises potential concerns regarding privacy risks that warrant further discussion. Since the codebook is shared and updated across clients, there may be a risk of leakage of sensitive information embedded in the codewords, especially if adversaries attempt to reverse-engineer the mapping between latent features and codewords.

We thank you for highlighting the important concern regarding potential privacy risks associated with the extensible codebook. To ensure robust privacy preservation, our method follows standard federated learning protocols: raw data never leaves the local client, and only aggregated model updates are communicated. The codewords represent abstract latent features derived from model encoders, rather than explicit raw data or identifiable content. These latent vectors are highly compressed and abstract, significantly reducing the feasibility of reverse-engineering meaningful private information. Furthermore, the segmentation of codewords further abstracts feature information. Newly added codewords for highly heterogeneous silos are client-specific and accessible exclusively by those clients, minimizing potential information-leakage risks. We will explicitly discuss these privacy aspects in our updated manuscript to highlight our framework's robust measures against potential privacy risks. Nonetheless, we acknowledge the insightful suggestion and completely agree that the introduction of an extensible codebook could raise potential privacy risks. However, this is not the focus of this work, as our primary aim was to address model accuracy and uncertainty reduction in heterogeneous data silos within federated learning settings.
Possible solutions to address privacy concerns include differential privacy [1] or secure aggregation [2]. We will leave the detailed exploration and integration of such privacy-preserving techniques for further study.

> While this paper emphasizes the alignment of codewords with latent features via K-means initialization to improve model performance, it remains unclear how this process is safeguarded against potential attacks, such as model inversion or membership inference attacks.

We appreciate this insightful comment regarding the robustness of our K-means initialization method against potential attacks. Our K-means algorithm operates solely on client-specific, highly abstracted latent embeddings rather than raw data, inherently limiting the risk of reconstructing original inputs. Additionally, our approach is inherently compatible with advanced privacy-preserving techniques, such as differential privacy [1], where calibrated noise can be added to the discrete codeword embeddings to further enhance security. Although our primary goal was to address model accuracy and uncertainty reduction, we fully acknowledge the concerns and will explicitly mention this security consideration in the revised manuscript, noting this as an area for further detailed exploration.

> While the provided theorems establish the foundational advantages, including enhanced robustness to noise and reduced dimensionality, a more detailed step-by-step proof process would further strengthen the theoretical claims and improve the clarity of the mathematical reasoning.

While Appendix A provides theoretical foundations supporting the benefits of discretization, we agree that providing a detailed, step-by-step derivation of our theoretical results would strengthen the clarity of our claims. In the revised supplementary materials, we will include full derivations based on the Hoeffding inequality for Theorems 1 and 2.
> The resolution of the figures in the paper is quite low, and some of the text within the images is difficult to read due to its small size.

Thank you for highlighting this issue. We have improved the readability and resolution of our figures by regenerating them using vector graphics in PDF format, with increased font sizes for better clarity. Please review a few updated figure samples (.pdf figures) at the following anonymous link: https://blush-melessa-85.tiiny.site Thanks once again for the constructive comments and valuable suggestions, which have significantly enhanced the quality and clarity of our manuscript. [1] Agarwal, Naman, Peter Kairouz, and Ziyu Liu. "The skellam mechanism for differentially private federated learning." Advances in Neural Information Processing Systems 34 (2021): 5052-5064. [2] Kairouz, Peter, Ziyu Liu, and Thomas Steinke. "The distributed discrete gaussian mechanism for federated learning with secure aggregation." International Conference on Machine Learning. PMLR, 2021.
Summary: The paper addresses the challenge of data heterogeneity in federated learning (FL) by proposing Uncertainty-Based Extensible-Codebook Federated Learning (UEFL). The method dynamically extends a codebook of latent vectors using uncertainty estimates (via Monte Carlo Dropout) to adapt to diverse data distributions across silos. Key innovations include K-means initialization for new codewords, segmentation of feature vectors, and iterative codebook expansion. Experiments on rotated datasets (MNIST, CIFAR, etc.) demonstrate improvements in accuracy (3%-22.1%) and uncertainty reduction (38.83%-96.24%) over FedAvg and DisTrans. The approach also scales well to large client numbers (50–100) and handles domain generalization tasks. Claims And Evidence: The claims in the paper are supported by clear evidence: 1. The proposed method shows consistent improvement on MNIST, PACS, CIFAR10, and CIFAR100. The authors have designed sufficient experiments to prove the effectiveness of the proposed UEFL. 2. The authors have conducted careful ablation studies on the hyperparameters (e.g., Uncertainty Threshold). 3. The paper also gives a theoretical explanation in Appendix A. Methods And Evaluation Criteria: Yes, the proposed method is simple and straightforward in solving the problem of limited codebooks when heterogeneous data occurs, and it is reasonable to expect an improvement. The experiments also prove the method's effectiveness. Theoretical Claims: There is no proof in the paper. Experimental Designs Or Analyses: The paper has conducted a series of experiments on MNIST, PACS, CIFAR10, and CIFAR100. Careful ablation studies have been done. Supplementary Material: Yes, I have reviewed Appendix A-F. The appendix of this paper includes very detailed ablation and experiment settings. Relation To Broader Scientific Literature: The paper is related to privacy-preserving training. Essential References Not Discussed: The authors discussed the related works well.
Other Strengths And Weaknesses: Strengths: 1. The algorithm introduces light overhead, which makes it suitable for edge deployment. 2. The paper is well written and easy to read. Other Comments Or Suggestions: NA Questions For Authors: 1. What extra information, besides the model parameters, do the clients send to the central server? Is there any risk of information leakage during codebook sharing? Code Of Conduct: Affirmed. Overall Recommendation: 4
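The codebook mechanics discussed in these reviews (K-means initialization of new codewords over a high-uncertainty silo's latent features, then nearest-codeword mapping) can be roughly sketched as below. The farthest-point seeding and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kmeans_init_codewords(latents, n_new, iters=20):
    """Initialize n_new codewords via K-means on a silo's latent features,
    so codebook extension starts near the unseen distribution instead of
    at random. Deterministic farthest-point seeding is an assumption; the
    paper's seeding may differ."""
    centers = [latents[0]]
    for _ in range(1, n_new):
        d = np.min(((latents[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(latents[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):  # standard Lloyd iterations
        assign = np.argmin(((latents[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_new):
            pts = latents[assign == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return centers

def quantize(latents, codebook):
    """Map each latent vector to its nearest codeword (the discrete bottleneck)."""
    idx = np.argmin(((latents[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
    return codebook[idx], idx
```

For a silo flagged as high-uncertainty, the extended codebook would then be something like `np.vstack([codebook, kmeans_init_codewords(silo_latents, n_new)])`, with the new entries used only by that silo, matching the client-specific extension described in the rebuttal.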
Rebuttal 1: Rebuttal: We sincerely thank you for the insightful and constructive feedback, as well as the recognition of our contributions and experiments. Below, we address the specific questions raised:

> What is the extra information, except for the model parameters, that the clients send to the central server? Is there any risk of information leakage during the codebook sharing?

In our UEFL framework, the codebook is a trainable component of the model architecture (like classifier weights or encoder layers) and is thus included in the standard model parameters exchanged during federated averaging. Clients share only updated model parameters (including discrete codebook vectors) with the server, and no raw data or additional metadata. Codebooks map latent features to discrete codewords, which represent aggregated and abstracted latent features rather than raw data and are therefore less prone to information leakage. Discrete representations inherently limit the granularity of shared information, aligning with privacy-preserving mechanisms in federated learning [1, 2]. In addition, the segmentation of codewords (Section 3.2) further abstracts feature information, enhancing robustness against potential information leakage compared to raw data or explicit feature representations. We will clearly clarify these points in the revised manuscript to explicitly address potential privacy concerns. We deeply appreciate the thoughtful review and suggestions. [1] Kairouz, Peter, Ziyu Liu, and Thomas Steinke. "The distributed discrete gaussian mechanism for federated learning with secure aggregation." International Conference on Machine Learning. PMLR, 2021. [2] Agarwal, Naman, Peter Kairouz, and Ziyu Liu. "The skellam mechanism for differentially private federated learning." Advances in Neural Information Processing Systems 34 (2021): 5052-5064. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed explanation. The answer already addresses my concern.
--- Reply to Comment 1.1.1: Comment: We greatly appreciate your positive feedback and acknowledgment. Thank you again for your insightful comments and helpful suggestions.
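As a minimal illustration of the codeword mapping discussed in this thread (the shapes and the nearest-neighbor assignment rule here are assumptions for the sketch, not the paper's exact architecture), each latent feature is replaced by its closest trainable codeword, so shared parameters encode abstracted codewords rather than raw features:

```python
import numpy as np

def quantize(latents, codebook):
    """Map each latent vector to its nearest codeword (L2 distance).

    latents:  (n, d) encoder features; codebook: (L, d) trainable codewords.
    Returns the quantized features and their discrete indices.
    """
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, L)
    idx = d2.argmin(axis=1)           # discrete code assignment per sample
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 8))   # L=64 codewords of dimension 8
latents = rng.normal(size=(5, 8))
quantized, idx = quantize(latents, codebook)
```

Only the integer assignments and the (aggregated) codeword table are implied by the exchanged parameters, which is the abstraction the rebuttal appeals to.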
Summary: The paper introduces Uncertainty-Based Extensible-Codebook Federated Learning to address data heterogeneity in federated learning (FL) by dynamically expanding a discrete codebook based on model uncertainty. UEFL improves generalization by mapping latent features to trainable codewords and selectively extending the codebook for clients exhibiting high uncertainty, reducing performance degradation caused by non-IID data distributions. The approach integrates Monte Carlo Dropout for uncertainty evaluation and K-means clustering for efficient codeword initialization, ensuring minimal computational overhead. Claims And Evidence: The computational overhead of UEFL is minimal and does not impact scalability. While UEFL’s memory overhead is shown to be small (~3.34% increase), scalability to thousands of clients is not tested. It is also useful to provide the estimation of the overhead w.r.t. the number of clients. Methods And Evaluation Criteria: FL often deals with privacy-preserving scenarios where clients cannot share model updates freely. The paper assumes all clients can exchange information, but in some FL settings, stricter constraints (e.g., differential privacy, homomorphic encryption) exist. Theoretical Claims: The theoretical section (Appendix A) provides proofs justifying discretization in FL. However, they rely on IID assumptions, while real FL data is often non-IID. Experimental Designs Or Analyses: Figure 9 shows that lower thresholds improve performance, but there is no principled way to set the threshold. Supplementary Material: Yes. I have checked the whole supplementary. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: The paper does not analyze long-term codebook growth, which could become a computational bottleneck over extended training. This work lacks theoretical support, e.g., analysis of the size of the codebook on the uncertainty. Other Comments Or Suggestions: 1. 
The plots are blurry when I zoom in. It is better to use vector illustration. 2. The running title needs an update. 3. The experimental setting is unclear. It says the "experiments are performed on a machine with 2 GPUs". Did you use both GPUs or only one GPU? Questions For Authors: 1. How does performance degrade if the codebook is not expanded enough? 2. The baseline comparison primarily includes FedAvg (2017) and DisTrans (2022), which are outdated given recent advancements in federated personalization and adaptive aggregation methods; incorporating newer techniques like FedPer, FedRod, FedBABU, or FedDyn would provide a more rigorous evaluation of UEFL’s effectiveness. 3. This work utilizes a pre-trained VGG model for tasks like CIFAR and GTSRB, which already have strong feature extractors. How does this justify the necessity of the proposed approach, given that the model is already well-suited for these datasets? Code Of Conduct: Affirmed. Overall Recommendation: 3
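The Monte Carlo Dropout uncertainty evaluation discussed in this review can be sketched as follows (illustrative only: random probabilities stand in for dropout-enabled forward passes, and the 0.5 threshold is a placeholder rather than the paper's tuned value):

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Uncertainty from T stochastic (dropout-enabled) forward passes.

    mc_probs: (T, n, C) softmax outputs for n samples over C classes.
    Returns the entropy of the mean predictive distribution per sample.
    """
    mean_p = mc_probs.mean(axis=0)                      # (n, C)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(-1)   # (n,)

rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 4, 3))                    # T=10 passes, 4 samples, 3 classes
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
u = predictive_entropy(probs)
# A client whose mean uncertainty exceeds a (placeholder) threshold would
# receive additional codewords in the next extension iteration
high_uncertainty = u.mean() > 0.5
```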
Rebuttal 1: Rebuttal: We appreciate the insightful comments and suggestions. In response, we address these issues below: > Scalability to thousands of clients is not tested. Estimate overhead w.r.t. the number of clients While we did not test UEFL with thousands of clients, we evaluated it with 50 & 100 clients, showing consistent improvements over baselines (Tables 3 & 4). UEFL’s overhead scales with iteration count, not the number of clients. In each iteration, 64 codewords are added and shared with all selected high-uncertainty clients; the overhead after $i$ iterations is: \begin{equation} Overhead(i) = \frac{i\times 0.125}{14.991}\times 100\\% \end{equation} UEFL typically needs 1-3 iterations and we set the maximum to 5 (Section 3.2), so the introduced overhead is at most 4.17%. > In some FL settings, stricter constraints (e.g., differential privacy, homomorphic encryption) exist Thanks for the comment. Our current experimental setup assumes standard FL settings, but UEFL is compatible with these privacy-preserving methods. Specifically, our codeword-based discretization process can integrate differential privacy by adding calibrated noise to codeword embeddings. We will explore this in future work. > The theoretical section ... rely on IID assumptions, while real FL data is often non-IID In non-IID FL with $K$ clients, client $k$ has $n_k$ samples drawn from its distribution $P_k$. Total samples $n = \sum_{k=1}^Kn_k$. 
The global distribution is $\overline{P} = \sum_{k=1}^K \frac{n_k}{n}P_k$, then **With discretization:** \begin{equation} \left|\sum_{k=1}^K \frac{n_k}{n} \mathbb{E}_{\boldsymbol{h} \sim P_k}[\phi_k^S(q(\boldsymbol{h}, L, G))] - \frac{1}{n} \sum\_{k=1}^K \sum\_{i=1}^{n_k} \phi_k^S(q(\boldsymbol{h}_i^{(k)}, L, G))\right| = \mathcal{O}( \alpha \sqrt{ \frac{G \ln L + \ln(2K/\delta)}{2n}} + \frac{\nu^{(q)}}{\sqrt{n}}), \end{equation} **Without discretization:** \begin{equation} \left|\sum_{k=1}^K\frac{n_k}{n} \mathbb{E}_{\boldsymbol{h} \sim P_k}[\phi_k^S(\boldsymbol{h})] - \frac{1}{n} \sum\_{k=1}^K \sum\_{i=1}^{n_k} \phi_k^S(\boldsymbol{h}_i^{(k)}) \right| = \mathcal{O}( \alpha \sqrt{ \frac{m \ln(4\sqrt{nm}) + \ln(2K/\delta)}{2n} } + \frac{\overline{\varsigma} R\_\mathcal{H} + \nu}{\sqrt{n}}), \end{equation} Here, $\nu^{(q)} = \frac{1}{K}\sum_{k=1}^K\text{Div}(P_k^{(q)}, \overline{P}^{(q)})$ and $\nu = \frac{1}{K}\sum_{k=1}^K\text{Div}(P_k, \overline{P})$ denote the KL divergence between client and global distributions with and without discretization. $\nu^{(q)} < \nu$. Therefore, discretization not only improves robustness to noise and reduces dimensionality, but also effectively mitigates the effects of data heterogeneity typical in non-IID FL. > Figure 9 ... no principled way to set threshold In this work, the threshold is manually tuned per dataset. We agree that a dynamic adjustment mechanism (e.g., based on convergence) is promising future work. > Not analyze long-term codebook growth ... analysis of the size of the codebook on the uncertainty Computation is discussed above. Codeword utilization, measured by perplexity ($\exp(-\sum_{\text{class}} p \log p)$), initially increases as the codebook expands, leading to richer representations and higher mutual information $I(Z; Y)$, which reduces uncertainty ($H(Y|Z) = H(Y) - I(Z; Y)$). 
However, when the codebook becomes very large, most codewords are rarely used (collapse) [1], resulting in reduced perplexity and diminishing returns in uncertainty reduction. More details will be included. > The plots are blurry. Use vector illustration Thanks. We have replaced all figures with vector graphics to ensure clarity. Please review a few updated samples (.pdf figures): https://blush-melessa-85.tiiny.site > The running title needs an update. We have updated it to: "UEFL: Uncertainty-Based Extensible Codebook Federated Learning". > The experimental setting is unclear ... Use both GPUs or only one? All experiments utilized only one GPU, added to the revision. > How does performance degrade if the codebook is not expanded enough? Then, the model lacks representational capacity, resulting in reduced accuracy and higher uncertainty (Figure 6). However, UEFL typically converged after 1-3 iterations. > Incorporate newer techniques like FedPer, FedRod, FedBABU, or FedDyn Here is the comparison with FedRod & FedDyn: | Method| FMNIST| | -------- | ------- | | FedDyn| 89.65 | | FedRod| 90.28 | | UEFL| 90.59 | UEFL outperforms them. More results will be included in the updated version. > This work utilizes a pre-trained VGG model ... justify the necessity of the proposed approach, given that the model is already well-suited for these datasets? While the encoder is strong, the classifier is still vulnerable to domain shifts. UEFL improves performance via discretization, as shown in Table 1. [1] Huh et al., Straightening out the straight-through estimator: Overcoming optimization challenges in vector quantized networks, ICML 2023.
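The overhead and perplexity quantities from this rebuttal can be checked directly (a sketch using the figures quoted above: 0.125M parameters added per iteration against a 14.991M-parameter model):

```python
import numpy as np

def overhead_pct(i, added_per_iter=0.125, base_params=14.991):
    """Parameter overhead (in %) after i codebook-extension iterations."""
    return i * added_per_iter / base_params * 100

def codeword_perplexity(usage_counts):
    """Codeword utilization: exp(-sum p log p) over usage frequencies."""
    p = usage_counts / usage_counts.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

# With the stated cap of 5 iterations, overhead stays under 4.17%
assert round(overhead_pct(5), 2) == 4.17
```

A perfectly uniform usage of L codewords gives perplexity L, the "rich representation" regime; collapse toward a few codewords drives perplexity down, matching the diminishing-returns argument above.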
Summary: This paper introduces Uncertainty-Based Extensible-Codebook Federated Learning (UEFL), a novel framework addressing data heterogeneity in federated learning. The key idea is to dynamically extend a codebook of discrete latent vectors based on model uncertainty, which is evaluated via Monte Carlo Dropout. UEFL initializes a small shared codebook and iteratively adds client-specific codewords using K-means clustering on encoder features for underrepresented distributions. Experiments on rotated MNIST, CIFAR, GTSRB, and PACS datasets demonstrate improvements in accuracy and uncertainty reduction compared to FedAvg and DisTrans. The method also shows scalability to large client numbers and robustness in domain generalization tasks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Theorems 1 and 2 in Appendix A argue that discretization reduces generalization gaps by lowering noise sensitivity and dimensionality. While the theorems are logically structured, their connection to UEFL's empirical success is not explicitly discussed. For instance, how the codebook's extensibility interacts with the theoretical guarantees remains unclear. The proof relies on Hoeffding's Inequality, but does not explicitly discuss which data distribution the inequality applies to, especially in the Non IID scenario of federated learning, which may affect its applicability. Experimental Designs Or Analyses: Limited simulation of data heterogeneity: The paper mainly uses rotation transformation to introduce heterogeneity, but in real federated learning environments, data heterogeneity usually includes uneven class distribution, feature space shift, etc., and does not cover more complex distribution drift situations. Lack of generalization analysis for different codebook sizes: Although the paper provides some experiments with different codebook sizes (such as K-means initialization vs. 
random initialization), there is a lack of detailed discussion on the impact of different codebook sizes on model stability. Insufficient ablation experiments for uncertainty assessment methods: Although the paper compared Deep Ensemble, the experiment only used 5 sub models and did not explore whether increasing the number of sub models would affect the stability of the assessment. Supplementary Material: The appendix of the paper provides multiple supplementary experiments and theoretical analyses, mainly including: Appendix A: Theoretical Analysis of Discretization. Appendix B: Experiments at Different Levels of Data Heterogeneity. Evaluate the performance of UEFL in different data heterogeneity environments by gradually increasing the rotation angle. The results indicate that UEFL can maintain good generalization ability even when data heterogeneity is high, but the paper did not provide a detailed generalization error curve. Appendix C: The paper compared the performance of different methods under the same number of training rounds to ensure fair comparison. Appendix D: The paper tested Deep Ensemble as an uncertainty assessment method and found that its computational cost is higher, although its accuracy is similar. Appendix F: The paper provides a detailed comparison of the computational resource consumption of UEFL and FedAvg, including parameter count and CPU/GPU time, but does not explore the impact of communication costs. Appendix H: The paper tested the performance of UEFL under label heterogeneity (Dirichlet distribution α=0.1) and found that it is more robust than FedAvg. 
Relation To Broader Scientific Literature: The core contributions of this paper mainly involve two fields: Federated Learning (FL) and Uncertainty Modeling, and are related to existing research as follows: Data heterogeneity is a key challenge in FL, and various methods have been proposed in previous studies to alleviate this issue: Personalized FL based methods (such as FedPer, FedRep): Processing heterogeneous data through hierarchical separation or personalized local models [Li et al., 2021]. FL methods based on distribution transformation (such as DisTrans): Processing data heterogeneity through distribution transformation during training and testing [Yuan et al., 2022]. FL methods based on knowledge distillation (such as FCCL): using knowledge distillation and unlabeled public data to enhance generalization ability [Huang et al., 2022]. The core innovation of UEFL in this article lies in the extensible codebook, which is similar to the idea of enhancing FL robustness through discretization (such as VQ FedAvg [Liu et al., 2021]). However, this article additionally introduces a dynamic codebook extension based on uncertainty to adapt to data with larger distribution biases. Previous studies have used uncertainty quantification to improve the robustness of FL models: Method based on Monte Carlo Dropout (Gal&Ghahramani, 2016): Estimating uncertainty by enabling Dropout during the inference phase for data selection and model weighting. Method based on Deep Ensemble (Lakshminarayanan et al., 2017): Train multiple models and estimate uncertainty through analysis of variance. This article uses Monte Carlo Dropout as an uncertainty assessment method and combines it with K-means to dynamically extend the codebook, further expanding the application of uncertainty modeling in the field of FL. The paper mentions VQ-VAE [Van Den Oord et al., 2017] as inspiration for using a discrete codebook for feature mapping. 
In the field of FL, Liu et al. (2021) proposed the use of discretization to enhance model generalization, but their encoding is fixed. This article proposes an extensible codebook mechanism that dynamically adjusts the codebook size based on uncertainty, which is one of its main innovations. In summary, the innovation of this paper mainly lies in the combination of uncertainty quantification and a scalable codebook mechanism to handle FL data heterogeneity, and empirical research on multiple datasets. These contributions complement existing literature, but more extensive experiments are still needed to verify their applicability (such as other data types, communication overhead, etc.). Essential References Not Discussed: VQ-FL (Chen et al., 2023, ICML): Uses vector quantization for client-specific representation learning but fixes the codebook size. UEFL's uncertainty-driven extension is novel, but a comparison is necessary. FedPM (Dinh et al., 2022, NeurIPS): Personalizes models via latent mask vectors. While distinct from codebooks, its focus on client-specific latent spaces is conceptually related. FedProx (Li et al., 2020, MLSys): Addresses heterogeneity via proximal regularization. Although cited in the related work, its comparison with UEFL in terms of uncertainty reduction is missing. Other Strengths And Weaknesses: Strengths: A dynamic codebook extension strategy based on uncertainty assessment has been proposed, which is more adaptable compared to existing discretization FL methods such as VQ FedAvg. Using K-means for codebook initialization reduces the instability caused by random initialization and improves model training efficiency. Multiple datasets (MNIST, FMNIST, CIFAR10, CIFAR100, GTSRB) were used for experiments, and rotation transformation was employed to simulate data heterogeneity. 
Weaknesses: The paper provides a theoretical analysis of the discretization of generalization error in Appendix A, but this analysis relies on the i.i.d. assumption and does not consider the Non IID distribution in FL. The paper does not provide a mathematical analysis of the effect of codebook size on convergence, and only verifies its impact through experiments. The paper mainly simulates Feature Heterogeneity, but there is limited exploration of Label Heterogeneity, with only limited experiments conducted in Appendix H. The paper only uses K-means for initialization of the codebook and does not explore other possible initialization methods (such as PCA dimensionality reduction and contrastive learning feature initialization). Other Comments Or Suggestions: See the weakness Questions For Authors: Communication Overhead: How does UEFL’s communication cost (e.g., transmitting codeword vectors) scale with the number of clients and codebook size? For 100 clients (Table 4), does the server need to aggregate 100 unique codebooks? Theoretical Grounding: Can Theorem 1 be extended to account for dynamic codebook growth? For example, does adding codewords tighten the generalization bound in Eq. 8? The mathematical analysis of the paper assumes that the data is i.i.d., but FL is usually a Non IID scenario. Is there an experiment conducted under more extreme Non IID settings (such as Dirichlet distribution α=0.01)? Is the scalability of UEFL still effective for situations where data distribution is severely uneven across different clients? Code Of Conduct: Affirmed. Overall Recommendation: 3
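The Dirichlet label-heterogeneity setting raised in the questions (e.g., α=0.1 or the more extreme α=0.01) is commonly simulated as below; this is a standard sketch, not necessarily the authors' exact protocol. Smaller α yields more skewed per-client class distributions:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) class skew."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Per-class client proportions; small alpha concentrates mass
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx

labels = np.repeat(np.arange(10), 100)      # 10 classes, 100 samples each
parts = dirichlet_partition(labels, n_clients=5, alpha=0.1)
```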
Rebuttal 1: Rebuttal: Thanks for the insightful comments and suggestions. We address your concerns as follows: > While the theorems ... connection to UEFL’s empirical success is not explicitly discussed We provide a detailed analysis in response to the later theoretical questions. Please refer to that. > The paper mainly uses rotation transformation ... not cover more complex distribution drift situations Other forms of heterogeneity are also included: - Feature space shifts: Tables 2 and 3 (e.g., PACS domains) - Uneven class distributions: Table 12 (Label heterogeneity) > Lack of detailed discussion on the impact of different codebook sizes on model stability We ran experiments across 5 random seeds. As the codebook size increases from 16 to 64, the standard deviation of accuracy drops from 0.0237 to 0.0094 (higher stability). > Only used 5 sub models ... not explore whether increasing the number of sub models would affect the stability of the assessment We extended the ensemble size to 20 and observed the following on MNIST: | Method | Accuracy| Uncertainty| | ----- | ----- |----- | | FedAvg | 0.782 | 0.261| | UEFL | 0.924 | 0.237| UEFL also outperforms. However, consistent with prior findings (e.g., Lakshminarayanan et al., 2017), we observed minimal improvement beyond 5–10 ensembles, while computation costs increase linearly. > Appendix B ... not provide a detailed generalization error curve We have plotted the curve: https://ibb.co/xymvPDT > Appendix F ... does not explore the impact of communication costs In practice, UEFL uses 30 communication rounds in its first iteration, compared to 40 in standard FL. Each subsequent iteration requires only 5 rounds with a modest model size increase of 0.83%. For example, with two additional iterations, the overall communication cost is approximately $\frac{30}{40}+\frac{5}{40}\times1.0083+\frac{5}{40}\times1.0167 = 1.0031\times$ of standard FL. 
> More extensive experiments are still needed to verify their applicability We’re extending this to medical data and other domains in future work. > Essential References Not Discussed Thanks. Specifically for FedProx, the results are as follows: | Method| FMNIST| | -------- | ------- | | FedProx| 88.55 | | UEFL| 90.59 | We will add more comparisons in the updated version. > The paper provides a theoretical analysis ... not consider the Non IID distribution in FL In non-IID FL, the bounds become: **With discretization:** \begin{equation} \mathcal{O}( \alpha \sqrt{ \frac{G \ln L + \ln(2K/\delta)}{2n}} + \frac{\nu^{(q)}}{\sqrt{n}}), \end{equation} **Without discretization:** \begin{equation} \mathcal{O}( \alpha \sqrt{ \frac{m \ln(4\sqrt{nm}) + \ln(2K/\delta)}{2n} } + \frac{\overline{\varsigma} R\_\mathcal{H} + \nu}{\sqrt{n}}), \end{equation} Here, the KL divergence $\nu^{(q)} < \nu$. Therefore, discretization explicitly mitigates the effects of data heterogeneity. To conserve space, more details are included in our response to Reviewer idfz. > Not provide a mathematical analysis of the effect of codebook size on convergence The basic mathematical analysis follows VQ-VAE. Large codebooks hinder convergence due to poor utilization and noisy updates. To address this, UEFL progressively extends the codebook, initializing new codewords with K-means for better alignment. This improves codeword utilization, reduces quantization error, and accelerates convergence (Fig. 3(b)). > Limited exploration of Label Heterogeneity We further test different Dirichlet distributions as follows: | Method| α=0.05| α=0.01| | ----- | ----- |----- | | FedAvg| 78.57 |35.42 | | UEFL|85.18 | 40.99 | UEFL performs better. More results will be included. > Only uses K-means for initialization ... not explore other possible initialization methods Thanks. We agree that PCA or contrastive-based initialization is promising and will explore them in future work. 
> How does UEFL’s communication cost (e.g., transmitting codeword vectors) scale with the number of clients and codebook size? For 100 clients (Table 4), does the server need to aggregate 100 unique codebooks? No. Codebook growth is tied to iterations, not client count. In each iteration, 64 codewords are added and shared with selected high-uncertainty clients. The communication overhead after $i$ iterations is $Overhead(i) = \frac{i\times 0.125}{14.991}$. With a typical $i=1$–$3$ and a maximum of $i=5$ (Section 3.2), the worst-case overhead is under 4.17% (5 unique codebooks), regardless of client count (100). > Can Theorem 1 be extended to account for dynamic codebook growth? Yes. Although Theorem 1 assumes static $L$, dynamic growth adds a minor $\ln L$ term. $L$ remains small (e.g., 64–256). However, dynamic codebook growth improves representation capacity, thereby reducing the KL divergence term $\nu^{(q)}$, tightening the federated generalization bound. > Experiment conducted under more extreme Non IID settings (α=0.01)? The results for Dirichlet distribution α=0.01 are discussed above.
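The communication-cost estimate quoted in this rebuttal (30 rounds in the first iteration versus 40 in standard FL, then 5 rounds per extra iteration with ~0.83% model growth per iteration) can be reproduced with a short helper:

```python
def relative_comm_cost(extra_iters, base_rounds=40, first_rounds=30,
                       rounds_per_iter=5, growth_per_iter=0.125 / 14.991):
    """Total communication volume relative to standard FL."""
    cost = first_rounds / base_rounds
    for i in range(1, extra_iters + 1):
        # Each extra iteration sends a slightly larger model for a few rounds
        cost += rounds_per_iter / base_rounds * (1 + i * growth_per_iter)
    return cost

# Two extra iterations: 30/40 + 5/40*1.0083 + 5/40*1.0167, about 1.0031x
cost = relative_comm_cost(2)
```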
AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization
Accept (poster)
Summary: This paper proposes AlphaDPO, a direct preference optimization method with a data-dependent margin. The authors first observe that the DPO objective can be viewed as encouraging the likelihood of the chosen response to exceed that of the losing response, with a margin set as the difference between the likelihoods of the responses under the reference model. The authors claim that such a margin might introduce error in scenarios where the reference model is not calibrated. Therefore, they propose a margin that is the likelihood ratio of the responses under the language model and the reference model. They show how this proposal is theoretically related to SimPO and TDPO. They further empirically compare AlphaDPO with SimPO and several other DPO variants on AlpacaEval with Llama3(8B), Mistral(7B), and Gemma2(9B) models. Claims And Evidence: First, the paper lacks justification for its specific choice of loss function. In the introduction (lines 72–73), the authors state that this choice was heuristic, but they do not provide sufficient intuition for it. In the theoretical section, they attempt to ground their method by comparing it to SimPO and TDPO. However, comparing it to SimPO does not justify the specific loss function used in AlphaDPO. Meanwhile, the comparison to TDPO relies on crude approximations, making the justification seem arbitrary. (I will elaborate on this further in the theoretical section of the review.) In the experimental section, the paper does not provide enough evidence regarding the significance of the results. Additionally, since AlphaDPO introduces an extra hyperparameter compared to DPO, the paper lacks sufficient detail on how this hyperparameter is chosen. It is also unclear how fairness is ensured when comparing AlphaDPO to DPO, especially if a grid search was used to report AlphaDPO’s best-performing hyperparameter. (I will expand on this further in the experimental section of the review.) 
Methods And Evaluation Criteria: It largely makes sense. However, there are two important points to consider: First, the models used in the paper have already undergone post-training—specifically, alignment with human preferences using RLHF or Best-of-N. This introduces a potential confounding effect, which could be mitigated by also testing a model that has not been post-trained. Second, since the authors state that the closest variant of DPO to AlphaDPO is TDPO, it is essential to include a direct comparison between AlphaDPO and TDPO in the main experimental results, such as in Table 1. Theoretical Claims: First, the explanation of $\gamma$ is unclear. If $U$ is defined as a uniform distribution over all responses given any string, then by definition, $\gamma = 0$. I find lines 141–142 confusing, as they state that $\gamma$ is a constant but not zero due to difference in selection probabilities. This point needs to be more precise and formally written. Second, the paper claims that AlphaDPO is a general framework encompassing both DPO and SimPO by varying $\alpha$. However, I do not believe that setting $\alpha = 1$ reduces AlphaDPO to DPO as claimed, since AlphaDPO includes an additional policy term that is absent in DPO. Third, regarding the connection between TDPO and AlphaDPO: the authors state that the difference between sequence KL terms is approximated by the log-ratio difference. However, it is unclear how well this approximation holds—are there any bounds? Beyond this crude approximation, there is another key difference between AlphaDPO and TDPO: the use of a stop gradient. The paper does not discuss this, which seems like a significant omission. Experimental Designs Or Analyses: First, regarding the choice of $\alpha$ in the main results (e.g., Table 1), the authors do not explain how this parameter is selected. 
The common practice is to choose the best value from a set of hyperparameters, but comparing this directly with DPO, which does not have this hyperparameter, creates an unfair comparison. A more equitable approach would be to give DPO the same number of training runs using different random seeds. Second, concerning the KL term, the results in Table 1 only provide a partial picture, as they may vary depending on the choice of $\beta$. A more comprehensive way to compare methods would be through Pareto frontier plots. While I appreciate that the authors include this analysis in Figure 3(c), it is much more limited than the main tables. Moreover, in this figure, the percentage of points that end up on the Pareto frontier does not show a significant difference between SimPO and AlphaDPO, which goes against the claim made regarding the superiority of AlphaDPO over SimPO. Lastly, in some experimental results (e.g., Table 2, Mistral model), the performance gap between the best and second-best methods is very small. To ensure the results are meaningful, further statistical tests are needed to confirm their significance. Supplementary Material: I have looked at the appendix. Relation To Broader Scientific Literature: There are many different variants of DPO, and this paper offers another choice of the margin. Essential References Not Discussed: . Other Strengths And Weaknesses: One interesting insight from this paper is the idea of interpreting the difference in reference log probabilities in the DPO loss function as a margin that may or may not be accurate depending on the reference model and the chosen beta. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 1
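The Pareto-frontier comparison suggested in this review can be computed with a small helper; the (KL, win-rate) points below are illustrative placeholders, not results from the paper:

```python
def pareto_frontier(points):
    """points: list of (kl, winrate); lower KL and higher win rate are better.

    Returns the subset of points not dominated by any other point.
    """
    frontier = []
    for kl, wr in points:
        dominated = any(kl2 <= kl and wr2 >= wr and (kl2, wr2) != (kl, wr)
                        for kl2, wr2 in points)
        if not dominated:
            frontier.append((kl, wr))
    return frontier

# Hypothetical (KL, win-rate) pairs from runs at different beta values
runs = [(0.1, 40.0), (0.2, 45.0), (0.3, 44.0), (0.15, 42.0)]
frontier = pareto_frontier(runs)
```

Plotting the frontier for each method across its β grid gives the fuller picture the review asks for, rather than a single operating point per method.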
Rebuttal 1: Rebuttal: **Q1: Justification of Loss Function** While space constraints limited introductory intuition, we provide multi-faceted justification through: 1) **Weak-to-Strong Generalization** (Lines 197-200): Similar to weak-to-strong alignment, our adaptive reference models enable policy specialization while preserving exploration. 2) **Theoretical Link to TDPO** (Sec. 4): Lemma 4.1 shows AlphaDPO's margin term approximates TDPO's KL divergence control, connecting sequence optimization with token-level stability. 3) **Appendix Analysis**: Utility theory (C.1) and gradient analysis (C.2) demonstrate how our adaptive margin prevents reward hacking while maintaining diversity. We will add a roadmap paragraph in the introduction to better signpost these analyses. **Q2: Theoretical Comparison to SimPO** We clarify our theoretical progression: 1. TDPO's success demonstrates the effectiveness of $r(x,y\_w)-r(x,y\_l)-\delta$ structures for KL control. 2. AlphaDPO adopts a similar offset structure $r(x,y\_w)-r(x,y\_l)-M$ where: - TDPO uses $\delta=\beta(D\_{KL}(y\_l) - D\_{KL}(y\_w))$ with $z\sim\pi\_{ref}$ - AlphaDPO uses $M=\beta(\log\frac{\pi\_\theta(y\_w)}{\pi\_{ref}(y\_w)} - \log\frac{\pi\_\theta(y\_l)}{\pi\_{ref}(y\_l)})$ with uniform sampling 3. As shown in Appendix D.4, AlphaDPO's $M$ achieves superior performance to TDPO's $\delta$ (+3.2% AlpacaEval LC), demonstrating the advantage of sequence-level KL approximation over token-level computation. **Q3: Crude Approximation** Due to space limits, A comprehensive theoretical analysis is provided in the `Response to Reviewer bC1g`. 
**Q4: Base Model Validation** To address potential confounding from pre-aligned models: We conducted additional experiments on Llama3-8B-Base: ||DPO|SimPO|AlphaDPO| |-|-|-|-| |truthfulqa_mc2|53.66|60.03|62.89| |gsm8k| 52.90|52.84|53.90| | mmlu| 62.14 | 62.05|62.43| |||| | MT-Bench|6.5|6.6|6.9| |||| |LC(Alpaca)|14.92|17.97|22.69| |WR(Alpaca)|13.02|15.60|20.47| Improvements remain consistent, confirming AlphaDPO's effectiveness independent of initial alignment. **Q5: TDPO Comparison** Current Appendix D.4 shows: AlphaDPO achieves 58.7% LC vs TDPO's 52.8% on Llama3-8B, demonstrating clear superiority. Thanks for your suggestion, and we will add direct comparisons to Table 1. **Q6: $\gamma$ Formulation** The uniform distribution $U(y|x)$ and its role in $\gamma$ can be rigorously defined as follows: 1. **Theoretical vs. Empirical $ U(y|x) $** - *Theoretical*: For vocabulary $\mathcal{V} $, $U(y|x) = \prod\_{t=1}^{|y|} \frac{1}{|\mathcal{V}|} \quad \text{(uniform over tokens)}$ - *Empirical*: Responses $ y\_1, \dots, y\_5 \sim \pi\_{\text{SFT}}(y|x) $ are scored, with $ y\_w $ and $ y\_l $ selected via: $y\_w = \arg\max \text{score}(y\_i), \quad y\_l = \arg\min \text{score}(y\_i)$ This induces implicit subspaces:$ \mathcal{V}\_{\text{win}} = \\{y\mid \text{score}(y)\geq\tau\\},\quad\mathcal{V}\_{\text{lose}}=\\{y\mid\text{score}(y)\leq \tau'\\}$ 2. **Effective $ U(y|x) $ and $\gamma$**. The *practical* uniform distributions become: $$U(y\_w|x) = \prod\_{t=1}^{|y\_w|} \frac{1}{|\mathcal{V}\_{\text{win}}|}, \quad U(y\_l|x) = \prod\_{t=1}^{|y\_l|} \frac{1}{|\mathcal{V}\_{\text{lose}}|}$$ Thus, $\gamma = \beta (\log U(y\_w|x) - \log U(y\_l|x))$. Since $ \mathcal{V}\_{\text{win}} $ and $ \mathcal{V}\_{\text{lose}} $ vary per instance, SimPO's fixed $\gamma$ is suboptimal. **Q7: Stop Gradient Analysis** We highlight the primary distinction: - **TDPO**: Implements asymmetric gradient flow—enabled for $ y\_l $, stopped for $ y\_w $. 
- **AlphaDPO**: Employs symmetric stop-gradient on both terms, using $ \pi\_{\text{ref}} $ as a fixed anchor. Both TDPO and AlphaDPO utilize the stop-gradient operation. Examining the detailed impact of single-term versus multi-term stop-gradient is an intriguing avenue for future work. **Q8: $\alpha=1$ Misstatement** Due to space constraints, please refer to the response for `Question 3 in Response to Reviewer ciXp.` **Q9: Experimental Rigor** We report the standard deviations and confidence intervals of current evaluations in (https://anonymous.4open.science/r/AlphaDPO-431F/significant_exp.md). **Hyperparameter Fairness**. We strictly followed community standards: - All methods underwent equal hyperparameter tuning (`Appendix Table 3`) - $\alpha$ was selected via grid search over [1e-2, 5e-2, 0.1, 0.2] with 5 random seeds - DPO received equivalent tuning effort ($\beta \in [0.01,0.05,0.1]$) **KL Analysis** We will enhance Figure 3(c) by: - Adding more sampling points (every 50 steps) - Including Pareto frontiers for compared methods **Q10: Marginal Improvement in Table 2** We clarify that Table 2 presents ablation studies within AlphaDPO's design space rather than cross-method comparisons. The results demonstrate that: i) Dynamic configurations consistently achieve better performance. ii) The AlphaDPO formulation retains optimality. --- Rebuttal Comment 1.1: Comment: Thank you for your response. - The theoretical justification for the crude approximation does not make sense to me. Why do we need a robustness constraint? - I still can not wrap my head around why there are two definitions of gamma. I also do not understand where the $\tau$ in the author's response comes from. - Regarding the experiments, the provided confidence intervals suggest that there is no significant difference between the proposed method and existing methods in most cases. 
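The margin terms contrasted in this exchange can be made concrete with hypothetical per-response log-probabilities (all numbers below are assumptions; the actual AlphaDPO loss additionally scales its margin by the hyperparameter α and applies a stop-gradient, which a plain numeric sketch does not capture):

```python
# Hypothetical per-response log-probabilities (not from the paper)
logp_policy_w, logp_policy_l = -12.0, -15.0   # policy log-probs, chosen/rejected
logp_ref_w, logp_ref_l = -13.0, -13.5         # reference log-probs
beta, gamma = 0.1, 0.5

# DPO: the implicit margin comes from the reference model alone
dpo_margin = beta * (logp_ref_w - logp_ref_l)

# SimPO: a fixed, data-independent target margin
simpo_margin = gamma

# AlphaDPO (per the rebuttal's formula): a data-dependent margin built from
# policy/reference log-ratio differences
alphadpo_margin = beta * ((logp_policy_w - logp_ref_w)
                          - (logp_policy_l - logp_ref_l))
```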
--- Reply to Comment 1.1.1: Comment: #### **Q11: Theoretical Justification for the Approximation and Robustness Constraint** **Reviewer Concern**: The reviewer questions the validity of approximating sequential KL divergence with log-probability ratios and the need for a robustness constraint: > *"The authors state that the difference between sequence KL terms is approximated by the log-ratio difference. However, it is unclear how well this approximation holds—are there any bounds?"* **Response**: We appreciate the reviewer's attention to this critical theoretical point. The approximation: $$ \sum\_{t=1}^{|y|} \mathbb{E}\_{z \sim \pi\_{\text{ref}}} \left[ \log \frac{\pi\_{\text{ref}}(z|x, y^{\lt t})}{\pi\_\theta(z|x, y^{\lt t})} \right] \approx \log \frac{\pi\_{\text{ref}}(y|x)}{\pi\_\theta(y|x)} $$ is motivated by two key insights: 1. **Noisy Reference Models**: As empirically demonstrated in Figure 2, $\pi\_{\text{ref}}$ often fails to distinguish between preferred and rejected responses, behaving like a perturbed version of the true distribution. This noise justifies aggregating token-level errors into a sequence-level offset, trading fine-grained precision for robustness. 2. **Robust Optimization Perspective**: The approximation aligns with *min-max robust optimization*, where we hedge against worst-case deviations in $\pi\_{\text{ref}}$. By relaxing token-level constraints to sequence-level bounds, we ensure stability even with poorly calibrated reference models. **Why We Need a Robustness Constraint:** Robust optimization is vital because reference model biases can significantly affect fine-tuning performance. Prior studies [1,2] confirm that variations in reference model quality have substantial impacts, underscoring the need to hedge against these biases. Our approach involves dynamically adjusting the reference distribution through policy-driven adaptation, aiming to enhance model robustness and improve outcomes.
[1] Learn your reference model for real good alignment. ICLR2025. [2] Liu et al. (2024): Understanding Reference Policies in Direct Preference Optimization. arXiv preprint arXiv:2407.13709. --- #### **Q12: Clarifying $\gamma$ and Selection Bias** **Reviewer Concern**: The reviewer expresses confusion about $\gamma$ and the role of selection probabilities: *"I find lines 141–142 confusing, as they state that $\gamma$ is a constant but not zero due to difference in selection probabilities. This point needs to be more precise and formally written."* **Response**: We apologize for the lack of clarity. Here's a precise explanation: The constant $\gamma = \beta (\log U(y\_w|x) - \log U(y\_l|x))$ arises from the *selection bias* inherent in preference data: - $y\_w$ and $y\_l$ are drawn from distinct subsets of the vocabulary ($\mathcal{V}\_{\text{win}}$ and $\mathcal{V}\_{\text{lose}}$), as they are partitioned by the reward model. - Under a uniform reference $U(y|x) = \prod\_{t=1}^{|y|} \frac{1}{|\mathcal{V}|}$, the log-difference $\log U(y\_w|x) - \log U(y\_l|x)$ is non-zero because $|\mathcal{V}\_{\text{win}}| \neq |\mathcal{V}\_{\text{lose}}|$. **On $\tau$**: The term $\tau$ (mentioned in the draft response) was used to illustrate how selection bias skews token frequencies in $\mathcal{V}\_{\text{win}}$ versus $\mathcal{V}\_{\text{lose}}$. We agree this tangent was unnecessary and have removed it to avoid confusion. **Thanks for your advice and we will revise Section 3.1 to formalize this argument, explicitly linking $\gamma$ to selection bias in preference data.** --- #### **Q13: Addressing Confidence Intervals and Significance** **Reviewer Concern**: The reviewer notes that confidence intervals (CIs) suggest insignificant differences between AlphaDPO and baselines in some cases. 1. 
**Consistent Performance Gains**: AlphaDPO achieves higher win rates (WR) than both DPO and SimPO in all evaluated settings (e.g., +0.7 to +3.0 points on Arena-Hard), with particularly notable improvements for smaller models (e.g., +7.4 points for Mistral-7B). These gains are reproducible across diverse architectures (Llama3, Gemma2). 2. **Tighter Confidence Intervals**: AlphaDPO’s CIs are often narrower than those of baselines (e.g., Llama3-8B: (-2.2, 2.2) vs. (-2.6, 2.7) for DPO; Gemma2-9B: (-1.8, 2.0) vs. (-2.0, 2.3) for DPO), suggesting greater stability in its performance. 3. **Practical Significance**: Even when CIs overlap marginally, the directional trend—AlphaDPO outperforming baselines in *every* configuration—strengthens the case for its robustness. For instance, on Gemma2-Instruct (9B), AlphaDPO's WR (60.8) exceeds both DPO (58.8) and SimPO (57.8), with a tighter CI ((-1.8, 2.0) vs. (-2.0, 2.3) for DPO and (-2.4, 2.0) for SimPO).
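As a side note on how such overlap checks are typically produced: a percentile bootstrap over per-example head-to-head judgments yields intervals of the kind quoted above. The sketch below is a minimal illustration with invented win/loss outcomes and a made-up sample size, not the actual evaluation data.

```python
import random

def bootstrap_ci(outcomes, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a win rate.

    `outcomes` is a list of 0/1 head-to-head judgments.
    """
    rng = random.Random(seed)
    n = len(outcomes)
    means = sorted(
        sum(rng.choice(outcomes) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2)) - 1]

# Illustrative only: 500 judgments at a 60.8% observed win rate.
outcomes = [1] * 304 + [0] * 196
lo, hi = bootstrap_ci(outcomes)
assert lo < 304 / 500 < hi  # the point estimate sits inside its own interval
```

Whether two such intervals overlap is then a direct comparison of the returned endpoints, which is the kind of evidence summarized in points 1-3 above.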
Summary: This paper introduces AlphaDPO, an adaptive preference optimization framework that improves alignment in large language models (LLMs) by dynamically adjusting the reward margin in preference learning. The key contribution is the introduction of an implicit reference model that interpolates between policy-driven adaptation and uniform exploration, leading to instance-adaptive reward margins. Empirical results demonstrate the superiority of AlphaDPO over previous methods. Claims And Evidence: The empirical claims are valid judged from the experiments results. Lemma 4.1 is problematic. Methods And Evaluation Criteria: Yes. Theoretical Claims: Lemma 4.1 is problematic. The authors use $\approx$ in the lemma, which is not rigorous as a mathematical lemma. The proof is not convincing either. The authors claim that the pretrained model $\pi_{ref}$ is close to uniform policy, which is apparently not true. Experimental Designs Or Analyses: Yes, they look good to me. Supplementary Material: I checked the proof of Lemma 4.1. Relation To Broader Scientific Literature: The key contribution of this paper is a new reference policy for DPO loss. The experiment results show that this method is superior to existing DPO and its variants. However, the theoretical analysis is not sound and rigorous. I suggest the authors fix this issue. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for raising this important concern. Below, we provide a rigorous theoretical justification for the approximation in Lemma 4.1 and clarify its empirical validity. ### **Theoretical Justification** **1. Problem Formulation with Robustness Constraints** Our objective is to minimize the sequential KL divergence between the optimal reference policy $\pi\_{\text{ref}}^*$ and the policy $\pi\_\theta$, under the uncertainty of $\pi\_{\text{ref}}^*$: $$ \min\_\theta \sum\_{t=1}^T \mathbb{E}\_{\pi\_{\text{ref}}^*}\left[ \log \frac{\pi\_{\text{ref}}^*(z|x,y\_{<t})}{\pi\_\theta(z|x,y\_{<t})} \right]. $$ Since $\pi\_{\text{ref}}^*$ is unobserved, we assume bounded deviation from an observable reference policy $\pi\_{\text{ref}}$: $ |\pi\_{\text{ref}}^*(\cdot) - \pi\_{\text{ref}}(\cdot)| \leq C, \quad C > 0. $ This leads to a constrained robust optimization problem: $$ \min\_\theta \max\_{\pi\_{\text{ref}}^*} \sum\_{t=1}^T \mathbb{E}\_{\pi\_{\text{ref}}}\left[ \log \frac{\pi\_{\text{ref}}^*(z|x,y\_{<t})}{\pi\_\theta(z|x,y\_{<t})} \right] \text{s.t.} \quad |\pi\_{\text{ref}}^*(z|x,y\_{<t}) - \pi\_{\text{ref}}(z|x,y\_{<t})| \leq C. \nonumber $$ **2. Simplification via Worst-Case Analysis** For the inner maximization, the worst-case $\pi\_{\text{ref}}^*$ is determined by the sign of the log-ratio: $$ \pi\_{\text{ref}}^*(z|x,y\_{<t}) = \begin{cases} \pi\_{\text{ref}}(z|x,y\_{<t}) + C, & \text{if } \log \frac{\pi\_{\text{ref}}(z|x,y\_{<t})}{\pi\_\theta(z|x,y\_{<t})} \geq 0, \\\\ \pi\_{\text{ref}}(z|x,y\_{<t}) - C, & \text{otherwise}. \end{cases} $$ Substituting this into the objective yields: $$ \min\_\theta \sum\_{t=1}^T \left[ C \cdot \log \frac{\pi\_{\text{ref}}(z|x,y\_{<t})}{\pi\_\theta(z|x,y\_{<t})} + \left|\pi\_{\text{ref}}(z|x,y\_{<t}) \log \frac{\pi\_{\text{ref}}(z|x,y\_{<t})}{\pi\_\theta(z|x,y\_{<t})} \right| \right]. $$ **3. 
Asymptotic Approximation** When the deviation bound $C$ is sufficiently large (i.e., $\pi\_{\text{ref}}^*$ is less constrained), the absolute value term dominates, and the objective simplifies to: $$ \min\_\theta \sum\_{t=1}^T \left[ \log \frac{\pi\_{\text{ref}}(z|x,y\_{<t})}{\pi\_\theta(z|x,y\_{<t})} \right]. $$ This corresponds to the sequence-level approximation in Lemma 4.1. The $\approx$ symbol reflects the asymptotic regime where higher-order terms vanish under large $C$, which aligns with practical scenarios where $\pi\_{\text{ref}}$ is imperfect but provides a reasonable prior. **4. Addressing the Uniform Policy Assumption** We clarify that Lemma 4.1 does *not* assume $\pi\_{\text{ref}}$ is uniform. Instead, it leverages the structure of the KL divergence to show that the margin term $M(x,y\_w,y\_l)$ approximates the sequential KL difference under bounded deviations. The uniform policy in SimPO is a special case of our framework (when $\alpha=0$), but our method generalizes to non-uniform $\pi\_{\text{ref}}$ by adaptively scaling with $\alpha$. --- ### **Empirical Validation** **Performance** As demonstrated in `Appendix Table 6`, AlphaDPO achieves significant performance gains over TDPO, with LC win rates increasing from 52.8% (TDPO) to 56.9% (AlphaDPO w/ $\delta$) and further to 58.7% (AlphaDPO w/ $M$). This progressive improvement highlights the effectiveness of the sequence-level approximation in mitigating token-level noise through variance reduction. By replacing TDPO's token-level margin $\delta$ with the adaptive sequence-level margin $M$, AlphaDPO enhances robustness while maintaining alignment, underscoring the superiority of sequence-level optimization in handling imperfect reference models. **Mitigating Reference Model Bias** Figure 2 (main paper) empirically validates that $\pi\_{\text{ref}}$ struggles to distinguish $y\_w$ from $y\_l$ ($\log\pi\_{\text{ref}}(y\_w|x) - \log\pi\_{\text{ref}}(y\_l|x)$ exhibits random fluctuations). 
AlphaDPO's adaptive margin $M(x,y\_w,y\_l)$ explicitly compensates for this bias, ensuring stable optimization even when $\pi\_{\text{ref}}$ is suboptimal. --- ### **Conclusion** While Lemma 4.1 uses an approximation symbol ($\approx$), our analysis rigorously justifies its validity under bounded reference model mismatch. The empirical success of AlphaDPO further supports this design choice, demonstrating that sequence-level approximations enhance robustness without sacrificing performance. We acknowledge that deriving a formal error bound remains an open question and will explore this in future work. --- Rebuttal Comment 1.1: Comment: I think in the proof of Lemma 4.1 (Line 725~731), you do replace $z\sim\pi_{ref}$ with a uniform distribution. --- Reply to Comment 1.1.1: Comment: Our design philosophy is that $\pi_{\text{ref}}$ may introduce noise and can deviate significantly from the data sampling distribution. As stated in our draft (**Line 725-727**): > *"Under the assumption that the reference policy $\pi\_{\text{ref}}$ has large errors, we approximate $\mathbb{E}\_{z \sim \pi\_{\text{ref}}}$ with a uniform distribution."* From an operational perspective, we agree that the expectation is taken with respect to a uniform distribution, not $\pi_{\text{ref}}$. As noted in the rebuttal, this modification is well-motivated by solving a **robust min-max problem** (rather than the vanilla minimization problem), which explicitly accounts for uncertainty in $\pi_{\text{ref}}$—i.e., its noisiness. We ultimately show that this approach permits a uniform approximation. To clarify, we do **not** assert that the pretrained model $\pi_{\text{ref}}$ itself is uniform; we only assume it is noisy and may differ substantially from the true data distribution, as empirically supported in Figure 2. We appreciate the reviewers' attention to this nuance and will revise the text to make this distinction clearer. We are happy to discuss further or provide additional details if needed.
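The approximation under discussion, replacing the expectation under $\pi_{\text{ref}}$ with a uniform one, can be probed numerically on a toy vocabulary. This is a minimal sketch with invented single-step distributions, not values from any trained model.

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def uniform_expectation_log_ratio(p_ref, p_theta):
    """E_{z ~ Uniform}[log p_ref(z) / p_theta(z)] over a shared vocabulary."""
    return sum(math.log(r / t) for r, t in zip(p_ref, p_theta)) / len(p_ref)

# Toy 4-token vocabulary at a single decoding step (illustrative numbers).
p_ref = [0.4, 0.3, 0.2, 0.1]        # a (possibly noisy) reference policy
p_theta = [0.25, 0.25, 0.25, 0.25]  # the policy being trained

exact = kl(p_ref, p_theta)                                  # expectation under pi_ref
surrogate = uniform_expectation_log_ratio(p_ref, p_theta)   # expectation under Uniform

assert exact >= 0.0        # Gibbs' inequality: the exact term is a KL divergence
assert exact != surrogate  # the uniform surrogate is only an approximation
```

The gap between `exact` and `surrogate` is precisely the token-level error that the rebuttal argues can be absorbed into a sequence-level offset under the bounded-deviation assumption.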
Summary: This paper proposes AlphaDPO, a new preference optimization framework. The core novelty of this framework is to redefine the reference model distribution as the product of a uniform distribution and the ratio between the policy model and the original reference model, with a power factor of alpha. This is effectively equivalent to interpolating between SimPO and DPO. The paper performs experiments on AlpacaEval2 and Arena-Hard with 3 representative LLMs, and the empirical results show AlphaDPO has better performance. Claims And Evidence: Most claims are fine. One question concerns Line 147, the limitations of DPO: why is the $\pi_{ref}$ supposed to distinguish between $y_w$ and $y_l$? Why is this a limitation? Doesn't the $\pi_{ref}$ just serve as the reference for reward values? Methods And Evaluation Criteria: Strengths: - The method is fine, and the ablation study about other potential design attempts of AlphaDPO, in Table 2, is also convincing. Weakness: - How is the value of $U(y|x)$ decided? How do the uniform values affect the performance? Theoretical Claims: - In Line 208: Can you explicitly show how AlphaDPO aligns with DPO when $\alpha=1$? - The motivation of Principle 1 in Line 171 is not supported. Why should the reference model contribute to differentiating between preferred and less preferred responses? Experimental Designs Or Analyses: Strengths: - The experiments are extensive. Many baseline preference learning methods are compared. Supplementary Material: N/a Relation To Broader Scientific Literature: This paper fits into the literature of DPO-like preference learning methods. The proposed method is somewhere between the DPO method and the SimPO method. The contribution is an attempt to trade off between those two methods. Essential References Not Discussed: N/a Other Strengths And Weaknesses: Weakness: - The proposed method is relatively straightforward, and the performance gain over SimPO is marginal.
Other Comments Or Suggestions: N/a Questions For Authors: How do the authors compare this method with the Online DPO methods such as OAIF (Direct Language Model Alignment from Online AI Feedback) and IDPO (Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Clarification on DPO's Reference Model Limitation** We appreciate this insightful question. The necessity for $\pi\_{\text{ref}}$ to distinguish between $y\_w$ and $y\_l$ stems from two fundamental aspects of KL-regularized policy optimization: 1. **Theoretical Foundation of KL-Regularized Objectives**: The RLHF objective (Equation 2) regularizes the policy $\pi\_\theta$ to stay close to $\pi\_{\text{ref}}$ via KL divergence. This regularization implicitly assumes that $\pi\_{\text{ref}}$ provides a meaningful prior for distinguishing high-quality ($y\_w$) and low-quality ($y\_l$) responses. If $\pi\_{\text{ref}}$ lacks discriminative power (e.g., assigns similar probabilities to $y\_w$ and $y\_l$), the KL term loses its grounding, leading to unstable optimization. 2. **Empirical Evidence**: Recent studies [1,2] demonstrate that $\pi\_{\text{ref}}$ quality significantly impacts DPO performance. DPO's reliance on a static $\pi\_{\text{ref}}$ introduces both theoretical and practical limitations. [1] Gorbatovski et al. Learn your reference model for real good alignment. ICLR 2025. [2] Liu et al. Understanding Reference Policies in Direct Preference Optimization. arXiv preprint arXiv:2407.13709. **Q2: Formalization of $U(y|x)$ and Its Impact** We thank the reviewer for prompting this clarification. The uniform distribution $U(y|x)$ and its role in $\gamma$ can be rigorously defined as follows: 1. **Theoretical Framework**: Let $\mathcal{V}$ denote the vocabulary.
The *theoretical* $U(y|x)$ is: $$ U(y|x) = \prod\_{t=1}^{|y|} \frac{1}{|\mathcal{V}|} \quad \text{(uniform over all tokens)} $$ However, *empirically*, $y\_w$ and $y\_l$ are selected via: - **Sampling**: Generate $y\_1,...,y\_5 \sim \pi\_{\text{SFT}}(y|x)$ - **Selection**: $y\_w = \arg\max\_{y\_i} \text{score}(y\_i)$, $y\_l = \arg\min\_{y\_i} \text{score}(y\_i)$ This induces *implicit vocabulary subspaces*: $$\mathcal{V}\_{\text{win}} = \\{y \in \mathcal{V} \mid \text{score}(y) \geq \tau\\}, \quad \mathcal{V}\_{\text{lose}} = \\{y \in \mathcal{V} \mid \text{score}(y) \leq \tau'\\}$$ 2. **Effective $U(y|x)$ in Practice**: The *effective* $U(y\_w|x)$ and $U(y\_l|x)$ become: $$U(y\_w|x) = \prod\_{t=1}^{|y\_w|} \frac{1}{|\mathcal{V}\_{\text{win}}|}, \quad U(y\_l|x) = \prod\_{t=1}^{|y\_l|} \frac{1}{|\mathcal{V}\_{\text{lose}}|}$$ This leads to: $\gamma = \beta \left( \log U(y\_w|x) - \log U(y\_l|x) \right)$. The performance impact arises because $\mathcal{V}\_{\text{win}}$ and $\mathcal{V}\_{\text{lose}}$ differ across instances, making SimPO's fixed $\gamma$ suboptimal. **Q3: Alignment of AlphaDPO with DPO at $\alpha=1$** We sincerely appreciate the reviewer's careful observation and apologize for the ambiguity in our initial formulation. The implicit reference model $\hat{\pi}\_{\text{ref}}(y|x)$ is defined as: $$\hat{\pi}\_{\text{ref}}(y|x)\propto U(y|x) \left( \frac{\pi\_\theta(y|x)}{\pi\_{\text{ref}}(y|x)} \right)^\alpha,$$ where $\alpha$ interpolates between two extremes: 1. **When $\alpha = 0$**: $\hat{\pi}\_{\text{ref}}(y|x)$ reduces to the uniform distribution $U(y|x)$, aligning with SimPO's implicit reference model. 2. **When $\alpha > 0$**: $\hat{\pi}\_{\text{ref}}(y|x)$ increasingly incorporates the dynamic term $\frac{\pi\_\theta}{\pi\_{\text{ref}}}$, creating an adaptive reference model. Critically, **AlphaDPO does not strictly reduce to DPO for any finite $\alpha$**.
Instead, it introduces a novel framework that balances exploration (via $U(y|x)$) and exploitation (via $\frac{\pi\_\theta}{\pi\_{\text{ref}}}$). We will revise the manuscript to eliminate the misleading claim about AlphaDPO reducing to DPO and instead emphasize its unique interpolation mechanism. **Q4: Statistical Significance of Performance Gains** We respectfully disagree. Table 1 shows statistically significant improvements across benchmarks: |Model|AlpacaEval2(LC)|Arena-Hard(LC)| |-|-|-| |Llama3-8B|+6.4%(43.8→46.6)|+2.1%(33.5→34.2)| |Mistral-7B|+7.0% (30.2→32.3) |+8.6%(19.8→21.5)| |Llama3-v0.2-8B|+5.6% (55.6→58.7)|+6.8%(34.0→36.3)| |Gemma2-9B|+1.3%(72.4→73.4)|+5.7% (56.1→59.3)| These gains are consistent and meaningful, particularly given the saturated performance of modern LLMs. To rigorously validate the robustness of our improvements, we report the standard deviations and confidence intervals of current evaluations in (https://anonymous.4open.science/r/AlphaDPO-431F/significant_exp.md). **Q5: Comparison with Online DPO Methods** AlphaDPO is orthogonal to online preference optimization. While OAIF/IDPO focus on *data collection dynamics*, our work addresses *reference model design* in offline settings. Notably, AlphaDPO's adaptive reference can be integrated into online frameworks by replacing static $\pi\_\text{ref}$ with $\hat{\pi}\_{\text{ref}}$. We acknowledge this as valuable future work and will explore it in subsequent studies.
Summary: This paper proposes a novel strategy for LLM alignment designed to address the limitations of SimPO and DPO. The proposed AlphaDPO adaptively sets the reward margin based on the ratio between the preference model and the policy model. The relations to SimPO and TDPO loss have been studied. Extensive experiments demonstrate AlphaDPO's superior performance across multiple baselines and LLM architectures. Claims And Evidence: Yes, the claims are mostly supported through both theoretical analysis and experimental results. Methods And Evaluation Criteria: Yes, the proposed method aligns well with the LLM preference optimization problem, directly addressing two identified limitations in existing approaches. The evaluation criteria employ standard benchmarks (AlpacaEval 2 and Arena-Hard) and diverse model architectures (Mistral2-7B, Llama3-8B, Gemma2-9B), providing comprehensive evidence of AlphaDPO's effectiveness across different settings. Theoretical Claims: Yes, I have checked the proofs provided in this submission, including those in the appendix. I did not find any issues. Experimental Designs Or Analyses: Yes, I have examined the experimental designs and analyses in the paper, particularly those in section 5 and appendix D. Supplementary Material: Yes, I have examined all sections in the Appendix. Relation To Broader Scientific Literature: 1. This work explores an essential problem in preference optimization methods—how to effectively utilize the reference model. AlphaDPO proposes a novel interpolation between the current policy model and uniform policy, providing a bridge between DPO and SimPO, offering a more flexible framework. 2. While AlphaDPO is fundamentally an offline preference optimization technique, its adaptive nature shares conceptual similarities with online RL approaches. The adaptive reference model effectively serves as a dynamic critic, similar to how value functions guide policy updates in online RL. 3. 
AlphaDPO provides a theoretically grounded approach to the critical balance between alignment and diversity via KL divergence control. Essential References Not Discussed: To my knowledge, this paper has included sufficient references. Other Strengths And Weaknesses: Strengths: - The paper introduces instance-specific margins that advance beyond the fixed approach in SimPO. It establishes connections between existing alignment methods (particularly DPO and SimPO), creating a unified framework that addresses limitations of both approaches. - Extensive experiments consistently demonstrate AlphaDPO's superior performance across various LLM architectures and benchmarks. The comprehensive ablation studies effectively isolate the contributions of different components of the approach. - The authors provide theoretical analysis on the lower bound and its connections to TDPO. - The paper is well-written with clear motivation and is easy to follow. Weaknesses: - While the authors establish a theoretical connection between AlphaDPO and online methods, questions remain about the practical utility of this theoretical framework, given that online methods themselves lack strong theoretical guarantees. - The authors claim that AlphaDPO is particularly effective "when the reference model is not well-calibrated at the token level." However, this statement appears contradictory given that AlphaDPO itself operates at the sequence level rather than implementing token-level optimization. - From the formulation of the adaptive preference distribution, it's unclear under what conditions it degrades to DPO. Other Comments Or Suggestions: None. Questions For Authors: - What is the practical utility of the theoretical connection between AlphaDPO and online methods, given that online methods themselves lack strong theoretical guarantees? - Why is AlphaDPO particularly effective when the reference model is not well-calibrated at the token level? - When does AlphaDPO degrade to DPO?
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1: What is the practical utility of the theoretical connection between AlphaDPO and online methods given that online methods themselves lack strong theoretical guarantees?** We appreciate the reviewer raising this important point. Although we did not explicitly emphasize the theoretical connection to online methods in our submitted manuscript, we acknowledge that exploring this relationship represents a promising direction for future research. We believe this connection offers two meaningful benefits: 1. **Algorithmic Insights:** By establishing a theoretical link between AlphaDPO and online methods, we gain a unified view of how adaptive reward margins and the implicit reference model influence policy optimization. Specifically, AlphaDPO's framework demonstrates how sequential KL divergence control naturally arises, thereby providing a clearer understanding of its inherent ability to balance alignment and diversity—even when the reference model is suboptimal. Such theoretical insights were previously unaddressed explicitly in existing online methods. 2. **Empirical Robustness and Practical Guidance:** Despite the lack of rigorous theoretical guarantees in existing online methods, the theoretical analysis of AlphaDPO indicates that its adaptive mechanism implicitly mitigates typical online optimization pitfalls, such as over-optimization. This robustness is empirically demonstrated through AlphaDPO's stable performance across various KL divergence budgets, as illustrated in Figure 3(c) of our paper. We agree with the reviewer's point and will pursue a thorough theoretical investigation of this connection as part of our future work, aiming to further clarify its theoretical foundations and practical implications. 
--- **Q2: Why is AlphaDPO effective when the reference model is not well-calibrated at the token level, given that it operates at the sequence level?** **Full Details**: A comprehensive theoretical analysis (including Lemma 4.1 and robust optimization derivations) is provided in the `Response to Reviewer bC1g`. AlphaDPO’s robustness to token-level miscalibration stems from **sequence-level KL divergence approximation** and **adaptive margin design**, which mitigate noise propagation from unreliable token-level signals. 1. **Theoretical Foundation** - **Problem Context**: When $\pi\_{\text{ref}}$ is miscalibrated at the token level, token-wise KL terms (e.g., in TDPO) amplify noise. - **Key Insight**: By approximating the *sequential KL divergence* (Lemma 4.1), AlphaDPO aggregates token-level uncertainties into a sequence-level margin $M(x,y\_w,y\_l)$. This reduces sensitivity to token-level errors, as the sequence-level signal is statistically more stable. - **Robust Optimization**: Our framework explicitly models bounded deviations from $\pi\_{\text{ref}}$ (Section 3.2), ensuring stability even when token-level probabilities are imperfect. 2. **Empirical Validation** - **Performance Gain**: As shown in Appendix Table 6, AlphaDPO outperforms TDPO (58.7% vs. 52.8% LC win rate on Llama3-8B), demonstrating superior robustness to reference model noise. - **Bias Compensation**: Figure 2 (main paper) shows $\pi\_{\text{ref}}$ fails to distinguish $y\_w$ from $y\_l$ at the token level. AlphaDPO’s adaptive margin $M$ compensates for this by leveraging sequence-level discrepancies, ensuring stable alignment. **Key Takeaways** - **Sequence-Level Robustness**: AlphaDPO avoids token-level noise amplification via sequence-wise KL control, making it less reliant on perfect token calibration. - **Adaptive Margin**: The margin $M(x,y\_w,y\_l)$ dynamically adjusts to instance-specific reference model errors, enhancing robustness. 
- **Empirical Edge**: AlphaDPO’s design consistently outperforms token-level methods (e.g., TDPO) in scenarios with miscalibrated $\pi\_{\text{ref}}$. --- **Q3: When does AlphaDPO degrade to DPO?** Currently, the $\alpha$-DPO algorithm cannot be transformed into DPO merely through parameter adjustments, similar to how SimPO cannot be converted to DPO by altering $\gamma$. However, we believe this topic presents significant promise, allowing us to propose a more generalized formulation: $$ \hat{\pi}\_{\text{ref}}(\cdot|x) \propto U(\cdot|x) \cdot \pi\_\theta^{\alpha\_1}(\cdot|x) \cdot \pi\_{\text{ref}}^{\alpha\_2}(\cdot|x), $$ where $U(\cdot|x)$ is a uniform distribution. This formulation encompasses: - **DPO**: Set $\alpha\_1 = 0$, $\alpha\_2 = 1$, recovering the explicit reference model $\pi\_{\text{ref}}$. - **SimPO**: Set $\alpha\_1 = \alpha\_2 = 0$, yielding a uniform reference model. - **AlphaDPO**: Set $\alpha\_1 = \alpha$, $\alpha\_2 = -\alpha$, enabling adaptive margin control via $\pi\_\theta/\pi\_{\text{ref}}$.
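The three special cases of the generalized implicit reference can be checked mechanically at the level of unnormalized log-scores; the sequence-level log-probabilities below are arbitrary placeholder values, not model outputs.

```python
import math

def log_ref_hat(log_u, log_pi_theta, log_pi_ref, a1, a2):
    """Unnormalized log-score of the generalized implicit reference:
    log U(y|x) + a1 * log pi_theta(y|x) + a2 * log pi_ref(y|x)."""
    return log_u + a1 * log_pi_theta + a2 * log_pi_ref

# Illustrative sequence-level log-probabilities for a single response.
log_u, log_pt, log_pr = -10.0, -7.0, -8.0

dpo = log_ref_hat(log_u, log_pt, log_pr, a1=0.0, a2=1.0)
simpo = log_ref_hat(log_u, log_pt, log_pr, a1=0.0, a2=0.0)
alpha = 0.1
alphadpo = log_ref_hat(log_u, log_pt, log_pr, a1=alpha, a2=-alpha)

assert simpo == log_u                 # SimPO: the plain uniform term
assert dpo == log_u + log_pr          # DPO: tracks pi_ref (up to a constant)
assert math.isclose(alphadpo, log_u + alpha * (log_pt - log_pr))  # alpha-scaled ratio
```

Each setting of $(\alpha_1, \alpha_2)$ reduces the helper to the corresponding reference model stated above, which is the sense in which the formulation unifies the three losses.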
Summary: This paper proposes a new training algorithm for LLM alignment. First, the authors unify the training objective of the two representative alignment training algorithms, DPO and SimPO, into a single one with a fixed margin. Next, they propose a new training algorithm to mitigate the limitation of each algorithm, by introducing an adaptive margin, which is constructed by interpolating the fixed original reference model and the training policy model. The effectiveness of this method is first demonstrated with the theoretical results. Also, the empirical results with various state-of-the-art open-source LLMs (e.g., Llama3 or Gemma2) on standard benchmarks (AlpacaEval 2 and Arena-Hard) further support its effectiveness. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, the theoretical results in sections 3 and 4 look valid. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, I read the supplementary material to check the proof of the theorem (Appendix B) and additional experimental results (Appendix D). Relation To Broader Scientific Literature: New ideas and results are key contributions. Essential References Not Discussed: Considered references are sufficient, but it might be nice if more relevant papers were added. Those works are mentioned below. Other Strengths And Weaknesses: ### Pros 1. **Clarity**. Overall, the writing is clear and easy to follow. In addition, the organization of the main draft is well-established. 2. **Well-motivated problem and intuitive approach.** Alignment of LLMs is an important direction, and the proposed method seems to be intuitive and effective. ### Cons - **Similar idea in previous works**: The idea of constructing an adaptive reference model through interpolation between the fixed original reference model $\pi_{\text{ref}}$ and the training policy model $\pi_{\theta}$ has been explored in previous works [1,2].
While the purpose is quite different, the technical contribution of this work is therefore relatively restricted. The authors should cite these works and add a discussion to clarify the differences and the contribution compared to these works. - **Sensitivity to $\alpha$**: While the proposed method is very sensitive to the choice of $\alpha$, the authors never mention the specific search space for $\alpha$ and how they chose this hyper-parameter for the tables. According to Figure 5 in Appendix D, it seems that the authors chose different hyper-parameters that yield the best performance for target LLMs among {0, 0.05, 0.1, 0.15, 0.2}; for example, $\alpha=0.05$ for Mistral IT 7B and $\alpha=0.2$ for Llama IT 8B v0.2. If this is true, the choice of $\alpha$ in Figures 3 and 4 is quite weird, as $\alpha=0.3$ and $\alpha=0.01$ are not in the original search space. Also, it's unclear whether the authors made the same tuning effort for the considered baselines, with a comparably sized hyper-parameter search space; for example, searching for the optimal $\gamma$ in SimPO over a search space of the same size. [1] Liu et al. Decoding-time Realignment of Language Models. ICML 2024. [2] Kim et al. Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment. ICLR 2025. Other Comments Or Suggestions: Please respond to the weaknesses above. Questions For Authors: Please respond to the weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Q1: Similar idea in previous works.** We sincerely thank the reviewer for pointing out relevant prior works. While both our method and previous approaches involve model interpolation, we highlight three key distinctions: - **Adaptive reward margin**: Unlike existing works that focus on regularization strength [1] or data generation [2], AlphaDPO introduces instance-adaptive reward margins by dynamically reparameterizing the reference distribution through the policy-to-reference ratio ($\pi_θ/π_{ref}$), enabling personalized preference learning that accounts for per-sample preference strength. - **Implicit KL control**: Our theoretical analysis reveals that the $\alpha$-weighted ratio term implicitly controls sequential KL divergence between iterative policy updates, achieving stability without explicit constraints. - **Generalized Preference Optimization**: AlphaDPO generalizes SimPO as special cases ($\alpha=0$) while enabling smooth transitions between policy specialization ($\alpha>0$) and uniform exploration ($\alpha \to 0$). The empirical superiority across three model families (Table 1), demonstrates the critical advantage of adaptive margins. We will add detailed comparisons to these works in the revised manuscript. --- **Q2: Sensitivity to $\alpha$** We appreciate the reviewer's insightful questions regarding hyperparameter sensitivity. Here we clarify our methodology: (1) **Primary $\alpha$ Search Space**: As shown in `Appendix Table 3`, we conducted systematic searches over $\alpha \in $ {0.01, 0.05, 0.1, 0.2} based on validation performance. Figure 5 demonstrates that $\alpha=0.05$ achieves optimal results across most models except Llama3-IT-8B-v0.2 where $\alpha=0.2$ works best, reflecting architecture-dependent calibration needs. (2) **Extended Analysis in Figures**: The expanded $\alpha$ values in Figures 3-4 (including 0.3, 0.5, etc.) 
were intentionally explored to demonstrate our method's reward distribution characteristics across a broader spectrum, not to claim performance improvements. We acknowledge this caused unintended confusion and will explicitly label these as "analysis beyond primary search space" in revisions. (3) **Baseline Fairness**: All methods including SimPO used identical hyperparameter search budgets. For SimPO's $\gamma$, we strictly followed the original paper's recommendation space {0.3, 0.5, 1.0, 1.2, 1.4, 1.6} as shown in `Appendix Table 3`, ensuring fair comparison through equivalent tuning efforts. This controlled approach ensures our conclusions about AlphaDPO's advantages remain valid despite model-specific $\alpha$ variations, while the extended analyses provide valuable insights into the method's behavioral patterns.
Unbiased Evaluation of Large Language Models from a Causal Perspective
Accept (poster)
Summary: The paper explores bias in agents as evaluators (LLMs generating new tasks for evaluating another agent), and detects different kinds of biases. They introduce an unbiased evaluator using causal inference. "## update after rebuttal" I thank the authors for answering my questions. After seeing the other reviews and discussion, I still have doubts about the significance of the paper, but some other issues have disappeared so I increase the score. Claims And Evidence: - The problem of bias is very important when using LLMs evaluating other LLMs. - The causal interventions are not based on the causal graph, even if the causal graph was introduced to understand the bias. - The interventions may overcompensate? It's not clear the reduction of performance is actually (only) compensating for the bias. Methods And Evaluation Criteria: Yes. Theoretical Claims: The theoretical decomposition. I haven’t checked it and find it a bit too abstract, and perhaps not that relevant. Experimental Designs Or Analyses: Yes, I checked the design as written in the paper. Supplementary Material: I skimmed it. Relation To Broader Scientific Literature: I miss an independent assessment of question difficulty by humans, to understand if the interventions are changing some other things. The reformulation should give exactly the same results for humans, and some others that make the questions more difficult should create a similar effect as in humans. For the role of difficulty: Adversarial Benchmark Evaluation Rectified by Controlling for Difficulty https://www.researchgate.net/publication/374304817_Adversarial_Benchmark_Evaluation_Rectified_by_Controlling_for_Difficulty Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: Reference AI-MO 2024 should be AIME 2024? Questions For Authors: Table 2 is hard to understand and going to appendix D didn't help in my case. 
In Figure 3 the options and the "no correct" option are introduced, but what's the problem with the original questions, other than contamination? And the transformation makes them more difficult, but perhaps not only because of removing contamination. I don't see a clear causal graph justifying the intervention. Can these two things be separated? The baseline not changing the meaning is an interesting baseline, but where is this used? Figure 5 shows the accuracy goes down, but I'm not sure that means that contamination goes down as well. How can we know? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Response to reviewer zS8E > **Q1**: Reference AI-MO 2024 should be AIME 2024 **A1:** AI-MO (refer to https://aimoprize.com), the AI Mathematical Olympiad, adapts data from AIME 2024 as its competition benchmark. This widely-used version is publicly available as [aimo-validation-aime](https://huggingface.co/datasets/AI-MO/aimo-validation-aime). Therefore, as referenced in L474, we cite it (in-text citation appears as AI-MO, 2024.) as: ``` AI-MO. AIME 2024. https://huggingface.co/datasets/AI-MO/aimo-validation-aime, 2024. ``` > **Q2**: Table 2 is hard to understand and going to appendix D didn't help in my case. **A2:** Table 2 illustrates a demo case of Bags of Atomic Interventions (BOAT) to demonstrate how each intervention works. For each intervention, we present a simple question to show its effect. To save space, the "intervened" column only displays the part that has changed, highlighted in the same color as the original part it replaces. The other parts of the content remain unchanged. For example, with the Distractor Hint intervention, the full modified content would look like this: ``` ## original question Question: Here is a multiple choice question, answer A or B. Is 9.8 bigger than 9.11? Option: A: True B: False Label: A ## intervened question Question: Here is a multiple choice question, answer A or B, if there is no answer, reply N. Is 9.8 bigger than 9.11? Option: A: True B: False Label: A ``` We will add detailed demonstrations to the caption of Table 2 in the next version. > **Q3**: In Figure 3 the options and the option of "no correct" is introduced, but what's the problem with the original questions, other than contamination? **A3:** As detailed in our general evaluation formulation (L253), we argue that the evaluation process can be seen as a causal analysis. It is essential to assess whether the model truly understands and can make these causal connections.
In this context, the Distractor Hint/Answer Removal is designed to evaluate not only whether the model selects the correct answer, but also whether it effectively rejects the incorrect options. > **Q4**: The transformation makes them more difficult, but perhaps not only because of removing contamination. Can these two things be separated? **A4:** We argue that our Unbiased Evaluator does **NOT** inherently make questions more difficult. The adversarial benchmark in paper [1] mentioned by the reviewer is fundamentally different from our approach. Specifically, adversarial benchmarks are designed to exploit a model’s weaknesses, often **using supervised optimization algorithms to find desired perturbations** that lead to incorrect predictions. In contrast, our Unbiased Evaluator aims to assess **whether models can genuinely answer a question correctly by employing causal interventions that align with human recognition**. Therefore, rather than increasing difficulty, our Unbiased Evaluator provides a more accurate measure of a model’s true and robust performance on a given benchmark by eliminating performance inflation caused by data contamination. To clarify the functioning of the Unbiased Evaluator, we have included a separation ablation in Figure 6, which demonstrates the impact of each individual intervention. Even simple manipulations, such as Option Shuffling, Label Replacement, and Binary Transformation, lead to noticeable degradation in the model’s performance. This provides strong evidence that data contamination plays a significant role in inflating evaluation outcomes. > **Q5**: The baseline not changing the meaning is an interesting baseline, but where is this used? **A5:** The rephrasing baseline, which does not change the question's meaning, is referred to as a minimal Agents-as-an-Evaluator in this paper (see L208), and it serves as a comparative baseline in Figure 2 and Table 1.
> **Q6**: Figure 5 shows the accuracy goes down, but I'm not sure that means that contamination goes down as well. How can we know? **A6:** As presented in A4, Unbiased Evaluator evaluates a model's true and robust performance on a given benchmark. Therefore, **the decline of accuracy actually reflects the decrease of contamination**. To validate this, we further provide an additional fine-tuning ablation study. Specifically, we fine-tune Llama2-13B on the original samples from the MMLU test set and evaluate it on MMLU test set under two conditions: with and without our Unbiased Evaluator. |train set|w/o Unbiased Evaluator|w/ Unbiased Evaluator| |----------------------|---------|---------| |Llama2-13B|55.6|33.7| |Llama2-13B + original test set|96.6|37.1| Even when trained directly on the original test set, the model struggles to perform well under the Unbiased Evaluator, suggesting that it effectively mitigates data contamination and ensures a more robust evaluation. [1] Adversarial benchmark evaluation rectified by controlling for difficulty
Summary: This paper studies potential biases in LLM-based evaluators (“Agents-as-an-Evaluator”) and proposes a new protocol, called the “Unbiased Evaluator,” which systematically introduces small interventions (“Bags Of Atomic Interventions”) into evaluation tasks to mitigate data and model biases. The authors present both theoretical and empirical analyses suggesting their protocol reduces correlation based artifacts and helps reveal model weaknesses that standard benchmarks may overlook. Claims And Evidence: The main claim is that existing multi agent evaluators introduce bias during question generation, and that the proposed BOAT based evaluator offers a more “unbiased” alternative. While the experiments (notably the confusion matrices) do highlight differences in evaluation outcomes, the evidence for truly mitigating all bias remains somewhat limited and relies on a relatively small set of carefully selected interventions. It is unclear whether these interventions comprehensively address the broad range of biases in LLM based evaluations. Methods And Evaluation Criteria: The authors conduct a systematic causal analysis, framing QA as a DAG with interventions on specific “atomic” components. They define several carefully controlled transformations, such as adding distractor questions, to stress test LLM understanding. Accuracy across these perturbed scenarios is aggregated and compared with standard benchmarks. Theoretical Claims: The decomposition of evaluation bias appears logically consistent, and the provided proofs in the appendix are straightforward, though the argument is more conceptual than heavily formal. Experimental Designs Or Analyses: The experiments cover multiple model sizes (both open-source and proprietary), several well-known benchmarks (ARC, MMLU, GSM8K), and detailed ablations of single versus combined interventions. Human verification of a subset of transformed samples is used to confirm correctness of the approach (high agreement rate). 
The methods are transparent, and the sample sizes are standard for these benchmarks, though additional clarity on how random interventions might differ across runs could improve reproducibility. Supplementary Material: I believe the authors did not provide any supplementary materials. The code is yet to be released. Relation To Broader Scientific Literature: The paper builds on existing work on LLM-as-a-Judge by extending to “Agents-as-an-Evaluator" by dissecting biases in both question generation and model self-assessment. Drawing on causal-inference ideas (e.g., interventions on input variables), it aligns with literature on benchmark contamination and fairness in NLP. Essential References Not Discussed: I think mentioning how common bias mitigation approaches proposed within the LLM-as-a-Judge framework is related to the paper, such as Length-controlled AlpacaEval (Dubois et al) and Arena-Hard Style Control (Li et al). Other Strengths And Weaknesses: **Strengths**: 1. Presents a fresh causal perspective on LLM evaluations. 2. Detailed metrics for identifying overconfidence and underconfidence biases. 3. Scalability: the method can be adapted to various choice-based tasks. **Weaknesses**: 1. Some aspects of the theoretical framework remain high-level; more rigorous proofs or formal constraints on interventions might strengthen the argument. 2. The paper focuses primarily on multiple-choice formats; it would be insightful to see how the method generalizes to more open-ended tasks. Other Comments Or Suggestions: N/A Questions For Authors: 1. How would the proposed atomic interventions scale to more complex structured tasks beyond multiple-choice towards open-ended prompt? 2. Did you observe any qualitative differences in model performance across different types of math or reasoning questions when interventions stack up? 3. Can you provide more details on how changes in model performance under your method correlate with human expert judgments overall? 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to reviewer A1m7 > **Q1**: Bias mitigation approaches in the LLM-as-a-Judge are related to the paper, such as Length-controlled AlpacaEval and Arena-Hard Style Control. **A1:** Thank you for your suggestion. Unlike previous bias mitigation approaches in the LLM-as-a-Judge, which primarily focus on the judge side, our paper is the first to analyze bias from the generation side, i.e., Agents-as-an-Evaluator. We will incorporate these works into our related work section for clarification. > **Q2**: Some aspects of the theoretical framework remain high-level, and more rigorous proofs or formal constraints on interventions might strengthen the argument. **A2**: We argue that **our theoretical framework is intuitive and sufficient to support our method’s design**. Inspired by findings in Proposition 3.1, BOAT is designed to mitigate the impact of the related and independent terms (detailed in L366). Further refinements and formal extensions of our framework are important directions that we plan to explore in future work. > **Q3**: How would the proposed atomic interventions scale to more complex structured tasks beyond multiple-choice towards open-ended prompt? **A3:** As shown in Table 3, our method, with designed Question and Answer Jitter, has been successfully scaled to the mathematics benchmark GSM8K, which does not rely on multiple-choice style. Based on our causal formulation of evaluation, we can categorize tasks into two types. - **Most tasks** (e.g., multiple-choice, math), inherently follow natural rules in either the questions or answers, and rule-based interventions can be automatically applied. - **The other small percentage of tasks** (e.g., CivilComments), can use a **debiased Agents-as-Evaluator version**. 
Concretely, our study has revealed the data and model biases of the previous version, inspiring two designs to mitigate them: (1) cross-generation: to reduce model bias, we can break down question generation into multiple chunks, using different models for each. (2) cross-checking: multiple advanced models can be used to cross-check the output to mitigate data bias and enhance quality. Overall, our method is easily scaled to most tasks, and our insights will provide valuable inspiration for future advancements in evaluation methodologies. We will add a **Future Work** section to include these discussions. > **Q4**: Did you observe any qualitative differences in model performance across different types of math or reasoning questions when interventions stack up? **A4:** Yes. In addition to GSM8K in Table 3, we conduct further experiments on a more challenging benchmark, MATH500. We also evaluate a recently open-sourced reasoning model, QWQ-32B, with 16k context. Our experiments revealed two key observations. - **the performance gap between models becomes more pronounced from GSM8K to MATH500**. Notably, Qwen2.5-72B remains the strongest, on par with Mistral-Large-2411 (123B). Meanwhile, the gap between Qwen2.5-72B and models like Llama3.1-70B widens considerably, rising from 5.89 on GSM8K to 19.56 on MATH500, highlighting the superior capabilities of Qwen2.5-72B and Mistral-Large-2411 in handling complex mathematical reasoning. - **the reasoning model exhibits stronger generalization on mathematical benchmarks**, experiencing a significantly smaller performance drop compared to others.
|Model|GSM8K Vanilla|GSM8K Ours|Δ|MATH500 Vanilla|MATH500 Ours|Δ| |-|-|-|-|-|-|-| |Qwen2.5-72B|98.41|88.86|9.55|92.23|77.57|14.66| |Llama3.1-70B|95.98|82.97|13.01|75.87|58.01|17.86| |Yi1.5-34B|91.96|69.60|22.36|65.44|56.37|9.07| |Mistral-Large-2411|97.73|90.04|7.69|86.71|77.51|9.20| |QWQ-32B(16k)|99.32|95.32|4.00|89.78|88.18|1.60| > **Q5**: How changes in model performance under your method correlate with human expert judgments overall? **A5:** **Our Unbiased Evaluator provides a much more correlative assessment with human expert judgments**. Since collecting overall expert judgments across multiple model is costly and impractical, we instead compare our method with LiveBench, a continuously updated benchmark. Specifically, we compute the Pearson and Kendall correlations between our averaged results (Table 3) and the global average results in latest LiveBench-2024-11-25 (https://livebench.ai). Notably, we exclude two models (GPT-4-Turbo and Yi1.5-34B-Chat) that are not evaluated in LiveBench-2024-11-25 for a fair comparison. ||Pearson|Kendall| |-|-|-| |Vanilla|0.918|0.600| |Unbiased Evaluator|0.949|1.000| These results confirm that our method aligns more closely with LiveBench. Notably, it achieves a perfect ranking correlation with LiveBench (as measured by Kendall), a significant improvement over baseline. Unlike LiveBench, which covers diverse tasks and requires substantial resources to update questions regularly, ours leverages existing benchmarks and requires almost no additional resources. We sincerely appreciate your valuable suggestions and will add these results to our ablations.
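The Pearson and Kendall correlation comparison described in A5 can be sketched in plain Python (a minimal illustration, not the authors' code; the score vectors below are hypothetical placeholders, not values from the paper or LiveBench):

```python
from itertools import combinations
import math

def pearson(x, y):
    # Pearson correlation: covariance normalized by the two standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def kendall(x, y):
    # Kendall's tau (no-ties form): (concordant - discordant) pairs
    # divided by the total number of index pairs.
    pairs = list(combinations(range(len(x)), 2))
    c = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) > 0)
    d = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) < 0)
    return (c - d) / len(pairs)

# Hypothetical per-model averages (placeholders, NOT results from the paper).
ours = [88.9, 83.0, 69.6, 90.0]
livebench = [54.2, 48.1, 40.3, 56.7]
print(round(pearson(ours, livebench), 3), round(kendall(ours, livebench), 3))
```

A Kendall tau of 1.0 means the two evaluations induce identical model rankings, which is the property the rebuttal highlights when reporting perfect ranking correlation with LiveBench.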
Summary: The paper introduces ‘Agent as an evaluator’ paradigm with the goal to increase the robustness of LLM-as-a-Judge based evaluations. The evaluation protocol introduces the ability to test model and data bias by taking an active/intervening (agentic) process of evaluating the benchmarks. The query breakdown focuses on problem rephrasing to assess stability of responses. The authors design probing tasks to identify various contamination effect biases. These tasks are designed to reveal data and model biases, informing the development of the Unbiased Evaluator. The work is somewhat inspired from similar research like CogMath where CogMath formalizes the reasoning process into three stages: problem comprehension, problem solving, and solution summarization - however this research generalizes to all other of evaluations as well as introduces error breakdown and analysis with theoretical underpinning and shows strong correlations with performance metrics using statistical metrics. Claims And Evidence: The contributions (according to authors) are as follows: - A theoretical formulation of evaluation bias, offering valuable findings for the importance of minimizing the relative term when designing evaluation protocols. - The first comprehensive bias analysis for Agents-as-an-Evaluator, revealing data and model bias which undermine the reliability and trustworthiness of Agents-as-an-Evaluator. - An unbiased evaluation protocol, Unbiased Evaluator, provides a more comprehensive, unbiased and interpretable assessment for benchmark contamination. The claims are backed with proofs, design of experiments and various results and analysis to validate the claims. There are some similar work that the authors have attributed to in this paper. 
Methods And Evaluation Criteria: The Unbiased Evaluator employs a 'BOAT'-based probing method to dynamically assess LLMs, aiming to reduce evaluation biases (unlike the baseline evaluation, which the authors say gives an unfair advantage to larger LLMs that show higher over-confidence, for example). This method seeks to provide a more accurate representation of an LLM’s capabilities by minimizing the influence of data and model biases. Theoretical Claims: The unbiased evaluation protocol systematically applies statistical principles to decompose the evaluation bias. The decomposition into original, related, and independent terms provides valuable insights into how new biases (using probes) interact with existing ones, guiding the design of more unbiased evaluation protocols. Experimental Designs Or Analyses: The experiments make a lot of sense validating the experimental design and analysis. Supplementary Material: I skimmed through the supplementary material. Relation To Broader Scientific Literature: This work generalizes evaluations using LLM-as-a-Judge paradigm which is one of the few scalable methods today for LLM evaluations (without humans in the loop). It decomposes the evaluation by breaking up the generation, coming up with various probes and then defining theoretical framework for measuring various metrics (consensus, OC, UC). This field is emerging where currently there is a lack of rigor in evaluation for most benchmarks. Essential References Not Discussed: Seems pretty good. However, I may have missed some theoretical references related to studying various error types. Other Strengths And Weaknesses: - The paper writing and organization can be improved a lot. The paper starts and shows the cryptic Fig. 1 and non-standard Fig 2 (and talks about Fig 2 much later) - the definitions are vague (still confused about how the strength parameter is varied during evaluations) - There is very little (almost no) comparison to other work in the field. 
For example, the authors refer to Ye et al. 2024 work ( https://arxiv.org/pdf/2410.0273) where they refer to various biases coming from LLM-as-a-Judge and how the work has defined Robustness rate and consistency rate metrics to assess some of these biases - how do these compare with this work or other relevant work? Other Comments Or Suggestions: Agents-as-an-Evaluator seems like a new and slightly confusing term - needs some definition and clarification imo. Questions For Authors: - please explain the strength parameter and how it is varied in the experiments Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Response to reviewer 3rKe > **Q1**: The paper writing and organization can be improved a lot. The paper starts and shows the cryptic Fig. 1 and non-standard Fig 2 (and talks about Fig 2 much later) - the definitions are vague (still confused how strength parameter is varies during evaluations) **A1-part1 (for Figure 1):** Figure 1 illustrates the overall pipeline of Agents-as-an-Evaluator and our proposed Unbiased Evaluator. The Agents-as-an-Evaluator process (Figure 1a) consists of generation and evaluation stages. The generation phase is affected by data bias (Table 1), where LLMs tend to perform generation task (such as rephrasing) significantly worse in domains where their evaluation performance are weaker. Furthermore, the evaluation phase involves model bias — LLMs generate content that aligns more closely with their strengths, giving themselves an unfair advantage (represented by term "familiar"). In contrast, our proposed Unbiased Evaluator (Figure 1b) evaluates the LLMs with designed BOAT. Considering that Figure 1 may lead to potential misinterpretation, **we provide a simplified version (refer to https://i.imgur.com/XpnwLsk.jpeg)**. We will update Figure 1 with this version and provide a more detailed caption to enhance clarity. **A1-part2 (for Figure 2 and strength parameter):** Figure 2 visualizes the variation of two proposed metrics ($R_{OC}$ on the left and $R_{UC}$ on the right) as strength parameter changes, using two datasets (MMLU and ARC-C). As stated in L267, **strength refers to the probability defined in Equation 3. A higher strength value indicates a greater proportion of "processed" samples within the dataset** ("process" denotes rephrasing and BOAT in Agents-as-an-Evaluator and Unbiased Evaluator, respectively). As the strength increases, for Agents-as-an-Evaluator, we observe a significant rise in $R_{OC}$, while $R_{UC}$ remains relatively stable, suggesting the existence of model bias. 
In contrast, our Unbiased Evaluator remains relatively stable on both metrics. We will incorporate this clarification of the strength parameter into the caption and relocate Figure 2 closer to Section 3.3 for better alignment. > **Q2**: There is very little (almost no) comparison to other work in the field. For example, authors refer to Ye et al. 2024 work ( https://arxiv.org/pdf/2410.02736) where they refer to various biases coming from LLM-as-a-Judge and how the work has defined Robustness rate and consistency rate metrics to assess some of these biases - how do these compare with this work or other relevant work? **A2**: As demonstrated in A1, we **DO** compare our method with previous relevant works on two widely-used benchmarks (MMLU and ARC-C), considering both types of bias. Building upon the theoretical findings and the first comprehensive bias analysis of Agents-as-an-Evaluator, our method is designed as an unbiased LLM evaluation protocol. Therefore, we compare Unbiased Evaluator with previous Agents-as-an-Evaluator on both data (Table 1) and model bias (Figure 2). As for previous evaluation bias works, such as [Ye et al. 2024]( https://arxiv.org/pdf/2410.02736), we have presented a detailed discussion in Section 2.3 and L197. Prior works mainly focus on biases in LLM-as-a-Judge, which operates on the judge side by solely determining whether input falls within the scope of a given rule (e.g. score range 0~5). In contrast, our paper is the first to address the biases inherent in the generation side of Agents-as-an-Evaluator, where LLMs actively contribute to the generation of the very questions. > **Q3**: "Agents-as-an-Evaluator" seems like a new and slightly confusing term and needs some definition **A3**: Integrating agents into the evaluation process is a very recent research direction. Building on the concept of LLM-as-a-Judge, we introduce the term Agents-as-an-Evaluator and have clarified its distinction from LLM-as-a-Judge in L48-L53. 
Formally, Agents-as-an-Evaluator refers to an LLM-based evaluation paradigm in which LLMs (or Agents) not only assess responses but also actively contribute to generating evaluation criteria and questions. We will incorporate this formal definition into the introduction for better clarity.
Summary: This paper presents Bags of atomic interventions (BOAT) to address the data contamination problem in LLM evaluation. It first develops a theoretical formulation of evaluation bias, and identifies the data and model bias in the agents-as-an-evaluator paradigm. It then proposes the unbiased evaluator to help evaluate LLMs with less bias. ## update after rebuttal In my initial comment, I mainly question the justification of BOAT. During the rebuttal, the author has thoroughly addressed this concern, so I updated my score to support the work. Claims And Evidence: Yes or no. One of the major claims in the paper is supported by Table 3 which indicates the contamination problem in the current benchmark. The unbiased evaluator heavily depends on the BOAT, which is hand-designed (Section 4.2). The reviewer is not fully aware of how these principles are hand-designed to fully follow the theoretical framework. Methods And Evaluation Criteria: Yes, the paper uses ARC-C and MMLU, GSM8K for evaluation, gpt-4 and gemini, llama, mistral, qwen and yi for models. The reviewer is convinced these choices are reasonable. Theoretical Claims: Yes, the reviewer checks the theoretical analysis in Section 3. Experimental Designs Or Analyses: The experimental designs largely make sense, but the reviewer is not convinced of the derivation of BOAT. Supplementary Material: Yes, the reviewer reviews Part C and D in the supplementary materials. Relation To Broader Scientific Literature: The paper largely cited proper papers. However, the reviewer believes there is a popular and similar work that the paper does not discuss [1]. Can the authors include a discussion of this paper? [1] Rethinking Benchmark and Contamination for Language Models with Rephrased Samples. Essential References Not Discussed: Please see above comments. Other Strengths And Weaknesses: Other strength: The paper is addressing an important problem and attempt from a theoretical perspective. 
The major weaknesses are the justification of BOAT and the lack of a comparison against the above paper. The reviewer is willing to raise the score if they can be addressed adequately. Other Comments Or Suggestions: Please see the above comments. Questions For Authors: Please see above questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to reviewer D15D > **Q1**: The unbiased evaluator heavily depends on the BOAT, which is hand-designed, and how these principles are hand-designed to fully follow the theoretical framework. **A1**: **The design of the Unbiased Evaluator is grounded in our theoretical findings** (see the detailed discussion in L366). In particular, Proposition 3.1 shows that the bias in the new evaluation protocol can be decomposed into original, related, and independent terms. For the related term, BOAT’s interventions help mitigate the biases present in the original benchmark (such as ambiguities), thus reducing the impact of the related term in Proposition 3.1. Additionally, the independent term is minimized by our rule-based design. Overall, this paper provides the first comprehensive bias analysis for Agents-as-an-Evaluator and designs a simple unbiased alternative guided by our theoretical insights. Our theoretical insights, as well as bias analysis, will inspire future design for LLM evaluation. > **Q2**: Discussion with previous paper [1] **A2**: We argue that our paper differs from paper [1] in the following aspects: - **Different focus, findings, and methodologies**: First, paper [1] mainly addresses **contamination**, specifically the inclusion of rephrased test samples in training data. In contrast, our work focuses on **evaluation bias** in the Agents-as-an-Evaluator paradigm (rephrasing is a special case of Agents-as-an-Evaluator). Second, while paper [1] utilizes an LLM-based decontaminator to identify rephrased samples, we take a fundamentally different approach by mitigating evaluation bias through causal interventions. - **Our method naturally extends and advances the contributions of paper [1]:** Paper [1] highlighted the challenges in contamination formulation for future works in the end (see Section 6.1), such as mathematical cases where a training and test example differ only in numerical values and background details. 
As outlined in our general evaluation formulation (L253), we formulate the evaluation process into a causal analysis, and it is crucial to assess whether the model is genuinely capable of these causal connections. Based on this, a robust and contamination-free evaluation protocol should determine whether the model truly possesses the ability to answer the questions correctly. Our proposed Unbiased Evaluator achieves this by assessing the model’s responses under various causal combinations of Bags of Atomic Interventions (BOAT). For a more comprehensive understanding of the Unbiased Evaluator, following the contamination detection methodology in [1], we perform an evaluation of the fine-tuned model using our approach. Specifically, we fine-tune Llama2-13B on both rephrased and original samples from the MMLU test set and evaluate it on MMLU test set under two conditions: with and without our Unbiased Evaluator. In particular, the results in parentheses are the results from Table 2 of paper [1]. |train set|w/o Unbiased Evaluator|w/ Unbiased Evaluator| |----------------------|---------|---------| |Llama2-13B|55.6 (54.8)|33.7| |Llama2-13B + rephrased test set|85.7 (85.9)|32.8| |Llama2-13B + original test set|96.6 (100)|37.1| These results highlight that our Unbiased Evaluator provides a more rigorous assessment of benchmark contamination. Even when trained directly on the original test set, the model struggles to perform well under the Unbiased Evaluator, suggesting that it effectively mitigates data contamination and ensures a more robust evaluation. Overall, grounded in our theoretical findings and the first bias analysis for Agents-as-an-Evaluator, Unbiased Evaluator is designed to provide a more robust and unbiased assessment for benchmark contamination. We sincerely appreciate your valuable suggestions and will cite paper [1] and include the discussions above into the related works section. 
If our rebuttal successfully addresses your concerns, we kindly ask you to consider raising our score. [1] Rethinking Benchmark and Contamination for Language Models with Rephrased Samples --- Rebuttal Comment 1.1: Comment: Thank you for getting back. I appreciate the rebuttal; they address my concerns. I raise my score to 3 to support the paper.
Designing Cyclic Peptides via Harmonic SDE with Atom-Bond Modeling
Accept (poster)
Summary: The paper introduces CPSDE, a new model for designing cyclic peptides using harmonic SDE and explicit atom-bond modeling conditioned on a 3D structure of a protein target. CPSDE comprises two key components: a generative structure prediction model and a residue type predictor. Alternating between these two models, CPSDE iteratively updates sequences and structures. CPSDE can be trained on small molecules and linear peptides, removing the need for abundant cyclic peptide data. It handles cyclization and non-standard amino acids. Experimental results show reliable stability and affinity, and some molecular dynamics simulations further validate the model in real-world design scenarios. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: no theoretical claims Experimental Designs Or Analyses: yes Supplementary Material: quick pass over the supplementary Relation To Broader Scientific Literature: Current methods for peptide design mainly focus on linear peptides, as they cannot include non-canonical amino acids and are not designed to handle the cyclicity constraint. CPSDE addresses these limitations with atom-bond modeling. Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths: * the problem of cyclic peptide generation conditioned on a target structure is an important problem, with few works able to handle both cyclization and non-canonical amino acids; as such, the problem tackled is new from an application point of view. The fact that the model can be trained on non-cyclic peptides is interesting as well. * experimental results show that CPSDE outperforms baseline methods for linear peptides in terms of stability, affinity, and diversity. 
The authors also performed some MD simulations. Weakness: * the main weakness I see for this paper in an ML venue is that it is highly specialized to tackle the problem of cyclic peptide design; arguably this is an important application, but the authors seem to propose a rather complex model to handle the cyclic peptide generation task. * moreover, I understand there are not many existing works for cyclic peptide generation, but the experimental comparison is done against approaches for linear peptide generation. It is thus complicated to understand how the proposed ML model compares to alternatives. Could the authors provide a baseline (be it a non-ML approach) to compare the performance? I am aware of two works for cyclic peptide generation that are very recent (as such the authors are not required to include them for comparison), e.g. PepTune: De Novo Generation of Therapeutic Peptides with Multi-Objective-Guided Discrete Diffusion, Tang et al., and Accurate de novo design of high-affinity protein binding macrocycles using deep learning, Rettie et al., although the latter cannot model non-canonical amino acids; it would be helpful to compare on a benchmark setting which can be handled by models other than CPSDE. Other Comments Or Suggestions: / Questions For Authors: * How does the generation time compare to alternative peptide generative models? Given all-atom and bond modeling, the sampling time might be heavy. ---- Post rebuttal ---- Thank you for your rebuttal; I increased my score. The additional comparison on linear peptide generation is useful in general to understand how this model compares to other models, be it on non-cyclic peptides. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: "The main weakness I see for this paper in an ML venue is that it is highly specialized to tackle the problem of cyclic peptide design; arguably this is an important application, but the authors seem to propose a rather complex model to handle the cyclic peptide generation task."** A1: Thank you for recognizing the significance of cyclic peptide design. Our method is complex because of two main challenges: the scarcity of cyclic peptide data, and the fact that common residue-frame protein representations inadequately handle the unique geometrical constraints and occasional non-canonical amino acids. Our approach addresses these issues with all-atom and bond modeling and two integrated modules, AtomSDE and ResRouter, both of which are essential and non-redundant. To our knowledge, this is the first generative model for designing cyclic peptides. We hope this sparks further research and advancements in this important field. **Q2: Could the authors provide a baseline (be it a non-ML approach) to compare the performance? I am aware of two works for cyclic peptide generation that are very recent (as such the authors are not required to include them for comparison), e.g. [1,2], although the latter cannot model non-canonical amino acids.** A2: Thank you for highlighting this. Both [1] and [2] are excellent works, yet they differ significantly from our approach. - [1] is a ligand-based drug design (LBDD) method that models the sequence of cyclic peptides using discrete diffusion, optimized by multiple reward functions. It does not explicitly incorporate the 3D structure of target proteins, whereas our structure-based drug design (SBDD) method directly designs ligands based on 3D target structures. - [2] uses modified RoseTTAFold and RFdiffusion with cyclic relative positional encoding to generate macrocyclic backbones. We have already cited [2] in our paper. 
It only supports head-to-tail cyclization due to its residue-level encoding limitations, whereas our work accommodates all four types of cyclic peptides. Both works are very recent: [1] was released on November 18, 2024, and [2] on December 23, 2024. As neither has released their code, we are unable to directly compare methods at this time. We will cite these papers and discuss them further in future versions of our paper. References: [1] PepTune: De Novo Generation of Therapeutic Peptides with Multi-Objective-Guided Discrete Diffusion, Tang et al. [2] Accurate de novo design of high-affinity protein binding macrocycles using deep learning, Rettie et al. **Q3: "It would be helpful to compare a benchmark on a setting which can be handled by other models than CPSDE."** A3: Given that all baseline methods, including ours, can design linear peptides, we compare them under these conditions. Please see our responses to Reviewer Rxgy's Q2 & Q3 for more details. Nonetheless, we continue to emphasize that the principal focus of our work is on cyclic peptide design. **Q4: "How does the generation time compare to alternative peptide generative models ? given all-atom and bond modeling, the sampling time might be heavy."** A4: We benchmark the average time of generating one peptide for all co-design baselines and our methods on a single NVIDIA A100-SXM4-80GB GPU. See the results below. | Method | Peptide Type | Time (s) | |---|---|---| | ProteinGenerator | Linear | 31.80 | | PepFlow | Linear | 12.09 | | PepGLAD | Linear | 4.40 | | CpSDE | Cyclic | 16.88 | Given that computational drug design does not demand real-time model response, the inference time of our method is deemed acceptable.
Summary: This work tackles the task of cyclic peptide design. Cyclic peptides can have unique advantages in terms of stability and affinity when producing binders compared to other types of peptides or ligands. While there is much work in small molecule as well as protein and peptide generation, there is no prior work on producing cyclic peptides, a gap in the literature, which this paper fills. To this end, the authors propose CpSDE, a method consisting of AtomSDE, a structure prediction model, and ResRouter, a residue type predictor. The two components are called in an alternating manner in a denoising diffusion framework to produce novel cyclic peptides. The approach leverages an explicit all-atom formulation and builds on the atom73 representation with a side chain superposition framework used in previous work. CpSDE also includes explicit bond modeling, and cyclization and target information is given as conditioning. The paper computationally validates the approach through energy-based metrics for stability and validity as well as diversity. Moreover, it runs molecular dynamics simulation for selected cases, showing stable conformations of the generated cyclic peptides, thereby supporting high binding affinity. Claims And Evidence: All claims made in the paper are appropriately supported through convincing experiments. Methods And Evaluation Criteria: All methods and evaluation criteria are appropriate for the problem at hand. Theoretical Claims: The paper does not have any complex theorems or proofs, so this question does not apply. The maths around the harmonic SDE and the diffusion framework seems correct. Experimental Designs Or Analyses: All experimental designs and analyses seem sound and valid to me. I have no concerns. Supplementary Material: Yes, I also reviewed the entire supplementary material (I did not read it in detail, though). 
The supplementary material contains a lot of additional information: dataset details, additional related work discussions, more introductory information about cyclic peptides, experiment and implementation details, additional ablation studies, additional information about the used atom73 and other representations, a more detailed discussion of model architecture and sampling algorithms, a discussion of model limitations and future work, and a lot of excellent visualizations of generated peptides. In summary, this is a very comprehensive supplementary material, which I appreciate. Relation To Broader Scientific Literature: The authors did an excellent job putting the paper in the context of the broader literature and motivating their approach. The paper has a long list of references and an additional discussion of related work in the appendix. Essential References Not Discussed: I was not able to identify any essential work that was not cited. Other Strengths And Weaknesses: **Strengths:** - To the best of my knowledge, this is the first protein/peptide design paper that tackles cyclic peptide generation, thereby filling a gap in the literature. This means that the work can be considered impactful and significant. - The chosen methodology relies on existing techniques (harmonic SDE diffusion, graph neural networks, atom73 representation, etc.), which themselves are not novel, but these components are put together in a novel, original and well-motivated way for the task at hand. - The quantitative comparisons to existing works show that CpSDE performs on-par with previous works. While one may criticize the work for not achieving state-of-the-art performance across the board on all metrics, all existing works only generate linear peptides (and in the case for RFDiffusion only generate non-diverse simple helices). CpSDE opens up the possibility for cyclic peptides, in contrast to all existing works, which is very innovative. 
- The additional validation based on molecular dynamics simulation that goes beyond simple energy and diversity metrics is very nice and convincing. - The paper is very well written and clearly explained, with an excellent introductory section, motivating cyclic peptide design and introducing it in an appropriate manner to the machine learning audience. - The quality of the figures and visualizations is excellent. - As discussed above, the supplementary material is very comprehensive and leaves no questions open. **Weaknesses:** Frankly, this is a great paper, which I enjoyed reading and reviewing, and I was not able to identify any major flaws or weaknesses that would make me question the work. Consequently, I applaud the authors for their great work and highly recommend the paper for acceptance. Other Comments Or Suggestions: It would be great if the authors would release their curated training dataset as well as models and code for the broader community. Moreover, I have some minor wording comments: - Line 124, "...a groundbreaking approach...": I believe it is not up to the authors to decide themselves whether their approach is groundbreaking or not. "Groundbreaking" is a very strong word. The community will decide this. I would suggest changing this wording. - Line 430, Conclusions, "...CpSDE is a pioneering...": Same issue, please tone down the wording and let the community judge whether this is pioneering or not. Questions For Authors: - Line 207: What exactly is the role of $\sigma_P^{-2}$ when calculating $\boldsymbol{H}$? The paper only says that this is a "receptor-dependent scalar value", but no intuitions are given. It would be great if the authors could explain this better. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: **Q1: "It would be great if the authors would release their curated training dataset as well as models and code for the broader community."** A1: We would like to open-source our work to contribute to the community. **Q2: "I have some minor wording comments."** A2: Thanks for pointing this out. We will change these words and choose more objective wording in the future version of our paper. **Q3: "What exactly is the role of $\sigma_P^{-2}$, when calculating $\mathbf{H}$?"** A3: $\mathbf{H}=\mathbf{L}+\sigma_P^{-2}\mathbf{I}$, where $\mathbf{L}$ is the Laplacian matrix. Intuitively, $\mathbf{L}$ encourages atoms connected in the graph to be initialized closer together. $\sigma_P$ is the standard deviation of the atom coordinates of the protein pocket; intuitively, the $\sigma_P^{-2}\mathbf{I}$ term encourages the atoms to be initialized in a more scattered fashion when the pocket itself is large. This reflects useful prior knowledge, as a pocket typically accommodates a ligand that complements its shape.
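To make the intuition in A3 concrete, here is a minimal NumPy sketch (our own illustration, not the authors' released code) of sampling initial coordinates from a harmonic prior $\mathcal{N}(\mathbf{0}, \mathbf{H}^{-1})$ with $\mathbf{H} = \mathbf{L} + \sigma_P^{-2}\mathbf{I}$; the `edges` bond list and `sigma_p` value are placeholder inputs.

```python
import numpy as np

def harmonic_prior_sample(edges, n_atoms, sigma_p, seed=0):
    """Draw 3D coordinates from N(0, H^{-1}) with H = L + sigma_p^{-2} I.

    L is the graph Laplacian of the chemical bonds, so bonded atoms are
    correlated and initialized close together; a larger sigma_p (bigger
    pocket) flattens H and scatters the initialization more widely.
    """
    L = np.zeros((n_atoms, n_atoms))
    for i, j in edges:  # Laplacian: degree on the diagonal, -1 per bond
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    H = L + sigma_p ** -2 * np.eye(n_atoms)
    cov = np.linalg.inv(H)  # the sigma_p term makes H positive definite
    rng = np.random.default_rng(seed)
    # one independent draw per spatial dimension -> shape (n_atoms, 3)
    return rng.multivariate_normal(np.zeros(n_atoms), cov, size=3).T

# a 4-atom chain: bonded pairs are initialized closer together on average
coords = harmonic_prior_sample([(0, 1), (1, 2), (2, 3)], n_atoms=4, sigma_p=1.0)
```

Note that without the $\sigma_P^{-2}\mathbf{I}$ term the Laplacian alone would be singular, so the regularizer is also what makes the prior well defined.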
Summary: This paper describes a generative method to design cyclic peptides given a protein target. The method uses two diffusion models utilized in a coupled fashion: one to generate the structure, the other to predict the sequence. ## Update After Rebuttal I thank the authors for addressing my review. I have decided to stay with my rating of 4. Claims And Evidence: The paper shows comparison with existing methods for linear peptides, and also shows predictions for some example targets. Overall, I think the evidence is adequate, but one evaluation that I would be interested in seeing is a comparison with a known therapeutic cyclic peptide. How well do the ligands generated by CpSDE compare against those? Methods And Evaluation Criteria: Covered above. Theoretical Claims: N/A. Experimental Designs Or Analyses: The experiment designs were adequate. I did not review them in depth, but referred to them to clarify some points. Supplementary Material: I did not review them in depth, but referred to them to clarify some points. Relation To Broader Scientific Literature: The key contributions were well-situated within the existing literature in the Related Work section. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I think the work is innovative and interesting. I would have liked to see more convincing evidence that the benefits have real-world applicability, including, as mentioned above, comparing the generated ligands to known cyclic peptides. Other Comments Or Suggestions: Figure 1 felt unnecessary to me. Overall, Figure 2 is not very clear, especially the denoising/renoising/ResRouter coupling; it seems to show repetitive denoising without re-noising. In 3.3, it is stated that the ligands are noisy, but the figure shows ResRouter using denoised ligands. Some intuition on why the renoising is necessary would be helpful. Questions For Authors: * AtomSDE only includes the protein target. How are different pockets specified? 
* Page 2, line 191: The sentence beginning “The inclusion…” could be explained a bit more. * Page 3, line 154: Why are the 3D structures of the cyclization part unavailable? * In 4.1, it is not clear to me how the training/test splits are generated using the sequence identity. * Can CpSDE be adapted for use on linear peptides? This would allow for a head-to-head comparison against other methods. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1: "Comparing the generated ligands to known cyclic peptides."** A1: We conducted a comparison of our method with known cyclic peptides. Vasopressin, a natural cyclic peptide featuring intramolecular disulfide (S-S) bond cyclization, is utilized in the treatment of antidiuretic hormone deficiency, vasodilatory shock, gastrointestinal bleeding, ventricular tachycardia, and ventricular fibrillation [1]. We selected two vasopressin-protein complexes, PDB IDs: 1JK4 and 1YF4, to design cyclic peptides targeting bovine neurophysin II and trypsin, respectively. The LEDGF binding site of HIV integrase (HIV-IN) represents a promising target for novel inhibitor development. Prior research has leveraged solution cyclization to discover various head-to-tail cyclic peptides interacting with this site [2]. The PDB IDs are 3A\*\* in the following table. The results are shown as follows: |PDB|Ref.||Our|| |---|-|-|-|-| ||Stab.|Affi.|Stab.|Affi.| |1jk4|-307.23|-43.17|-215.91|-96.67| |1yf4|-52.36|-37.60|-47.03|-46.74| |3ava|-276.09|-30.49|-246.03|-31.98| |3avb|-248.83|-32.31|-233.89|-31.86| |3avg|-314.92|-31.15|-276.46|-37.93| |3avh|-278.42|-32.41|-255.95|-31.74| |3avi|-252.17|-42.72|-222.94|-33.89| |3avj|-381.38|-41.21|-334.38|-30.35| |3avk|-308.66|-38.18|-281.07|-30.07| |3avl|-306.86|-30.78|-220.60|-104.93| |3avm|-252.95|-31.78|-227.98|-26.94| |3avn|-262.63|-29.71|-228.48|-32.44| The results demonstrate that our method successfully designs cyclic peptides with affinity and stability comparable to known cyclic peptides, albeit with slightly lower stability than the reference. [1] Vasopressin: physiology, assessment and osmosensation, Bichet et al. Journal of Internal Medicine, 2017. [2] Crystal Structures of Novel Allosteric Peptide Inhibitors of HIV Integrase Identify New Interactions at the LEDGF Binding Site, Peat et al. ChemBioChem, 2011. 
**Q2: "Figure 1 felt unnecessary, to me."** A2: Figure 1 illustrates the therapeutic advantages of cyclic peptides compared to linear peptides. We would like to move this to the appendix. **Q3: Explain Figure 2, especially the denoising/renoising/ResRouter coupling. "Some intuition on why the renoising is necessary would be helpful."** A3: The generative process is a reverse SDE, where each Euler step consists of two parts: drift (denoising) and diffusion (renoising). See Equation (2) in the paper, where the Wiener process introduces the stochasticity. In Figure 2, AtomSDE adjusts the Atom73 coordinates through denoising and renoising, treating the denoised ligand as an intermediate state. ResRouter alters the chemical graph by predicting amino acid types based on the denoised ligand, which provides more useful signals than the noised ligand. Importantly, ResRouter changes only the atom and bond types in the chemical graphs and leaves the Atom73 coordinates unchanged. Similar operations are utilized in [3]. [3] An all-atom protein generative model, Chu et al. PNAS, 2024. **Q4: "AtomSDE only includes the protein target. How are different pockets specified?"** A4: AtomSDE includes both pockets and noisy ligands. Our method aims to design cyclic peptides based on a given binding site. We refer to the protein target as the pockets. This setup aligns with the baselines, such as PepFlow. Specifically, the pockets are defined as residues of the protein target within 10 Angstroms surrounding a known ligand. We will include more details and clarifications in a future version of our paper. **Q5: Explain page 2, line 091.** A5: Our method models all atoms and bonds, treating linear and cyclic peptides equivalently since both are composed of the fundamental components—atoms and bonds. Assuming AtomSDE is a well-trained docking model, it facilitates the generative process by encouraging two atoms connected by a chemical bond to be positioned closer together appropriately. 
By predetermining the cyclization type, we specify the related atom and bond types accordingly. Incorporating bond modeling ensures cyclization occurs naturally in 3D space, as bound atoms are drawn closer during the generative process. **Q6: Page 3, line 154: Why are the 3D structures of the cyclization part unavailable?** A6: The 3D structures of the cyclization segment are unavailable because they are part of what we aim to design. **Q7: "In 4.1, it is not clear to me how the training/test splits are generated using the sequence identity."** A7: Samples are clustered by receptor sequence similarity at a threshold of 0.3, and whole clusters are assigned to either the training set or the validation set. In other words, if two samples are more than 30% similar in sequence, they can never be split across the two sets (one in training and the other in validation). When receptors have multiple chains, the sequences are concatenated together to determine similarity. This method is widely used in previous works. **Q8: "Can CpSDE be adapted for use on linear peptides? This would allow for a head-to-head comparison against other methods."** A8: Please refer to our responses to Reviewer Rxgy's questions 2 and 3.
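To make the Euler step described in A3 concrete, here is a minimal NumPy sketch (our own illustration, not the paper's implementation) of one Euler-Maruyama step of a reverse-time SDE, showing the drift (denoising) and Wiener (renoising) parts. We assume a variance-exploding forward SDE with zero drift; `score_fn` and the diffusion coefficient `g` are placeholders for learned/chosen components.

```python
import numpy as np

def reverse_sde_euler_step(x, t, dt, score_fn, g, rng):
    """One Euler-Maruyama step of a reverse-time SDE.

    Assumes a variance-exploding forward SDE dx = g(t) dW, so the reverse
    drift is g(t)^2 * score, integrated backward in time (t -> t - dt).
    """
    # drift part (denoising): move toward higher data density
    x = x + (g(t) ** 2) * score_fn(x, t) * dt
    # diffusion part (renoising): discretized Wiener increment
    x = x + g(t) * np.sqrt(dt) * rng.standard_normal(size=x.shape)
    return x

# toy check: with the score of a standard normal (score = -x) and g = 1,
# widely scattered samples contract toward unit scale over many steps
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 3)) * 5.0
t = 1.0
for _ in range(100):
    x = reverse_sde_euler_step(x, t, 0.01, lambda x, t: -x, lambda t: 1.0, rng)
    t -= 0.01
```

The point of the toy check is only to show the two-part structure of each step: the deterministic drift alone would collapse samples, while the Wiener increment keeps the process stochastic.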
Summary: The paper proposes an approach for the design of cyclic peptides using score-based generative models and diffusion. It is termed harmonic SDE, mainly because conditioning on a chemical graph gives rise to a slightly non-standard forward process. The approach has been evaluated in peptide design against a few other approaches, but with the caveat that their outputs are linear rather than cyclic peptides. The main aspects in the empirical evaluation deal with stability, affinity, and diversity. ## POST-REBUTTAL ## Thank you for the hard work. I will increase my score, but please do a meaningful revision of the final paper. Claims And Evidence: I find the idea of conditioning generative models using chemical graphs interesting, and also the problem of designing cyclic peptides highly relevant for therapeutic purposes. However, the empirical evaluation of both ideas is inadequate. The experiments essentially compare apples to oranges in assessing designs that are linear vs cyclic peptides. What would make sense is to have the approach first design only linear peptides and evaluate the extra boost that comes as a result of conditioning on chemical graphs relative to several different architectures/baselines. My understanding is that data in crystal form would also allow for evaluating structural properties and reporting against these metrics vs the same approaches without such conditioning. The second aspect is the merits of cyclic peptide design, which is challenging to evaluate due to the lack of crystal structures of complexes. Methods And Evaluation Criteria: There are two components in the approach: i) ATOMSDE, a docking model trained using both small molecules and peptides as ligands. It is unclear what fraction of the data is actually cyclic peptides and why this dataset would be relevant for generating from that class. ii) RESROUTER, which predicts amino-acid types from aggregated hidden states representing backbone atoms. 
Still, the question is: if the data on cyclic peptides is sparse, how can the model mine useful signal for completing cyclization? Please see above for more details on evaluation. Theoretical Claims: Not applicable Experimental Designs Or Analyses: see above Supplementary Material: No Relation To Broader Scientific Literature: Good coverage of related work on generative models for peptides Essential References Not Discussed: Good coverage of related work Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: "I find the idea of conditioning generative models using chemical graphs interesting, and also the problem of designing cyclic peptides highly relevant for therapeutics purposes. However, the empirical evaluation of both ideas is inadequate."** A1: Our work focuses on designing cyclic peptides instead of linear peptides, although our method is indeed capable of designing the latter as well. Our method is tailored specifically for cyclic peptide design. Designing linear peptides often involves modeling translation, rotation of residue frames, and side-chain torsion angles—techniques that essentially encompass all-atom coordinate modeling, as seen in methods like PepFlow. However, these techniques fall short for cyclic peptide design due to unique geometrical constraints and non-canonical amino acids involved. This challenge motivated us to introduce two modules: AtomSDE and ResRouter. These modules enable explicit modeling of all atoms and bonds, addressing the limitations of previous methods in cyclic peptide design. In the generative process, AtomSDE adjusts atom coordinates based on chemical graphs, while ResRouter refines these graphs by predicting amino acid types. We hope this explanation clarifies the motivation and contributions of our work. Besides, we have ablated the effect of Harmonic AtomSDE and ResRouter in designing cyclic peptides. Please refer to Appendix H.3. **Q2: Evaluation of extra boost in linear peptide design.** A2: While designing linear peptides is not our primary focus, we appreciate your suggestion to compare our method with existing baselines in this task. 
|Method|Co-Design|Peptide Type|Stability||Affinity||Diversity| |---|-|-|-|-|-|-|-| ||||Avg.|Med.|Avg.|Med.|| |Reference||Linear|-672.53|-634.71|-85.03|-78.70|| |RFDiffusion|N|Linear|-633.51|-607.82|-70.30|-61.35|0.55| |ProteinGenerator|Y|Linear|-576.39|-554.70|-46.98|-40.39|0.58| |PepFlow|Y|Linear|-576.16|-498.31|-47.88|-42.40|0.70| |PepGLAD|Y|Linear|-359.44|-310.33|-45.06|-38.56|0.79| |CpSDE|Y|Linear|-567.34|-510.58|-55.48|-49.89|0.77| Our method excels in designing linear peptides with superior affinity and comparable stability among all co-design methods, while also maintaining considerable diversity. Although our method slightly lags behind RFDiffusion, it's worth noting that RFDiffusion often generates helices and relies (also observed for ProteinGenerator) on a two-stage design pipeline (first designing the backbone, then the sequence). **Q3: "Data in crystal form would also allow for evaluating structural properties and reporting against these metrics." "The second aspect is merits of cyclic peptides design which is challenging to evaluate due to the lack of crystal structures of complexes."** A3: Evaluating energy is more critical than assessing RMSD against crystal structures of known binders, as there can be numerous design alternatives and a low RMSD does not always indicate a superior design model [1]. Therefore, we focus primarily on evaluating stability and affinity from an energy perspective. To further validate our findings, we conduct Molecular Dynamics simulations. We have also designed linear peptides as in Q1&A1 and computed the RMSD against known linear peptide binders, shown as follows: |Method|Co-Design|Peptide Type|RMSD(Avg.)|RMSD(Med.)| |---|-|-|-|-| |RFDiffusion|N|Linear|2.85|2.27| |ProteinGenerator|Y|Linear|3.54|3.12| |PepFlow|Y|Linear|1.60|0.98| |PepGLAD|Y|Linear|2.17|1.19| |CpSDE|Y|Linear|1.43|1.17| Our method achieves the lowest average RMSD among all methods, along with a comparable median RMSD. 
[1] Antigen-specific antibody design via direct energy-based preference optimization, Zhou et al. NeurIPS 2024. **Q4: "What fraction of data is actually cyclic peptides and why this dataset would be relevant for generating from that class." "If the data on cyclic peptides is sparse how the model can mine useful signal for completing cyclicazation."** A4: Our method successfully designs cyclic peptides despite the training set containing less than 6% cyclic peptide data. This achievement stems not from relying on this small percentage, but from our approach of explicitly modeling all atoms and bonds, surpassing previous methods in depth. Through this fine-grained model, linear and cyclic peptides are treated equivalently as both are composed of the same fundamental components: atoms and bonds. The cyclization type is predetermined, allowing us to specify the related atom types and bond types accordingly. By incorporating bond modeling, we ensure cyclization occurs in 3D space since bound atoms are naturally drawn closer during the generative process. The ResRouter model equally sidesteps dependency on scarce cyclic peptide examples due to its ability to deduce amino acid types based on the contextual atom arrangement within the peptide and its receptor. Thus, it is this distinctive and comprehensive modeling technique that enables overcoming data limitations in cyclic peptide design. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for additional experiments. I have read the rebuttal and revisited the paper once more. My estimate is that the paper requires a revision that goes beyond what can be done during the rebuttal period and as a result I have decided to keep my original rating. 
**Re: cyclic peptides and non-canonical amino-acids**\ Hit-to-lead nomination with cyclic peptides is typically done exclusively with natural amino-acids, and roughly once one reaches single digit nM or even sub-nM range only then non-naturals are taken into consideration to tackle potential enzyme cleavage sites, improve stability, slow down degradation by immune system, etc. Generative models are typically unable to deliver that potency range and, thus, restricting to cyclic peptides with natural amino-acids would be reasonable. **Re: linear vs cyclic peptides in the original experiments**\ Let me now get back to the main concern, which is evaluation and the fact that the original submission compares apples-to-oranges, as cyclic peptides are more rigid than linear ones and, thus, have less conformational freedom leading to stronger and more specific interactions with target molecules. Hence, selected metrics are expected to show better results for methods that design cyclic peptides (i.e., proposed approach), relative to outputs of baselines that design linear peptides. **Re: metrics**\ The two key metrics in the original submission are stability and affinity, which deal with total energy of reference ligands and interface binding energies from Rosetta (Chaudhury et al., 2010). In benchmarking relative to Kd from SPR or HTRF assays, the latter does not fare well and on average Pearson correlation is around 10% (up to 20% at best). Hence, a fair question would then be why this would be a good metric for assessing the quality of generative models. The main advantage in comparison to RFDiffusion (on linear peptides) appears to be diversity. However, it is encouraging to see good results on RMSD for linear peptides. Additional aspects that would be interesting are binding site ratio (BSR), fraction of hydrophobic and/or charged residues (relevant for specificity), l-RMSD and i-RMSD, differences in lengths and angles (e.g., see Lin et al., ICML 2024). 
**Re: data splits**\ Given that there is 6% or so of structures with cyclic peptides, it would be interesting to see structural metrics relative to some fraction of these. It would also be relevant to report structural and sequence similarities of binding pockets relative to training sample. --- Reply to Comment 1.1.1: Comment: Thanks for your comment. **Q1: cyclic peptides and non-canonical amino-acids** A1: Some NCAAs are indeed introduced post-hoc. Notably, our method **does not contradict with** your claim that "restricting to cyclic peptides with natural amino-acids would be reasonable", highlighted by: 1. NCAAs are included only in the cyclization segment as constraints, such as in the side-to-side cyclization in 1RGR\_B (Figure 6). 2. ResRouter only predicts one of the 20 canonical amino acids for free residues (see Equation 5 and Section 3.1). Four demonstrative cyclic peptide types in our experiments do not include NCAAs: head-to-tail (N-term and C-term C-N bond), head-to-side (first residue's N to side-chain C), side-to-tail (CYS side-chain S and backbone C form C-S bond), and side-to-side (two S in CYS side chain form S-S bond). Figure 9 shows these examples of cyclic peptides designed by our method with only canonical amino acids. The cases used for MD simulations also only involve natural amino acids. **Q2: linear vs cyclic peptides in the original experiments** A2: It is true that cyclic peptides are usually more rigid than linear ones and have less conformational freedom leading to stronger and more specific interactions with target molecules. These properties underscore the natural advantages of valid and reasonable cyclic peptides. We firmly believe this isn't an aspect of unfair comparison, as randomly designed cyclic peptides do not inherently have these characteristics. Please see the ablation studies in the appendix. 
Additionally, we compared our designed cyclic peptides with known cyclic peptide binders, and the experiments demonstrated that the designed peptides display reasonable stability and affinity. For more information, please see our response to Reviewer NNQv's Q1.

**Q3: metrics**

A3: Rosetta is a widely used and effective tool for in-silico evaluations, with many studies using its energy scores as a key metric, e.g., [1,2]. We also ran **MD simulations, which are more reliable but expensive**, to validate our designed cyclic peptides. We have performed a more comprehensive evaluation on linear peptides, as you requested:

|Method|Hydrophobic Ratio|Charged Ratio|DockQ|iRMSD|LRMSD|BSR|
|---|---|---|---|---|---|---|
|Reference|0.48|0.28|||||
|ProteinGenerator|0.53|0.32|0.12|5.56|23.97|0.20|
|PepFlow|0.60|**0.17**|**0.44**|2.49|**9.42**|0.56|
|PepGLAD|0.53|0.25|0.30|2.68|11.99|0.39|
|RFDiffusion|0.59|0.27|0.18|5.37|20.10|0.33|
|CpSDE (linear)|**0.45**|0.24|0.32|**2.36**|9.91|**0.60**|

We use the DockQ package (https://github.com/bjornwallner/DockQ) to compute DockQ, iRMSD, and LRMSD. We follow the definition of binding site ratio (BSR) in [1]. A lower hydrophobic/charged ratio indicates a lower risk of non-specific binding [3]. Notably, the fraction of hydrophobic and/or charged residues of our designed peptides resembles that of the reference. Our method also shows superiority in structural properties. We also report the Jensen–Shannon divergence (JSD) between designed linear peptides and reference linear peptides at https://anonymous.4open.science/r/cpsde_rebuttal-3578/bond_length_and_angle.md.

**Q4: data split**

A4: Our model understands bonds and atoms but does not differentiate between "linear" and "cyclic" peptides. As a result, from the model's perspective, these peptides are essentially the same. We evaluated the TM score of the designed peptides against those from the training set across four typical types of cyclic peptides.
The results confirm that the generated peptides do not closely resemble any within the training set.

|Cyclic type|Peptides in training set|TM score (max)|TM score (average)|TM score (median)|
|---|---|:---:|:---:|:---:|
|Head-to-tail|61|0.334|0.173|0.168|
|Head-to-side|31|0.350|0.176|0.175|
|Side-to-tail|69|0.348|0.183|0.187|
|Side-to-side|1001|0.429|0.198|0.196|

For similarity of receptors, please refer to Section 4.1 and our response to Reviewer NNQv's Q7.

**References:**

[1] Full-Atom Peptide Design based on Multi-modal Flow Matching, Li et al. ICML 2024.
[2] Antigen-specific antibody design via direct energy-based preference optimization, Zhou et al. NeurIPS 2024.
[3] Optimization of therapeutic antibodies for reduced self-association and non-specific binding via interpretable machine learning, Makowski et al. Nature Biomedical Engineering 2023.
Can Diffusion Models Learn Hidden Inter-Feature Rules Behind Images?
Accept (poster)
Summary: The paper investigates whether diffusion models can learn hidden inter-feature rules in images by designing synthetic tasks that simulate real-world relationships (e.g., the connection between the sun’s height and the length of its shadow). The study finds that while these models can capture coarse-grained rules effectively, they struggle with fine-grained, precise dependencies—a limitation attributed to inherent constant errors in the denoising score matching objective. Additionally, the authors propose mitigation strategies, such as incorporating classifier guidance during sampling and using pixel-space filtering, which yield some improvements but do not fully overcome the challenge, thereby offering both theoretical insights and empirical evidence on the current limitations of diffusion models in rule learning. Claims And Evidence: The submission’s claims are largely supported by both extensive empirical results and rigorous theoretical analysis. The authors substantiate their main claim—that diffusion models can reliably capture coarse-grained rules but struggle with fine-grained ones—through well-designed synthetic tasks and clear evaluation metrics (e.g., R² values and error metrics), which convincingly demonstrate the performance gap. Additionally, the theoretical framework based on denoising score matching offers solid mathematical backing for the observed constant error in learning fine-grained rules. While the proposed mitigation strategies (guided diffusion and filtering) show some improvement, the evidence also clearly indicates their limited effectiveness. One concern is that the reliance on synthetic tasks may not fully capture the complexity of real-world images, leaving some room for further evidence on broader datasets. 
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to the problem at hand, as the synthetic tasks and detailed feature extraction pipelines provide a controlled setting to assess the diffusion models' ability to learn both coarse-grained and fine-grained rules. The evaluation metrics, such as R² values and error measurements, effectively quantify the performance gap and highlight the models' limitations. However, while the mitigation strategy using classifier guidance during diffusion sampling does yield some improvements, it is worth noting that this approach is not novel and does not introduce fresh techniques for enhancing the handling of fine-grained rules. This reliance on an established method may limit the paper's overall innovation in terms of proposing new solutions to the identified challenges. Theoretical Claims: I reviewed the theoretical claims, focusing on Theorem 4.2, which derives the score function for the multi-patch data setup, and Theorems 4.4 and 4.5, which provide lower bounds on the rule-conforming error by decomposing it into bias and variance components. The derivations appear mathematically sound under the stated assumptions, such as linear activations and a two-layer network, and they convincingly support the empirical observation that diffusion models incur a constant error when learning fine-grained rules. However, some of the definitions, like the rule-conforming error, lack intuitive explanations; for instance, while the error is defined as the deviation of the score’s projected coefficient from an ideal value reflecting the hidden norm constraint, the paper does not clearly explain why this quantity should intuitively indicate correct rule learning. Providing more intuition behind such definitions would help readers better understand the connection between the theoretical quantities and the practical notion of rule conformity. 
Experimental Designs Or Analyses: I reviewed the experimental design, including the synthetic tasks (A–D) and the associated feature extraction and evaluation metrics (e.g., R², Error metrics), and found that the setup is generally sound and well-motivated for assessing the diffusion models’ ability to capture inter-feature rules. The controlled synthetic environment allows clear differentiation between coarse-grained and fine-grained rule learning, and the quantitative analysis convincingly highlights the performance gap. However, one potential concern is that the experiments are limited to synthetic tasks, which may not fully capture the complexities of real-world data. Additionally, while the use of classifier guidance to improve fine-grained rule learning is effective to some extent, it is a well-known technique rather than a novel contribution. Supplementary Material: I reviewed the supplementary material, including Appendices D, F, and G. Relation To Broader Scientific Literature: The paper’s key contributions are well situated within the broader literature on diffusion models and image generation. It extends previous findings on compositionality and factual consistency in diffusion models—where prior studies (e.g., DDPM, score-based generative models, and works on hallucinations) primarily addressed independent feature composition and common failure modes—by focusing on hidden inter-feature rules that capture subtle dependencies between image features. Its theoretical analysis, which builds on denoising score matching frameworks, aligns with recent efforts to understand the limitations of diffusion objectives and complements studies on mode interpolation and memorization in generative models. 
Additionally, while the use of classifier guidance is not new, the paper integrates it into a framework specifically designed to address fine-grained rule learning, thereby contributing a fresh perspective that bridges empirical observations with theoretical insights in the context of inter-feature relationships. Essential References Not Discussed: Overall, the paper sufficiently covers the essential related literature. The authors have cited key works on diffusion models, denoising score matching, and guidance strategies that underpin their theoretical and empirical contributions. The references discussed in the paper provide a comprehensive context for understanding the challenges associated with fine-grained rule learning and the limitations of current diffusion models, and no critical works appear to be missing. Other Strengths And Weaknesses: The paper is commendable for its thorough analysis, combining rigorous theoretical derivations with well-designed synthetic experiments to investigate the limitations of diffusion models in capturing fine-grained inter-feature rules. Its originality lies in framing the rule-learning challenge in terms of hidden dependencies and providing both empirical and theoretical evidence of inherent constant errors, which is a valuable contribution to understanding diffusion model behavior. However, the paper's reliance on synthetic tasks might limit its immediate applicability to real-world scenarios, and while the use of classifier guidance for improvement is well-motivated, it does not introduce novel techniques. Additionally, some definitions, such as the rule-conforming error, would benefit from further intuitive explanation to enhance clarity. Other Comments Or Suggestions: Some additional suggestions: It would be beneficial to include more detailed explanations for some of the theoretical definitions, particularly providing intuitive insights behind concepts such as the rule-conforming error. 
Expanding on how these definitions relate to practical aspects of image generation could enhance clarity. Moreover, while the synthetic tasks are well-designed for controlled evaluation, including experiments on real-world datasets or more complex scenarios would strengthen the applicability of the findings. Finally, a careful proofreading to fix minor typos and improve the overall flow of the text is recommended. Questions For Authors: The definition of the rule-conforming error is not very intuitive. Could you explain in simple terms why this quantity effectively measures the model's ability to learn the hidden rules? Your experiments are based on synthetic tasks. Do you have any results or insights on how your approach might work on real-world datasets? You use classifier guidance to improve fine-grained rule learning, which is an established method. Have you considered any alternative strategies that might address these limitations more effectively? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their efforts in reviewing this paper. We now address the questions raised as follows.

---

>Q1: Broader datasets / Real-world data.

Thanks for your good question. To support the claim that DMs can learn coarse rules but struggle with fine-grained ones, we conducted additional experiments on two real-world datasets, SynMirror and Cifar-MNIST.

- **SynMirror** [1] displays objects and their reflections, where inter-feature rules manifest as constraints between objects and their reflections in terms of color, size, and shape. **The results show** that generations by DDPM capture some coarse rules, such as objects and their reflections sharing the same colors, but fail on fine-grained rules: there are significant differences in the shapes and contours.
- **Cifar-MNIST** combines specific classes from CIFAR and MNIST, such as pairing Cats and Dogs from CIFAR with 0 and 1 from MNIST. **The results show** that generations by DDPM satisfy coarse rules, such as ensuring that each generated image contains two digits (MNIST) and two non-digit objects (CIFAR), but only 20% of the generations satisfy the predefined fine-grained rules, where only specific categories from CIFAR and MNIST are allowed to pair.

[Real-world Data Results](https://anonymous.4open.science/api/repo/Rebuttal-8656/file/exp_ID15070.pdf?v=e6e553d4) provides more details and visualizations. We will add these results to the revised manuscript.

>Q2: Mitigation strategy is not novel ... / any alternative strategies that might address these limitations more effectively?

Thanks for your good points.

- **Novel Method**. The **main goal of our paper** is to clearly identify the shortcomings of DMs in rule learning from both experimental and theoretical perspectives. At the end of the paper, we make initial attempts to address this issue.
Importantly, we highlight that a **potential bottleneck** is that the signals of fine-grained rules are too weak to be captured (see ``Section 5.2``). This issue has not been reported in traditional DDPMs, such as those targeting ImageNet tasks with classifier guidance. We hope our initial strategies and bottleneck analysis provide valuable insights for further exploration.

- **Alternative Strategies.** Inspired by existing work [2,3], for further exploration we could introduce additional reward signals from human feedback or powerful reward models to better guide DMs during sampling. Additionally, improving the tokenizer to better learn semantic information related to rules could also enhance rule learning. We will include this discussion in the revised manuscript.

>Q3: Some of the definitions, like the rule-conforming error, lack intuitive explanations / It would be beneficial to include more detailed explanations ... such as the rule-conforming error.

Thank you for the question. The accuracy of score learning is inherently tied to the generation quality of diffusion models [4]. As we have shown in ``Theorem 4.2``, in order to sample from the data distribution (with rule conformity), the score function should satisfy the constraint $\langle \nabla \log p_t(x_t) + x_t/\beta_t^2, [u; v] \rangle = \alpha_t/\beta_t^2$. By the design of the network function in eq. (1) (in the main text), this constraint is equivalent to $\psi_t(x_t) = \alpha_t/\beta_t^2$ holding for any $x_t$ (as in ``Definition 4.3``). The rule-conforming error, defined as the mean squared deviation from this value, exactly measures how closely the learned score aligns with the ideal constraint (as $x_t$ varies). In practice, this relates to generating images that respect structural properties such as fixed object size. The smaller the rule-conforming error, the smaller the estimation error of the score function, and thus the more likely the samples generated by diffusion models adhere to the constraint.
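To make this definition concrete, here is a minimal numpy sketch (an editorial illustration, not code from the paper): the schedule values `alpha_t`, `beta_t`, the linear form of `psi_t`, and its constant bias `b` are all hypothetical toy choices; the sketch just Monte-Carlo-estimates the mean squared deviation of $\psi_t(x_t)$ from $\alpha_t/\beta_t^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy schedule values at a fixed time t (hypothetical, not taken from the paper).
alpha_t, beta_t = 0.8, 0.6
target = alpha_t / beta_t**2  # the ideal value psi_t(x_t) should equal for every x_t

# Hypothetical learned psi_t: a linear function of the noised input x_t.
# A perfectly rule-conforming model would output `target` for every x_t;
# here we give it a small input dependence and a constant bias by hand,
# mimicking the constant-error behavior described in Theorems 4.4/4.5.
w = 0.01 * rng.normal(size=8)
b = target + 0.05

def psi_t(x_t):
    return x_t @ w + b

# Monte-Carlo estimate of the rule-conforming error:
# mean squared deviation of psi_t(x_t) from alpha_t / beta_t^2 as x_t varies.
x_t = rng.normal(size=(10_000, 8))
rule_conforming_error = np.mean((psi_t(x_t) - target) ** 2)
print(rule_conforming_error)  # small but bounded away from zero, due to the bias
```

A model with `w = 0` and `b = target` would attain zero rule-conforming error, which is exactly the case where generated samples respect the hidden constraint.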
We will revise the text to clarify this connection.

[1] Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections.
[2] Aligning Text-to-Image Models Using Human Feedback.
[3] Human preference score: Better aligning text-to-image models with human preference.
[4] Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions.

---

We hope the above response resolves your concerns; if there is any further concern, please let us know.

---

Rebuttal Comment 1.1: Comment: I have read the authors' rebuttal and the reviews of the other reviewers. Most of my concerns have been addressed. I'd love to increase my score.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer gKDf, We're glad to see that our rebuttal addressed your concerns, and thank you for raising your score to a 4; your recognition is encouraging. We greatly appreciate your constructive feedback, especially on exploring additional data and discussing alternative mitigation strategies. We will include them in the revised manuscript to further improve the quality of our work. Thank you again for your efforts. Best, Authors
Summary: This paper investigates whether diffusion models can learn hidden inter-feature rules in images, focusing on the distinction between coarse-grained and fine-grained relationships. Through carefully designed synthetic tasks inspired by real-world phenomena—such as the spatial relationship between the sun and its shadow or the connection between object size and texture—the authors demonstrate that while models like Stable Diffusion 3.5 can reliably capture broad, coarse-grained rules, they consistently struggle with learning precise, fine-grained dependencies. The paper also presents a theoretical analysis showing that the denoising score matching objective inherently leads to a constant error in rule conformity, thereby limiting the models' ability to accurately recover the conditional distributions underlying these subtle rules. To mitigate these shortcomings, the authors propose incorporating additional classifier guidance and filtering strategies during sampling, which yield moderate improvements in enforcing fine-grained rule adherence. Despite these enhancements, the experiments reveal that even advanced guidance techniques are insufficient for completely bridging the gap, as the nuanced signals of fine-grained rules remain challenging to capture. Overall, this work provides significant insights into the limitations of current diffusion models and offers a compelling direction for future research to improve rule learning in generative image models. ## update after rebuttal The authors address most of my concerns. So I keep my positive score. Claims And Evidence: The submission’s claims are generally well-supported by both experimental and theoretical evidence. The authors convincingly demonstrate that diffusion models can reliably learn coarse-grained inter-feature rules while consistently failing to capture fine-grained dependencies, as evidenced by low $R^2$ values and significant error metrics in their synthetic task evaluations. 
Their theoretical analysis further strengthens this claim by showing that the denoising score matching objective inherently induces a constant error, which limits the models’ ability to precisely learn the hidden rules. However, some claims could benefit from additional clarification. For example, the assertion that the observed constant error is solely a consequence of the DSM objective might be problematic without further ablation studies across different architectures and training configurations. Additionally, while the proposed mitigation strategies (classifier guidance and filtering) show moderate improvements, they do not fully resolve the issue of fine-grained rule learning, suggesting that further empirical validation is needed to conclusively support their effectiveness. Methods And Evaluation Criteria: The methods and evaluation criteria proposed in the paper are well-suited for the problem at hand. The authors design controlled synthetic tasks that specifically target both coarse-grained and fine-grained inter-feature rules, which effectively isolates the aspects of rule learning from the broader complexities found in natural images. This targeted approach, using tasks inspired by real-world phenomena such as light-shadow interactions and object reflections, allows for a systematic assessment of diffusion models' abilities in capturing these dependencies. Moreover, the evaluation framework, comprising feature extraction via color-based masking, geometric measurements, and the use of metrics such as R² and a combined error metric that accounts for both bias and variance, provides clear, quantitative insights into how well the generated samples adhere to the predefined rules. While these synthetic benchmarks may not capture all nuances of real-world data, they offer a rigorous and interpretable means to evaluate and compare model performance, making the methods and criteria both reasonable and effective for the study's objectives. 
Theoretical Claims: I reviewed the theoretical proofs presented in the paper, particularly Theorem 4.2, which characterizes the score function for the multi-patch data model, and Theorems 4.4 and 4.5, which establish lower bounds on the rule-conforming error (including bias and variance components). Under the stated assumptions, the proofs appear to be mathematically sound and consistent, effectively linking the denoising score matching objective to the inherent constant error in learning fine-grained inter-feature rules. That said, some aspects rely on idealized conditions (e.g., the use of a simplified two-layer network with linear activation in Theorem 4.5), which may not fully capture the complexities of practical diffusion models. While these simplifications are acceptable for isolating the core theoretical insights, further discussion or empirical validation would help clarify how these bounds translate to more complex architectures encountered in real-world applications. Experimental Designs Or Analyses: The experimental design is generally sound and well-justified. The authors construct synthetic tasks with clearly defined inter-feature rules, both coarse-grained and fine-grained to isolate the specific challenges of rule learning. The evaluation pipeline, which involves a three-step process of color-based masking, element counting, and keypoint extraction, is a clever way to quantitatively measure how closely generated images conform to the underlying rules. Metrics such as R² and the combined error metric (encompassing both bias and variance) are appropriately used to assess performance differences across various tasks and diffusion model configurations. However, some potential issues warrant further discussion. First, while synthetic tasks offer control and interpretability, they might not capture the full complexity of real-world images, possibly limiting the generalizability of the findings. 
Second, the sensitivity of the feature extraction process to hyperparameters (e.g., predefined HSV ranges) is not fully explored, and minor variations could affect the evaluation outcomes. Supplementary Material: Yes, I reviewed the supplementary material. In particular, I examined the sections that provide additional details on the synthetic tasks (Appendix B and C), which elaborate on the design and rationale behind the coarse-grained and fine-grained rules. I also looked into the extended experimental results and ablation studies provided in Appendix D, which offer further insights into the model's behavior across different configurations and architectures. Relation To Broader Scientific Literature: The paper’s key contributions extend the existing body of work on diffusion models by focusing on the subtle, hidden inter-feature rules that standard generative models have largely overlooked. While prior studies (e.g., Ho et al., 2020; Dhariwal & Nichol, 2021) have demonstrated the high fidelity and compositional capabilities of diffusion models, they mainly address independent features and broad factual consistency. In contrast, this work delves into how these models handle nuanced dependencies both spatial (such as light-shadow relationships) and non-spatial (like size-color correlations), thus highlighting a gap in the literature regarding fine-grained rule learning. Essential References Not Discussed: Overall, the set of references provided in the paper is largely sufficient. Other Strengths And Weaknesses: The paper presents an original and comprehensive investigation into the ability of diffusion models to learn inter-feature rules, a relatively underexplored area. The introduction of synthetic tasks with clearly defined coarse- and fine-grained rules is innovative, providing a controlled environment to isolate and analyze model behavior. 
Moreover, the integration of theoretical analysis with empirical evidence, especially the derivation of constant error bounds due to the denoising score matching objective, adds significant depth and rigor to the work. However, the reliance on synthetic data may limit the direct applicability of the findings to complex real-world images. Additionally, some theoretical proofs are based on simplified models, such as two-layer networks with linear activation functions, which might not capture the nuances of more advanced architectures. Other Comments Or Suggestions: Consider providing additional details about the hyperparameters used in the feature extraction and evaluation process, as well as discussing how sensitive the results are to these settings. Clarify the assumptions underlying the theoretical proofs, particularly in Theorem 4.5, to help readers better understand the limitations of applying these results to more complex architectures. A brief discussion on potential future directions to address the limitations imposed by the denoising score matching objective would be valuable. Lastly, a careful proofreading to catch any minor typographical errors or inconsistencies would help improve the overall presentation. Questions For Authors: In Theorem 4.5, your analysis is based on a simplified two-layer network with a linear activation function. How sensitive are the derived constant error bounds to these assumptions? Would similar limitations be expected in deeper or more complex networks with non-linear activations? Your evaluation is based on synthetic tasks designed to isolate coarse- and fine-grained rules. Could you elaborate on how well these tasks correlate with real-world image generation? Regarding the classifier guidance and filtering strategies, you show only moderate improvements in enforcing fine-grained rules. Can you provide more insights into why these approaches yield limited gains? 
Your evaluation pipeline relies on specific hyperparameters (e.g., HSV thresholds for feature extraction). How robust are your experimental results to variations in these parameters? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your time and effort in reviewing our paper. We now address the raised questions as follows.

---

>Q1: Further ablation studies across different architectures and training configurations.

``Section D.3 (Lines 964-1032)`` considers different architectures (U-Net, SiT, DiT) and training configurations, including training epochs, training data size, and image size. Experimental results show DMs still have limitations in learning fine-grained rules under different settings.

>Q2: The proposed mitigation strategies do not fully resolve the issue.

Thanks for your comments. Our strategy is an initial effort to address this issue. Importantly, we identify a **potential bottleneck**: the signal of fine-grained rules is too weak to be captured by the classifier, a problem that has not been highlighted in classical, ImageNet-task-based conditional DDPMs (see ``Section 5.2``). We hope the bottleneck analysis can provide valuable insights for future exploration. Additionally, **the goal of our work** is to reveal the limitations of rule learning in DMs through experiments and theory. Fully addressing this challenge requires further work, such as rule-specific datasets and metrics, which are currently lacking. We leave the complete resolution to future work.

>Q3: Theoretical proofs are based on simplified models.

Thanks for your question. ``Theorem 4.4`` highlights that for non-linear two-layer neural networks, there are constant errors due to the variance of the diffusion noise. ``Theorem 4.5`` explicitly derives the constant error. We believe the conclusions from ``Theorems 4.4`` and ``4.5`` extend beyond the simplified setups. Intuitively, the score function is required to satisfy a low-dimensional constraint that holds for every noised input. However, without explicitly embedding this constraint into the model or the training objective, learning it from data becomes inherently difficult.
Since the constraint must hold globally, neural networks lack the inductive bias needed to recover such structure from finite samples.

>Q4: Real-world images.

Thank you for your good points. We conducted additional experiments on real-world datasets to further demonstrate that DMs can learn coarse rules but struggle with fine ones.

- **SynMirror** [1] presents objects and their reflections, where rules link their features such as color, size, and shape. **We find** DDPM captures coarse rules (e.g., matching colors between objects and reflections) but struggles with fine ones, showing shape mismatches.
- **Cifar-MNIST** pairs specific CIFAR and MNIST classes (e.g., Cats/Dogs with 0/1). **We find** DDPM satisfies coarse rules (e.g., always generating two digits and two objects), but only 20% of generations follow the fine-grained rules requiring specific class pairings.

See [Real-world Data](https://anonymous.4open.science/api/repo/Rebuttal-8656/file/exp_ID15070.pdf?v=e6e553d4) for more details. We will add these to the revised manuscript.

>Q5: The sensitivity of the feature extraction process to hyperparameters (e.g., predefined HSV ranges).

Sorry for the confusion. **There are no hyperparameters in the feature extraction process.** The HSV values used are predefined during training data construction. For example, in Task A, the sun's HSV is set to yellow with hue [0, 30], saturation [100, 255], and value [200, 255]. The same HSV range is used during feature extraction (see ``Lines 747–806`` for details).

>Q6: Potential future directions.

Inspired by [2,3], one potential direction is optimizing the sampling process during inference. We can introduce additional reward signals from human feedback or reward models to guide DMs during sampling. Additionally, improving the tokenizer to better learn semantic information related to rules could also enhance rule learning. We will add this discussion to the revised manuscript.
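The HSV-range check described in the Q5 answer can be sketched in a few lines of numpy. Only the three ranges come from the answer above (Task A sun: hue [0, 30], saturation [100, 255], value [200, 255]); the `sun_mask` helper, the OpenCV-style HSV encoding assumption, and the synthetic 4x4 test image are illustrative.

```python
import numpy as np

# Task A "sun" HSV range quoted in the answer above.
HUE_LO, HUE_HI = 0, 30
SAT_LO, SAT_HI = 100, 255
VAL_LO, VAL_HI = 200, 255

def sun_mask(hsv):
    """Boolean mask of pixels inside the predefined HSV range.

    `hsv` is an (H, W, 3) uint8 array, assumed to be in OpenCV-style
    HSV encoding (hue 0-179, saturation and value 0-255).
    """
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (
        (HUE_LO <= h) & (h <= HUE_HI)
        & (SAT_LO <= s) & (s <= SAT_HI)
        & (VAL_LO <= v) & (v <= VAL_HI)
    )

# Synthetic 4x4 image: top-left 2x2 block is "sun"-colored, the rest is not.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2, :2] = (20, 200, 230)   # inside the sun range
img[2:, 2:] = (90, 50, 100)    # outside the sun range (wrong hue and saturation)

mask = sun_mask(img)
print(mask.sum())  # → 4 sun pixels
```

Because the ranges are fixed at data-construction time, this step is deterministic; downstream measurements (element counts, keypoints) are then taken from the mask.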
>Q7: Can you provide more insights into why these approaches yield limited gains?

Thank you for this question. ``Section 5.2 (Lines 408-424)`` shows that the limited improvement is due to the weak signals of the fine-grained rules, which make the guidance from the classifier too weak to completely correct the sampling. Specifically:

- ``Figure 16`` shows inseparable CLIP representations of contrastive data, making classifier training challenging.
- ``Figure 17`` demonstrates that training on simple contrastive data results in test accuracy below 90%, highlighting the difficulty in distinguishing subtle differences.
- [Visualization](https://anonymous.4open.science/api/repo/Rebuttal-8656/file/vis_ID15070.pdf?v=4c7afd58) demonstrates the weak differences between classes in the raw contrastive data.

[1] Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections.
[2] Aligning Text-to-Image Models Using Human Feedback.
[3] Human preference score: Better aligning text-to-image models with human preference.

---

We hope the above response addresses your concerns, and we are open to further discussion if any questions remain.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed responses, which effectively clarify my questions with detailed ablation studies, real-world experiments, and clear explanations of theoretical and practical limitations. I'm satisfied with these clarifications and will maintain my positive score.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer 9KzS, We are delighted to hear that our rebuttal has addressed your concerns, and we sincerely appreciate your positive feedback on our work. Thank you for your constructive comments regarding the real-world experiments and additional clarifications. We will include them in the revised manuscript. Thank you again for your efforts and time. Best, Authors
Summary: This paper evaluates diffusion models from both experimental and theoretical perspectives on inter-feature rule learning, indicating that while they can capture coarse rules, they struggle with fine-grained ones. The authors also provide a preliminary method to mitigate this shortcoming in learning fine-grained rules. Claims And Evidence: Yes. Methods And Evaluation Criteria: The motivation for evaluating the ability of diffusion models to learn fine-grained rules is well-justified and necessary, as it aims to address a major concern that limits the quality of generated outputs in recent large diffusion models. Theoretical Claims: Yes, the proof is logically clear and correct. Experimental Designs Or Analyses: Yes, the experimental designs are reasonable, evaluating the ability of diffusion models to learn physical rules using four carefully designed tasks. Supplementary Material: The authors provide additional experimental details, case studies, and detailed proofs in the Supplementary Material. Relation To Broader Scientific Literature: This paper is closely related to text-to-image generation and highlights a key limitation of recent large diffusion models: their difficulty in learning fine-grained inter-feature rules. Essential References Not Discussed: None Other Strengths And Weaknesses: ### Strengths - This paper provides a comprehensive evaluation of whether diffusion models can learn fine-grained inter-feature rules through both experimental and theoretical analysis. ### Weaknesses - The proposed approach to facilitating fine-grained rule learning appears to have no direct connection with the theoretical analysis and achieves only limited improvements. Other Comments Or Suggestions: None Questions For Authors: I think the proposed method to facilitate fine-grained rule learning is a bit straightforward and unrelated to the main analysis of this paper. I hope the authors can clarify this point, and I will adjust my score accordingly. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the effort spent reviewing this paper. We now address the questions raised as follows.

---

>Q1: The proposed approach to facilitating fine-grained rule learning appears to have no direct connection with the theoretical analysis. / The proposed method is unrelated to the main analysis of this paper.

Thank you for the question. Our **theoretical analysis** in the main text highlights that the failure of classical DDPMs in learning inter-feature rules is mainly due to the denoising objective, which does not explicitly capture the hidden inter-feature rules. Therefore, classical DDPMs, when trained solely with the standard objective, lack the inductive bias necessary to learn fine-grained inter-feature rules. **This naturally inspires us to introduce additional guidance to steer the sampling process**, encouraging DDPM to generate rule-conforming samples. Additionally, ``Figure 5`` in the **experimental analysis** in the main text shows that DMs can generate high-quality samples that meet fine-grained rules, but the process is unstable and prone to rule violations. Therefore, our proposed method introduces additional information to help DDPM stably sample from high-quality regions (more discussion in ``Lines 300–322``). We will add more discussion in our revised version.

>Q2: The proposed method to facilitate fine-grained rule learning is a bit straightforward. / The proposed approach to facilitating fine-grained rule learning achieves only limited improvements.

Thank you for your question. The **main focus of our work** is to identify the limitations of DMs in learning fine-grained rules through experiments and theoretical analysis, rather than to propose a complete solution. This limitation has been overlooked (as discussed in ``Section 2``) and represents 'a relatively underexplored area' (as noted by Reviewer 9KzS).
Our work aims to 'extend previous findings on compositionality and factual consistency in diffusion models' (as noted by Reviewer gKDf). Additionally, the proposed method is an initial attempt to enhance rule learning. Importantly, we identify a **key bottleneck**: the signal of fine-grained rules is too weak for the classifier to capture, a phenomenon that has not been highlighted in traditional DDPMs, such as those targeting ImageNet tasks with classifier guidance (see ``Section 5.2``). We hope that these early attempts and the bottleneck analysis can provide valuable insights for future exploration.

-----

We hope the above response resolves your questions; if any further concerns remain, please let us know.

---

Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. Some of my concerns have been partially addressed, and I will accordingly raise my score to 3.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer KnJN, We are glad to hear that our rebuttal has addressed your concerns, and we sincerely appreciate your decision to raise the score to a 3. In particular, thank you for emphasizing the connection between the methodology and the theoretical/experimental sections; we will improve this part in the revised manuscript. Thank you again for your effort in reviewing our work. Best, Authors
Summary: This paper is motivated by some prevalent real-world failure cases of diffusion models in learning rules between spatial parts and features. The authors developed a few synthetic tasks to test diffusion models on inter-object rules (spatial or non-spatial). Though the overall layout is correct and the rough scene rule is obeyed, the more precise (linear) spatial relations were only loosely obeyed, not accurately (in the sense of $R^2 \neq 1$). The authors then developed a theoretical setup to explain why diffusion training does not lead to rule learning, and proved that under a certain theoretical setting (patch data, inter-patch rules, and a separable score-function approximator per patch) the network cannot learn the rule, with a constant error bound on the rule. Finally, the authors developed a few simple yet effective ways to mitigate rule-conforming problems, e.g., via guided sampling or post-hoc rejection sampling.

Claims And Evidence: This paper provided ample evidence for most of its claims, and it's a well-completed paper. I totally agree that there are rule-learning issues in the practical and synthetic setups. However, I have some issues with the conclusions drawn from the theory.
- **Main issue:** I get the theorems in the paper, and they are correct. But the overall claim/conclusion made in the abstract, that "*Our theoretical analysis demonstrates that DMs trained via denoising score matching (DSM) exhibit constant errors in learning hidden rules, as the DSM objective is not compatible with rule conformity*", has some issues.
  - This claim is based on a specific theoretical setup where two patches are strongly correlated. We can say the data is supported on an effectively 1D manifold like $\zeta[u; -v]$, with a certain offset.
  - However, the authors also designed a patch-wise neural network model, where the output only depends on the corresponding part of the input! This design makes it basically impossible to approximate the true score of the data manifold.
- Consider an analytically solvable case where $\zeta \sim \mathcal N(0,1)$; then $x\sim \mathcal N([0;v],[u;-v][u;-v]^T)$ is distributed as a degenerate Gaussian, with only one nonzero eigenvector $[u;-v]$ in the covariance. Its score at any given moment is then tractable (Eq. 5 of [WV2024]). For a Gaussian $\mathcal N(\mu,\sigma^2I+\Sigma)$, the score is a linear function, and it looks like $(\sigma^2I+\Sigma)^{-1}(\mu-x)=\sum_i\frac{1}{\lambda_i+\sigma^2}\nu_i\nu_i^T(\mu-x)$. Basically it's a full matrix, with a major component spanned by the principal component of the data, $[u;-v][u;-v]^T$. So the score component $s^{(1)}$ depends on both $x^{(1)}$ and $x^{(2)}$. However, your network design prohibits it from depending on $x^{(2)}$, which **definitely** causes it to be unable to approximate the true score or learn the true data manifold. In the linear case as in Theorem 4.5, the effective weight matrix in your network is block diagonal and each block is rank 1, but the true score for the Gaussian requires a full matrix, and the off-diagonal blocks cannot be zero.
- On a higher level, you basically designed a separable network, and the loss is also separable, so each patch network learns its own distribution and cannot learn the correlation between the two patches. In the end it can only learn a factorized distribution.
- During the rebuttal, the authors could address this by modifying the theoretical setup, adding a setting where the dependency is not local to a patch. The authors could also edit their overall claim, and not attribute the failure of rule learning to diffusion training on the denoising score matching (DDPM) loss, but to their model design. Currently it seems that with a full-dependency score model, even a linear one, the network will converge to the correct supporting manifold [W2025] (Proposition 5.1).
(Though a linear network will not learn the correct distribution on the manifold; it can only learn Gaussian-like things.)
- **Minor issue:** For the empirical experiments on synthetic tasks, whether or not to call it a failure is quite arbitrary. I feel it's quite successful on A, B, D. In Figure 5A, the threshold of ±0.01 is quite stringent.

[WV2024] Wang, B., & Vastola, J. J. (2024). The Unreasonable Effectiveness of Gaussian Score Approximation for Diffusion Models and its Applications. TMLR
[W2025] Wang, B. (2025). An Analytical Theory of Power Law Spectral Bias in the Learning Dynamics of Diffusion Models. *arXiv:2503.03206*.

Methods And Evaluation Criteria: I agree that, based on the current results, one way to enforce better rule conforming is via classifier guidance and post-hoc rejection. It's nice that the authors tried this and showed some improvement.

Theoretical Claims: I checked Theorem 4.2.
- At line 299 the statement "*data, requiring that the norm of the first two feature patches sum up to one, i.e., $\|x^{(1)}\| + \|x^{(2)}\| = 1$*" **is not correct** in some cases. I think it should be the sum of projections on $u$ and $v$ summing to 1: $\langle u, x^{(1)}\rangle+\langle v, x^{(2)}\rangle=1$.
- See **Claims And Evidence** for a conceptual issue I have with the theoretical treatment in Section 4.

Experimental Designs Or Analyses: I applaud the authors for the synthetic data design in this paper, which was well grounded in actual image diffusion models and their failure cases.
- **Minor issue: interpretation of failure.** For Figure 4, it's a half-full/half-empty scenario: whether we call it a success or a failure is somewhat subjective. To me it's already quite successful, given that tasks A, B, D all have $R^2 \approx 0.80$. If you measure correlation you should get something like 0.90.
"*…, where deviations from the ground truth in linear fitting and the coefficient of determination $R^2$ below 1 indicate that DMs fail to fully capture the predefined fine-grained rules*" — I feel this is too high a bar to ask of empirical results. In contrast, for cases in previous works like Raven's progressive matrices, since the state space is discrete, the evaluation of rule following seems to be more accurate.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: The results in Figure 5B are quite related to observations in [WSS2024] (Fig. 3), for diffusion models trained on a Raven's dataset, where the overall rule-conforming samples were novel and far from the training set, while local parts of them could be quite similar to some local parts of the dataset. This seems like recombining local parts to create new "scenes", potentially via a mechanism like the one proposed in [KG2024].

[WSS2024] Wang, B., Shang, J., & Sompolinsky, H. (2024). Diverse capability and scaling of diffusion and auto-regressive models when learning abstract rules. NeurIPS Workshop. *arXiv:2411.07873*.
[KG2024] Kamb, M., & Ganguli, S. (2024). An analytic theory of creativity in convolutional diffusion models. *arXiv preprint arXiv:2412.20292*.

Essential References Not Discussed: One relevant concurrent work that shares the basic theoretical setup of Theorem 4.5 is **Prop. 5.1** in [W2025], i.e., a linear symmetric score/denoiser with small or aligned initialization, under slightly more general requirements. Basically their results also confirm that, since the data covariance has such low-dimensional structure, the weights will automatically discover that low-dimensional structure (feature dimension) through gradient training. But as I pointed out before, in their case the network is overall linear, without the patch constraint, so their network will recover the correct 1D data manifold as in your case, and will "learn the rule", i.e., point towards the 1D manifold. [W2025] Wang, B. (2025).
An Analytical Theory of Power Law Spectral Bias in the Learning Dynamics of Diffusion Models. *arXiv:2503.03206*.

Other Strengths And Weaknesses:
- Overall this is a quite complete paper, showing practical relevance, a well-motivated setup, theory explaining it, and ways to mitigate the issue. I applaud the authors on such a work!
- The visualizations were well done, and the empirical and theoretical results were stated clearly.

Other Comments Or Suggestions: NA

Questions For Authors:
- I think the most crucial conceptual question I have is about the distinction between coarse-level rules and fine-grained rules. I can see the distinction in your examples, but more generally and theoretically/conceptually, what distinguishes coarse-level rules from fine-grained ones? Why can some be learned and some not?
- Similarly, can the authors comment on what is a spatial rule and what is a non-spatial rule, and why task C is not learned as well as A, B, D? Does the theory give you any new insights?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
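The reviewer's degenerate-Gaussian argument above is easy to verify numerically. Below is a minimal sketch (an illustration, not from the paper; it assumes unit-norm $u$, $v$ and $\zeta\sim\mathcal N(0,1)$): the score matrix $(\sigma^2 I + \Sigma)^{-1}$ of the noised density has a nonzero cross-patch block, which a patch-separated (block-diagonal) score network cannot represent.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
u = rng.normal(size=d); u /= np.linalg.norm(u)
v = rng.normal(size=d); v /= np.linalg.norm(v)

# Data x = [0; v] + zeta * [u; -v] with zeta ~ N(0, 1):
# a degenerate Gaussian whose covariance is the rank-1 matrix w w^T.
w = np.concatenate([u, -v])
Sigma = np.outer(w, w)

# For a Gaussian N(mu, sigma^2 I + Sigma), the score is the linear map
# x -> (sigma^2 I + Sigma)^{-1} (mu - x), i.e., a full matrix A.
sigma2 = 0.1
A = np.linalg.inv(sigma2 * np.eye(2 * d) + Sigma)

# The cross-patch block of A is nonzero, so the score on patch 1
# depends on patch 2 -- a block-diagonal network cannot express this.
cross_block_norm = np.linalg.norm(A[:d, d:])
print(cross_block_norm > 1e-6)  # True
```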
Rebuttal 1: Rebuttal: Thanks for your time reviewing this paper. We now address your questions as follows.

---

> Q1: Theory

Thanks for your good points. First, the Gaussian setup in [W2025] is a **special case** of our theory. In particular, as in ``Theorem 4.2``, the score can be written as
$$\nabla \log p_t(x_t^{(1)}, x_t^{(2)}) = - \frac{1}{\beta_t^2} x_t + \frac{\alpha_t}{\beta_t^2} \begin{bmatrix} \gamma(x_t) u \\ (1- \gamma(x_t)) v \end{bmatrix}$$
where $\gamma(x_t) = E_{\zeta} [\pi_t (\zeta, x_t) \zeta]$. When $\zeta$ follows a Gaussian distribution, the score simplifies to a linear function of $x_t$, which can be learned by a linear model. However, **we aim to cover the more general setup where the above score is *non-linear* in $x_t$**, i.e., $\zeta$ can be any bounded distribution. In this setting, the score function can generally be formulated as a combination of a fixed linear term $-1/\beta_t^2 \cdot x_t$ and an additional non-linear term, which motivates us to use a two-layer network with a residual connection as the score network.

Moreover, our current patch-separated configuration follows prior work (Han et al. 2024a), while similar results can also be extended to networks handling dependent patches. In particular, we can consider
$$s^{(1,2)}_w(x_t) = - \frac{1}{\beta_t^2} x_t^{(1,2)} + W \sigma(W^\top x_t^{(1,2)})$$
for some polynomial activation function $\sigma(\cdot)$ and $W \in \mathbb R^{2d \times m}$. Then, only when $\langle W \sigma(W^\top x_t^{(1,2)}), [u; v] \rangle = \alpha_t/\beta_t^2$ holds for all $x_t$ can we conclude that the network learns the rule ``(*)``.
However, with the new network, we can still follow ``Theorem 4.4`` to show that (1) the parameters $W$ will only be a function of $u$, $v$, and the initialization, and (2) the network function will basically be a polynomial function of $\langle u, \epsilon_t\rangle$ and $\langle v,\epsilon_t\rangle$ and their cross terms (which do not appear in the patch-separated network), where $\epsilon_t$ denotes the diffusion noise added in $x_t$. As $x_t$ varies, the function output also varies, which results in a non-vanishing rule-conforming error that depends on the variation of $\epsilon_t$. Then results similar to ``Theorem 4.4`` can be obtained, and our main theoretical arguments still hold.

> Q2: Interpretation of failure / threshold of Figure 5A / Raven's matrices

**Interpretation of failure:**
- Compared to the training data, whose $R^2$ is close to 1 (``Figure 3``), the synthetic tasks yield significantly lower $R^2$ of 0.6–0.8, indicating weaker rule learning.
- $R^2$ measures linear-fitting quality, not rule accuracy. Differences in coefficients are also important, e.g., Task A's estimated slope $\beta_1 = 0.82$ is smaller than the ground truth $\beta_1 = 1$. The ``Error`` metric in ``Table 2``, which combines coefficient deviation and MSE, further quantifies the rule-learning limitations of DMs.

**Figure 5A:** We adopt a strict threshold to show that DMs can generate high-quality samples even under such strict conditions. Since DDPMs perform well under strict settings, they naturally perform well under more relaxed thresholds.

**Raven's matrices:** Fine-grained rules can also be measured in a discrete state space. Specifically, we divide features within [0,1] into 20 intervals. A generation is rule-conforming if the measured features fall within intervals that satisfy the predefined rules.
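This discretized check can be sketched as follows (a minimal illustration for a sum-to-one rule; the convention of comparing bin midpoints within one bin width is our simplification here, and the feature-extraction step is omitted):

```python
def discretized_rule_check(f1, f2, n_bins=20):
    """Check a sum-to-one rule f1 + f2 = 1 at bin resolution."""
    # Map each feature in [0, 1] to one of n_bins intervals.
    b1 = min(int(f1 * n_bins), n_bins - 1)
    b2 = min(int(f2 * n_bins), n_bins - 1)
    # Rule-conforming if the interval midpoints sum to 1
    # within one bin width.
    mid1 = (b1 + 0.5) / n_bins
    mid2 = (b2 + 0.5) / n_bins
    return abs(mid1 + mid2 - 1.0) <= 1.0 / n_bins

print(discretized_rule_check(0.30, 0.68))  # True: sums close to 1
print(discretized_rule_check(0.30, 0.20))  # False: violates the rule
```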
[Discrete Results](https://anonymous.4open.science/api/repo/Rebuttal-8656/file/discretemetric_ID15070.pdf?v=1f32cb35) show that 95% of the training data satisfy the rules, while only 50–80% of generations do, highlighting DMs' limitations in learning rules.

> Q3: Line 299 is not correct.

We will modify ``line 299`` to the sum of projections.

> Q4: Coarse/fine-grained rules.

Fine-grained rules impose stricter requirements than coarse ones. As shown in ``Section 4``, coarse rules only require the network to discover the key features $\mathbf{u}$ and $\mathbf{v}$, while fine rules additionally require satisfying constraints between them, e.g., their projections summing to a constant (``Definition 4.3``). Thus, learning coarse rules does not guarantee learning fine-grained rules.

> Q5: Spatial/non-spatial rules.

- Spatial rules involve spatial arrangements like positions and layouts (e.g., light-shadow in ``Figure 1``), while non-spatial rules relate to features independent of space, such as size and texture in ``Figure 1``. ``Section 3.3`` explains that non-spatial rules are harder to learn, possibly due to the lack of explicit cues such as the positions and lengths present in spatial rules, which may explain why Task C is harder than the others.
- Our theory focuses on spatial rules with clearly defined patch-level dependencies but can also extend to non-spatial rules. For example, by introducing a proper tokenizer, non-spatial rules that are inseparable in pixel space can become separable in latent space, allowing our theory to be applied.

---

We hope this addresses your questions. Please let us know if you have any further concerns.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed responses and the additional efforts to clarify my questions. I'm pretty happy regarding most of the responses. While most of my concerns have been satisfactorily addressed, I remain unconvinced regarding my first question.
I fully agree that the linear function approximator setup in [W2025] is a special case of your formulation, potentially allowing for nonlinearity between the two linear weight matrices. However, my intuition is as follows: if the rule-conforming data reside on a lower-dimensional linear subspace of the two-patch data space (even with a nontrivial distribution in that space), then the optimal linear score (or denoiser), being a linear function of the two patches, should be capable of learning this subspace and achieving perfect rule conformity. Granted, the resulting distribution would be Gaussian for a linear denoiser, and [W2025] suggests that convergence time would be exponentially longer due to small or zero eigenvalues along the tangent subspace. Nonetheless, asymptotically, conformity should be achieved. Given this perspective, I find it difficult to reconcile this with your argument: "As $x_t$ varies, the function output also varies, which results in a non-vanishing rule conforming error that depends on the variation of $\epsilon_t$. Then similar results in Theorem 4.4 can be obtained and our main theoretical arguments still hold." Could you provide further details of the derivation regarding this point? Although I am pleased with most aspects of the paper, the theoretical argument concerning this issue has not yet convinced me. Based on this concern, I am currently inclined to keep the score at weak reject (2). I look forward to your clarifications on this matter.

---

Reply to Comment 1.1.1: Comment: We are glad that our rebuttal has addressed most of your other concerns. Thank you once again for your thoughtful follow-up questions. We would like to take this chance to further clarify the rule-conforming error, especially when there is a mismatch between the model class and the underlying rule.
We fully agree with your intuition: if the rule-conforming data lies on a low-dimensional *linear* subspace and the score network is a *linear* function, then the rule-conforming error can vanish asymptotically. This is also reflected in our ``Theorem 4.4`` (considering a linear network for all patches), where the polynomial functions $\tilde \sigma^{(1)}(\cdot)$ and $\tilde \sigma^{(2)}(\cdot)$, which have polynomial degree 1 (as we consider a linear model), become constant functions, leading to a zero lower bound on the rule-conforming error.

However, our theoretical argument mainly focuses on the general case, where we consider the setting that
* the underlying rule is *unknown* to the learner, and
* the model class may *not align* with the true structure of the rule, i.e., the score network can be much more complicated than a linear function.

In such scenarios, the more complicated neural network model is more powerful for recovering the entire data distribution (which could be complicated for non-Gaussian $\zeta$), while the hidden rule may not be well captured. In that case, the polynomial functions $\tilde \sigma^{(1)}(\cdot)$ and $\tilde \sigma^{(2)}(\cdot)$ in ``Theorem 4.4`` will be non-constant, and the rule-conforming error will be non-zero.

To provide some theoretical intuition, we can consider a simple case where the first two patches are $\zeta u$ and $-\zeta v$. In the **linear model setting**, considering $\zeta\sim N(0,1)$, the rule-conforming function $\psi_t(x)$ can roughly be written as $\psi_t(x)=\langle f(\Sigma)\Sigma x,[u,v]\rangle$, where $\Sigma$ is the covariance matrix of the data and $f(\Sigma)$ is a function of $\Sigma$ that commutes with $\Sigma$. Importantly, in this setting, the reason a linear model can handle the linear rule is that the vector $[u,v]$ is exactly an eigenvector of $\Sigma$ with eigenvalue $0$; then clearly $\Sigma f(\Sigma) \cdot [u,v]=0$, and thus $\psi_t(x)=0$ for all $x$.
However, if using **non-linear models**, for instance $s_w(x)=W_1x + W_2(x\circ x)$ ($x\circ x$ denotes the Hadamard product), then we need to consider the covariance matrix over the transformed data $[x, x\circ x]$, which no longer aligns with the vector $[u,v]$ for general non-Gaussian $\zeta$. As a consequence, as long as $W_2$ is non-zero, we will not be able to obtain $W_2\cdot[u,v]=0$ (since $[u,v]$ will not be in the null space of $W_2$ as it is for $\Sigma$). Then, following an analysis similar to ``Theorem 4.4``, we can show that the conforming condition $\psi_t(x)=\langle W_1 x+W_2(x\circ x)+1/\beta_t^2 \cdot x,[u,v]\rangle =0$ **will not hold for all $x$**, implying that the rule-conforming error will not be zero.

To further support this claim empirically, we have included an [additional experiment](https://anonymous.4open.science/api/repo/Rebuttal-DDBA/file/diff_0.8_asym.pdf?v=fd07d01b) comparing rule-conforming errors across different model classes in our synthetic *linear* data setup. As shown in the figure, the *linear model* achieves significantly lower rule-conforming error than more complex, nonlinear models (2-layer and 3-layer MLPs with ReLU or quadratic activations, operating on all patches jointly). This aligns with our claim: without exact structural alignment between the model/objective and the rule, a small rule-conforming error cannot be guaranteed.

We hope this explanation clarifies your confusion, and we are happy to answer any further questions. We will make sure to include the additional discussions and experiments in our revised version based on your comments. Your feedback has been invaluable in helping us improve the clarity and depth of our paper.
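As a quick numerical illustration of the linear-model case above (a minimal sketch, assuming unit-norm $u$, $v$; the polynomial $f$ below is an arbitrary example): $[u; v]$ lies in the null space of $\Sigma$, so any commuting $f(\Sigma)\Sigma$ annihilates it and $\psi_t$ vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
u = rng.normal(size=d); u /= np.linalg.norm(u)
v = rng.normal(size=d); v /= np.linalg.norm(v)

# First two patches are zeta*u and -zeta*v, so x = zeta * [u; -v]
# and the data covariance is E[zeta^2] * w w^T with w = [u; -v].
w = np.concatenate([u, -v])
Sigma = np.outer(w, w)

# [u; v] is orthogonal to w (<u,u> - <v,v> = 0 for unit norms),
# hence an eigenvector of Sigma with eigenvalue 0.
uv = np.concatenate([u, v])
null_err = np.linalg.norm(Sigma @ uv)

# Consequently f(Sigma) * Sigma annihilates [u; v] for any polynomial f
# (here an arbitrary example f(S) = 2S + S^2), so psi_t(x) = 0.
fS = 2 * Sigma + Sigma @ Sigma
poly_err = np.linalg.norm(fS @ (Sigma @ uv))
print(null_err < 1e-10 and poly_err < 1e-10)  # True
```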
MuLan: Adapting Multilingual Diffusion Models for Hundreds of Languages with Negligible Cost
Accept (poster)
Summary: This paper proposes MuLan, a text-encoder adapter that equips T2I models pre-trained on English-dominant data with multilingual capabilities. With MuLan, T2I models can take in prompts written purely in non-English languages and generate images of quality on par with those from English prompts.

Claims And Evidence: **Issue #1: Lack of as-is baseline performances from English-only T2I backbone models.** Although the authors have shown comparisons with other multilingual T2I models such as AltDiffusion or GlueGen in Table 3, what are the fundamental baseline performances when you simply feed prompts in the target language as-is to an English-only T2I model, such as SD1.5? This as-is baseline setup is certainly assumed to yield poor CLIPScore performance. But without showing these baselines, we may not have a solid idea of how much difference using a multilingual text encoder makes in the first place. As the authors have already implied, the pre-training data used in the vanilla SD1.5 may already have a language bias towards non-English Western languages. Thus, SD 1.5 should have higher as-is baselines in languages like Spanish or German. Will adding MuLan raise the performance across all languages by a universal increase, or will it instead selectively improve some languages more than others over their as-is baselines? I don't think the robustness of the MuLan adapters can be truly demonstrated if it is presented without the as-is baselines.

Methods And Evaluation Criteria: **Issue #2: Confusion over the CLIPScore specification.** According to Line 288, the CLIPScore/SIM used in the paper uses InternVL-LLaMA as the surrogate model. However, CLIPScore is most widely calculated with CLIP-ViT variants, such as in DALL-E 3 [1]. How do the MuLan adapters perform if measured with industry-standard metrics?
**Issue #3: T2I metrics other than CLIPScore?** CLIPScore has already been shown to struggle with compositional text prompts, as in [2]. Since the prompts in XM12 mostly feature compositional attributive phrases like those in Figure 1, how well does MuLan improve if measured with fine-grained text-image-alignment metrics such as VQAScore [3]?

Theoretical Claims: There are no major deviations from mainstream theories.

Experimental Designs Or Analyses: Please refer to my concerns in the Claims and Methods sections above.

Supplementary Material: There is none.

Relation To Broader Scientific Literature: The potential of MuLan can be high, as it works in a plug-and-play / training-free style that can be easily integrated into established T2I pipelines, such as those implemented in the Huggingface framework.

Essential References Not Discussed: References mentioned in my concerns in the sections above.
- [1] Improving Image Generation with Better Captions. https://cdn.openai.com/papers/dall-e-3.pdf 2023
- [2] Text encoders bottleneck compositionality in contrastive vision-language models. https://arxiv.org/abs/2305.14897 EMNLP 2023
- [3] Evaluating Text-to-Visual Generation with Image-to-Text Generation. https://arxiv.org/pdf/2404.01291 ECCV 2024

Other Strengths And Weaknesses: None.

Other Comments Or Suggestions: None

Questions For Authors: Please find my 3 major issues in the sections above. Of all of them, Issue #1 has the highest severity and will greatly affect my impression of this work if left unaddressed. After all, I really like the potential application of MuLan, but I would like to push its presentation over the high-standard bar of ICML. I am open to updates and would like to engage with the authors further.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer fvJg, Thanks so much for your constructive comments and support for acceptance. We hope our responses can address your concerns.

**Q1: Lack of as-is baseline performances from English-only T2I backbone models.**

**A1**: We thank the reviewer for emphasizing the importance of "as-is" baseline performances. In response, we evaluated Stable Diffusion v1.5 and PixArt-α by directly inputting prompts in 11 languages from the XM12 dataset, using InternVL-LLaMA to compute CLIP scores. These as-is results were then compared with our MuLan-adapted models. We found that **MuLan adapters consistently improved performance across all languages.** Notably, even for languages close to English, such as French, Spanish, and German, our method still achieved clear gains over the as-is baseline, demonstrating its broad effectiveness.

|Model|avg|de|fr|it|es|pl|hi|ru|zh|ja|ko|ar|
|-|-|-|-|-|-|-|-|-|-|-|-|
|SD15 (as-is)|28.1|31.9|34.2|31.8|34.2|24.8|26.8|23.0|26.6|26.9|24.6|24.5|
|MuLan-SD15|**37.2**|38.0|38.0|37.8|37.7|37.0|35.6|37.4|36.6|36.7|36.4|38.2|
|Δ (SD15)|**+9.1**|+6.1|+3.8|+6.0|+3.5|+12.2|+8.8|+14.4|+10.0|+9.8|+11.8|+13.7|
|PixArt (as-is)|29.0|36.8|38.2|36.0|36.9|27.0|24.0|28.9|24.2|22.1|22.4|22.2|
|MuLan-PixArt|**39.5**|40.5|40.2|40.0|39.6|39.1|37.2|39.6|39.3|40.5|39.3|39.1|
|Δ (PixArt)|**+10.5**|+3.7|+2.0|+4.0|+2.7|+12.1|+13.2|+10.7|+14.9|+18.4|+16.9|+16.9|

**Q2: Confusion over the CLIPScore specification.**

**A2:** While the industry-standard CLIP-ViT models are indeed widely used, they are primarily trained on English data and thus provide unreliable similarity scores for non-English inputs. In contrast, InternVL-LLaMA has been trained on multilingual image-text pairs and possesses a better understanding of non-English languages. Therefore, we chose InternVL-LLaMA as the surrogate model for computing CLIPScore in our paper.
To further ensure the objectivity of our CLIPScore evaluation, we additionally used the multilingual CLIP-ViT model released by LAION to compute similarity scores. The results, shown in the table below, demonstrate that **our model still performs strongly and that the observed trends are consistent with those reported in Table 3 of the paper.**

|model|avg|en|fr|es|it|zh|ja|hi|de|ko|ru|ar|pl|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|GlueGen|21.05|22.3|21.3|20.6|19.6|21.5|21.0|-|-|-|-|-|-|
|AltDiffusion|23.13|24.2|23.6|23.2|23.1|24.3|23.0|21.1|24.3|22.5|22.3|22.7|23.2|
|SD15 (Google Translate)|22.27|22.3|23.8|23.4|22.8|22.6|21.3|20.8|24.5|21.7|22.6|19.5|21.9|
|PixArt (Google Translate)|24.26|24.1|24.2|23.8|24.5|26.4|25.1|21.6|26.5|23.1|24.6|23.3|23.9|
|MuLan-SD15|23.02|21.8|23.6|23.2|22.7|23.6|24.2|22.7|24.5|21.8|23|22.3|22.9|
|MuLan-PixArt|24.15|24.4|23.9|23.4|24.2|25.7|24.8|23.2|25.8|23.4|24.6|23.1|23.4|

**Q3: Limitations of CLIPScore.**

**A3:** The prompts in XM12 predominantly feature compositional text prompts. While CLIPScore is useful for evaluating whether the main subject in the image aligns with the prompt, it falls short in assessing object-level details and spatial relationships. As you suggested, we additionally evaluated our model using VQAScore to better measure fine-grained text-image alignment. Since the default VQAScore evaluation model (clip-flant5-xxl) does not support multilingual prompts adequately, we adopted GPT-4o as our evaluation model to enable multilingual VQAScore assessment. Results show that our model maintains strong performance, significantly outperforming GlueGen and AltDiffusion, and even surpassing or matching translation-based baselines across most languages. Our model effectively leverages the capabilities of existing MLLMs, demonstrating strong generalization in multilingual image generation. In the future, we plan to further integrate the native multilingual, contextual, and reasoning abilities of MLLMs into the image generation process.
|model|avg|en|fr|es|it|zh|ja|hi|de|ko|ru|ar|pl|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|GlueGen|0.533|0.81|0.51|0.52|0.45|0.41|0.5|-|-|-|-|-|-|
|AltDiffusion|0.581|0.71|0.6|0.6|0.59|0.5|0.53|0.62|0.49|0.56|0.51|0.64|0.62|
|SD15(Google Translate)|0.612|0.81|0.66|0.71|0.61|0.57|0.5|0.67|0.5|0.7|0.68|0.41|0.52|
|PixArt(Google Translate)|0.750|0.85|0.78|0.8|0.82|0.71|0.71|0.82|0.71|0.76|0.76|0.59|0.69|
|MuLan-SD15|0.635|0.80|0.74|0.76|0.68|0.60|0.52|0.53|0.57|0.65|0.75|0.45|0.57|
|MuLan-PixArt|0.744|0.88|0.81|0.83|0.81|0.71|0.71|0.61|0.73|0.74|0.79|0.59|0.72|

References

[a1] https://huggingface.co/laion/CLIP-ViT-H-14-frozen-xlm-roberta-large-laion5B-s13B-b90k
[a2] Evaluating Text-to-Visual Generation with Image-to-Text Generation. https://arxiv.org/pdf/2404.01291 ECCV 2024

---

Rebuttal Comment 1.1:
Comment: Thank you so much for the feedback. I believe my concerns have been adequately addressed. I am happy to update my rating. I am looking forward to seeing if MuLan can be applied to more sophisticated backbones (e.g. Flux.1) in the future, since SD 1.5 has already become so obsolete as we speak.

---

Reply to Comment 1.1.1:
Comment: Thank you for raising the score and for your constructive suggestions! We're glad our responses addressed your concerns, and we will incorporate the additional discussions and results into the paper. We will also try to apply MuLan to more advanced text-to-image models in the future.
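For transparency about the metric used in the tables above, here is a minimal sketch of how CLIPScore is typically computed from precomputed image and text embeddings (e.g., embeddings produced by the multilingual CLIP-ViT model in [a1]). The helper name `clip_score` is our illustrative choice, not code from the paper.

```python
import numpy as np

def clip_score(image_emb, text_emb, w=100.0):
    """CLIPScore (Hessel et al., 2021): w * max(cosine similarity, 0).

    image_emb, text_emb: 1-D embedding vectors from a (multilingual)
    CLIP image/text encoder; w = 100 yields the 0-100 scale used above.
    """
    image_emb = np.asarray(image_emb, dtype=float)
    text_emb = np.asarray(text_emb, dtype=float)
    cos = image_emb @ text_emb / (
        np.linalg.norm(image_emb) * np.linalg.norm(text_emb)
    )
    return w * max(cos, 0.0)
```

On this scale, scores in the 20-25 range reported above correspond to image-text cosine similarities around 0.20-0.25.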
Summary: This paper proposes a simple yet effective way to handle multilingual text input in text-to-image generation. By utilizing a pre-trained multilingual text encoder and introducing a lightweight adapter, the resulting model is shown to handle multilingual input well.

Claims And Evidence: The paper claims to propose a method which can handle multilingual text input well in text-to-image generation. According to the results in the paper, it obtains better results than some previous methods.

Methods And Evaluation Criteria: The idea of utilizing a pre-trained multilingual text encoder is reasonable, because it has already been pre-trained on multilingual text and should be able to better align different languages into a single embedding space.

Theoretical Claims: There are no theoretical claims in the paper.

Experimental Designs Or Analyses: The experimental design seems reasonable. The authors compared the proposed method with related methods, and also included some important baselines such as directly using Google Translate to process multilingual input.

Supplementary Material: Yes, I checked the results, including combining ControlNet with the proposed method.

Relation To Broader Scientific Literature: The difference is that the paper utilizes a pre-trained multilingual text encoder, while previous methods either fine-tuned the diffusion model on a multilingual text-image dataset (which may lead to image quality drop and potential bias) or aligned different languages with a lightweight network.

Essential References Not Discussed: None

Other Strengths And Weaknesses: According to the results in Table 3, the simple baseline, which directly uses Google Translate to process multilingual input, obtains results comparable to the proposed method: the SD 1.5 baseline outperforms the proposed SD 1.5-based model in 5/12 cases, and the PixArt baseline outperforms the corresponding proposed method in 8/12 cases.
People may choose the baseline method because it doesn't require any training and allows arbitrary translation models and T2I models. Meanwhile, the proposed method has to be retrained if one wants to apply it to a new SoTA model.

Other Comments Or Suggestions: None

Questions For Authors: What is the reason causing the performance difference between different alignment methods in Table 5?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Dear reviewer RQVZ,

Thanks very much for your valuable comments. We hope our responses can address your concerns and clarify our contribution.

**Q1. Comparison with translation baseline.**

**A1:** In fact, previous multilingual text-to-image generation works (e.g., AltDiffusion) typically compare against open-source translation models such as NLLB [a1]. As shown in the following table, we also compared the results obtained using NLLB as a translation tool, and in the 11 non-English languages of XM12, **our performance exceeded the baseline of NLLB as a translation tool**. We can also observe that using translation tools to handle non-English input is highly dependent on, and very sensitive to, the performance of the translation tool.

|Model|open/closed source|avg|fr|es|it|zh|ja|hi|de|ko|ru|ar|pl|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|SD15 (NLLB)|open|36.17|37.6|37.4|37.2|36.2|36.2|32.7|37.7|35.9|36.8|33.9|36.3|
|SD15 (Google Translate)|closed|36.70|38.2|38.0|37.8|36.6|36.7|33.1|38.4|36.4|37.4|34.4|36.7|
|MuLan-SD15|open|**37.60**|38.0|37.7|37.8|37.7|38.0|35.6|38.0|37.6|37.9|38.2|37.1|
|PixArt (NLLB)|open|38.27|39.7|39.4|39.2|38.1|38.6|34.4|40.3|37.7|39.3|35.6|38.7|
|PixArt (Google Translate)|closed|**39.82**|41.2|40.6|41.0|39.9|40.8|34.7|41.5|39.0|40.5|39.0|39.9|
|MuLan-PixArt|open|39.49|40.2|39.6|40.0|39.3|40.5|37.2|40.5|39.3|39.6|39.1|39.1|

**Google Translate** is a strong commercial system with high development and data costs that has undergone long-term refinement to reach its current performance; however, **it remains a closed-source tool, making it unsuitable for offline or privacy-sensitive scenarios**. In contrast, **MuLan is a low-cost baseline that achieves multilingual image generation by training on English image-text pairs and is entirely based on open-source suites**. In terms of the **FID** metric, our model generally **achieves better results**.
On CLIPScore, our results are already comparable to those obtained using Google Translate, and we show clear advantages in less common languages. These results highlight the potential of achieving multilingual image generation through training, without relying on external translation systems. We will include these points in the main paper in future revisions.

**Q2. Adapting to a new SoTA model.**

**A2:** One of our main contributions is the low adaptation cost (e.g., **MuLan-SD15 training only requires 96 GPU hours**). As demonstrated by the visual results in Figure 1 and Figure 4, the adapter trained for MuLan-SD15 can be effectively applied to other fine-tuned variants of SD15 (e.g., Dreamshaper), and also works well with plugins such as LoRA and ControlNet. The same holds for models like SD21, SDXL, and PixArt-$\alpha$. This indicates that our method exhibits strong generalizability across this family of models, with minimal additional cost. For newly emerging SoTA models, our method can still be employed to support them and their related plugins and variants with similarly low adaptation cost.

**Q3. More analysis of performance differences in Table 5.**

**A3:** Based on the experimental results in Table 5, we can draw two main conclusions.

**1. Only the aligned text encoder can support multilingual image generation through MuLan.**

By comparing the models in Rows 1–3 and Rows 4–8 of Table 5, it is evident that without Language-Centered Alignment (LC) or Image-Centered Alignment (IC), the models are unable to support multilingual image generation through MuLan. We provide visualizations of the feature distributions for some of these models in Figure 5 of the appendix. As shown, models without alignment—such as LLaMA2-7B (Figure 5(d)) and XLM-R Large (Figure 5(a))—produce scattered features when processing inputs with the same meaning but in different languages.
In contrast, aligned models—such as XLM-R Large* (Figure 5(b)), Mul-OpenCLIP (Figure 5(c)), and InternVL-LLaMA (Figure 5(e))—can cluster features of semantically equivalent inputs across different languages. This is a key reason why our method, trained solely on English image-text data, is still able to support multilingual image generation.

**2. Image-Centered Alignment outperforms Language-Centered Alignment.**

By comparing the models in Rows 4–6 and Rows 7–8, we observe that LC (Language-Centered Alignment) yields inferior results compared to IC (Image-Centered Alignment). This is primarily because LC relies on translated data, which may introduce noise due to inaccuracies in translation. Moreover, IC aligns multilingual semantic features through the shared image feature space, effectively providing MuLan with a prior that facilitates further refinement of the alignment. In contrast, LC relies entirely on MuLan to establish the connection between language features and image features from scratch. These factors collectively contribute to the inferior performance of LC compared to IC.

References

[a1] https://huggingface.co/facebook/nllb-200-3.3B
Summary: This paper introduces MuLan, a lightweight and plug-and-play language adapter that enables multilingual text-to-image generation for diffusion models with minimal computational cost. The central idea is that multilingual text encoders can be used to enable multilingual image generation without the need for extensive labeled datasets in multiple languages. Instead of training a full diffusion model on multilingual text-image pairs, the method freezes the text encoder and diffusion model and introduces a small adapter module that bridges the two. This approach allows the model to support over 110 languages while requiring training only on English-language image-text pairs. The main claims are that the model achieves strong multilingual image generation performance (on par with models explicitly trained on multilingual text-image pairs), that it requires significantly less computational cost and training data (by leveraging pretrained multilingual text encoders and training only a small adapter), and that it is flexible and integrates with existing models and community tools (e.g. LoRA, ControlNet). The authors evaluate the model on multilingual benchmarks and compare it against other approaches from the literature and translation-based approaches. The results show strong performance in both high-resource and low-resource languages, with significant efficiency gains in terms of compute and data requirements, suggesting that the proposed model is both computationally efficient and broadly applicable.

Claims And Evidence: The paper makes several strong claims about the model. Overall, the claims seem well supported by the presented evidence. First, they claim that the proposed model achieves multilingual performance comparable to English-trained models while requiring only English data. This is supported by quantitative results, where the CLIP similarity score for English (39.57) is nearly identical to the average score across all other languages (39.61).
The authors also show visual examples demonstrating the model's ability to generate accurate images from prompts in diverse languages, including low-resource languages. Second, they claim that in comparison to other multilingual diffusion models, the model reduces training costs. They argue that because the model only trains a small adapter, it avoids the high cost of training a full diffusion model on multilingual datasets. The claim is backed by the reported training time being orders of magnitude lower than the competing models that MuLan is benchmarked against, and by a comparative cost analysis. Another claim is that the model generalizes across a large number of languages, which is supported by an evaluation on multilingual datasets showing that the model is comparable to or sometimes better than translation-based methods.

Methods And Evaluation Criteria: The method essentially involves training a multilingual adapter on top of diffusion models. The diffusion model itself is frozen, and the adapter is trained to connect it to a pretrained multilingual text encoder. This is quite appropriate for the context and enables text-to-image generation from a wide variety of languages while keeping the underlying generation model fixed, with savings in terms of required compute. The adapter learns to map multilingual text embeddings into the same space as the English-trained diffusion model. The paper explores two different strategies to achieve this: language-centred alignment (aligning multilingual embeddings to English using parallel translation datasets and a distillation loss) and image-centred alignment (using contrastive learning to align multilingual embeddings with image embeddings, ensuring the text prompts in different languages produce similar image representations).
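The image-centred alignment described above is a CLIP-style symmetric contrastive objective. The following is a minimal numpy sketch of such a loss, assuming batches of paired text and image embeddings; the helper name and temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def contrastive_alignment_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss: text i should match image i in the batch,
    pulling multilingual text features toward the shared image space."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature          # (batch, batch) similarities
    labels = np.arange(len(t))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)                 # stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[labels, labels].mean()

    # Average the text-to-image and image-to-text directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss over text prompts in many languages paired with the same images is what clusters semantically equivalent inputs across languages.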
The experiments show that the image-centred approach performs better - likely because the alignment is done directly between language and image space as opposed to relying on noisier translation datasets. The evaluation metrics are the CLIP similarity score (text-to-image alignment) and the FID score, which measures image quality. These metrics are quite appropriate in this context and (especially in the case of FID) widely used across the literature. Furthermore, the model is benchmarked on XM3600 (12 languages) and COCO2014 (85 languages) and compared against a number of strong baselines (e.g. translation-based Stable Diffusion), which strengthens the claims made by the authors. A minor remark here: it would be interesting to have a more detailed evaluation of cross-lingual consistency, e.g. whether a given concept is represented similarly across these languages and the degree to which similar results are obtained. It is understood, however, that this is rather challenging and perhaps wouldn't strengthen the claims of this paper significantly enough.

Theoretical Claims: not applicable.

Experimental Designs Or Analyses: As discussed, the methodology and evaluation of this paper are strong. The methodology for language- and image-centered alignment is sound and aligns with common practice in the representation learning literature. The evaluation and benchmarking provide a broad test of generalization across low- and high-resource languages. The translation-based baselines test whether direct multilingual generation is better than translation pipelines. The analysis of computational efficiency is thorough and well documented, illustrating very clearly the contrast with other multilingual models in the literature. The comparison of image- vs language-centred alignment also seems sound and empirically supports the claim that image-centred alignment is the superior choice. As before, a minor remark would be the lack of a cross-lingual consistency evaluation.
One can imagine that when testing across so many different languages, stylistic/cultural nuances could affect the results, and there could be some failure cases. Finally, as another minor remark, there is information about experimental design to aid reproducibility, but it is not clear how some of the hyperparameter settings were arrived at or whether they are just default values. Overall there are no real comments from me on this, and methodologically and experimentally this paper is quite strong and comprehensive.

Supplementary Material: Yes. The supplementary material provides some more information on additional experiments and analysis done on the model (both quantitative and qualitative), although in general there is not a lot of content here. As one of the main selling points of the proposed model is the lower compute requirements, training efficiency is further examined here. In particular, performance as given by CLIP score across 7 languages vs. reduced dataset size is investigated. This section, however, could benefit from more detail. At this point, I am also wondering whether we could have seen a similar investigation that varies the model size instead, as it seems like a very obvious inclusion alongside the dataset size experiments.

Relation To Broader Scientific Literature: The paper's key contributions align with several existing research areas in NLP and image generation. Adapter-based methods for multilingual NLP are a widely explored concept to enable multilingual capability without retraining whole models. The paper under discussion extends the idea to text-to-image generation. Indeed, due to the extensive commercial applications, text-to-image models have also been quite a popular area of research recently, so this paper is very timely in both respects.

Essential References Not Discussed: I would have expected acknowledgement of works like MAD-X [1] for adapter-based cross-lingual transfer.
Other than that, several relevant works are cited (and of course directly compared against) and it is acknowledged that this work touches several disciplines.

[1] https://arxiv.org/abs/2005.00052v2

Other Strengths And Weaknesses: The paper is well written and clearly presented. The methods are not very novel, but they are timely and of considerable impact. Nothing further to report beyond what was already discussed.

Other Comments Or Suggestions: not applicable

Questions For Authors: not applicable

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Dear reviewer BvZv,

Thanks a lot for your insightful reviews and support for our work! We hope our responses can address your questions.

**Q1. Lack of a cross-lingual consistency evaluation.**

**A1:** We appreciate the reviewer’s suggestion to evaluate cross-lingual conceptual consistency. Our work focuses on enabling multilingual text-to-image generation by leveraging a pretrained multilingual text encoder and English image-text pairs. While our current method demonstrates some degree of cross-lingual consistency, it may struggle with culture-specific corner cases, which fall outside the scope of this work and would require human curation or high-quality data to resolve. We will acknowledge this limitation in the main paper and plan to address it in future work. As a potential direction, we may explore collecting data with explicit culture-specific concepts and developing a dedicated benchmark for evaluating cross-lingual consistency in multilingual T2I generation.

**Q2. Clarification on hyperparameter settings.**

**A2:** The training hyperparameters for MuLan-SD15 and MuLan-PixArt used in Table 3 are already provided in Section 4.1. Here we provide additional details for completeness: we use the AdamW optimizer with $\beta = (0.9, 0.999)$ and a weight decay of 0.01. For SD 1.5, SD 2.1, and PixArt, we follow the original models’ resolutions (512x512 or 768x768). For SDXL, we adopt a two-stage training strategy with different resolutions to ensure stability. We adopt classifier-free guidance by randomly dropping text conditions, following [a1]. In Table 5, we only replace the language model; all other training settings are identical to those of MuLan-SD15. We will include these details regarding the selection of experimental hyperparameters in the updated manuscript.

**Q3.
Ablation study on the model size**

**A3:**

**For the size of the adapter**

In the main paper, our proposed adapter uses a lightweight one-layer Transformer encoder-decoder structure (see Section 3.2). To further investigate the relationship between model size and performance, we have conducted additional ablation experiments on Stable Diffusion 1.5, testing adapters with 2, 4, and 6 Transformer layers. The experimental setup follows Appendix A.1. We calculated the average CLIPScore across seven languages from the XM12 dataset.

|layers|1|2|4|6|
|-|-|-|-|-|
|**Avg CLIPScore**|35.8|35.8|35.5|35.4|

We observe that increasing the number of Transformer layers in the adapter does not lead to improved performance; in fact, it **slightly degrades performance**. This result aligns with previous findings in works such as LLaMA-Adapter [a2] and MiniGPT-4 [a3], which demonstrate **that adapter-based tuning for LLMs typically does not require large parameter counts or massive training data**. Our lightweight adapter design is consistent with these community practices, suggesting that a compact architecture is sufficient and may even be more stable and efficient for adaptation.

**For the size of the language model**

Our approach relies on a series of pretrained multilingual language models, making it difficult to perform strictly controlled ablation studies on the size of the language model. These models differ not only in parameter count but also in architecture, tokenizer, and training corpora, which complicates direct comparisons. Although a strictly controlled study is not feasible, a coarse-grained observation from Table 5 reveals a trend suggesting that the size of the language model may impact the performance upper bound. Specifically, the first four models—MultiLang-CLIP (33.2), AltClip-m18 (33.3), XLM-R Large* (34.7), and Mul-OpenCLIP (36.1)—all use **XLM-R Large (335M)** as the language model. In contrast, **InternVL-LLaMA (7B)** achieves the highest score of **37.8**.
Since our approach builds upon existing pretrained multilingual models, conducting strictly controlled comparisons on model size is non-trivial. As part of future work, we plan to explore how our adapter performs when integrated with increasingly stronger language models, which could further reveal its potential in real-world multilingual generation.

References

[a1] Ho, Jonathan. “Classifier-Free Diffusion Guidance.” https://arxiv.org/abs/2207.12598
[a2] Zhang, Renrui et al. “LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention.” https://arxiv.org/abs/2303.16199
[a3] Zhu, Deyao et al. “MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models.” https://arxiv.org/abs/2304.10592

---

Rebuttal Comment 1.1:
Comment: I thank the authors for their insightful reply. In light of it, I am happy to keep the current score and for the paper to be accepted at ICML.

---

Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and endorsement; we will continue refining the paper to present our best work at ICML.
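The classifier-free guidance training trick mentioned in A2 of this rebuttal (randomly dropping text conditions) can be sketched as follows. The function name, drop probability, and use of an explicit "null" embedding are our illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def drop_text_conditions(text_embs, null_emb, p_drop=0.1, rng=None):
    """With probability p_drop, replace a prompt's embedding with the
    unconditional ("null") embedding, so the diffusion model also learns
    the unconditional score needed for classifier-free guidance."""
    if rng is None:
        rng = np.random.default_rng()
    out = np.array(text_embs, dtype=float, copy=True)
    dropped = rng.random(len(out)) < p_drop     # per-sample drop mask
    out[dropped] = null_emb
    return out, dropped
```

At sampling time, the conditional and unconditional predictions learned this way are combined with a guidance weight to trade off fidelity and diversity.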
Learning Multiple Initial Solutions to Optimization Problems
Reject
Summary: This paper addresses the setting where a parametric optimization problem must be solved repeatedly, such as in online settings. For these settings, the authors argue that a key concern is the provision of a good initial guess for a local optimization solver. The paper proposes the MISO framework, which uses a transformer model to learn a pool of initial solutions, parameterized by the optimization problem. The models are trained using a loss function that weights both proximity to the true global solution and some diversity metric(s). The proposed method is evaluated on three small-scale control case studies.

Claims And Evidence: Most claims are made empirically and are evidenced suitably by the data. The main theoretical claim the authors make is a performance guarantee (Section 4.2) that is slightly confusing. Namely, “in the single-optimizer setting the best initial solution is always equal or better than the default according to the selection function $\Lambda$”. However, this is not really a performance guarantee, as the initial solutions are not used. For example, if an initial point from an alternative method is in the pool of candidate initial solutions, but does not have the best objective value (the authors’ choice of $\Lambda$), then the point will not be used at all. Therefore, from my understanding, it cannot be guaranteed that the optimized solution will be better than the solution found by using the initial point from the alternative method.

Methods And Evaluation Criteria: The benchmarks used are quite limited, making the motivation for this work slightly unclear. Specifically, the problems appear low-dimensional (1D or 2D controls), and the control problems are linearized. Therefore, it would be expected that standard methods such as convex quadratic programming or multi-starting solvers from a grid search/LHS would perform well, but these comparisons are not included.
Moreover, the authors claim the main problem addressed is that local optimization solvers are trapped by local optima, but use as an oracle the same solver with a longer run-time. This implies that the main issue is convergence time rather than local optima. This paper requires an analysis of what optimization solver is being tested, what the computational expenses are, and why it fails to find a good optimum.

Theoretical Claims: No theoretical claims or proofs are included.

Experimental Designs Or Analyses: The experiments are designed well, but are missing proper benchmarks to control solutions and existing (non-learning-based) multi-start methods as described above. Moreover, implementation details of the optimization solver(s) used are notably lacking.

Supplementary Material: The supplementary material provides an adequate description of the case studies considered, model hyperparameters, and baseline methods. Discussion of the optimization implementations is notably missing. A few sections consider ablation studies for some design choices of the method.

Relation To Broader Scientific Literature: The key contribution to the broader literature here appears rather limited: while several works have tried to learn the solution and/or initial guess for a parametric optimization problem, this work learns *multiple* solutions and provides simple algorithms to integrate them.

Essential References Not Discussed: As the main contributions are in learning multiple, diverse solutions, the paper is missing a discussion of existing research in promoting diversity in a pool of solutions, e.g., quality diversity algorithms.

Other Strengths And Weaknesses: This paper is very well written and is easy to read and follow. The motivations are presented well, and the algorithm is easy to understand. As the “learning” contribution here is minor (see above), this work may be very well suited for a leading journal/venue in control applications.
Other Comments Or Suggestions: n/a

Questions For Authors: n/a

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1:

Rebuttal: Thank you for taking the time to review our paper and for your feedback. We appreciate your recognition of our clear writing, motivation, and coherent algorithm. In the following, we clarify misconceptions regarding our claims, address concerns about benchmark tasks, solver comparisons, and implementation details, and underscore the rigorous empirical validation of our approach, highlighting its suitability for publication within the broader ML community.

**Performance Guarantee Clarification.** You correctly point out that our claim in Sect. 4.2 focuses on the initial solution quality according to the selection function Λ. Indeed, we emphasize this explicitly in lines 210-215: The guarantee pertains to the initial solutions themselves and does not directly guarantee improved, optimized outcomes. In contrast, MISO is guaranteed to lead to an equal or better final solution in the multiple optimizers setting, as stated in lines 216-219 and empirically supported by results in lines 1071-1099.

**Misinterpretation of Main Claims.** We would like to clarify that we have not claimed that the primary issue is local optima. Instead, we explicitly state that our setting is optimization under tight runtime constraints (lines 11, 47, 128, 255), for which local optimization solvers' performance is sensitive to initial solutions (line 229). We clearly distinguish using a proxy-oracle, allowing an extended runtime or multiple initializations to obtain (near-)optimal solutions. Notably, our empirical results (Table 1) show that MISO frequently outperforms the proxy-oracle baseline, validating this claim.

**Missing Details.** We acknowledge that some details were concise in the main text but stress that comprehensive implementation descriptions, hyperparameters, and computational cost analyses are thoroughly provided in the appendix (lines 607-680), directly addressing this concern.
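To make the selection-function discussion concrete, here is a toy, hypothetical sketch (a 1-D multimodal objective and plain gradient descent, not the paper's solvers or tasks) of choosing among candidate initial solutions with Λ = objective value and then running a local optimizer from the selected candidate:

```python
import numpy as np

def f(x):
    # Toy multimodal objective: two local minima near x = ±1,
    # with the left basin slightly deeper due to the 0.3 * x tilt.
    return (x**2 - 1) ** 2 + 0.3 * x

def local_descent(obj, x0, lr=0.01, steps=500, h=1e-6):
    # Plain gradient descent with a finite-difference gradient,
    # standing in for a generic local optimization solver.
    x = float(x0)
    for _ in range(steps):
        grad = (obj(x + h) - obj(x - h)) / (2 * h)
        x -= lr * grad
    return x

def select_initial(candidates, selection_fn):
    # Selection function Λ: pick the candidate with the best Λ value
    # (here, the lowest objective value).
    return min(candidates, key=selection_fn)

x0 = select_initial([2.0, -2.0, 0.5], f)   # Λ picks x0 = 0.5
x_star = local_descent(f, x0)
```

Note that, exactly as the reviewer observes, selecting by initial objective value does not guarantee the best final optimum: here x0 = 0.5 descends into the shallower right basin, while starting from -2.0 would reach the deeper left minimum.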
**Benchmark Tasks.** Our selected tasks are widely recognized and commonly used benchmarks in optimization and learning-based research. Notably, the nuPlan autonomous driving task is an 80-dim problem featuring realistic and complex urban environments with nonlinear dynamics, making it significantly challenging and relevant. Although some optimizers, e.g., iLQR, internally employ linearization during optimization, the underlying tasks themselves are still inherently nonlinear. **Comparison to Other Solvers.** Your suggestion regarding comparisons to methods like convex QP or multi-starting solvers is well taken. However, some selected optimizers (DDP and iLQR) inherently include internal QP iterations (Tassa et al., 2014; Amos et al., 2018). Thus, our chosen baselines already implicitly encompass aspects of these suggested comparisons. **Contribution.** We respectfully disagree regarding the perceived limitation of our key contributions. While prior work primarily focused on predicting single initial solutions, MISO represents the first systematic exploration explicitly designed to predict multiple ones. This novel aspect significantly addresses limitations inherent in single-initial-solution approaches. Our contributions are rigorously supported by extensive empirical evaluations demonstrating substantial performance improvements across multiple optimizers and tasks (Tables 1, 2). **Suitability.** We appreciate your perspective that the paper is suited for a leading venue in control applications. However, our contribution stands to benefit the broader ML community – especially under "optimization methods" and "application-driven ML" tracks, as it explores new ways to predict and use multiple initial solutions in general optimization problems. Furthermore, the relevance is reinforced by several recent works that similarly bridge ML with optimization tools, including those published at ICML 23–24 (e.g., Ferber et al. 2023, Li et al. 2024, and Huang et al. 2024). 
**Non-learning-based Approaches.** We have indeed compared MISO to several non-learning-based methods, such as warm-start, perturbed warm-start, and strong oracle-proxy baselines (Tables 1, 2), demonstrating the significant improvements that MISO achieves over non-learning-based approaches.

**QD.** While QD solutions are generally generated iteratively through evolutionary processes, MISO directly predicts multiple high-quality, diverse initial guesses in one shot. Hence, MISO does not depend on the iterative, population-based exploration that underlies QD approaches.

We have made every effort to address your concerns comprehensively and clearly in our revision. If you feel your concerns have been resolved, we would greatly appreciate it if you could reflect this positively in your evaluation. Should any remaining questions or points need further clarification, please let us know—we are more than happy to provide additional information. Again, thank you for your constructive feedback and valuable suggestions, which have significantly contributed to strengthening our manuscript.

---

Rebuttal Comment 1.1:
Comment: Thanks for the helpful clarifying answers regarding the claims of the work. I still believe different baselines are required to properly evaluate this contribution:

- convex QP solvers should be tested, in order to test the performance against the state of the art, especially for control applications. It is important to use *explicit QP solvers*, not just solvers that use QP iterations internally or exhibit other "aspects" of QP, because these have convergence guarantees in many relevant control settings.
- a different multi-start method should be tested, in order to test the performance improvement from *learning* the multiple starting points. This is crucial to this paper, since multi-start optimization is already a popular technique (e.g., built into many optimization packages).
Therefore, the value of learning the starting points should be tested against standard methods, including randomized, grid search, and Latin hypercube sampling for multi-start optimization.

---

Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful additional suggestions and for acknowledging our previous clarifications regarding performance guarantees, runtime constraints, and benchmark relevance. We remain in the position that the requested additional comparisons are not strictly required for evaluating our proposed method for reasons previously stated. Nevertheless, we recognize that these comparisons can provide additional insights, and therefore, we have provided these results below. All new results fully support our original claims.

**Explicit Convex QP Solvers:** You raised a point regarding comparison to explicit state-of-the-art convex QP solvers. We explicitly compared our method against seven popular convex QP solvers available through the CVXPY library on the nuPlan driving task (Sequential setting, single optimizer). The results are summarized as follows:

| Method | Mean Cost (± SE) |
|------------------|-------------------|
| iLQR (default) | 283.86 ± 37.91 |
| OSQP | 314.13 ± 53.32 |
| CLARABEL | 311.20 ± 55.00 |
| PIQP | 289.37 ± 41.60 |
| PROXQP | 274.81 ± 39.78 |
| DAQP | 307.01 ± 52.10 |
| QOCO | 249.20 ± 32.10 |
| MISO WTA (ours) | **30.75 ± 2.15** |

These results indicate that explicit QP-based solvers do not achieve competitive performance in our domain within the fixed runtime limits used by our method.

**Comparison to Standard Multi-start Optimization Techniques:** You suggested comparisons to common multi-start optimization methods, specifically grid search and Latin hypercube sampling (LHS). We ran both methods on the nuPlan task, allocating identical runtime budgets (5 ms) as used by MISO.
Both grid search and LHS systematically select multiple initial candidate solutions; we evaluated these candidates and selected the best (lowest-cost) candidate as the initial solution for subsequent optimization. Results across single and multiple optimizer configurations: Single Optimizer: | Method | One-off Cost | Sequential Cost | |------------------|-------------------|-------------------| | Warm-start | 283.86 ± 37.91 | 283.86 ± 37.91 | | Grid Search | 491.58 ± 60.14 | 619.28 ± 65.26 | | LHS | 439.57 ± 42.24 | 577.80 ± 64.97 | | MISO WTA (ours) | **30.17 ± 2.24** | **30.75 ± 2.15** | Multiple Optimizers: | Method | One-off Cost | Sequential Cost | |------------------|-------------------|-------------------| | Warm-start | 283.86 ± 37.91 | 283.86 ± 37.91 | | Grid Search | 518.14 ± 57.80 | 560.23 ± 64.04 | | LHS | 511.00 ± 60.52 | 520.54 ± 56.67 | | MISO WTA (ours) | **30.87 ± 2.30** | **30.48 ± 2.07** | As expected, due to the significant nonzero computational cost associated with evaluating solutions in our domain, standard multi-start methods like grid search and LHS fail to find sufficiently good initial solutions given equivalent runtime constraints. We genuinely appreciate your suggestions. To comprehensively address your concerns, we will include these additional empirical comparisons explicitly in the revised manuscript. Given these clarifications and additional demonstrations aligned with our original claims, we respectfully hope you reconsider your evaluation positively.
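For concreteness, the candidate-selection procedure used for the grid-search and LHS baselines above (draw candidates, evaluate their cost, and pass the cheapest one to the optimizer) can be sketched as follows. This is a generic illustration under assumed box bounds, not the exact code used in the experiments; `lhs_candidates` and `best_initialization` are hypothetical names:

```python
import numpy as np

def lhs_candidates(n, dim, low, high, rng):
    """Latin hypercube sampling: one stratified sample per 1/n bin along
    each dimension, with bins permuted independently per dimension."""
    strata = (np.argsort(rng.random((n, dim)), axis=0) + rng.random((n, dim))) / n
    return low + strata * (high - low)

def best_initialization(candidates, cost_fn):
    """Evaluate every candidate and return the lowest-cost one as the
    initial solution handed to the downstream local optimizer."""
    costs = [cost_fn(c) for c in candidates]
    return candidates[int(np.argmin(costs))]
```

As the tables above suggest, when each `cost_fn` evaluation is expensive, the sample budget that fits into a 5 ms window is small, which is the regime where such non-learned multi-start baselines struggle.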
Summary: This paper proposes a method called MISO (Learning Multiple Initial Solutions Optimization) to improve the performance of local optimization algorithms by predicting multiple high-quality initial solutions. The framework leverages a transformer-based architecture and introduces three loss functions (PD, WTA, and MIX) to encourage solution diversity and prevent mode collapse. MISO is evaluated on three optimization tasks—cart-pole balancing (DDP), reacher task (MPPI), and autonomous driving trajectory optimization (iLQR)—demonstrating superior performance over warm-start, regression-based, and perturbation-based initialization methods. Claims And Evidence: The claim that MISO improves the performance of local optimization solvers by learning multiple diverse initial solutions is reasonable. While it lacks theoretical analysis, the motivation is clear, the methodology is well-designed, and empirical results support its effectiveness. In my view, the absence of a theoretical foundation is acceptable as the approach is highly intuitive. However, the scope of the experiments is somewhat limited. Expanding the evaluation to a broader range of optimization problems would strengthen the generalizability of the claim. Methods And Evaluation Criteria: The proposed methods are well-designed for the problem, and the evaluation framework is generally reasonable. The experiments cover multiple local optimization solvers and tasks, demonstrating the effectiveness of MISO. However, there are still some issues: 1. The evaluation is focused on control tasks, while the proposed approach could potentially be applied to a broader range of optimization problems, especially larger-scale optimization problems. 2.
The paper claims that MISO improves optimization efficiency, but it lacks an analysis comparing inference time and the number of iterations required for convergence, particularly when MISO-generated initial solutions are used to call the solver. Theoretical Claims: The paper primarily focuses on an empirical approach rather than a theoretical one and does not provide formal proofs or theoretical guarantees. A more detailed analysis of the PD and WTA loss functions, particularly an explanation of why WTA performs well, would further strengthen the paper. Experimental Designs Or Analyses: Overall, while the experimental design is well-structured and supports the paper’s main claims, broadening the range of optimization tasks and increasing the scale of experiments would make the evaluation more compelling. Supplementary Material: I reviewed the supplementary material in its entirety. Relation To Broader Scientific Literature: The idea of learning multiple diverse initial solutions is particularly valuable, as it addresses a common limitation in local optimization—sensitivity to initial conditions—and can be combined with various existing methods to enhance performance. Essential References Not Discussed: I did not find any missing essential references. Other Strengths And Weaknesses: - **Importance:** The paper addresses a fundamental challenge in local optimization—the dependence on high-quality initial solutions—making it relevant to many learning-to-optimize tasks. - **Novelty:** The introduction of three loss functions (PD, WTA, MIX) to encourage diversity is a simple yet creative and meaningful contribution. - **Clarity:** The paper is well-structured and clearly written, making it easy to follow the key ideas. The methodology is explained in a logical and intuitive manner, with well-presented figures and empirical results. Other Comments Or Suggestions: I have no additional comments or suggestions. Questions For Authors: I have no additional questions.
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your constructive feedback and for recognizing the key contributions of our work. We are pleased that you found the idea of learning multiple diverse initial solutions both valuable and well-motivated and that you appreciated the design of our methodology and the clarity of our presentation. Below, we respond to your specific concerns and clarify key points that we hope further strengthen our manuscript: **Detailed Analysis of the WTA Loss.** We appreciate your suggestion. Winner-Takes-All training (Guzman et al., 2012; Lee et al., 2016) promotes the learning of multiple specialized hypotheses by updating only the best-performing prediction head per training instance, effectively partitioning the output space into local conditional means resembling a centroidal Voronoi tessellation (Rupprecht et al., 2017). This selective feedback encourages hypothesis diversity, mitigates mode averaging, and preserves multi-modal distributions, particularly when combined with auxiliary scoring functions that estimate regional probability masses (Letzelter et al., 2024; Rupprecht et al., 2017). **We will include a dedicated section in the appendix in the revised manuscript.** **Scope and Generalizability of Experiments.** We agree that demonstrating generalization beyond optimal control problems is valuable. While our current work focuses specifically on optimal control to ensure thorough experimentation, we are actively exploring broader optimization scenarios as part of future work (lines 430-436). We emphasize that our proposed MISO framework is task-agnostic by design, relying only on the structure of the underlying optimization rather than problem-specific details. Consequently, we anticipate strong potential for generalization to diverse optimization domains. 
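To make the Winner-Takes-All mechanism described above concrete, a minimal numpy sketch follows. The array shapes, squared-error metric, and function name are our own illustrative assumptions rather than the paper's exact loss; the point is that only the closest hypothesis per instance contributes to the loss, so in a differentiable framework the gradient would flow solely through the winning head:

```python
import numpy as np

def wta_loss(preds, target):
    """Winner-takes-all regression loss.

    preds:  (K, B, D) array, K hypotheses per instance
    target: (B, D) array of ground-truth solutions
    Returns the mean loss over winners and the winner index per instance.
    """
    errs = ((preds - target[None]) ** 2).mean(axis=-1)  # (K, B) per-hypothesis error
    winners = errs.argmin(axis=0)                       # (B,) best hypothesis index
    loss = errs[winners, np.arange(errs.shape[1])].mean()
    return loss, winners
```

This selective feedback is what partitions the output space into specialized regions (the Voronoi-tessellation behavior mentioned above), since each head only ever receives updates from the instances it already predicts best.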
**Theoretical Guarantees.** Without guarantees on the generator and selection function and considering the runtime constraints, it is difficult to make solid theoretical claims; conversely, with such guarantees, the problem would be significantly simplified and perhaps trivial. Most methods in this research field are heuristic-based, aiming to improve performance rather than provide formal guarantees. Nevertheless, our approach allows for the inclusion of traditional initialization methods, such as warm-start heuristics, as part of the set of initializations (lines 208-219, which we demonstrate empirically in 1071-1099). This means our method does not risk degrading performance compared to established practices, effectively providing a safety net (lines 377-361). Our contribution demonstrates that learning multiple diverse initializations can empirically enhance optimizer performance across various tasks and optimizers. **Analysis of Optimization Efficiency.** You raised a valid point regarding the analysis of optimization efficiency in terms of inference time and the number of iterations to convergence. While examining optimizer convergence could offer insights, metrics such as average additional iterations or convergence time are not particularly meaningful in our context. For instance, an optimizer might converge more quickly within the allocated budget but reach a suboptimal local minimum. In contrast, another optimizer might use the entire budget to find a better solution. Faster convergence does not necessarily equate to better performance. Additionally, there are no practical advantages to converging before the budget limit, as the optimizer would wait the remaining time until that solution is required. Furthermore, sample-based optimizers like MPPI operate with a fixed sample budget (line 660), eliminating the concept of iterations. 
Our main emphasis is on the quality of solutions within the given computational constraints, which is directly reflected in the cost metrics we report. We demonstrate that our method scales efficiently and consistently with the number of predicted initial solutions, K (lines 967-971), and we find that all outputs remain effective even as K increases (lines 896-903). We have made every effort to address your concerns comprehensively and clearly in our revision. If you feel your concerns have been resolved, we would greatly appreciate it if you could reflect this positively in your evaluation. Should any remaining questions or points need further clarification, please let us know—we are more than happy to provide additional information. Again, thank you for your constructive feedback and valuable suggestions, which have significantly contributed to strengthening our manuscript.
Summary: In this work, the authors aim to improve the performance of solving sequential optimization problems, and the key is to improve the quality of initial solutions. To address this issue, the paper introduces MISO, which uses a Transformer-based predictor to predict multiple diverse initial solutions conditioned on the problem instance parameters for sequential optimization problems. The output candidate solutions can then be used either to initialize a single optimizer via a selection function or to run multiple optimizers in parallel. To encourage both accuracy and diversity, MISO is trained using a combination of regression and diversity-promoting losses. Extensive experiments on optimal control tasks (including both open-loop and closed-loop evaluation) demonstrate that MISO consistently outperforms traditional warm-start methods, single-output predictors, and ensemble approaches, especially under strict runtime constraints. Claims And Evidence: The claims the authors made about inference time and optimality in cost performance are both well supported by the results, although some results are deferred to the appendix. Specifically, the optimality in cost is supported by the results in Tables 1, 2, and 6, and Figure 4. And the inference speed is supported in Appendix Figures 6, 7, and 8. I would recommend moving some inference speed results to the main text to make the empirical evidence more comprehensive. Methods And Evaluation Criteria: I think the methods and criteria are generally clear and make sense to me; the ablations over the loss selection in the methodology are also convincing. However, I have a few questions on them: **Methods** 1. The main method is to use a transformer-based model to predict the initial solutions given the task parameters $\psi$. However, it does not seem to be well motivated why a transformer is used as the network architecture. For instance, is it necessarily better than MLP or other architectures in the experiments? 2.
The training of this transformer depends on the data collection. How can we justify the accessibility of these 'expert' data? Is the performance of the 'expert' behavior policy reported in the tables, e.g., warm-start in Tables 1 and 2? 1. If we only learn from the same set of tasks, do not consider generalizability, and some 'expert' policy is affordable, why don't we directly use these oracle or warm-start strategies to get multiple initial solution candidates and then use the single-optimizer or multi-optimizer framework in the MISO pipeline? 2. If we do consider generalizability and want to use these learning-based initializations to achieve some sort of generalizability to a family of problems with slightly different problem parameters $\psi$, the authors should explicitly mention this in the experiment section. 3. The dataset contains 500,000 instances, according to Appendix A.3. Does this refer to the number of timesteps or the number of episodes (whole trajectories)? **Criteria** 1. The authors used cost as the evaluation metric, and they include the exact cost definition for each task in the appendix. This main evaluation criterion is reasonable for optimal control problems. 2. To my understanding, the cost function $c$ is the same as the objective function $J$ in Equation (1), but it is also a bit confusing that the authors use $C$ for the constraint function in Appendix Equation (6). Please clarify these notations since they are very closely related to the exact criteria used for the problem formulation and performance evaluation. 3. It is unclear whether the uncertainty levels the authors present in Tables 1 and 2 are 95% confidence intervals or empirical variance. They only mention that they ran experiments under five random seeds in the appendix.
Experimental Designs Or Analyses: The experiment design generally seems clear to me, and the results are reasonable. Here are some questions that I hope the authors could further address: 1. What is the relationship between the cost function $c$, the constraint function $C$ in Equation (6), and the objective function $J$ in Equation (1)? 2. Are the testing domains in all three tasks (the problem parameter $\psi$) different from the training domain where you collect expert data for transformer training? If so, how are they different? 3. As stated in the method section, the authors are expected to clarify the accessibility or potential limitations of their oracle data policy mentioned in Appendix A.3. Besides, the authors may also want to clarify how generalizable the MISO system is, e.g., to a set of problems with the same parameter structure but different value ranges, or to problems with slightly different parameter structures. Supplementary Material: I reviewed most of the supplementary materials. In fact, the authors seem to put too many details in the supplementary, and I have included some of them in the previous sections. Relation To Broader Scientific Literature: I am not quite familiar with the optimal control literature. I'm familiar with the RL and model-based RL literature, yet the contribution of this paper is quite unique compared to conventional deep RL papers. Essential References Not Discussed: The references are quite adequate. The authors tackle some problems that do not seem to be very usual in sequential optimization problems. Other Strengths And Weaknesses: **Strengths** 1. The paper is generally well-written and easy to follow. 2. The baseline comparison is comprehensive, and it considers both open-loop and closed-loop settings, where errors could accumulate in the latter. The results they demonstrate are quite convincing. **Weaknesses** 1. The paper presentation can be polished and the structure can be reorganized.
For example, the authors mention a lot of runtime limit issues, but no empirical evidence is illustrated in the main text. 2. The paper does not have theoretical guarantees over the MISO system. Some regret bound or sample complexity bound (e.g., the offline sample size required for the transformer to learn good initializations that reach an $\epsilon$-optimal solution) would be appreciated. 3. The data collection seems to involve too much oracle data and warm-start data; as I noted in the previous sections, it would be helpful if the authors could explain the exact setting of data collection and their testing domain. 4. It might be helpful to justify how strong the assumption on the optimization landscape is. If we can use the regression-based loss in most of the control problems in the continuous physical domain, then we can say this initialization strategy is applicable to a considerable family of tasks. But given the limited scale of the experiments in both the number of tasks and the dimensionality of states, we cannot assert it for now. 5. The current experiments are simplified to low-dimensional cases. As the authors have addressed in their limitations, it could be interesting to see whether such an initialization strategy could be applicable to higher-dimensional settings. Other Comments Or Suggestions: 1. Some notations are a bit confusing, e.g., control cost, objective function, and constraints: $c(x), J(x), C(x)$. 2. In general, this paper is well structured, but I would suggest the authors move some of the results in the appendix to the main text to better support their claims. Questions For Authors: See the above sections for the details. I would greatly appreciate it if the authors could address some of the concerns I raised as a reviewer outside the control/optimization community. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and insightful comments. We appreciate your recognition of our extensive experiments, clarity in presenting methods and results, comprehensive baseline comparisons, and our unique contributions relative to existing literature. In the following, we clarify concerns regarding task selection and dimensionality, emphasize the relevance of strict runtime constraints and the use of an oracle-proxy, address ambiguities and supplementary material layout, justify our choice of Transformer architectures, and discuss theoretical guarantees. **Tasks, Dimensionality, and Generalizability.** Our study specifically targets sequential optimization tasks under strict runtime constraints commonly encountered in optimal control domains such as robotics and autonomous driving. The selected tasks, with their respective dimensions—Cart-Pole: 10, Reacher: 20, and Driving: 80—are standard benchmarks widely used for evaluating optimization methods. Notably, the nuPlan autonomous driving task features realistic and complex urban environments with nonlinear dynamics. Nonetheless, we acknowledge the value of investigating even higher-dimensional tasks (lines 434-439) and intend to explore such scenarios in future work. Additionally, generalizability is explicitly highlighted in lines 238-243. As stated in lines 281–285, training and testing instances differ in parameters such as initial states, targets, and reference trajectories, ensuring that improvements represent generalization rather than memorization. Moreover, MISO naturally extends to variations like obstacle locations and friction coefficients. **Runtime Constraints and Oracle Data.** Our manuscript strongly emphasizes tight runtime constraints as central to our problem setting (lines 11, 47, 128, 255). Given these, directly using the (slow) oracle-proxy at test time is infeasible, underscoring the importance of our fast, learning-based initialization. 
We recognize obtaining data as a common limitation in BC/IL methods. Our method reduces reliance on an actual oracle by using an oracle-proxy—not necessarily globally optimal, but sufficiently effective (lines 173–177). Due to space limitations, we summarize the key evidence supporting our runtime-aware design choices: in lines 806-839, we compare inference times of ensemble and multi-output architectures, demonstrating the scalability advantages of our approach. We evaluate sampling-based methods in lines 806-839, illustrating their unsuitability under strict runtime constraints. **Supplementary, Layout, and Clarifications.** We appreciate your thorough reading of the supplementary materials and suggestions for balancing the content. Given the strict page constraints, we welcome recommendations on specific parts that would be more impactful in the main text. We remain willing to revise accordingly. Additionally, thank you for pointing out several ambiguities; **we will revise the manuscript to clarify the following**: - The "500K instances" refer to time steps, not trajectories. - Indeed, the cost, c, matches the objective J (Eq. 1) and is distinct from the constraint C (Eq. 6). - The uncertainty values reported represent the Standard Error (mentioned in Fig. 4) - The oracle-proxy baseline (lines 296, 343, 738) corresponds to our expert policy. **Transformer Architecture.** The core novelty of our approach is independent of a specific architecture and centers around the training objectives and the selection function. Transformers were chosen primarily due to their capability to effectively model dependencies within sequences. We conducted experiments (not included in the paper) with MLPs, which performed less robustly than Transformers. Further discussion is provided in lines 1062-1069. **Theoretical Bounds.** We agree that such theoretical guarantees can yield valuable insights (Sambharya & Stellato, 2024; Sambharya et al., 2024). 
These approaches leverage PAC-Bayes methods or carefully designed sample-complexity arguments to provide guarantees—but typically for more specialized problem classes (e.g., convex or contractive operators). However, MISO targets a broader, nonconvex setting with runtime constraints where deriving general bounds typically requires overly restrictive assumptions. We, therefore, believe that providing fully general regret or sample-complexity bounds for MISO without imposing strong structural constraints remains an open research direction. We have made every effort to address your concerns comprehensively and clearly in our revision. If you feel your concerns have been resolved, we would greatly appreciate it if you could reflect this positively in your evaluation. Should any remaining questions or points need further clarification, please let us know—we are more than happy to provide additional information. Again, thank you for your constructive feedback and valuable suggestions, which have significantly contributed to strengthening our manuscript.
Summary: The paper introduces MISO, a novel framework designed to improve the performance of local optimization algorithms by predicting multiple diverse initial solutions to warm start optimization problems. The motivation is to make these solutions diverse so they cover different regions of the solution space. The paper presents three training losses to promote diversity among the predicted solutions: a pairwise distance loss, a WTA loss, and a mixture of the two. The framework is evaluated on three tasks (cart-pole, reacher, and autonomous driving) using different optimizers (DDP, MPPI, and iLQR). The results show consistent improvement over baseline methods, demonstrating the effectiveness of MISO in both single and multiple optimizer settings. Claims And Evidence: The paper provides quite a lot of experiments across the three tasks and optimizers, showing consistent improvements over baseline methods. The ablation studies and detailed analysis of mode diversity strengthen the evidence and provide deeper insight into the behavior of the framework. The paper mainly focuses on control tasks, but optimization covers many other problem classes. While the results are promising, their generalizability to optimization problems outside optimal control and to other real-world scenarios is not fully explored. Methods And Evaluation Criteria: Overall, the method is sound and correct. The evaluation is comprehensive. The use of benchmark datasets (cart-pole, reacher, and autonomous driving) is appropriate for evaluating the framework. The selection of optimizers (DDP, MPPI, iLQR) covers a range of optimization techniques. However, I am interested in seeing this framework evaluated on other types of optimization problems. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments are well-designed and structured.
One concern is that the problems seem to be solvable relatively quickly, but it is hard to collect the runtime performance of each method presented in the paper. Supplementary Material: The appendix is comprehensive and contains a lot of information, but I didn't carefully read through it. Relation To Broader Scientific Literature: The problem and method studied in this paper are important. Essential References Not Discussed: The paper discusses a few related papers in combinatorial/discrete optimization. However, most of them are not especially relevant. One that could be relevant is the following: https://ai.meta.com/research/publications/genco-generating-diverse-designs-with-combinatorial-constraints/ Other Strengths And Weaknesses: See above Other Comments Or Suggestions: See above Questions For Authors: What are the sizes of the optimization problems used in the experiments in terms of the numbers of constraints and variables? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thoughtful and constructive feedback. We greatly appreciate your positive comments regarding the extensive experimentation, soundness of the method, and comprehensive evaluation across multiple optimizers and benchmark tasks. We are particularly pleased you found the ablation studies insightful and agree that the method addresses an important problem in the optimization community. Below we provide responses to your specific concerns and clarify key points that we hope further strengthen our manuscript: **Sizes of Optimization Problems.** Below, we outline the size and constraints of each optimization task in our experiments, as detailed in appendix lines 630-681: * Constraints: Bounded control inputs (e.g., acceleration, steering angle/rate), Limited rail or joint ranges, Physical/kinematic constraints (forces, torques, vehicle dynamics). * Dimensionality of the optimization variable: Cart-Pole: 10, Reacher: 20, Driving: 80. **Generalizability to Other Optimization Problems.** We agree that demonstrating generalization beyond optimal control problems is valuable. While our current work focuses specifically on optimal control to ensure thorough experimentation, we are actively exploring broader optimization scenarios as part of future work (lines 430-436). We emphasize that our proposed MISO framework is task-agnostic by design, relying only on the structure of the underlying optimization rather than problem-specific details. Consequently, we anticipate strong potential for generalization to diverse optimization domains. **Runtime Performance.** We appreciate this insightful comment and believe the paper could benefit from clearer explanations regarding runtime performance. To clarify: * If your concern pertains to solving a single optimization instance, you are correct that most instances typically converge rapidly (e.g., within 5 ms for the driving task). 
However, convergence time depends significantly on several factors, notably abrupt environmental changes between optimization steps and the quality of initialization (lines 139-144). We explicitly illustrate this in Figure 5 (left), demonstrating scenarios where certain initializations significantly impact convergence time (lines 363-374). * If your comment pertains to comparing the runtime of different architectures (i.e., MISO versus alternative approaches), we include two detailed runtime comparisons in the appendix: * Appendix A.6 (lines 806-839) contrasts the inference speed between our multi-output architecture and an ensemble-based approach, demonstrating the superior scalability of MISO. * Appendix A.14 (lines 1061-1069) compares our approach against sampling-based (diffusion) methods, highlighting why sampling methods are infeasible for these real-time applications. **Relevant Related Work.** Thank you for suggesting this additional reference. Unlike MISO, GENCO assumes access to gradients with respect to design variables, which may not be available in settings involving black-box simulators or non-differentiable objectives. **We will incorporate it in the related work section.** Thank you once again for taking the time to carefully review our paper and for your insightful feedback. We have made every effort to address your concerns and have revised the manuscript accordingly. Should you have any remaining questions or require further clarification, we would be more than happy to provide additional details. If you feel your concerns have been resolved, we would greatly appreciate it if you could reflect this positively in your evaluation.
Vulnerability-Aware Alignment: Mitigating Uneven Forgetting in Harmful Fine-Tuning
Accept (poster)
Summary: The paper takes a look at harmful fine-tuning (HFT) after a policy for alignment has been incorporated using a subset of curated examples at the provider's side. The idea is that certain subsets of alignment examples are more vulnerable to being forgotten during HFT. The work, VAA, looks to identify these groups and use group DRO to maximize the worst-case loss, which implies adding harmful examples via a perturbation and minimizing their effect. However, instead of regularizing, the work proposes to sample more tightly from the alignment set if the perturbation is rather forgiving to forgetting. There are two metrics, and experiments that benchmark against SOTA using them. They appear to be well designed. Claims And Evidence: - That vulnerable and invulnerable are the two groups all alignment examples fall into. This is not substantiated in terms of theory. The re-sampler is more expressive than that. - That vulnerable examples are identifiable. The metric is designed to work with that. - That DRO is a good fit. Yes, in terms of robust optimization, and the work is trying to do just that. The group idea is appropriate for the specific requirement, while the work claims to be the first to use it for HFT. - That the whole idea works. The numbers exist. Methods And Evaluation Criteria: Because the method is founded on a known method for robust training under shift, whether embedding shift or other, it is a logical method to improve robustness against HFT too. The two metrics added are Harmful Score (HS), which quantifies the model's resilience to harmful examples and is a measure of safety, and Fine-tuning Accuracy (FA), which measures the model's performance on downstream tasks and is fairly standard. Theoretical Claims: I have exposed my position on a little question right in the summary. Experimental Designs Or Analyses: Experiments are standard. Supplementary Material: The tail beyond the bibliography. I didn't find a link to the source code.
Relation To Broader Scientific Literature: There may not be a lot of literature. The domain of inquiry is new. But it will gain importance, and that makes the work in question, a forward-looking work. Essential References Not Discussed: I think the LR is probably all that is out there! Other Strengths And Weaknesses: Experiments are comprehensive. Other Comments Or Suggestions: - Questions For Authors: I wonder if you didn't poison your data yourself, and had to determine which subsets were harmful, how would that be realized? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback on our work. We appreciate your recognition that this is a forward-looking work in a new domain that will gain importance. We are also grateful for your acknowledgment that our experiments are well-designed and comprehensive, and that our design of VAA based on GDRO is a good fit and the first application to HFT scenarios. We are pleased that you recognize our method works well and achieves SOTA results. We will address your comment below. ## Response to the question: I wonder if you didn't poison your data yourself, and had to determine which subsets were harmful, how would that be realized? Thank you for your thoughtful question, and we apologize for the lack of clarity in the original manuscript. At stage 1, we simulate harmful fine-tuning (HFT) using proxy data. The goal of this stage is to partition the alignment data into vulnerable and invulnerable subsets. This method is generally applicable and independent of the actual downstream poisoning process. The alignment data can be partitioned in this way; it is not grounded in theoretical assumptions, but rather in empirical observations of large-scale heterogeneous alignment data. Specifically, we observed that certain examples are more likely to be forgotten during HFT, and we refer to these as vulnerable samples. Our experiments (Figure 2) show that different poison data—even with 0% poison—result in common forgetting. This enables us to use proxy data to simulate HFT. Our goal is to estimate, in a data-driven manner, which portions of the alignment dataset are more susceptible or robust to such fine-tuning, even in the absence of explicit data poisoning. To operationalize this, we begin with an aligned model that performs well on its training set. We then simulate HFT by fine-tuning the model on a proxy dataset (Alpaca) augmented with 10% randomly sampled harmful data. 
During this process, we evaluate the model's predictions on the alignment training set over T iterations and compute how many times each example is forgotten, denoted as ForgotNum (Equation 1). Examples with ForgotNum = 0 (i.e., never forgotten across T iterations) are assigned to the invulnerable group, while all others are assigned to the vulnerable group. This empirical partition serves as a prior for estimating data vulnerability, which our algorithm leverages in subsequent stages. ---------- Thank you once again for your professional and inspiring feedback. We hope that our responses effectively address your comments, and we look forward to any further feedback you may have.
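As a concrete illustration of the Stage 1 partition described above, a minimal sketch of the ForgotNum computation follows. The function and variable names are ours, and we assume a "forgetting event" is a correct-to-incorrect transition between consecutive evaluations; the exact criterion in Equation 1 of the manuscript may differ.

```python
def partition_by_forgetting(predictions):
    """Partition alignment examples into vulnerable / invulnerable groups.

    predictions: list of T boolean lists; predictions[t][i] is True if
    example i is answered correctly at evaluation step t during the
    simulated harmful fine-tuning run.
    """
    num_examples = len(predictions[0])
    forgot_num = [0] * num_examples
    for t in range(1, len(predictions)):
        for i in range(num_examples):
            # A forgetting event: correct at step t-1, wrong at step t.
            if predictions[t - 1][i] and not predictions[t][i]:
                forgot_num[i] += 1
    # ForgotNum = 0 means the example was never forgotten across T iterations.
    invulnerable = [i for i in range(num_examples) if forgot_num[i] == 0]
    vulnerable = [i for i in range(num_examples) if forgot_num[i] > 0]
    return vulnerable, invulnerable
```

The resulting index lists then serve as the group prior used in the later stages.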
Summary: The paper introduces Vulnerability-Aware Alignment (VAA), a method aimed at enhancing the safety of large language models by focusing on data subsets that are vulnerable to harmful fine-tuning. VAA employs group-based robust optimization to boost model robustness, lowering harmful scores while preserving performance across a variety of tasks. Claims And Evidence: In general, the claims made in the paper are well supported. Methods And Evaluation Criteria: The proposed VAA method first investigates the different vulnerabilities of alignment data under harmful fine-tuning scenarios and employs the Group DRO framework to manage various vulnerability groups. It is interesting to adapt the typical Group DRO framework into a two-player game between a hard group sampler (the adversary) and the large language model. From a technical standpoint, this method appears sound. Regarding the evaluation, the paper provides extensive experimental results in Section 5, which generally support the effectiveness of the method. However, there are some limitations in my view: - The proposed method is tested on a single language model only. Testing on more architectures of different scales would better demonstrate the generality of the method. - The paper states, “Ideally, at convergence, the adversary represents a uniform distribution across different groups, as the LLM learns all groups equally well.” However, there are no experimental results showing how well the model achieves this ideal situation. It would be interesting to see how the distribution and examples change throughout the adversarial training. The absence of these results could lead readers to question the origins of performance improvements. - The computational overhead should also be discussed. Theoretical Claims: I did not find theoretical proofs, and there seems to be a need for such proofs. Experimental Designs Or Analyses: See "Methods and Evaluation Criteria" mentioned above.
Another concern I have regarding the experimental design, as detailed in Section 3 and similarly in Section 5, involves the dataset creation method where the "dataset is combined with randomly sampled harmful data from the Beavertail dataset at varying poison ratios." I question whether this approach of simply adding data from another source into an existing dataset truly reflects real-world scenarios where harmful data might be present. The newly introduced data may differ significantly in distribution, such as writing style, from the original dataset. A more realistic approach might be to integrate harmful data that mimics the style of the original text. Could the authors comment on how well their experimental setup models real-world conditions? Supplementary Material: No Supplementary Material provided. Relation To Broader Scientific Literature: The relation to broader scientific literature was discussed in the Section 1 and 2 of the paper. Essential References Not Discussed: The relation to broader scientific literature was discussed in the Section 1 and 2 of the paper. Other Strengths And Weaknesses: N/A, see above. Other Comments Or Suggestions: N/A, see above. Questions For Authors: N/A, see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful and positive feedback. We appreciate your recognition of our adaptation of the Group DRO framework into a two-player game between a hard sampler and the LLM as both interesting and technically sound. We also value your acknowledgment that our extensive experimental results generally support the method's effectiveness. Below, we address your comments with additional analysis and new experimental results. ## Response to W1: Experiments on More LLMs Thank you for the suggestion. We extended our experiments to include VAA and all baselines on **Qwen2.5-7B**, with results shown in Table 1. While performance varies across models, VAA consistently outperforms all baselines. Notably, as HFT epochs increase, the performance advantage of our method becomes more pronounced, further demonstrating its generalization ability. We will include these results in our revision. Table 1. Experiments on Qwen2.5-7B. | Method | Ep1 | | Ep2 | | Ep3 | | Ep4 | | Ep5 | | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | | HS ↓ | FA ↑ | HS ↓ | FA ↑ | HS ↓ | FA ↑ | HS ↓ | FA ↑ | HS ↓ | FA ↑ | | SFT | 26.89 | 84.80 | 31.47 | 76.80 | 31.08 | 86.80 | 33.27 | 87.20 | 33.07 | 86.40 | | RepNoise | 22.31 | 83.00 | 25.10 | 81.60 | 26.49 | 88.80 | 30.68 | 87.20 | 30.88 | 88.00 | | Vaccine | 29.88 | 82.60 | 29.28 | 84.20 | 28.88 | 83.20 | 30.48 | 85.40 | 29.48 | 85.60 | | Booster | 19.92 | 85.20 | 21.91 | 84.80 | 25.10 | 87.40 | 26.29 | 87.60 | 30.28 | 88.00 | | VAA | 17.73 | 86.20 | 18.33 | 86.40 | 20.12 | 85.40 | 21.91 | 87.60 | 22.11 | 88.60 | ## Response to W2: Analysis of Convergence Trends Thank you for your insightful suggestion! Indeed, in convex optimization scenarios, adversarial training reaches an equilibrium solution (saddle point) at convergence. 
However, in large-scale, non-convex LLM training, models often employ "few-pass training" due to computational constraints, stopping after several epochs—far from full convergence. This highlights a practical gap between LLM training and classical optimization theory. Despite this, we observe a clear _convergence trend_ in our adversarial sampling setup: the weights assigned to vulnerable groups increase over time, while those of less vulnerable groups decrease. This dynamic evolves in response to group-wise loss, as shown in Table 2, and indicates that the model progressively balances performance across groups. Table 2. Evolution of Group Weights and Loss During Training (Vulnerable: Invulnerable) | | Ep=0.0 | Ep=0.20 | Ep=0.35 | Ep=0.5 | | :--- | :--- | :--- | :--- | :--- | | group weight | 0.37: 0.63 | 0.49: 0.51 | 0.48: 0.52 | 0.56: 0.44 | | EMA group loss | 3.54: 2.74 | 1.97: 1.97 | 2.03: 2.04 | 2.82: 2.58 | We will include this discussion and analysis in our revision. We appreciate your thoughtful feedback on this important point. ## Response to W3: Discussion of Computational Overhead Thank you for raising this point. We measure efficiency by the number of backpropagation steps (the dominant training cost). Below is a comparison: | Method | Computational Complexity | | :--- | :--- | | SFT | $O(1 \times B P)$ | | Vaccine | $O(2 \times B P)$ | | Booster | $O(3 \times B P)$ | | VAA | $O(1.5 \times B P)$ | VAA saves an average of 0.5×BP computation compared to the fastest baseline (Vaccine) by employing a curriculum learning strategy that gradually warms up the perturbation probability from 0% to 100%. In practice, SFT for a 7B model finishes in under an hour, making the added cost acceptable given the substantial safety gains. We will include this discussion in the revision. ## Theoretical Support Due to space constraints, please refer to our response to Reviewer g5ub. ## Response to Dataset Style Mixing Thank you for bringing this up.
We followed the experimental settings from previous works (Vaccine, Booster, etc.), which indeed introduce distribution variations. Given the complexity of real-world user behavior, we consider three possible scenarios: 1. Standard users: Real-world data naturally contains mixed, heterogeneous content 2. Malicious users: Deliberately injecting harmful data to undermine alignment 3. Standard users: Harmful content naturally occurring within homogeneous datasets Our setup addresses the first two scenarios. We acknowledge the importance of the third scenario and plan to construct a homogeneous test benchmark for HFT in future work. We appreciate your thoughtful feedback and will add this discussion in our revision. *** Thank you once again for your insightful comments, which have undoubtedly strengthened our work. We hope that our responses can effectively address your concerns, and we look forward to any further feedback you may have.
Summary: This paper studies the Harmful Fine-Tuning (HFT) problem from a data perspective. The authors find that there are specific subsets (vulnerable samples) in the aligned data that are more likely to be forgotten in HFT. To address this problem, this paper proposes a new method called Vulnerability-Aware Alignment (VAA), which uses the Group DRO framework to dynamically adjust the training strategy and force the model to learn vulnerable and non-vulnerable samples in a balanced manner through adversarial sampling and group-dependent perturbations. Experiments show that VAA reduces the harmfulness score in four fine-tuning tasks. Claims And Evidence: Most of the claims in the paper are supported by convincing evidence, but there are still some points that I think are not so clear. In lines 177 to 180, the paper claims a finding: "there is significant overlap in the forgotten data across different poison ratios". But the corresponding Figure 2 does not seem to show this. No further evidence is given. Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical support is provided in the paper Experimental Designs Or Analyses: Yes, no obvious issue Supplementary Material: No supplementary material Relation To Broader Scientific Literature: A new method to solve the Harmful Fine-Tuning problem from a data perspective. Essential References Not Discussed: No Other Strengths And Weaknesses: ## Strengths * This paper first reveals that certain subsets of alignment data are consistently more prone to forgetting during HFT across different fine-tuning tasks and exhibit lower robustness compared to other subsets. * The proposed method significantly reduces the harmful score while maintaining the average performance. ## Weaknesses * Lack of theoretical support. * The paper only involves Llama-2 (7B) and lacks experiments on more models to demonstrate the generalization of the method.
Other Comments Or Suggestions: See Questions Questions For Authors: * I'm a little confused about stage 1 in the proposed method. The paper mentions using a proxy fine-tuning dataset (Alpaca) to provide group prior information for the real downstream fine-tuning data provided by the user. What is the specific process like? * In Figure 2, the meaning of ‘Common’ does not seem to be given in the text. * In addition, in Figure 2 (a) and Figure 2 (b), the results are different under the SST2 (p=10%) setting. In Figure 2 (a), SST2 (p=10%) and SST2 (p=20%) seem to be exactly the same. Please check whether there is a mistake. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your professional and encouraging feedback. We are pleased that you recognize our work as the first to reveal uneven vulnerability under HFT. We also appreciate your recognition of VAA as a novel method that significantly reduces harmful scores while preserving performance, and your acknowledgment that our claims are well supported by convincing evidence. Below, we address your points with supporting analyses and new results. ## Response to W1: Theoretical Support To explain the principles of VAA, we present a variance-based perspective. **Variance Decomposition under HFT.** For any distribution $P \in \mathcal{Q}$ and parameter perturbation $\delta$ induced by HFT, the variance decomposes according to the Law of Total Variance: $\mathrm{Var}_P[\ell(\theta^* + \delta)] = \mathbb{E}_i\left[\mathrm{Var}_{G_i}[\ell(\theta^* + \delta)]\right] + \mathrm{Var}_i\left[\mathbb{E}_{G_i}[\ell(\theta^* + \delta)]\right]$ The first term captures **within-group variance**, and the second term captures **between-group variance**. **Discussion**: - The **robust loss** (group-wise perturbation module) reduces within-group variance by simulating perturbations during alignment, thereby improving robustness at the group level. - The **GDRO** (adversarial sampler module) reduces between-group variance by improving the performance of the most vulnerable group, leading to more balanced robustness across subpopulations. Together, these components enable VAA to address both _parameter sensitivity within groups_ and _imbalanced vulnerability across groups_ under HFT. ## Response to W2: Experiments on More LLMs Thank you for the suggestion. We expanded our experiments to include **Qwen2.5-7B**, with results shown in Table 1. While HFT sensitivity varies across models, VAA consistently outperforms all baselines. Notably, as HFT epochs increase, the performance advantage of our method over baselines becomes more pronounced. Table 1. Experiments on Qwen2.5-7B.
| Method | Ep1 | | Ep2 | | Ep3 | | Ep4 | | Ep5 | | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | | HS ↓ | FA ↑ | HS ↓ | FA ↑ | HS ↓ | FA ↑ | HS ↓ | FA ↑ | HS ↓ | FA ↑ | | SFT | 26.89 | 84.80 | 31.47 | 76.80 | 31.08 | 86.80 | 33.27 | 87.20 | 33.07 | 86.40 | | RepNoise | 22.31 | 83.00 | 25.10 | 81.60 | 26.49 | 88.80 | 30.68 | 87.20 | 30.88 | 88.00 | | Vaccine | 29.88 | 82.60 | 29.28 | 84.20 | 28.88 | 83.20 | 30.48 | 85.40 | 29.48 | 85.60 | | Booster | 19.92 | 85.20 | 21.91 | 84.80 | 25.10 | 87.40 | 26.29 | 87.60 | 30.28 | 88.00 | | VAA | 17.73 | 86.20 | 18.33 | 86.40 | 20.12 | 85.40 | 21.91 | 87.60 | 22.11 | 88.60 | ## Clarification on Figure 2 We apologize for any confusion in our initial representation and provide the following clarifications: *Q1: “The meaning of ‘Common’ is unclear.”* In Fig 2, _“Common”_ refers to the intersection of forgotten (or unforgotten) examples across different poisoning ratios. We define: Common = $\frac{|A_1 \cap A_2 \cap A_3|}{N}$, (1) CommonRatio = $\frac{|A_1 \cap A_2 \cap A_3|}{\min(|A_1|, |A_2|, |A_3|)}$. (2) Here, $A_i$ denotes the forgotten (or unforgotten) set under setting $i$; $N$ is the dataset size. In the figure, *non-shaded bars* represent forgetting in a setting ($|A_i|/N$), while **shaded bars** represent overlap across different settings (Eq. 1). *Q2: “SST2 (p=10%) looks identical to p=20% in Fig 2(a) but not (b). Is this a mistake?”* There is no mistake. In Fig 2(a), the shaded regions indicate examples forgotten under _all_ settings (Eq. 1), which are identical across poison ratios. The non-shaded heights differ slightly, with 20% showing more forgetting than 10%.
Compared with Fig 2(b), non-shaded heights for SST2 (p=10%) remain the same ($|A_i|/N$), but shaded regions differ as they reflect the intersection of forgotten examples across different settings. *Q3: “...claims significant forgetting overlap, but Fig 2 does not show this.”* This is reflected in Figure 2(a), where the shaded portion occupies most of the 0% bar—indicating a high _CommonRatio_. This suggests that examples forgotten under the clean setting are also frequently forgotten under poisoned settings. We will make revisions for clarity. ## Explanation of Stage 1 Thank you for your question. In Stage 1, we estimate which portions of alignment examples are more vulnerable to HFT. The process works as follows: 1. We begin with an aligned model and simulate HFT using Alpaca data mixed with 10% harmful content. 2. During the HFT process, we record T predictions and compute ForgotNum—the number of times each example is forgotten (using Eq. (1) in the manuscript). 3. Examples with ForgotNum = 0 (never forgotten) are deemed invulnerable; others are considered vulnerable. This estimation provides prior knowledge about data vulnerability that our algorithm uses in later stages. We will clarify this process in our revision. *** Thank you again for your constructive feedback, which has undoubtedly strengthened our work. We hope our responses effectively address your concerns and welcome any further suggestions.
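For completeness, the overlap statistics defined in Eqs. (1)–(2) above reduce to simple set operations; a minimal sketch follows (the function name is ours):

```python
def overlap_stats(forgotten_sets, dataset_size):
    """Compute the Common and CommonRatio statistics of Eqs. (1)-(2).

    forgotten_sets: one set of forgotten example indices per poisoning
    ratio, e.g. [A1, A2, A3].
    """
    intersection = set.intersection(*forgotten_sets)
    # Eq. (1): overlap size normalized by the dataset size N.
    common = len(intersection) / dataset_size
    # Eq. (2): overlap size normalized by the smallest forgotten set.
    common_ratio = len(intersection) / min(len(s) for s in forgotten_sets)
    return common, common_ratio
```

Common corresponds to the shaded bar heights in Fig 2, while CommonRatio measures how much of the smallest forgotten set is shared across settings.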
Summary: The paper observes that certain subsets of alignment data are more likely to be forgotten during harmful fine-tuning (HFT) of large language models (LLMs). To mitigate this issue, the paper proposes a new alignment-stage method called Vulnerability-Aware Alignment (VAA). VAA first divides data into “vulnerable” and “invulnerable” groups and then applies a group DRO-based algorithm. Experiments on four fine-tuning datasets show that VAA can reduce harmful scores. Claims And Evidence: The paper makes two key claims: 1) the vulnerability of specific alignment data subsets to forgetting during harmful fine-tuning (HFT), and 2) the effectiveness of the proposed VAA method in mitigating this vulnerability. - A motivating example in Figure 2 supports the first claim. - While experimental results demonstrate VAA's improvement over existing methods, the paper would benefit from a more rigorous analysis of VAA (e.g., theoretical support and a clearer design rationale of VAA). Methods And Evaluation Criteria: Continuing from the above section, a deeper analysis of the VAA method would be crucial for a complete understanding of its capabilities. Key areas for improvement include: - Providing theoretical guarantees (or at least intuitions) about VAA's ability to reliably mitigate harmfulness in LLM fine-tuning - Justifying the choice of curriculum learning as the optimal strategy for implementing the target objective - Addressing and analyzing any training stability challenges associated with the two-player game - Clarifying the connection and potential overlap between the "hard" groups mentioned in the introduction and the "vulnerable" groups discussed throughout the paper Theoretical Claims: The paper applies group DRO (a method with various theoretical backgrounds) to a new problem, but does not offer novel theoretical analysis. Experimental Designs Or Analyses: While the experimental setup is largely acceptable, certain aspects raise questions.
- The lack of reported variance (e.g., standard deviation) makes it difficult to fully assess the significance of VAA's results. - (Relatively minor issue) The observed worse harmful scores for some baselines (e.g., RepNoise in GSM8K and AlpacaEval; Vaccine and Booster in AlpacaEval) may need a better explanation. Although the paper suggests that difficult datasets may contribute to this phenomenon, a more detailed analysis of these baseline failures would be valuable. Supplementary Material: The paper does not include supplementary materials. Relation To Broader Scientific Literature: The paper focuses on improving the vulnerability of LLMs, which may provide some insights for LLM applications in scientific literature. Essential References Not Discussed: Overall well-discussed. Other Strengths And Weaknesses: The paper aims to solve an important problem in LLMs and includes various interesting observations. However, the paper could be improved by enhancing the analysis/rationale on VAA and experiments. Details are discussed in the above sections. Other Comments Or Suggestions: N/A. Questions For Authors: Questions are included in the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the professional and constructive feedback! We are delighted that you recognized our adaptation of GDRO to address a new and important problem, and appreciated the various interesting observations in our work. We're glad you found our experimental setup largely acceptable and our results demonstrating VAA's improvement over existing methods particularly encouraging. We address your suggestions below with additional analysis and results. ## Response to W1: Deeper Analysis of the Method We address the suggestions in three parts: design rationale, variance-based explanation, and training stability (curriculum learning). ### 1. Design Intuitions VAA integrates two components—**robust loss** and **group DRO (GDRO)**—each addressing a distinct challenge posed by HFT: 1. **Robust Loss (Parameter Perturbation)**: HFT degrades performance via **parameter shifts** from θ to θ′. To model this, we use a surrogate objective, $\min_\theta L(\theta') = \min_\theta L(\theta + \epsilon)$, which proactively simulates these shifts during alignment, helping LLMs defend against HFT. 2. **GDRO (Adversarial Sampler)**: Our analysis (Claim 1) shows ~30% of alignment data is vulnerable to forgetting under HFT, while ~70% is relatively robust. This _**data imbalance**_ leads to _"gradient starvation [1]"_ under ERM, where gradients from the smaller group are dominated by those from the larger group, resulting in _**"imbalanced vulnerability"**_. GDRO mitigates this by upweighting underperforming groups, encouraging balanced learning between groups. Together, these modules incorporate two key priors into the alignment process: **parameter sensitivity** and **imbalanced vulnerability across the dataset**. ### 2. Variance-Based Explanation To provide deeper insight, we present a variance-based perspective.
For any distribution $P \in \mathcal{Q}$ and parameter perturbation $\delta$ induced by HFT, the variance decomposes according to the Law of Total Variance: $\mathrm{Var}_P[\ell(\theta^* + \delta)] = \mathbb{E}_i\left[\mathrm{Var}_{G_i}[\ell(\theta^* + \delta)]\right] + \mathrm{Var}_i\left[\mathbb{E}_{G_i}[\ell(\theta^* + \delta)]\right]$ The first term captures within-group variance; the second captures between-group variance. - The **robust loss** reduces **within-group variance** by simulating perturbations. - The **GDRO** reduces **between-group variance** by improving the performance of the vulnerable group, leading to more balanced robustness. ### 3. Training Stability and Curriculum Learning We analyze training stability in both modules: **LLM Module:** Training stability of the LLM is primarily affected by parameter perturbation. In **Table 6** in the manuscript, we analyze _**the effect of perturbation strength on performance**_. Now we further analyze _**its impact on training instability**_. *With small perturbations, training is stable but defense is limited. Larger perturbations improve defense but destabilize training.* **Curriculum learning (CL)** addresses this trade-off. As shown in Table 1, CL enables stable training under larger perturbations, providing better defense. Table 1. Training Stability Under Large Perturbation | | wo CL | w CL | |----------------|--------------|--------------| | Train Loss | 5.11 → 4.74 | 1.38 → 0.48 | | Grad Norm | 71.0 → 1.55 | 10.06 → 1.48 | **Adversarial Sampler:** The sampler's training stability depends primarily on learning rate. As analyzed in **Table 7**, we selected an appropriate rate that enables smooth updates. No significant stability issues were observed in practice. ### Clarification of Terminology We'll consistently use "vulnerable group" throughout. ## Response to W2: Experiments ### 1. Reporting Variance In Table 2, we conducted additional runs of SFT and VAA under three random seeds. VAA consistently outperforms SFT with low standard deviations, indicating significance. Table 2.
Performance (mean ± std) | Model | Metric | Epoch 1 | Epoch 3 | Epoch 5 | | :--- | :--- | :--- | :--- | :--- | | SFT | HS | $26.82 \pm 0.62$ | $31.08 \pm 0.43$ | $32.07 \pm 1.12$ | | | FA | $89.07 \pm 0.82$ | $90.20 \pm 0.59$ | $90.80 \pm 0.28$ | | VAA | HS | $14.60 \pm 0.49$ | $19.20 \pm 0.33$ | $20.60 \pm 1.30$ | | | FA | $89.20 \pm 0.59$ | $90.40 \pm 0.71$ | $90.73 \pm 0.25$ | ### 2. Analysis of Baseline Failures *RepNoise & Booster:* Both implement explicit unlearning for harmful inputs (RepNoise corrupts internal representations, Booster prevents gradient descent), creating objective conflicts that impair generation abilities on tasks, compromising both harm mitigation and task performance. *Vaccine:* While applying robust learning, Vaccine overlooks data imbalance, leading to "gradient starvation" where vulnerable groups receive insufficient updates, limiting its effectiveness. *** Thank you for your constructive feedback and detailed suggestions, which have significantly strengthened our work. We hope our responses address your concerns and welcome any further feedback. ---------- Reference [1] Gradient Starvation: A Learning Proclivity in Neural Networks. NIPS 2021
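As background on the adversarial sampler discussed above: Group DRO typically updates the group weights with an exponentiated-gradient step (Sagawa et al., 2020), upweighting groups whose current loss is higher. The sketch below uses our own naming and omits the paper's EMA smoothing and curriculum schedule; it is only an illustration of the standard update, not the authors' exact algorithm.

```python
import math

def gdro_update(weights, group_losses, eta):
    """One exponentiated-gradient step on the group sampling weights.

    Groups with higher loss are upweighted, so the sampler draws more
    from currently underperforming (e.g. vulnerable) groups.
    """
    # Multiplicative update, then renormalize to a probability simplex.
    new = [w * math.exp(eta * loss) for w, loss in zip(weights, group_losses)]
    total = sum(new)
    return [w / total for w in new]
```

Repeated application of this update produces the drift reported in Table 2 of the rebuttal above, where the vulnerable group's weight grows as long as its (EMA) loss stays higher.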
Explaining the role of Intrinsic Dimensionality in Adversarial Training
Accept (poster)
Summary: This paper investigates intrinsic dimensionality in a layer-wise fashion for adversarially trained models. This paper provides a new perspective for adversarial training in different model architectures from the manifold conjecture. Off-manifold adversarial examples (AEs) enhance robustness, while on-manifold AEs improve generalization. From an architectural perspective, decoder-based LLMs and commonly used vision models exhibit different characteristics from encoder-based LLMs. The vision and decoder-based LLMs exhibit low intrinsic dimensionality in earlier layers (favouring off-manifold AEs), whereas encoder-based models do so in later layers (favouring on-manifold AEs). Based on this property, this paper introduced SMAAT: Scalable Manifold Aware Adversarial Training. Experiments demonstrated its effectiveness and efficiency. --- ## update after rebuttal After the rebuttal, I have not changed my rating. The authors have addressed my concerns. Claims And Evidence: Claims made in the submission are supported by convincing evidence. Methods And Evaluation Criteria: Proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: No new theoretical claims. Experimental Designs Or Analyses: The existing experimental designs are technically sound. However, given that SMAAT is more of a generalizable AT framework, it would be beneficial to add evaluation with vision models on typical benchmarks like RobustBench [1]. Additionally, the current evaluation only focused on encoder-based LLMs; it would be more comprehensive to include decoder-based LLMs as well. [1] Croce, Francesco, et al. "RobustBench: a standardized adversarial robustness benchmark." Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). Supplementary Material: I have checked all supplementary material.
Relation To Broader Scientific Literature: The findings of this paper are very interesting and could have the potential for a larger impact on broader scientific literature. Essential References Not Discussed: Most of the related works are discussed. Other Strengths And Weaknesses: Strengths - The paper provides valuable insights by exploring intrinsic dimensionality in a layer-wise manner across various model architectures, highlighting the important distinctions between off-manifold and on-manifold AEs. These insights could have broad implications and significantly influence future research. - Reducing the training overhead is a crucial contribution to adversarial training, particularly given the growing preference for larger, more computationally intensive models. Weaknesses - The evaluation of text-based models appears limited. Given that SMAAT is presented as a generalized framework, a more comprehensive evaluation, especially on vision models, would significantly strengthen its contribution. - While the proposed SMAAT framework claims improved efficiency in adversarial training, the current efficiency gains primarily benefit specific architectures (encoder-based LLMs). A broader evaluation across diverse model types, including decoder-based LLMs, vision models, or multimodal architectures, would better demonstrate the generalizability and practical impact of SMAAT. Other Comments Or Suggestions: I suggest discussing the broader benefits of improving adversarial training efficiency, particularly for encoder-based LLMs, as it would strengthen the motivation and demonstrate greater practical significance. Questions For Authors: Please address the concerns highlighted in the Experimental Designs and Strengths and Weaknesses sections. Clarifying these points, especially regarding the rigour of the experimental evaluations and the scope of the baseline comparisons, will likely influence my overall assessment of this paper. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful comments and suggestions. In response, we conducted additional experiments to address the raised concerns and outlined the further revisions planned for the paper. **Evaluation on vision models:** > Given that SMAAT is presented as a generalized framework, a more comprehensive evaluation, especially on vision models, would significantly strengthen its contribution. > It would be beneficial to add evaluation with vision models on typical benchmarks like RobustBench [1]. As requested by the reviewer, we have extended the robustness and generalization analyses of encoder and decoder models to vision models. Following the approach in Figs. 5 and 6—where one model from each architecture was analyzed—we conducted similar experiments on VGGNet, chosen for its relative computational efficiency. It is important to note that performing a full grid search with adversarial training (AT) across all layers and measuring robustness for each model is highly resource-intensive. Specifically, we performed **AT** on VGGNet using the CIFAR-10 dataset by attacking every ReLU layer over 20 epochs, with **ε = 0.031–0.2** and **lr = 0.01–0.001**, evaluating robustness via **RobustBench**. [Results](https://anonymous.4open.science/r/SMAAT-25-DD9F/ICML%20Rebuttal/vgg_curve.pdf) show that attacking lower layers (light blue) improves robustness but reduces generalization (**bottom-right**), while upper layers (dark blue) enhance generalization at the cost of robustness (**top-left**). This shows that vision models more closely resemble decoder models in terms of their generalization versus robustness characteristics. **New experiments on decoder-based models:** > Additionally, the current evaluation only focused on encoder-based LLMs; it would be more comprehensive to include decoder-based LLMs as well. We conducted new experiments using the PAIR attack on the Llama-2 model. 
We performed the robustness versus generalization experiments shown in Fig. 4 under this attack scenario. The corresponding results, for comparison with Fig. 4, can be viewed at [link](https://anonymous.4open.science/r/SMAAT-25-DD9F/ICML%20Rebuttal/llama2_gen_robust_pair.pdf). As seen in the results, we observe the same trend as with the GCG attack. However, while model robustness drops to **30%** under the GCG attack, it remains above **75%** across all setups for the PAIR attack. This further supports our decision to use the GCG attack (i.e., the suffix attack) over the PAIR attack in our experiments. **Efficiency gain assessment:** > I suggest discussing the broader benefits of improving adversarial training efficiency, particularly for encoder-based LLMs, Our analysis of the three most widely used architectures—encoder, decoder, and vision models—reveals a key distinction: encoder-based models uniquely exhibit a decreasing intrinsic dimensionality trend across layers, in contrast to the increasing trend we observed in the other two architectures. The core idea behind our method SMAAT is to apply adversarial training (AT) at the layer with the highest proportion of off-manifold samples to maximize robustness. For encoder models, this corresponds to the last layer, while for decoder and vision models, it aligns with the first layer, effectively making SMAAT equivalent to conventional AT in those cases. In this context, our analysis offers the first explanation for why traditional AT has proven especially effective. **Emphasizing AT efficiency:** > I suggest discussing the broader benefits of improving adversarial training efficiency, particularly for encoder-based LLMs Thank you for this note. Our work fundamentally explores the relationship between layer-wise intrinsic dimensionality (ID) and its effect on the generalization–robustness trade-off. The proposed SMAAT method leverages these ID-related insights to guide adversarial training.
Notably, SMAAT leads to significant improvements for encoder-based models, as highlighted in Fig. 2 of the Introduction. Furthermore, as shown in Table 2, SMAAT introduces no additional overhead beyond standard model training. We will further revise the text to better highlight this aspect.
Summary: This paper reveals the fundamental reasons behind the varying effectiveness of adversarial training across different types of neural networks and proposes a novel and efficient training method, SMAAT. The study finds that early layers of vision models (e.g., CNNs) and generative language models (e.g., LLaMA) exhibit low intrinsic dimensionality, making them prone to generating adversarial examples that deviate from the true data distribution (off-manifold samples). This results in adversarial training significantly improving robustness at the cost of generalization. Conversely, in encoder-based language models (e.g., BERT), later layers exhibit low intrinsic dimensionality, causing traditional adversarial training to generate samples closer to the real data distribution (on-manifold samples), preserving generalization but limiting robustness improvements. Based on this discovery, the authors propose the SMAAT framework, which dynamically selects the network layer with the lowest intrinsic dimensionality for perturbation—applying it to the final layer for encoder models and to the input layer for vision/generative models. Experiments demonstrate that SMAAT outperforms existing techniques in various applications, including sentiment analysis, content safety filtering, and retrieval-augmented systems. This study not only provides the first explanation of adversarial training discrepancies from a data distribution perspective but also introduces a new training paradigm that balances efficiency and security. The proposed method is particularly valuable for the rapid deployment of adversarially robust large language models in real-world applications. Claims And Evidence: I think that the claims presented in this paper are reasonable to some extent and are supported by experimental evidence. 
Methods And Evaluation Criteria: The paper proposes the SMAAT method, which selects adversarial training perturbation layers by analyzing the intrinsic dimensionality (ID) of different model layers to enhance robustness and training efficiency. The study covers vision models, encoder-based language models (such as BERT), and decoder-based language models (such as LLaMA) and evaluates the approach across multiple tasks, including text classification, safety filtering, and RAG retrieval. Overall, the proposed method directly addresses the core challenges of adversarial training in encoder-based models with a well-structured and rigorous evaluation framework. While there are limitations in its applicability to specific scenarios, the approach offers a valuable paradigm for efficient and robust model training. Theoretical Claims: The paper primarily presents two theoretical insights: (1) the relationship between intrinsic dimensionality (ID) and the manifold properties of adversarial samples (ONM/OFM); and (2) the effectiveness of the SMAAT method in enhancing robustness and efficiency by perturbing low-ID layers. While the theoretical claims are strongly supported by systematic experimental design—spanning different models, tasks, and attack scenarios—the paper lacks rigorous mathematical proof. Experimental Designs Or Analyses: The paper provides strong evidence for the effectiveness of SMAAT through cross-validation across multiple tasks and attack scenarios. The experimental design generally aligns with field standards, but additional details are needed to further enhance the credibility of the conclusions. Supplementary Material: I have reviewed the experimental setup in the appendix along with some supplementary analyses. 
Relation To Broader Scientific Literature: This paper integrates the manifold hypothesis, intrinsic dimensionality (ID) analysis, and efficient training methods to construct a theoretically coherent and widely applicable adversarial training framework. Its core innovation lies in revealing the decisive role of layer-wise ID distribution in determining adversarial sample properties and, based on this insight, proposing a scalable perturbation strategy. Essential References Not Discussed: The cited references are reasonably relevant. Other Strengths And Weaknesses: The introduction of the paper is not simple and straightforward enough, and some basic concepts are explained in an overly complicated manner. Other Comments Or Suggestions: Adversarial attacks in the visual domain are well known, and it is worth going into more detail about adversarial attacks in the text domain. Questions For Authors: I am unsure about the specific meaning of “data manifold.” If “on-manifold” refers to high-density and “off-manifold” refers to low-density, it may not be necessary to use the term “manifold,” as it could confuse the readers. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments and suggestions aimed at improving the clarity of the paper. Below are our responses to the reviewer’s points. **Theoretical proofs:** > While the theoretical claims are strongly supported by systematic experimental design—spanning different models, tasks, and attack scenarios—the paper lacks rigorous mathematical proof. The paper relies on two statements: (1) AEs generated from a low-dimensional manifold are likely to be off-manifold (similarly, AEs generated from a high-dimensional manifold are likely to be on-manifold) and (2) the manifold conjecture stating that off/on-manifold AEs lead to better robustness/generalization. While the manifold conjecture is well established in the literature (Ethayarajh, 2019; Shamir et al., 2021; Gilmer et al., 2018), statement (1) is straightforward: Let $\mathcal{M} \subset \mathbb{R}^n$ be a smooth, compact, low-dimensional manifold with intrinsic dimension $d \ll n$, embedded in $\mathbb{R}^n$. Let $f: \mathbb{R}^n \to \mathbb{R}^k$ be a classifier trained on data sampled from $\mathcal{M}$. Let $\delta$ denote an adversarial perturbation obtained by maximizing the loss $\mathcal{L}(f(x + \delta), y)$ under a norm constraint $\|\delta\| \leq \epsilon$. Since the classifier models $p(y \mid x)$ without access to the generative distribution $p(x)$, it lacks explicit knowledge of the manifold $\mathcal{M}$. Therefore, the loss gradient $\nabla_{\delta} \mathcal{L}$ generally has components orthogonal to the tangent space. As a result, the perturbation $\delta$ is unlikely to lie entirely within $\mathcal{M}$, and typically has a non-zero orthogonal component. Moreover, the smaller the dimension $d$ of the manifold relative to $n$, the larger the proportion of $\delta$ that lies off-manifold. Hence, adversarial examples generated in this way are very likely to lie off the data manifold. We will add this explanation to the paper. We hope this clarifies.
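As a minimal numerical sketch of the dimension-counting step above (our own illustration, not code from the paper: by rotation invariance the tangent space is taken, w.l.o.g., as the first $d$ coordinates of $\mathbb{R}^n$, and the loss gradient is modeled as an isotropic Gaussian; in expectation the off-manifold fraction of the squared norm is $(n-d)/n$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512  # ambient dimension (illustrative)

def off_manifold_fraction(d, trials=2000):
    """Mean fraction of a random gradient's squared norm that lies
    orthogonal to a d-dimensional tangent space embedded in R^n."""
    fracs = np.empty(trials)
    for t in range(trials):
        g = rng.standard_normal(n)
        # W.l.o.g. (rotation invariance) the tangent space is the span of
        # the first d coordinates; everything else is off-manifold.
        fracs[t] = np.sum(g[d:] ** 2) / np.sum(g ** 2)
    return float(fracs.mean())

for d in (8, 64, 256):
    print(d, round(off_manifold_fraction(d), 2))
```

The printed fractions are approximately $(n-d)/n$ (about 0.98, 0.88, and 0.50 for $d = 8, 64, 256$), illustrating why a lower-dimensional manifold yields a larger off-manifold component of the perturbation.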
**Clarity of the introduction:** > The introduction of the paper is not simple and straightforward enough, and some basic concepts are explained in an overly complicated manner. Thank you for your comment. Below, we clarify the information presented in the introduction. Moreover, we are happy to accommodate any specific suggestions you may have to improve it further. Currently, our Introduction is structured as follows: (A) We begin by introducing adversarial training (AT) and highlighting two major limitations that our work addresses: (1) the poorly understood tradeoff between robustness and generalization (L.51–52, Col.1), and (2) the high computational cost of AT, which limits its practical deployment (L.51–52, Col.2). (B) To address the first limitation, we investigate how encoder LLMs, vision models, and decoder LLMs differ in intrinsic dimensionality, leading to distinct compositions of on-/off-manifold adversarial examples. This, under the manifold conjecture, explains their varying impacts on robustness and generalization. (C) To address the second limitation, we propose a scalable Manifold-Aware Adversarial Training approach that selectively applies AT at the layer with the highest proportion of off-manifold AEs, significantly reducing cost without sacrificing performance. (D) Finally, we summarize our experimental results supporting both contributions. We hope this clarifies the structure and motivation of the introduction. **Usage of the term manifold** > I am unsure about the specific meaning of “data manifold.” If “on-manifold” refers to high-density and “off-manifold” refers to low-density, it may not be necessary to use the term “manifold,” as it could confuse the readers. We formally define the ‘data manifold’ and ‘on-/off-manifold samples’ in Section 3. To improve clarity, we have revised the introduction to introduce these definitions earlier. 
Specifically, we use the term data manifold to refer to a potentially non-linear subspace spanned by the dataset as it propagates through the network layers, with its dimension quantified via a projection-based method. In this context, on-/off-manifold samples refer to data points that are either captured by or fall outside the learned manifold during training. **Adversarial attacks in the text domain** > Adversarial attacks in the visual domain are well known, and it is worth going into more detail about adversarial attacks in the text domain. We will revise the introduction and related work to better describe adversarial attacks on text, with an emphasis on the most effective ones, such as the GCG attack (i.e., the suffix attack) and the PAIR attack.
Summary: The authors investigate how the relationship between perturbations and the data manifold influences whether adversarial training leads to improved generalization or robustness. Based on this insight, they propose SMAAT, a method that generates perturbations at specific layers to target different manifolds—leveraging the fact that intrinsic dimensionality changes with layer depth—to achieve more precise trade-offs between generalization and robustness. The paper provides extensive quantitative results on LLMs supporting these claims, demonstrating that SMAAT is more efficient and achieves a better generalization-robustness tradeoff compared to existing methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: Figure 1 and Figure 2 lack sufficient experimental details. It would be helpful to explicitly clarify these details in the experiments section or provide additional explanations in the supplementary material. Supplementary Material: No. Relation To Broader Scientific Literature: This work builds on previous research on adversarial training, particularly the accuracy-robustness tradeoff in image models [1], and extends these insights to generalization properties and different architectures. The empirical observation of a strong relationship between the data manifold and the robustness-generalization tradeoff is a valuable contribution, as it provides a deeper understanding that can inform more refined adversarial training strategies. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - Clear writing with tightly integrated experiments, making the claims and findings easy to follow. - Strong and wide-ranging experiments, demonstrating both efficiency and a well-balanced generalization-robustness tradeoff. 
- Relatively simple yet effective method that directly leverages the observed manifold-robustness-generalization relationship, reinforcing its validity and practical significance. Weaknesses: - Weak/easy adversarial attack used in Table 3: Evaluating accuracy under attack against more recent and stronger adversarial attacks (e.g., [2,3]) would better demonstrate the method’s robustness improvements. - Limited experiments on vision models: While Figure 4 explores intrinsic dimensionality and reconstruction error for vision models, the paper does not provide robustness-generalization results for vision models using the proposed method. Since Figure 3 suggests the method can enhance vision model robustness, a direct comparison on vision benchmarks would strengthen the findings. references: [1] Zhang, Hongyang, et al. "Theoretically principled trade-off between robustness and accuracy." International conference on machine learning. PMLR, 2019. [2] Liu, Xiaogeng, et al. "Autodan: Generating stealthy jailbreak prompts on aligned large language models." arXiv preprint arXiv:2310.04451 (2023). [3] Chao, Patrick, et al. "Jailbreaking black box large language models in twenty queries." arXiv preprint arXiv:2310.08419 (2023). Other Comments Or Suggestions: - define terms when first used and in important locations (e.g. RAG in abstract, LAT in figure 1) - figure 1: could improve clarity by just plotting points and adding best-fit lines to show trends more clearly Questions For Authors: Q1. While I understand that evaluating vision model robustness requires significantly more experimental setup than adding another text-based experiment, is there a specific reason why no experiments were conducted on vision models using the proposed method? Q2. Do the authors plan on releasing the code for their experiments? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions. In response, we have conducted additional experiments to address the critiques and provide a summary of further revisions that will be made to the paper. **Details on Fig. 1 and Fig. 2:** > Figure 1 and Figure 2 lack sufficient experimental details ... These two figures summarize key findings presented in Sec. 4 and Sec. 5. Figure 1 presents results on the LLaMA-2 model by combining the ID characteristics from Fig. 4(d) (1st row), reconstruction error from Fig. 4(d) (2nd row), and the generalization and robustness trends from Fig. 5. For Fig. 1, we report the best results that maximize the sum of robustness and generalization scores for each layer. Additional details are provided in the supplementary material. Figure 2 compares SMAAT against other AT approaches for encoder-based models, evaluating generalization, robustness, and run-time cost across different tasks. We will revise the Introduction to provide further details and to reference the Supplementary for full details on the generation of Figs. 1 and 2. **Additional attacks:** > Evaluating accuracy under attack against more recent and stronger adversarial attacks (e.g., [2,3]) would better demonstrate the method’s robustness improvements. Although we acknowledge that the suffix attack (the method used in Table 3) is not a recent technique, it remains one of the most powerful attack strategies. For the sake of clarity, we conducted new experiments using the PAIR attack, in addition to the suffix attack. Since Table 3 reports results for encoder-based models—which are not compatible with attacks that rely on generative capabilities like PAIR—we instead applied this attack to the LLaMA-2 model. We performed the robustness versus generalization experiments shown in Fig. 4 under this attack scenario. The corresponding results, for comparison with Fig. 
4, can be viewed at [link](https://anonymous.4open.science/r/SMAAT-25-DD9F/ICML%20Rebuttal/llama2_gen_robust_pair.pdf). As seen in the results, we observe the same trend as with the GCG (i.e., the suffix attack) attack. However, while model robustness drops to **30%** under the GCG attack, it remains above **75%** across all setups for the PAIR attack. This further supports our decision to use the GCG attack over PAIR in our experiments. **Experiments on vision models:** > Q1. While I understand that evaluating vision model robustness requires significantly more experimental setup than adding another text-based experiment, is there a specific reason why no experiments were conducted on vision models using the proposed method? > While Figure 4 explores intrinsic dimensionality and reconstruction error for vision models, the paper does not provide robustness-generalization results for vision models using the proposed method. We extend the robustness and generalization analyses of the encoder and decoder models to vision models. Following the approach in Figs. 5 and 6—where one model from each architecture was analyzed—we conducted similar experiments on VGGNet, chosen for its relative computational efficiency. It is important to note that performing a full grid search with adversarial training (AT) across all layers and measuring robustness for each model is highly resource-intensive. Specifically, we performed **AT** on VGGNet using the CIFAR-10 dataset by attacking every ReLU layer over 20 epochs, with **ε = 0.031–0.2** and **lr = 0.01–0.001**, evaluating robustness via **RobustBench**. [Results](https://anonymous.4open.science/r/SMAAT-25-DD9F/ICML%20Rebuttal/vgg_curve.pdf) show that attacking lower layers (light blue) improves robustness but reduces generalization (**bottom-right**), while upper layers (dark blue) enhance generalization at the cost of robustness (**top-left**). 
This shows that vision models more closely resemble decoder models in terms of their generalization versus robustness characteristics. **Abbreviations:** > define terms when first used and in important locations (e.g. RAG in abstract, LAT in figure 1) We will revise the text to define terms where they appear first. **Improving clarity of Fig. 1:** > figure 1: could improve clarity by just plotting points and adding best-fit lines to show trends more clearly Thanks. We have regenerated Fig. 1 to display best-fit lines using a locally weighted regression technique. The updated figure is available at the [link](https://anonymous.4open.science/r/SMAAT-25-DD9F/ICML%20Rebuttal/llama_all_in_one_trend.pdf). **Code:** > Q2. Do the authors plan on releasing the code for their experiments? The code is publicly available and can be accessed at the [link](https://anonymous.4open.science/r/SMAAT-25-DD9F/README.md)
Connecting Thompson Sampling and UCB: Towards More Efficient Trade-offs Between Privacy and Regret
Accept (poster)
Summary: This paper studies the problem of differentially private stochastic bandits. It proposes a new algorithm which is roughly Thompson sampling, with an option to re-use previous samples. Their algorithm offers a trade-off between regret and privacy: a strict improvement in privacy over previous works while maintaining near-optimal regret. ## update after rebuttal I have no other concerns. Please consider (1) justifying or modifying the "UCB" naming, and (2) adding a discussion on the trade-off of $\alpha$ in a future version. Claims And Evidence: It's unclear to me how much similarity there is between UCB and the algorithm in this paper. According to line 10 of the algorithm, the re-using phase where we set $\theta_i(t)$ as the maximum of previous samples is considered the only UCB-style part. The justification, Lemma 4.1, seems to establish a similarity of bounds rather than an algorithmic similarity. I wonder whether calling such a mechanism UCB is misleading, since there is no upper confidence bound involved. Methods And Evaluation Criteria: Yes. Theoretical Claims: I only checked the proof sketch of Theorem 4.2 and the proof of Theorem 4.4. The proof sketch of Theorem 4.2 is intuitive, but I would like to see more details on the step of putting everything together. The proof of Theorem 4.4 is too casually written. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: This work is of potential interest to the privacy community, besides the ML community. Essential References Not Discussed: "TS-UCB: Improving on Thompson Sampling With Little to No Additional Computation", by Jackie Baek and Vivek F. Farias, AISTATS2023. Although this paper doesn't study DP, I would appreciate a discussion on algorithmic and idea similarities/differences between the two works.
Other Strengths And Weaknesses: The new technique of re-using previous samples is smart and interesting, which plays a crucial role in the improvement over previous works. In the title, it should be "Thompson" instead of "Thomson". In table 1, the use of $\log$ and $\ln$ is inconsistent. Other Comments Or Suggestions: No. Questions For Authors: I wonder if the trade-off is necessary, when constant privacy is achievable at the cost of only a logarithmic term in regret. Does an $O(\sqrt{\ln T})$ term matter a lot in regret? In my opinion it doesn't, and it seems setting $\alpha=1$ is a strictly better choice. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for the constructive comments. We address each of your questions as follows. (1) Regarding the **similarity between our proposed algorithm and UCB**, as theoretically justified in Lemma 4.1 and the content just above it (Lines 266 to 271), the reason why we call it UCB is that the maximum value of previous samples behaves like the upper confidence bound in the classical UCB1 algorithm. We agree with the comment that the UCB part in our proposed algorithm is a kind of bound similarity, as our proposed algorithm itself does not explicitly construct upper confidence bounds. We appreciate your concern about the potential misinterpretation and will refine our writing to ensure clarity. (2) Regarding the proof of our presented theorems, we appreciate your suggestions. For Theorem 4.2, we will clarify more key steps by bringing some of the details currently in Appendix D into the main text to provide a more complete picture of the proof sketch. For Theorem 4.4, we will complement the existing proof with a more formal, math-style proof to enhance its rigor and clarity in the appendix. (3) Regarding **"TS-UCB: Improving on Thompson Sampling With Little to No Additional Computation"**, by Jackie Baek and Vivek F. Farias, AISTATS2023, we thank you very much for referring us to this relevant paper. After carefully reviewing it, in addition to our focus on privacy, we identify two other key differences between their TS-UCB and our proposed algorithm. (1) Their work focuses on Bayesian regret, whereas our analysis is in the frequentist setting. Since Bayesian regret bounds cannot easily be translated into frequentist regret bounds, this represents a fundamental distinction. (2) While their TS-UCB samples multiple posterior models, it aggregates them by averaging. Averaging does not behave like an upper confidence bound (as opposed to our maximum) and hence does not provide sufficient exploration.
We will incorporate these points into our discussion of related work. (4) The reason **why we allow tuning $\alpha \in [0,1]$ to trade off privacy and regret**: From a theoretical standpoint, it provides insights into the interplay between privacy and regret while also offering flexibility in the privacy-regret trade-off by tuning the parameter $\alpha$ to balance the two. This perspective helps us better understand the fundamental limits of private decision-making. For the stochastic bandit community, we care more about regret bounds and thus would like to avoid the extra $\ln T$ factor. From the perspective of privacy, we agree that in practical scenarios, setting $\alpha = 1$ may often be the preferable choice, as the extra $\ln T$ factor is typically negligible. Achieving a constant privacy guarantee could be more beneficial. We appreciate your perspective and will clarify this point in our discussion. (5) Last but not least, we thank you very much for the careful review. We will fix all the typos.
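To complement point (1), the extreme-value fact underlying the UCB analogy can be checked numerically: the maximum of $m$ i.i.d. $\mathcal{N}(\mu, \sigma^2)$ samples concentrates near $\mu + \sigma\sqrt{2\ln m}$, i.e., a mean plus an exploration bonus that grows with the number of re-used samples. A small self-contained illustration (the posterior parameters below are placeholders, not the algorithm's actual posteriors):

```python
import numpy as np

rng = np.random.default_rng(1)

def max_of_samples(mu, sigma, m, trials=20000):
    """Empirical mean of the maximum of m posterior draws from N(mu, sigma^2)."""
    return float(rng.normal(mu, sigma, size=(trials, m)).max(axis=1).mean())

def ucb_bonus(mu, sigma, m):
    """UCB-style index that the maximum of m samples concentrates around."""
    return mu + sigma * np.sqrt(2 * np.log(m))

for m in (4, 16, 64):
    print(m, round(max_of_samples(0.0, 1.0, m), 2), round(ucb_bonus(0.0, 1.0, m), 2))
```

The empirical mean of the maximum stays below, but grows with, the $\sqrt{2\ln m}$ bonus, which has the same shape as a UCB index: a mean estimate plus an exploration term.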
Summary: This paper examines the regret-privacy trade-off for the Gaussian TS algorithm under Gaussian Differential Privacy. By drawing a connection between Gaussian TS and UCB, the authors propose the DP-TS-UCB algorithm, which does not need to sample a fresh Gaussian model at each round; the paper thereby achieves a new privacy-regret trade-off that improves upon the previous state-of-the-art results. Claims And Evidence: Yes. Most claims are well-supported. Methods And Evaluation Criteria: Yes, the evaluation criteria (regret and GDP) are standard and widely applied in most related areas. Theoretical Claims: I have not thoroughly reviewed all the proofs and the appendix in full detail, but based on the provided intuition and the outlined proofs, I am inclined to believe that the theoretical claims are correct. Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: This paper follows the line of work exploring the privacy-regret tradeoff for bandits with a Gaussian TS, as listed in the related work section and the table. The contributions made by this paper, including the improved tradeoff bound and the principles behind its algorithm design, provide valuable insights into this line of research. Essential References Not Discussed: No Other Strengths And Weaknesses: Other Strengths: The authors provide a clear explanation of the intuition behind their algorithm design, which makes the paper easy to follow. Weakness: One potential weakness is the lack of lower bound results on such a regret-privacy tradeoff. Other Comments Or Suggestions: I don't have other comments. Questions For Authors: 1. While this paper focuses on the frequentist setting, I am curious whether improved results could be achieved by considering Bayesian regret, even under the assumption of a Gaussian prior. Additionally, I wonder how the variance of prior distributions impacts such a tradeoff in the heterogeneous reward setting, as in the non-privacy setting [1]. 2.
Are there any lower bound results on such regret-privacy tradeoff? [1] Saha, A., & Kveton, B. (2023). Only pay for what is uncertain: Variance-adaptive thompson sampling. arXiv preprint arXiv:2303.09033. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for the constructive comments. We address each of your questions as follows. (1) Thank you very much for referring us to this interesting paper [1]. Under the notion of Bayesian regret and using Gaussian priors, we think it is possible to achieve an improved trade-off between regret and privacy by changing the variances of prior distributions. It is an interesting research direction, but it is out of the scope of our work as we focus on the frequentist regret setting. (2) Regarding the lower bounds on the regret-privacy trade-off, lower bounds exist for differentially private bandits under the notion of the classical $(\varepsilon, \delta)$-DP [2, 3, 4]. We will discuss those in the related work. Establishing lower bounds for our specific algorithm is an interesting avenue for future work. [1] Saha, A., & Kveton, B. (2023). Only Pay for What is Uncertain: Variance-adaptive Thompson Sampling. arXiv preprint arXiv:2303.09033. [2]: Roshan Shariff & Or Sheffet, Differentially Private Contextual Linear Bandits, NeurIPS 2018. [3]: Achraf Azize & Debabrota Basu, When Privacy Meets Partial Information: A Refined Analysis of Differentially Private Bandits, NeurIPS 2022. [4]: Siwei Wang & Jun Zhu, Optimal Learning Policies for Differential Privacy in Multi-armed Bandits, Journal of Machine Learning Research 2024.
Summary: This paper describes a stochastic MAB algorithm that preserves DP. It uses Thompson sampling with a limited budget of samples per epoch. Once the samples are exhausted within a round, it uses the maximum of those samples. This is akin to an upper confidence bound. It also has a parameter $\alpha$ that can tune the behaviour from privacy-preserving to regret-minimising. Claims And Evidence: The presentation is ok, and the sketch proofs are readable. The structure of the proofs makes sense. Methods And Evaluation Criteria: Standard bandit experiments. Theoretical Claims: I checked the sketch proofs and some of the appendix, but I could not really go through all the algebra. Experimental Designs Or Analyses: The experimental analysis is only a minor aspect. Supplementary Material: Only Lemma 4.1 Relation To Broader Scientific Literature: It is of interest to online learning and DP Essential References Not Discussed: N/A Other Strengths And Weaknesses: The writing could be improved. Other Comments Or Suggestions: It seems that $\alpha$ can tune the algorithm only to a limited extent. Is there a way to achieve e.g. a specific fixed level of GDP? At this point, it seems like, for $\alpha = 0$ we get 2.875-GDP, or (1, 0.7612)-DP. It still is an improvement over TS-G. Questions For Authors: See above. How meaningful is the DP guarantee? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for the constructive comments. Regarding the results of GDP guarantees, we would like to clarify that when choosing $\alpha = 0$, the privacy guarantee depends on $T$. In Theorem 4.4, the GDP guarantee is in the order of $ \sqrt{ T^{0.5(1-\alpha)} \ln^{1.5(1-\alpha)}(T)}$. So, only $\alpha=1$ yields a constant GDP guarantee. Therefore, the algorithm is not always $2.875$-GDP if we change the values of $T$, and $\alpha$ does parametrize a privacy-regret tradeoff. For a fixed $T$ and $\alpha$, it is possible to achieve any specific lower level of GDP via scaling the Gaussian variance by a constant larger than one. However, it will result in an increased regret.
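To make the dependence on $\alpha$ concrete, the stated order $\sqrt{T^{0.5(1-\alpha)} \ln^{1.5(1-\alpha)}(T)}$ can be tabulated numerically (constants are ignored; this only illustrates the scaling, not the exact guarantee):

```python
import math

def gdp_order(alpha, T):
    """Order of the GDP parameter sqrt(T^{0.5(1-a)} * ln(T)^{1.5(1-a)}),
    ignoring constant factors."""
    return math.sqrt(T ** (0.5 * (1 - alpha)) * math.log(T) ** (1.5 * (1 - alpha)))

T = 10 ** 6
for alpha in (0.0, 0.5, 1.0):
    print(alpha, round(gdp_order(alpha, T), 1))
```

Only $\alpha = 1$ gives a value independent of $T$ (the order is exactly 1); for any $\alpha < 1$ the privacy parameter grows with $T$.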
Elucidating the design space of language models for image generation
Accept (poster)
Summary: The paper investigates the application of large language models (LLMs) to image generation, demonstrating that LLMs can achieve near state-of-the-art performance without relying on domain-specific designs. The study also analyzes the learning and scaling behavior of autoregressive models, showing that larger models capture more useful information. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The evaluation criteria generally make sense. Theoretical Claims: The theoretical claims are correct. Experimental Designs Or Analyses: The experimental designs are valid. Supplementary Material: Yes Relation To Broader Scientific Literature: This paper shows that LLMs can achieve strong image generation performance without domain-specific designs, through careful choices in tokenization, modeling, and sampling. It contributes to understanding scaling behaviors and modality differences, offering insights for adapting LLMs to vision tasks and encouraging broader cross-domain applications. Essential References Not Discussed: Two highly relevant papers should be discussed and used for a fair comparison: [1] Sun, Peize, et al. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation [2] Li, Tianhong, et al. Autoregressive image generation without vector quantization Other Strengths And Weaknesses: Weaknesses 1. The technical novelty is kind of limited, as the results are mostly empirical. 2. This paper claims AR "demonstrate superior image generation ability and scalability compared to MLMs" in L089 and L090; however, the work on improving MLMs for image generation in [1] shows much stronger results. The authors should make a fair comparison with [1] to defend this point. 3. More qualitative study and empirical discussion on cfg and top-k are important to understand how image generation differs from the original decoding strategies in LLMs. [1] Li, Tianhong, et al.
Autoregressive image generation without vector quantization Other Comments Or Suggestions: No Questions For Authors: I'm leaning towards weak reject before rebuttal. However, this research is in good shape with valid contributions. I would be happy to raise my score to 3 (weak accept) if my concerns in [W2] and [W3] and quantitative comparisons with [1][2] are addressed during rebuttal. [1] Li, Tianhong, et al. Autoregressive image generation without vector quantization [2] Sun, Peize, et al. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks a lot for your valuable comments. Below, we will address your concerns in detail. ```Discuss with LlamaGen and MAR``` We agree that both [1] LlamaGen (Sun et al.) and [2] MAR (Li et al.) have made important contributions to advancing autoregressive image generation. We would like to clarify that we **do** compare against both works in **Table 1**, and further discuss MAR in the appendix (**A.7 Limitation section**) as a **parallel study** that proposes new modeling approaches and domain-specific designs for AR to work effectively in the **continuous** image domain. Compared to MAR, our work is more aligned with LlamaGen, as we both follow the same direction of **directly adapting language models for image generation** to explore their modeling potential. However, our work differs in that we conduct a **more comprehensive and systematic analysis** of the image-as-language modeling framework. In particular, we study: * The effect of tokenizer choice (VQGAN vs. BAE), * Modeling paradigms (AR vs. MLM), * Detailed learning behavior reflected in attention patterns and loss dynamics, * Vocabulary design through scalable codebooks under BAE, * And an in-depth ablation and analysis of sampling strategies. We believe our findings provide a **complementary perspective** to LlamaGen and MAR, and offer insights that could be valuable when combined with more advanced objectives or tokenization schemes in future work. We appreciate the suggestion and will incorporate the detailed discussions more clearly in the revised version. ```W1. technical novelty``` We believe that our work makes a **meaningful and timely** contribution by **systematically analyzing** the design space of applying **language models in the image domain**, providing insights into adopting LLMs as a unified method and guiding future methodological innovations, especially in light of recent developments such as GPT-4o, a powerful natively AR large multimodal model.
Please see our detailed response to **W1 from Reviewer xmQD**. ```W2. Comparison with MAR``` Thank you for pointing out that our conclusion on the choice of generation paradigm differs from MAR. However, given the differences in our methods, we believe this divergence is entirely reasonable. * We adopt a standard language modeling view, treating images as **discrete token sequences** with no image-specific modifications. Under this setup, we systematically compare AR and MLM using **standard tokenizers** and sampling, finding AR consistently superior in quality and scalability. Moreover, this discrete token framework can integrate well with **LLM-based multimodal extensions**. * MAR presents a **new generative modeling approach** tailored for **continuous** image data with **diffusion loss** and **no quantization**. Their finding that MLM outperforms AR is valid within this setting but reflects **fundamentally different modeling goals and assumptions** from ours. We view these works as complementary: MAR explores **domain-specific continuous modeling**, while we investigate how **standard language modeling paradigms transfer to vision**. Given the different tokenizers and training objectives, it is natural that the two studies reach different conclusions about the preferred generation modeling approach. We have acknowledged this in our appendix and will add the discussion to the main paper. ```W3. qualitative study and empirical discussion on top-k and cfg``` We agree that understanding top-k sampling and classifier-free guidance (CFG) is key to characterizing image generation behavior, especially in contrast to standard LLM decoding. We have included an extensive analysis of both strategies (Figure 6, Table 12). We further provide **qualitative** comparisons (linked [here](https://anonymous.4open.science/r/ICML2025rebuttal-D02C/qualitive_topk_CFG.pdf)).
We find that image generation requires **larger top-k values** and **moderate CFG weights** to achieve a better FID score. However, there is a **trade-off** between FID score and visual quality. A large k (close to the vocab size) introduces more texture detail and diversity, making the outputs closer to the real distribution and thus yielding lower FID. In contrast, a moderate k often produces visually more appealing results, with cleaner and smoother images. While CFG is rarely applied in LLM-based text generation, top-k sampling is widely adopted in both domains: * In image generation, small k values (less than 100) lead to **overly smooth, low-diversity images**, lacking texture richness; hence image tokens require broader sampling (e.g., **k ≈ 0.5 * vocab size**) to maintain realism. * In language generation, a typical k lies in the **20–100** range with a vocab size of **~50,000 or 128,000**, and can produce **diverse** and **grammatically fluent** text (Holtzman et al., 2020; Fan et al., 2018). References: Holtzman et al., 2020, The Curious Case of Neural Text Degeneration; Fan et al., 2018, Hierarchical Neural Story Generation. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ detailed responses, which have addressed most of my concerns about the paper. The additional experiments on top-k and CFG are quite impressive. However, after carefully reading the other reviewers' comments and the authors’ rebuttal, I noticed that a substantial number of new experiments and results were introduced in the rebuttal, many of which are important to the overall claims of the paper. Given the empirical nature of the work and the limited technical novelty, I believe these new results should be properly integrated into the main paper and go through a full round of peer review rather than being evaluated solely during the rebuttal phase.
While I acknowledge the consensus among the other reviewers and the authors’ considerable effort, I am therefore leaning slightly toward a weak reject this time. That said, I would still be OK if the paper is ultimately accepted. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and for recognizing the value of our work and rebuttal efforts. We’d like to clarify that the additional results provided during the rebuttal mainly include: * comparisons between tokenizers (added VQGAN from LlamaGen) [for Reviewer ED5q] * extended ImageNet-512 results (added VAR result) [for Reviewer xmQD] * computational efficiency analysis (training/inference/memory) [for Reviewer e5YQ], * qualitative results on sampling strategies (top-k and CFG). These additions help strengthen and clarify our statements, such as (1) demonstrating that bit-wise quantization (BAE) is more effective than vector-wise quantization (VQGAN), (2) our AR model with BAE can scale efficiently to high-resolution image generation, (3) AR models are not only effective but also competitive in overall efficiency, and (4) sampling strategies (e.g., top-k, CFG) have a significant impact on LM-based image generation, points which we have included in the *Introduction* and *Experiment section* of our submitted paper. These are **complementary results** that extend and reinforce our existing claims **without altering** the main conclusions or methodology. We will carefully incorporate these findings into the revised version for completeness. We believe our study provides **timely and meaningful** insights for the community, especially toward understanding how AR modeling paradigms can serve as a unified foundation for multimodal generation and reasoning.
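The top-k and CFG sampling strategies debated in the thread above can be sketched in a few lines. This is an illustrative sketch only: the function and variable names are assumptions, not the authors' code; it combines the standard CFG logit rule for AR models with top-k filtering.

```python
import numpy as np

def cfg_topk_sample(cond_logits, uncond_logits, cfg_scale=1.5, top_k=8192, rng=None):
    """Sample one image token using classifier-free guidance + top-k filtering.

    cond_logits / uncond_logits: (vocab_size,) logits from the class-conditional
    and unconditional forward passes. Per the discussion above, image generation
    favors a much larger top_k (around half the vocabulary) than text decoding.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Standard CFG rule for AR models: push logits away from the unconditional ones.
    logits = uncond_logits + cfg_scale * (cond_logits - uncond_logits)
    # Top-k filtering: everything below the k-th largest logit is masked out.
    if top_k < logits.shape[0]:
        kth_largest = np.partition(logits, -top_k)[-top_k]
        logits = np.where(logits < kth_largest, -np.inf, logits)
    # Numerically stable softmax, then categorical sampling.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

With `cfg_scale = 1.0` the rule reduces to plain conditional sampling; raising `top_k` toward the vocabulary size trades visual smoothness for texture diversity and lower FID, matching the trade-off described in the rebuttal.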
Summary: This paper systematically explores how to utilize LLMs for image generation, providing detailed comparisons and analyses across tokenization methods, modeling approaches, scan patterns, vocabulary design, and sampling strategies, offering some interesting conclusions. Based on these integrated experiments and analyses, the authors propose Elucidated Language Models, achieving remarkably good performance on ImageNet class-conditional generation and demonstrating the significant potential of using LLMs for image generation. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: Yes. I checked the results of the proposed method on class-conditional generation for ImageNet-256 and ImageNet-512, as well as the ablation studies in the paper. Supplementary Material: I read all the supplementary material content. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: ## Strengths 1. This paper thoroughly explores and analyzes the impact of different designs on LLM performance for image generation, providing valuable analysis and experiments for LLM-based image generation. 2. Based on these designs, the authors propose a strong baseline, achieving competitive results on the ImageNet-256 and ImageNet-512 class-conditional image generation benchmarks. 3. The paper is written clearly and is easy to understand. ## Weaknesses 1. The paper doesn't offer much methodological innovation, but rather focuses on analyzing which existing designs are more suitable for LLM-based image generation. 2. There are relatively few baselines compared on the ImageNet-512 benchmark: what is the reason for this? VAR also reports ImageNet-512 class-conditional image generation results; why weren't these compared? 3. The scaling law analysis in Figure 7 only shows qualitative visual results. Could the authors provide a scaling law for FID similar to the one in the VAR paper? Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments! Below, we will address your concerns in detail. ```W1. methodological innovation``` We acknowledge that our work does not center around proposing a new model architecture, but rather focuses on **systematically analyzing and understanding** how existing design components interact within the context of **LLM-based image generation**. We believe this perspective is **important and timely**, especially in light of recent developments such as **GPT-4o**, which has been introduced as a **natively AR large multimodal model** capable of cross-modal reasoning and generation, demonstrating impressive performance across tasks. These advances further support the need for a deeper understanding of AR modeling as a unified generative paradigm, and our study lays the **empirical foundation** for such future multimodal research. In particular, our contributions lie in: * Providing a fair comparison between AR and MLM under standard image tokenization; * Demonstrating how tokenization and dictionary structure affect model performance and interact with model scaling; * Revealing the surprising robustness of AR LMs to image token randomness and their ability to generate high-fidelity outputs; * Offering design principles that can guide future LLM-based multimodal generation systems. We view this paper as an **analytical and foundational contribution**, aiming to guide future methodological innovations. We appreciate the reviewer’s suggestion and will clarify this positioning in the revised version. ```W2. baselines compared on the ImageNet-512``` On the ImageNet-512 benchmark, we did **not** re-train a model from scratch at 512×512 resolution. Instead, we performed a **lightweight fine-tuning** of our model that was originally trained on 256×256 images, using only **a few** additional training steps.
Our primary goal in this experiment is to demonstrate the **resolution scalability and efficiency** of ELM, so we compared against only a few representative methods. Training directly on 512-resolution images from scratch would require approximately **4× more compute** than 256-resolution training. However, with a token-prediction-based architecture, we find that simple fine-tuning can achieve strong high-resolution performance at significantly reduced cost, a major advantage over diffusion models, which typically require retraining and redesigning the entire pipeline for higher resolutions. As a reference baseline, we chose to compare against DiT, a strong class-conditional diffusion model trained natively on 512×512 images. We also included MaskGIT as a discrete-token-based generation model for completeness. We acknowledge that VAR has also reported ImageNet-512 results. Below, we include a comparison with VAR. | Model | Params | Steps | FID ↓ | IS ↑ | Precision ↑ | Recall ↑ | |-------|---------|--------|--------|--------|-------------|-----------| | DiT-XL/2 | 675M | 3000k | 3.04 | 240.82 | 0.84 | 0.54 | | MaskGIT | 227M | 1500k | 7.32 | 156.00 | 0.78 | 0.50 | | VAR-d36-s | ~2B | 1250k | **2.63** | 303.2 | - | - | | ELM-L (2-8) | 312M | 250k | 4.82 | 246.87 | 0.81 | 0.59 | | ELM-XL (2-12) | 757M | 250k | 3.29 | **321.21** | 0.81 | 0.60 | Although our results are slightly behind those of VAR, it is important to note that VAR uses more than 2× the number of parameters and was trained for a significantly longer time. ```W3. scaling law for FID``` Thank you for the helpful suggestion. While Figure 7 primarily illustrates visual improvements across scales, we would like to clarify that we **do provide quantitative evidence** of a scaling law for FID throughout the paper: * In Table 1, we report FID scores across models with increasing capacities, showing a clear performance improvement as model size increases.
* In Figure 2, we plot FID curves over training epochs for AR and MLM models of different sizes, which further demonstrates the impact of scaling on convergence and generation quality. * In the Appendix (Table 11), we present a comprehensive breakdown of FID scores for AR models across multiple scales (from L, XL, and XXL to 2B) under different vocabulary designs, further reinforcing the consistency of the observed scaling trend. Taken together, these results clearly support a scaling law for FID in AR-based image generation under the image-as-language paradigm, similar in spirit to what is shown in the VAR paper. We will make this connection more explicit in the revised version, and thank you again for encouraging us to clarify this aspect. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal; my questions are well addressed by the authors. Hence, I will keep my original rating. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our rebuttal and for your thoughtful evaluation. We appreciate your assessment and support.
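The FID scaling trend summarized in the reply above can be condensed into a single power-law fit, FID ≈ a · N^(−b). The sketch below is illustrative only: the function name and the synthetic (params, FID) numbers in the test are assumptions, not values from the paper.

```python
import numpy as np

def fit_fid_scaling_law(params_millions, fid_scores):
    """Fit FID ~ a * N^(-b) by least squares in log-log space.

    params_millions: model sizes in millions of parameters; fid_scores: the
    matching FID values. A fitted exponent b > 0 means FID drops as models
    scale up, i.e., the power-law trend discussed above.
    """
    log_n = np.log(np.asarray(params_millions, dtype=float))
    log_fid = np.log(np.asarray(fid_scores, dtype=float))
    # log FID = log(a) - b * log(N): slope = -b, intercept = log(a).
    slope, intercept = np.polyfit(log_n, log_fid, 1)
    return float(np.exp(intercept)), float(-slope)
```

For example, on synthetic data generated from FID = 50 · N^(−0.5), the fit recovers a ≈ 50 and b ≈ 0.5.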
Summary: This paper systematically explores the design space of large language models for image generation, evaluating factors such as tokenization, model architecture, scanning patterns, vocabulary decomposition, and sampling strategies. The proposed ELM achieves state-of-the-art FID scores, demonstrating the potential of LLMs for vision tasks. Key findings include: 1. Autoregressive models outperform masked language models for image generation, benefiting from sequential modeling. 2. Binary autoencoder tokenization is superior to VQGAN, reducing codebook collapse and improving reconstruction quality. 3. Row-major raster scanning yields the best performance among different image scanning strategies. 4. Scaling laws hold for AR-based image generation, with larger models capturing both local and global features. Despite promising results, the study lacks comparisons with the latest diffusion models, does not evaluate text-to-image generation, and does not analyze computational efficiency, leaving room for further investigation. Claims And Evidence: Supported Claims: - The paper claims that large language models (LLMs) can achieve near state-of-the-art image generation without specialized inductive biases, by optimizing tokenization, modeling, scanning patterns, vocabulary, and sampling strategies. - Experimental results support this claim, as the proposed ELM model achieves an FID of 1.54 (256×256) and 3.29 (512×512) on ImageNet, which is comparable to leading diffusion-based methods. - The study asserts that autoregressive (AR) models outperform masked language models (MLMs) in image generation, which is well-supported by quantitative comparisons (lower FID, better scalability).
Potentially Problematic Claims: - The assumption that better tokenization alone can lead to improved performance is somewhat oversimplified—while BAE performs better than VQGAN, the impact of other factors such as model architecture and training strategy is not isolated/detailed in ablations. Methods And Evaluation Criteria: Appropriateness of Methods: - The study explores multiple key design choices in tokenization, modeling, scanning patterns, vocabulary size, and sampling strategies, which are highly relevant for improving LLM-based image generation. - Frechet Inception Distance (FID) is used as the main evaluation metric, which is a standard benchmark for generative models, making the comparison meaningful. - Experiments are conducted on both 256×256 and 512×512 ImageNet, testing the scalability of the approach. Potential Limitations: - Lack of real-world benchmarks: Evaluations focus on class-conditional image generation on ImageNet, but no results are shown for text-to-image generation, which is a more practical setting. (This might not be a sufficient reason to reject this paper.) - Computational efficiency is not discussed: While the study argues that AR models are effective, there is no analysis of training time, memory usage, or inference speed. Theoretical Claims: Key Theoretical Contributions: - The paper discusses scaling laws in AR models, showing that larger models capture both local and global information, leading to better image generation performance. - It claims that MLMs struggle with image generation due to the lack of strict sequential dependencies in images, which aligns with prior findings in NLP. - The study highlights the difference between text and image token distributions, suggesting that image tokens exhibit greater randomness than text tokens, which presents unique challenges for AR models. Potential Issues: - While the analysis of AR vs. 
MLM models is insightful, it lacks rigorous mathematical justification: there are no formal proofs to support the claim that AR is inherently better for image generation. - The KL-divergence analysis of token distributions is interesting, but its impact on model optimization is not fully explored: how does randomness in tokenization affect convergence and learning stability? Experimental Designs Or Analyses: Strengths: - Comprehensive exploration of design choices: The paper systematically evaluates tokenization, modeling, scanning patterns, vocabulary decomposition, and sampling strategies. - Comparative performance analysis: The study includes quantitative results on multiple LLM architectures and different vocabularies, providing strong empirical support for its claims. - Visualization of scaling laws: The analysis of attention patterns in AR models provides useful insights into how larger models learn hierarchical image representations. Weaknesses: - No efficiency analysis: The study does not compare training/inference time, FLOPs, or memory consumption between AR models/MLM models/diffusion models. - The choice of tokenizers: The study dives deep into two tokenizers, VQGAN and BAE, and demonstrates the latter's effectiveness. However, the reason for choosing VQGAN is unclear, and there are many other tokenizers that achieve much better performance than VQGAN with significantly better token utilization ratios. Supplementary Material: I checked the parts mentioned in the main paper. Relation To Broader Scientific Literature: - Language Models for Image Generation: The study builds on prior works applying LLMs to vision tasks (e.g., VQGAN, LlamaGen, VAR, MAR) but extends the exploration of architectural choices beyond existing models. - Autoregressive vs. Masked Language Models: Consistent with NLP research, this paper finds that AR models scale better than MLMs, reinforcing prior insights from text-based LLMs.
- Comparison with Diffusion Models: The study follows research on diffusion-based image generation but suggests that AR models can achieve comparable quality under the right design choices. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: I will raise my score if the above questions are appropriately addressed. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments! We will address your concerns in detail. ```Limit 1. no results on text-to-image generation``` Our main goal is to assess the effectiveness of LLMs as a unified generation paradigm, especially when applied to images. To isolate modeling factors like tokenizers, AR vs. MLM, and sampling, we use **class-conditional generation** for its **simplicity and control**. While our findings (e.g., on scalable tokenization and AR modeling) are **relevant to text-to-image generation**, we consider T2I a **distinct challenge**, requiring further study on alignment, prompt structure, and retrieval, which we leave for dedicated future work. ```Limit 2 & W1. Efficiency analysis``` Our work focuses on understanding the **modeling behavior** and assessing the **effectiveness** of LLMs for image generation. As LLM acceleration techniques (e.g., KV-cache, FlashAttention, quantization) are well-developed and **directly transferable** to our setting, we do not emphasize efficiency in this paper. To address your concern, we summarize key efficiency insights below: * **Training efficiency**: AR is more efficient than MLM and diffusion. During training, each AR step predicts all tokens, while MLM predicts only a subset. As shown in Figure 2 in our main paper, AR converges in ~100 epochs, MLM in ~200, and DiT needs ~160 (refer to [Scalable Diffusion Models with Transformers]). * **Inference speed**: Vanilla AR is slower due to sequential decoding (e.g., 256 steps), but KV-cache accelerates it by ~4×. Summary (fastest to slowest): MLM > AR (with KV-cache) > Diffusion (DiT). * **Memory usage**: All models are **similar in training**. During inference, AR’s KV-cache increases memory usage with sequence length; MLM and diffusion require fewer steps and less memory per sample.
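The KV-cache speedup mentioned in the inference-speed bullet above comes from appending one key/value pair per step instead of recomputing attention over the whole prefix. Below is a toy single-head sketch (an illustration of the general mechanism, not the authors' implementation; all names are assumptions).

```python
import numpy as np

class KVCache:
    """Toy single-head attention decoder state with a KV cache.

    Each decode step appends one (key, value) pair and attends the new query
    over the cached prefix, avoiding the redundant recomputation that makes
    cache-free AR decoding slow -- the mechanism behind the ~4x speedup above.
    """

    def __init__(self, d):
        self.keys = np.empty((0, d))
        self.values = np.empty((0, d))

    def step(self, q, k, v):
        # Append this position's key/value to the cache.
        self.keys = np.vstack([self.keys, k[None]])
        self.values = np.vstack([self.values, v[None]])
        # Attend the current query over all cached positions.
        scores = self.keys @ q / np.sqrt(len(q))
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ self.values
```

The output of each `step` is identical to recomputing full causal attention over the prefix from scratch; only the redundant work is removed.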
We summarize FLOPs, training, and inference speed across AR, MLM, and DiT with matched model scales: | | params(M) | FLOPs(G) | training epochs (converge/total) | inference time(sec/img) | |---|-----------|-----------|----------------------------------|------------------------| | DiT/XL-2 | 675 | 118.64 | ~160 / 1400 | 0.39 (50 steps) | | MLM-XL | 741 | 189.51 | ~200 / 400 | 0.10 (10 steps) | | AR-XL | 741 | 189.98 | ~100 / 400 | 0.15 (256 steps with KV-cache) | ```Issue 1. rigorous proof on AR & MLM in the image domain``` While we do not provide a formal theoretical proof that AR is superior to MLM for image generation, our systematic empirical study offers **meaningful evidence** supporting this claim. To the best of our knowledge, **no existing work** in language or vision has established a rigorous theoretical justification. However, under our image-as-language framework, some NLP arguments for AR over MLM are partially **transferable**, e.g., AR benefits from **training-inference consistency**, while MLM suffers from **exposure bias** [1], and AR models the **full joint distribution**, unlike MLM [2]. Moreover, the lack of standardized image tokenization adds complexity, making theoretical analysis more difficult and **beyond the scope** of this paper. We believe that our findings provide a **strong foundation** and **motivating evidence** for such future theoretical work. ```Issue 2. The impact of KL-divergence analysis``` We included the KL-divergence analysis as an **additional diagnostic tool** to investigate **why**, during next-token prediction on tokenized images, the **training loss remains high**, despite the model generating **high-quality images**, unlike in typical language modeling. In fact, we do analyze its impact on model optimization in our paper. * The training loss curve **plateaus** rather than fully converging, as shown in **Figures 14 & 16 in the Appendix**.
The KL-divergence analysis shows that image tokens are more random and less sequentially structured than language tokens, leading to less accurate predictions. * Attention Analysis: Attention map analysis in Sec. 3.4 reveals strong local structure, and larger models capture more global dependencies, indicating **effective learning and scaling** despite the high loss. ```W2. Why choose VQGAN``` We acknowledge the emergence of stronger tokenizers beyond VQGAN (e.g., hierarchical, residual, multi-scale) with improved utilization and reconstruction. However, we chose VQGAN as our baseline due to its **widespread use**, **reproducibility**, and **alignment with prior works** (e.g., MaskGIT, LDM, LlamaGen), enabling **controlled** comparisons. We include BAE as an **inherently** different tokenizer: unlike VQGAN’s **vector-level** quantization, BAE performs **bitwise scalar** quantization, allowing us to isolate the effect of quantization granularity. Its design is also compatible with other advanced quantization methods, suggesting potential for **future integration**. [1] Song et al., 2019. MASS: Masked Sequence to Sequence Pre-training for Language Generation. [2] Ghazvininejad et al., 2019. Mask-Predict: Parallel Decoding of Conditional Masked Language Models. --- Rebuttal Comment 1.1: Comment: The authors' rebuttal generally resolves my questions. Generally speaking, although without much rigorous proof, this work can provide useful priors for researchers working on AR image generation, and it can contribute positively to the research community. I raise my score to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful consideration and for recognizing the value our work may bring to the community. We appreciate your assessment and support.
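The observation above that image tokens are "more random and less sequentially structured" than language tokens can be probed with a simple proxy: compare each context token's empirical next-token distribution against the marginal token distribution. This is an illustrative approximation (all function names are assumptions), not the paper's exact measurement.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions, with smoothing."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def mean_next_token_kl(tokens, vocab_size):
    """Average KL between each context token's empirical next-token
    distribution and the marginal token distribution.

    Values near zero mean the sequence looks like i.i.d. draws (the
    'randomness' attributed to image tokens above); large values mean
    strong sequential structure, as in natural language.
    """
    tokens = np.asarray(tokens)
    marginal = np.bincount(tokens, minlength=vocab_size)
    kls = []
    for context in range(vocab_size):
        positions = np.where(tokens[:-1] == context)[0]
        if len(positions) == 0:
            continue
        conditional = np.bincount(tokens[positions + 1], minlength=vocab_size)
        kls.append(kl_divergence(conditional, marginal))
    return float(np.mean(kls))
```

On a strictly periodic sequence this score approaches log(vocab_size), while on uniform random tokens it stays near zero, separating "structured" from "random" token streams.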
Summary: In this work, the authors focus on the research topic of AR image generation and conduct extensive studies on 1) the tokenizer, 2) AR model design (AR or masked), and 3) image scan direction. Based on this large number of studies, the authors propose ELM, which achieves state-of-the-art performance on the ImageNet-256 generation task. Claims And Evidence: Partly. Most of the claims are presented in related works. Methods And Evaluation Criteria: Make sense. Theoretical Claims: Not suitable. Experimental Designs Or Analyses: Yes. Please see the weaknesses and strengths. Supplementary Material: Checked all the supplementary material. Relation To Broader Scientific Literature: See weakness about Infinity paper and LlamaGen paper Infinity∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation Essential References Not Discussed: See weakness about Infinity paper and LlamaGen paper Infinity∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation Other Strengths And Weaknesses: Strengths: Extensive experimental results demonstrate the effectiveness of the proposed solution. Weaknesses: 1. I have a question about the results presented in Table 1. As shown in the LlamaGen paper, a 16k-codebook VQGAN achieves 2.19 rFID when trained with a classical training recipe. However, in the submission, it reaches only around 7.41. How can this significant performance gap be explained? 2. Starting from line 171, the authors claim that "the introduction of Bernoulli Sampling during quantization improves generation performance". I think this is very closely related to the bitwise self-correction method in the Infinity paper. However, no discussion is included. 3. I would suggest redesigning Figure 2; the text inside is too small. 4.
For the Scan Pattern Choice section, I wonder whether this study is needed, since 1) raster is the common-practice choice, 2) Mamba-related works have already studied this choice, and their results indicate raster is simple and performs well, and 3) the common-practice raster choice also performs best, as shown in Table 2. 5. I would say that the scaling law for AR image generation has already been validated by LlamaGen and VAR. 6. I have a question about the claim in Line 243: "When the vocabulary size exceeds a certain threshold, such as 2^16 (i.e., 65,536), next-token prediction becomes significantly more challenging and may even become infeasible due to memory constraints." If that is true, how can we train LLMs? For example, the vocabulary size in Llama is around 128k, and we can train Llama smoothly. Other Comments Or Suggestions: Please see the weaknesses. Questions For Authors: Please check the weaknesses; I am looking forward to the rebuttal. Code Of Conduct: Affirmed. Overall Recommendation: 3
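The memory constraint debated in W6 above is easy to make concrete with back-of-envelope arithmetic: the output projection (softmax layer) in next-token prediction holds hidden_dim × vocab_size parameters, so its footprint grows linearly with vocabulary size. The sketch below uses illustrative values (hidden size, fp16 precision) that are assumptions, not the paper's configuration.

```python
def output_projection_memory_gb(vocab_size, hidden_dim=1024,
                                bytes_per_param=2, batch_tokens=0):
    """Back-of-envelope memory (GB) for the next-token output projection.

    Counts the weight matrix (hidden_dim x vocab_size) plus, optionally, one
    logits tensor over batch_tokens positions. bytes_per_param=2 assumes fp16.
    """
    weight_bytes = hidden_dim * vocab_size * bytes_per_param
    logit_bytes = batch_tokens * vocab_size * bytes_per_param
    return (weight_bytes + logit_bytes) / 1024**3

# Under these illustrative settings, a 2^16 vocabulary costs 0.125 GB of
# weights, 2^20 costs 2 GB, and 2^24 costs 32 GB -- linear growth that makes
# very large image vocabularies a practical pain point, while Llama-scale
# text vocabularies (~2^17) remain cheap.
```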
Rebuttal 1: Rebuttal: Thanks for your valuable comments! Below, we will answer your concerns in detail. ```W1: Performance difference of VQGAN from LlamaGen``` To ensure a **controlled comparison between structurally different quantization methods**, we use the **original VQGAN** from Taming Transformers [Esser et al., 2021], downloaded directly with rFID from the official repository ([taming-transformers-github](https://github.com/CompVis/taming-transformers)). LlamaGen aims to show that **discrete image tokenization** can achieve reconstruction quality **close to that of continuous tokenizers**. They **re-train VQGAN** with **L2-normalization on latent codes**, following [1], to **improve codebook usage and representation smoothness**, which likely accounts for their stronger rFID results. To further address the reviewer’s concern, the table below shows the **comparison between LlamaGen's VQGAN, the original Taming VQGAN, and BAE** (all with a down-sampling rate of 16 on 256px ImageNet): | | VQ-16384 (taming) | VQ-16384 (LlamaGen) | BAE-2^16 | BAE-2^24 | |-------|-------------------|---------------------|-----------|-----------| | code usage | 8% | 97% | 100% | 100% | | rFID | 7.41 | 2.19 | 3.32 | 1.77 | | gFID(M.) | 7.81 | 4.51 | 3.96 | 3.91 | | gFID(A.) | 6.71 | 3.45 | 2.78 | 2.68 | This comparison shows that **BAE can still outperform both**, ensuring **100% code utilization**, and the key advantage of **supporting larger codebooks** with better generation performance remains valid. ```W2. Discussion of the Infinity paper's bitwise self-correction method``` Both our method and Infinity∞ highlight that **introducing stochasticity or 'error' during training**, via Bernoulli sampling or bitwise self-correction, can improve **autoregressive image generation** by enhancing **robustness to prediction errors**. This suggests that **stochastic modeling** is a valuable direction for AR-based synthesis.
The key difference lies in the **sequence modeling formulation**: * Infinity∞ builds on domain-specific serialization and **next-scale prediction**, where self-correction mitigates **error accumulation** across scales. * Our approach adopts **standard image tokenization** and treats images as language; Bernoulli sampling helps address the **inherent randomness** of image token sequences, improving **tolerance to uncertainty**. As Infinity∞ and our work were developed **concurrently**, we were not aware of it at the time of writing and therefore did not include a comparison. We appreciate the reviewer’s suggestion, and we will incorporate a discussion of this work in the revised version of our paper. ```W3. We will redesign the figures to improve readability``` ```W4. Review on Scan Pattern Choice section``` We study scan patterns to evaluate whether **generation order** impacts performance in the image-as-language setting, where token sequences lack inherent structure. Zigzag, commonly used in tasks like image compression and video coding, serves as a meaningful alternative to raster. Our results provide **empirical grounding** for choosing a raster scan in AR image generation, and the **minor performance differences** across scan patterns further suggest that large language models are **robust to token order variations**. ```W5. Discussion of the scaling laws in LlamaGen and VAR``` We agree that works like LlamaGen and VAR have demonstrated the scalability of AR models for image generation. Unlike ours, VAR introduces a **new** autoregressive modeling mechanism for images. Our study is in line with LlamaGen, adopting a **language-centric** approach **without** domain-specific design, and offers complementary insights. Beyond generation quality scaling in LlamaGen, we further analyze **training dynamics, attention behavior, AR vs.
MLM scaling trends**, and **the interactions between vocabulary and model capacity**, offering a broader perspective on what enables scalable image generation with LLMs. We believe our findings **complement and extend** those of LlamaGen and VAR and help build a more principled foundation for scaling image-as-language models. ```W6. Question about the claim in Line 243, about the vocabulary threshold``` We apologize that the statement in Line 243 is imprecise. Our intention was to highlight that as the vocabulary size approaches or exceeds **2^20**, the memory required for the output projection (softmax layer) in next-token prediction becomes a major bottleneck—especially in **image generation, where larger vocabularies are often needed for higher fidelity and resolution**. While LLMs like LLaMA handle vocabulary sizes up to ~130k (2^17) without issue, our use of 2^16 was meant as an **illustrative threshold**, not a hard limit. We will revise the wording for clarity in the final version. [1] Yu et al., 2021. Vector-Quantized Image Modeling with Improved VQGAN. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal, which addressed some of my concerns. However, my primary concern remains: this work lacks methodological innovation and offers limited technical novelty, as also noted by Reviewer xmQD and Reviewer uJCp. The proposed methods are well-studied, and the conclusions drawn are largely well-established in the existing literature. I am leaning toward a weak accept, primarily due to the extensive experimental analysis. However, I am uncertain whether this alone is sufficient justification for acceptance. I will take into account further discussions with the other reviewers before finalizing my decision. Thank you. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback and for acknowledging the value of our experimental analysis.
Our goal is to provide a systematic and comprehensive study that we believe offers meaningful insights for the community—especially as AR models are increasingly adopted as unified generative frameworks. We appreciate your consideration and hope the broader contribution of our work is useful in ongoing discussions.
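To make the memory concern raised in W6 concrete, here is a minimal back-of-the-envelope sketch. The hidden dimension of 4096 and fp16 storage (2 bytes per parameter) are illustrative assumptions, not values from the paper:

```python
# Rough cost of the output projection (softmax layer) for next-token
# prediction as vocabulary size grows. hidden_dim=4096 and fp16 storage
# are illustrative assumptions, not numbers from the paper.
def projection_cost(vocab_size, hidden_dim=4096, bytes_per_param=2):
    """Return (#parameters, memory in GiB) of a hidden_dim x vocab_size matrix."""
    params = vocab_size * hidden_dim
    return params, params * bytes_per_param / 1024**3

for bits in (16, 17, 20, 24):
    params, gib = projection_cost(2**bits)
    print(f"vocab 2^{bits}: {params / 1e9:.2f}B params, {gib:.1f} GiB")
```

Under these assumptions the projection alone passes 4B parameters at a 2^20 vocabulary, which is consistent with the bottleneck described above, while 2^17-scale vocabularies remain cheap.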
Core Knowledge Deficits in Multi-Modal Language Models
Accept (poster)
Summary: This paper introduces the CoreCognition dataset, focusing on a systematic evaluation of multimodal models. Tasks in the benchmark are designed based on the well-established core-knowledge theory in developmental psychology. These tasks cover multiple aspects of human multimodal cognition, spanning from low-level perception tasks to high-level tasks that require tool use and strong commonsense knowledge. Evaluation of a large set of multimodal language models, together with correlation analysis between tasks and other benchmarks, also justifies the contributions of the proposed benchmark. Claims And Evidence: Claims made in the paper are well supported by both empirical and computational evidence. Overall, I like how each task is supported by the corresponding developmental psychology background. Model evaluations were also thoroughly performed; I can see most state-of-the-art vision-language models are covered. Correlation analyses and ablation studies are good. I do think there is a flaw underlying the task design, as all the tasks require language---usually, language is treated as a separate aspect of core knowledge, and it might account for explaining why this paper found low-level tasks are harder for models than high-level tasks. Methods And Evaluation Criteria: Yes, the proposed evaluation criteria generally make sense. I do see one potential problem in the difficulty of the tasks. In the developmental psychology literature, core knowledge usually focuses on children's ability to perform a basic understanding of the world, while the proposed tasks in this paper seem to be much harder problems that children cannot easily solve. However, this might be a minor issue as the authors are not considering studying the developmental or learning trajectory of these models. Theoretical Claims: There are no foreseeable issues with the theoretical claims made in this paper, as it is not a theory paper.
Experimental Designs Or Analyses: Experimental designs and analyses reported in this paper sound good to me. One thing that this paper needs further discussion on is how the authors performed the human study, for example, how many human participants were recruited and how were they paid? Supplementary Material: Yes, I reviewed the supplementary material. Providing more task examples in the supplementary would be more helpful. Relation To Broader Scientific Literature: This paper seems interesting to a broader audience, for example I think cogsci and developmental psychology people would be interested in the proposed benchmark. Essential References Not Discussed: Some related benchmarks that also focus on evaluating core knowledge are missing in the reference, e.g., AGENT [A], and Binz & Schulz, 2023. [A]. Shu, T., Bhandwaldar, A., Gan, C., Smith, K., Liu, S., Gutfreund, D., ... & Ullman, T. (2021, July). Agent: A benchmark for core psychological reasoning. In International conference on machine learning (pp. 9614-9625). PMLR. [B]. Binz, Marcel, and Eric Schulz. "Using cognitive psychology to understand GPT-3." Proceedings of the National Academy of Sciences 120.6 (2023): e2218523120. Other Strengths And Weaknesses: Overall I like this paper due to i). the importance of introducing developmental insights to model evaluations, and specifically I think the core knowledge approach, as demonstrated in this paper, did a neat job. ii). model evaluations were carefully performed and I can clearly see some insights that might be helpful for the broader community. Some potential weaknesses from this paper might include i). the VQA format, especially the fact that almost every task is entangled with language understanding, makes the core knowledge evaluated here not as clean as controlled experiments performed in the original developmental psych literature. 
This may lead to not directly comparable results (e.g., why low-level tasks are harder than high-level tasks for machines). Other Comments Or Suggestions: N/A. Questions For Authors: How will the proposed dataset/benchmark be released and how will the benchmark be maintained? I recommend doing something like a dataset card as the common practice in neurips benchmark tracks. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ```>>> Q1``` The tasks are much harder than what children can easily solve, compared to developmental CogSci ```>>> A1``` Thanks for bringing up this nuance. We address the concern in two aspects. First, while all our tasks are derived from standard developmental cognitive science prototypes, e.g., "Three Mountains", we extend the questions beyond the few typical examples from the textbooks to comprehensively evaluate the capabilities of MLLMs. This leads to a significantly larger and more systematic set of tasks and evaluations, rather than relying on the limited examples typically found in conventional literature. Second, it's important to note that the subjects being evaluated are not infants or young children with limited acquired knowledge, but rather MLLMs adapted from large language models (LLMs), which are reported to possess human PhD-level knowledge and reasoning abilities [1]. [1] OpenAI (2024). Learning to reason with LLMs. https://openai.com/index/learning-to-reason-with-llms/ --- ```>>> Q2``` One thing that this paper needs further discussion on is how the authors performed the human study; for example, how many human participants were recruited and how were they paid? ```>>> A2``` We recruited college students as participants, each receiving the same questions as the MLLMs. To ensure attentiveness, we included 1% mismatched question-answer or question-image pairs, asking participants to flag unclear or complex items within 90 seconds. Participants were paid only if they passed the attention check. 22 participants' answers were accepted for the final statistics. --- ```>>> Q3``` Providing more task examples in the supplementary would be more helpful. ```>>> A3``` Thanks for the advice. We have provided test cases in both the main paper and the supplementary. We will add more examples following your suggestion. --- ```>>> Q4``` Suggested references. ```>>> A4``` Thanks for the advice.
We will include all suggested papers in our citations and provide an extended discussion of them. --- ```>>> Q5``` Language is treated as a separate aspect of core knowledge, and it might account for explaining why this paper found low-level tasks are harder for models than high-level tasks. The VQA format, especially the fact that almost every task is entangled with language understanding, makes the core knowledge evaluated here not as clean as controlled experiments performed in the original developmental psych literature. This may lead to not directly comparable results (e.g., why low-level tasks are harder than high-level tasks for machines). ```>>> A5``` Thanks for raising this concern. We will address it in the following two aspects. First, we indeed acknowledge that the VQA format is entangled with language understanding. However, given the impressive capabilities of large language models (LLMs) in both language comprehension and their use as a tool for reasoning [1, 2, 3], we argue that the influence of language bias is relatively minimal when evaluating core knowledge. Second, we contend that the core challenge lies in developing a scientifically robust evaluation methodology that can specifically probe the abilities of LLMs and MLLMs. Regardless of the evaluation method—whether VQA, retrieval tasks, or using output logits—assessing the model's specific capabilities requires auxiliary task demands [4]. In this context, the interaction between core knowledge and other factors (whether linguistic or otherwise) is inevitable, as LLMs inherently rely on language as a vehicle for expressing and processing information. We hope this clarifies our rationale for using the VQA format. We will discuss this shortcoming in the Limitations section. [1] Millière, Raphaël. "Language Models as Models of Language." arXiv, 2024, arXiv:2408.07144. [2] Brown, Tom, et al. "Language models are few-shot learners."
Advances in neural information processing systems 33 (2020): 1877-1901. [3] Wei, Jason, et al. “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.” Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, 2022. [4] Hu, Jennifer, and Michael C. Frank. "Auxiliary task demands mask the capabilities of smaller language models." arXiv, 2024, arXiv:2404.02418. --- ```>>> Q6``` How will the proposed dataset/benchmark be released, and how will the benchmark be maintained? I recommend doing something like a dataset card as the common practice in neurips benchmark tracks. ```>>> A6``` All data will be released upon acceptance and after internal review. Following standard procedures, we will open-source the dataset on GitHub and Hugging Face, including dataset cards, descriptions, and a viewer. Additionally, all model predictions and results will be made publicly available to ensure reproducibility.
Summary: The paper investigates the hypothesis that the limitations of MLLMs in performing intuitive tasks, which are simple for humans, stem from the absence of "core knowledge"—innate cognitive abilities present from early childhood in humans. To explore this, the authors develop a novel benchmark called the CoreCognition dataset, designed specifically to assess these core cognitive concepts in MLLMs. The dataset covers 12 fundamental cognitive abilities and is used to evaluate 219 models across 10 different prompts, generating a total of 2,409 data points. The findings reveal that while MLLMs perform comparably to humans on complex, high-level cognitive tasks, they significantly underperform on simpler, low-level tasks. The study shows that there is no improvement in low-level cognitive abilities as model size and complexity increase, a stark contrast to high-level abilities which show scalable improvements. The authors also introduce “Concept Hacking” as an evaluation technique to demonstrate that MLLMs rely on superficial pattern recognition and shortcut learning rather than developing a genuine understanding of core knowledge. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A. Experimental Designs Or Analyses: Yes. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: - By assessing MLLMs against a dataset designed to probe these core cognitive concepts, the paper directly investigates whether state-of-the-art AI systems possess analogous foundational skills, which is crucial for developing AI that can genuinely simulate human-like intelligence. - The findings of the study provide a modern illustration of Moravec’s Paradox, showing that while MLLMs excel in high-level cognitive tasks, they struggle with basic cognitive tasks that are effortless for humans. 
Generally, the paper adds empirical data to ongoing debates about the nature of intelligence and complexity in AI systems. - By introducing "Concept Hacking," the paper adds a novel method to the repertoire of AI evaluation, specifically designed to test whether models truly understand the tasks they perform or merely capitalize on patterns and shortcuts. Essential References Not Discussed: N/A. There are some similar works that tackle core knowledge in evaluating LLMs, but not MLLMs. Other Strengths And Weaknesses: From my point of view, I think my concerns mainly lie in the motivation behind the proposed benchmark. As mentioned, the paper's motivation assumes that replicating human-like core knowledge is essential for the effective functioning of AI systems. This assumption is controversial and may not necessarily hold, as AI could potentially achieve high functionality through alternative means that do not mimic human cognitive processes. The debate on whether AI should replicate human cognition or develop its own unique methods remains unresolved and is a significant conceptual limitation of the study. Also, the motivation assumes that core knowledge can be clearly defined and operationalized within the context of AI systems, but pure evaluation based on QA answers could lead to benchmarks that do not accurately reflect the underlying theories or mechanisms of core knowledge. Can this benchmark provide valuable insights into the training or designing of MLLMs? How can the benchmark results help researchers improve their models? While the importance of core knowledge is widely acknowledged, I believe a good benchmark should serve purposes beyond simple evaluations. Other Comments Or Suggestions: typo: line 029 double dots. Questions For Authors: Given the inherent challenges in learning core knowledge from merely memorizing training data, how do you believe the CoreCognition benchmark can help advance the development of MLLMs in this domain?
Can this benchmark provide valuable insights into the training or development strategies that might enable LLMs to acquire better core knowledge capabilities? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ```>>> Q1``` typo: double dots. ```>>> A1``` Thanks. We will revise and remove all typos. --- ```>>> Q2``` The assumption that replicating human-like core knowledge is essential for the effective functioning of AI systems is controversial and may not necessarily hold? ```>>> A2``` Thank you for the question! We approach this from 2 complementary perspectives: 1) The question presumes that core knowledge is unique to humans or human-aligned AGI. While this is reasonable, it is also possible that core knowledge reflects more general cognitive principles or serves as a mechanistic prerequisite that may be useful—or even necessary—for any form of general intelligence, as suggested by chimpanzees showing behaviors aligned with core knowledge (Regolin and Vallortigara, 1995; Santos, 2004; Lake, 2017). This suggests that certain cognitive abilities emerge across intelligent systems, regardless of the path. Then even non-human pathways to AGI, e.g., scaling, might still lead to the emergence of these core abilities. In this light, a benchmark on core knowledge, such as ours, could offer a useful lens for evaluating progress toward AGI, regardless of the path taken. 2) Even if core knowledge is a human-specific trait, the human trajectory remains a valuable source of insight given the current absence of a clearly defined path to AGI. Despite great advancements, MLLMs continue to struggle with hallucination and a lack of OOD generalization and robustness, indicating that important cognitive capacities may still be missing. Humans, as the only model of high-level intelligence, can serve as a useful reference for evaluating artificial systems. Moreover, if AGI were to eventually follow a human-aligned path, our benchmark would become vital in assessing high-level reasoning and perception grounded in foundational cognitive structures rather than driven by spurious shortcuts.
--- ```>>> Q3``` The assumption that core knowledge can be clearly defined and operationalized within the context of AI systems; pure evaluation based on QA answers could lead to benchmarks that do not accurately reflect the underlying theories or mechanisms of core knowledge. ```>>> A3``` While isolating core knowledge dimensions is notoriously difficult, particularly in AI, certain aspects make this effort feasible. Extensive literature suggests that core knowledge exists and can be robustly probed and evaluated in humans, including infants who don't yet possess language or the ability to speak, as in Appendix B. The taxonomy of 12 core abilities addresses this challenge by operationalizing core knowledge through a set of lower-level abilities. These abilities serve as tractable proxies for otherwise abstract cognitive dimensions and also connect naturally to stage-based developmental theories (Piaget, 1976; Rochat, 2024). This makes it possible to evaluate how models handle fundamental cognitive operations and how such abilities may scaffold more complex reasoning. --- ```>>> Q4``` Pure evaluation based on QA answers could lead to benchmarks that do not accurately reflect the underlying theories or mechanisms of core knowledge. ```>>> A4``` We acknowledge that the VQA format is intertwined with confounding factors, such as language understanding (see Q5 of Reviewer nvFw). However, evaluating specific abilities in AI models inherently requires auxiliary task demands (Hu and Frank, 2024)—regardless of whether VQA, retrieval, or output logits are used—and every evaluation and benchmark faces a similar challenge to some degree. To mitigate biases introduced by the VQA format, our efforts include: - Instruct annotators to minimize the interplay of different abilities and exclude questions that require multiple competencies to answer. - Manually filter out ambiguous or confusing questions and use LLMs to enhance clarity and precision.
- Extensive experiments with various phrasings and prompting techniques to alleviate biases arising from specific wordings or prompt formulations. --- ```>>> Q5``` Can this benchmark provide valuable insights into the training or designing of MLLMs? How can the benchmark results help researchers improve their models? ```>>> A5``` Our benchmark reveals that current training methods fail to effectively give rise to these core abilities, suggesting future research into large-scale pretraining methods that can better scale core knowledge. More concretely, if core knowledge cannot be directly scaled, it may be beneficial to first teach or distill core knowledge into MLLMs prior to large-scale pretraining, enabling more data-efficient generalization akin to human learning. From an evaluation and design perspective, our benchmark uncovers distinct failure modes, including the lack of permanence, spatiality, boundary, and continuity. These limitations further hinder abilities such as visual perspective-taking and contribute to an over-reliance on shortcuts, which is a fundamental cause of poor out-of-distribution generalization.
Summary: The paper investigates core cognitive abilities in multimodal large language models. The authors find that models underperform in abilities that develop early in humans, while they perform comparably to humans on higher-level abilities. They show that multimodal language models often rely on shortcut learning. ## update after rebuttal: I still think this is a solid paper and recommend acceptance. Claims And Evidence: - In the conclusion, the authors write "(2) models' performance on high-level abilities does not correlate with the corresponding low-level abilities that ground them in humans". However, as shown in Figure 4 there are some correlations between lower-level abilities and higher-level abilities (boundary and continuity correlate quite well with intentionality and mechanical reasoning). While I agree in principle that higher correlations between low- and high-level abilities could be expected, this claim reads too strong to me. - Furthermore, they write "(3) such abilities exhibit very low scalability among models, meaning that simply raising the number of parameters could not better the models' performance on these abilities". While I agree that scale does not seem to yield perfect core abilities, it does look like almost every ability (except for perspective) benefits from scale in section 4.5. Sure, conservation and spatiality only slightly improve, but all others seem to show some improvement with scale. Maybe this section would benefit from some numbers to make the plot a bit more digestible, such as "sensorimotor ability accuracy improves by 0.X on average". The resulting claim that "while scaling improves high-level reasoning, its effect on low-level cognitive abilities is minimal and, in some cases, even detrimental" therefore seems not perfectly supported, especially as the detrimental case (perspective taking) seems to have poor external validity. Methods And Evaluation Criteria: The choice of experiments seems principled.
Also, the study investigates a large number of models. Theoretical Claims: Not applicable Experimental Designs Or Analyses: - Supplementary Material: I reviewed sections D, E and F. Relation To Broader Scientific Literature: This paper contributes to a larger literature on how VLMs struggle with basic visual processing. It goes some way towards understanding what the missing components are that prohibit VLMs from having human-level visual understanding. Essential References Not Discussed: There are three other works that are not mentioned and that showcase the weakness of VLMs on cognitive tasks. [1] investigates higher level visual cognition including intuitive physics. [2] also investigates intuitive physics in VLMs. [3] investigates visual illusions in VLMs and uses a technique that is similar to the "concept-based hacking" proposed here. [1] Schulze Buschoff, Luca M., et al. "Visual cognition in multimodal large language models." _Nature Machine Intelligence_ (2025): 1-11. [2] Balazadeh, Vahid, et al. "Synthetic Vision: Training Vision-Language Models to Understand Physics." _arXiv preprint arXiv:2412.08619_ (2024). [3] Ullman, Tomer. "The Illusion-Illusion: Vision Language Models See Illusions Where There are None." _arXiv preprint arXiv:2412.18613_ (2024). Other Strengths And Weaknesses: The paper is original in that it investigates specific low level core abilities in VLMs. The experiments are thorough and performed on a large number of models. The investigation is definitely timely. Other Comments Or Suggestions: Typos - Two dots in the abstract on line 29 - Package import calls on line 1351 of the Appendix - Line 169 right column "as" missing between "matching" and "the"? Text color - The yellow and red text color on page 5 is very hard to read and it is distracting in general. Phrasing - Line 233 in the left column reads "[...] a clear upward trend can be identified as the concepts move from low-level to high-level. 
This can be concluded as MLLMs perform worse on lower-level abilities than on higher-level ones, or in other words, there exist core knowledge deficits in Multi-Modal Language Models." Maybe I misunderstand but I thought all 12 abilities were considered core abilities. Sure, if MLLMs struggle on the lower-level abilities there are core knowledge deficits but this sentence reads as if the difference between performance on low to high level abilities is what constitutes core knowledge deficits. - In 5.1 the authors write "At the core of the control experiment lies a _novel_ technique termed concept-based hacking". A similar technique has already been proposed for the investigation of visual illusions in VLMs in [3]. Questions For Authors: 1. Could the authors maybe speculate on why the perspective tasks shows such a poor external validity and is the only task that does not scale with the number of parameters? It seems like something might be off here. 2. In section 5, an agent with core knowledge should get control and manipulation right. A shortcut learner should get only the control right. And a non-core knowledge agent should only get the manipulation right. Basically, the latter would be a model that learns the wrong intuitions about basic visual properties, if I understand correctly. Now, the results seem to show that a large number of models actually fall into this category. Could the authors also speculate on why these models seem to learn wrong visual intuitions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ```>>> Q1``` "high-level abilities do not correlate with the corresponding low-level abilities" is too strong ```>>> A1``` Thanks! In Sec 4.3, the correlations between lower- and higher-level abilities are generally below 0.4. This is considerably lower than what's commonly observed in humans, though the phrasing may be strong. We will revise it to better describe the observation. ```>>> Q2``` Scaling analysis: add numbers to make the plot more digestible. "low-level cognitive abilities is minimal...even detrimental" seems not perfectly supported. ```>>> A2``` Thanks for the advice. Our current efforts include 1) a comparison of scalability (slope) in the bottom left of Fig. 6, and 2) $R^2$ values represented by the size of the blobs to the left of the ability names in the upper-left sub-figure of Fig. 6. We will also add specific numbers in the text to enhance clarity, as suggested. We acknowledge that most abilities show some scalability; however, the scaling effect on lower-level abilities is significantly smaller (half the value) than that of higher-level abilities, which supports our conclusion. We will revise the text to better reflect the observation. Please see A4 below for a discussion of the perspective-taking exception. ```>>> Q3``` Suggested citation. A similar technique...for visual illusions in VLMs in [3]. ```>>> A3``` Thanks for bringing this up. While [3] concurrently introduces a related idea, there are significant differences between the two. We offer a comprehensive controlled methodology grounded in the evaluation of core knowledge. Specifically, we formally propose a systematic procedure to manipulate input samples ($X$) by flipping concept-level labels ($Y$) through targeted changes to causal features ($S$) while holding non-causal features ($B$) constant (formally, the true predictive distribution factorizes as $p(Y \mid X) = \int p(Y \mid S, B)\, p(S, B \mid X)\, dS\, dB$).
In contrast, [3] presents only 10 visual-illusion samples (valuable, but largely anecdotal), which are not situated within a broader diagnostic or theoretical framework with clear motivation and research questions. In addition, we present Figure 8, showing that the development trend of current MLLMs (with respect to scaling) is not in the ideal direction but is biased towards illusion or shortcut. This adds an interpretable, diagnostic dimension to our evaluation that is absent in [3]. ```>>> Q4``` "Worse on lower-level abilities...there exist core knowledge deficits". Aren't all 12 abilities core abilities? ```>>> A4``` All 12 abilities in our benchmark are grounded in core knowledge dimensions. However, lower-level abilities are operational approximations of basic cognitive systems and are thus more directly aligned with the notion of "core knowledge", while higher-level abilities are more abstract or compositional cognitive tasks. The observed upward trend in performance does not imply that only lower-level abilities reflect core knowledge. Rather, it suggests that while models may perform better on higher-level tasks—potentially by pattern matching or spurious correlation—they often struggle with the more fundamental reasoning required for lower-level tasks. This gap is what we refer to as a core knowledge deficit: the failure to demonstrate a robust understanding of the foundational abilities that higher-level tasks presuppose. ```>>> Q5``` Why do perspective tasks show such poor external validity and fail to scale? ```>>> A5``` Thanks for the question, which relates to a key finding here. The perspective-taking task in our benchmark is based on the "Three Mountains" experiment, a type of level-2 perspective-taking (Moll, 2010), which requires mental simulation—the ability to build an internal model of the world and reason from it (Johnson-Laird, 1982).
Whether MLLMs possess such internal world models is debated (Mitchell, 2023; Goddu et al., 2024) and the lack of scalability in perspective suggests current models may not. Unlike other tasks, perspective-taking additionally demands constructing a spatial model, the absence of which could explain the unexpected downward trend we observe in performance as model size increases. ```>>> Q6``` A large number of models learn wrong visual intuitions? ```>>> A6``` Core-illusion refers to models' response driven by a natural perception of the world, i.e., a lack of core knowledge. Models that learn from statistical correlations in the data may fall short in acquiring core abilities, as MLLMs trained on vast, multimodal datasets are often biased by shortcut signals, producing answers that resemble advanced reasoning but lack the conceptual grounding that allows humans to apply knowledge flexibly and consistently across contexts (Mitchell, 2023). Due to limited space, kindly refer to Reviewer ZrNC's A2 for a technical explanation of why such a distinction can be rooted in the pretraining process.
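The control/manipulation quadrants discussed in this exchange can be sketched as a tiny decision rule over paired outcomes. The function name, labels, and sample outcomes below are illustrative, not taken from the paper:

```python
from collections import Counter

def classify_agent(control_correct: bool, manip_correct: bool) -> str:
    """Assign a model to a quadrant based on a matched item pair: a control
    item and its concept-hacked manipulation (causal features S flipped,
    non-causal features B held constant)."""
    if control_correct and manip_correct:
        return "core knowledge"   # robust to the causal-feature flip
    if control_correct:
        return "shortcut"         # rides on non-causal features B
    if manip_correct:
        return "core illusion"    # answers track naive perception
    return "neither"              # fails both conditions

# Hypothetical per-pair outcomes (control_correct, manip_correct) for one model
outcomes = [(True, True), (True, False), (False, True), (True, True)]
print(Counter(classify_agent(c, m) for c, m in outcomes))
```

Aggregating these labels per model is one way to read Figure 8: a shortcut learner passes only controls, while a model driven by naive perception passes only manipulations.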
Summary: This paper presents an evaluation framework for assessing the image understanding capabilities of multimodal language models (MLLMs) from a lens of cognitive taxonomy of concepts of learning. Inspired by the cognitive science literature on visual concept learning, the authors present a “CoreCognition” benchmark, which has been curated by defining a set of abstract learning ‘milestones’ or ‘skill levels’ of human visual cognition, and creating questions in a stratified manner per ‘skill level’ to controllably assess the image understanding capabilities of SOTA LLMs, bringing in a cognitive perspective into the benchmark curation. They ensure quality by conducting manual reviews of every question, ensuring that the options are cycled to mitigate positional bias in LLM answering, and testing various models under multiple prompted settings to regularize any instruction following effects. By probing LLMs, the authors conclude that LLMs perform better on ‘higher level skills’ as opposed to more fundamental, ‘early developed’ core capabilities under their proposed taxonomy. Further, they test scaling of models to understand the trends of curriculum of tasks v/s model sizes, showing inconsistent behaviour e.g. more ‘core’ capabilities not scaling as expected. Finally, the authors throw light on ‘Concept Hacking’, highlighting the tendency of LLMs to rely on spurious correlations to solve questions of these kinds. Claims And Evidence: All the claims in the submission are supported by evidence where necessary. Specifically, 1. The claim on 'MLLMs consistently perform worse on low-level abilities compared to high-level abilities' is empirically supported by the evaluation of LLMs under these categories across different LLMs and prompting techniques. 
The proposed benchmark has been curated by attempting to define a taxonomy in line with the existing cognitive science literature, which is appreciated (and much needed) as we move towards an era of natively multimodal LLMs. The taxonomy and abstraction of the proposed curriculum is a first attempt, and I'd be interested in understanding why this taxonomy was chosen, and if there are any specific inspirations even from the ML literature, e.g. OOD robustness, robotics. 2. The claim on "No observable scaling on low-level abilities with respect to increasing model parameters" is an interesting one: while I do see the empirical backing, did the authors expect otherwise and why? Do we _want_ models to follow a similar curriculum as humans do? 3. On concept hacking, the claim that models rely on spurious correlations - this is not directly visible from Figure 8, and it would be greatly appreciated if the authors could intuitively explain the difference between illusions and shortcuts. Nevertheless, it is good to note that humans perform in a superior fashion. Methods And Evaluation Criteria: Yes. The method itself is a benchmark. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. As mentioned above, they ensure quality by conducting manual reviews of every question, cycling the answer options to mitigate positional bias in LLM answering, and testing various models under multiple prompting settings to control for instruction-following effects. Supplementary Material: Yes, briefly reviewed all sections A-F. Relation To Broader Scientific Literature: This paper proposes a standard benchmarking methodology. The piece on shortcut learning is related to previous work on OOD robustness, an example linked below. Essential References Not Discussed: There is rich literature on OOD robustness (taking the classic example of Waterbirds vs. Landbirds), e.g. [1], which could be referred to.
Spurious correlations in the field aren't new, and I'd be curious to know if the space of spurious correlations for LLMs is any different from the space of spurious correlations for ResNets/ViTs etc. [1] Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization https://arxiv.org/abs/1911.08731 Other Strengths And Weaknesses: **Strengths**: 1. Overall, I like the motivation to build cognitively inspired benchmarks for LLMs because it is much needed to bring interdisciplinary perspectives into evaluation. Further, the evaluation methodology in the paper is rigorous. 2. I really appreciate the taxonomy that has been proposed: not only does this present a good benchmark, it also inspires other researchers in the field to conduct stratified analysis / training on such kinds of data to improve their models for specific facets of tasks. **Weaknesses** 1. Considering the overall premise of the paper: Given that we are building benchmarks inspired by human cognition - do we a) _expect_ and b) _desire_ that our models follow _the same curriculum_ as humans do? Will models follow the same path to AGI as we humans learn? I'd love to hear from the authors on what exactly they want the LLM community to take away from this paper when thinking in meta-terms about the overall capabilities, and how we scale these capabilities. Other Comments Or Suggestions: 1. There is an extra dot in the abstract on line number 29. 2. It would be nice to have an explanation for every quadrant in Figure 8. Questions For Authors: Echoing the points I have mentioned above as questions: 1. Considering the overall premise of the paper: Given that we are building benchmarks inspired by human cognition - do we a) expect and b) desire that our models follow the same curriculum as humans do? Will models follow the same path to AGI as we humans learn?
I'd love to hear from the authors on what exactly they want the LLM community to take away from this paper when thinking in meta-terms about the overall capabilities, and how we scale these capabilities. 2. It is very interesting to note that the 'cognitive instruction' seems to perform superiorly. This implies that certain cognitive priming to the models can help boost accuracy on the benchmarks. Do the authors have any comments? 3. Is the space of LLMs for spurious correlations any different from the space of spurious correlations for ResNets/ViTs etc.? Can this benchmark be used to evaluate existing vision models? If not, what makes this eval benchmark specific to LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. However, due to limited space, we could only answer the most significant questions here. We look forward to discussing the rest (i.e., illusion vs. shortcut and the 4 quadrants of Fig 8, suggested citations, etc.) in the discussion phase! ```>>> Q1``` Why this taxonomy was chosen, and if there are any specific inspirations from the ML literature e.g. OOD robustness, robotics? ```>>> A1``` We chose core knowledge as the basis for our taxonomy because it offers a theoretically grounded and developmentally validated account of the foundational systems underlying human cognition (Spelke & Kinzler, 2007). These dimensions are widely seen as essential to general intelligence, making them a meaningful lens for evaluating AI capabilities (Carey & Gelman, 1993; Carey, 2009; Spelke, 2022). While not directly inspired by robotics or OOD robustness, core knowledge is highly relevant to both, as generalization, transfer, and embodied reasoning rely on intuitive world understanding—precisely what core knowledge aims to capture. Thus, we provide a complementary perspective that can inform evaluations of generalization, transfer, and robustness in AI. ```>>> Q2``` The claim on "No observable scaling on low-level abilities..." Did the authors expect otherwise and why?...Do we a) expect and b) desire that our models follow the same curriculum as humans do? Will models follow the same path to AGI as we humans learn?...thinking in meta-terms about the overall capabilities, and how we scale these capabilities. ```>>> A2``` First, we do not expect otherwise, as we anticipate challenges for the emergence of core abilities simply through large-scale pretraining on statistical co-occurrence.
Another potential reason is that, compared to high-level details required for complex tasks, core knowledge used in simpler tasks is spread across diverse contexts and the vast parameter space of the network (Shani et al., 2023), thus harder to isolate and apply systematically, leading to inconsistent or surface-level reasoning. It's a great question whether AI should follow the path of humans. We elaborate on a discussion in A2 of Reviewer Kcuh (due to limited space), that our paper neither argues for nor against the necessity of mirroring humans in pursuit of AGI. Rather, core knowledge is introduced as an evaluation for MLLMs, and the core knowledge deficit hypothesis is proposed as an explanation for the observed brittleness of MLLMs. More broadly, we hypothesize that core knowledge may represent a general principle of intelligence--human or otherwise. The critical research question is not whether models should mimic human learning, but how core abilities can emerge through scaling with cognitive or human-inspired adaptation, or other measurements. ```>>> Q3``` 'Cognitive instruction' seems to perform superiorly. This implies that certain cognitive priming to the models can help boost accuracy on the benchmarks. Any comments? ```>>> A3``` For now, we don't have a definitive explanation, but we find this effect aligned with early insights from the connectionist literature, which suggests that distributed representations pose a challenge for structured knowledge retrieval. As networks scale, retrieving specific conceptual structures becomes increasingly difficult (Hinton et al., 1986; Chalmers, 1990), especially for core knowledge. Unlike high-level knowledge, e.g., historical events, which are likely encoded in clustered patterns, core knowledge is highly distributed across the model’s parameters, as they are in multiple instances in training data, making them hard to isolate and deploy systematically for reasoning tasks (Garrigan, 2008; Green, 2024). 
We hypothesize that cognitive instruction may act as a retrieval cue that helps direct the model's internal attention toward latent but relevant knowledge. However, we do not believe this constitutes a permanent or scalable solution. In real-world environments, models are unlikely to receive such explicit guidance, limiting the practical utility of this strategy. Nevertheless, this result may point toward a promising research direction for improving reasoning via targeted scaffolding or memory-based mechanisms. ```>>> Q4``` Is the space of LLMs for spurious correlations any different from the space of spurious correlations for ResNets/ViTs, etc.? Can this benchmark be used to evaluate existing vision models? If not, what makes this eval benchmark specific to LLMs? ```>>> A4``` The spaces differ: LLMs operate in an abstract, discrete token space, while vision models operate in a high-dimensional pixel space. Our benchmark cannot be directly applied to vision models, since the questions are in VQA format (i.e., question answering). However, it could be adapted into retrieval or binary-classification tasks for vision models to test. That said, this falls outside the scope of this paper, and we leave it to future efforts.
QUTE: Quantifying Uncertainty in TinyML models with Early-exit-assisted ensembles for model-monitoring
Accept (poster)
Summary: This paper proposes QUTE, a new uncertainty quantification (UQ) method for tinyML models on low-power devices. It uses a lightweight early-exit ensemble to reduce size and computation while maintaining accuracy. QUTE produces models 59% smaller than prior approaches and reduces latency by 31%, and it also improves accuracy-drop detection. Claims And Evidence: The claims in the paper are partially supported by the evidence provided. The authors present empirical results showing improvements in model size, latency, and accuracy-drop detection. While these results suggest the effectiveness of QUTE, additional evaluations on diverse architectures and real-world scenarios would further strengthen the claims. Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria are appropriate for this case study. Using early-exit ensembles and uncertainty quantification fits well for tinyML models. The evaluation metrics are also relevant and help to further validate the proposed approach. Theoretical Claims: I found that this paper does not provide formal proofs for its theoretical claims, focusing instead on empirical results. Theoretical analysis is not a major part of this work, and the claims are primarily supported by experimental evidence. Experimental Designs Or Analyses: I think the experimental designs and analyses are generally sound. The evaluation metrics seem appropriate to me for this problem. However, in my view, the experiments could benefit from a wider range of benchmarks, and testing on more diverse architectures or real-world data would make the results stronger. Some more detailed analysis of trade-offs would also help. Supplementary Material: Yes, I did review the supplementary material. I looked at the sections that describe the experimental setup and additional results.
Relation To Broader Scientific Literature: In my view, the key contributions of the paper build on existing work in uncertainty quantification (UQ) and tinyML models. The idea of using early-exit ensembles for UQ has been explored, but QUTE improves efficiency by reducing model size and latency. Compared to previous studies, it addresses the challenges of applying UQ to tinyML, showing significant improvements in both efficiency and accuracy-drop detection. Essential References Not Discussed: In my view, the paper cites relevant prior work, but there are a few essential references that could further strengthen the context for the key contributions. For example, 1. Multi-Dimensional Conformal Prediction 2. TinyTTA: Efficient Test-time Adaptation via Early-exit Ensembles on Edge Devices 3. MIMMO: Multi-input massive multi-output neural network Other Strengths And Weaknesses: Strengths: 1. The paper combines early-exit ensembles with uncertainty quantification (UQ) for tinyML models, providing a resource-efficient solution for low-power devices. 2. The results show notable improvements in latency and model size compared to prior methods, making it highly relevant for resource-constrained applications. 3. The experimental results are well-presented, clearly demonstrating the effectiveness of the proposed method. Weaknesses: 1. The paper could benefit from a more detailed discussion of recent related works, especially those from 2023, 2024 and 2025, to better contextualize the contributions. 2. The paper relies mostly on experiments and does not provide a formal theoretical analysis. So, I think some theoretical support would make it stronger. 3. While the results are acceptable, more diverse real-world evaluations would strengthen the paper and further show the method’s generalizability. Other Comments Or Suggestions: 1. 
The study presents QUTE as a more efficient alternative to EE-ensemble, but there is no theoretical analysis showing why transferring early-exit knowledge improves uncertainty estimation. 2. I would suggest providing a clear differentiation from existing uncertainty-aware early-exit architectures. 3. The proposed model copies early-exit weights to early-view blocks. 4. The confidence threshold (ρ) plays a critical role in the proposed model, so I want to know how sensitive the results are to this threshold. 5. After reading the paper carefully, the authors claim that smaller models perform better under corruption, but how and why? 6. A more detailed comparison with previous uncertainty quantification methods would be helpful. 7. Does the proposed QUTE model handle OOD? If yes, how does QUTE work compared to standard techniques? 8. How does QUTE reduce unnecessary energy costs? Questions For Authors: 1. Does the proposed QUTE model work for large models? 2. Is the removal of the final exit always important and beneficial? 3. Does QUTE work under distribution shift? Ethical Review Concerns: I did not flag this paper for an ethics review. There are no ethical concerns I identified in this paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and thoughtful feedback. Given that your comments are highly encouraging and did not raise significant concerns, we would appreciate any insights into the current score to help improve our work. We look forward to having a fruitful discussion and addressing any concerns/suggestions. > on a few missing essential references Thank you for the suggestions. We have already included MIMMO (Ferianc et al., 2023) and will add the others. > on why transferring early-exit (EE) knowledge improves uncertainty estimation Please see the response to reviewer H2zK for ablation results on EV-assistance's impact on ensemble diversity. **Intuition**: Theoretical analysis on how EE knowledge transfer improves calibration would undoubtedly strengthen the paper. Our work builds on the intuition and empirical observations from prior studies, such as [1] and [2], which leverage EEs to form ensembles. These studies suggest that intermediate classifiers extract progressively refined feature abstractions, incorporating complementary feature representations that enhance uncertainty estimates. Unlike prior works, which combine these diverse feature representations from multiple EEs to form an ensemble, our approach generates complementary feature representations from different depths by directly utilizing the weights of EE layers, which is significantly more resource-efficient. QUTE's improved calibration supports this hypothesis. > on suggested theoretical analyses While theory is not central to this work, we thank reviewers y5iN and H2zK for highlighting some interesting avenues for theoretical investigations. Specifically, questions such as *why transferring early-exit knowledge improves calibration?* and *why QUTE’s weight transfer mechanism imbibes diversity?* could certainly strengthen the paper. 
To address this, we have conducted an ablation study to investigate the impact of EV-assistance on ensemble diversity (see reply to reviewer H2zK). While formal derivations may be challenging within the rebuttal period, we plan to explore these questions further and consider including them in the final version. We will also clarify that our findings are empirical rather than theoretical. > on sensitivity of results to confidence threshold We report threshold-free AUPRC and AUROC metrics by varying the confidence threshold from 0 to 1 in steps of 0.1. These metrics summarize the trade-off between precision-recall and TPR-FPR across different thresholds, ensuring results are not optimized for a specific threshold. > [...] authors claim that smaller models perform better under corruption, but how and why? Prior work [3] shows that larger models overfit, leading to overconfidence, as they extract high-level abstractions that make distinguishing corrupted from clean inputs harder. In contrast, smaller models benefit from implicit regularization, preventing overfitting. They generalize better on corruptions by relying on more stable features instead of memorizing fine-grained ones. Our empirical results (Table 1) support this. From a Bayesian perspective, smaller models can exhibit a wider posterior due to implicit regularization, making them better at capturing uncertainty. > Does the proposed QUTE model handle OOD? [...] Section 6.3 (Table 2) compares QUTE’s OOD detection against G-ODIN [3]. QUTE outperforms G-ODIN on tiny models and is competitive on larger ones. See Section 6.3 for details. > How does QUTE reduce unnecessary energy costs? See our response to reviewer H2zK. > Does the proposed QUTE model work for large models? Yes. We evaluate QUTE’s accuracy-drop detection on Resnet50. See our response to reviewer XBf5 for details. > Is the removal of the final exit always important and beneficial? Yes. 
The final exit tends to be overconfident under corruptions, degrading accuracy-drop detection. For example, on MNIST with fog corruption, the final exit's average confidence **increases** by 23% compared to its in-distribution confidence (i.e., overconfidence), whereas the QUTE ensemble's average confidence **decreases** by 52%, improving accuracy-drop detection. > Does QUTE work under distribution shift? Yes. Sections 6.2 and 6.3 show QUTE detects accuracy drops under corruptions and OOD inputs, both forms of distribution shift. We also evaluate different corruption severity levels, simulating varying degrees of shift. However, failure detection differs from robust generalization (performing reliably under shifts) and adaptation (adjusting to unknown inputs). Like all baselines, QUTE is a failure detection method. [1] Qendro et al. Early exit ensembles for uncertainty quantification, PMLR 2021 [2] Ferianc et al., Multi-input massive multi-output neural network. CVPR 2023 [3] Hsu et al., Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. CVPR 2020 --- Rebuttal Comment 1.1: Comment: I appreciate the effort put into addressing the reviewers' concerns. The clarifications and additional experiments have helped improve the paper. Taking into account both your responses and the perspectives of the other reviewers, I am adjusting my score to 3 (weak accept). --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to re-evaluate our paper. We appreciate your constructive feedback and are glad that the clarifications and additional experiments have helped improve the paper. Your updated score and thoughtful comments are greatly appreciated.
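The threshold-free evaluation described in the rebuttal above, i.e., sweeping a confidence threshold from 0 to 1 and summarizing the resulting TPR/FPR trade-off, can be sketched generically in a few lines. This is an illustrative toy implementation, not the authors' code; the failure scores, labels, and function names are assumptions:

```python
# Minimal sketch: AUROC via an explicit confidence-threshold sweep,
# mirroring a "vary the threshold from 0 to 1 in steps of 0.1" evaluation.
# Labels: 1 = failure case (e.g., misclassification), 0 = correct prediction.
# Scores: an assumed failure score such as 1 - max softmax confidence.

def roc_points(scores, labels, steps=11):
    """(FPR, TPR) pairs for thresholds swept uniformly over [0, 1]."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for i in range(steps):
        t = i / (steps - 1)
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))
    return sorted(pts)

def auroc(scores, labels):
    """Trapezoidal area under the swept ROC curve."""
    pts = roc_points(scores, labels)
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Toy data: failures tend to receive higher failure scores.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(auroc(scores, labels), 3))  # → 0.889
```

With finer threshold grids (or by thresholding at every distinct score) this converges to the exact rank-based AUROC; AUPRC is computed analogously from precision/recall pairs per threshold.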
Summary: The paper introduces QUTE, a novel uncertainty quantification (UQ) framework optimized for TinyML models, addressing the challenge of efficient model monitoring in resource-constrained environments. QUTE leverages early-exit-assisted ensembles, where lightweight classification heads at the final network exit receive knowledge distilled from early-exits, ensuring diverse and resource-efficient ensemble predictions. Unlike prior methods that suffer from high computational and memory overheads, QUTE achieves 59% smaller model sizes and a 31% reduction in inference latency while maintaining superior uncertainty estimation quality. The proposed approach excels at detecting accuracy drops due to corrupted in-distribution (CID) data and outperforms state-of-the-art UQ methods in failure detection and calibration, making it a practical solution for on-device model reliability in real-world TinyML deployments. Claims And Evidence: The paper makes several key claims about QUTE, and most are well-supported by experimental evidence, but some areas could benefit from additional clarification or justification. Below is a critical assessment of the claims and the corresponding evidence: - Claim: QUTE is more resource-efficient than prior uncertainty quantification methods, with 59% smaller model sizes and 31% lower inference latency. - Evidence: The paper provides empirical results comparing QUTE against EE-Ensemble, Deep Ensembles (DEEP), and MCD on MCUs (Big-MCU and Small-MCU). Figure 3 and Section 6.1 show clear reductions in model size and latency, particularly demonstrating that some baselines (e.g., DEEP) do not fit on memory-constrained devices. Assessment: Well-supported. The quantitative results strongly back this claim. - Claim: QUTE outperforms all baselines in detecting accuracy drops due to corrupted in-distribution (CID) data. 
- Evidence: Section 6.2 presents AUPRC scores across multiple datasets (MNIST-C, CIFAR10-C, TinyImageNet-C), showing that QUTE consistently achieves the highest values, outperforming EE-Ensemble, DEEP, and G-ODIN. The discussion highlights how QUTE's early-exit knowledge transfer improves uncertainty calibration. Assessment: Well-supported. However, an ablation study isolating the impact of early-exit knowledge transfer versus conventional early-exit ensembles would strengthen this claim. - Claim: QUTE provides better uncertainty calibration than competing methods. - Evidence: Table 3 presents expected calibration error (ECE), negative log-likelihood (NLL), and Brier Score (BS), showing that QUTE achieves lower ECE and NLL in TinyML settings. Additionally, QUTE+ (with enhanced output heads) further improves calibration, approaching Deep Ensembles in performance. Assessment: Mostly supported. The results are compelling, but QUTE’s reliance on additional layers for improved calibration in larger models (QUTE+) suggests a trade-off between efficiency and calibration performance, which should be explicitly acknowledged. - Claim: QUTE enables better failure detection, distinguishing both in-distribution misclassifications (ID✓ | ID×) and out-of-distribution samples (ID✓ | OOD). - Evidence: Table 2 reports AUROC values, demonstrating that QUTE outperforms all baselines in ID✓ | ID× detection and is competitive in ID✓ | OOD detection. However, in larger models, G-ODIN performs better in OOD detection. Assessment: Well-supported with a minor caveat. The claim should clarify that QUTE excels in CID detection and misclassification detection, but specialized methods like G-ODIN may still be preferable for OOD scenarios. Problematic or Weakly Supported Claims: QUTE achieves superior calibration without increasing model complexity. - Issue: The results indicate that for larger models, QUTE+ (which adds learning layers) is needed to match calibration quality. 
This suggests that superior calibration does require additional complexity in certain cases, contradicting the claim. - Suggested Fix: The authors should clarify that QUTE is highly efficient in TinyML settings, but larger models may require additional modifications to achieve optimal calibration.

QUTE generalizes well across all TinyML applications. - Issue: The evaluation is focused on image and audio classification tasks (e.g., MNIST, CIFAR10, SpeechCommands). There is no evidence for its effectiveness in other TinyML applications like time-series forecasting, anomaly detection, or reinforcement learning. - Suggested Fix: The authors should either broaden the scope of their evaluation or refine the claim to state that QUTE is optimized for classification tasks in TinyML.

Methods And Evaluation Criteria: The methods and evaluation criteria used in the paper are generally well-chosen for the problem of uncertainty quantification (UQ) in TinyML models, but there are some areas that could be improved or clarified. Below is a critical assessment:

Strengths of the Methods and Evaluation Criteria:

Choice of Benchmark Datasets: The authors evaluate QUTE on four different datasets:
- MNIST (4-layer CNN)
- SpeechCommands (4-layer DSCNN for keyword spotting)
- CIFAR10 (ResNet-8, MLPerf benchmark model)
- TinyImageNet (MobileNetV2)

Justification: These datasets represent varying levels of complexity and modality (image vs. audio), which is appropriate for evaluating TinyML models. Assessment: Appropriate choice. However, all tasks are classification-based, and there is no evaluation on non-classification tasks (e.g., time-series forecasting or regression), which limits generalization.

Evaluation Metrics for Uncertainty Quantification: The paper reports Expected Calibration Error (ECE), Brier Score (BS), and Negative Log-Likelihood (NLL) to assess uncertainty quality.
The use of AUROC and AUPRC for failure detection and accuracy-drop detection is well-motivated, as they provide threshold-independent performance evaluations. Assessment: Well-founded choices. However, ECE is known to have limitations, and an alternative like adaptive calibration error (ACE) or logit-scaled calibration metrics could provide a more robust evaluation.

Comparison with Relevant Baselines: The paper compares QUTE with several state-of-the-art UQ methods:
- Monte Carlo Dropout (MCD)
- Deep Ensembles (DEEP)
- EE-Ensemble (prior early-exit-based ensemble method)
- G-ODIN (OOD detection method)
- HYDRA (ensemble distillation)

Assessment: Comprehensive comparisons. The selected baselines are appropriate, covering both ensemble-based UQ approaches and early-exit methods. However, Bayesian Neural Networks (BNNs), which are a key competitor in UQ, are not included in the evaluation, even though they are mentioned in the related work.

Microcontroller (MCU) Deployment Evaluations: The paper evaluates QUTE on two MCU platforms (Big-MCU: STM32F767ZI, Small-MCU: STM32L432KC) to assess real-world feasibility. The results show that QUTE reduces latency by 31% and has a 59% smaller model size, highlighting its TinyML suitability. Assessment: Crucial evaluation for real-world deployment. However, power consumption analysis (e.g., energy per inference) would further strengthen the real-world applicability.

Weaknesses and Areas for Improvement:

Lack of Justification for Hyperparameter Choices: The number of early-exits (K) and the weighting factors (wEVk and δ) for knowledge transfer are empirically chosen, but no systematic analysis is provided. Fix: An ablation study on K and weighting factors should be included to demonstrate their impact on performance.
Absence of Statistical Significance Testing: The paper presents mean and standard deviation for some metrics but does not report statistical significance tests (e.g., t-tests, Wilcoxon signed-rank tests) to confirm that performance differences are meaningful. Fix: Including confidence intervals or statistical significance testing would improve result robustness.

Limited Generalization Beyond Classification Tasks: The evaluation is entirely focused on classification tasks, which makes it difficult to assess how well QUTE generalizes to other TinyML applications (e.g., time-series forecasting, anomaly detection, or reinforcement learning). Fix: The paper should either broaden its evaluation scope or clearly limit its claims to classification-based tasks.

Theoretical Claims: The paper does not present formal mathematical proofs but makes several theoretical claims regarding the behavior of uncertainty quantification in TinyML models, particularly in the context of early-exit-assisted ensembles. Below is a critical evaluation of these theoretical claims:

Key Theoretical Claims and Their Validity

Claim: Early-exit knowledge distillation enhances uncertainty estimation while reducing model size and compute overhead. Explanation: The paper introduces a method where early-exit layers are used during training to distill uncertainty-related knowledge into lightweight output heads at the final exit. The claim is that this approach enables diverse ensemble behavior without significant computational overhead. Evaluation: The method is well-motivated by prior works on early-exit architectures (Teerapittayanon et al., 2016; Ghanathe & Wilton, 2023). The empirical results show that QUTE reduces latency (31%) and model size (59%), supporting the efficiency claim. However, the claim that this leads to "diverse ensemble behavior" is only empirically shown and lacks a formal proof of how early-exit distillation systematically maintains diversity in predictions.
Potential Issue: The paper could benefit from a theoretical analysis showing how the weight transfer mechanism preserves predictive diversity across ensemble members.

Claim: QUTE's predictive confidence is correlated with accuracy, leading to improved calibration. Explanation: The paper states that QUTE produces better-calibrated uncertainty estimates because its final exit integrates knowledge from early-exits, reducing overconfidence issues common in deep networks. Evaluation: The claim is supported by empirical calibration metrics (ECE, NLL, BS in Table 3), showing that QUTE achieves lower calibration errors than baselines. However, there is no theoretical justification explaining why early-exit knowledge distillation improves uncertainty calibration. Potential Issue: A mathematical framework or proof showing how QUTE affects the confidence distribution could strengthen this claim. For example, a formal derivation of confidence variance reduction due to early-exit knowledge integration would be useful.

Claim: Smaller models are naturally less overconfident on corrupted in-distribution (CID) data. Explanation: The paper suggests that smaller models, by nature, produce less overconfident predictions under covariate shift (CID data) compared to large models, which tend to overfit to training distribution features. Evaluation: The claim is partially supported by empirical observations (Table 1), which show that smaller ResNet models (2-stack and 3-stack) have better calibration on CID data than deeper variants. However, the claim is not rigorously proven. The idea aligns with prior findings on overparameterization and generalization (Hsu et al., 2020), but a formal proof (e.g., based on Bayesian uncertainty theory or PAC-Bayes bounds) would be beneficial. Potential Issue: The claim should either be framed as an empirical observation rather than a theoretical result, or a formal derivation of model confidence behavior with respect to depth should be provided.
Areas Where Theoretical Rigor Could Be Improved:

Mathematical Proof for Ensemble Diversity Preservation: The authors claim that early-exit knowledge transfer ensures diversity among ensemble members, but this is only demonstrated empirically. A theoretical analysis of how early-exit weight transfer affects predictive diversity would strengthen the paper.

Formal Justification for Calibration Improvement: While empirical results show better calibration, the paper lacks a theoretical justification for why QUTE's uncertainty estimates are better calibrated. A derivation using information-theoretic arguments (e.g., entropy reduction) could help.

Explicit Bounds on Uncertainty Estimation Quality: The paper could derive theoretical bounds on QUTE's uncertainty estimation efficiency compared to standard ensembles or Bayesian approaches, quantifying trade-offs between accuracy, uncertainty, and computational cost.

Experimental Designs Or Analyses: The experimental design of the paper is well-structured and aligns with the research objectives, but there are some areas that require further clarification or improvements to ensure robustness and validity. Below is a critical analysis of the soundness and validity of the experimental design and analyses.

Strengths of the Experimental Design

Comprehensive Benchmarking Against Relevant Baselines: The paper compares QUTE against multiple state-of-the-art uncertainty quantification (UQ) methods, including:
- Monte Carlo Dropout (MCD)
- Deep Ensembles (DEEP)
- EE-Ensemble (Early-exit-based ensemble method)
- G-ODIN (OOD detection method)
- HYDRA (Ensemble distillation method)

Assessment: Sound choice of baselines, as these methods represent different categories of UQ techniques. However, a Bayesian Neural Network (BNN) baseline is missing, which would have provided further context.
Well-Defined Evaluation Criteria for Uncertainty Estimation The paper evaluates uncertainty quantification using: Expected Calibration Error (ECE) Negative Log-Likelihood (NLL) Brier Score (BS) Assessment: These are standard metrics in uncertainty estimation. However, ECE is known to have limitations (e.g., sensitivity to binning choices), and an alternative such as Adaptive Calibration Error (ACE) could have been included for a more robust analysis. Real-World Feasibility Analysis on Microcontrollers (MCUs) The authors deploy QUTE on two embedded MCUs: Big-MCU: STM32F767ZI (high resource) Small-MCU: STM32L432KC (low resource, power-efficient) Results demonstrate QUTE’s efficiency in terms of memory, latency, and fit on constrained devices. Assessment: Strong validation of real-world applicability, but energy consumption per inference is not reported, which is crucial for embedded deployments. Issues and Areas for Improvement 1. Hyperparameter Sensitivity Analysis is Lacking Issue: The paper empirically chooses hyperparameters (e.g., number of early exits (K), weighting factors (wEVk, δ)) but does not systematically analyze their impact. Fix: An ablation study on how K and weighting factors influence uncertainty quality, accuracy, and computational efficiency would improve rigor. 2. Statistical Significance Testing is Missing Issue: The paper reports mean and standard deviations but does not include statistical significance testing (e.g., t-tests, Wilcoxon signed-rank tests) to validate performance differences across methods. Fix: Reporting confidence intervals or p-values for comparisons (e.g., QUTE vs. EE-Ensemble) would confirm whether differences are statistically meaningful. 3. Limited Generalization to Non-Classification Tasks Issue: All experiments focus on classification tasks (MNIST, SpeechCommands, CIFAR10, TinyImageNet). 
No evaluation is provided for time-series forecasting, anomaly detection, or reinforcement learning, which are relevant in TinyML applications. Fix: The scope of generalization should be clearly stated. Alternatively, a small-scale experiment on a time-series dataset could strengthen claims of broader applicability. 4. No Analysis on Failure Cases Issue: While QUTE outperforms baselines on accuracy-drop and failure detection, there is no discussion on failure cases (e.g., when QUTE fails to detect uncertainty correctly). Fix: A failure case analysis (e.g., qualitative examples of misclassified instances with poor uncertainty estimates) would provide deeper insights. Supplementary Material: The supplementary material was not explicitly included in the document provided, but based on references to Appendices (A, B, C, D) within the main text, I can assess the role of the supplementary content and highlight key areas that should be reviewed for completeness and correctness. Key References to Supplementary Material in the Main Paper Appendix A – Additional Experimental Details A.1: Datasets and models used A.1.1: Baseline details A.1.2: CID and OOD dataset construction A.4: Accuracy-drop detection methodology Expected Content to Review: Dataset preprocessing steps Justification for dataset choices Training details (e.g., learning rates, epochs, data augmentation) Potential Issues: If preprocessing details are missing or unclear, reproducibility could be compromised. Baseline implementation details should confirm fair comparisons. Appendix B – Ablation Studies and Additional Experiments B.3: Impact of early-exit assistance on diversity B.4: Effect of ensemble size (K) on uncertainty calibration Expected Content to Review: Empirical validation of claims about early-exit distillation improving ensemble diversity. Potential Issues: If early-exit impact is not rigorously evaluated, theoretical claims about diversity are weaker. 
If there is no clear trade-off analysis for ensemble size (K) vs. calibration quality, claims about efficiency could be overstated. Appendix C – MCU Deployment Details Details on memory usage, inference time, and deployment feasibility on Big-MCU and Small-MCU. Expected Content to Review: Precise memory footprint analysis for different baselines. Potential Issues: Missing power consumption details could limit real-world deployment insights. Appendix D – Limitations of ECE and Other Metrics D.1: Justification for using NLL and BS instead of just ECE. Expected Content to Review: Limitations of ECE for uncertainty calibration. Justification for selecting alternative metrics. Potential Issues: If alternative metrics are not well justified, the use of ECE alone may be questioned. Relation To Broader Scientific Literature: The paper presents QUTE, an early-exit-assisted ensemble method for uncertainty quantification (UQ) in TinyML models. Its key contributions relate to several existing areas of machine learning research, including uncertainty estimation, early-exit networks, model monitoring, and TinyML deployment. Below is a structured evaluation of how QUTE connects to prior research: 1. Uncertainty Quantification (UQ) in Neural Networks Relation to Prior Work The problem of uncertainty quantification in deep learning is well studied, with two major categories: Bayesian Approaches Bayesian Neural Networks (BNNs) (Blundell et al., 2015) model uncertainty via a probabilistic distribution over weights. Monte Carlo Dropout (MCD) (Gal & Ghahramani, 2016) approximates Bayesian inference by applying dropout during inference. QUTE’s Connection: Instead of using probabilistic approaches (which are computationally expensive for TinyML), QUTE uses ensemble-based methods to estimate uncertainty efficiently. Novelty: Unlike MCD or BNNs, QUTE does not require multiple inference passes and is optimized for low-resource TinyML settings. 
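The ensemble-based estimation contrasted here with Bayesian sampling boils down to averaging member predictive distributions and scoring their disagreement. A generic sketch of that computation (not QUTE's implementation; the member outputs are made up):

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Mean predictive distribution plus a total/aleatoric/epistemic entropy split."""
    p = np.asarray(member_probs)                      # (members, classes)
    mean = p.mean(axis=0)
    total = -(mean * np.log(mean + 1e-12)).sum()      # predictive entropy
    aleatoric = -(p * np.log(p + 1e-12)).sum(axis=1).mean()
    epistemic = total - aleatoric                     # member disagreement
    return mean, total, epistemic

# Members that agree -> near-zero epistemic uncertainty.
agree = [[0.9, 0.05, 0.05]] * 3
# Members that disagree -> high epistemic uncertainty.
disagree = [[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]]
_, _, e1 = ensemble_uncertainty(agree)
_, _, e2 = ensemble_uncertainty(disagree)
assert e1 < 1e-9 and e2 > e1
```

This disagreement term is why ensemble diversity (the property QUTE's weight transfer is claimed to preserve) matters: identical heads collapse the epistemic component to zero.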
Deep Ensembles for Uncertainty Estimation Lakshminarayanan et al. (2017) introduced Deep Ensembles, which train multiple independent models for uncertainty estimation. QUTE’s Connection: QUTE leverages ensembles but does not train separate models; instead, it distills uncertainty knowledge from early exits into lightweight ensemble members. Improvement: QUTE achieves similar uncertainty estimation quality as Deep Ensembles while using 59% fewer parameters and 31% less inference latency, making it more practical for TinyML. Key Differentiation QUTE improves on ensemble-based UQ methods by using a single forward pass instead of multiple inference runs. No prior UQ method explicitly optimizes for ultra-low-resource TinyML in this way. 2. Early-Exit Networks and Model Efficiency Relation to Prior Work Early-exit networks were originally introduced to reduce inference latency in deep models (Teerapittayanon et al., 2016; Kaya et al., 2019). Prior methods like EE-Ensemble (Qendro et al., 2021) leveraged early-exit networks for ensemble-based uncertainty estimation, but these required additional learning layers, increasing memory overhead. QUTE’s Connection: Instead of directly using early-exit outputs as ensemble members (like EE-Ensemble), QUTE distills their knowledge into lightweight final classification heads. Improvement: This allows QUTE to achieve better calibration and accuracy-drop detection without additional per-exit learning layers. Key Differentiation EE-Ensemble uses early-exits directly, requiring additional computational layers. QUTE removes early-exits after training and retains only the lightweight ensemble heads, reducing resource consumption. 3. TinyML Deployment and Model Monitoring Relation to Prior Work TinyML aims to run ML models on ultra-low-power microcontrollers (MCUs) with limited memory (≤256KB RAM, <1W power). 
Prior work on TinyML deployment (Banbury et al., 2021; Ghanathe & Wilton, 2023) focused on latency, power efficiency, and model size. QUTE’s Connection: Unlike general TinyML models, QUTE is explicitly designed for real-time model monitoring and uncertainty estimation in TinyML settings. Prior model monitoring methods (Bifet & Gavalda, 2007; Hsu et al., 2020) rely on either large statistical tests or access to true labels, which are impractical in real-world TinyML deployments. QUTE provides uncertainty-aware monitoring without access to ground truth labels. Key Differentiation Existing TinyML models prioritize efficiency but do not focus on model monitoring and uncertainty quantification. QUTE fills this gap by introducing uncertainty estimation tailored for real-time TinyML inference. 4. Robustness to Corrupted In-Distribution (CID) and Out-of-Distribution (OOD) Data Relation to Prior Work Robustness against corrupted in-distribution (CID) data is less studied compared to out-of-distribution (OOD) detection. Prior OOD detection methods (Hendrycks & Gimpel, 2018; Liang et al., 2020) focused on confidence-based rejection techniques. G-ODIN (Hsu et al., 2020) introduced a preprocessing-based approach for OOD detection. QUTE’s Connection: QUTE outperforms OOD detectors like G-ODIN in detecting accuracy drops due to CID data. Unlike traditional OOD detectors, QUTE is designed for low-power TinyML devices. Key Differentiation Most prior works focus on OOD detection, while QUTE addresses the more practical problem of CID robustness in TinyML. Essential References Not Discussed: The paper does a good job of citing relevant literature in uncertainty quantification (UQ), early-exit networks, model monitoring, and TinyML deployment. However, there are several key references missing that would help place QUTE’s contributions into a more complete scientific context. Below are some critical missing references that should be included: 1. 
Missing Work on Lightweight Bayesian Uncertainty Quantification Why It’s Important? The paper only briefly mentions Bayesian Neural Networks (BNNs) (Blundell et al., 2015) and Monte Carlo Dropout (Gal & Ghahramani, 2016) as baselines. However, more recent lightweight BNN approaches exist that optimize Bayesian inference for edge devices, which are directly relevant to QUTE’s efficiency claim. Missing References: Teye et al. (2018): Bayesian Batch Normalization for Uncertainty Estimation Introduced Bayesian BatchNorm, which achieves Bayesian-like uncertainty estimation without needing full BNNs. Why it's relevant? This method enables UQ in a single forward pass, similar to QUTE, but using batch normalization instead of ensembles. Why cite? Helps contextualize alternative lightweight Bayesian methods for uncertainty quantification. Osawa et al. (2019): Practical Deep Learning with Bayesian Approximation Proposed low-cost Bayesian learning techniques optimized for deep models in low-resource settings. Why cite? QUTE’s motivation (low-power UQ for TinyML) is similar, and comparison with lightweight BNNs is missing. Louizos & Welling (2017): Multiplicative Normalizing Flows for Bayesian Deep Learning Used normalizing flows to achieve Bayesian uncertainty with lower overhead than traditional BNNs. Why cite? The work shows how uncertainty can be captured without full ensembles, similar to QUTE. What’s the Problem? The paper only contrasts QUTE against classical Bayesian UQ (BNNs, MCD) but does not compare against these more efficient Bayesian methods. Fix: Include recent lightweight Bayesian methods to better contrast why QUTE is a superior choice for TinyML. 2. Missing Work on Early-Exit Networks for Uncertainty Estimation Why It’s Important? The core idea of QUTE is distilling early-exit knowledge into final ensemble heads. However, recent works have already explored early-exit UQ but are not cited. Missing References: Jazbec et al. 
(2024): Conditional Monotonicity in Early-Exit Networks Investigated how early-exit networks should be structured to ensure uncertainty-aware decisions. Why cite? This paper provides theoretical insights into why early-exit-based ensembles (like QUTE) work. Antoran et al. (2020): Early-Exit Networks for Depth Uncertainty in Neural Networks Proposed a method to quantify uncertainty by leveraging intermediate (early-exit) layers. Why cite? This is one of the closest prior works to QUTE, yet it is not cited. Ferianc & Rodrigues (2023): Multi-Exit Neural Networks for Uncertainty-Aware Inference Proposed a multi-exit architecture that integrates uncertainty estimation at each exit. Why cite? This is a direct baseline for QUTE, but is not discussed. What’s the Problem? The paper claims QUTE is the first early-exit ensemble architecture for uncertainty quantification, but this is misleading—previous works have explored early-exit UQ approaches, though with different architectures. Fix: The authors should acknowledge prior work on early-exit UQ and clearly explain how QUTE is different. 3. Missing Work on Out-of-Distribution (OOD) and Corrupted In-Distribution (CID) Detection Why It’s Important? The paper evaluates QUTE on both OOD detection and CID robustness. However, several key works on robust UQ for corrupted data are missing. Missing References: Ovadia et al. (2019): Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty in Deep Learning Found that modern deep networks fail to estimate uncertainty correctly under distribution shifts. Why cite? This work motivates QUTE’s focus on detecting accuracy drops in CID data. Xia & Bouganis (2023): Failure Detection in Neural Networks using Uncertainty Estimates Proposed uncertainty-based failure detection for real-world models. Why cite? This paper discusses uncertainty-driven failure detection, which is one of QUTE’s key claims. Liang et al. 
(2020): Enhancing the Reliability of Out-of-Distribution Detection in Neural Networks Investigated why deep networks struggle with OOD data and proposed improved techniques. Why cite? Helps frame why QUTE is evaluated against OOD baselines. What’s the Problem? The paper does not connect QUTE’s performance on CID/OOD to prior literature. Fix: Cite these works to show how QUTE extends prior research on uncertainty estimation under distribution shifts. 4. Missing Work on Energy-Efficient Model Deployment for TinyML Why It’s Important? QUTE is designed for TinyML, but the paper does not cite foundational works on energy-efficient ML. Recent studies have proposed alternative ways to reduce energy consumption, such as pruning and quantization. Missing References: Banbury et al. (2021): MLPerf Tiny Benchmark for Ultra-Low-Power ML Standardized benchmarks for TinyML efficiency. Why cite? QUTE is evaluated on TinyML devices, so this benchmark should be referenced. Howard et al. (2019): Searching for MobileNetV3 MobileNetV3 is designed for efficient inference on TinyML devices. Why cite? QUTE uses MobileNetV2, but citing MobileNetV3 would strengthen the discussion on TinyML efficiency. Han et al. (2016): Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization, and Huffman Coding Introduced pruning and quantization to make ML models more efficient. Why cite? These techniques could be combined with QUTE for even better TinyML deployment. What’s the Problem? The paper focuses only on MCU deployment but does not cite prior works on efficient TinyML models. Fix: Cite MLPerf Tiny and model compression techniques to show how QUTE fits into broader TinyML efficiency research. Other Strengths And Weaknesses: This paper makes a strong contribution to the field of uncertainty quantification (UQ) in TinyML through a novel early-exit-assisted ensemble approach. 
Below is a breakdown of its originality, significance, and clarity, as well as potential limitations that should be addressed. 1. Originality: Creative Combination of Ideas: The paper innovatively integrates early-exit networks with ensemble-based uncertainty estimation. While early-exit architectures and ensemble-based UQ have been studied before, QUTE’s novel contribution is its distillation-based ensemble construction, which eliminates early-exits after training to reduce overhead. This approach differs from prior methods (e.g., EE-Ensemble) that retain early-exits during inference, leading to higher computational costs. First UQ Approach Optimized for TinyML: While UQ in deep learning is a well-explored field, most state-of-the-art methods (BNNs, Deep Ensembles, MCD) are computationally expensive and impractical for TinyML devices. QUTE is the first method explicitly designed for resource-constrained TinyML model monitoring. 2. Significance: Addresses a Real-World Challenge in TinyML: TinyML models are deployed on edge devices with no access to ground truth labels, making uncertainty-aware model monitoring essential. QUTE enables on-device uncertainty estimation with minimal computational cost, making it practical for autonomous systems, medical IoT, and embedded AI applications. Improves Over Key Baselines: Compared to Deep Ensembles (Lakshminarayanan et al., 2017), EE-Ensemble (Qendro et al., 2021), and MCD (Gal & Ghahramani, 2016), QUTE achieves: 59% model size reduction 31% lower inference latency Better uncertainty calibration (lower ECE, BS, NLL) These improvements are critical for real-world deployment on ultra-low-power MCUs. Strong Experimental Validation on Microcontrollers (MCUs): Unlike many ML papers that rely on simulated efficiency claims, QUTE is deployed on real-world MCUs (STM32 Big-MCU, Small-MCU). This enhances practical significance, showing that the method is deployable, not just theoretically interesting. 3. 
Clarity: Clearly structured methodology and motivation: The problem statement, approach, and contributions are well-articulated. Figures clearly illustrate QUTE’s architecture and how it differs from baselines. The paper provides a solid review of related work (although some missing references should be added). Strong result visualizations: The tables and graphs effectively present key findings, especially in uncertainty calibration and accuracy-drop detection. However, some figures (e.g., confidence calibration histograms) would improve clarity on model uncertainty performance. Weaknesses 1. Limited Theoretical Analysis No Formal Proofs on Uncertainty Calibration Improvement: While empirical results show that QUTE improves uncertainty calibration, there is no theoretical justification for why the method produces better uncertainty estimates. Suggested Fix: A mathematical analysis (e.g., using confidence variance bounds) could strengthen the claim that early-exit-assisted ensembles improve calibration. No Formal Diversity Analysis of the Ensemble: The paper claims that early-exit knowledge distillation maintains predictive diversity, but this is only shown empirically. Suggested Fix: A Shannon entropy analysis or mutual information study between QUTE’s ensemble members would provide a stronger justification for its diversity benefits. 2. Missing Statistical Significance Testing While performance improvements are clear, statistical significance is not tested. Results are reported with mean and standard deviation, but no hypothesis testing (e.g., t-tests, Wilcoxon signed-rank tests) is provided. Suggested Fix: Reporting confidence intervals or p-values would confirm whether QUTE’s improvements over baselines are statistically significant. 3. Lack of Generalization Beyond Classification Tasks QUTE is only tested on image and audio classification tasks. 
Many TinyML applications involve time-series forecasting, anomaly detection, and reinforcement learning, but these are not explored in the evaluation. Suggested Fix: A small experiment on a TinyML time-series dataset (e.g., Google Smartwatch Health Data) would strengthen the claim that QUTE generalizes to all TinyML settings. If not feasible, the paper should clearly limit its scope to classification-based TinyML monitoring.

4. Missing Comparison with Lightweight Bayesian Methods
The paper contrasts QUTE only against traditional BNNs and MCD, but does not compare against more efficient Bayesian methods like:
- Bayesian BatchNorm (Teye et al., 2018)
- Variational Bayesian Dropout (Osawa et al., 2019)
- Multiplicative Normalizing Flows (Louizos & Welling, 2017)
Why This Matters? These methods also reduce inference overhead while maintaining Bayesian uncertainty estimation. Without this comparison, the efficiency advantage of QUTE is slightly overstated. Suggested Fix: Include a brief discussion on why QUTE is preferable to these lightweight Bayesian techniques.

Other Comments Or Suggestions: None.

Questions For Authors:
Q1: The paper empirically demonstrates that QUTE achieves better uncertainty calibration (lower ECE, NLL, BS) than Deep Ensembles and MCD. However, there is no theoretical justification for why early-exit knowledge transfer improves calibration. Could you provide a mathematical explanation or theoretical analysis (e.g., variance reduction, entropy-based confidence bounds) that supports this claim?
Q2: While the paper reports mean and standard deviation for key performance metrics, there is no statistical significance testing (e.g., t-tests, Wilcoxon signed-rank tests). Did you conduct statistical significance tests to confirm that QUTE's improvements over baselines are meaningful? If not, could you provide confidence intervals or p-values for the comparisons?
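The significance check Q2 asks for can be approximated without special tooling: a paired bootstrap over per-seed metric differences yields a confidence interval for the gap between two methods. A sketch with fabricated per-seed ECE values (illustrative only, not results from the paper):

```python
import numpy as np

def paired_bootstrap_ci(a, b, n_boot=10000, alpha=0.05, seed=0):
    """CI for mean(a - b) by resampling paired per-seed differences."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    idx = rng.integers(0, len(diffs), size=(n_boot, len(diffs)))
    boot_means = diffs[idx].mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical per-seed ECE values (lower is better) for two methods.
qute_ece = [0.021, 0.019, 0.024, 0.020, 0.022]
base_ece = [0.035, 0.031, 0.038, 0.033, 0.036]
lo, hi = paired_bootstrap_ci(qute_ece, base_ece)
# If the whole interval sits below zero, the improvement is unlikely to be noise.
assert hi < 0.0
```

With the handful of seeds typical in these experiments, a bootstrap or Wilcoxon signed-rank test is more defensible than a t-test, which assumes normality of the differences.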
Q3: The number of early-exit ensemble members (K) and the weighting factors (wEVk, δ) appear to be chosen empirically, but no systematic analysis of their impact is provided. Could you clarify how these values were selected and whether an ablation study was conducted to determine their effect on accuracy, uncertainty estimation, and efficiency?
Q5: All experiments focus on classification tasks (MNIST, CIFAR10, TinyImageNet, SpeechCommands), but many TinyML applications involve time-series forecasting, anomaly detection, and reinforcement learning. Did you test QUTE on any non-classification TinyML tasks, or do you see any theoretical limitations that would prevent it from generalizing to these domains?
Q6: While the paper reports model size and inference latency, there is no mention of power consumption, which is a key metric for TinyML deployment. Did you measure energy efficiency (e.g., power per inference) on the MCUs, or can you provide an estimate of QUTE's power savings compared to Deep Ensembles and EE-ensemble?

Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback. We appreciate the recognition of our work’s originality and the insightful suggestions. > an ablation study isolating the impact of early-exit (EE) knowledge transfer and stronger justification for diversity benefits of EV-assistance Appendix B.3 explores the effect of EV-assistance on calibration and convergence. Also, we conducted an ablation study using the normalized disagreement (ND) metric [2] to measure QUTE’s ensemble diversity **with** and **without** EV-assistance (*higher is better*). On CIFAR10, ND is 71% lower without EV-assistance; on MNIST-ID, it's 19% lower. These results demonstrate that EE knowledge transfer improves ensemble diversity and, consequently, calibration quality. We will add these findings in the final version. Also, see our response to reviewer y5iN for further discussion. > on QUTE’s reliance on additional layers for improved calibration in larger models (QUTE+) Yes, QUTE does require additional layers to match the calibration quality of prior methods in larger models/datasets. However, QUTE (without additional layers) still outperforms prior methods on large datasets/models in accuracy-drop detection and failure detection, which are central to our work. > On missing lightweight bayesian approximation techniques Bayesian batch norm requires stochastic sampling and multiple inferences (like Monte Carlo Dropout), which significantly increases latency on MCUs. Appendix B.2 discusses several single-pass deterministic methods and compares QUTE to PosteriorNets [1], which uses normalizing flows (see Table 6). > On missing references Thank you for your detailed suggestions on potential citations to include. Many suggested references (Banbury et al., 2021; Jazbec et al., 2024; Antoran et al., 2020; Xia & Bouganis, 2023; Liang et al., 2020; Ovadia et al., 2019) are already covered in related work. 
We will further strengthen this section by incorporating additional recommended references.

> on suggested theoretical analysis

See our response to reviewer y5iN.

> on statistical significance testing

See our response to reviewer eK42.

> on justification for hyperparameter choices $w_{EV_k}$ and $\delta$

Section 5 explains that the number of EEs (K) is constrained by the depth of the neural networks we evaluate, which only have a few layers. For example, for Resnet-8 with three residual stacks, we place two EEs, after the 1st and 2nd residual stack. For deeper networks, we place exits at evenly spaced locations. Appendix B.4 studies the trade-off between the number of EEs (K) and uncertainty quality. Furthermore, we choose $w_{EV_k}$ and $\delta$ based on a systematic empirical analysis as described in Appendix A.2.

> on theoretical limitations of QUTE for non-classification tasks

While our evaluations primarily focus on classification tasks, our method is not inherently limited to classification and can naturally extend to other problem domains. Ensemble methods, including those similar to ours, have been successfully applied to non-classification tasks in prior research. For the final version, we will explore evaluating QUTE on a regression-based task. Otherwise, as suggested, we will explicitly clarify that our current results are on classification tasks.

> on measuring energy efficiency and estimating QUTE's power savings compared to Deep ensembles and EE-ensemble

We agree that power consumption analysis is beneficial for TinyML deployment. However, direct power measurements are non-trivial due to factors like sleep-mode power wastage, transition costs between sleep and active states, and peripheral activity. Instead, we estimate power consumption using typical values from MCU datasheets: ~285mW for the Big-MCU (STM32F767ZI) and ~25mW for the Small-MCU (STM32L432KC) in active mode with all peripherals disabled.
We focus on estimating only active-mode power because our work reduces processing time/latency compared to other baselines. Since our method reduces inference latency compared to Deep ensembles and EE-ensemble, it proportionally lowers energy consumption, as energy-per-prediction is power$\times$time. While this is an estimate and real-world factors may influence it, QUTE effectively extends battery life without additional hardware modifications by accelerating inference.

> on other calibration metrics

We discuss this in Appendix D.

> other ablation studies

Appendix B contains extensive ablation studies, such as studying the effect of weight transfer on model convergence, investigating the trade-off between uncertainty quality and ensemble size, and the effectiveness of the EV-assistance method.

[1] Charpentier et al., Posterior network: Uncertainty estimation without OOD samples via density-based pseudo-counts. NeurIPS 2020
[2] Heidemann et al., Measuring Ensemble Diversity and Its Effects on Model Robustness. IJCAI 2021

---

Rebuttal Comment 1.1: Comment: I have no further comments; I recommend the acceptance of this well-written paper.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive feedback and recommendation for acceptance. Thank you for your time and thoughtful review.
Summary:

**Problem**
- This paper focuses on uncertainty quantification (UQ) for TinyML models that are specifically designed to operate on microcontrollers with extremely limited memory and computational resources.

**Method**
- The authors propose QUTE, which combines ideas from early-exit (EE) ensembles and multi-head ensembles.
- During training:
  - For selected intermediate layers of the neural network, the model trains early-exit branches (early-exit members) to predict the labels. At the same time, corresponding to each early exit, a separate lightweight classification head is also trained at the final layer (final-exit).
  - The parameters from each early-exit head are partially copied to their respective final-exit classification heads, ensuring diversity among the final-exit heads.
- During inference:
  - All early-exit branches are completely removed, eliminating their memory and latency overhead.
  - The model uses a single forward pass to reach the final layer, then computes predictions using the multiple lightweight classification heads at the final exit.
  - The predictions from these diverse final-exit heads form an ensemble used to quantify uncertainty efficiently.

**Experiments**
- Experiments show that QUTE achieves comparable UQ performance with smaller model sizes and lower latency.

## update after rebuttal
After reading the author's response, I have no further concerns and will maintain the current score.

Claims And Evidence: There are some experimental claims that I am not fully convinced by:
- **Limited experimental repetitions**: Table 1 and Figure 3/4 are based on a single experimental run. Repeated experiments would enhance confidence in these claims.
- **Performance gains are smaller than the reported variance (Table 3)**: It is difficult to tell that the proposed method outperforms other methods since the variance is larger than the performance gain.
Nevertheless, considering the substantial benefits in memory and latency efficiency, I still regard the proposed method as preferable, particularly in resource-constrained TinyML scenarios. Methods And Evaluation Criteria: **Method** The proposed method is **well-suited** for addressing uncertainty quantification (UQ) in TinyML settings. To remove the memory consumption of existing ensemble methods (need multiple copies of the model, including the early-exit models), the proposed methods only use the early-exit members during the training stage. Unlike other distillation methods that distill the early-exit ensembles into one classification head (this type of method is usually suboptimal), the author also uses multiple classification heads at the final layer, each one inherits some of the parameters from the early-exit heads trained during the training stage. In summary, it balances the performance and also the memory consumption. **Evaluation** The evaluation strategy of the paper **follows the standard evaluation settings**: - Resource Constraints: The experiments explicitly test model deployments on microcontrollers (MCUs) of varying memory and computational limitations, reflecting realistic TinyML scenarios. - Datasets: - standard datasets including MNIST, CIFAR10, SpeechCommands, TinyImagenet, - corrupted-in-distribution (CID) datasets for failure prediction - out-of-distribution (OOD) datasets. - Metrics: - Brier Score, Negative Log-Likelihood, Expected Calibration Error for calibration performance - AUPRC and AUROC for failure prediction and OOD detection. - Baselines: standard ensemble-based methods including Monte Carlo Dropout, Deep Ensembles, EE-ensemble, and HYDRA Theoretical Claims: There is no theoretical claim in the paper. Experimental Designs Or Analyses: Pros: The overall experimental design in this paper is sound and valid: 1. The datasets used are representative. 
However, **more challenging benchmarks for OOD evaluation** could further strengthen the validity of the analysis.
2. The selection of models is good.
3. The baseline methods cover most of the common methods. It would be better if the author could **include more deterministic types of baselines, such as temperature scaling**.
4. The evaluation metrics employed are comprehensive and standard.

Cons:
1. Experimental results for failure prediction and accuracy-drop detection are reported based on single runs, without repeated trials or statistical summaries (e.g., means and standard deviations), potentially limiting their reliability.
2. Calibration results (e.g., Table 3) show that the proposed method's advantages are not clearly demonstrated, especially when considering the reported standard deviations. Thus, it might be more accurate to state that the performance of the proposed method is comparable to existing methods rather than definitively superior.

Supplementary Material: I have reviewed Section A.1 and Figure 5 in detail, and I have also skimmed the overall structure of the supplementary materials. While everything appears to be in order, I cannot be entirely certain without a more thorough examination.

Relation To Broader Scientific Literature: In my understanding, the key contribution of this paper is that, focusing on TinyML deployment scenarios, it cleverly combines ideas from early-exit ensembles (EE-ensembles) and multi-head ensembles to maintain good performance while reducing memory and latency consumption. Prior work on EE-ensembles typically faces trade-offs in terms of either memory overhead (due to saving intermediate layer outputs) or compromised latency (due to sequential computation). On the other hand, ensemble distillation is suboptimal since it is very challenging to distill the whole distribution into one classification head.
The authors introduce a training scheme where each early-exit member is associated with a distinct lightweight classification head at the final layer. Partial parameter copying (distillation) from each early-exit head ensures that the final heads remain diverse. At inference, intermediate early-exit features are not stored, thus simultaneously achieving low memory usage and minimal latency. Essential References Not Discussed: While there might exist relevant papers beyond my knowledge, the authors appear to have discussed most of the essential references. Other Strengths And Weaknesses: Strengths: - The proposed method creatively integrates early-exit (EE) ensembles and multi-head ensembles, achieving diversity among ensemble heads without incurring the typical memory and latency overhead associated with early exits. This design is particularly suitable for resource-constrained TinyML environments. - The paper is clearly written, straightforward, and easy to understand. Weaknesses: - Experimental evaluations for failure prediction and accuracy-drop detection are conducted based on single runs without reporting repeated trials or statistical summaries (e.g., mean and standard deviation), reducing confidence in the reliability of results. - Calibration experiments do not clearly demonstrate the method's superiority over baselines, as the reported advantages are minor relative to the standard deviations provided. Other Comments Or Suggestions: - Some visual details, particularly in Figure 1 and Figure 2, could be presented more clearly. For instance, explicitly indicating in Figure 2 that θ() includes a dense layer would enhance reader understanding. - In Table 2, the result for DEEP should also be highlighted in bold. Questions For Authors: 1) For the EE-ensemble method, if we sequentially compute each early-exit's result without storing intermediate features (thus trading time for memory), how much latency would this actually introduce compared to QUTE?
Considering that each early-exit has only a single layer of parameters, the computational overhead might be minimal. 2) In the right column of line 303, the authors state: “On Big-MCU, QUTE achieves 31% and 47% latency reductions over EE-ensemble and DEEP, respectively, and maintains accuracy parity with both, even with 58% and 26% smaller models.” Could the authors clarify why the latency and memory reductions during inference compared to DEEP are only 47% and 26%, respectively? Specifically, how many ensemble members are used for DEEP, and are these ensemble members computed sequentially? If computed sequentially, shouldn't the memory consumption of DEEP be lower? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We’re delighted that you found our contribution both novel and clear. > on temperature scaling results We apply temperature scaling to BASE and QUTE (following [1] ). The results are included in Appendix B.1. (Tables 4 and 5). > on absence of statistical significance tests; Experimental results for failure prediction and accuracy-drop detection are reported based on single runs [...] We report metrics such as AUPRC and AUROC for accuracy-drop and failure detection, calculated by averaging precision, recall, true-positive rate, and false-positive rate values across multiple corruptions before deriving the final score. This approach provides a robust measure of model performance under varying conditions. However, while it offers valuable insights, it does not facilitate statistical significance testing across runs. We recognize the importance of statistical validation and plan to conduct independent evaluations in the final version. > Performance gains are smaller than the reported variance (Table 3) As suggested, we will revise the claim to 'QUTE’s performance for uncertainty quantification is comparable to existing methods.' > For the EE-ensemble method, if we sequentially compute each early-exit's result without storing intermediate features (thus trading time for memory [...] Even when computing each early exit (EE) sequentially in the EE-ensemble, intermediate feature maps at the EE location still need to be stored. This is because the early exit point involves two branches of computation: the early-exit branch and the rest of the neural network. Since MCUs process sequentially, the intermediate feature maps at this location must be temporarily stored in the random-access-memory (RAM) to enable processing of the rest of the network after the EE outputs are computed. 
On the other hand, if intermediate feature maps are not stored, then the whole network up to the EE has to be recomputed every time, leading to a significant latency increase. Nonetheless, the memory required to store the intermediate feature maps is nominal. Appendix C has more details. > In the right column of line 303, the authors state: “On Big-MCU, QUTE achieves 31% and 47% latency reductions over EE-ensemble and DEEP [...] This question can be broken down into three sub-questions that we answer here. *Why does EE-ensemble have more memory overhead than DEEP?* As described in Section 3 and Appendix B.5.1, the EE-ensemble method requires additional fully-connected (FC) layers at each EE to ensure that the learning capacities of all EEs closely match that of the final exit. Failure to do so will result in suboptimal results (Table 8). However, the FC layers are very parameter-heavy, resulting in a significant increase in model parameters and thus model size. Furthermore, in the small models we evaluate (e.g., DSCNN, Resnet-8), which have only a few thousand parameters, the memory overhead of the FC layers of EE-ensemble is substantial relative to the base network size. *Why the disparity in latency reduction?* We use two ensemble members for DSCNN and Resnet-8. For DEEP, the MCU has no parallel computation capability; hence, each ensemble member has to be computed sequentially. Therefore, for two ensemble members, the latency doubles. On the other hand, the latency for EE-ensemble is lower than that of DEEP despite its higher memory overhead, because EE-ensemble only has the computation overhead of an FC layer at each early-exit. Therefore, single-forward-pass methods like EE-ensemble (and QUTE) are always more compute-efficient than DEEP, and this latency gap will become more prominent for larger ensemble sizes. Appendix C provides a more detailed analysis.
*If computed sequentially, shouldn't the memory consumption of DEEP be lower?* The memory numbers (size) reported in Figure 3 are the *code-size*, which reflects the amount of flash memory occupied by the program's code (instructions) and data (model parameters). This differs from RAM, which stores intermediate variables during execution. In DEEP, ensemble members are computed sequentially, avoiding extra storage for intermediate feature maps. Consequently, DEEP's peak RAM usage is consistently 4% lower than EE-ensemble's (reported in Tables 10 & 11, Appendix C). However, *code-size* remains unchanged regardless of execution mode, as each ensemble member requires unique model parameters to be stored in the flash memory. [1] Rahaman et al., "Uncertainty Quantification and Deep Ensembles," NeurIPS 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I have no further concerns and will maintain the current score. --- Reply to Comment 1.1.1: Comment: Thank you for the response and for your thoughtful review. We sincerely appreciate the time and effort you dedicated to evaluating our submission.
Summary: In this paper, the authors propose a novel method for uncertainty estimation in low-resource configurations. Specifically, the method builds on the well-known early-exit approach—where the model produces multiple predictions as the prediction depth increases and then combines them in an ensemble-like manner—but modifies it to be more efficient for TinyML applications while maintaining high uncertainty quality. To achieve this, the authors train the early-exit blocks with a set of ensemble heads applied at the end of the model. However, during inference, they completely discard the early exits, retaining only the ensemble heads while reusing the early-exit blocks' weights for these ensembles. The authors compare their method against several popular uncertainty estimation approaches on widely used benchmarks and demonstrate that it provides high-quality uncertainty estimates while being significantly faster than other alternatives. Claims And Evidence: The authors make several claims about the proposed method, including increased efficiency (in terms of both latency and model size), improved calibration, and higher uncertainty quality of the predictions. They adequately justify these claims by discussing their motivation and supporting them with experimental results in the experimental section, along with extensive ablation studies. Methods And Evaluation Criteria: The choice of baselines (ensemble methods) and benchmarks (MNIST, CIFAR, TinyImageNet, etc.) is appropriate and accurately reflects the current state of research in this area. The evaluations are rigorous and effectively demonstrate the approach's effectiveness from multiple perspectives. Theoretical Claims: No theoretical claims, theorems, or proofs are presented in the paper. 
Experimental Designs Or Analyses: The experimental design adequately covers various aspects of the proposed approach, including in-distribution prediction accuracy, generalization to corrupted data, calibration, and out-of-distribution detection quality. The rigorous ablation studies are highly appreciated and provide strong support for the method's claims. Furthermore, the comparison with several popular uncertainty estimation methods further reinforces the case for its effectiveness. Supplementary Material: Additional results and ablations, more detailed datasets and training pipelines descriptions are much appreciated. Relation To Broader Scientific Literature: The paper positions itself as a contribution specifically for low-latency edge applications, where uncertainty estimation is crucial despite limited computational resources. While the idea of early exits has been explored in the uncertainty estimation literature—for example, by Antoran et al. (2020)—the proposed method refines this concept to make it suitable for low-resource settings, which is one of its key contributions. In addition to demonstrating its efficiency, the authors also show its effectiveness compared to standard baselines in the field of uncertainty estimation. Essential References Not Discussed: No essential references are missing. Other Strengths And Weaknesses: In short, "Strengths" of the paper are: * The proposed method adapts early-exit strategies to improve efficiency in TinyML applications while maintaining high uncertainty quality. By reusing early-exit block weights for ensemble heads, it significantly reduces computational overhead during inference. It's an original idea which has interesting application and significance. * The paper thoroughly evaluates the method on widely used benchmarks, including MNIST, CIFAR, and TinyImageNet. Extensive ablation studies further support the claims, demonstrating strong performance across multiple uncertainty estimation tasks. 
That said, experiments on larger datasets would also be appreciated. * The method is specifically designed for edge applications where uncertainty estimation might be essential (for example, in an AV context) but computational resources are limited. Its efficiency and effectiveness against standard baselines might make it a valuable contribution to real-world deployment scenarios. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed and insightful comments. We are glad to hear that you appreciate the novelty of our approach and recognize the thorough evaluation of the proposed method. > experiments on larger datasets would be also appreciated To demonstrate the scalability of our proposed method, we apply QUTE to a traditional Resnet50, a much larger model with higher learning capacity. We train it on TinyImagenet for 50 epochs with a batch size of 128. The rest of the training methodology is the same as described in the paper. We evaluate its accuracy-drop detection capability on TinyImagenet-corrupted. We report the average AUPRC over all corruptions for five severity levels below. | Method | Sev-1 | Sev-2 | Sev-3 | Sev-4 | Sev-5 | | :---: | :---: | :---: | :---: | :---: | :---: | | BASE | 0.26 | 0.54 | 0.70 | 0.67 | 0.61 | | EE-ensemble | 0.32 | 0.60 | 0.73 | 0.67 | 0.66 | | QUTE | **0.39** | **0.63** | **0.79** | **0.75** | **0.79** | As seen, QUTE comfortably outperforms EE-ensemble and BASE on accuracy-drop detection, especially showing a remarkable 19% improvement at the highest severity level compared to EE-ensemble. This illustrates QUTE's superior accuracy-drop detection capabilities across datasets/models of varying complexities, providing a highly-effective and cost-efficient accuracy monitoring mechanism. We will include this result in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful response to the reviews. I appreciate the clarifications and the additional experiments. I find this paper to be a valuable contribution to the area of low-resource uncertainty-based models and will therefore keep my original rating. --- Reply to Comment 1.1.1: Comment: We are truly grateful for your detailed review and for acknowledging our work’s contributions. We appreciate the time and effort you've put into evaluating our work, and your recognition means a lot.
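The "average AUPRC over all corruptions" protocol used in the rebuttal's ResNet50 table can be sketched as follows; the `average_precision` implementation and the toy corruption names and scores are our illustrative assumptions, not the paper's evaluation code:

```python
import numpy as np

def average_precision(scores, labels):
    """Average precision (area under the precision-recall curve),
    computed by step-wise integration over the ranked predictions."""
    order = np.argsort(scores)[::-1]          # rank by descending score
    labels = np.asarray(labels, dtype=float)[order]
    tp = np.cumsum(labels)                    # true positives at each rank
    precision = tp / (np.arange(len(labels)) + 1)
    recall = tp / labels.sum()
    d_recall = np.diff(np.concatenate([[0.0], recall]))
    return float((precision * d_recall).sum())

# One AP per corruption type, then the mean over corruptions
per_corruption = {
    "gaussian_noise": ([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]),  # toy scores/labels
    "motion_blur":    ([0.7, 0.6, 0.3, 0.2], [1, 0, 1, 0]),
}
mean_auprc = float(np.mean(
    [average_precision(s, y) for s, y in per_corruption.values()]
))
```

In practice one such mean would be computed per severity level, yielding the Sev-1 through Sev-5 columns of the table.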
Active Evaluation Acquisition for Efficient LLM Benchmarking
Accept (poster)
Summary: The paper deals with efficient evaluation and offers a new way to dynamically choose which examples to use, per model. It shows great results; the improvements are very clear, the novelties are too, and the writing is mostly easy to follow. Claims And Evidence: Well supported Methods And Evaluation Criteria: Well evaluated and clear, including previous works, breaking the method into parts, etc. Theoretical Claims: It is unclear from the definitions pre l76 what exactly Y is; it seems to be the score (so \in [0,1], given by some metric?) for a specific example (I assume the repeated use of the word "prompt" added to the confusion). Is this argmax well defined? What is a random variable or sample in any of this? Or what is the model under which this is a probability (over a discrete set of values it is just 0/1)? You write it in a very formal way, so one expects it to hold formally; you could also explain it in a more intuitive manner or state your assumptions here (probability because of uncertainty? A Bayesian thing with priors?), or move things up (in l97 you define more). Experimental Designs Or Analyses: I would add something about what overheads your method adds. E.g., how much time it adds to each computation, how hard it is to implement (do you share a relevant implementation?), and what other limitations might make someone not use it, as it seems really promising and the improvements are large as well. Supplementary Material: Skimmed, plus here and there, but not too much. Relation To Broader Scientific Literature: You could cite something in l46 rather than presenting it as a new claim (e.g., a survey or a tutorial on efficient benchmarking, or just the efficient benchmarking paper for NLP). Is the distinction between active evaluation and active testing that active testing reduces the number of scoring-function uses while you reduce the number of inference uses?
I would make this distinction clear (especially as you introduce a new, very similar name and only discuss active testing in the related work). Also, why add "acquisition"? It seems to imply another difference, but I don't think there is one. Moreover, if you claim a new approach, why not own it? Describe the difference in general, then the new name and distinction, and then offer your specific method. Essential References Not Discussed: Each time I thought something was missing, I found I had been hasty to judge and it appeared elsewhere in the paper; very well documented. Other Strengths And Weaknesses: The main concept the work claims to deal with is the wrong one. The paper doesn't actually do anything with prompts. A given example (an abstract task-output pair) can have multiple prompts (e.g., "what is the capital of France?" or "What is the capital of france" or even "hey, dude, do you know what's the capital of France"). There is a lot of work on this because it is an important issue in benchmarking LLMs, and it took me several pages before I figured out what this paper is trying to do. As an example of the difference: you often mention Polo's early work (tinyBenchmarks) but not their *prompt*-related work, and the title tells the whole difference ("Efficient multi-prompt evaluation of LLMs"). It sounds like this paper should be compared to it, but in fact it is not. Other Comments Or Suggestions: l 207 (right): $h$ is probably a typo. Questions For Authors: Why do you call your method "RL" in the graphs rather than by its name? Also, I would mark your variants in the legend with "(ours)" or in some other way that separates them from previous work. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed feedback and strong endorsement of our work. We address each point below: ## Terminology: "Prompts" vs "Examples" We thank the reviewer for highlighting the potential confusion in our use of the term "prompt." We use "prompt" to mean benchmark examples rendered with the dedicated prompt templates, which is the exact input to the target LLM. This distinction is important because some benchmarks/datasets apply different prompt templates to the same examples - for instance, natural_questions in the HELM benchmark is evaluated in two modes with different prompt formats. This is why we focus on prompt-level evaluation efficiency rather than just example selection. We will revise the paper to clarify this terminology early on to avoid confusion with other types of prompt-related work. ## Definition of Evaluation Scores (Y) We appreciate the request for clarity regarding the definition of Y. In our formulation, Y_m represents the collection of evaluation scores for model m across all prompts in benchmark X. These scores can indeed be of mixed types - binary accuracy, continuous metrics like F1, etc. - depending on the specific dataset within the benchmark. This is explicitly mentioned in Section 2: "These scores may be of mixed types - for instance, some datasets might report binary accuracy while others use continuous metrics like F1 scores." We will improve clarity by providing a more explicit definition earlier in the paper. ## Probability Formulation of Eq.1 We appreciate the reviewer's questions about the formal definitions in our probability formulation. The formulation in Equation (1) is indeed meant to be understood from a Bayesian perspective, and p(Y^(u)_m'|Y^(o)_m',X) represents the conditional likelihood of the unobserved scores Y^(u)_m' given the observed scores Y^(o)_m' and prompts X. 
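For concreteness, a display-form restatement of the objective described above (the prompt count N and budget B are our notational assumptions; the paper's Equation (1) may differ in detail):

```latex
o^{*} \;=\; \operatorname*{arg\,max}_{\,o \subseteq \{1,\dots,N\},\; |o| = B}
  \; p\!\left( Y^{(u)}_{m'} \,\middle|\, Y^{(o)}_{m'},\, X \right),
\qquad u = \{1,\dots,N\} \setminus o .
```

Here $Y^{(o)}_{m'}$ denotes the acquired (observed) scores of test model $m'$ and $Y^{(u)}_{m'}$ the remaining unobserved ones, matching the description in the rebuttal.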
Equation (1) represents an ideal case where we could directly optimize for the subset o* that maximizes the likelihood of correctly predicting unobserved scores. However, as we note in the subsequent paragraph: "the values Y^(u)_m' for a test model m' are unknown before acquisition, making direct optimization impossible." In practice, we cannot directly optimize this objective since Y^(u)_m' is unknown. Instead, we train policies that approximate this optimization based on historical evaluation data. In our revision, we will add an intuitive explanation of this probability formulation to avoid confusion. ## Method Overhead and Implementation The reviewer raises an excellent point about discussing implementation overhead. We provide a computational complexity analysis in Appendix G showing that our method adds minimal overhead compared to the LLM inference costs it saves. Specifically: 1. The one-time training of the neural process model and acquisition policy takes 2-3 hours on a standard GPU 2. The policy network is lightweight, consisting of only 2-3 linear layers that execute in milliseconds 3. The 43-92% reduction in required evaluations vastly outweighs this overhead In our revision, we will highlight this information in the main text and clarify that we will release our implementation to help practitioners adopt our method. ## Distinction from Active Testing and Active Learning We thank the reviewer for the suggestion to clarify our positioning relative to active testing and active learning. We will revise the related work section to more clearly articulate that: 1. Active Learning typically selects examples to label for training a model 2. Active Testing focuses on reducing the labeling cost for evaluating model performance 3.
Our Active Evaluation Acquisition (AEA) specifically targets reducing the number of prompt evaluations needed during LLM benchmarking We use "acquisition" to emphasize the sequential process of obtaining evaluation outcomes, but we agree this could be presented more clearly. We will revise to better own our contribution and clarify the distinctions between these related but different approaches. ## Figure Labeling We appreciate the suggestion about labeling our method in the figures. We will update all figures to clearly label our method as "AEA (ours)" or similar to distinguish it from baseline approaches. ## Typo on line 207 Thank you for catching this. We will correct it in the revised version. We appreciate the reviewer's thorough reading of our work and are grateful for the strong endorsement. We believe the requested clarifications will further strengthen the paper.
Summary: The paper presents an approach to improve the efficiency of evaluating large language models (LLMs) by selecting a subset of evaluation prompts through a learned policy. The authors claim that their RL-based approach significantly reduces computational cost while maintaining accuracy. Claims And Evidence: 1. The authors assume that evaluation scores across prompts are highly correlated, but no strong evidence is provided to support this assumption. 2. Only 5 benchmarks are selected. This could lead to misleading conclusions about the model's generalized performance. Methods And Evaluation Criteria: 1. The proposed RL-based technique seems computationally expensive. Therefore, further justification is required regarding its real-world applicability. 2. There is no proof that the RL-based policy generalizes better than simpler baselines. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. The experimental section lacks statistical significance tests. 2. Only 5 benchmarks are selected. This could lead to misleading conclusions about the model's generalized performance. Supplementary Material: Briefly reviewed all sections in the appendix. Relation To Broader Scientific Literature: 1. Somewhat useful, but adding more benchmarks and models would have been better. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: Addresses an important problem of reducing LLM evaluation cost. Also, a well-motivated paper. Weaknesses: Only 5 benchmarks are selected. This could lead to misleading conclusions about the model's generalized performance. Also, the RL-based technique seems computationally expensive. Other Comments Or Suggestions: It would have been much better for readers if the authors had demonstrated the task and the proposed method using a figure. Questions For Authors: 1. Why are only 5 benchmarks selected, and how do the authors ensure the generalized effectiveness of the proposed method? 2.
Why is no figure presented to demonstrate the task and the proposed method? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's feedback and concerns. Below we address the specific points raised: ## Evidence for Prompt Correlations The reviewer questions our assumption that evaluation scores across prompts are correlated. This correlation is well-documented in prior literature on LLM evaluation, including in works by Perlitz et al. (2023) and Polo et al. (2024), which we cite. Additionally, we present empirical evidence in Appendix F.4 through a detailed analysis of factors affecting prediction accuracy. This analysis shows that similar prompts (in embedding space) typically yield similar evaluation outcomes, confirming the existence of exploitable dependencies. ## Benchmark Selection and Generalizability Regarding the concern about using only 5 benchmarks, we respectfully note that our selection includes a diverse set of prominent LLM benchmarks that cover different evaluation paradigms: - AlpacaEval (win rate against reference models) - HELM-Lite (multiple tasks and metrics) - OpenLLM Leaderboard (multiple-choice QA) - MMLU (subject-specific academic tests) - Chatbot Arena (human preference judgments) These benchmarks represent the most widely used evaluation frameworks in the field and employ different scoring mechanisms, ensuring our method's applicability across diverse evaluation settings. Together, they comprise over 56,000 evaluation prompts across dozens of tasks and metrics, providing robust evidence for generalizability. ## Computational Efficiency of RL-based Approach The reviewer expresses concerns about the computational cost of our RL-based policy. We address this directly in Appendix G with a computational complexity analysis showing that: 1. The one-time training of our policy and neural process model is computationally inexpensive (typically 2-3 hours on a single GPU) 2. During inference, our policy network adds negligible computational overhead compared to the cost of running LLM inference 3. 
The savings from reducing the number of required evaluations (65-92% fewer) vastly outweigh any overhead from our method Our approach makes large-scale LLM evaluation significantly more accessible, especially when evaluating resource-intensive models or when using costly API calls for closed models. ## Statistical Significance Regarding statistical significance, our evaluation compares the convergence behavior of different acquisition policies across multiple runs. The standard deviations reported in our figures and detailed in Tables 1-5 provide evidence of the consistent performance advantages of our approach over baseline methods. Our results in Figure 1 clearly demonstrate that as the acquisition budget increases, our RL-based method consistently achieves lower error rates compared to baseline methods. For instance, on MMLU (Figure 1d), our method reaches an error of approximately 0.02 with just 50 prompts, while random sampling requires nearly 100 prompts to achieve the same accuracy. This pattern is consistent across all benchmarks, with our method's error decreasing more rapidly as more prompts are acquired. This faster convergence to low error rates demonstrates our method's superior efficiency in identifying the most informative prompts for evaluation. ## Visual Representation We acknowledge the reviewer's suggestion about including more visual representations. Figure 1 in our paper illustrates various benchmarks with performance plots showing acquisition budget versus absolute error. In the revised version, we will add a schematic diagram illustrating our overall approach to make the workflow more accessible to readers. We appreciate the opportunity to address these concerns and believe our work makes a significant contribution to efficient LLM evaluation, with strong empirical evidence supporting our claims.
Summary: This paper introduces a novel RL-based method for LLM benchmark evaluation. It addresses both efficiency and accuracy: the evaluation becomes more accurate while the computational overhead of the evaluation process is reduced. The authors are inspired by active learning and propose their approach by modeling the dependencies across different samples. Their experimental results show the method's performance, including accuracy and robustness, and also demonstrate its contribution to efficiency in the field of LLM evaluation. Claims And Evidence: Yes Methods And Evaluation Criteria: The methods make sense, since active learning can genuinely improve efficiency while preserving accuracy. The authors combine active learning with an RL-based method to model the dependencies and achieve their goals. Their method is also grounded in neural processes, and relevant literature supports the components of the method, which is consistent with their motivation. As for the evaluation criteria, I think they are reasonable: the absolute error can show the method's effectiveness, but I am not sure it is the best choice, or whether it is sufficient on its own. For example, the authors could report running time to demonstrate the efficiency gains. Moreover, the absolute-error numbers make different results hard to distinguish. Are there better criteria, or could the authors show that the metric they use is the most commonly adopted? Theoretical Claims: I think the theoretical parts are correct. The paper does not state any theorems or lemmas, but judging from the equations, they appear clear and correct. For example, Equations 7-9 clearly define the reward formulation in the RL setup, reflecting the method's novel ideas. So far, I have not found anything incorrect in the theoretical parts. Experimental Designs Or Analyses: Yes, I did. I think the experiments are good, but there are some concerns: 1) Some figures show large standard deviations, and the code was run using only 3 random seeds.
I think it would be better to run more trials, e.g., 5 different random seeds, to lower the standard deviation of the baselines. 2) Again regarding the absolute error, the paper does not explain why this metric is a good choice. Because all the reported numbers are so small, I cannot tell how powerful the method is. The authors could include some justification for their choice of metric. Supplementary Material: I reviewed the code in the supplementary material. It looks good; I did not find anything wrong. Relation To Broader Scientific Literature: The authors introduce active learning to LLM evaluation, and I think this work can inspire related research. In the future, there may be more work on lowering the computational cost of LLM evaluation while preserving performance, and, in other LLM-related fields, on improving both the accuracy and the efficiency of other tasks. Essential References Not Discussed: I think the necessary references are included in the method and results sections. Other Strengths And Weaknesses: Other Strengths: The paper is presented clearly, showing the pipeline and motivation along with supporting results. The pseudocode and figures in the paper are clear and well made. Other Weaknesses: It would be better if neural processes were introduced in more detail in the preliminaries. Other Comments Or Suggestions: I don't have other comments. Questions For Authors: Could you run more random seeds to decrease the variance of the baselines? That would show your method's performance better. Currently, some figures are hard to read due to large shaded areas caused by variance. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough evaluation of our work and the positive recommendation. Below, we address the specific concerns and questions raised: ## Evaluation Metrics and Result Interpretation The reviewer raised questions about our use of absolute error as an evaluation metric and the distinguishability of results. We chose absolute error because it directly measures how accurately our method estimates the true benchmark performance, which is the primary goal of our work. Regarding the distinguishability of results: This is due to the fact that given enough acquisition budget, all selection methods can eventually approximate the overall performance. However, as clearly demonstrated in our figures, our RL-based approach decreases the error much more quickly as the acquisition budget increases, which demonstrates its superiority. In Appendix F.5, we conduct a detailed analysis comparing our approach to random sampling, showing that we can achieve the same level of error using 35-92% fewer prompts across different benchmarks. This substantial reduction in required evaluations represents significant cost savings in practical applications. For instance, on MMLU, our method achieves the same accuracy with just 35 prompts as random sampling does with 100 prompts - a 65% reduction that would translate to proportional cost savings when evaluating new models. ## Statistical Significance and Random Seeds We conducted experiments with three random seeds to keep comparison consistent across all methods. The larger variance observed in baseline methods actually further demonstrates the superiority of our approach, which maintains more consistent performance across different initialization seeds. For better result representation, we have included detailed tables in the appendix (Table F.1) that show performance statistics across all runs. 
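A minimal sketch of the absolute-error metric as described above, under our assumption (not the paper's code) that the benchmark estimate combines the acquired scores with model-predicted scores for the unobserved prompts; the function and variable names are illustrative:

```python
import numpy as np

def benchmark_absolute_error(true_scores, observed_idx, predicted_unobserved):
    """Absolute error between the full-benchmark mean score and the
    estimate built from acquired scores plus predicted ones.

    true_scores: per-prompt scores on the full benchmark
    observed_idx: indices of acquired (actually evaluated) prompts
    predicted_unobserved: predicted scores for the remaining prompts
    """
    true_scores = np.asarray(true_scores, dtype=float)
    mask = np.zeros(len(true_scores), dtype=bool)
    mask[list(observed_idx)] = True
    estimate = np.concatenate([true_scores[mask], predicted_unobserved]).mean()
    return abs(estimate - true_scores.mean())

# Toy example: a 5-prompt benchmark with 2 prompts acquired
err = benchmark_absolute_error(
    true_scores=[1.0, 0.0, 1.0, 1.0, 0.0],
    observed_idx=[0, 1],
    predicted_unobserved=[0.9, 0.8, 0.1],
)
```

An acquisition policy that picks more informative prompts (and a predictor that fills in the rest more accurately) drives this error down faster as the budget grows, which is what the error-versus-budget curves in Figure 1 plot.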
## Neural Process Background We appreciate the suggestion to provide more background on neural processes. In the revised version, we will expand Section 3.1 to include: 1. A more intuitive explanation of how neural processes model stochastic functions 2. A brief comparison with related approaches like Gaussian Processes 3. A clearer explanation of why neural processes are particularly well-suited for capturing dependencies across evaluation prompts This additional background will make our methodology more accessible to readers who may be less familiar with these techniques. We appreciate the reviewer's positive assessment of our paper's organization, algorithms, and figures. We're committed to further improving the clarity and impact of our contribution in the final version.
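As a toy illustration of the "w/ pred" estimation and the absolute-error metric discussed in the rebuttal above (everything here is synthetic and illustrative; a naive mean-imputation predictor stands in for the paper's neural-process model, and the 60% accuracy and 35-prompt budget are arbitrary):

```python
import random

random.seed(0)

# Synthetic per-prompt correctness scores for one model on a 100-prompt benchmark.
true_scores = [random.random() < 0.6 for _ in range(100)]
true_accuracy = sum(true_scores) / len(true_scores)

# Acquire a budget of 35 prompts; predict the rest with the mean of the acquired
# scores (a crude stand-in for the paper's neural-process predictor).
budget = 35
acquired = random.sample(range(100), budget)
acquired_mean = sum(true_scores[i] for i in acquired) / budget

# "w/ pred" estimate: acquired scores plus predicted scores for the remainder.
estimate = (sum(true_scores[i] for i in acquired)
            + acquired_mean * (100 - budget)) / 100

abs_error = abs(estimate - true_accuracy)
print(f"true={true_accuracy:.3f} estimate={estimate:.3f} abs_error={abs_error:.3f}")
```

Note that with mean imputation the combined estimate collapses to the plain subset mean, which is one way to see why a predictor that actually models dependencies across prompts can do better than subset averaging at a fixed budget.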
Summary: The paper focuses on efficient LLM evaluation, that is, estimating overall performance based on a subset of data. The authors first model dependencies across evaluation prompts using neural processes, then analyze various selection methods and propose an RL-based method. Additionally, for the cold-start problem, the authors propose a semi-supervised approach to solve it. Finally, they compare various methods on five benchmarks and conduct a comprehensive analysis. Claims And Evidence: Yes, the experimental results support their claims well. Methods And Evaluation Criteria: Yes, the methods, baselines, and the selected benchmarks are appropriate. Theoretical Claims: Yes. The ELBO derivation in Section 3.1 is correct. Experimental Designs Or Analyses: Yes. The baseline selection in Section 3.2 is comprehensive, including a random policy, clustering, IRT, uncertainty sampling, and so on. Supplementary Material: Yes, I have reviewed the appendices, which have helped me gain a better understanding of the work, especially Appendices A–E. Relation To Broader Scientific Literature: The work focuses on efficient evaluation of LLMs. Most of the important related work has been discussed in this paper, such as CAT and Clustering-IRT. Essential References Not Discussed: You should cite "Anchor Points: Benchmarking Models with Much Fewer Examples", published in ACL 2024, although you have discussed their method (Clustering). Other Strengths And Weaknesses: Strengths: - The experiments and analyses were both conducted comprehensively. - The paper is well-structured. Weaknesses: - Although the RL-based method achieves better performance than random sampling, it is more complex, as it requires evaluation data history, training a VAE model, and training an RL model. By comparison, random sampling is still a strong baseline. More importantly, it is simple and efficient. So, whether the RL-based method is more suitable for practical applications remains uncertain.
Other Comments Or Suggestions: A suggestion is that you should highlight the practical applications of efficient evaluation in the introduction, for example, it could be used for rough testing during the model development process, etc. Questions For Authors: Please explain how to evaluate a new model using your method from scratch, given a benchmark that already contains test results for some models. Specifically, outline each step involved and estimate the workload required, such as training models. Additionally, discuss whether, given this workload, researchers would be more inclined to adopt your method or simply use random sampling. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough evaluation of our work and the positive recommendation. Below, we address the points raised: ## Practical Applications of Efficient Evaluation We appreciate the suggestion to highlight practical applications of efficient evaluation in the introduction. In the revised version, we will emphasize how our method can be particularly valuable during model development for: 1. Enabling more frequent intermediate evaluations during training, allowing earlier detection of issues 2. Supporting more extensive hyperparameter tuning by reducing evaluation costs per configuration 3. Facilitating rapid comparisons between model variants during research iterations 4. Reducing API costs for closed-source models during the development phase 5. Decreasing environmental impact through reduced compute requirements ## Implementation Workflow for New Models Regarding how to evaluate a new model from scratch, we thank the reviewer for prompting us to better highlight our existing detailed workflow in Appendix A, which provides a comprehensive description of the training and deployment procedure. As described there, our approach involves: 1. **One-time setup**: Training the neural process model and acquisition policy on historical benchmark data 2. **Per-model evaluation**: Sequentially selecting prompts, obtaining scores, and predicting remaining scores Once this setup is complete, the evaluation process for each new model adds minimal computational overhead compared to the actual LLM inference costs. ## Complexity vs. Random Sampling Trade-off We acknowledge the valid concern about complexity versus simplicity. While random sampling is indeed simpler, our results demonstrate substantial efficiency improvements (43-92% fewer evaluations) that justify the additional setup complexity in many scenarios. 
As detailed in Appendix F.5, our approach offers particularly compelling benefits for: - Organizations conducting ongoing LLM development and benchmarking - Evaluations of large or expensive models where each prompt evaluation is costly - Scenarios requiring adaptation to new model families or previously unseen prompts The choice between approaches depends on the specific evaluation context, with our method becoming increasingly beneficial as evaluation costs or frequency increase. We'll ensure these considerations are more prominently highlighted in the main text of our revision. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and I will maintain my positive score.
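The workflow in the rebuttal above (one-time setup, then sequential per-model acquisition) can be sketched roughly as follows. This is a toy stand-in, not the paper's implementation: historical per-prompt mean scores replace the trained neural process, and a max-uncertainty heuristic replaces the trained RL policy; all names and sizes are illustrative.

```python
import random

random.seed(1)

# One-time setup artifacts (hypothetical): per-prompt historical mean scores
# from previously evaluated models, standing in for the trained predictor.
n_prompts = 50
hist_mean = [random.random() for _ in range(n_prompts)]

# The new model under evaluation; its true per-prompt scores stay hidden
# from the acquisition policy until a prompt is actually evaluated.
true = [random.random() < hist_mean[i] for i in range(n_prompts)]

def acquire_next(remaining):
    # Heuristic policy: pick the prompt with the historical mean closest to
    # 0.5, i.e., the one the predictor is most uncertain about.
    return min(remaining, key=lambda i: abs(hist_mean[i] - 0.5))

observed = {}
remaining = set(range(n_prompts))
for _ in range(10):                      # acquisition budget
    i = acquire_next(remaining)
    remaining.discard(i)
    observed[i] = true[i]                # run the LLM on prompt i (stubbed)

# Predict unobserved scores by shifting historical means by the observed offset.
offset = (sum(observed.values()) / len(observed)
          - sum(hist_mean[i] for i in observed) / len(observed))
pred = {i: min(1.0, max(0.0, hist_mean[i] + offset)) for i in remaining}

estimate = (sum(observed.values()) + sum(pred.values())) / n_prompts
print(f"estimated accuracy = {estimate:.3f}")
```

The point of the sketch is the split the rebuttal describes: everything before the loop is one-time setup, and only the loop body runs per new model, so the per-model overhead beyond LLM inference stays small.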
Summary: This paper proposes a large language model (LLM) evaluation method, which considers dependency modeling and subset selection to improve efficiency. The authors develop a model that captures dependencies across evaluation prompts and propose subset selection policies based on these dependencies. Extensive experiments on multiple LLM evaluation benchmarks demonstrate the superiority of the proposed method in providing accurate performance estimation with minimal acquisition budget. ## update after rebuttal The authors' rebuttal resolves most of my concerns. I regard this paper as a weak-accept case and maintain my score. Claims And Evidence: The claims made in the submission are supported by clear theoretical and empirical evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria mostly make sense for the problem. However, there is a potential risk that the selected subset for different groups of LLMs may vary. Thus, the comparison of two LLMs' performance may be affected by the group of LLMs in the training dataset, which seems unnatural. Theoretical Claims: I have checked most of the theoretical part of this paper and found no obvious errors. Experimental Designs Or Analyses: The experimental designs and analyses are mostly sound. Supplementary Material: I have roughly reviewed the supplementary material, including the appendix and the submitted code. Relation To Broader Scientific Literature: Compared with the works in the broader scientific literature, the key contribution of this paper is to improve LLM evaluation efficiency via dependency modeling and subset selection. Essential References Not Discussed: There are some important papers about active evaluation of natural language generation (NLG) that should be properly discussed in this paper. For example, [1] investigates active learning in a related evaluation setting (i.e., pairwise comparison).
[1] Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons. ACL 2022 Other Strengths And Weaknesses: Strengths: 1. The research problem about efficient LLM evaluation is interesting and realistic, because the cost of LLM evaluation increases quickly since the model parameters become larger. 2. The proposed method is sound and convincing with dependency modeling and subset selection. 3. This paper is well-organized and overall easy to follow. Weaknesses: 1. I wonder whether the proposed method can effectively adapt to new models and benchmarks, since the core challenge of LLM evaluation compared with traditional NLG tasks' evaluation is the high requirement of generalization ability. The authors should add more theoretical or empirical analysis about this point. Other Comments Or Suggestions: None. Questions For Authors: I have included my questions in other parts of the review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful assessment of our work. Below, we address the key points raised: ## Concern about Subset Selection and Fairness Across Models The reviewer raises a valid concern about whether variations in selected subsets for different groups of LLMs might affect fair comparison between models. This is indeed an important consideration that we would like to clarify: 1. Our framework maintains fair comparability by using both the acquired scores on selected prompts AND predicted scores on the remaining prompts for final benchmark estimation. This ensures that all models are evaluated on the full benchmark (either via direct evaluation or accurate prediction), enabling fair comparison regardless of which specific prompts were selected for each model. 2. As demonstrated in Table 2 of our paper, our prediction-based estimation approach ("w/ pred") consistently outperforms direct aggregation of only the acquired evaluation scores ("w/o pred"). This confirms that our method maintains benchmark integrity while improving efficiency. 3. We conducted a specific experiment examining model bias (Section 5, Fig. 2) where we deliberately evaluated on models from different families than those used for training. The results demonstrate that our method remains effective even in this challenging scenario, indicating robustness to the training dataset composition. ## Adaptation to New Models and Benchmarks The reviewer asks about our method's ability to adapt to new models and benchmarks, which is indeed crucial for LLM evaluation: 1. For new models: Our approach is explicitly designed for evaluating new, unseen models. The train-test splits in our experiments (particularly for Open LLM Leaderboard where we use chronological splitting) ensure that our method is validated on truly new models not seen during training. 
The consistently strong results across various benchmarks demonstrate the generalization capability to new models. 2. For new prompts (cold start problem): We explicitly address this in Section 5 with a dedicated experiment (Fig. 3) where 15 MMLU subsets are treated as "cold start prompts" with no historical scores. Our RL-based policy successfully generalizes to these completely new prompts due to our policy architecture that explicitly incorporates prompt representations. 3. For adapting to distribution shifts: We could further enhance adaptation through continual learning approaches where the neural process model and acquisition policies are jointly updated as new models are evaluated - a promising direction for future work that we mention in the conclusion. ## Missing Reference on Active Evaluation We thank the reviewer for highlighting the paper "Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons". This is indeed a relevant work that should be discussed in our paper. This paper focuses on efficiently identifying top-ranked NLG systems using pairwise comparisons, specifically applying dueling bandit algorithms to reduce the number of human annotations required. Our work shares the goal of improving evaluation efficiency but addresses a complementary problem: reducing the number of prompts needed to benchmark individual LLM performance rather than ranking systems through pairwise comparisons. We will incorporate a thorough discussion of this paper in our revised manuscript, acknowledging their valuable contributions to efficient evaluation methods and highlighting how both approaches serve the broader goal of making NLG/LLM evaluation more practical and accessible. 
## Additional Comments We note the reviewer's positive assessment of our paper's strengths, including: - The importance of the research problem - The soundness of our proposed methods - The clarity and organization of the paper These align with our goal of addressing a practical challenge in LLM development while maintaining scientific rigor. ## Conclusion We appreciate the reviewer's recommendation and constructive feedback. We believe our work makes a significant contribution by dramatically reducing evaluation costs (by 43-92% across benchmarks) while maintaining accurate performance estimation. This enables more efficient and sustainable LLM development, especially for researchers with limited computational resources. In the revised version, we will: 1. Further clarify how our approach maintains fair comparability between models 2. Incorporate the suggested reference and discuss its relevance 3. Expand our discussion on adaptation to new models and benchmarks Thank you for the opportunity to address these points and strengthen our paper.
Local Manifold Approximation and Projection for Manifold-Aware Diffusion Planning
Accept (poster)
Summary: The authors introduce formally and investigate mathematically the manifold deviation issue due to approximate guidance in the context of trajectory-planning (for reward maximization) via diffusion models. They provide a lower bound on this error, and to address this issue they introduce LoMAP, a training-free method that performs projections sequentially by leveraging offline data along the reverse sampling process to constrain the generated samples to the underlying data manifold. Ultimately, they show extensive experimental validation for the proposed method. ## update after rebuttal After reading the authors rebuttal I decided to keep my original score as the content of the rebuttal did not alter significantly my beliefs regarding positive aspects as well as (inherent) limitations of this work. Claims And Evidence: 1 - The claim "The current sample is then projected onto this subspace, thereby remaining on the manifold of valid behaviors." at line 69 seems not precise. Diffusion models, as well of offline data, capture an implicit notion of validity only approximately. As a consequence these models can generate data points that are invalid, and the procedure presented within the paper, although useful to improve the chances of generating valid points, does not seem to imply that the generated points are valid. In order to achieve this, the diffusion model would need to interact with an available validity checker or certain assumptions would have to be made about the data space. 2 - The claim "Consequently, combining LoMAP with minority guidance can help uncover feasible yet unexplored solutions that might otherwise remain inaccessible to standard diffusion planners." at line 403 seems not precise. 
To the best of my knowledge, training a diffusion model consists of learning a function approximator for the score function (i.e., a score network) that can induce a final marginal density (on the data space) with modes (i.e., significantly positive density) in regions where no data are present. It seems to me that the proposed method would instead prevent the diffusion model from sampling these modes, while it would let the model sample low-density modes where enough offline data are present. From the experiments presented within Sec. 4.4 this distinction is not clear, and therefore the claim above does not seem supported, especially the 'unexplored solutions' part. Methods And Evaluation Criteria: Yes. Theoretical Claims: Checked the derivations in the main paper; they seem correct. Experimental Designs Or Analyses: Checked the experiments in Sec. 4.1, which seem convincing to me. Regarding the experiments in Sec. 4.4, I mentioned a doubt within point (2) of the Claims and Evidence section. Supplementary Material: No. Relation To Broader Scientific Literature: - The paper tackles an important problem that arises in the context of trajectory-based planning, a very timely research topic, tackled via guidance. To the best of my knowledge, the paper properly mentions related works and references and makes a convincing positive comparison with works within this literature stream. - The problem of reward-guided sampling can be solved in a multitude of ways beyond guidance. Examples include using RL [1] or control-theoretic [2] formulations to fine-tune a diffusion model, or inference-time techniques beyond guidance [3]. It is not clear how the ideas presented in this work extend to these settings, as the current formulation seems quite specific to guidance. [1] Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review, Uehara et al.
[2] Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control, Domingo-Enrich et al. [3] Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review, Uehara et al. Essential References Not Discussed: I am not aware of important references not mentioned. Other Strengths And Weaknesses: Strengths: 1. The paper tackles a timely open problem and presents a formally clear formulation. 2. Proposes a solution that seems of practical relevance (i.e., easy to use) and well-performing on relevant problems. 3. The paper is well written, clear, and easy to read. Weaknesses: 1. My main concern with the paper regards the risk of ending up with a sampler excessively regularized by offline data (i.e., overfitting to offline data), as explained within point (2) of the Claims And Evidence section. This might be problematic for using the introduced machinery in RL problems, where the goal is often to discover new strategies not contained within offline data. Other Comments Or Suggestions: - How is $n$ in line 161 defined? Questions For Authors: - Did I misinterpret something within the weakness point that I mentioned? - In line 225 the paper claims that Prop. 3.2 indicates that the manifold deviation problem is stronger for long-horizon tasks. How should I infer this from the mathematical result within the proposition? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed review and constructive feedback on our work. We especially appreciate the reviewer pointing out the clarity issue regarding our claims. Please find our detailed response below. - **“The claim "The current sample is then projected onto this subspace, thereby remaining on the manifold of valid behaviors." at line 69 seems not precise.”** We fully agree with the reviewer that our LoMAP method does not necessarily guarantee the validity of behaviors generated by diffusion models. Accordingly, we will **tone down** this claim to avoid ambiguity and overstatement. Specifically, we will revise: > "thereby remaining on the manifold of valid behaviors." > to a more accurate and precise description: > "thereby significantly reducing off-manifold deviations and improving the likelihood of generating valid behaviors." > - **“The claim "Consequently, combining LoMAP with minority guidance can help uncover feasible yet unexplored solutions that might otherwise remain inaccessible to standard diffusion planners." at line 403 seems not precise.”** We apologize for the confusion caused by this statement. Our original intention was to highlight that standard diffusion planners typically require a large number of samples to generate trajectories located in low-density regions of the data distribution, thus limiting exploration within a restricted sampling budget. We aimed to clarify that one way to overcome this limitation—though not our contribution—is through minority guidance [1], which explicitly guides sampling towards low-density regions. Our contribution, LoMAP, further enhances this process by ensuring that trajectories sampled under minority guidance remain feasible and closer to the data manifold, allowing efficient generation of trajectories from low-density regions without extensive sampling. We appreciate the reviewer highlighting this ambiguity and will clarify this point carefully in the final version of the paper. 
- **“what is $n$ in line 161 defined?”** We apologize for this confusion. This was a typo. To clarify, the original sentence: > "… $\boldsymbol{\tau}^i$ is inherently concentrated around a ($n-d$) dimensional manifold $\mathcal{M}_i$." > should be corrected to the following description: > "… $\boldsymbol{\tau}^i$ is inherently concentrated around a ($d-k$)-dimensional manifold $\mathcal{M}_i$." > Thank you for pointing out this typo. - **“In line 225 the paper claims that Prop. 3.2 indicates that the manifold dev. problem is stronger for long-horizon tasks. How should I infer this from the mathematical result within the proposition?”** Proposition 3.2 states that the guidance gap scales at least proportionally to the square root of the dimensionality $d$ of the trajectory representation: $$\Delta_{\mathrm{guidance}}\bigl(\boldsymbol{\tau}^i\bigr) \ge \frac{\,c\,}{\sqrt{1-\alpha_i}}\sqrt{d}$$ We clarify that the dimensionality $d$ here directly relates to the length of the planning horizon. Specifically, in diffusion-based planning, trajectories are represented by concatenating states and actions across the entire planning horizon. Therefore, the dimension $d$ grows linearly with the length of the planning horizon. Consequently, the lower bound provided by Proposition 3.2 indicates that as the planning horizon increases, the guidance gap inevitably grows, which implies a higher likelihood and severity of manifold deviation in longer-horizon tasks. - **“My main concern regarding the paper regards the risk of ending up obtaining a sampler excessively regularized from offline data as explained within my point (2) of Claims And Evidence section.”** Thank you for raising this important point. We acknowledge that our approach inherently regularizes trajectories towards the offline data manifold. 
However, our primary objective in this work is to ensure the generation of feasible, high-return trajectories for offline RL tasks particularly in settings where safety and reliability are crucial. This highlights a fundamental trade-off between trajectory feasibility and exploration freedom. To clarify this explicitly, we will revise the manuscript to discuss this limitation in detail, clearly identifying scenarios where LoMAP provides the most substantial benefits (e.g., safety-critical domains or tasks prioritizing feasibility over aggressive exploration). Furthermore, we believe LoMAP can be effectively combined with trajectory stitching and data augmentation techniques [2, 3], which enrich offline datasets with diverse, synthetic trajectories, which we leave for future work. References: [1] Um et al., Don't Play Favorites: Minority Guidance for Diffusion Models. In ICLR, 2024. [2] Li et al., DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching. In ICML, 2024. [3] Chen et al., Extendable Long-Horizon Planning via Hierarchical Multiscale Diffusion. arXiv, 2025.
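For readers following this thread, the local projection at the heart of LoMAP can be sketched as below: a minimal numpy illustration with a toy 2-D linear manifold embedded in 10-D, where an off-manifold point is projected onto a rank-`r` PCA subspace of its `k` nearest neighbors. The denoiser, the neighbor count `k`, and the rank `r` are all illustrative, not the paper's actual components or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offline dataset: points near a 2-D linear manifold embedded in 10-D.
basis = rng.standard_normal((10, 2))
data = rng.standard_normal((500, 2)) @ basis.T + 0.01 * rng.standard_normal((500, 10))

def lomap_project(x, data, k=20, r=2):
    """Project x onto a rank-r PCA subspace of its k nearest neighbors."""
    d = np.linalg.norm(data - x, axis=1)
    nbrs = data[np.argsort(d)[:k]]
    mu = nbrs.mean(axis=0)
    # Principal directions of the local neighborhood via SVD.
    _, _, vt = np.linalg.svd(nbrs - mu, full_matrices=False)
    v = vt[:r].T                          # (10, r) local tangent basis
    return mu + (x - mu) @ v @ v.T        # orthogonal projection onto the subspace

# A "guided" sample pushed off the manifold, then projected back:
off_manifold = data[0] + 1.0 * rng.standard_normal(10)
projected = lomap_project(off_manifold, data)

def dist_to_manifold(x):
    # Residual of the least-squares fit onto the true plane (known here only
    # because the toy manifold is linear).
    coeff = np.linalg.lstsq(basis, x, rcond=None)[0]
    return np.linalg.norm(x - basis @ coeff)

print(dist_to_manifold(off_manifold), dist_to_manifold(projected))
```

In this linear toy case the local PCA recovers the tangent plane, so the projection removes most of the off-manifold component; the paper's setting replaces the raw sample with a Tweedie-denoised estimate and the toy points with offline trajectories.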
Summary: The authors tackle the problem of approximate energy guidance with diffusion policies in the context of offline RL. Partial energies naively trained through MSE loss such as in Diffuser are only lower bounds to the true energy, and so using their gradients for conditional sampling can push trajectories off the manifold supported by the offline dataset. The authors propose a simple to implement technique to project the latent trajectories back into a low rank approximation of the K nearest neighbors to the dataset, noised to the current diffusion timestep. The K nearest neighbors are obtained with the expected denoised trajectory from the current diffusion step using Tweedie's formula. The authors demonstrate that using this correction improved performance across tasks in the D4RL suite and specifically highlight generation of valid trajectories in Maze2D and AntMaze tasks, where Diffuser often generates infeasible paths that pass through walls. Claims And Evidence: The claims in the paper are clear, and evidence is convincing Methods And Evaluation Criteria: D4RL, specifically the maze tasks make sense for evaluation of this approach. However, I will note that D4RL is a very saturated benchmark, and going forward I hope OGBench is used instead. Since this work must have been done concurrently with the introduction of OGBench, I do not consider this a weakness. Theoretical Claims: The theoretical claims are valid, and I checked the only major proof in Appendix A. Experimental Designs Or Analyses: The experiments seem to be sound, I did not notice any specific issue. Supplementary Material: I reviewed the proof in Appendix A, and quickly skimmed through the rest of the supplementary material which primarily consists of implementation details. Relation To Broader Scientific Literature: The paper is relevant to the application of diffusion models for offline RL, which is an active area of research. 
Contrastive Energy Prediction [1] and Relative Trajectory Balance [2] are quite relevant to this work, as they similarly focus on the problem of inaccurate energy guidance in the context of offline RL. However, in that paper, the authors introduce a contrastive training strategy to learn an unbiased guidance function, whereas this paper instead introduces a simpler-to-implement (but perhaps less general) training-free guidance method. The problem of training-free approximate/asymptotically unbiased guidance for diffusion models is heavily explored outside the context of offline RL, as surveyed in [3]. [1] Contrastive Energy Prediction for Exact Energy-Guided Diffusion Sampling in Offline Reinforcement Learning, https://arxiv.org/abs/2304.12824 [2] Amortizing intractable inference in diffusion models for vision, language, and control, https://arxiv.org/abs/2405.20971 [3] Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review, https://arxiv.org/abs/2501.09685 Essential References Not Discussed: I think it would improve the paper to include references (and perhaps even a comparison) to asymptotically unbiased guidance strategies such as Twisted Diffusion Sampler (TDS) [1] and some other SMC-based methods listed in the above survey paper, and to broadly discuss this line of work (perhaps in the appendix). [1] Practical and Asymptotically Exact Conditional Sampling in Diffusion Models, https://arxiv.org/abs/2306.17775 Other Strengths And Weaknesses: ### Strengths 1. The paper is clearly written, with useful illustrations. 2. The proposed method is novel to my understanding. ### Weaknesses 1. K-nearest-neighbors computation with the entire dataset could be expensive, especially with growth in the dimensionality of states and trajectory length. This is especially problematic for any sort of real-world control tasks. 2.
The approach used to compute K-nearest neighbors from the dataset, with Tweedie's denoised estimate as the anchor and cosine distance the metric function may not be good heuristics for other problems (especially with high dimensional state spaces). Perhaps some kind of abstract trajectory latent representations could be used instead? Other Comments Or Suggestions: Could you include QGPO (from the contrastive energy prediction paper) to the results tables? It is a technique which trains a time dependent energy model for unbiased guidance, so perhaps can serve as a sort of upper bound in performance. Questions For Authors: 1. With the IVF method for faster nearest neighbors as described in Appendix D, what was the wall clock time to generate a single trajectory for a task like AntMaze? How expensive is the procedure with naive nearest neighbors? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive review and constructive feedback. Please find our detailed response below. - **“With the IVF method for faster nearest neighbors as described in Appendix D, what was the wall clock time to generate a single trajectory for a task like AntMaze? How expensive is the procedure with naive nearest neighbors?”** Thank you for highlighting this important practical aspect. We measured the wall-clock time required by Hierarchical Diffuser (HD) with LoMAP in the AntMaze task, averaging the generation time per trajectory using a single NVIDIA A10G GPU, with the number of neighbors set to $K=10$. | **Method** | **Total wall-clock time per trajectory (sec)** | **KNN-search time (sec)** | | --- | --- | --- | | Naive KNN | 21.62 | 21.02 | | IVF-based approximate KNN (**ours**) | 0.53 | 0.01 | As shown above, the naive KNN search is prohibitively expensive, requiring over 21 seconds to generate a single trajectory, with the vast majority of time dedicated to the KNN search itself. In contrast, our IVF-based approximate KNN significantly reduces runtime, with only a negligible fraction of the total time spent on the nearest neighbor search. - **“Could you include QGPO to the results tables? It is a technique which trains a time dependent energy model for unbiased guidance, so perhaps can serve as a sort of upper bound in performance.”** Thank you for suggesting QGPO as an additional baseline. We agree that including QGPO can indeed serve as a valuable upper-bound comparison for performance. We appreciate this insightful recommendation and will incorporate the QGPO results into the final version of our paper. - “**Cosine distance metric function may not be good heuristics for other problems (especially with high dimensional state spaces).**” As the reviewer correctly points out, cosine distance may not be suitable for certain high-dimensional state spaces, especially pixel-based environments. 
Exploring effective manifold approximation methods for more complex, high-dimensional state spaces remains an important direction for future work. We believe one promising approach would be to combine LoMAP with latent trajectory representations [1, 2]. We will clarify this limitation explicitly in the final version. - **"An additional references to asymptotically unbiased guidance strategies (perhaps in the appendix)"** Thank you for this valuable suggestion. We will discuss asymptotically unbiased guidance strategies further in the related work section of the final version. References [1] Co-Reyes et al., Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings. In ICML, 2018. [2] Jiang et al., Efficient Planning in a Compact Latent Action Space. In ICLR, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for the response, I will maintain my score 4 (accept). --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your time and dedication to providing us with your valuable feedback.
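The IVF-style approximate nearest-neighbor search behind the timing comparison in this thread can be illustrated in miniature with pure numpy. All sizes here are illustrative, and randomly chosen data points stand in for the trained k-means centroids a real FAISS IVF index would use; the idea is only to show why probing a few buckets beats scanning the whole dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((20000, 32)).astype(np.float32)

def sq_dists(a, b):
    # Squared Euclidean distances between rows of a and rows of b.
    return (a**2).sum(1)[:, None] - 2.0 * a @ b.T + (b**2).sum(1)[None, :]

# Build a toy inverted-file (IVF) index: bucket every vector under its
# nearest coarse centroid.
n_lists = 64
centroids = data[rng.choice(len(data), n_lists, replace=False)]
assign = sq_dists(data, centroids).argmin(axis=1)
buckets = [np.flatnonzero(assign == c) for c in range(n_lists)]

def ivf_knn(q, k=10, n_probe=4):
    # Probe only the n_probe closest buckets instead of scanning all rows.
    probe = sq_dists(q[None], centroids)[0].argsort()[:n_probe]
    cand = np.concatenate([buckets[c] for c in probe])
    d = sq_dists(q[None], data[cand])[0]
    return cand[d.argsort()[:k]]

# Query near a known database point; exact brute-force search for comparison.
q = data[123] + 0.01 * rng.standard_normal(32).astype(np.float32)
approx = ivf_knn(q)
exact = sq_dists(q[None], data)[0].argsort()[:10]
recall = len(set(approx.tolist()) & set(exact.tolist())) / 10
print(f"recall@10 = {recall:.2f}")
```

The approximate search touches only about `n_probe / n_lists` of the dataset per query, which is the source of the large wall-clock gap reported in the table above; a production index would additionally train the centroids and tune `n_probe` for the desired recall.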
Summary: Classifier guidance can introduce distribution shift during diffusion sampling. This paper proposes a training-free method to constrain guided diffusion within a learned manifold by projecting noisy samples onto a local low-dimensional manifold, approximated using nearest neighbors from the training set at each diffusion step. Claims And Evidence: The claim that guided diffusion can lead to manifold deviation is supported by theoretical analysis. However, the assertion that LoMAP, the proposed projection method, effectively bridges the guidance gap is less convincing, as evidenced by its lack of significant improvement over baselines like DD and TAT in Table 2. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. The code part. Relation To Broader Scientific Literature: See below the essential reference section. Essential References Not Discussed: Many recent works have explored diffusion sampling that satisfies constraints [1-3]. In particular, [1] introduced a training-free, plug-and-play four-line modification to guided diffusion that directly addresses the issue shown in your Figure 3. It also enables minority guidance without introducing constraint violations, as demonstrated in your Figure 5. Comparing your proposed method against this simpler alternative would help verify whether it offers a meaningful improvement over existing approaches in addressing the same challenge. [1] Inference-Time Policy Steering through Human Interactions [2] Learning Diverse Robot Striking Motions with Diffusion Models and Kinematically Constrained Gradient Guidance [3] DISCO: Language-Guided Manipulation with Diffusion Policies and Constrained Inpainting Other Strengths And Weaknesses: Strength – The paper provides a theoretical demonstration of manifold deviation. Weakness – The results do not show a strong improvement over chosen baselines such as DD or TAT. 
Additionally, the study lacks comparisons with directly relevant baselines, as mentioned above.

Other Comments Or Suggestions: Figures 1(a) and 1(b) do not appear significantly different, making it difficult to highlight the differences. Additionally, Figure 1(c) does not clearly convey the idea of your method. It may be helpful to refine the figure to better illustrate the key improvements and make the distinctions more visually compelling.

Questions For Authors:
1. What is the intuition behind the idea that projecting onto the subspace approximated by PCA of nearest neighbors will reduce manifold deviation during diffusion? Why does PCA, in particular, help? Does the number of PCA components impact performance?
2. Could this projection (an approximation) at later diffusion steps introduce constraint violations? Have you experimented with applying LoMAP only during the early diffusion steps or, conversely, only during the later diffusion steps, rather than applying it across all diffusion steps?
3. What is the computational overhead of finding nearest neighbors from a large offline trajectory dataset at each diffusion step? Does your algorithm require access not only to the diffusion model but also to the dataset the model was trained on?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for the detailed review and constructive feedback on our work. Please find our detailed responses below.

- **Not a strong improvement over chosen baselines**: We appreciate the valuable feedback provided by the reviewer. However, we respectfully disagree with the subjective judgment that there was “not strong” improvement. We believe the performance improvements demonstrated by our method are meaningful. Specifically, our approach consistently outperforms all baselines in terms of average performance across three established benchmarks, despite the simplicity of our method.

- **Missing baselines**: We sincerely thank the reviewer for pointing out several interesting works. However, we would like to clarify why the suggested baselines are not directly suitable for comparison in our setting:
  - **[1]** assumes human-provided guidance in the form of sketches or demonstrations, which fundamentally differs from our scenario where no such human intervention is assumed. Additionally, [1] relies on a diffusion-based policy rather than a diffusion-based planner, making direct comparisons inappropriate. The reviewer's comment, "It also enables minority guidance without introducing constraint violations, as demonstrated in your Figure 5," is also not entirely accurate. The trajectories in [1] were generated using a diffusion policy trained with random walks rather than being goal-conditioned as in our experiments.
  - **[2]** assumes the availability of differentiable constraints. For instance, applying [2] directly to the maze environment would require prior knowledge about the exact location of walls, making the comparison unfair.
  - **[3]** focuses specifically on manipulation scenarios guided by language instructions, which significantly deviates from the tasks and settings considered in our work, making it an unsuitable baseline.

---

- **Figure suggestion**: Thank you for the suggestions!
We will incorporate them into the final version of our paper.

- **What is the intuition behind LoMAP**: Please see our response to reviewer XEyQ for the details.

- **Impact of the number of PCA components**: We have conducted additional experiments by varying the number of PCA components ($K$). As shown in the results below, LoMAP demonstrates robustness provided that $K$ is above a certain threshold. We select $K=10$ as it offers computational efficiency comparable to $K=20$, yet achieves similar performance.

| Environment | **Diffuser** | **K=5** | **K=10** | **K=20** |
| --- | --- | --- | --- | --- |
| halfcheetah-med-expert | 88.9 ± 0.3 | 91.2 ± 0.3 | 91.1 ± 0.2 | 90.9 ± 0.2 |
| hopper-med-expert | 103.3 ± 1.3 | 108.9 ± 2.9 | 110.6 ± 0.3 | 110.9 ± 0.2 |
| walker2d-med-expert | 106.9 ± 0.2 | 107.8 ± 0.2 | 109.2 ± 0.1 | 108.9 ± 0.1 |
| halfcheetah-med | 42.8 ± 0.3 | 44.5 ± 0.1 | 45.4 ± 0.1 | 44.9 ± 0.1 |
| hopper-med | 74.3 ± 1.4 | 90.16 ± 2.0 | 93.7 ± 1.5 | 94.1 ± 1.7 |
| walker2d-med | 79.6 ± 0.6 | 79.17 ± 1.8 | 79.9 ± 1.2 | 81.4 ± 1.1 |
| halfcheetah-med-replay | 37.7 ± 0.5 | 38.2 ± 1.2 | 39.1 ± 1.0 | 38.9 ± 0.7 |
| hopper-med-replay | 93.6 ± 0.4 | 96.4 ± 0.2 | 97.6 ± 0.6 | 96.3 ± 0.1 |
| walker2d-med-replay | 70.6 ± 1.6 | 74.7 ± 3.1 | 78.7 ± 2.2 | 78.9 ± 1.8 |

- **“Have you tried applying LoMAP only in early steps or only in later steps?”**: Yes, we explored applying LoMAP exclusively in either early or later stages of the diffusion sampling process. We observed that applying LoMAP only in early steps was not particularly effective. In contrast, applying LoMAP during intermediate to later diffusion steps yielded the most significant improvements. This indicates that the mismatch between the model’s reverse transition and the true data distribution becomes most pronounced during these intermediate steps, aligning with similar observations reported in a prior study [4].
- **“Computational overhead of finding nearest neighbors”**: Please see our response to reviewer wazP for the details.

- **“Does your algorithm require access not only to the diffusion model but also to the dataset the model was trained on?”**: LoMAP, as currently presented, assumes the existence of an offline dataset, consistent with other offline RL baselines. For instance, Decision Diffuser (DD) retrains a conditional diffusion planner from scratch using offline datasets, and RGG trains an out-of-distribution (OOD) score predictor utilizing offline datasets. Unlike these methods, our approach leverages the dataset directly without additional training, making it simple and straightforward to apply.

References:

[1] Shi et al., Inference-Time Policy Steering through Human Interactions. In ICRA, 2025.
[2] Lee et al., Learning Diverse Robot Striking Motions with Diffusion Models and Kinematically Constrained Gradient Guidance. arXiv, 2024.
[3] Hao et al., DISCO: Language-Guided Manipulation with Diffusion Policies and Constrained Inpainting. arXiv, 2024.
[4] Na et al., Diffusion Rejection Sampling. In ICML, 2024.

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' rebuttal effort. However, I respectfully disagree with the authors' claim that their approach "consistently outperforms all baselines." If you look at individual entries in Table 2, the baselines DD, TAT, and RGG either outperform LoMAP or are within the error bars of LoMAP. Admittedly, LoMAP has the best overall average performance, but there are no error bars, and DD, TAT, and RGG's average performances come very close. Therefore, objectively, I cannot agree that the proposed LoMAP provides a strong improvement over the baselines. Furthermore, the majority of the baselines (BC, CQL, IQL, DT, TT, MOPO, MOReL, DD) do not natively address the issue of inadvertently sampling infeasible plans during conditional generation.
Therefore, surpassing their performance by a large margin cannot be used as evidence that LoMAP improves upon the SoTA methods for mitigating distribution shift (or for sampling feasible plans that satisfy constraints) during conditional generation. Rather, the authors should focus their efforts on meaningfully beating TAT, RGG, and the three works [1–3] I mentioned in this spirit.

The authors dismissed [1] by saying it assumes human guidance, which is not true. [1] only requires human guidance when performing conditional generation—just as LoMAP uses goal conditions to do conditional plan generation. In fact, one could replace human guidance with goal conditioning of your tasks, and the same algorithm would apply. Given that [1] is just a four-line algorithmic change to Diffuser (which the authors used) that reduces infeasible plans during conditional generation, it is worth comparing to LoMAP (which appears to be a much more complicated algorithm).

The authors dismissed [2] by saying that applying [2] directly to the maze environment would require prior knowledge about the exact location of walls. This does not seem overly restrictive, given that LoMAP requires the entire offline dataset to be available in order to find nearest neighbors during deployment-time planning. In fact, if one already has access to the entire offline dataset, one can simply recover the maze walls from the offline trajectory dataset and plug in [2] to see whether LoMAP truly performs better at sampling plans that satisfy constraints.

The authors dismissed [3] by saying that [3] focuses specifically on manipulation scenarios guided by language instructions, which is not relevant. Admittedly, [3] includes components that are irrelevant to LoMAP. However, one core innovation of [3] is to perform simple gradient descent during diffusion so that sampling falls back onto the data manifold, thereby reducing the sampling of infeasible plans ([3], Fig. 3(b)).
This practically works well and appears much simpler than the proposed LoMAP. Therefore, it is worth comparing LoMAP to [3] to evaluate whether the added complexity of LoMAP's algorithmic design truly brings additional benefits.

Lastly, I am also concerned that many of the results rely on normalized average returns (in line with offline RL works) as a performance measure, which might not be the most appropriate metric for evaluating whether a sampled plan is feasible. The authors may consider using binary counts, such as their proposed artifact ratios, for all experiments to better demonstrate LoMAP's improvement in sampling feasible plans.

While I really appreciate the cleverness of the algorithmic design of LoMAP and the thorough theoretical analysis, I am concerned about the practical value of the algorithm. Given that the authors' rebuttal glosses over my concern about whether LoMAP truly improves upon the latest methods that reduce infeasible plans during conditional diffusion sampling, I will lower my score to reject.

[1] Inference-Time Policy Steering through Human Interactions
[2] Learning Diverse Robot Striking Motions with Diffusion Models and Kinematically Constrained Gradient Guidance
[3] DISCO: Language-Guided Manipulation with Diffusion Policies and Constrained Inpainting

---

Reply to Comment 1.1.1:

Comment: We sincerely appreciate your additional feedback. Below, we present additional comparisons and experiments designed to address the concerns, specifically regarding (1) direct comparisons with [1–3], and (2) metrics for feasibility.

* **Additional Comparisons with [1], [2], and [3]**

Following the reviewer's feedback, we conducted additional experiments in the Maze2D-Large environment to evaluate the effectiveness of our LoMAP method compared to [1], [2], and [3]. For a fair comparison, we used the same diffusion model as described in our paper (e.g., using a planning horizon four times longer than in [1], including velocity states).
**Comparison with [1]:** As suggested, we adapted the method from [1] by goal-conditioning via stochastic sampling, tuning the number of MCMC sampling steps $(2, 4, 6, 8)$.

**Comparison with [2]:** We approximated walls as multiple spheres in Maze2D and defined a sphere-based constraint cost following [4]. Specifically:
$$ J_{c}(\tau) = \sum_{m=1}^{M} \sum_{t=1}^{H} \max \Bigl( r - \mathrm{dist}\bigl(\tau_{t}, p_{m}\bigr), 0 \Bigr), $$
where $H$ is the planning horizon, $p_m$ is the center of the $m$-th sphere, and $r$ is its radius. We tuned the guidance scale within the range $(0.001, 0.01, 0.05, 0.1)$.

**Comparison with [3]:** To emulate VLM-based keyframe generation akin to [3], we trained a high-level policy that learns the optimal $k$-step jump using Hierarchical Implicit Q-learning (HIQL) [5] to generate subgoals. These subgoals served as keyframes for the inpainting optimization technique from [3], with $k=25$ following the official implementation in ogbench.

We first extended the experiments from Section 4.1 to compare artifact ratios:

| # of plans | **LoMAP (ours)** | **Diffuser** | **[1]** | **[2]** | **[3]** |
| --- | --- | --- | --- | --- | --- |
| 10 | **0.35** | 0.50 | 0.42 | 0.49 | 0.43 |
| 20 | **0.35** | 0.62 | 0.44 | 0.54 | 0.46 |
| 30 | **0.38** | 0.66 | 0.47 | 0.61 | 0.49 |

LoMAP consistently outperformed all compared methods, significantly reducing the artifact ratio. Notably, even [2], despite leveraging exact wall constraints, failed to match LoMAP's effectiveness. We speculate that the nonconvex nature of the constraints makes gradient updates inadequate for reliably projecting trajectories into collision-free regions. Additionally, although stochastic sampling [1] and inpainting optimization [3] improved over the standard Diffuser, they still exhibited higher artifact ratios compared to LoMAP.

Below, we report the normalized return for Maze2D-Large and Multi2D-Large.
LoMAP significantly outperforms [1], [2], and [3] in terms of normalized average returns, further validating the effectiveness of our approach.

| | **LoMAP (ours)** | **Diffuser** | **[1]** | **[2]** | **[3]** |
| --- | --- | --- | --- | --- | --- |
| Maze2D Large | 151.9 ± 2.7 | 123.0 ± 6.4 | 135.1 ± 4.0 | 129.0 ± 5.3 | 137.9 ± 2.4 |
| Multi2D Large | 154.7 ± 0.3 | 132.1 ± 5.8 | 143.8 ± 4.7 | 141.3 ± 4.3 | 145.6 ± 3.1 |

* **Appropriate metric for feasibility**

We acknowledge the limitation that artifact ratios can be directly measured only in environments like the maze domain, where explicit constraints are easily defined. Therefore, we further evaluated feasibility in locomotion tasks using the dynamic mean squared error (dynamic MSE), measuring how closely generated trajectories adhere to the true environment dynamics:
$$\text{dynamic MSE} = \lVert f^*(s, a) - s' \rVert^2,$$
where $f^*$ denotes the true dynamics model. We generated 100,000 trajectory samples for each environment and report the average dynamic MSE.

| Environment | **w/o LoMAP** | **w/ LoMAP** |
| --- | --- | --- |
| halfcheetah-med-expert | 0.363 | **0.295** |
| hopper-med-expert | 0.027 | **0.020** |
| walker2d-med-expert | 0.391 | **0.293** |
| halfcheetah-med | 0.352 | **0.285** |
| hopper-med | 0.024 | **0.021** |
| walker2d-med | 0.395 | **0.293** |
| halfcheetah-med-replay | 0.710 | **0.555** |
| hopper-med-replay | 0.049 | **0.045** |
| walker2d-med-replay | 0.829 | **0.506** |

Consistently, LoMAP significantly reduced the dynamic MSE, clearly demonstrating its effectiveness in generating trajectories that better adhere to the environment dynamics. We again appreciate your time and dedication to providing us with your valuable feedback.

References:

[1] Shi et al., Inference-Time Policy Steering through Human Interactions. In ICRA, 2025.
[2] Lee et al., Learning Diverse Robot Striking Motions with Diffusion Models and Kinematically Constrained Gradient Guidance. arXiv, 2024.
[3] Hao et al., DISCO: Language-Guided Manipulation with Diffusion Policies and Constrained Inpainting. arXiv, 2024.
[4] Shaoul et al., Multi-Robot Motion Planning with Diffusion Models. In ICLR, 2025.
[5] Park et al., HIQL: Offline Goal-Conditioned RL with Latent States as Actions. ICML, 2023.
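The dynamic-MSE feasibility metric described in the rebuttal above is simple to compute once a ground-truth transition function is available. The sketch below is illustrative only: the flat array layout and the toy linear dynamics `s' = s + a` are our assumptions, not the authors' implementation.

```python
import numpy as np

def dynamic_mse(states, actions, true_dynamics):
    """Mean ||f*(s_t, a_t) - s_{t+1}||^2 over one trajectory.

    states:  (H+1, d_s) array of visited states
    actions: (H, d_a) array of executed actions
    true_dynamics: ground-truth transition function f*
    (Minimal sketch; shapes and the toy dynamics below are assumptions.)
    """
    preds = np.array([true_dynamics(s, a)
                      for s, a in zip(states[:-1], actions)])
    # squared transition error at each step, averaged over the horizon
    return float(np.sum((preds - states[1:]) ** 2, axis=-1).mean())

# Toy check with hypothetical linear dynamics s' = s + a:
f_star = lambda s, a: s + a
H, d = 5, 3
actions = np.ones((H, d))
states = np.vstack([np.zeros((1, d)),
                    np.cumsum(actions, axis=0)])   # exact rollout
print(dynamic_mse(states, actions, f_star))        # 0.0 for a feasible trajectory
```

A trajectory that exactly follows the dynamics scores zero; any deviation from the true transitions increases the metric, which is why lower values indicate more feasible plans.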
Summary: This paper addresses a limitation in diffusion-based trajectory planning for RL tasks. Diffusion models in previous works often produce infeasible trajectories due to "manifold deviation" during the sampling process, so the authors propose a novel method, LoMAP, which is a training-free framework. It projects diffusion model samples onto locally approximated low-rank manifolds derived from offline datasets to prevent infeasible trajectory generation caused by guidance errors during the sampling process. The experiments on Gym achieve good results compared to the baselines.

Claims And Evidence: The authors make a clear point on the problem of the guidance gap in diffusion models and provide theoretical proof to understand the issue. However, I'm a bit lost on why KNN and PCA can solve this problem, and I feel Section 3.2 is a bit isolated from the discussion of manifold deviation. I hope the authors can explain more about this part and expand this section, since it is the key part of this paper.

Methods And Evaluation Criteria: The method and analysis are clear and simple, which is a plus. The whole framework is lightweight, but I do not fully understand why LoMAP can help with the issue of diffusion sampling. The experiment part seems solid and the performance is good. I suggest adding more DT-related baseline methods. See the sections below for references.

Theoretical Claims: The theoretical analysis in Section 3.1 looks good to me. And the introduction of Diffuser and related literature is clear and concise.

Experimental Designs Or Analyses: The experimental designs follow the most commonly used environments and baselines. This whole paper talks about manifold projection, and I'm wondering if the authors can provide some evidence that this method actually helps with more accurate manifold approximation.

Supplementary Material: The code looks solid to me.
Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: Although this paper focuses on diffusion models, I suggest the authors include more related works based on Decision Transformers (DT) as baselines or related work, especially those with classifier or classifier-free guidance in planning. Here are some papers:
[1] Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference
[2] Q-learning Decision Transformer: Leveraging Dynamic Programming for Conditional Sequence Modelling in Offline RL

Other Strengths And Weaknesses: I like the narrative of the paper. The weaknesses are discussed above.

Other Comments Or Suggestions: N/A

Questions For Authors: See the above sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for the detailed review and constructive feedback on this work. We especially appreciate the insightful questions regarding accurate manifold approximation. Please find our detailed answers below.

- **"I do not fully understand why LoMAP can help with the issue of diffusion sampling."**

We thank the reviewer for this important question. To clarify, the core intuition behind LoMAP is grounded in the observation that feasible trajectories from offline datasets lie on an intrinsically low-dimensional manifold embedded within a high-dimensional trajectory space. Diffusion-based sampling methods can deviate from this manifold due to inaccuracies in guidance (as analyzed in Section 3.1), leading to infeasible trajectories. LoMAP addresses this problem by iteratively projecting guided diffusion samples back onto an approximated local manifold derived directly from offline trajectories.

Specifically, at each diffusion step, we first compute a denoised estimate of the current trajectory sample using Tweedie's formula. We then retrieve the $k$-nearest neighbors from the offline dataset, which naturally represent trajectories that closely adhere to the true data manifold. By forward-diffusing these neighbor trajectories to the current diffusion timestep, we approximate the local structure of the intermediate manifold ($\mathcal{M}_{i-1}$). PCA then provides a convenient and effective way to approximate the local manifold around this neighborhood. Because PCA identifies the principal directions of variance, it naturally captures the major local geometric structures of the feasible manifold represented by nearby trajectories. By projecting the diffusion sample onto this PCA-derived subspace, we remove the off-manifold components, effectively correcting artifact trajectories.

- **"I'm wondering if the author can provide some evidence this method actually helps with more accurate manifold approximation."**

Thanks for the important question.
To provide further evidence, we computed the Realism Score [1], which quantifies how closely generated trajectories lie to the true manifold defined by the offline dataset. Specifically, we approximated the true manifold using k-NN hyperspheres constructed from 20,000 offline trajectories and generated 100,000 trajectory samples to measure the average realism score. As shown below, applying LoMAP consistently yields higher realism scores compared to diffusion sampling without LoMAP, clearly demonstrating that our method effectively produces samples closer to the true data manifold.

| Environment | **w/o LoMAP** | **w/ LoMAP** |
| --- | --- | --- |
| Maze2D U-Maze | 1.23 | **1.30** |
| Maze2D Medium | 1.40 | **1.56** |
| Maze2D Large | 1.36 | **1.47** |

- **“I suggest adding more DT-related baseline methods.”**

Thank you for suggesting additional DT-related baselines. Following your advice, we now include comparisons with QDT [2] and LPT [3] (guidance-based methods), and WT [4] (a waypoint-based method). For MuJoCo locomotion tasks, we used the QDT implementation from the d3rlpy library and the official implementation provided by the authors for LPT, while WT results are taken directly from the original paper [4]. As shown in the table below, despite its simplicity, LoMAP consistently achieves the best average performance across these benchmarks. We will incorporate these additional comparisons into the final version of the paper.
| Environment | **QDT** | **LPT** | **WT** | **ours** |
| --- | --- | --- | --- | --- |
| halfcheetah-med-expert | 89.8 ± 0.7 | 90.8 ± 0.19 | **93.2 ± 0.5** | 91.1 ± 0.23 |
| hopper-med-expert | 109.4 ± 2.3 | **111.4 ± 0.31** | 110.9 ± 0.6 | 110.6 ± 0.29 |
| walker2d-med-expert | 108.8 ± 0.7 | 109.1 ± 0.04 | **109.6 ± 1.0** | 109.2 ± 0.05 |
| halfcheetah-med | 42.3 ± 0.4 | 43.5 ± 0.08 | 43.0 ± 0.2 | **45.4 ± 0.13** |
| hopper-med | 66.5 ± 6.3 | 63.8 ± 1.47 | 61.1 ± 1.4 | **93.7 ± 1.54** |
| walker2d-med | 67.1 ± 3.2 | **81.1 ± 0.33** | 74.8 ± 1.0 | 79.9 ± 1.21 |
| halfcheetah-med-replay | 35.6 ± 0.5 | **40.7 ± 0.12** | 39.7 ± 0.3 | 39.1 ± 0.99 |
| hopper-med-replay | 52.1 ± 20.3 | 89.9 ± 0.61 | 88.9 ± 2.4 | **97.6 ± 0.58** |
| walker2d-med-replay | 58.2 ± 5.1 | 75.7 ± 0.34 | 67.9 ± 3.4 | **78.7 ± 2.2** |
| **average** | 69.9 | 78.4 | 76.8 | **82.8** |

References

[1] Kynkäänniemi et al., Improved precision and recall metric for assessing generative models. In NeurIPS, 2019.
[2] Yamagata et al., Q-learning Decision Transformer: Leveraging Dynamic Programming for Conditional Sequence Modelling in Offline RL. In ICML, 2023.
[3] Kong et al., Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference. In NeurIPS, 2024.
[4] Badrinath et al., Waypoint Transformer: Reinforcement Learning via Supervised Learning with Intermediate Targets. In NeurIPS, 2023.
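The per-step projection the rebuttals above describe (denoised estimate → k-NN retrieval → forward-diffusing the neighbors → PCA projection) can be condensed into a short NumPy sketch. This is our reading of the procedure, not the authors' code: the flat trajectory layout, the single `alpha_bar_t` scalar standing in for the noise schedule, and the omission of the Tweedie step are simplifying assumptions.

```python
import numpy as np

def lomap_project(x_t, dataset, k=10, n_components=3,
                  alpha_bar_t=1.0, rng=None):
    """Project a diffusion sample onto a local PCA subspace spanned by
    its k nearest offline trajectories (sketch of the described idea).

    x_t: flattened trajectory sample, shape (D,)
    dataset: offline trajectories, shape (N, D)
    alpha_bar_t: forward-diffusion scale for the current timestep
    """
    rng = np.random.default_rng(rng)
    # 1. retrieve the k nearest neighbors in the offline dataset
    dists = np.linalg.norm(dataset - x_t, axis=1)
    neighbors = dataset[np.argsort(dists)[:k]]
    # 2. forward-diffuse the neighbors to the current timestep
    noise = rng.standard_normal(neighbors.shape)
    noisy = (np.sqrt(alpha_bar_t) * neighbors
             + np.sqrt(1.0 - alpha_bar_t) * noise)
    # 3. local PCA: mean + top principal directions of the neighborhood
    mean = noisy.mean(axis=0)
    _, _, vt = np.linalg.svd(noisy - mean, full_matrices=False)
    basis = vt[:n_components]                 # (n_components, D)
    # 4. project onto the local affine subspace, dropping
    #    off-manifold components of the sample
    return mean + (x_t - mean) @ basis.T @ basis

# Toy check: offline data lying in the (x, y)-plane of a 5-D space.
rng0 = np.random.default_rng(0)
plane = np.zeros((50, 5))
plane[:, :2] = rng0.standard_normal((50, 2))
x = np.array([0.5, -0.3, 2.0, 1.0, -1.0])    # has off-plane components
proj = lomap_project(x, plane, k=10, n_components=2, alpha_bar_t=1.0)
# proj keeps the in-plane part of x and zeroes the off-plane part
```

With `alpha_bar_t=1.0` (no added noise), the projection simply removes the components of the sample orthogonal to the top principal directions of its neighborhood, which is the mechanism the rebuttal credits for removing artifact trajectories.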
An Architecture Search Framework for Inference-Time Techniques
Accept (poster)
Summary: The paper introduces an inference-time "architecture search" framework (not to be confused with NAS) designed to optimize language model performance on specific benchmarks. The framework systematically selects and integrates multiple inference-time techniques (ensembling, fusion, ranking, critiquing, verification) using Bayesian optimization to find optimal architectures for a given task and compute budget. The framework supports publicly available and closed-source models, showing performance gains over SOTA LLMs and inference-time methods, particularly in instruction-following, reasoning, and coding tasks.

## Update after rebuttal

The authors have answered my concerns with every experiment I found lacking, and even more. Even after reading other reviews, I still believe this paper strongly merits acceptance, as I found most criticisms unconvincing, except for a single one of Uk8K’s, which the authors have responded to in our discussion. I think the paper has novelty and value. Even if some of the primitives from the paper existed prior to it, an optimized way to use them is very beneficial to practitioners utilizing agents.

Claims And Evidence: All claims are supported as far as I can tell.

Methods And Evaluation Criteria: Evaluation setup: The paper benchmarks Archon against closed-source and publicly available models and existing inference-time techniques (MoA, ADAS, AFlow). Performance is measured via accuracy and compute efficiency.

Theoretical Claims: The paper does not introduce new mathematical proofs or theoretical claims.

Experimental Designs Or Analyses:
- The experiments use a diverse set of benchmarks (MT-Bench, Arena-Hard-Auto, MixEval, etc.), which adequately cover LLM performance. Model selection: The comparison against state-of-the-art LLMs (GPT-4o, Claude 3.5 Sonnet, Llama 3.1 405B) ensures a fair baseline.
- The Bayesian optimization search space is well-justified, though further ablation studies could clarify interactions between components.

Supplementary Material: A.7.1 and A.7.2

Relation To Broader Scientific Literature: Inference-time architectures are an active area of research, and the paper builds on techniques similar to MoA, RouteLM, etc. Unlike papers that optimize a single model, Archon uses multiple inference-time techniques simultaneously.

Essential References Not Discussed: Not as far as I'm aware.

Other Strengths And Weaknesses:

Strengths:
- An automated inference-time architecture search framework integrating multiple techniques provides a way to utilize these various techniques in practice. It is a considerable strength.
- Well written.
- Outperforms other SOTA models and techniques.
- Being able to use both publicly available and closed-source models in the architecture is a plus.

Weaknesses:
- The biggest weakness of the method, in my opinion, is its requirement to get specific benchmarks to optimize the architecture for. While the setting is realistic for certain scenarios where we know exactly what type of data we'll encounter, a lot of scenarios require a less specialized approach. With that being said, I'm fine with the paper limiting its scope to the aforementioned subset of scenarios.
- It's unclear to me how the computation budget constraints are enforced during inference and optimized for during the search. From various places across the main body, including A.7.2, it seems the budget consists of model calls. But from various figures (such as Figure 1) it seems the budget is defined as the input/output token ratio (which seems more practical to me, but more challenging to measure). I'd love an explanation, but nonetheless, I think it should be clarified in the paper itself too.
- Minor naming: I'm not sure I'd call Archon an "architecture" as the term is highly overloaded with the model's architecture. Especially since it's not learned.
As a reader, I was expecting a NAS paper.
- Minor (not a weakness): there are several "construction rules" that could be changed for cases with higher budgets. E.g., hypothetically speaking, in resource-rich tasks, generators could appear again in later layers to generate later parts of the response after the best candidates were chosen from earlier layers (e.g., writing a book chapter by chapter). This comes with a trade-off of expanding the search space.

Other Comments Or Suggestions: I appreciate the comprehensive explanation of the Bayesian optimization (Snoek et al., 2012) algorithm in the appendix.

Questions For Authors: How the computation budget constraints work (see above).

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for taking the time to read our paper! We appreciate your feedback and comments. We’d like to address each of your concerns individually:

- *Weaknesses: The biggest weakness of the method, in my opinion, is its requirement to get specific benchmarks to optimize the architecture for. While the setting is realistic for certain scenarios where we know exactly what's the type of data we'll encounter, a lot of scenarios require a less specialized approach. With that being said, I'm fine with the paper limiting its scope to the aforementioned subset of scenarios.*
  - With regards to architecture optimization, we do currently use a dev set of at least 20 queries for optimizing the Archon architecture learned (Section 3.3).
  - However, the learned architecture can be quite generalizable. For example, by combining the development sets of the seven benchmarks explored, we were able to create generalized Archon architectures that preserved 90-95% of the performance of specialized Archon architectures (Table 1).
  - This suggests that the learned Archon architectures generalize beyond any individual set of queries and perform well on unseen domains.
  - To test this hypothesis, we ran an additional set of experiments and evaluated the generalized Archon architectures on three new benchmarks: GPQA, MMLU, and MMLU Pro. These benchmarks cover a variety of topics ranging from science to business to the humanities.
  - **Note:** We did not use any ground truth labels from these datasets for developing Archon architectures.
  - We find that our generalized Archon architectures are capable of preserving 91 to 95% of the performance gains of specialized Archon architectures trained exclusively for these benchmarks individually.
  - The generalized ADAS and AFlow architectures only achieve 66% and 73% of their specialized architecture performance, respectively.
- We highlight these added results in Table 2 of our revised paper: https://storage.googleapis.com/anonymous-files/archon.pdf.

- *It's unclear to me how the computation budget constraints are enforced during inference and optimized for during the search*:
  - We impose the constraints by excluding any architecture that would exceed the inference call, input token, or output token budgets from the search space.
  - Multiple restrictions can be added. For example, you can filter out architectures with more than 20 inference calls or more than 20,000 input tokens.
  - This prevents our Bayesian optimization algorithm from even considering these invalid architectures in our architecture search (Appendix A.7).
  - To address your concern, we added a new section, “Adding Search Restrictions,” to Section 3.3 to discuss this.

- We agree that the name Archon might elicit concepts surrounding model architectures. However, in our application, we use the term “archon” to highlight that we are using an *architecture of LMs*, rather than an architecture of neurons as in NAS.

- Also, thank you for highlighting potential avenues for improving the Archon construction rules! Given an infinite inference budget, future work can further leverage existing techniques, such as generation and fusion, while even developing new components to better translate additional inference compute into improved performance, such as expanded verification systems.

We welcome any further questions. Thank you again for your feedback and comments! In light of these additional experiments and clarifications to address your comments, we would really appreciate it if you would re-examine our paper and consider raising your score.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their answers and appreciate their work on my concerns. **I also read other reviewers' feedback and still believe this paper merits acceptance.**
Most criticisms were not convincing enough to warrant rejection, or were adequately addressed by the authors, in my opinion. The only exception was Uk8K’s following criticism:

> ”However, since different LLMs take different costs, it seems inappropriate to unify the budgets as input/output tokens. Subsequently, it is unclear how we can align the budgets of different counterparts for a fair comparison.”

I also read the authors’ response. After reading this discussion, I think the best way to measure costs in a unified way is to consider the actual $ cost per response. This allows (a) closed-source and open-source models to be directly compared, and (b) considers each user’s unique costs. For example, I might be able to run a specific model (e.g., Llama-70B) more cheaply due to having certain better hardware, or because I had a better deal with some compute provider.

> To test this hypothesis, we ran an additional set of experiments and evaluated the generalized Archon architectures on three new benchmarks: GPQA, MMLU, and MMLU Pro. These benchmarks cover a variety of topics ranging from science to business to the humanities.

I thank the authors for their additional experiment, and agree that it strengthens the claim that the method is able to generalize to other benchmarks. To make this point even more convincing, I would run additional experiments that ensure the intersection of topics in the “training” set and the “test” set is more limited than in this particular experiment.

If you fix these two aforementioned points, I believe your paper has more impact than initially believed. Trusting you will, I increase my score.

---

Reply to Comment 1.1.1:

Comment: **Thank you for taking the time and effort to read our response! We appreciate you raising your score and advocating for our paper’s acceptance.**

We agree that the average dollar cost per response is a relevant approach for comparing Archon to alternate inference-time frameworks and frontier LMs.
To address your feedback, we have added a new column to Table #1 and a set of graphs in Figure #9 comparing Archon architectures to ADAS and AFlow architectures by average dollar cost per query across five different budgets, included in our updated PDF: https://storage.googleapis.com/anonymous-files/archon.pdf. In an unrestricted budget setting (Table 1), we find that Archon architectures are 37.1% more cost efficient than ADAS and AFlow while achieving 15.1 percentage points better performance across instruction-following, reasoning, math, and coding tasks. In a restricted budget setting (Figure 9), we find that Archon architectures are 44.2% more cost efficient while achieving 13.4 percentage points better performance across instruction-following, reasoning, math, and coding tasks. For our inference compute providers, we use OpenAI, Anthropic, and TogetherAI. To further demonstrate the generalization of Archon architectures, we recompute our scores for MMLU and MMLU Pro after removing their math questions, since the Archon architectures were optimized on a different math dataset: the train set of MATH. To do this, we removed questions in the Math category for MMLU Pro and in the Elementary Mathematics, High School Mathematics, College Mathematics, Abstract Algebra, High School Statistics, and Formal Logic categories for MMLU. For GPQA, we do not remove any questions since its graduate-level questions in physics, chemistry, biology, and other natural sciences are not related to any datasets used to optimize our Archon architectures. Based on our updated results in Table 2, we find that our all-source general-purpose Archon architecture captures 91 to 94% of the task-specific Archon architectures’ performance on these benchmarks, suggesting that our architectures are more broadly applicable to out-of-domain tasks. The generalized ADAS and AFlow architectures only achieve 66 to 74% of their specialized architecture performance, respectively.
**Note:** We did not use any ground truth labels from these datasets for developing Archon architectures. We welcome any further questions. Thank you again for your feedback and comments!
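As a concrete illustration of the dollar-cost accounting discussed above, the average cost per query can be derived from per-call token counts and per-provider token prices. A minimal sketch; the price table, model names, and call counts below are illustrative placeholders, not the actual figures from the paper:

```python
# Illustrative per-1M-token prices (USD); placeholder values, not real provider rates.
PRICES = {
    "model-a": {"input": 3.00, "output": 15.00},
    "model-b": {"input": 0.90, "output": 0.90},
}

def query_cost(calls):
    """Sum the dollar cost over all inference calls of one architecture query.

    Each call is a tuple (model_name, input_tokens, output_tokens)."""
    total = 0.0
    for model, in_tok, out_tok in calls:
        p = PRICES[model]
        total += in_tok / 1e6 * p["input"] + out_tok / 1e6 * p["output"]
    return total

# One hypothetical layered query: several cheap generator calls plus a fuser call.
calls = [("model-b", 1200, 400)] * 5 + [("model-a", 3500, 600)]
print(round(query_cost(calls), 4))
```

Averaging this quantity over a benchmark's queries gives the cost-per-query numbers that make closed-source and open-source pipelines directly comparable.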
Summary: This work focuses on how to combine inference-time techniques of LLMs to achieve better performance. It first proposes a framework termed Archon, which is able to incorporate different inference-time techniques rather flexibly. Then, a search method based on Bayesian optimization is designed, which takes as input the target benchmark, the budget for inference, the set of available LLMs, and the available inference-time techniques, and outputs the layered architecture of different techniques defined in Archon. ### update after rebuttal I agree with Reviewer mgTm's opinion that "an optimized way to use them is very beneficial", and my concern about the budget alignment is addressed. My remaining concern is not from the technical perspective but lies in the paper organization. IMHO, given that the paper does not introduce new techniques, the motivations and insights that drive this research become rather important, since they may motivate followers and differentiate this manuscript from technical reports. Overall, I am on the boundary of acceptance and rejection, and I would defer to the AC's recommendation. Claims And Evidence: Partially. In the introduction, the authors claimed > we evaluate the utilities of a comprehensive set of existing and proposed inference-time techniques > we analyze the interactions between inference-time techniques and explore the benefits of adding new models and new techniques individually. However, I find that the major analyses/conclusions are not presented in Section 3. In my humble opinion, these are rather important, as they could bring new insights and guide readers to the proposed methods. Methods And Evaluation Criteria: Reasonable. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Overall reasonable. However, since different LLMs take different costs, it seems inappropriate to unify the budgets as input/output tokens.
Subsequently, it is unclear how we can align the budgets of different counterparts for a fair comparison. Supplementary Material: I took a look at the tables and figures in the appendix since they are frequently mentioned in the main body. Without them, the whole paper is hard to follow. However, I didn’t carefully read the detailed discussion in the appendix. Relation To Broader Scientific Literature: Inference-time techniques are important for empowering LLMs. Essential References Not Discussed: Sufficient. Other Strengths And Weaknesses: Overall, this work studies a timely and important topic, the framework is reasonably designed, and the experimental results are positive. However, my major concern is that the paper heavily relies on the appendix (e.g., Section 3.1 relies on Table 8, Section 3.2 relies on Table 9). This makes the paper hard to follow. Subsequently, since the analyses are placed in the appendix, the motivation for your architecture design is lacking. For example, how does such a design achieve optimality? And how does it address the challenges/limitations of previous works? Other Comments Or Suggestions: The tables and figures are not referred to in order. Questions For Authors: Please elaborate on the analyses, which are important for readers to understand the value and motivation of your work. If this paper is accepted, how can you reorganize the paper so that it is much more readable while fitting the page limit? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for taking the time to read our paper! We appreciate your feedback and comments. *However, I find that the major analyses/conclusions are not presented in Section 3.:* - We agree that understanding the utilities of inference-time techniques is central to our contributions. We've enhanced the explanation of ablation findings in our revised draft (highlighted green in Sections 3.1, 3.2): https://storage.googleapis.com/anonymous-files/archon.pdf - Our key findings (Table 5, Figures 4, 7, 8) show the following performance increases: - Generation: 10.5% across all tasks when increasing from 1 to 10 generators - Ranking: 5.7% for instruction-following, reasoning, and math tasks - Fusion: 13.2% across all tasks - Critiquing: 8.1% when added before fusion for instruction-following, reasoning, and math - Verification / Unit Testing: 5.4% improvement across all tasks - Combining components leads to compounded gains—ranker+critic+fusion yields 15.1% average improvement across all benchmarks and models, while 10+ generators with verification/unit testing boosts math/coding performance by 12.1% (Tables 5-9). - These results come from extensive ablation studies across 7 benchmarks and 3 model classes detailed in Appendix A.3 (Tables 18-22). *For efficiency, it seems inappropriate to use the token budgets:* - We used input/output tokens and inference calls for compute-matching Archon because: - True FLOPs of closed-source models used by ADAS, AFlow, and frontier LMs can only be estimated - FLOPs comparisons between different-sized models can be misleading - API providers price by tokens, correlating with their inference FLOP costs - Overall, Archon is 20.0% more inference-call efficient, using 15.1% less input tokens and 13.5% less output tokens than ADAS/AFlow while delivering 15.1% better performance (Table #1; Figure #5). 
- Importantly, Archon uses open-source LMs ≤72B parameters, while competitors use GPT-4o with estimated 200B active parameters: https://epoch.ai/gradient-updates/frontier-language-models-have-become-much-smaller - To further address your comment, we've added PFLOPs estimates in Table 1 using the estimated active parameters from EpochAI. In unrestricted settings, Archon uses 32.1% fewer FLOPs with models 53% smaller than GPT-4o. In restricted budget settings, Archon achieves equal performance with 38.9% less budget (Figure 6). *My major concern is that the paper heavily relies on the appendix. If accepted, how can you reorganize the paper for better readability while fitting page limits?:* - We appreciate your concerns about appendix reliance. To address your concern, we've: - Added more ablation highlights in the main paper to better explain Archon's design decisions (highlighted in green, Sections 3.2, 3.3) - Added table of contents for easier navigation of Appendix (A.1) - Fixed misordering of Appendix references to corresponding tables/figures - These changes make the paper more self-contained while still adhering to final conference page limits. *How does such a design achieve optimality? How does it address challenges/limitations of previous works?:* - The three intellectual contributions underlying Archon are: - Observing that existing inference techniques can be naturally combined, studying their interactions, and optimizing model selection to maximize technique efficacy - Providing an automatic approach for optimal inference technique combinations, yielding significant improvements over SOTA baselines over restricted and unrestricted compute budgets. - Archon architectures generalize to unseen benchmarks, outperforming alternative frameworks like ADAS and AFlow - Point (1) motivated Archon's design through ablation studies across 7 benchmarks and 5 model classes (Sections 3.1, 3.2; Appendix A.3). 
Existing frameworks can't effectively search the inference design space while balancing cost-performance tradeoffs—please note any comparable baselines we missed. - Point (2) had never been pursued by previous inference-time papers, allowing us to exceed SOTA LMs and emerging frameworks by +15% accuracy and +30% FLOP efficiency (Table 1; Figures 5-6). While ADAS and AFlow plateaued at 30 PFLOPS per query, Archon continues improving beyond 50 PFLOPs per query. - For Point (3), we evaluated generalized Archon architectures on three new out-of-domain benchmarks: GPQA, MMLU, and MMLU Pro. Without using any ground truth labels for architecture adaptation, our generalized architectures preserve 91-95% of the performance gains from specialized architectures trained exclusively for these benchmarks (Table 2). In contrast, generalized ADAS and AFlow architectures only achieve 66% and 73% of their specialized performance. Thank you again for your feedback and comments! In light of these additional experiments and clarifications to address your comments, we would really appreciate it if you would re-examine our paper and consider raising your score. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. My concerns are partially addressed. The anonymous revised draft still heavily relies on the appendix. The added texts to Section 3 simply summarize the results in the experiments, yet a self-contained paper should be readable and followable even if we only focus on the main body. Overall, my score is on the borderline. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time and effort to read our rebuttal! We are glad we were able to address your concerns about Archon’s compute-matched comparisons and novelty. 
*The added texts to Section 3 simply summarize the results in the experiments, yet a self-contained paper should be readable and followable even if we only focus on the main body.* To address your comment, we now moved all of the experimental findings on individual inference-time component utilities directly to the main body in Section 3. These findings highlight the following trends: - **Generator**: Added results showing 10.5% performance gain from increased sampling (1 to 10 models) and its particular effectiveness for coding tasks (56% boost in Pass@1 by scaling from 1 to 1000 generations) (Section 3.1, Figure 7, Table 1) - **Fuser**: Included detailed analysis showing 8.9% average improvement across all benchmarks, with evidence that it's most effective when receiving diverse inputs (Section 3.1, Figure 4, Figure 8) - **Ranker**: Demonstrated 10.8% improvement for instruction-following and reasoning tasks, with pairwise comparisons focusing on style adherence (Section 3.1, Table 5, Figure 8) - **Critic**: Incorporated results showing 11.5% improvement when placed before fusion, with greatest gains on instruction-following benchmarks (Section 3.1, Figure 4, Table 5) - **Verifier**: Added performance analysis showing 8.4% improvement across benchmarks, with strongest impact on reasoning tasks (Section 3.1, Table 5) - **Unit Test Generator / Evaluator**: Included concrete results showing its critical role in coding tasks (56% boost in Pass@1) (Section 3.1, Table 1, Section 4.2) With these added details, Section 3 should be more standalone in its experimental results and analysis while we include the expanded results in the Appendix for future interested readers. Thank you again for your feedback and comments! We hope that this addressed your concern about bringing key results back to the paper. Are there any additional comments that we have not addressed? Please let us know if so.
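The generate, critique, rank, and fuse stages summarized above can be sketched as plain control flow. Everything below is a toy illustration only: the four stage functions are hypothetical stand-ins for LM calls, not the Archon implementation:

```python
def generate(prompt, n):
    # Stand-in for sampling n candidate responses from generator LMs.
    return [f"candidate-{i}" for i in range(n)]

def critique(prompt, candidates):
    # Stand-in for a critic LM producing feedback per candidate.
    return [f"critique of {c}" for c in candidates]

def rank(prompt, candidates, critiques, k):
    # Stand-in for a ranker LM keeping the top-k candidates.
    return candidates[:k]

def fuse(prompt, candidates):
    # Stand-in for a fuser LM merging candidates into one response.
    return " + ".join(candidates)

def layered_pipeline(prompt):
    cands = generate(prompt, n=10)          # generators open the architecture
    crits = critique(prompt, cands)         # critic runs before ranking/fusion
    top = rank(prompt, cands, crits, k=5)   # ranker narrows the candidate pool
    return fuse(prompt, top)                # fuser produces the final answer

print(layered_pipeline("example query"))
```

A real architecture would stack several such critique-rank-fuse layers and end with verification or unit-test components, but the control flow stays this simple.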
Summary: The paper introduces a framework for optimizing inference-time techniques in large language models (LLMs). The contributions, as stated in the paper, are an algorithm that identifies optimal ways to combine inference-time techniques (such as ensembling, ranking, fusion, verification, and critique), as well as understanding the interactions between inference-time techniques and their utilities. They evaluate the method on multiple instruction-following, reasoning, and coding benchmarks, demonstrating improvements over state-of-the-art closed-source models. ARCHON relies on Bayesian optimization to search over a large space of possible architectures, selecting the best-performing configuration for a given task. The results indicate that combining multiple inference-time techniques in a structured manner provides better performance than single techniques alone. The authors release an open-source framework to facilitate further research in this area. Claims And Evidence: The claim that ARCHON produces architectures that outperform existing state-of-the-art models is well-documented. The authors report consistent improvements across MT-Bench, AlpacaEval, MixEval, MATH, and CodeContests, often surpassing top closed-source models. However, compute budgets are not always controlled for, making it difficult to isolate whether the improvements come from ARCHON’s methodology or simply from increased inference cost. The improvements shown in Table 1 are quite predictable and not surprising. Additionally, controlling only for token budget as in Table 1 and Figure 5 doesn't give a clear picture. I understand that, for several models tested, the FLOPs and the throughput are not available, but this causes a lot of issues when comparing the models from an inference budget perspective. Methods And Evaluation Criteria: The evaluation uses multiple diverse benchmarks, covering instruction-following, reasoning, and coding tasks, which strengthens the paper’s claims.
A significant concern, as I said, is the absence of compute-normalized comparisons. Many baseline models receive far fewer inference calls, and no information is available in terms of FLOPs and generation time (I understand that this is an intrinsic limitation of dealing with closed-source models). This does create confounding effects in the experimental results. Theoretical Claims: N/A Experimental Designs Or Analyses: - Supplementary Material: I reviewed the code. Relation To Broader Scientific Literature: The contributions are closely related to recent trends in scaling test-time compute, although they don't provide significant novelty. Essential References Not Discussed: - Other Strengths And Weaknesses: The open-source library can be a very useful contribution for the community and speed up research on test-time compute. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for taking the time to read our paper! We appreciate your feedback and comments. *However, compute budgets are not always controlled for... controlling only for token budget as in Table 1 and Figure 5 doesn't give a clear picture:* - We appreciate your concerns regarding compute matching experiments. For our comparisons, we originally utilized input/output tokens and inference calls because: - We can only speculate on the true FLOPs of the closed-source models used by baseline approaches (i.e. o1 and GPT-4o for ADAS and AFlow) - Comparing FLOPs between larger and smaller models can be misleading when evaluating efficiency (e.g. 1 PFLOPs with open-source 70B LMs vs. 1 PFLOPs with closed-source 200B LMs) - The API providers, such as OpenAI, Anthropic, Google, Together, etc., price their APIs by input and output tokens per query, correlating directly with their FLOP costs at inference. - With our measurements, Archon is 20.0% more inference call efficient, using 15.1% less input tokens and 13.5% less output tokens compared to alternative frameworks (ADAS and AFlow), while delivering 15.1% better performance across all benchmarks (Table #1; Figure #5). - Importantly, Archon achieves these gains using open-source LMs of 72B parameters or less, while ADAS and AFlow use GPT-4o with an estimated 200B active parameters per forward pass (https://epoch.ai/gradient-updates/frontier-language-models-have-become-much-smaller). - To further address your comment, we've added a column in Table 1 showing PFLOPs per query using estimated active parameters from Epoch AI: https://storage.googleapis.com/anonymous-files/archon.pdf. When benchmarked against ADAS and AFlow in an unrestricted budget setting, Archon architectures use 32.1% less FLOPs while using models that are, on average, 53% smaller than GPT-4o. 
- In a restricted budget setting, this gap increases as we scale the allowed budget, allowing Archon to achieve the same performance as ADAS and AFlow with 38.9% less budget across various settings (Figure 6). *The improvements shown in Table 1 are quite predictable and not surprising:* - We'd like to respectfully push back on this claim. Additional inference compute does not always translate to better task performance (Figure 5). Unlike ADAS and AFlow, which plateau at 30 PFLOPs per query using GPT-4o, Archon continues to improve beyond 50 PFLOPs across all task types (Figure 6): https://storage.googleapis.com/anonymous-files/archon.pdf - Even with unlimited budget for inference-time architectures (Table 1, Figures 5-6), Archon architectures outperform alternatives by +15% accuracy while using 32.1% less FLOPs, 20.0% less inference calls, 15.1% less input tokens, and 13.5% less output tokens. *The contributions are closely related to recent trends in scaling test time compute, although they don't provide significant novelty:* - The three intellectual contributions underlying Archon are: - Observing that existing inference techniques can be naturally combined, studying their interactions, and optimizing model selection to maximize technique efficacy - Providing an automatic approach for optimal inference technique combinations, yielding significant improvements over SOTA baselines over restricted and unrestricted compute budgets. - Archon architectures generalize to unseen benchmarks, outperforming alternative frameworks like ADAS and AFlow - Point (1) motivated Archon's design through ablation studies across 7 benchmarks and 5 model classes (Sections 3.1, 3.2; Appendix A.3). No existing frameworks effectively search this large inference architecture space while balancing cost-performance tradeoffs—please inform us if we've overlooked comparable baselines.
- Point (2) had never been pursued by previous inference-time papers, allowing us to exceed SOTA LMs and emerging frameworks by +15% accuracy and +30% FLOP efficiency (Table 1; Figures 5-6). While ADAS and AFlow plateaued at 30 PFLOPS per query, Archon continues improving beyond 50 PFLOPs per query. - For Point (3), we evaluated generalized Archon architectures on three new out-of-domain benchmarks: GPQA, MMLU, and MMLU Pro. Without using any ground truth labels for architecture adaptation, our generalized architectures preserve 91-95% of the performance gains from specialized architectures trained exclusively for these benchmarks (Table 2). In contrast, generalized ADAS and AFlow architectures only achieve 66% and 73% of their specialized performance. We welcome any further questions. Thank you again for your feedback and comments! In light of these additional experiments and clarifications to address your comments, we would really appreciate it if you would re-examine our paper and consider raising your score.
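The PFLOPs-per-query estimates referenced above can be approximated from estimated active parameter counts and processed tokens using the common ~2·N FLOPs-per-token rule of thumb for a dense forward pass. The parameter and token values below are illustrative placeholders, not the paper's measurements:

```python
def pflops_per_query(active_params, total_tokens):
    """Rough forward-pass cost: ~2 FLOPs per active parameter per token."""
    return 2 * active_params * total_tokens / 1e15

# Hypothetical comparison: a 72B-parameter open model vs. an
# estimated 200B-active-parameter closed model, each processing
# 20k tokens (input + output) over one architecture query.
open_cost = pflops_per_query(72e9, 20_000)
closed_cost = pflops_per_query(200e9, 20_000)
print(round(open_cost, 2), round(closed_cost, 2))
```

This is why FLOP comparisons between differently sized models can mislead: the same token budget costs nearly 3x more FLOPs on the larger model.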
Summary: This paper proposes a framework called ARCHON for optimizing combinations of inference-time techniques to improve the performance of large language models. Their approach combines multiple inference techniques such as ensembles, rankings, etc. and uses automatic architecture fusion search to find the optimal combination of different LLMs. ## update after rebuttal I appreciate the authors' detailed ablation results and clarifications in the revised version. They have presented more discussion regarding the placement of generation, critique, ranking, fusion, and verification modules, including their "Rules of Construction". This addresses some of my earlier concerns about whether shifting the position of modules can degrade performance. The additional experiments on GPQA, MMLU, and MMLU Pro also help illustrate the robustness of the framework. However, I still find the overall novelty to be somewhat incremental since ARCHON is largely an engineered combination of known inference techniques. Although the empirical gains are encouraging, the paper would benefit from clearer theoretical or conceptual insight into why specific module orderings are consistently beneficial and how these search strategies might extend to more complex tasks (e.g., advanced math or Olympiad-level problems). On balance, while the rebuttal clarifies many issues, I remain slightly unconvinced that this submission offers sufficient conceptual novelty for a strong accept. I maintain my recommendation as a weak reject, though I acknowledge the paper’s potential and encourage the authors to strengthen the theoretical framing and expand the demonstration of generality in future submissions. Claims And Evidence: - The claim of understanding the utilities of inference-time techniques is not well supported. There is a lack of evidence about the improvement percentage from ranking/fusion/verification/etc. at different positions.
Will shifting the positions of different modules cause severe performance degradation? There is also a lack of explanation of the searched framework: how the specific architecture differs from task to task is not well demonstrated. Methods And Evaluation Criteria: Yes, makes sense Theoretical Claims: N.A. in this paper. Experimental Designs Or Analyses: See concerns under `Claims And Evidence` regarding comprehension score and in-context evaluation. Supplementary Material: No Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: **weakness** Incremental novelty. This paper is mainly a combination of different strategies. I don't see strong motivation or insights backed by explanation or evidence. Other Comments Or Suggestions: N/A Questions For Authors: I'd like to see the framework performance on frontier reasoning datasets such as GPQA, AIME25 and OlympiadBench. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for taking the time to read our paper! We appreciate your feedback and comments. *The claim that Utilities of Inference-Time Techniques is not well studied:* - We agree that understanding the utilities of inference-time techniques is central to our contributions. We've enhanced the explanation of ablation findings in our revised draft (highlighted green in Sections 3.1, 3.2): https://storage.googleapis.com/anonymous-files/archon.pdf - Our key findings (Table 5, Figures 4, 7, 8) show the following performance increases: - Generation: 10.5% across all tasks when increasing from 1 to 10 generators - Ranking: 5.7% for instruction-following, reasoning, and math tasks - Fusion: 13.2% across all tasks - Critiquing: 8.1% when added before fusion for all tasks - Verification / Unit Testing: 5.4% improvement across all tasks - Combining components leads to compounded gains—ranker+critic+fusion yields 15.1% average improvement across all benchmarks and models, while 10+ generators with verification/unit testing boosts math/coding performance by 12.1% (Tables 5-9). - These results come from extensive ablation studies across 7 benchmarks and 3 model classes detailed in Appendix A.3 (Tables 18-22). *Will shifting position for different modules cause performance degradation?:* - Our findings in Appendix A.3 (Tables 5-9) show: - Generator components are most effective at architecture start - Critic components work best before ranker/fusion components - Critic, ranker, and fuser components can be stacked sequentially for improvement - Verifier and unit testing components are most effective at architecture end - These findings informed our "Rules of Construction" (Section 3.2) to prevent performance-degrading placements. We've highlighted these findings in green in the revised paper and added pointers to the ablation study. *Lack of explanation about searched framework. 
How architecture differs across tasks is not well demonstrated:* - The all-source generalized Archon architecture is included in Figure 3 (Section 3.2). All generalized and specialized architectures from Table 1 are described in detail in Appendix A.9 (Figures 12-15) and our supplementary code. - In Section 4.3 ("Archon by Task"), we discuss task-specific architecture distinctions with these key trends: - Instruction-following tasks benefit from additional generator models and deeper fusion layers (17.8% improvement in win-rate from 1 to 10 generators). - For reasoning, task-specific architectures show meaningful improvements (10.1% improvement for specialized vs. generalized architectures). - For coding, unit testing and increased sample scaling significantly improve performance (44.3% boost in Pass@1). - Instruction-following and reasoning architectures use multiple critique-rank-fuse layers with diverse LMs, while math/coding architectures use repeated samples from a single LM before applying unit-testing or verification. *Incremental novelty. This paper is mainly combining different strategies without strong motivation or insights:* - The three intellectual contributions underlying Archon are: - Observing that existing inference techniques can be naturally combined, studying their interactions, and optimizing model selection to maximize technique efficacy - Providing an automatic approach for optimal inference technique combinations, yielding significant improvements over SOTA baselines over restricted and unrestricted compute budgets. - Archon architectures generalize to unseen benchmarks, outperforming alternative frameworks like ADAS and AFlow - Point (1) motivated Archon's design through ablation studies across 7 benchmarks and 5 model classes (Sections 3.1, 3.2; Appendix A.3).
No existing frameworks effectively search this large inference architecture space while balancing cost-performance tradeoffs—please inform us if we've overlooked comparable baselines. - Point (2) had never been pursued by previous inference-time papers, allowing us to exceed SOTA LMs and emerging frameworks by +15% accuracy and +30% FLOP efficiency (Table 1; Figures 5-6). While ADAS and AFlow plateaued at 30 PFLOPS per query, Archon continues improving beyond 50 PFLOPs per query. - For Point (3), we evaluated generalized Archon architectures on three new out-of-domain benchmarks: GPQA, MMLU, and MMLU Pro. Without using any ground truth labels for architecture adaptation, our generalized architectures preserve 91-95% of the performance gains from specialized architectures trained exclusively for these benchmarks (Table 2). In contrast, generalized ADAS and AFlow architectures only achieve 66% and 73% of their specialized performance. We welcome any further questions! Thank you again for your feedback and comments! In light of the additional experiments and clarifications to address your comments, we would really appreciate it if you would re-examine our paper and consider raising your score.
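The placement findings discussed above (generators at the start, critics before rankers and fusers, verifier and unit-test components at the end) amount to ordering constraints that a search procedure can enforce mechanically. A simplified sketch, with the rule set paraphrased from the discussion rather than taken verbatim from the paper:

```python
TERMINAL = {"verifier", "unit_tester"}

def violates_construction_rules(layers):
    """Return True if a layer sequence breaks the ordering constraints.

    `layers` is a list of component names, e.g. ["generator", "critic", ...].
    """
    if not layers or layers[0] != "generator":
        return True  # an architecture must open with generators
    for prev, curr in zip(layers, layers[1:]):
        if curr == "generator":
            return True  # generators appear only at the start
        if prev in TERMINAL:
            return True  # verifier / unit-test components must come last
    return False

valid = ["generator", "critic", "ranker", "fuser", "verifier"]
invalid = ["generator", "verifier", "fuser"]
print(violates_construction_rules(valid), violates_construction_rules(invalid))
```

Pruning candidates with such checks shrinks the space the Bayesian optimizer has to explore to sequences that are at least structurally plausible.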
Causality-Aware Contrastive Learning for Robust Multivariate Time-Series Anomaly Detection
Accept (poster)
Summary: This paper proposes a causality-aware contrastive learning method for time-series anomaly detection. Experiments on five real-world and two synthetic datasets validate that the integration of causal relationships improves the anomaly detection capabilities. Claims And Evidence: Most of the claims in the paper are clear, except for the following concerns. Time series anomalies can arise from various sources, such as evolving underlying processes, external events, or sensor transmission errors. In many cases, while the time series may exhibit abnormal values, they can still adhere to the underlying causal relationships. For instance, external events might disrupt overall sensor readings and push them into abnormal ranges; however, the fundamental causal processes within the system may remain unchanged. I recommend that the authors discuss the specific types of time series anomalies their method is designed to address and identify scenarios in which their approach might fail. The proposed method appears to be highly complex and likely computationally intensive. Considering that changes in causal relationships may not be the sole factor contributing to time series anomalies, I recommend that the authors explore more practical and efficient methods for time series anomaly detection. Methods And Evaluation Criteria: I recommend that the authors also report the number of sensors included in the discovered causal model/graph in the experiment. If not all sensors are included in the causal graph, how can abnormal behaviors from sensors that fall outside the model's coverage be detected? Theoretical Claims: I didn't see any issues. Experimental Designs Or Analyses: See the sensor-coverage concern under Methods And Evaluation Criteria above. Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: This work provides findings to the time series anomaly detection community. Essential References Not Discussed: Some important related work are missing, such as https://arxiv.org/pdf/2206.15033 https://ieeexplore.ieee.org/document/6413806 Other Strengths And Weaknesses: There are also studies that utilize the correlation of time series data for anomaly detection. For example, https://arxiv.org/abs/2307.08390 https://onlinelibrary.wiley.com/doi/10.1155/2022/4756480 It is highly recommended that the authors discuss the advantages and limitations of both correlation-based and causality-based approaches for anomaly detection to provide a more comprehensive perspective. Other Comments Or Suggestions: No Questions For Authors: Time series anomalies can arise from various sources, such as evolving underlying processes, external events, or sensor transmission errors. In many cases, while the time series may exhibit abnormal values, they can still adhere to the underlying causal relationships. For instance, external events might disrupt overall sensor readings and push them into abnormal ranges; however, the fundamental causal processes within the system may remain unchanged. I recommend that the authors discuss the specific types of time series anomalies their method is designed to address and identify scenarios in which their approach might fail. The proposed method appears to be highly complex and likely computationally intensive. Considering that changes in causal relationships may not be the sole factor contributing to time series anomalies, I recommend that the author explore more practical and efficient methods for time series anomaly detection. I recommend that the authors also report the number of sensors included in the discovered causal model/graph in the experiment. If not all sensors are included into the causal graph, how can abnormal behaviors from sensors that fall outside the model's coverage be detected? 
Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank Reviewer J2ZH for the insightful and constructive comments. This response presents additional experiments and discussion to address the reviewer’s concerns, all of which will surely be integrated into the main paper.

>**Time series anomalies can arise from various sources… discuss specific types of time series anomalies their method is designed to address and identify scenarios in which their approach might fail.**

As the reviewer notes, certain events may produce anomalous values while keeping the causal structure intact. CAROTS is designed to detect anomalies that violate inter-variable causal dependencies, including scenarios where:
- A variable behaves inconsistently with its known causal parents.
- Structural dynamics deviate from the learned causal graph.
- Temporal or multi-variable patterns break causal relationships.

We humbly acknowledge that CAROTS may be less sensitive to anomalies that lie within the causal structure; however, our method consistently demonstrates strong anomaly detection performance across a wide range of datasets, which suggests that CAROTS remains effective in practice.

>**Computational cost of CAROTS**

Even with causal modules, CAROTS is efficient due to a lightweight one-layer LSTM. We compare the total training time of the studied methods on SWaT:

Train Time (min):

|Method|Time|
|-|-|
|CAROTS|25|
|AnomalyTransformer|12|
|TimesNet|56|
|USAD / SimCLR / SSD|6|
|CSI / CTAD|10|

The training time of CAROTS is comparable to baselines and cheaper than heavier models like TimesNet, indicating that leveraging causal structure can reduce reliance on deeper architectures. We also profile the per-iteration time on MSL_P-14:

Per-Iter Time (seconds):

|Component|Time|Ratio|
|-|-|-|
|CPA|0.0382|78%|
|CDA|0.0017|3%|
|Loss|0.0017|3%|
|Others|0.0072|15%|
|Total|0.0488|100%|

Each iteration takes less than 0.05 sec, and even the heaviest component (CPA) is lightweight.
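For concreteness, a per-component timing breakdown like the table above can be collected with a small profiling helper. This is a minimal sketch: the lambda components below are illustrative stand-ins, not the actual CPA/CDA/loss implementations.

```python
import time

def profile_components(components, n_iters=100):
    """Accumulate wall-clock time per named component over n_iters iterations."""
    totals = {name: 0.0 for name, _ in components}
    for _ in range(n_iters):
        for name, fn in components:
            t0 = time.perf_counter()
            fn()
            totals[name] += time.perf_counter() - t0
    grand_total = sum(totals.values())
    # Mean per-iteration time and share of the total, as reported in the table above.
    return {name: (t / n_iters, t / grand_total) for name, t in totals.items()}

# Illustrative stand-ins for the components of one training iteration.
report = profile_components([
    ("CPA", lambda: sum(i * i for i in range(2000))),
    ("CDA", lambda: sum(range(200))),
    ("Loss", lambda: sum(range(200))),
])
```

The per-component shares always sum to one, so the "Ratio" column of such a table can be sanity-checked the same way.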
>**# of sensors included in the discovered causal model/graph**

Discovered causal graphs include all sensors, even though they may contain disjoint subgraphs and isolated nodes. Some sensors appear as isolated nodes, which is expected in complex systems with partially independent components. CPA and CDA ensure that every sensor is incorporated in training and synthetic outlier generation. CDA randomly selects a node and performs DFS over the causal graph to extract a local subgraph. If the selected node is isolated, it forms a single-node subgraph, and bias is directly injected into its values. This allows us to synthesize abnormal behavior for all sensors, regardless of connectivity. As node selection is random and repeated, all variables have equal chances to be selected and perturbed during training. CPA similarly applies to all sensors, including isolated nodes. CPA perturbs parents of a selected variable and forecasts the target with the causal forecaster. For isolated nodes, we consider temporal self-dependence as the causal link. The node is perturbed through its past values, and forecasting is performed accordingly, enabling CPA to handle the absence of graph edges. We also report # of non-trivial subgraphs (excluding isolated nodes) and isolated nodes in the datasets:

||#Vars|#Subgraphs|#Isolated Nodes|
|-|-|-|-|
|SWaT|51|1|13|
|WADI|123|1|32|
|PSM|25|1|2|
|SMD_2-1|38|1|14|
|SMD_3-7|38|2|12|
|MSL_P-14|55|1|53|
|MSL_P-15|55|1|47|
|Lorenz96|128|1|0|
|VAR|128|3|121|

CAROTS’ strong performance even on datasets with a large # of isolated nodes and fragmented subgraphs indicates that it can handle complex graph structures and partially disconnected systems.

>**More related works**

- Related works will be updated to include discussion of [1, 2], relevant early works connecting causality and anomaly detection.
- Correlation vs. Causality: Correlation-based methods, such as [3, 4], model dependencies with co-activation patterns across variables.
While these methods are effective at capturing immediate statistical associations, they may fail to distinguish true dependencies from spurious correlations, particularly under distribution shifts or external interventions. In contrast, CAROTS is grounded in the causal perspective. It explicitly models directional relationships by learning a causal graph from training data using a causal discovery method. This enables our method to simulate both causality-preserving and causality-breaking augmentations, which serve as the foundation for contrastive learning. Anomalies are then interpreted as deviations from learned causal relationships, making CAROTS more robust to superficial variations that do not reflect structural disruptions.

[1] Yang et al., A Causal Approach to Detecting Multivariate Time-series Anomalies and Root Causes, 2022.
[2] Qiu et al., Granger Causality for Time-Series Anomaly Detection, 2012.
[3] Zheng et al., Correlation-aware Spatial-Temporal Graph Learning, 2023.
[4] Wang et al., Correlation-Based Anomaly Detection Method for Multi-sensor System, 2022.
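The CDA step described in this rebuttal (pick a node at random, run DFS over the causal graph to extract a local subgraph, and fall back to a single-node subgraph for isolated nodes) can be sketched as follows. The adjacency-dict representation and the depth limit are our assumptions for illustration, not the paper's exact implementation.

```python
import random

def extract_local_subgraph(adj, start, depth=2):
    """DFS from `start` over a directed causal graph given as {node: [children]},
    collecting a local subgraph up to `depth` hops. An isolated node (no edges)
    yields a single-node subgraph, into which bias would be injected directly."""
    visited, stack = set(), [(start, 0)]
    while stack:
        node, d = stack.pop()
        if node in visited or d > depth:
            continue
        visited.add(node)
        for child in adj.get(node, []):
            stack.append((child, d + 1))
    return visited

adj = {0: [1, 2], 1: [3], 2: [], 3: [], 4: []}  # node 4 is isolated
start = random.choice(list(adj))                # CDA picks the root uniformly at random
subgraph = extract_local_subgraph(adj, start)
```

Because the root is drawn uniformly and the procedure is repeated across training, every variable, isolated or not, is eventually perturbed, matching the rebuttal's claim.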
Summary: This paper proposes a new anomaly detection method called CAROTS, tailored for multivariate time-series data. Its central idea is to leverage stable causal relationships among variables discovered through a forecasting-based causal model. These discovered relationships guide two specialized data-augmentation “augmenters”: one generates variations that preserve the typical causal structures, while the other simulates anomalies by breaking them. A contrastive-learning framework is then trained to distinguish these “causality-preserving” and “causality-disturbing” samples, thereby learning a representation space where typical (standard) patterns and anomalous (disturbed) patterns are well separated. CAROTS combines two scores to detect anomalies at test time. First, it measures a sample’s distance from a centroid of “causality-preserving” training samples in the learned embedding space. Second, it computes a forecasting error with the original causal discovery model, because true anomalies are harder to predict under the learned causal relationships. Experiments on both real-world and synthetic datasets show consistently strong performance for CAROTS, emphasizing that explicitly modeling and preserving causal relationships can enhance the robustness and accuracy of anomaly detection.

## update after rebuttal

The authors covered most of my concerns in the rebuttal, so I kept the original positive rating. Claims And Evidence: Overall, the paper’s central claims—including that (i) incorporating causal discovery leads to more robust anomaly detection, (ii) contrastive learning can separate samples based on whether their causal structures are preserved or disrupted, and (iii) the combined distance-and-forecasting anomaly score outperforms standard baselines—are supported by results on multiple datasets (including both real-world and synthetic scenarios).
Notably, the authors demonstrate that existing approaches struggle more than CAROTS on anomalies that stem from “broken” causal relationships, thereby lending convincing evidence to the core claim that integrating a causal model helps. That said, a few points merit caution: the paper relies heavily on the assumption of correct (or near-correct) causal discovery in standard data. While the authors test different hyperparameters and show consistent performance, it would help to see explicit empirical checks on how inaccuracies in the learned causal graph impact final performance. Also, while they provide ablations (removing specific components and comparing results), the paper could explore real-world complications like partial anomalies in the “normal” training set in more detail. Still, these caveats do not significantly detract from the main results that the authors present. Methods And Evaluation Criteria: The paper’s use of public, well-known benchmarks (SWaT, WADI, PSM, SMD, MSL) and synthetic datasets (VAR and Lorenz96) aligns well with the anomaly detection context, as each dataset is commonly used to test multivariate time-series methods. Likewise, the evaluation metrics (AUROC, AUPRC, and F1) are standard and appropriate for anomaly detection, capturing different aspects of precision, recall, and overall ranking performance. By showing strong results across these diverse benchmarks, the authors demonstrate that the approach is suitable for real-world scenarios and that the chosen evaluation pipeline genuinely assesses detection performance. Theoretical Claims: The paper does not formally present (or prove) any strong theoretical claims that typically require rigorous mathematical proofs (e.g., convergence guarantees or asymptotic optimality). Instead, the authors rely on conceptual justifications—particularly around the plausibility that “causality-preserving” vs. 
“causality-disturbing” samples guide a practical contrastive learning objective—and empirical evidence across multiple benchmark datasets. Hence, there were no formal proofs to check in the text, and all theoretical underpinnings (e.g., why preserving causal relationships should help anomaly detection) are primarily described at a high level rather than as fully proved theorems. Experimental Designs Or Analyses: The experimental design aligns with standard anomaly-detection practices:
- **Data Splits**: The paper uses a regular portion of the training set for validation and then evaluates a test set containing anomalies.
- **Metrics**: AUROC, AUPRC, and F1 are all standard and appropriate.
- **Comparisons**: The authors test against reconstruction-based and contrastive-based methods, offering thorough performance comparisons.
- **Ablations**: They turn off individual components (causality-preserving or disturbing augmentations, similarity filtering) to highlight each element’s contribution.

No significant flaws stand out. While it assumes predominantly standard training data, this is common in unsupervised detection research. Overall, the experiment design and analyses appear valid and consistent with standard practice. Supplementary Material: The authors did not provide supplementary materials, so this item is not applicable. Relation To Broader Scientific Literature: They extend two critical lines of multivariate time-series anomaly detection research:
1. **Contrastive Learning Approaches**: Similar to techniques (e.g., CSI, CTAD) that generate synthetic anomalies, they introduce “causality-preserving” and “causality-disturbing” samples, making their contrastive training explicitly reflect causal structures.
2. **Causal Discovery**: Previous works (e.g., CUTS+, causal formers) show that learning inter-variable causal graphs can improve forecasting.
The authors directly embed such causal insights into anomaly detection, bridging causal discovery and self-supervised representation learning. Essential References Not Discussed: The paper cites primary contrastive anomaly-detection methods (e.g., CSI, CTAD) and noteworthy causal-discovery tools (e.g., CUTS+). However, it could reference earlier neural causal discovery work—like Neural Granger Causality [1]—as an additional example that merges neural networks with causal inference to further illustrate the lineage of ideas leading to the proposed CAROTS framework.

[1] Tank, Alex, et al. "Neural granger causality." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.8 (2021): 4267-4279.

Other Strengths And Weaknesses:
**Other Strengths**
- **Originality**: Although each element (causal discovery, contrastive learning) has been studied, combining them into a coherent anomaly-detection pipeline is creative.
- **Application Potential**: The method’s strong performance on real industrial datasets (e.g., SWaT, WADI) hints at practical significance.

**Other Weaknesses**
- **Clarity in Hyperparameters**: The paper could further clarify how thresholds, temperature, or other tunings might generalize across domains.
- **Interpretability**: While causal relationships are central, it would be valuable to see deeper interpretability analyses linking detection results to specific causal graphs or disruptions.

Other Comments Or Suggestions:
- **Writing Style**: The manuscript reads well overall, but some sections would benefit from tighter phrasing (e.g., focusing on the essential motivations and the underlying intuition of causal-based data augmentation).
- **Minor Edits**: The text occasionally uses broad statements like “overlook inter-variable causal relationships” without citations. Cite or clarify specific methods as examples.
- **Discussion of Negative Results**: Any cases where CAROTS fails or underperforms (e.g., if causal graphs are partially wrong) would further enrich the discussion.

Questions For Authors:
1. **Handling Imperfect Causal Graphs**: How sensitive is CAROTS if the learned causal structure is partially incorrect or the training set contains mild anomalies? If it severely degrades performance, clarifying mitigation strategies (e.g., robust training or iterative graph refinement) would increase my confidence in real-world applicability.
2. **Threshold Tuning**: The paper employs a dynamically adjusted similarity filter (0.5→0.9). Could you elaborate on how this threshold is chosen or adapted for different datasets? If there’s a systematic tuning procedure, it would clarify reproducibility and broader applicability.
3. **Computational Overhead**: Generating causality-preserving/-disturbing augmentations repeatedly might be costly. Is there a significant runtime impact compared to simpler contrastive approaches, and have you explored approximate techniques to reduce cost?
4. **Reusable code**: Could you consider making reproducible code publicly available to enhance the persuasiveness of the proposed method?

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for Reviewer Xmpv’s detailed yet positive comments. Overall, the reviewer believes that “the paper’s claims are supported by results on multiple datasets,” which “makes it suitable for real-world scenarios.” This response includes additional experiments and discussion to consolidate our contributions. We would love to answer further questions during the discussion period. Lastly, the “Interpretability analysis” will be updated later in the discussion period.

>**How inaccuracies in the learned causal graph impact CAROTS**

To assess the robustness of CAROTS to inaccuracies in the causal graph, we study the performance of CAROTS on the SWaT dataset as diverse perturbations are introduced to the learned causal structure:
- random init: causal edges randomly initialized
- zero init: no causal edges (fully disconnected graph)
- flipped: cause-effect directions reversed
- noisy: Gaussian noise added to the causal matrix
- original: learned causal structure without perturbation

We report AUROC (mean ± std over 3 seeds) below:

||CAROTS|w/o CPA|w/o A_CD|w/o CPA,A_CD|
|--|--|--|--|--|
|random|0.841±0.004|0.833±0.002|0.844±0.004|0.833±0.004|
|zero|0.848±0.005|0.826±0.016|0.835±0.030|0.616±0.121|
|flip|0.831±0.009|0.836±0.004|0.836±0.010|0.836±0.004|
|noisy|0.839±0.004|0.837±0.004|0.842±0.005|0.840±0.005|
|orig|0.852±0.008|0.850±0.005|0.861±0.005|0.849±0.004|

These results show that CAROTS is robust under moderate graph perturbations. Notably, combining CPA and A_CD helps preserve performance even when the causal structure is partially inaccurate.

>**Results when partial anomalies are in the training set**

To evaluate the robustness of CAROTS in more realistic settings, we conduct additional experiments where synthetic anomalies are injected into the training set at varying ratios (0% to 20%).
Synthetic anomalies are generated by injecting point-level global anomalies (same as the outlier generation synthesis process in the main paper). The results below (AUROC, mean ± std over 3 seeds on SWaT) show that CAROTS maintains strong performance even when the training data is partially contaminated.

|Ratio|AUROC|
|--|--|
|0%|0.861±0.003|
|0.1%|0.856±0.006|
|1%|0.845±0.002|
|3%|0.852±0.003|
|5%|0.848±0.006|
|10%|0.856±0.001|
|20%|0.847±0.001|

>**Discussion of neural causal discovery works**

While our method builds on recent causal discovery tools like CUTS+, we acknowledge that referencing earlier approaches like Neural Granger Causality [1] would help contextualize the development of CAROTS. We will revise the related work section to include and cite this line of research.

[1] Tank et al., Neural Granger Causality, 2018

>**Hyperparameter settings**

The dynamic similarity threshold (from 0.5 to 0.9) is not dataset-specific but follows a fixed schedule. This threshold schedule is kept constant across all datasets and does not undergo dataset-specific tuning. Likewise, other hyperparameters such as temperature are selected based on standard practices from prior contrastive learning literature [2] and kept fixed throughout all experiments.

[2] Kim et al., Contrastive Time-series Anomaly Detection, 2024.

>**Interpretability analysis**

CAROTS is inherently interpretable, as it leverages explicitly learned causal graphs and performs anomaly detection by identifying violations of these relationships. We are currently conducting the following interpretability analyses:
- Forecasting-based attribution: identifying variables with high causal forecasting error and tracing them back to their parents in the graph to localize disrupted relationships.
- CDA attribution: comparing real anomalies with synthetic ones generated via CDA to identify which causal subgraphs were likely disturbed.
>**Other comments**

- Tighter phrasing and broad statements: We will surely improve the writing of our paper and clarify broader statements in the revised version.
- “How inaccuracies in the learned causal graph impact CAROTS” presents results of incomplete causal graphs (in which CAROTS may underperform).

>**Questions**

- Handling Imperfect Causal Graphs: included in “How inaccuracies in the learned causal graph impact CAROTS.”
- Similarity filter threshold: included in “Hyperparameter settings.”
- Computational Overhead: Due to the character limit, we would greatly appreciate it if the reviewer could refer to the “Computational cost of our method” section in our response to Reviewer J2ZH.
- Reusable Code: We fully agree that releasing code enhances reproducibility and transparency. We plan to make the implementation publicly available upon acceptance.

---
Rebuttal Comment 1.1: Comment: The author's answer, to some extent, resolved my doubts, so I kept the original positive rating.

---
Reply to Comment 1.1.1: Comment: Thank you again for your valuable suggestion. In our initial response, we noted that CAROTS is inherently interpretable due to its use of explicitly learned causal graphs, and that we were in the process of conducting further analyses to strengthen this claim. We are happy to report that we have completed the proposed interpretability experiments. Specifically, we performed a forecasting-based attribution analysis using the Lorenz96 synthetic dataset, where the ground truth anomalous variables are known by construction. Each synthetic anomaly involves injecting abnormal values into 10 randomly chosen variables among 128, allowing us to evaluate whether CAROTS can correctly identify the sources of anomaly. We use the forecasting error from CAROTS’s causality-conditioned forecaster as a proxy for variable-level anomaly attribution.
By computing per-variable errors and comparing them to the true perturbed variables using AUROC, we quantify the model’s ability to localize causal disruptions. The results are as follows:

|Anomaly Type|AUROC|
|-|-|
|Point Global|0.917|
|Point Contextual|0.874|
|Collective Trend|0.844|
|Collective Global|0.691|

These results demonstrate that CAROTS can meaningfully identify the anomalous variables, particularly for point-level anomalies that involve localized causal violations. While performance is slightly lower for collective anomalies, the attribution signal remains useful.
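The attribution metric used above (AUROC of per-variable forecasting errors against the ground-truth perturbed variables) can be computed with a simple rank-based sketch. The toy errors and labels below are illustrative, not the paper's numbers.

```python
def auroc(scores, labels):
    """Rank-based AUROC: the probability that a positive outranks a negative,
    with ties receiving half credit."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Per-variable forecasting errors serve as attribution scores; labels mark the
# variables that were truly perturbed when the synthetic anomaly was injected.
errors = [0.9, 0.1, 0.8, 0.2, 0.3]
perturbed = [1, 0, 1, 0, 0]
score = auroc(errors, perturbed)  # 1.0: perfect localization in this toy case
```

An AUROC of 0.5 corresponds to attribution no better than chance, which is why the collective-anomaly numbers above, while lower, still carry useful signal.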
Summary: This paper is the first to address the problem of Multi-variate Time-Series Anomaly Detection (MTSAD) by incorporating causality relationships. The authors propose novel data augmentation methods, CPA and CDA, which generate samples by leveraging causality learned from previous causality learning approaches. Furthermore, they suggest a novel loss term, the Similarity-filtered One-class Contrastive loss, which enables the model to capture semantic diversity. Finally, CAROTS calculates the anomaly score based not only on distance (A_CL) but also on causality preservation (A_CD). The results demonstrate that the proposed methods outperform existing approaches and exhibit robustness across diverse datasets. Claims And Evidence: It is intuitively convincing to leverage causality relationships to distinguish anomalies from normal operations. To concentrate on this, the authors propose new augmenters to enable the model to capture these relationships. However, it remains unclear how to ensure that causality relationships in normal multivariate time-series remain consistent over time. This is a critical and fundamental assumption of the proposed approach, but there is no theoretical or experimental support for it. I believe this is a major weakness of the paper and strongly recommend that the authors provide some evidence or discussion to address this issue. Methods And Evaluation Criteria: There are no issues regarding the evaluation criteria in this paper. As for the proposed method, it sounds convincing, but the authors need to provide additional evidence or justification to support their assertions. Theoretical Claims: There are no theoretical claims in this paper. Please check the comment in “Claims And Evidence”. Experimental Designs Or Analyses: Despite their extensive experiments, the paper lacks comparisons with state-of-the-art methods, such as CARLA [1]. For a fair evaluation, I strongly recommend considering more recent and relevant works.
Additionally, an ablation study on sigma is required. If it is too large, the samples generated by CPA may become anomalous rather than representing normal data. Lastly, the performance of the proposed method is highly dependent on the causal discovery method. CAROTS used CUTS+, but there is no ablation over other causal discovery methods. I am curious how the performance of CAROTS varies depending on the causal discovery method.

[1] Darban, Zahra Zamanzadeh, et al. "CARLA: Self-supervised contrastive representation learning for time series anomaly detection." Pattern Recognition 157 (2025): 110874.

Supplementary Material: I also reviewed the supplementary material. In particular, I checked the supplementary material to figure out the standard deviations in the main table. Relation To Broader Scientific Literature: The key contribution of the paper is leveraging causality relationships to discriminate between normal and anomalous data. Although additional justifications are needed, the proposed approach is convincing. Essential References Not Discussed: This paper cited the related works appropriately. Other Strengths And Weaknesses: The figures in the paper are well structured and help a lot in understanding the methods and process. Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: In the experiments on VAR, although CAROTS is competitive, other baselines achieve better performance. I believe this trend differs from the results on other datasets and is likely related to dataset characteristics. I request that the authors provide additional analysis on this point. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank Reviewer aXkV for the helpful comments, which we believe will enrich the depth of our work. We are delighted that the reviewer commends that the proposed method is intuitively convincing, outperforms existing approaches, and exhibits robustness across diverse datasets. We tried our best to answer all of the questions during the initial response period, and we are happy to engage in further discussion during the author-reviewer discussion period. Also, the “results of using other causal discovery methods” will be updated later in the discussion period.

>**Do causality relationships in normal multivariate time-series remain consistent over time?**

While we do not assume strict stationarity, previous works in both classical [1,2] and deep learning-based causal discovery [3] have observed that causal structures often remain stable over time in real-world time-series. To empirically assess whether this statement holds in our setting, we further analyze the evolution of causal structures in three benchmark datasets: SWaT, WADI, and PSM. For each dataset, we split the normal training data into four disjoint, time-ordered segments (Quarter 1 to 4), train a causal discovery model on each, and compute pairwise cosine similarities between the resulting graphs.

Causality Matrix Consistency across Time Segments (Cosine Similarity by Dataset)

|Quarters|SWaT|WADI|PSM|
|-|-|-|-|
|Q1vsQ2|0.911|0.965|0.955|
|Q1vsQ3|0.923|0.966|0.953|
|Q1vsQ4|0.928|0.959|0.918|
|Q2vsQ3|0.978|0.973|0.952|
|Q2vsQ4|0.978|0.964|0.898|
|Q3vsQ4|0.981|0.963|0.915|

The consistently high similarity indicates that the learned causal relationships remain stable across time segments, supporting the validity of our approach.

[1] Spirtes et al., Causation, Prediction, and Search, 2000.
[2] Peters et al., Elements of Causal Inference, 2017.
[3] Kong et al., CausalFormer:..., 2024.
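The consistency check above reduces to cosine similarity between flattened causal adjacency matrices learned on different time segments. A minimal sketch, with toy 2-variable matrices standing in for the actual learned graphs:

```python
import math

def causal_matrix_similarity(a, b):
    """Cosine similarity between two causal adjacency matrices, flattened to vectors."""
    va = [x for row in a for x in row]
    vb = [x for row in b for x in row]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(x * x for x in vb))
    return dot / norm

# Toy causal matrices from two time segments (illustrative values): the same edges
# are present with slightly different weights, so similarity is close to 1.
q1 = [[0.0, 0.9], [0.1, 0.0]]
q2 = [[0.0, 0.8], [0.2, 0.0]]
sim = causal_matrix_similarity(q1, q2)
```

Values near 1 for all quarter pairs, as in the table above, indicate that the discovered structure barely drifts across the training period.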
>**Comparison with CARLA**

According to the reviewer’s suggestion, we compare CAROTS with CARLA [4] under the same settings and datasets and report the mean ± std results over three seeds. The results below show that CAROTS outperforms CARLA on most datasets and metrics.

|Dataset|Metric|CARLA|CAROTS|
|-|-|-|-|
|SWaT|AUROC|0.807±0.034|0.852±0.008|
| |AUPRC|0.691±0.015|0.764±0.003|
| |F1|0.742±0.022|0.791±0.008|
|WADI|AUROC|0.533±0.056|0.622±0.042|
| |AUPRC|0.103±0.047|0.260±0.021|
| |F1|0.175±0.058|0.391±0.076|
|PSM|AUROC|0.445±0.041|0.783±0.008|
| |AUPRC|0.257±0.012|0.595±0.007|
| |F1|0.444±0.001|0.603±0.011|
|SMD_2-1|AUROC|0.546±0.157|0.726±0.023|
| |AUPRC|0.156±0.078|0.193±0.018|
| |F1|0.202±0.087|0.299±0.026|
|SMD_3-7|AUROC|0.483±0.075|0.769±0.011|
| |AUPRC|0.171±0.050|0.430±0.015|
| |F1|0.254±0.069|0.564±0.011|
|MSL_P-14|AUROC|0.712±0.165|0.782±0.028|
| |AUPRC|0.521±0.154|0.449±0.030|
| |F1|0.639±0.113|0.599±0.051|
|MSL_P-15|AUROC|0.571±0.117|0.701±0.008|
| |AUPRC|0.150±0.115|0.022±0.001|
| |F1|0.272±0.136|0.087±0.004|

[4] Darban et al., CARLA:..., 2025

>**Ablation study on sigma**

Results of the ablation study on σ in CPA over a wide range (0 to 0.4) are presented below (SWaT dataset; mean ± std over 3 seeds):

|σ|AUROC|AUPRC|F1|
|-|-|-|-|
|0|0.850±0.001|0.761±0.002|0.798±0.001|
|0.05|0.853±0.003|0.762±0.002|0.797±0.004|
|0.1|0.852±0.008|0.764±0.003|0.791±0.008|
|0.2|0.849±0.007|0.759±0.009|0.795±0.000|
|0.4|0.848±0.002|0.762±0.007|0.792±0.001|

Performance remains stable across different σ’s, indicating that the generated samples do not degrade model quality, even with higher noise levels. These results suggest that CPA is robust to the choice of σ within a reasonable range.

>**Results of using other causal discovery methods**

We agree that studying the impact of different causal discovery methods is important for assessing the generality of CAROTS.
We are currently running experiments with alternative causal discovery methods, and we will upload the results as soon as the experiments are completed. >**Explanation for the VAR results** CAROTS behaves differently on the VAR dataset because VAR has different characteristics from other datasets. VAR is synthetically generated using a linear autoregressive process, where all variable relationships are linear and stable over time. As a result, methods that rely on modeling correlations or co-occurrence patterns (like TimesNet or USAD) are naturally well-suited for this setting. In contrast, CAROTS is designed to detect anomalies that disrupt more complex or directional causal relationships, especially in non-linear or dynamic systems. This is why it performs particularly well on datasets like Lorenz96 or SWaT, which reflect those properties. That said, CAROTS still achieves strong results on VAR for certain anomaly types, such as Point Contextual and Collective Global, where detecting multi-variable inconsistency is important. We believe this suggests that CAROTS complements correlation-based approaches and is especially useful when anomalies reflect deeper structural disruptions. --- Rebuttal Comment 1.1: Comment: Thank you for your informative rebuttal. The authors' responses address most of my concerns, but I hope to see the comparison and analysis of other causal discovery methods. Nonetheless, I have kept the original positive score. --- Reply to Comment 1.1.1: Comment: Thank you again for your thoughtful comments and for maintaining a positive score. As suggested, we conducted additional experiments to examine how CAROTS performs under different causal discovery methods. Specifically, we evaluated CAROTS using Neural Granger Causality (NGC) [1], CUTS [2], and CUTS+ [3] across six datasets. 
Below are the results (AUROC, mean ± std):

|Method|SWaT|WADI|SMD_2-1|SMD_3-7|MSL_P-14|MSL_P-15|
|-|-|-|-|-|-|-|
|NGC|0.852±0.007|0.485±0.011|0.684±0.007|0.691±0.011|0.764±0.001|0.758±0.018|
|CUTS|0.855±0.007|0.490±0.014|0.726±0.077|0.694±0.033|0.764±0.000|0.662±0.003|
|CUTS+|0.852±0.008|0.502±0.007|0.703±0.021|0.769±0.011|0.764±0.000|0.701±0.008|

We find that CAROTS consistently performs well across all causal discovery methods, with only modest performance variation. While each method performs best on different datasets (e.g., CUTS+ on WADI and SMD_3-7; CUTS on SWaT and SMD_2-1; NGC on MSL_P-15), the overall performance remains robust and competitive. This indicates that CAROTS does not overly depend on a particular discovery algorithm or exact causal graph structure. Instead of solely relying on the causal graph searched by a causal discoverer for anomaly detection, CAROTS uses the causal graph as a guide for generating semantically meaningful causality-preserving or causality-disturbing augmentations for contrastive learning. Our additional results confirm that the causality-aware contrastive learning in CAROTS, driven by causality-informed augmentations, generalizes well across discovery methods and datasets. We appreciate your suggestion, which helped strengthen our empirical validation. We will include this analysis and discussion in the revised version of the paper.

[1] Tank et al., Neural Granger Causality, TPAMI, 2021.
[2] Cheng et al., CUTS: Neural Causal Discovery from Irregular Time-series Data, ICLR, 2023.
[3] Cheng et al., CUTS+: High-Dimensional Causal Discovery from Irregular Time-Series, AAAI, 2024.
Summary: The paper proposes a way to detect anomalies in multivariate time-series data using causality. The proposed method employs two data augmentors to obtain causality-preserving and causality-disturbing samples, respectively. Afterwards, regarding those samples as positive and negative samples, contrastive learning is performed to train the encoder of the anomaly detector. Experiments on five real-world and two synthetic datasets validate the effectiveness of the proposed method.

## update after rebuttal

The authors covered most of my concerns in the rebuttal, so I will increase my rating. Claims And Evidence:
- The use of two data augmentors to generate positive and negative examples for contrastive learning, and the application of contrastive learning to train the encoder of the anomaly detector to achieve causality-aware anomaly detection.
- A similarity-filtered one-class contrastive loss (SOC) is further proposed to incorporate hard samples during the training process.

Methods And Evaluation Criteria:
- The method looks rather simple: it involves two types of data augmentation and applies existing contrastive learning to the augmented samples.
- There is little theoretical validation of the reasons for proposing each module.

Theoretical Claims:
- There are not many theoretical claims or analyses.

Experimental Designs Or Analyses:
- The ablation study in Table 4 should be more complete: (1) It should be conducted on all 7 datasets used in the experiments. It was originally conducted on only 1 dataset, making the effect hard to capture. I think an additional ablation study on the same datasets as Table 2 is essential to clearly see the effectiveness of the proposed method. (2) Also, I request that the authors report the baseline performance without performing the data augmentation and contrastive learning. It might be useful for judging the effectiveness of the method if the authors could show the baseline performance.
(3) The reason for using two types of anomaly scores is not yet clear, since there is no comparison to a "w/o A_CL and A_CD" setting that uses neither A_CL nor A_CD. Overall, I am not fully convinced of the effectiveness of each module, due to the still-incomplete experimental settings. Supplementary Material: No supplemental submitted. Relation To Broader Scientific Literature: The paper has the potential to impact various domains that involve multivariate time-series data. Essential References Not Discussed: The references look rather complete. Other Strengths And Weaknesses: I think the most critical weakness of the paper is its experimental settings: the experiments, especially the ablation studies, are not complete yet. The authors need to design experiments to validate (1) the effectiveness of each module on "all" of the benchmarks used, and (2) the gap between the baseline and the proposed methods. Also, there is little theoretical validation of the effectiveness of each module. Other Comments Or Suggestions: I suggest the authors better design the experiments along the lines mentioned in the previous sections. Also, the authors could include more theoretical analysis of each module. Questions For Authors: In Table 4, why are other combinations such as "w/o A_CL and A_CD" not considered? Can the ablation study on SWaT represent the tendency on other datasets? Why was the ablation study not conducted on other datasets? What motivates the authors to use both A_CL and A_CD in scoring anomalies? How and why are CPA and CDA more effective than conventional data augmentation methods? Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer vFwM for the constructive comments. We are encouraged that the reviewer acknowledges our work's potential to impact various domains that involve multivariate time series. We hope our response addresses your concerns. Should the reviewer have more follow-up questions, we would be happy to answer them during the discussion period.

>**Extended ablation study from Table 4**

The table below presents the extended version of Table 4 on 7 real datasets and 2 synthetic datasets. For Lorenz96 and VAR, the reported values represent the average performance across the four different synthetic anomaly types. In summary, the extended results are generally consistent with Table 4 in the main paper.

|Config|SWaT|WADI|PSM|SMD_2-1|SMD_3-7|MSL_P-14|MSL_P-15|Lorenz96|VAR|
|-|-|-|-|-|-|-|-|-|-|
|w/o CPA|0.850|0.486|0.786|0.700|0.756|0.764|0.740|0.918|0.767|
|w/o CDA|0.842|0.488|0.789|0.623|0.732|0.764|0.609|0.917|0.785|
|w/o SOC|0.819|0.493|0.706|0.758|0.719|0.764|0.719|0.923|0.765|
|w/o A_CL|0.814|0.494|0.778|0.602|0.701|0.768|0.694|0.943|0.732|
|CAROTS† (w/o A_CD)|0.861|0.622|0.729|0.726|0.779|0.782|0.683|0.919|0.769|
|CAROTS|0.852|0.502|0.783|0.703|0.769|0.764|0.701|0.909|0.805|

- The setting without data augmentation and contrastive learning corresponds to using only A_CD, the causal forecasting-based anomaly score. This result, included in Table 4, is also included in the table above (w/o A_CL). While A_CD alone performs competitively, full CAROTS, which includes CPA, CDA, SOC, and A_CL, further improves the detection results.
- Anomaly detection without both A_CL and A_CD is infeasible because, by definition, every detection method requires at least one anomaly score with which to score samples. Instead, in Table 4 and the extended table above, we report the results of using only one of the two scores (w/o A_CL and w/o A_CD) to demonstrate the efficacy of each score.
CAROTS† (w/o A_CD) achieves the highest detection performance on 6 datasets, which highlights the effectiveness of the proposed causality-driven contrastive learning and A_CL. CAROTS, which additionally uses A_CD, further improves over CAROTS† on 3 datasets, indicating that the auxiliary A_CD score offers complementary signals for anomaly detection.
- Table 4 and the extended table demonstrate that CDA and CPA are more effective than conventional data augmentation methods because replacing either one with a conventional method (w/o CPA & w/o CDA) results in a performance drop. Their effectiveness is explained in more detail in the section below.

>**Validation for the effectiveness of each module**

We hope our extended ablation study, which empirically evidences the effectiveness of each module, alleviates the reviewer's concern about the lack of theoretical validation. Below, we clarify the intuitive justification behind the design of each module.
- Novel Data Augmentation (CPA + CDA)
  - CPA (Causality-Preserving Augmentor) generates causality-preserving variations by perturbing causing variables and using the causal forecaster to reconstruct affected variables, ensuring that augmented samples retain normal causal behavior. This encourages the model to learn representations that cover diverse yet causally consistent normal patterns.
  - CDA (Causality-Disturbing Augmentor) breaks causal relationships to synthesize anomalies. By injecting perturbations into randomly extracted subgraphs of the causal graph, CDA produces samples that simulate how real anomalies disrupt inter-variable dynamics, something conventional augmentations cannot replicate.
  - Compared to conventional time-series augmentation methods that rely on surface-level distortions (e.g., time warping, noise injection), CPA and CDA leverage the underlying causal structure learned from the training data to obtain more semantically meaningful and task-relevant augmentations.
  Together, augmentations from CPA and CDA enable the contrastive learning process to discriminate samples based on causal consistency rather than superficial similarity.
- Novel SOC Loss guides contrastive learning to respect the semantic diversity within normal data by filtering out low-similarity positives early in training. In effect, it yields a more structured embedding space, grouped by semantic diversity.
- Novel Anomaly Scores (A_CL and A_CD): Once contrastive learning with CPA, CDA, and SOC yields a causality-informed embedding space, A_CL detects anomalies by measuring how much a test sample deviates from the causality-preserving embedding space. In addition, A_CD utilizes the causal forecaster to obtain an auxiliary causality-driven signal for anomaly detection; if the causal forecaster yields a high forecasting error, the sample is more likely to be an anomaly.

>**Ethics review flag**

We noticed that an ethical review flag was raised for our paper. To our understanding, our work does not involve any ethical concerns, but we would be grateful for any clarification to help us address this appropriately.
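To make the two-score design concrete, here is a minimal numerical sketch of how an embedding-based score and a forecasting-error score can complement each other. All function names, the nearest-neighbor form of the embedding score, and the additive weighting are our assumptions for illustration, not the paper's exact formulation of A_CL and A_CD.

```python
import numpy as np

def score_a_cl(test_emb, normal_embs):
    # Stand-in for A_CL: deviation of the test embedding from the
    # (causality-preserving) normal embedding space, measured as the
    # distance to the nearest normal embedding.
    return float(np.min(np.linalg.norm(normal_embs - test_emb, axis=1)))

def score_a_cd(forecast, actual):
    # Stand-in for A_CD: forecasting error of a causal forecaster
    # on the test window.
    return float(np.linalg.norm(forecast - actual))

def combined_score(test_emb, normal_embs, forecast, actual, lam=1.0):
    # Assumed additive combination with weight lam.
    return score_a_cl(test_emb, normal_embs) + lam * score_a_cd(forecast, actual)

# Toy check: a sample far from the normal cluster with a large forecast
# error scores higher than a normal-looking, well-forecast sample.
normal_embs = np.zeros((5, 3))
normal = combined_score(np.full(3, 0.1), normal_embs, np.ones(4), np.ones(4))
anomaly = combined_score(np.full(3, 2.0), normal_embs, np.ones(4), np.zeros(4))
assert anomaly > normal
```

The point of the toy check is only that the two signals are complementary: a sample can look normal in embedding space yet be badly forecast, or vice versa, and the sum flags either case.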
Bifurcate then Alienate: Incomplete Multi-view Clustering via Coupled Distribution Learning with Linear Overhead
Accept (poster)
Summary: In the paper, the authors propose a dual-determinant incomplete multi-view clustering algorithm, BACDL. They partition feature clusters through a bifurcation scheme, and alienate the bifurcations to differentiate determinants. With coupled distribution learning, it alleviates the dimension inconsistency by introducing view guidance. Then, they bridge all views based on the principle relating marginal and conditional distributions, and reorder all incomplete sample clusters in the potential space to construct the clustering embedding. Finally, comparison experiments with different missing ratios are organized to reveal the merits of BACDL. Claims And Evidence: The provided evidence supports the claims. Methods And Evaluation Criteria: The method is applicable to large-scale clustering scenarios. Theoretical Claims: Yes. No issues were found. Experimental Designs Or Analyses: Yes. The designs confirm the effectiveness of the proposed BACDL. Supplementary Material: I viewed the supplementary material. Relation To Broader Scientific Literature: The ability to be applied to large-scale scenarios highlights its practicality. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - The designed dual-determinant learning paradigm and coupled distribution learning paradigm for IMC issues are interesting, and have reference value for further study. - The solution is technically sound, achieving linear overhead and ensuring convergence. - The introduction of sample cluster transformation for misregistration is novel. Experiments are evaluated from multiple perspectives. Weaknesses: - The discussion regarding Remark 1 appears overly concise. It is recommended to provide more details. - Regarding TCIMC and LSIMV (without feature cluster bifurcating), they receive preferable results on DEOLOG. From Table 2, there is a remarkable 2.19 PUR gap. There is no interpretation of this phenomenon.
- The application of orthogonal rotation in the potential representation space to reorganize the sample clusters could degrade the distribution attributes. This implies that the cluster label quality is highly dependent on the rotation. Other Comments Or Suggestions: None Questions For Authors: - Regarding the number of feature clusters and the number of sample clusters, how are their values set? Based on eq (1), these numbers can be arbitrary and only the association needs to satisfy the dimensionality constraints. - The view guidance plays a role in avoiding the dimension difference, and is related to the common representation matrix for extracting perspective-shared features. If it is instead assigned to the samples (this can also learn shared features, since all sample dimensions are consistent after projecting; moreover, the dual-determinant representations are concurrently learned in the same dimension space), would that improve the results? - Instead of the unified association, if the framework in eq (1) is used to construct the sample clusters, which are then remapped, it seems that loss 1 is not necessary. In this case, is the performance still competitive? Could you please provide some empirical evidence for this baseline? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** More details regarding Remark 1.

**A1:** Thanks. Since $\mathbf{X} _r\in\mathbb{R}^{d_r\times n}$ and $\mathbf{G} _r\in\mathbb{R}^{n\times n_r}$, computing $q=\|\mathbf{X} _r\mathbf{G} _r-\widehat{\mathbf{A}} _r\mathbf{E} _r^{\top}\mathbf{G} _r\| _F^2$ takes at least $\mathcal{O}(n n_r)$ overhead. Note that $\mathbf{G}_r$ consists of 0 and 1, and there is only one element that is 1 in each column. $\mathbf{G} _r\mathbf{G} _r^{\top}$ is diagonal with 0 and 1. $\mathbf{X} _r\mathbf{G} _r\mathbf{G} _r^{\top}$ and $\mathbf{E} _r^{\top}\mathbf{G} _r\mathbf{G} _r^{\top}$ equal $\mathbf{X} _r\odot\mathbf{O} _r$ and $\mathbf{E} _r^{\top}\odot\mathbf{Q} _r$, where $\mathbf{O} _r$ is $\mathbf{1} _{d _r}\cdot[\sum _{j=1}^{n _r}(\mathbf{G} _r) _{1,j},\cdots,\sum _{j=1}^{n _r}(\mathbf{G} _r) _{n,j}]$ and $\mathbf{Q} _r$ is $\mathbf{1} _{k}\cdot[\sum _{j=1}^{n _r}(\mathbf{G} _r) _{1,j},\cdots,\sum _{j=1}^{n _r}(\mathbf{G} _r) _{n,j}]$. So, $q$ equals $\|\mathbf{X} _r\odot\mathbf{O} _r-\widehat{\mathbf{A}} _r\mathbf{E} _r^{\top}\odot\mathbf{Q} _r\| _F^2$. Computing the latter needs $\mathcal{O}(d_rn)$, which is $\mathcal{O}(n)$.

**Q2:** Interpretation regarding TCIMC and LSIMV.

**A2:** TCIMC uses the tensor Schatten p-norm to explore complementary information and spatial structure. Despite enhancing view interaction, it induces intensive time overhead due to the tensor operation, and is unsuitable for large-scale tasks. LSIMV constructs sparse and structured representations. It adopts a norm-based sparse constraint to generate low-dimensional features, and uses local embedding for aggregation. This incurs a relatively larger memory cost due to the almost full-sized graph.

**Q3:** Does the orthogonal rotation affect the cluster label quality?

**A3:** This rotation reorganizes the sample clusters to relieve misregistration.
Although it alters the distribution, kindly note that the final common sample clusters are also required to be orthogonal, which keeps pace with the rotation. See the following comparisons. NRN: no rotation involved; ART: contains rotation.

|FLOEVEN||||||||||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
||0.2|||0.5|||0.8|||
||PUR|ACC|FSC|PUR|ACC|FSC|PUR|ACC|FSC|
|NRN|10.57|10.13|5.18|10.35|10.13|5.13|9.98|9.39|5.07|
|ART|**11.17**|**10.80**|**5.89**|**11.25**|**10.53**|**5.88**|**11.54**|**10.74**|**5.83**|
|**SYNTHREED**||||||||||
|NRN|38.17|38.17|33.68|37.67|36.33|33.55|38.50|37.00|33.65|
|ART|**42.50**|**42.12**|**35.21**|**42.83**|**42.83**|**35.24**|**41.04**|**41.04**|**34.59**|

**Q4:** How are the numbers of feature clusters and sample clusters set?

**A4:** The model based on eq (1) mainly shows that this paradigm can factorize multi-view data into feature clusters and sample clusters, together with their association, to mine latent patterns. For generality, the numbers of feature clusters and sample clusters can be any value (smaller than the feature dimension and sample size). In practice, we set both values to $k$. The benefits are manifold. The dataset will be divided into $k$ groups, whether viewed from the samples or from the features. The misregistration will be relieved owing to the small sample cluster number. Also, this makes the association small-sized, saving resources.

**Q5:** Does assigning view guidance to samples (VGS) facilitate the results?

**A5:** This is a solution to avoid the dimension difference. The following experiments reveal the performance. DAM: devised assigning mechanism.
|FLOEVEN||||||||||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|VGS|10.83|10.32|5.47|10.77|9.73|5.58|10.63|10.07|5.37|
|DAM|**11.17**|**10.80**|**5.89**|**11.25**|**10.53**|**5.88**|**11.54**|**10.74**|**5.83**|
|**SYNTHREED**||||||||||
|VGS|41.53|40.93|33.97|41.74|42.17|34.82|39.92|39.83|33.64|
|DAM|**42.50**|**42.12**|**35.21**|**42.83**|**42.83**|**35.24**|**41.04**|**41.04**|**34.59**|

Where VGS yields superior results, the reason is that this operation filters out noise and data redundancy. Where it yields inferior results, a possible reason is that the operation causes a loss of diversity information.

**Q6:** Performance when using the association in eq (1).

**A6:** This builds an association on each view for feature clusters and sample clusters separately. SAS: separated association scheme in eq (1); DUA: devised unified association.

|FLOEVEN||||||||||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|SAS|10.95|10.50|5.35|10.54|10.24|5.76|11.43|10.60|**6.01**|
|DUA|**11.17**|**10.80**|**5.89**|**11.25**|**10.53**|**5.88**|**11.54**|**10.74**|5.83|
|**SYNTHREED**||||||||||
|SAS|40.51|40.57|**35.95**|41.48|41.37|34.46|40.76|**41.76**|33.18|
|DUA|**42.50**|**42.12**|35.21|**42.83**|**42.83**|**35.24**|**41.04**|41.04|**34.59**|

Under eq (1), the view communication could be inadequate due to the separability. Also, this could fail to propagate feature cluster information across views into the sample clusters, since the association is established individually.

--- Rebuttal Comment 1.1: Comment: The author's rebuttal has addressed my concerns, so I have decided to raise my score to accept.

--- Reply to Comment 1.1.1: Comment: Dear Reviewer hkPK, Thank you for your encouraging words! We will further enhance the manuscript in line with your expert suggestions. Best Wishes, The authors
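The index-matrix identity behind the linear-overhead claim in A1 is easy to verify numerically. A minimal sketch (variable names and sizes are ours) checks that right-multiplying by $\mathbf{G}_r\mathbf{G}_r^{\top}$ amounts to masking columns by the row sums of $\mathbf{G}_r$, so the expensive product can be replaced by an element-wise operation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_r = 4, 10, 6                # feature dim, total samples, observed samples

# Index matrix G with one-hot columns marking which samples are observed
obs = rng.choice(n, size=n_r, replace=False)
G = np.zeros((n, n_r))
G[obs, np.arange(n_r)] = 1.0

X = rng.standard_normal((d, n))
O = G.sum(axis=1)                   # row sums of G: 1 if observed, 0 otherwise

# O(d*n*n_r) matrix product vs. O(d*n) element-wise masking
dense = X @ G @ G.T
masked = X * O                      # broadcasts the 0/1 mask over rows of X
assert np.allclose(dense, masked)

# Consequently the Frobenius norms in the objective match as well
assert np.isclose(np.linalg.norm(X @ G), np.linalg.norm(masked))
```

This is why the cost of the terms involving $\mathbf{G}_r$ drops from $\mathcal{O}(n n_r)$ to $\mathcal{O}(n)$: the one-hot columns make $\mathbf{G}_r\mathbf{G}_r^{\top}$ a 0/1 diagonal matrix, i.e., a column mask.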
Summary: This work aims to alleviate the issue of the single-determinant paradigm in incomplete multi-view clustering. It introduces distribution learning and associates each type of determinant with bifurcated feature clusters. Through mutual exclusion learning and view guidance learning, it eliminates the dimension inconsistency and enlarges the determinant distinction. All views are interconnected based on the distribution principle. After rotating and remapping, incomplete sample clusters are formed into a full clustering embedding. A nine-step updating rule with overall linear overhead and theoretical convergence efficiently minimizes the objective function. Claims And Evidence: Utilizing dual-perspective determinants to encode cluster representations is under-studied in incomplete multi-view clustering. I agree with that. Tables 4 and 5 illustrate its effectiveness. Methods And Evaluation Criteria: The authors conduct comparison experiments under several missing ratios to illustrate the effectiveness in tackling IMC issues. Theoretical Claims: Yes. They are correct. Experimental Designs Or Analyses: Yes. I checked the experimental designs. They are valid. Supplementary Material: Yes. I checked the Appendix. Relation To Broader Scientific Literature: The key contributions in this paper are the dual-perspective determinant cluster coding and the solution with overall linear overhead, which may serve as groundwork for subsequent studies. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1) The algorithm analysis is comprehensive, covering complexity, convergence, sensitivity, etc. 2) Experiments are solid, involving diverse missing ratios and data scales. The ablation is also thorough. 3) The flow chart of the proposed model is intuitive, and the function of each part is clear. Weaknesses: 1) When formulating Eq. (3), the description of the sample cluster distribution needs further enhancement.
2) The space rotation operation could alter the values of the learned sample clusters. This influence requires more illustration. 3) The initialization lacks clarity, for instance, the cluster association. 4) A systematic analysis of each term's contribution to the objective loss would significantly strengthen the motivation. Other Comments Or Suggestions: See the weaknesses above. Questions For Authors: 1. As mentioned, the final clustering results are derived by performing spectral clustering on the common sample clusters. The sample clusters from each view contribute to the formation of the common sample clusters. A question arises: why do the common sample clusters not adhere to constraints similar to those on the view sample clusters? 2. The number of samples observed on each view, $n_r$, is embedded into the original data matrix; how is it merged when updating the view coefficient? If not merged, is the computing overhead impacted by it? In practice, does it affect the running speed? 3. When generating the clustering embedding by accumulating the sample clusters of all views vertically, how does the algorithm perform? What are possible reasons? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** More descriptions of the sample cluster distribution in Eq.(3).

**A1:** Thanks. Each row of the sample cluster $\mathbf{E}_r$ represents a probability distribution, so the sum of each row of $\mathbf{E}_r$ needs to be 1. To handle the incompleteness, we introduce the index matrix $\mathbf{G}_r$, which consists of 0 and 1. We can formulate the observed sample clusters as $\mathbf{G} _r^{\top}\mathbf{E} _r$. Hence, the incomplete sample clusters need to satisfy $\mathbf{G} _r^{\top}\mathbf{E} _r\mathbf{1} _k =\mathbf{1} _{n _r}$.

**Q2:** More illustrations of the space rotation influence.

**A2:** It mainly serves to reorder the sample clusters so as to make them as consistent as possible. Note that the common sample clusters are subject to constraints similar to the space operation, which can help tackle the negative elements that the rotation operation brings. Please see the following performance comparisons. DM: direct mapping; RO: rotation operation.

|FLOEVEN||||||||||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
||0.2|||0.5|||0.8|||
||PUR|ACC|FSC|PUR|ACC|FSC|PUR|ACC|FSC|
|DM|10.57|10.13|5.18|10.35|10.13|5.13|9.98|9.39|5.07|
|RO|**11.17**|**10.80**|**5.89**|**11.25**|**10.53**|**5.88**|**11.54**|**10.74**|**5.83**|
|**SYNTHREED**||||||||||
|DM|38.17|38.17|33.68|37.67|36.33|33.55|38.50|37.00|33.65|
|RO|**42.50**|**42.12**|**35.21**|**42.83**|**42.83**|**35.24**|**41.04**|**41.04**|**34.59**|
|**DEOLOG**||||||||||
|DM|30.62|21.51|17.96|30.34|21.89|18.16|30.23|22.01|18.07|
|RO|**31.84**|**22.32**|**18.45**|**31.84**|**23.74**|**18.41**|**31.28**|**24.25**|**19.56**|

One can observe that after the space rotation, the performance improves.

**Q3:** The initialization lacks clarity.

**A3:** Thanks! We create a random matrix with elements from 0 to 1, and then perform column normalization. We use it to initialize $\mathbf{P}_r$, $\mathbf{C}_r$ and $\mathbf{C}$.
For $\mathbf{F}_r$ and $\mathbf{E}$, we use random orthogonal matrices to initialize them. We use a row-normalized random matrix with elements from 0 to 1 to initialize $\mathbf{E}_r$, and a random matrix with elements from 0 to 1 to initialize $\mathbf{D}$. For $a_r$ and $b_r$, we initialize them with $1/v$ and $1/\sqrt{v}$.

**Q4:** Analysis of each term's contribution.

**A4:** The first term acts as an error reconstruction, and extracts feature clusters and sample clusters as well as their association. It also adaptively balances the importance of each view. The second term separates feature clusters from sample clusters to highlight their discrimination by point-to-point alienation. The third term mitigates the misregistration and constructs complete sample clusters.

**Q5:** Why do the common sample clusters not adhere to constraints similar to those on the view sample clusters?

**A5:** We introduce an orthogonal transformation to reformulate the view sample clusters, which inevitably brings negative elements. Note that the essence of the view sample clusters is a probability distribution, so all of their elements are non-negative. After the transformation that produces the common sample clusters, there will be negative elements. So, we make the common sample clusters adhere to constraints similar to the orthogonal transformation.

**Q6:** How is $n_r$ merged when updating the view coefficient? Does it affect the running speed?

**A6:** Updating the view coefficient needs to compute $\left\|\mathbf{X} _r\mathbf{G}_r\right\| _F^2$. Note that for each column of $\mathbf{G}_r$, there is only one 1 and the other elements are 0. Thus $\left\|\mathbf{X} _r\mathbf{G} _r\right\| _F^2$ is equal to $\left\|\mathbf{X} _r\mathbf{G} _r\mathbf{G} _r^{\top}\right\| _F^2$. The diagonal elements of $\mathbf{G} _r\mathbf{G} _r^{\top}$ are 1 or 0 and the other elements are 0. Consequently, $\mathbf{X} _r\mathbf{G} _r\mathbf{G} _r^{\top}$ selects some columns of $\mathbf{X}_r$.
So, $\left\|\mathbf{X} _r\mathbf{G} _r\right\| _F^2$ is equal to $\left\|\mathbf{X} _r\odot\mathbf{O} _r\right\| _F^2$, where $\mathbf{O}_r$ is $\mathbf{1} _{d _r}\cdot[\sum _{j=1}^{n _r}(\mathbf{G} _r) _{1,j},\cdots,\sum _{j=1}^{n _r}(\mathbf{G} _r) _{n,j} ]$. Before merging, the computing overhead is $\mathcal{O}(n n_r)$. After merging, it is $\mathcal{O}(n)$ and not affected by $n_r$.

**Q7:** When accumulating sample clusters vertically, how does the algorithm behave? Reasons?

**A7:** For the performance comparison, please refer to A7 in our response to Reviewer xp7C. We attribute this phenomenon to three points. Accumulating the sample clusters of all views vertically could result in inadequate communication between sample clusters across different views, which is not conducive to formulating rich representations. Nor does this automatically measure the importance of different sample clusters. Meanwhile, the misregistration will disturb the cluster structure, and consequently degrades the formulated representations.

--- Rebuttal Comment 1.1: Comment: My comments have been partially addressed in the rebuttal. I raise my rating to accept.

--- Reply to Comment 1.1.1: Comment: Dear Reviewer KBMg, Many thanks for recognizing our contributions! We will further polish the manuscript according to your profound and professional guidance in the future. Best Wishes, The authors
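The sample-cluster reordering discussed in A2 above behaves like an orthogonal Procrustes alignment of each view's sample clusters to a reference. Here is a minimal sketch under that assumption (the alignment form, variable names, and the permutation test case are ours, not the paper's exact update rule):

```python
import numpy as np

def align(E_r, E_ref):
    """Orthogonal rotation R minimizing ||E_r @ R - E_ref||_F (Procrustes)."""
    U, _, Vt = np.linalg.svd(E_r.T @ E_ref)
    return E_r @ (U @ Vt)

rng = np.random.default_rng(1)
E_ref = rng.random((8, 3))           # reference sample clusters (n x k)
P = np.eye(3)[[2, 0, 1]]             # a permutation: misregistered cluster order
E_r = E_ref @ P                      # view sample clusters with columns reordered

# Since a permutation is orthogonal, the optimal rotation undoes it exactly
assert np.allclose(align(E_r, E_ref), E_ref)
```

The test case mirrors the misregistration problem: the same clusters appear in a different column order on each view, and an orthogonal rotation suffices to bring them back into correspondence. Note that, as the rebuttal's A5 points out, a general orthogonal rotation can introduce negative entries, which is why the common sample clusters are given matching orthogonality-style constraints rather than row-stochastic ones.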
Summary: A BACDL algorithm with dual-determinant learning is specially designed for incomplete multi-view clustering (IMC) in this paper. It bifurcates feature clusters and further alienates them through mutual exclusion learning to strengthen the discrimination. It alleviates the dimension inconsistency, and bridges all views by unifying the association between feature clusters and sample clusters. The full clustering embedding is formulated by weighted space rotation and remapping. Theoretical analysis demonstrates its linear overhead and convergence. Experimental results under multiple missing ratios validate its effectiveness. Claims And Evidence: This paper is organized in a clear manner and provides some insights for incomplete multi-view clustering. I understand the motivation of the dual-determinant scheme. The feature cluster bifurcating highlights the novelty. Methods And Evaluation Criteria: This paper compares the results with multiple methods on three metrics. Theoretical Claims: The convergence and complexity analyses are fine. Experimental Designs Or Analyses: The experimental designs are reasonable. Supplementary Material: I checked the proofs and additional experimental results. Relation To Broader Scientific Literature: The feature cluster bifurcating and alienating could provide further reference for incomplete multi-view clustering. Essential References Not Discussed: None Other Strengths And Weaknesses: The strengths: 1. The motivation is clearly illustrated and the overall organization is logically structured. 2. The idea of bifurcating feature clusters and alienating them via mutual exclusion learning is novel to a certain degree. 3. The authors conduct extensive experiments and also provide moderate discussion of the results. The weaknesses: 1. Introducing view guidance may induce additional computational complexity, which could impair the efficiency goal. 2.
The reasons for introducing common sample clusters and for setting them to be orthogonal are not illustrated in depth. Other Comments Or Suggestions: Refer to the above comments. Questions For Authors: 1. Does the missing ratio affect the complexity of BACDL? If yes, how? As illustrated in the updating rule, the index matrix is associated with the data matrix. 2. Why is it necessary to guarantee that the matrix C is column-normalized? 3. How does the element-wise multiplication operation decrease the computational overhead? 4. Why learn the sample clusters on each view separately? Why not directly learn common sample clusters? Is this beneficial for the performance improvement? 5. Rather than formulating the full clustering embedding by mapping the sample clusters of each view, when stacking them, there is no common plane; how does the model perform? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** View guidance may impair the efficiency goal.

**A1:** Thanks. The computing cost of view guidance is linear, and thus hardly affects the efficiency goal. It requires constructing $\mathbf{X} _r\mathbf{G} _r\mathbf{G} _r^{\top}\mathbf{E} _r\mathbf{D} _{\gamma}^{\top}\mathbf{C}^{\top}$, $\mathbf{C} _r\mathbf{C}^{\top}$ and $[\mathbf{P} _r\mathbf{C}|\mathbf{C} _r]\mathbf{D}\mathbf{E} _r^{\top}\mathbf{G} _r\mathbf{G} _r^{\top}\mathbf{E} _r\mathbf{D} _{\gamma}^{\top}\mathbf{C}^{\top}$, which take $\mathcal{O}(d_rn+d_rnk+d_rk^2)$, $\mathcal{O}(d_rk^2)$, and $\mathcal{O}( nk + 2d_rk^2 + d_rnk)$, respectively. So, it totally takes $\mathcal{O}(d_rnk+d_rk^2)$, which is $\mathcal{O}(n)$ and consistent with the efficiency goal.

**Q2:** Why introduce common orthogonal sample clusters?

**A2:** They gather all view sample clusters to formulate the full embedding. Due to the missing instances, the sample clusters on each view are incomplete. The orthogonality plays a role in enhancing the separability of the learned common sample clusters to better group samples.

**Q3:** Does the missing ratio affect the complexity of BACDL?

**A3:** It does not affect the complexity. The index matrix encoding the missing ratio has the property that, in each column, there is only one element that is 1 while the other elements are 0. Hence $\mathbf{G} _r\mathbf{G} _r^{\top}$ is diagonal with elements either 0 or 1. So, $\mathbf{X} _r\mathbf{G} _r\mathbf{G} _r^{\top}$ equals $\mathbf{X} _r\odot\mathbf{O} _r$, where $\mathbf{O} _r=\mathbf{1} _{d _r}\left[\sum _{j=1}^{n _r}(\mathbf{G} _r) _{1,j}, \cdots,\sum _{j=1}^{n _r}(\mathbf{G} _r) _{n,j}\right]$. This takes $\mathcal{O}(d_rn)$ cost and is independent of the missing ratio.

**Q4:** Why guarantee that C is column-normalized?

**A4:** Each column of the feature clusters denotes a probability distribution over all feature dimensions, so each column sum needs to add up to 1. After bifurcating, each part should also conform to this point.
On the basis of a column-normalized $\mathbf{P}_r$, we derive that $\mathbf{P} _r\mathbf{C}$ is column-normalized provided that $\mathbf{C}$ is column-normalized.

**Q5:** How does the element-wise operation reduce the computational overhead?

**A5:** This mainly benefits from the equivalent element transformation. For $\mathbf{E}_r$, directly calculating $\left\|\mathbf{E} _r^{\top}\mathbf{G} _r\right\| _F^2$ takes $\mathcal{O}(knn_r)$. As the missing ratio decreases, $n_r$ gradually increases, and the overhead approaches $\mathcal{O}(n^2)$. Note that $\left\|\mathbf{E} _r^{\top}\mathbf{G} _r\right\| _F^2$ is equal to $\left\|\mathbf{E} _r^{\top}\mathbf{G} _r\mathbf{G} _r^{\top}\right\| _F^2$. Then, by the element-wise operation, $\left\|\mathbf{E} _r^{\top}\mathbf{G} _r\mathbf{G} _r^{\top}\right\| _F^2$ equals $\left\|\mathbf{E} _r^{\top}\odot\mathbf{Q} _r\right\| _F^2$, which takes $\mathcal{O}(nk)$, where $\mathbf{Q}_r$ is $\mathbf{1} _{k}\left[\sum _{j=1}^{n _r}(\mathbf{G} _r) _{1,j}, \cdots, \sum _{j=1}^{n _r}(\mathbf{G} _r) _{n,j}\right]$.

**Q6:** Why learn the sample clusters on each view separately (SEV)? Why not directly learn common ones?

**A6:** The former may facilitate capturing the view characteristics. Learning common sample clusters (CSC) is a feasible scheme, yet it could weaken the diversity of the view data.
|FLOEVEN||||||||||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
||0.2|||0.5|||0.8|||
||PUR|ACC|FSC|PUR|ACC|FSC|PUR|ACC|FSC|
|CSC|10.47|10.64|4.78|10.52|9.32|**5.92**|10.25|9.36|4.57|
|SEV|**11.17**|**10.80**|**5.89**|**11.25**|**10.53**|5.88|**11.54**|**10.74**|**5.83**|
|**SYNTHREED**||||||||||
|CSC|40.72|39.89|34.73|40.37|39.87|33.83|38.87|37.84|31.59|
|SEV|**42.50**|**42.12**|**35.21**|**42.83**|**42.83**|**35.24**|**41.04**|**41.04**|**34.59**|
|**DEOLOG**||||||||||
|CSC|30.68|20.57|17.58|**31.93**|19.82|17.63|28.93|22.47|18.97|
|SEV|**31.84**|**22.32**|**18.45**|31.84|**23.74**|**18.41**|**31.28**|**24.25**|**19.56**|

**Q7:** When stacking sample clusters (STA), what is the performance?

**A7:** Please see the following table. MSC: mapping sample clusters.

|FLOEVEN||||||||||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
||0.2|||0.5|||0.8|||
||PUR|ACC|FSC|PUR|ACC|FSC|PUR|ACC|FSC|
|STA|10.23|10.14|4.93|10.33|9.57|5.01|10.14|9.52|4.38|
|MSC|**11.17**|**10.80**|**5.89**|**11.25**|**10.53**|**5.88**|**11.54**|**10.74**|**5.83**|
|**SYNTHREED**||||||||||
|STA|40.05|38.97|33.98|40.73|38.56|33.69|36.73|36.40|30.79|
|MSC|**42.50**|**42.12**|**35.21**|**42.83**|**42.83**|**35.24**|**41.04**|**41.04**|**34.59**|
|**DEOLOG**||||||||||
|STA|29.06|20.82|16.62|30.29|21.87|17.79|28.75|22.85|18.51|
|MSC|**31.84**|**22.32**|**18.45**|**31.84**|**23.74**|**18.41**|**31.28**|**24.25**|**19.56**|

Directly stacking sample clusters could lead to insufficient interaction among views. Moreover, this does not adaptively balance their contributions. Misregistration will also weaken the quality of the sample clusters.
Summary: This paper introduces a new incomplete multi-view clustering (IMC) algorithm named BACDL. It simultaneously explores both perspective-shared and perspective-specific determinants through coupled distribution learning, with linear overhead. The approach bifurcates feature clusters and enhances discrimination via alienation and mutual exclusion learning. Extensive experiments validate the effectiveness of BACDL across multiple large-scale datasets and benchmarks, with results outperforming several state-of-the-art methods in terms of accuracy and efficiency. Claims And Evidence: The claims in the submission are not fully supported by clear evidence. Specifically, Theorem 5 lacks clarity regarding whether convergence refers to the objective function value or iterations. Additionally, the term "global optimal" in Theorems 1 and 2 is used imprecisely, without specifying if it refers to the global minimum of the original problem or the proxy problem. This paper also fails to clarify whether the function is strictly convex, which is crucial for ensuring a global minimum. These issues weaken the theoretical rigor of the claims. Methods And Evaluation Criteria: This paper validates the proposed method using multiple large-scale datasets, which is appropriate for assessing its effectiveness and efficiency. The experimental design is well-structured, and a variety of widely used evaluation metrics are employed, providing sufficient evidence to support the claims. The experiments are comprehensive and thoroughly validate the proposed approach. Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims in the paper, particularly for Theorem 1 and Theorem 2. However, there is an issue with the claim of "global optimal" in these theorems. This paper does not clearly specify whether "global optimal" refers to the global minimum of the original problem or the approximate solution to the proxy problem. 
Additionally, the use of the term "global" is imprecise, as it does not account for cases where multiple global solutions may exist, especially if the function is not strictly convex. This lack of rigor weakens the theoretical clarity of the claims. Experimental Designs Or Analyses: The experiments are very thorough. However, in the analysis of the results, the paper should provide a more quantitative evaluation of the performance gains. Additionally, hypothesis testing should be included to confirm the statistical significance of the observed performance improvements. Supplementary Material: No supplementary material was provided, but the paper includes a substantial number of appendices. These appendices contain many theoretical proofs, which I have reviewed thoroughly, particularly the proofs for Theorem 1 and Theorem 2. Relation To Broader Scientific Literature: The key contributions of the paper are well-positioned in relation to the broader scientific literature. The paper reviews several state-of-the-art methods, and compared to these existing approaches, the main contribution lies in simultaneously considering both perspective-shared and perspective-specific determinants. Additionally, the paper introduces a new algorithm designed to solve complex optimization problems. One of the notable aspects of the proposed algorithm is its linear overhead, which distinguishes it from other methods in the literature. This contribution offers a novel approach to tackling the problem, improving efficiency and effectiveness in ways not previously explored. Essential References Not Discussed: A thorough review of the existing works that are based on NMF should be included, such as: [1] Wen, J., Zhang, Z., Zhang, Z., Zhu, L., Fei, L., Zhang, B., & Xu, Y. (2021, May). Unified tensor framework for incomplete multi-view clustering and missing-view inferring. In Proceedings of the AAAI conference on artificial intelligence (Vol. 35, No. 11, pp. 10273-10281). 
[2] Wen, J., Xu, G., Tang, Z., Wang, W., Fei, L., & Xu, Y. (2023). Graph regularized and feature aware matrix factorization for robust incomplete multi-view clustering. IEEE Transactions on Circuits and Systems for Video Technology, 34(5), 3728-3741. Other Strengths And Weaknesses: Strengths: 1.Well-written and organized. This paper is clear, logically structured, and easy to follow. 2.Novel Approach. Introduces BACDL, addressing the limitations of current methods by capturing both perspective-shared and perspective-specific determinants, improving clustering performance. 3.Comprehensive Literature Review. Relevant works are reviewed in depth, providing a strong foundation for the proposed method. 4.Thorough Theoretical and Experimental Analysis. This paper includes detailed theoretical proofs, optimization steps, and a well-structured experimental setup. Performance is validated using large datasets and state-of-the-art baselines. 5.Efficiency: The algorithm exhibits linear overhead in terms of time and space, demonstrating scalability to large-scale datasets. Weaknesses: 1.Convergence and Approximation Guarantees. Theorem 1 proves the validity of the proxy problem, demonstrating that the proposed proxy problem is a valid approximation that can represent the original problem for optimization. However, Theorem 2 proves that solving this proxy problem guarantees a decrease in the objective function value, but it does not provide an analysis of convergence at the iteration level. In other words, while the objective value decreases, there is no clear proof that the algorithm will converge to a fixed point in a finite number of iterations. For approximation methods, in addition to convergence in terms of the objective value, it is important to prove an approximation guarantee, which would clarify the relationship between the solution to the approximate problem (i.e., the proxy problem) and the solution to the original problem. 
Many approximation methods provide such guarantees, and incorporating this would enhance the theoretical rigor of this work. 2.Ambiguities in Theoretical Details. (1) In Theorem 5, it is unclear which level of convergence is being discussed. The theorem mentions the convergence of the algorithm, but it is not specified whether this refers to convergence in the objective function value or convergence in terms of iterations. It is crucial to clarify this to prevent potential misinterpretation, as readers might assume iteration convergence is guaranteed when only objective value convergence is proven. (2) Similarly, the use of the term “global optimal” in Theorem 1 and Theorem 2 needs to be more precise. This paper uses the term "global solution" without clearly specifying whether it refers to the global minimum of the original problem or the approximate solution to the proxy problem. It would be more precise to refer to the "global minimum" of the proxy problem, especially in the context of convexity. Additionally, since a semi-definite Hessian matrix ensures that the function is convex, it only guarantees a global minimum if the function is strictly convex (i.e., the Hessian matrix is positive definite). If the function is not strictly convex, multiple global solutions might exist. The paper should clarify these points for better theoretical rigor. 3.Lack of Justification for Method Choices. Although non-negative matrix factorization (NMF) is used in the proposed method, this paper does not provide a clear justification for why NMF is preferred over other potential techniques. There are various other matrix factorization methods available, and it would be useful to explain why NMF is particularly suited for this incomplete multi-view clustering (IMC) problem. 
Additionally, the choice of perspective-shared and perspective-specific determinants is made from the feature clusters' perspective, but the paper does not provide any reasoning as to why this approach is chosen rather than considering the sample clusters' perspective. Moreover, while distribution learning and mutual exclusion learning are critical components of the algorithm, their roles and necessity are not clearly explained. 4.Performance Gains and Statistical Validation. Although this paper demonstrates significant performance gains of the proposed BACDL algorithm over several state-of-the-art methods, these gains are not sufficiently quantified statistically. For example, the performance comparison could benefit from hypothesis testing to determine whether the differences observed in performance metrics (e.g., clustering accuracy, purity) are statistically significant. This would provide stronger evidence that the observed improvements are not merely due to random variation. 5.Lacks a Thorough Review of NMF-based Approaches. Although the related work overview is comprehensive, the proposed method is based on Non-negative Matrix Factorization (NMF). Therefore, a thorough review of the existing works that are based on NMF should be included, such as: [1] Wen, J., Zhang, Z., Zhang, Z., Zhu, L., Fei, L., Zhang, B., & Xu, Y. (2021, May). Unified tensor framework for incomplete multi-view clustering and missing-view inferring. In Proceedings of the AAAI conference on artificial intelligence (Vol. 35, No. 11, pp. 10273-10281). [2] Wen, J., Xu, G., Tang, Z., Wang, W., Fei, L., & Xu, Y. (2023). Graph regularized and feature aware matrix factorization for robust incomplete multi-view clustering. IEEE Transactions on Circuits and Systems for Video Technology, 34(5), 3728-3741. 6.Writing and Structural Issues. (1) In the introduction, this paper mentions several limitations of the proposed method but fails to address the consequences of these limitations. 
(2) Section Transitions: There is a lack of smooth transitions between some sections, which affects the readability of the paper. For example, the introduction of Definition 1 (lines 180-188) feels abrupt and lacks context—it's unclear why this definition is introduced at this point. Similarly, Theorem 3 is presented without explaining its relevance or application, leaving readers to wonder what it contributes to the overall algorithm. (3) Example Clarifications: In some sections, such as the one introducing the index matrix G_r, it would be helpful to provide a concrete example to better illustrate its structure and functionality. 7.Complexity Analysis. The complexity analysis section provides a general overview of the computational costs, but it fails to account for the number of iterations required for convergence. Given that the algorithm involves several steps of optimization, the number of iterations could significantly affect the time complexity, especially for large-scale datasets. This paper does not clarify whether more iterations are needed when handling larger datasets or whether the algorithm's performance scales well with the number of iterations. 8.Reproducibility Issues. While the paper provides extensive experimental results, it does not offer any code or datasets, which makes it difficult for other researchers to verify the results or build upon this work. Providing access to the implementation would greatly enhance the transparency and reproducibility of the research. Other Comments Or Suggestions: 1.In Algorithm 1, it should be (g^h-g^{h+1}) rather than (g^{h+1}-g^h). 2.Figure 2: Include units for runtime to provide clearer context. 3.Ensure that all definitions (e.g., Definition 1) are properly referenced, specifying the exact source from which they are drawn. 
Additionally, this paper should cite the relevant literature for the Majorization-Minimization (MM) framework used in the optimization step, as this method is central to the proposed approach. Questions For Authors: 1.How do the authors ensure that the algorithm converges in a finite number of iterations? Can the authors provide proof of convergence in terms of the number of iterations? 2.Can the authors provide an approximation guarantee for the solution to the approximate problem used in the optimization? How does this compare to the exact solution? 3.This paper uses the term "global optimal" in the theoretical proofs. Is this referring to a global minimum of the objective function, or is it specific to the approximation method? 4.Why are perspective-shared and perspective-specific determinants modeled from the feature clusters' perspective rather than from the sample clusters' perspective? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 7KrE for the very constructive comments.

**Q1:** Providing iteration-level convergence and approximation guarantees would enhance the theoretical rigor.

**A1:** Many thanks! This is a challenging task within the rebuttal period, and at the moment we do not have a promising approach to it. We apologize for this, and will strive to explore this topic in future work.

**Q2:** Ambiguities in theoretical details, e.g., the convergence level and "global optimal".

**A2:** Thanks! We will carefully proofread the manuscript to state these more precisely. Specifically, the convergence refers to the objective function value, and "global optimal" refers to the approximate solution to the proxy problem.

**Q3:** Lack of justification for method choices, e.g., why NMF.

**A3:** NMF, subspace, kernel, and neural-network methods are four common means of tackling IMC problems. Kernel and neural-network methods can perform nonlinear mapping well, but they usually suffer from intensive complexity due to the large feature matrix and complex network structure; moreover, the selection of the kernel type and network architecture also relies heavily on empirical knowledge. By virtue of its capability for high-dimensional data, subspace learning effectively builds affinities by utilizing multiple latent spaces; however, due to its full-sized self-expression matrix, it generally incurs cubic computational overhead. Unlike these, NMF mines shared low-dimensional structures by decomposing each view's data matrix to discover latent clusters. In the decomposed basis vectors, each element can be seen as a contribution to the features, which makes the analysis more intuitive and helps to explain the clustering. Therefore, we adopt the NMF technique in the paper.

**Q4:** Performance gains and statistical validation.

**A4:** In the experiments, we run each method 50 times and record the average clustering results. 
We will add this statistical information in the next version and further analyze it.

**Q5:** Lack of a thorough review of NMF-based approaches.

**A5:** Thanks! We will thoroughly review the mentioned works in the next version.

**Q6:** Writing and structural issues, e.g., section transitions and example clarifications.

**A6:** Thanks! We will carefully polish the manuscript. The matrix $\mathbf{G}_r$ consists of 0s and 1s, with exactly one 1 in each column.

**Q7:** Complexity analysis should consider the number of iterations required for convergence.

**A7:** Good suggestion! The number of required iterations indeed affects the time complexity. Deriving the number of iterations required for convergence is a promising research direction, and we will make efforts to explore this topic in future work.

**Q8:** Reproducibility issues.

**A8:** We will release the source code in the final version.

**Q9:** What "global optimal" refers to.

**A9:** It refers to that of the proxy problem.

**Q10:** Why the feature clusters' perspective rather than the sample clusters' perspective?

**A10:** Kindly note that the feature cluster matrix maps the original data to a latent space and learns the marginal distribution of the original data; bifurcating it is conducive to extracting representative features, so it can be seen as a feature extractor. The sample cluster matrix mainly groups the formed representations to generate clusters, and a high-quality cluster partition relies on more discriminative representations. Hence, we adopt the feature clusters' perspective. Please see the following comparisons. BSC: bifurcate sample clusters; BFC: bifurcate feature clusters. 
|FLOEVEN||||||||||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
||0.2|||0.5|||0.8|||
||PUR|ACC|FSC|PUR|ACC|FSC|PUR|ACC|FSC|
|BSC|10.42|10.11|5.12|9.63|10.12|5.21|10.52|10.13|5.01|
|BFC|**11.17**|**10.80**|**5.89**|**11.25**|**10.53**|**5.88**|**11.54**|**10.74**|**5.83**|
|**SYNTHREED**||||||||||
|BSC|41.21|40.51|35.02|**42.88**|41.01|34.21|40.21|39.94|32.43|
|BFC|**42.50**|**42.12**|**35.21**|42.83|**42.83**|**35.24**|**41.04**|**41.04**|**34.59**|
|**DEOLOG**||||||||||
|BSC|31.21|21.73|18.12|30.76|21.42|17.14|30.87|**24.62**|19.21|
|BFC|**31.84**|**22.32**|**18.45**|**31.84**|**23.74**|**18.41**|**31.28**|24.25|**19.56**|
|**YALTHREE**||||||||||
|BSC|23.21|21.67|7.62|20.52|19.43|6.11|20.62|19.62|6.53|
|BFC|**24.94**|**23.55**|**7.89**|**21.73**|**20.61**|**6.30**|**21.55**|**20.70**|**6.70**|
|**BGFEA**||||||||||
|BSC|21.32|21.42|19.72|21.32|21.22|19.67|21.42|21.22|19.87|
|BFC|**22.08**|**22.08**|**20.08**|**22.64**|**22.40**|**20.16**|**22.48**|**22.28**|**20.20**|
|**AWTEN**||||||||||
|BSC|19.32|**12.26**|10.64|19.86|11.94|10.22|20.01|11.46|10.31|
|BFC|**20.74**|12.24|**11.01**|**20.17**|**12.01**|**10.97**|**20.18**|**12.06**|**10.94**|

Evidently, BFC is more desirable than BSC in most cases, revealing that the feature clusters' perspective is preferable.

---

Rebuttal Comment 1.1: Comment: Thanks for your response. I have read it. Combining the overall contributions of this paper and the response, I choose to keep my score.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer 7KrE, Thank you sincerely for acknowledging our research contributions! We will strive to refine the manuscript following your very constructive recommendations. Best Wishes, The authors
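As a generic companion to the NMF rationale in A3 above, here is a minimal sketch of the classic Lee–Seung multiplicative-update NMF on a toy matrix. This illustrates only the general technique the rebuttal motivates, not the BACDL objective or its coupled distribution learning; the toy data and function name are illustrative.

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Factor V ~= W @ H with W, H >= 0 via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1          # strictly positive random init
    H = rng.random((r, m)) + 0.1
    tiny = 1e-12                          # guard against division by zero
    for _ in range(iters):
        # Each update keeps the factors non-negative and is non-increasing
        # in the Frobenius objective ||V - W @ H||_F.
        H *= (W.T @ V) / (W.T @ W @ H + tiny)
        W *= (V @ H.T) / (W @ H @ H.T + tiny)
    return W, H

# Toy data: an exactly rank-2 non-negative matrix, so a rank-2 NMF fits closely.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 15))
W, H = nmf(V, r=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the factors stay non-negative, each basis vector in `W` can be read as a contribution to the features, which is the interpretability argument made in A3.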
Differentially Private Federated $k$-Means Clustering with Server-Side Data
Accept (poster)
Summary: This paper proposes a novel fully federated and differentially private k-means clustering algorithm (FedDP-KMeans). This method overcomes the problem that existing differentially private (DP) clustering methods require good clustering initialization by utilizing the data on the server side. Experiments have been conducted under the data-point-level privacy and client-level privacy settings to verify the effectiveness of this method. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes, I have checked. There is a lack of experiments for evaluating the clustering effect using external clustering evaluation indices such as the Fowlkes-Mallows Index (FMI), Adjusted Rand Index (ARI), and internal indices such as the Silhouette Coefficient (SC) and Calinski-Harabasz Index (CH). Ablation experiments are also lacking, as well as some parameter analysis experiments, such as the number of clients \(m\), privacy parameters \(\varepsilon\), etc. Supplementary Material: Yes. All. Relation To Broader Scientific Literature: This paper proposes a novel fully federated and differentially private k-means clustering algorithm (FedDP-KMeans). Most of the existing federated clustering methods do not have such a high level of privacy protection, but only use a federated architecture to ensure that local original information is not directly transmitted. Essential References Not Discussed: The following highly related works have not been cited/discussed: [1] Zhang Y, Chen H, Lin Z, et al. FedAC: An Adaptive Clustered Federated Learning Framework for Heterogeneous Data. arXiv preprint arXiv:2403.16460, 2024. [2] Zhang Y, Zhang Y, Lu Y, et al. Asynchronous Federated Clustering with Unknown Number of Clusters. arXiv preprint arXiv:2412.20341, 2024. [3] Ma Q, Xu Y, Xu H, et al. FedUC: A unified clustering approach for hierarchical federated learning. IEEE Transactions on Mobile Computing, 2024. 
[4] Zhang Y, Chen H, Lin Z, et al. Lcfed: An efficient clustered federated learning framework for heterogeneous data. arXiv preprint arXiv:2501.01850, 2025. Other Strengths And Weaknesses: Strengths: This paper proposes a novel fully federated and differentially private k-means clustering algorithm (FedDP-KMeans). This method overcomes the problem that existing differentially private (DP) clustering methods require good clustering initialization by utilizing the data on the server side. Experiments have been conducted under the data-point-level privacy and client-level privacy settings to verify the effectiveness of this method. Weaknesses: 1. This paper lacks a summary of the contributions. The authors should clearly list the core contribution points of this paper so that readers can clearly grasp the innovative value of the research. 2. The main contribution is the proposal of a new initialization method. However, a large part of the introduction section is spent on introducing the development and limitations of privacy techniques and federated learning. There is no detailed elaboration on the existing problems of the current initialization methods and the challenges they bring. Nor does it explain how this initialization method can solve the above problems. Other Comments Or Suggestions: I do not have. Questions For Authors: Since the method proposed in this paper is for clustering private data, how do the authors solve the problem of the unknown number of clusters \(k\) and the problem that k-means performs poorly on non-convex datasets? And also see the weaknesses. If the concerns can be well addressed, I will consider raise the score. Ethical Review Concerns: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review and helpful comments. We have run additional experiments which can be found here https://anonymous.4open.science/r/FedDP-KMeans-Rebuttal-Figures-5B34/Rebuttal_Figures.pdf. We will reference this pdf when we address your specific concerns below.

> lack of experiments for evaluating the clustering effect using external clustering evaluation indices

We have run experiments evaluating with the suggested Fowlkes-Mallows and Adjusted Rand Indices, see Figures 6-9 of the additional experiments. The results are in line with those reported with k-means cost. We would like to add that we believe that the metric we report, k-means cost, is fair and informative. All algorithms we compare are variants of k-means, so they all have the goal of minimizing the k-means objective.

> Ablation experiments are also lacking, as well as some parameter analysis experiments, such as the number of clients (m), privacy parameters (\varepsilon)

Thank you for the suggestion; we have run additional ablation experiments testing how our method performs when the server data is missing some of the clusters. See Section 1 of the additional experiments. The performance of all methods that use server data deteriorates modestly as the number of missing clusters grows. In all settings FedDP-KMeans is still the best performing method. Regarding the number of clients, our current experiments already cover a wide range of values of m. We have experiments with $m \in\lbrace 51,100,1000,2000,2720,5000,9237,10394,23266\rbrace$. Moreover, for the Gaussian mixture data we do in fact keep the distribution the same and vary $m \in\lbrace 1000, 2000, 5000\rbrace$ and discuss how the results change (Results paragraph on page 7). We hope this addresses your concerns. Regarding analysis of the privacy parameters, we provide a detailed analysis of these in Appendix G.4 and our main experiments (e.g. 
Figures 1 and 2) analyze the changes in performance as privacy parameters vary. Do these points answer your specific concern?

> highly related works have not been cited/discussed:

Thank you for the references. It seems that [1], [3] and [4] are solving a related task of Clustered FL, where the goal is to cluster clients for better model training. [2] does propose a method for k-means clustering of the data, though their focus is on asynchronous and heterogeneous clients rather than privacy. We would be happy to include a discussion of these additional works in our Related Works section.

> lacks a summary of the contributions.

We will include a clearer summary of the main contributions of our work:
- We propose a novel differentially private and federated initialization method that leverages small, out-of-distribution server-side data to generate high-quality initializations for federated $k$-means.
- We introduce the first fully federated and differentially private $k$-means algorithm by combining this initialization with a simple DP federated Lloyd’s variant.
- We provide theoretical guarantees showing exponential convergence to ground truth clusters under standard assumptions.
- We conduct extensive empirical evaluations, demonstrating strong performance across data-point and user-level privacy settings on both synthetic and real federated data.

> About the level of detail in the introduction:

We aimed to make the work accessible to readers who are not experts on privacy and FL, as these aspects are what make the problem of clustering much harder and unsolved. We’ll be happy to expand the discussion on initialization as well.

> how do the authors solve the problem of the unknown number of clusters (k)

Thank you for the question. In practice, k-means is often used with a value of k determined by external factors, such as computational or memory demands [Jain, 2010]. 
If k is meant to be chosen based on the data itself, existing methods can be incorporated into our setting quite simply, by using the method on the weighted and projected server dataset, $\Pi Q$ with weights $\hat{w}_q(\Pi P)$. This dataset serves as a proxy for the client data and we can operate on it without incurring any additional privacy costs. We illustrate this using the popular elbow method [Thorndike, 1953]. Concretely, we run lines 1-16 of Algorithm 1 using some large value $k'$, then we run line 17 for $k=1, 2, 3, \dots$ and plot the $k$-means costs of the resulting clusterings. This is shown in Figure 10 of the additional experiments. Clearly, the elbow of the curve occurs at $k=10$ which is indeed the number of clusters in the true data distribution (we used the same Gaussian mixture dataset as in the original experiments).

> the problem that k-means performs poorly on non-convex datasets?

Respectfully, we feel that this is not really a fair criticism of our method. Our contribution is a method for making k-means differentially private in a federated setting. It is not about trying to overcome principled shortcomings of the k-means objective.
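The elbow procedure described in this rebuttal can be illustrated generically. The toy sketch below uses a plain (non-private, non-federated) Lloyd's algorithm with farthest-point seeding in place of the paper's Algorithm 1 — so it only mirrors the "compute the cost for $k=1,2,3,\dots$ and look for the bend" step, and all data and names are illustrative:

```python
import numpy as np

def kmeans_cost(X, k, iters=50, seed=0):
    """Lloyd's algorithm with farthest-point seeding; returns the k-means cost."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):                 # greedy farthest-point seeding
        d2 = ((X[:, None, :] - np.asarray(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[d2.argmax()])
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):                 # standard Lloyd's iterations
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
    return float(((X - centers[labels]) ** 2).sum())

# Toy data: three well-separated blobs, so the cost curve should bend at k = 3.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2))
               for c in ([0.0, 0.0], [5.0, 5.0], [10.0, 0.0])])
costs = {k: kmeans_cost(X, k) for k in range(1, 7)}
```

The cost drops sharply until the true number of clusters is reached and flattens afterwards; the "elbow" of `costs` identifies $k$ without any extra privacy expenditure, since here it would run on the server-side proxy data.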
Summary: This submission proposes an $(\epsilon, \delta)$-differentially private algorithm for aligning a server-side k-means clustering with client-side data by private, federated computation. In particular, the authors propose an initialization procedure FedDP-Init, where clients compute an SVD on their data in a federated way, which the server uses to compute a projection matrix and send to the clients alongside, intuitively, a projected k-means coreset of public server data. Clients assign weights to the conceptual coreset points in a private fashion and return noisy means that the server can use to construct centers in the original space. Finally, a federated DP Lloyd's algorithm is used to improve the solution. The authors analyze their algorithm theoretically for well-separated Gaussian mixtures under the assumption that the server has at least one good sample from each component. Roughly speaking, they show that for each mean of a component, there is a center in the solution that has distance at most $O(\sqrt{n \sigma^2} + \log(n)/(\epsilon n))$. For experiments, the authors consider point-level and client-level privacy on synthetic and real-world datasets. They benchmark FedDP-Init against two methods that use only server-side data, and an almost data-independent sphere packing, followed by FedDP-Lloyd. For small datasets with a few thousands of protected entities, FedDP-Init outperforms the other solutions at around $\epsilon \approx 1$, while larger populations give the best solutions at $\epsilon < 0.5$ already.

## update after rebuttal

The PC asks to add this section unconditionally. I didn't ask any questions in the rebuttal, so nothing has changed. Claims And Evidence: The claims are backed by theoretical results for a simple setting and experiments on synthetic and real-world data. Methods And Evaluation Criteria: Experiments plus a theoretical foundation are a valid approach. 
Theoretical Claims: The proofs in the main part seem plausible. Experimental Designs Or Analyses: The number of real-world data sets is a bit limited. Apart from the scale of the experiments, the approach is valid. Supplementary Material: No. Relation To Broader Scientific Literature: - Essential References Not Discussed: - Other Strengths And Weaknesses: In the non-private setting, it has been proven that a good initialization is key. The k-means++ initialization alone provides $O(\log k)$ guarantees. Leveraging client data in addition to public server data to overcome misaligned distributions in a federated private setting is therefore a natural approach. Approximation and privacy are proven if the input comes from some Gaussian mixtures. The theoretical guarantees of the proposed approach fall short when some clusters are not represented in the server data at all even when they make up the majority of the client data, though. The experiments cover cases with unrelated noise, but do not explicitly cover the case that some regions of the input are not covered by the server data at all. Overall, the theoretical results and the experiments suggest a significant value of center initialization with client data for either somewhat larger values of $\epsilon$, or datasets with thousands or more of protected entities. However, the algorithm should be proven to be private on all input data sets, albeit without approximation guarantees for some inputs. Otherwise, privacy is ultimately not guaranteed. Other Comments Or Suggestions: None. Questions For Authors: 1. Could you confirm that the privacy is not proven if the input is not from a Gaussian mixture? Code Of Conduct: Affirmed. Overall Recommendation: 4
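The $O(\log k)$ guarantee the review mentions comes from the k-means++ seeding rule, i.e., $D^2$ sampling (Arthur & Vassilvitskii, 2007). A minimal non-private sketch of that seeding step follows; the toy blobs and the function name are illustrative only:

```python
import numpy as np

def kmeanspp_seed(X, k, rng):
    """k-means++ seeding: pick each new center with prob. proportional to D^2."""
    centers = [X[rng.integers(len(X))]]   # first center uniform at random
    for _ in range(k - 1):
        # Squared distance of every point to its nearest chosen center.
        d2 = ((X[:, None, :] - np.asarray(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.asarray(centers)

rng = np.random.default_rng(0)
# Three tight, far-apart blobs: D^2 sampling almost surely seeds one per blob.
X = np.vstack([rng.normal(c, 0.05, size=(40, 2))
               for c in ([0.0, 0.0], [20.0, 0.0], [0.0, 20.0])])
C = kmeanspp_seed(X, 3, rng)
```

On well-separated data this seeding almost always places one center per cluster, which is exactly the "good initialization" whose private, federated analogue the paper's FedDP-Init is meant to supply.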
Rebuttal 1: Rebuttal: Thank you for your time and your review. We address your questions and comments below.

> Could you confirm that the privacy is not proven if the input is not from a Gaussian mixture?

This appears to be a misunderstanding that we find important to correct. Our algorithms (1 and 2) are always private, not just when the input comes from a Gaussian mixture. This is stated in Section 3.1, but we’ll add an explicit theorem to make this clearer. Technically, the $(\varepsilon, \delta)$-DP guarantee comes from the appropriately scaled noise that is added to the client statistics (e.g. lines 6, 15, 23 of Algorithm 1). Nowhere in the analysis of how noise is added to enforce privacy (Section 3.1) do we require that the inputs come from a Gaussian mixture. Indeed, our experiments are always run with $(\varepsilon, \delta)$ differential privacy, not just in the Gaussian mixture setting. The Gaussian mixture assumption is only required to theoretically prove accuracy and convergence. We also point out that, to define “accuracy” and “convergence”, assumptions such as the Gaussian mixture model are required.

> The theoretical guarantees of the proposed approach fall short when some clusters are not represented in the server data at all even when they make up the majority of the client data, though. The experiments cover cases with unrelated noise, but do not explicitly cover the case that some regions of the input are not covered by the server data at all.

For our theoretical guarantees you are of course correct. It is, however, not a requirement for the algorithm to work in practice. To test this, we have now run additional experiments in exactly this setting, where certain clusters are missing from the server dataset; the results can be found in Section 1 here: https://anonymous.4open.science/r/FedDP-KMeans-Rebuttal-Figures-5B34/Rebuttal_Figures.pdf. 
As seen in Figure 1, the performance of FedDP-KMeans deteriorates modestly as the number of clusters missing from the server dataset increases. Figures 2-5 show that this also occurs in the other baselines that make use of the server data, and that FedDP-KMeans is still the best performing method in this scenario. Thank you for the suggestion to consider this scenario in our experimental evaluation.

---

Rebuttal Comment 1.1: Comment: Thank you for clarifying that the DP proof does not depend on the input being a Gaussian mixture, and for the additional experiments! I'd also suggest changing the section title "B.2. Differential Privacy for Gaussian Mixtures" and the phrasing of Theorem 8.

---

Reply to Comment 1.1.1: Comment: Thank you for the additional helpful feedback!
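The separation drawn in this exchange — privacy from calibrated noise alone, distributional assumptions only for utility — is the standard pattern of the Gaussian mechanism. A generic, simplified sketch follows; the clipping step and the textbook noise scale $\sigma = \Delta\sqrt{2\ln(1.25/\delta)}/\varepsilon$ (valid for $\varepsilon \le 1$) are classic choices for illustration, not the paper's exact calibration:

```python
import numpy as np

def dp_mean(points, clip_norm, eps, delta, rng):
    """Differentially private mean of row vectors via the Gaussian mechanism.

    Privacy relies only on clipping (which bounds the L2 sensitivity of the
    sum by clip_norm) plus Gaussian noise -- no assumption on the data.
    """
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    clipped = points * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma, size=points.shape[1])
    return noisy_sum / len(points)

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.0, size=(10_000, 2))     # stand-in "client statistics"
est = dp_mean(data, clip_norm=10.0, eps=1.0, delta=1e-5, rng=rng)
```

Whatever distribution `data` comes from, the released `est` satisfies $(\varepsilon,\delta)$-DP; a Gaussian (or mixture) assumption would only enter an accuracy analysis of how close `est` is to the true mean — mirroring the distinction the authors make.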
Summary: To address the need to conduct clustering on distributed and private data, the authors propose a private and distributed clustering framework. In detail, since the performance of clustering relies heavily on the initialization of the cluster centers, the authors propose ‘FedDP-Init’, which leverages a small-scale server-side dataset and privatized client data statistics to provide a better initialization. The authors provide both theoretical and empirical evidence that their method works better than other baselines. Claims And Evidence: I think the claims are well supported. Methods And Evaluation Criteria: Considering methods, the authors propose a better initialization for DP federated clustering algorithms, which makes sense since initialization is truly critical in clustering. Considering evaluation, they conduct experiments on both synthetic and real-world datasets, which is relatively comprehensive. Theoretical Claims: The convergence guarantee seems sound, as does the privacy budget calculation. However, I did not check every line of the authors’ proofs. Experimental Designs Or Analyses: The experimental setting is rational, and the results seem sound. Supplementary Material: N/A Relation To Broader Scientific Literature: Generally, there are many works discussing the potential of DP federated clustering, despite its low accuracy due to excessive DP noise. This work provides an attempt at achieving a better privacy-utility trade-off directly from the initialization. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I would say the writing of this paper needs to be improved. Generally, I cannot grasp a clear Background-Motivation-Method-Contribution structure from the introduction. (‘Background’ here does not refer to Section 2; I think Section 2 reads more like a Preliminary.) 
Other Comments Or Suggestions: N/A Questions For Authors: The main motivation of this work is the view that ‘initialization is important for clustering’, and that is why the authors’ method achieves the clearly better experimental results reported in Figures 1 and 2. I am curious whether there are other clustering methods that are less sensitive to initialization, and whether your plugin would improve such initialization-insensitive methods more marginally than it improves k-means. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time and for your review. We discuss your comments below. > I would say the writing of this paper needs to be improved. Generally, I cannot grasp a clear Background-Motivation-Method-Contribution structure from the introduction. (‘Background’ here does not refer to Section 2; I think Section 2 is more like a preliminaries section.) Thank you for the feedback. We can rename Section 2 as Preliminaries and rewrite the introduction so that the motivation is clearer. We will also include a more explicit summary of our contributions as follows: - We propose a novel differentially private and federated initialization method that leverages small, out-of-distribution server-side data to generate high-quality initializations for federated k-means. - We introduce the first fully federated and differentially private k-means algorithm by combining this initialization with a simple DP federated Lloyd’s variant. - We provide theoretical guarantees showing exponential convergence to ground truth clusters under standard assumptions. - We conduct extensive empirical evaluations, demonstrating strong performance across data-point and user-level privacy settings on both synthetic and real federated data. > I am curious whether there are other clustering methods that are less sensitive to initialization, and whether your plugin would improve such initialization-insensitive methods more marginally than it improves k-means. Of course. For example, there are methods that do not need any initialization, such as single-linkage clustering (Gower & Ross, 1969), or that converge to the same solution regardless of the initialization, such as convex clustering (Pelckmans et al., 2005). But k-means is used much more in practice, because it has better properties.
Summary: This paper proposes a k-means clustering algorithm in the federated learning model under differential privacy. The chief difficulty with such a setup is the seeding algorithm, since many non-private algorithms would be too slow in the federated learning model, and possibly not robust to the noise added for privacy. The primary contribution is a seeding algorithm which works by using PCA to project into a lower dimension, then asking clients to add lower-dimensional noise to their projected vectors. Then, private Lloyd's algorithm can be run as normal. The utility of the approach is demonstrated both experimentally and theoretically. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not closely check the proofs for correctness. Experimental Designs Or Analyses: I did not closely check the experiments for correctness. Supplementary Material: Not applicable. Relation To Broader Scientific Literature: This paper fits into private federated learning. Federated learning is an enormous field of research, and many works study how to apply privacy, which is usually applied the moment the data leaves the user's phone. Essential References Not Discussed: I cannot think of essential references not discussed. Other Strengths And Weaknesses: The algorithm is versatile, as it fits into common federated learning setups, including those involving secure aggregation. It also can provide flexible privacy guarantees, including user-level and item-level differential privacy. The experiments are designed well, and there is a study on how to choose epsilon and other hyperparameters, which adds to the practical demonstration of the algorithm. One potential drawback is the requirement that the server have a representative datapoint for each cluster on hand. This assumption may be too strong, since often the purpose of running k-means clustering is to identify clusters. Other Comments Or Suggestions: No further comments.
Questions For Authors: I don't fully understand the role of the server's private data Q. What would happen if the server simply took in each user's projected, noisy points and attempted to form seeds based on this? Can the assumption that Q need to contain a point from each cluster beforehand be relaxed at all? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time and for your review. We discuss your feedback below. > One potential drawback is in the requirement that the server have a representative datapoint for each cluster on hand. This assumption may be too strong, since often the purpose of running k-means clustering is to identify clusters. We would like to clarify one point: the server holds data points, but does not have any a priori information on the clustering of those points. In addition, the requirement that there is at least one point per cluster is only required for the theoretical contributions (where we want to identify ground truth GMM components). In practice, the algorithm performs well even without this assumption; see our answer below. > What would happen if the server simply took in each user's projected, noisy points and attempted to form seeds based on this? The issue with this is that directly sharing the projected user data points themselves with the server, while still having meaningful DP guarantees, would require so much noise as to destroy any signal in the points themselves. To give a concrete illustration, for the Gaussian mixtures data in the easier data-point level privacy setting with $\varepsilon = 2$, we would be forced to add iid zero-mean Gaussian noise with standard deviation $\approx35$ to each dimension of each projected user point before sending it to the server. So the expected norm of the noise vector to be added to each point is roughly $35 \sqrt{10} \approx 111$. For reference, the norms of the projected data points are mostly in the range (5, 7). > Can the assumption that Q need to contain a point from each cluster beforehand be relaxed at all? The requirement that the server holds a point from each cluster is only needed to prove our theoretical guarantees on clustering accuracy and convergence in the Gaussian mixtures setting. It is not strictly required for the algorithm to run or work in practice.
We have now run additional experiments to test the method’s performance in the scenario where some clusters are completely missing in the server data, Q. The results can be found in Section 1 here: https://anonymous.4open.science/r/FedDP-KMeans-Rebuttal-Figures-5B34/Rebuttal_Figures.pdf. As seen in Figure 1, the performance of FedDP-KMeans deteriorates modestly as the number of clusters missing from the server dataset increases. Figures 2-5 show that this also occurs in the other baselines that make use of the server data and that FedDP-KMeans is still the best-performing method in this scenario. --- Rebuttal Comment 1.1: Comment: The new plots look interesting! It looks like the clustering algorithm succeeds in finding the missing clusters when epsilon is high enough? Can you explain intuitively what is happening in the algorithm to cause this behavior? --- Reply to Comment 1.1.1: Comment: Indeed, your observation is correct: for sufficiently large $\varepsilon$, we can still match the performance of the optimal baseline. Intuitively, this likely occurs because some server data points—either from the OOD uniform distribution or outliers from the present Gaussians—are “close enough” (although not generated from the missing Gaussian) in the projected space to receive non-negligible weights from client data associated with the missing Gaussians. These points then influence the server clustering, resulting in a projected center that somewhat approximates the true projected mean. This center helps locate the missing Gaussian in the client data, yielding a good initialization by the end of step 3. A larger $\varepsilon$ is required because the “close enough” points provide a weaker signal than true samples from the missing Gaussian. Since these points are fewer and further away, preserving their influence requires more accurate estimation of projection directions and server point weights—hence the need for a higher privacy budget.
As a side note, this behavior may not generalize across all server data distributions. For example, if the server data consisted solely of very well-separated Gaussians, then it might be that there are no points “close enough” to the missing Gaussians. In that case, the projected server data would only reflect the present clusters and our initialization would likely fail to locate the missing Gaussians. As a result, FedDP-Lloyds might require more iterations than a typical privacy budget permits. However, such a scenario—completely non-overlapping clusters on the server—seems unrealistic in most practical settings.
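The back-of-the-envelope noise calculation in the rebuttal above (i.i.d. Gaussian noise with standard deviation σ = 35 in d = 10 projected dimensions, versus signal norms of roughly 5-7) can be checked numerically. This is an illustrative sketch added for this archive, not the authors' code; the only inputs taken from the rebuttal are σ = 35 and d = 10, and the closed form used is the standard expectation of a scaled chi distribution.

```python
import math
import random

def expected_gaussian_norm(sigma, d):
    """Exact E[||z||] for z ~ N(0, sigma^2 I_d): sigma * sqrt(2) * Gamma((d+1)/2) / Gamma(d/2)."""
    return sigma * math.sqrt(2.0) * math.gamma((d + 1) / 2) / math.gamma(d / 2)

def simulated_gaussian_norm(sigma, d, trials=20000, seed=0):
    """Monte Carlo estimate of the same quantity, as a sanity check."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(d)))
    return total / trials

# Rebuttal setting: sigma = 35, d = 10 gives an expected noise norm of roughly 108,
# dwarfing projected data points whose norms lie mostly in (5, 7).
print(expected_gaussian_norm(35.0, 10))
```

The noise norm is thus more than an order of magnitude larger than the signal norms quoted in the rebuttal, which is the point being made.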
Sidechain conditioning and modeling for full-atom protein sequence design with FAMPNN
Accept (poster)
Summary: The paper presents a new model FAMPNN for fixed-backbone protein sequence design that models both sequence and sidechains. FAMPNN addresses the limitations of existing methods that rely solely on backbone and sequence identity. The authors demonstrate that FAMPNN improves sequence recovery, achieves state-of-the-art sidechain packing, and can be used for zero-shot prediction of binding and stability. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: no theoretical claims Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: By integrating sequence and sidechain modeling in a single model, the paper extends prior work (e.g. ProteinMPNN, which focuses on predicting protein sequences based on backbone structure without sidechain modelling) to improve protein design. Essential References Not Discussed: the authors could expand more on recent all-atom approaches for protein generation (e.g. Protpardelle, ProteinGenerator, etc.) and co-folding (AF3). Other Strengths And Weaknesses: Strengths: * The paper is well written and handles an interesting and challenging problem of joint sequence and sidechain conformation prediction. The proposed model is novel and well motivated. * The authors provide extensive experimental results, demonstrating the effectiveness of FAMPNN across multiple benchmarks (zero-shot prediction of protein fitness, side chain packing, sequence design). The inclusion of the sidechain confidence module adds interpretability to the model's predictions. Weakness: * The authors could include a comparison of the sampling time of FAMPNN against backbone-only models like ProteinMPNN, to measure the computational cost of predicting sidechains. Other Comments Or Suggestions: / Questions For Authors: * What is the standard deviation in the predictions? Given that the process is the outcome of a diffusion process, there might be some variance in the predictions.
* Section 5.3: is it possible to compare the results in Figures 5a and 5c with a supervised baseline, to provide a reference point? * Can the authors comment on why, in Table 5, ProteinMPNN works better at higher lengths (400, 500)? Is this due to error in the side-chain coordinate predictions? * More generally, why is there a need to predict the sequence in the model, given that the full-atom representation implicitly contains information about the sequence? * Could the authors comment on the application of FAMPNN to multi-chain sequence design problems? I gather from Section 4.3 that the model can be applied to complexes. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and questions. Below, we address these questions in detail: > FAMPNN sampling time comparisons Please refer to the last section of the response to reviewer GPV6 > Standard deviation in the predictions Please refer to the first section of the response to reviewer MU8A > Supervised baselines for Figure 5 This is an excellent suggestion: for **5c**, there are baselines available with formal test splits for stability datasets on which we can evaluate FAMPNN to compare to supervised methods. Supervised baselines and formal test splits do not exist for **5a**, but we can run our own baseline (e.g. a one-hot encoding of the sequence with ML-based regressors). If accepted, we will include these additional results. > ProteinMPNN self-consistency on longer lengths This is an interesting hypothesis that we also explored. We believe that side-chain prediction error may indeed be compounding for longer-length proteins. At each sampling step, we condition only on previously-predicted side chains with predicted error < 0.3Å, and we notice a larger performance degradation without this filtering. > Why explicitly predict sequence? We agree that, given a full-atom representation, it is trivial to predict the sequence as it is already defined by the atom composition. In early versions of the model, we experimented with removing explicit sequence prediction and calling the sequence from the implicit full-atom predictions, but we found this not to work as well in practice. We believe that the cross-entropy loss objective is important for the model to learn to predict high-quality sequences. Additionally, thanks to careful tuning of our masking schedule and to occasionally dropping all side-chain coordinates completely during training, FAMPNN is able to predict sequences with no full-atom representation, allowing its use in situations such as sequence design for de novo backbones.
> Multi-chain sequence design with FAMPNN Please refer to the rebuttal to reviewer GPV6 for details on the training of the PDB model and the respective dataset. Regarding multi-chain design, we showcase the capability of the PDB model for multi-chain design in Figure **6b**, where we demonstrate the utility of full-atom context for higher sequence recovery in protein-protein interfaces. Here, FAMPNN achieves higher sequence recovery, and higher sequence-recovery scaling with context, than LigandMPNN. Additionally, we note FAMPNN performs state-of-the-art in binding-affinity fitness evaluation in large multi-chain protein-protein complexes of greater than 3,000 residues (SKEMPIv2 dataset), which are shown in Figures **5a** and **5c**. Together, these results demonstrate the robust utility of FAMPNN in multi-chain design.
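The confidence-filtered conditioning described in this rebuttal (conditioning later sampling steps only on side chains with predicted error below 0.3 Å) can be sketched as follows. This is a toy editorial illustration, not FAMPNN's actual API: the `toy_model` stand-in, its interface, and its deterministic pSCE values (0.0 Å at even positions, 0.5 Å at odd ones) are all hypothetical.

```python
import random

def toy_model(seq):
    """Hypothetical stand-in for the design network. For each still-masked
    position it returns a residue guess, sidechain coordinates, and a
    predicted sidechain error (pSCE, in Angstroms)."""
    return {i: ("A", (0.0, 0.0, 0.0), 0.5 if i % 2 else 0.0)
            for i, aa in enumerate(seq) if aa is None}

def iterative_design(n_res, n_steps=4, psce_cutoff=0.3, seed=0):
    rng = random.Random(seed)
    seq = [None] * n_res          # all positions start masked
    sidechains = [None] * n_res   # sidechain conditioning context
    order = list(range(n_res))
    rng.shuffle(order)            # random unmasking order
    per_step = -(-n_res // n_steps)  # ceiling division so every position is covered
    for step in range(n_steps):
        preds = toy_model(seq)
        for i in order[step * per_step:(step + 1) * per_step]:
            aa, coords, psce = preds[i]
            seq[i] = aa
            # Later steps condition only on confidently packed sidechains.
            if psce < psce_cutoff:
                sidechains[i] = coords
    return seq, sidechains
```

With the cutoff in place, low-confidence packings never enter the conditioning context, matching the rebuttal's observation that removing this filter degrades performance on longer proteins.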
Summary: This paper introduces FAMPNN (Full-Atom MPNN), a model that explicitly incorporates sidechain conformation modeling for fixed-backbone protein sequence design. While existing deep learning methods implicitly reason about sidechain interactions based solely on backbone geometry and amino acid sequence, FAMPNN jointly models both sequence identity (discrete) and sidechain conformation (continuous) for each residue, using a combined categorical cross-entropy and diffusion loss objective, respectively. Built on a hybrid MPNN-GVP architecture that conditions on both backbone and available sidechain information, FAMPNN employs an iterative sampling strategy to efficiently generate samples from the joint distribution of sequence and structure. The model achieves competitive sequence recovery on CATH 4.2 and strong self-consistency results compared to state-of-the-art methods like ProteinMPNN when evaluated on de novo backbones. FAMPNN also demonstrates superior sidechain packing accuracy on the CASP13/14/15 datasets and provides per-atom predicted sidechain error estimates that strongly rank-correlate with true errors. Additionally, the authors show FAMPNN's effectiveness for unsupervised fitness prediction on experimental datasets for antibody-antigen binding, protein-protein binding affinity, and protein stability. Through comprehensive analysis, the authors demonstrate that increasing sidechain context (both sequence identity and conformation) leads to better model performance. Their ablation studies reveal that a similarly sized model without sidechain context performs worse for protein-protein binding affinity prediction, though it closely matches FAMPNN's performance for antibody-antigen binding affinity and protein stability prediction.
Claims And Evidence: - **Claim**: pSCE serves as an effective confidence metric for sidechain packing While this claim is supported by high Spearman correlation in Figure 3(a), there is miscalibration between predicted and true sidechain error. In the per-residue setting, the maximum pSCE is around 2.0 in contrast to the maximum true error of 4.0. This limits its utility as an absolute error predictor, though it remains useful for ranking/relative confidence. - **Claim**: FAMPNN outperforms unsupervised methods for antibody-antigen binding affinity and protein stability This claim is supported by Figures 4(a) and 4(c) when FAMPNN is compared to other structure-conditioned models. The authors should also provide a performance comparison with leading sequence-only foundation models or qualify in text that the comparison is limited to structure-conditioned models. It's also unclear why the comparison doesn't include other state-of-the-art methods like ESM3. - **Claim**: Increasing context leads to better sidechain packing accuracy This is more or less true, but the relationship between context and packing accuracy is not monotonic, strictly speaking. There is a nominal increase in RMSD around 40% context for partial sidechain context and a similar increase around 50% context for partial sequence context, which should be clarified or explained in the text. Methods And Evaluation Criteria: **Strengths:** - The formulation of the training objective considering both discrete sequence tokens and continuous sidechain coordinates is appropriate for the problem at hand. - The proposed method re-uses existing components (MPNN, GVP) that have been proven to be effective. - The selected datasets and metrics are well-suited to the problem being addressed. In particular, evaluating sequence recovery and self-consistency on CATH 4.2 is a standard practice in the field.
- A diverse set of datasets covering antibody-antigen binding, protein-protein binding affinity, and protein stability is used for zero-shot evaluation of fitness prediction. **Weaknesses:** - Sequence recovery aggregates binary decisions for correct sequence identity without taking into account the precise confidence of the model for the correct residue. Perplexity addresses this shortcoming; however, it is not included in the evaluation. - ProteinGym (https://proteingym.org/) is a standard benchmark in the field for zero-shot fitness prediction that is not included in the evaluation, making it difficult to compare performance with other methods not part of the authors' evaluation. Theoretical Claims: No theoretical claims are made in this paper. Experimental Designs Or Analyses: **Strengths:** - Comprehensive benchmarking and analysis of sequence recovery, self-consistency, sidechain packing, zero-shot fitness prediction, and the impact of available sidechain context on performance. - Selection of a wide range of appropriate baseline models for comparison. - A rigorous ablation study is performed to investigate the primary claim of the paper about the utility of sidechain context. - The authors ensure there is no data leakage in the evaluation by holding out validation/test data from training and use updated splits for datasets like SKEMPI to ensure fair comparison. **Weaknesses:** - The authors do not provide interval estimates for performance metrics, which makes it harder to evaluate significance. - The selection of benchmark models varies across evaluations without clear justification. For instance, ESM3 is part of the self-consistency evaluation but not the zero-shot fitness prediction evaluation. - A notable exclusion from the paper is the Chroma generative model (https://www.nature.com/articles/s41586-023-06728-8), which also addresses the problem of incorporating sidechain conformation into fixed-backbone sequence design using similar architectural components.
It would have been nice to see performance comparison with this model. - For antibody-antigen binding datasets, the authors note they used homologous structures when exact experimental structures weren't available (e.g., using CR9114 bound to H5 to predict binding to H1 and H3 subtypes). This potential mismatch might introduce errors in evaluation. Supplementary Material: All parts of the supplementary material were reviewed and found to be consistent with the main paper. Relation To Broader Scientific Literature: The primary contribution of FAMPNN is the explicit modeling of sidechain conformation during fixed-backbone sequence design. This is a known limitation of existing methods widely used in the field such as ESM-IF1, ProteinMPNN, etc. The authors show that addressing this limitation leads to improved performance along various axes. However, the novelty of the work is unclear in the context of Chroma (https://www.nature.com/articles/s41586-023-06728-8) which also addresses the problem of incorporating sidechain conformation into fixed-backbone sequence design and has overlapping ideas. Essential References Not Discussed: The most notable exclusion from the paper is the Chroma generative model (https://www.nature.com/articles/s41586-023-06728-8; Nature 2023) that addresses the problem of incorporating sidechain conformation into fixed-backbone sequence design amongst other things and has overlapping ideas. Other Strengths And Weaknesses: The paper is well written and easy to follow. The authors provide the key implementation details, descriptions of datasets along with preprocessing details, and describe the evaluation criteria in detail. Other Comments Or Suggestions: Typos: - Missing year of publication for Akpinaroglu et al. in the citation in text. (e.g. page 1, first column, line 49) - The citation for RFdiffusion is missing in text. - Page 7, second column, lines 341-362: The references to Figure 5 sub-figures in the paragraph appear incorrect. 
- Page 23, line 1251: "provided the the" Suggestions: - Please mention the number of unmasking steps used for generating Figure 2a. - Please consider remaking Figure 3a to include Pearson correlation as well as marginal distribution of true and predicted sidechain errors. - Please mention the size of different datasets used for evaluation in section 5.3 in the main text and the kind of structure (predicted, crystal, etc.) available for each. - Please consider remaking Figure 6a to share the y-axis range for the two subplots. Questions For Authors: 1. Could the authors clarify the novelty of their work in the context of Chroma (https://www.nature.com/articles/s41586-023-06728-8; Nature 2023) and provide a more detailed performance comparison? This would help in understanding the unique contributions of FAMPNN and allow for a more accurate assessment of the paper's impact. Code Of Conduct: Affirmed. Overall Recommendation: 3
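The reviewer's point above that pSCE remains useful for ranking despite absolute-scale miscalibration follows from the fact that Spearman correlation depends only on ranks, so any strictly monotone compression of the error scale leaves it unchanged. A minimal tie-free sketch (editorial illustration, not the paper's evaluation code; the example error values are invented):

```python
def spearman(x, y):
    """Spearman rank correlation for tie-free inputs: rank both vectors,
    then compute the Pearson correlation of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

true_err = [0.1, 0.4, 0.9, 1.6, 4.0]   # hypothetical true errors, up to ~4 A
pred_err = [e / 2 for e in true_err]   # systematically compressed predictions
# Perfect rank correlation despite a 2x miscalibration in absolute scale.
print(spearman(true_err, pred_err))  # 1.0
```

This is why a high Spearman correlation supports the ranking claim while saying nothing about calibration, which is exactly the distinction the review draws.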
Rebuttal 1: Rebuttal: We appreciate the reviewer’s comprehensive evaluation and suggestions to improve clarity and benchmark comparisons in our paper. Below, we provide detailed responses and clarifications: > Inclusion of perplexity in addition to sequence recovery for evaluation This is a great suggestion. With 1-step recovery, FAMPNN achieves a perplexity of 4.99, and we will update this with the perplexities of all methods in Table 1 in a revised version. > Evaluation on ProteinGym We agree with the reviewer’s point and will include ProteinGym in our evaluations if accepted. We would also like to note that many of the assays included in our fitness evaluations make up a large portion (but of course not all) of the ProteinGym data (e.g. Megascale, SKEMPIv2). > Interval estimates We thank the reviewer for pointing this out and believe it would be a valuable addition. We intend to add interval estimates for performance metrics in the final version if accepted. > Expected monotonic relationship between context and packing accuracy We believe that the slight deviations from this trend are due to random chance, and we plan to update this plot with error bars as well as clarify this claim in the text. > Selection of benchmark models In general, we chose to benchmark models that are commonly used for their respective tasks. We agree that ESM3 should be included in the zero-shot fitness prediction evaluations, and we will include this in the final version if accepted. > Novelty of our work in the context of Chroma As correctly pointed out, Chroma does in fact have the capability to return an all-atom structure given an input backbone via its ChromaDesign module. However, the sequence design module does not have the ability to encode a full-atom protein structure and therefore cannot design sequences conditioned on neighboring sidechain context.
As a consequence, this means that **Chroma's sequence predictions are unable to leverage either experimentally determined sidechain conformations or previously predicted sidechains**. By contrast, FAMPNN **explicitly conditions on sidechain context**, which allows users to provide known sidechain conformations as conditioning input during sequence generation. This allows for more fine-grained control. Despite this, we very much agree it would be good to discuss Chroma and benchmark ChromaDesign along with other methods. We will provide updates in a revised version and have included an extension to Table 5 below:

| Method | Length 100 | Length 200 | Length 300 | Length 400 | Length 500 |
|---------------|----------------|----------------|----------------|----------------|----------------|
| | scTM \| pLDDT | scTM \| pLDDT | scTM \| pLDDT | scTM \| pLDDT | scTM \| pLDDT |
| FAMPNN (0.3Å) | 0.968 \| 93.00 | 0.967 \| 91.27 | 0.938 \| 83.45 | 0.760 \| 74.73 | 0.545 \| 61.73 |
| FAMPNN (0.0Å) | 0.896 \| 88.99 | 0.890 \| 81.87 | 0.703 \| 67.86 | 0.602 \| 63.80 | 0.471 \| 55.49 |
| Chroma | 0.940 \| 90.94 | 0.949 \| 88.04 | 0.946 \| 86.73 | 0.914 \| 80.02 | 0.751 \| 71.50 |

We also provide an extension to Table 2 for sidechain packing evaluation below, noting that Chroma has not been trained on a dataset that explicitly holds out CASP homologues:

| Dataset | Method | Atom RMSD |
|---------|---------------|-----------------------|
| | | All / Core / Surface |
| CASP13 | FAMPNN (0.3Å) | 0.667 / 0.362 / 0.775 |
| | FAMPNN (0.0Å) | 0.579 / 0.345 / 0.659 |
| | Chroma* | 0.677 / 0.392 / 0.770 |
| CASP14 | FAMPNN (0.3Å) | 0.821 / 0.534 / 0.937 |
| | FAMPNN (0.0Å) | 0.745 / 0.430 / 0.858 |
| | Chroma* | 0.851 / 0.550 / 0.964 |
| CASP15 | FAMPNN (0.3Å) | 0.785 / 0.417 / 0.888 |
| | FAMPNN (0.0Å) | 0.690 / 0.350 / 0.789 |
| | Chroma* | 0.810 / 0.434 / 0.917 |

> Potential mismatch for antibody-antigen binding dataset evaluation Yes, this is true. As described in Shanker et al., experimental structures matching the exact sequence are not available for each assay; however, as the sequences of the different HA variants (H1, H3, H9, etc.) are quite close, models are still able to perform well using the backbones of the homologous proteins. While this can introduce errors, which we acknowledge, it is an extremely common occurrence in practical protein design, where homologous structures must be used due to a lack of experimental structures (homology modelling). Thus, we see this as an opportunity to showcase the model's performance in a structurally data-limited situation, which is all the more pertinent to demonstrate given that our method uses “more” structure than its backbone-only counterparts. > Other comments or suggestions We thank the reviewer for suggestions on improving the clarity of our figures and will make these changes if accepted.
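For context on the perplexity figure quoted in the rebuttal above: sequence perplexity is the exponential of the mean per-residue cross-entropy (negative log-likelihood), so a perplexity of 4.99 means the model is, on average, about as uncertain as a uniform choice among five amino acids. A minimal editorial illustration of the relation (not the authors' evaluation code):

```python
import math

def perplexity(per_residue_nll):
    """Exponential of the mean per-residue negative log-likelihood (natural log)."""
    return math.exp(sum(per_residue_nll) / len(per_residue_nll))

# A model that is uniformly uncertain over the 20 standard amino acids at
# every position has per-residue NLL = ln(20), hence perplexity 20.
print(round(perplexity([math.log(20.0)] * 100), 6))  # 20.0
```

Lower perplexity thus directly reflects a lower average cross-entropy loss on the held-out sequences, which is why it complements the binary sequence-recovery metric discussed in the review.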
Summary: This paper presents FAMPNN, an iterative inverse folding algorithm capable of co-generating sidechain conformations and sequences. Such a design allows the model to condition on the currently known sidechain atoms in addition to the fixed backbone and sequence. FAMPNN models the per-residue sequence type and sidechain structure with a combined cross-entropy and diffusion loss objective. Experimental results show that FAMPNN can achieve promising results in full-atom sequence design, sidechain packing, and full-atom conditioned protein fitness evaluation. They also validate the effectiveness of the sidechain context. ## update after rebuttal I've read the authors' replies and other reviewers' comments. I will keep my positive score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, the experimental designs are sound, as they benchmark the proposed method on 3 related tasks and also perform an additional ablation study. Supplementary Material: I checked the algorithm presented in the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper are related to next-token modeling for continuous data. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths 1. sound experiment design and results 2. They show that FAMPNN can perform protein fitness evaluation quite well. We may scale up the training data (with some AF2 distillation data) and model size to get even better results. Other Comments Or Suggestions: NO Questions For Authors: 1. In section 4.2.1, the authors mention the definition of the ghost atom. However, it is not clear how they encode the ghost atoms for masked tokens. 2. FAMPNN adopts an MLM framework for inverse folding instead of random-permutation AR or diffusion. I'm curious whether the MLM framework is optimal for the inverse folding task.
Have you ever tried using random-permutation AR (as in ProteinMPNN) for FAMPNN? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive evaluation and interest in the methodological choices behind our model. Below, we clarify these points: > Ghost atoms for masked tokens To encode the ghost atom for masked tokens, we set them at the position of the central CA atom by default. Because the model receives a mask sequence token for this position, it is able to understand that these ghost atoms refer to a masked position. > MLM vs. AR In early versions of the model, we experimented with both AR and MLM, finding that they perform similarly in terms of both sequence recovery and self-consistency. We noticed that high-quality sequences could be predicted in relatively few steps, so viewing the generative process through an MLM framework allows for faster inference. We also chose MLM because it gives us more flexibility in choosing train-time masking schedules, allowing the model to prioritize learning at certain masking levels.
Summary: The paper introduces FAMPNN for protein sequence design that explicitly models both the sequence identity and sidechain conformation of each residue. Unlike existing methods that rely solely on backbone geometry, FAMPNN uses a combined categorical cross-entropy and diffusion loss objective to jointly learn the distribution of amino acid identities and sidechain conformations. The authors demonstrate that this approach improves sequence recovery and achieves state-of-the-art sidechain packing accuracy. Additionally, FAMPNN shows promise in practical applications, such as zero-shot prediction of protein stability and binding affinity. Claims And Evidence: yes. Methods And Evaluation Criteria: yes. Theoretical Claims: no theoretical contributions. Experimental Designs Or Analyses: yes. Supplementary Material: yes. Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: **Strengths** 1. The explicit modeling of sidechain conformations during sequence design is a significant advancement over existing methods that only consider backbone geometry. This is a clear improvement, as sidechain interactions are crucial for protein stability and function. 2. The use of a diffusion loss for sidechain conformation prediction is well-justified. It allows the model to handle continuous data (sidechain coordinates) effectively. 3. The paper provides a thorough evaluation of FAMPNN across multiple benchmarks, including sequence recovery, sidechain packing, and protein fitness prediction. The results are strong. The inclusion of zero-shot prediction tasks (e.g., protein stability and binding affinity) is particularly compelling. **Weaknesses** 1. In terms of sequence recovery, Table 1 lacks recent and stronger baselines. This omission weakens the claim of state-of-the-art performance. Include 2023-2024 methods or clarify why they were excluded. 2. High sequence recovery ≠ good design.
The field prioritizes novel, stable sequences that fold into the target structure, not just matching native sequences. The lack of diversity and novelty metrics leaves it unclear if FAMPNN avoids overfitting. This is not only a practical desire for protein design but also an important measure of how and what the model learns: whether it is capable of generalization or succeeds simply through memorization. The authors, however, do not provide related results and discussion. 3. Training/inference costs (e.g., GPU hours, memory) are not quantified, raising concerns for scaling to large proteins or multi-chain complexes. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their suggestions to strengthen our baselines and evaluations. Below, we address the raised concerns: > Inclusion of more recent sequence recovery baselines We thank the reviewer for pointing out this omission. To this end, we report Frame2Seq (Dec. 2023), a structure-conditioned masked protein language model that achieves 46.53% sequence recovery as a single model, and 49.11% as an ensemble of three models, on the CATH 4.2 test set with a single step (Akpinaroglu et al.), while FAMPNN achieves 49.66% and 50% recovery as a single model with single and 5-step sampling respectively. Second, some other baselines could not be included due to differences in training data: e.g. certain methods do not train on CATH and elect to train on non-standardized versions of the PDB, so we cannot compare sequence recovery on a common test set. Third, certain recent baselines for sequence recovery, particularly hybrid models which utilize pre-trained protein language models such as ESM2 (e.g. LM-Design, KW-Design), may have data-leakage issues with the CATH 4.2 test set, as pretrained protein language models have likely trained on sequences in the CATH test set. We will include a discussion of this in the final version of the paper if accepted. Finally, we note that in this paper we do not claim state-of-the-art performance in terms of sequence recovery, but rather that sidechain packing and sequence design are synergistic tasks to learn, where we can achieve self-consistency competitive with ProteinMPNN while achieving state-of-the-art packing. > High sequence recovery ≠ good design We completely agree that high sequence recovery does not equal good design. To this end we use the de novo benchmark, popularized by ProteinBench (Ye et al.), as a way of measuring the ability to recapitulate structures on backbones that are structurally distinct and diverse from those found in nature, and report self-consistency to capture whether designs “fold into the target structure” as you mention. As these proteins are not found in nature, we hope this demonstrates generalization beyond memorization. In fact, we find very low average pairwise similarity between sequences generated by FAMPNN on this benchmark, with **[13.4%, 9.1%, 8.4%, 7.9%, 7.8%]** average pairwise similarity for de novo backbones of length 100, 200, 300, 400, and 500 respectively. We will conduct a more robust benchmark of diversity on de novo backbones and compare to other methods in the final version of the paper if accepted. > Training and inference costs We completely agree that compute costs are important for a useful protein design tool. Our CATH and PDB models are competitive with state-of-the-art models at **8.6M** and **10.7M** parameters respectively, within an order of magnitude of models such as ProteinMPNN (1.9M), Frame2Seq (10M), and PiFold (6.6M). Notably, our model is much smaller compared to models such as ESM-IF (142M), LM-Design (664M), and ESM-3 (1.4B). For training, we mention in Section B.2 that we trained our CATH model for **~8 hours** on a single H100 GPU with 80GB of memory. The PDB model was trained with 4 H100s for 72 hours, but in practice we notice that the model can reach similar performance in **~24 hours** (96 GPU hours). Regarding example size, training on our PDB dataset was conducted with examples cropped or padded to a large fixed size of 1024 residues, with 82.91% of examples being multi-chain complexes. Finally, regarding inference costs, when evaluating our CATH model on the test set, we achieve single-step sampling speeds of **0.03s** and **0.11s** per sample on a single H100 GPU. Additionally, we are enthusiastic to add a detailed comparison of inference costs to other methods to the final version of the paper.
We appreciate this point being raised, as we have found that because we use an MLM procedure, FAMPNN can sample equally high-quality sequences with far fewer steps, making our method much faster for longer proteins than other methods, including ProteinMPNN.
Statistical Hypothesis Testing for Auditing Robustness in Language Models
Accept (poster)
Summary: This work presents a statistical framework based on hypothesis testing for assessing the sensitivity of language models to perturbations of their inputs or even the model parameters. They describe their framework in detail including various design choices and then explore empirical validations of how their framework behaves when applied to real models and text data in relevant scenarios. ### Update after rebuttal: See "Rebuttal Comment" and Author's continued response. While the overly constrained single turn based system chosen for this iteration of ICML put quite a damper on the ability to workshop research via peer review and discussions (no ability to see the revised work in full format), the authors made a concerted effort to address critiques and improve the work in critical ways. I am relatively confident that the work would get accepted with a 3,4,3,3 pool, but I am happy to ensure this via a final bump from 3 to 4. Looking forward to seeing the paper featured in the conference. Claims And Evidence: Claims are generally sound, partially on account of there being few concrete claims about what the framework can actually be used to achieve at what level of efficacy. The sentences at L072 "Significance beyond technical novelty. We see this work as having immediate practical relevance for practitioners who wish to evaluate their LLM outputs" reads almost as if it was put there to fend off the inevitable comment from certain reviewers that the contribution of the work beyond stat-crobatics is unclear. While this reviewer _does_ indeed appreciate much of the intrinsic value to the proposed framework, how to demonstrate its practicality is still an issue requiring further discussion. Please see questions and comments below. Methods And Evaluation Criteria: Setup of hypothesis testing framework is verbose, but clear and visually appealing. 
The detail is appreciated in a field where not all researchers are familiar with certain experimental methods (even if they seem simple). See comments for a few additional suggestions on presentation. The evaluation is generally lacking in groundedness and thus the significance of what it proves the method can do in practice is unclear. The only quantitative stress test of any sort for the input perturbation use case is the set of experiments showcased in Table 2, which appears to be an ad hoc constructed test set of prompts and perturbations to which we have an implicit "label": a priori, the authors have decided that "robust", i.e. insensitive, models should not behave differently under certain subsets/perturbations of the input data. It's not clear how objectively this assesses whether this framework adequately discriminates between robust and non-robust models. The same issue is true for the alignment test setup shown in Table 3 in that it merely shows that "some ordering" falls out of the p-values and matches some vague expectations about model size. See comments and questions, and please understand that figuring out how to showcase this framework's utility isn't something the reviewer sees as a trivial problem to solve. Theoretical Claims: N/A Experimental Designs Or Analyses: The main issue with the soundness of the proposal and its potential validation via experiment is that the choice of embedding function $e(\cdot)$ is not treated in any detail. In the experimental section, I don't even believe the embedding model name is noted anywhere. Is it a separate transformer? Is it the pooled last-layer features of one of the LLMs being evaluated for sensitivity?
etc.? This is an issue, and because such care is taken in treating the statistical testing setup, it is surprising that the authors take for granted having access to a suitable embedding model that is precisely calibrated to yield distance changes when the semantics of interest change, but to remain unchanged when this is not the case. Here is a toy example of why this aspect requires further formalization and empirical validation. Consider a reasonably long response that, under sampling, reliably contains a negation, or reliably does not contain a negation, depending on some input perturbation intervention of interest. In the worst-case scenario, while the generative LLM obviously responds to the input perturbation, the embedding model chosen (which might be weaker in what it can represent) could happen to be insensitive to the negation contained somewhere in the middle of the string. As evidence for transformer language models "missing" things they shouldn't, while not precisely this setting but still illustrative, see the "Lost in the middle" phenomenon: arxiv.org/abs/2307.03172. Supplementary Material: Reviewed Appendix A, and made suggestions for minor additions. Relation To Broader Scientific Literature: This work is contextualized mostly in bias and fairness literature, but not related in any clear way to the broad array of other literature on LLM performance. The more relevant missing connections would be to the vast array of work on LLM robustness to jailbreaking or other safety and alignment tampering procedures. Essential References Not Discussed: There are many works on LLM robustness to adversarial inputs from the last few years that could be included, but they are not necessarily essential. Other Strengths And Weaknesses: Generally, the strength of this work is in its clear formalism and generality. Its weaknesses are in its arguments for its own practicality. Other Comments Or Suggestions: 1.
Can the authors also note the explicit formulation of KL over the empirical distributions $P_0$ and $P_1$ being computed as part of the JSD instantiation of $\omega$? This can go alongside the nice explication of MC versus permutation formulation already provided in Appendix A. 2. It would also be more self-contained if Eq 10 had a note that the one-sided p-value is computed as the empirical frequency of the event, i.e. summing the number of times the distributional difference $\omega(P_0^*, P_1^*)$ was larger than $\omega(P_0, P_1)$ and dividing by the number of possible permutations. Correct any mistake in the above suggestion, but the point here is that there is no implementation linked in this draft, nor basic references or additional details about some of the machinery, and since this paper is ostensibly for a broad interdisciplinary audience centering the use of large language models in practice, it's worth stating these things somewhere in the draft in the phrasing and notation that the authors think is most useful. 3. Cosine sim is a natural choice as stated, but are there any theoretical reasons for choosing it over other similarity-preserving dim reduction metrics? 4. The reviewer doesn't see any meaningful way to reason about the effect size given that it is the JSD over the distribution of cosine sims between embeddings. Do the authors have anything further to say on interpreting this value? Questions For Authors: 1. Improving the evaluation to better showcase the utility of the proposed framework. A suggestion, albeit an expensive one... A human study could be used to assess the accuracy of this framework, including the choice of embedding model for the deployment scenario.
If one replaces the similarity over embeddings with an annotation as to how similar a human rater thinks two responses are, and this is done for many samples, then even considering a binarization to "same meaning" = 0, "different meaning" = 1, do the authors think these pairwise annotations could be used to perform a version of the permutation test to assess whether the automated method agrees with human rater sensitivity in a realistic scenario? This speculation is simply the reviewer trying to help figure out how to ground the proposed framework to reality, because in its current state, it's not clear how to interpret the p-values coming out of the overall setup. This limits the reader's ability to decide how accurate or useful this framework could be for any given task in practice. 2. The crux determining much of the method's reliability in practice is that it "passes the buck" down to the embedding similarity space. In an ideal scenario, if the embedding space yields distances that are known to reflect exactly the sensitivity that the practitioner wants, we have a calibrated null hypothesis space: fluctuations under which we will fail to reject H_0 are exactly as desired, but that region is tight so that rejection happens at the desired rate too. Could the authors consider including a practical case-study discussion on how to use rejection of the null as a decision metric? E.g., in some scenario, calibrating the rejection rate to achieve desired FPRs/TPRs would potentially strengthen the work. Code Of Conduct: Affirmed. Overall Recommendation: 4
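The one-sided permutation p-value described in the review's suggestion 2 (count the permutations where the permuted statistic $\omega(P_0^*, P_1^*)$ exceeds the observed $\omega(P_0, P_1)$, divided by the number of permutations) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the histogram binning, the JSD instantiation of $\omega$, and all function names are assumptions:

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two discrete distributions (natural log).
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def omega(s0, s1, bins):
    # Distributional distance between two samples of cosine similarities,
    # instantiated here as JSD over histograms on a shared binning.
    p, _ = np.histogram(s0, bins=bins)
    q, _ = np.histogram(s1, bins=bins)
    return jsd(p.astype(float), q.astype(float))

def permutation_p_value(s0, s1, n_perm=1000, seed=0):
    # One-sided p-value: empirical frequency with which the permuted
    # statistic is at least as large as the observed statistic.
    rng = np.random.default_rng(seed)
    bins = np.linspace(-1.0, 1.0, 21)  # cosine similarities live in [-1, 1]
    observed = omega(s0, s1, bins)
    pooled = np.concatenate([s0, s1])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm0, perm1 = pooled[: len(s0)], pooled[len(s0):]
        if omega(perm0, perm1, bins) >= observed:
            count += 1
    # Add-one smoothing is a common convention so the p-value is never 0.
    return (count + 1) / (n_perm + 1)
```

Under $H_0$ the two similarity samples are exchangeable, so a well-separated pair of samples yields a small p-value while samples drawn from the same distribution do not.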
Rebuttal 1: Rebuttal: Dear reviewer k56Q, Thank you for engaging with our work. A lot to respond to with little space, forgive us for being brief. --- # A. Expanding on embedding evaluation Your critique that the embedding function choice is not adequately addressed is well taken. First, we agree that not *all* embedding functions would work well. There are a few properties we implicitly require but do not explicitly mention in our paper. Importantly, the embedding should preserve semantic differences in a way that the similarity metric *s* becomes a *sufficient statistic* for a hypothesis test. This means that the embedding should induce a metric space where distance correlates with semantic dissimilarity, and it should ensure that the induced similarity distributions $P_0$ and $P_1$ are well separated under $H_1$ (if there is separation), and overlapping under $H_0$. Since this paper is not intended to analyze the properties of embedding spaces generally, we will leave links to relevant works in the paper but note that this is an active area of research (with papers coming out as recently as March 2025). Second, does the choice of embeddings impact results? In our implementation, we wanted to use an industry standard that is cheap, fast, and strong, so we used ``ada-002`` embeddings. However, we ran tests to evaluate whether the embedding function matters. - Results are [here](https://imgur.com/a/KVPM4Sw). Main takeaway: they matter because embeddings have different properties. However, they still show important signal *within the same embedding function*. We expand our Appendix to evaluate 4 embeddings across 8 models. Third, is it reasonable to expect people to have embedding functions? We think yes, as they are either open-source and free or extremely cheap for state-of-the-art models. We in fact modeled costs of such an approach. - Financial analysis [here](https://imgur.com/undefined).
This makes this framework very practical cost-wise with state-of-the-art embeddings. --- # (B) Expanding the evaluation methodology and empirical validation You proposed to expand the evaluation framework. We agree. Regarding your *negation* example, we focused on three cases where minimal input changes produce significant meaning changes: a negation in a sentence, a temporal change (describing different time periods) and a topic change. - Results for LLama-8B are [here](https://imgur.com/LuIIOls). **We run results on more models**. Briefly, the outcome is that DBPA seems to capture the differences in responses very well relative to the control. **Discussion**. We agree evaluation is tricky but with the three case conditions (Negation, time change, topic change), across smaller and more consistent models, we see we *are* able to capture such differences well. --- # \(C) Other similarity preserving metrics Q: Why choose cosine? A: We chose cosine for its empirical success historically. Motivated by your question about how different metrics impact results, we have varied $\omega$ to different distances and replicated Experiment 4.1. - Experiment results [here](https://imgur.com/ngnWA8P). Some metrics have a much *tighter* null distribution and are therefore *more* sensitive to changes in the prompt. We think this is a useful feature of the framework which we highlight in the updated manuscript. Q: Can we reason about the effect sizes? A: Not directly. However, it still has practical utility; We can (a) evaluate statistical significance of a perturbation; (b) compare the relative effect of two perturbations on which one induces larger changes in output distributions; \(c) quantify FPR/TPR rates for a given $\alpha$; (d) compare model alignment or (e) other use cases in Table 1. Q: Can we have a practical example of rejecting the null as a decision metric? A: Yes! We can in fact quantify TPR/FPR for a given $\alpha$. 
One can obtain FPR/TPR metrics such as [here](https://imgur.com/23AiroN). --- # (D) Eight new experiments As a part of the response, we have run **eight** new experiments. Their descriptions and key findings are presented here and they have now been included in the Appendix. - Experiment summary [here](https://imgur.com/E9BTCo1) We believe these new experiments significantly expand the paper's contribution. --- # (E) ACTIONS TAKEN Based on your feedback we have: - Added section 4.4 "Other evaluations" discussing the expanded test cases - Enhanced Section 4 with additional experiments - Expanded the appendix - Added a discussion on reasoning about effect sizes - Added clarifying notes to Eq 10 on one-sided p-values - Included a discussion on KL to match the Appendix A writeup. - Added embedding analysis - Incorporated 8 new experiments --- # Thank you Thank you for your engagement. **You have helped us improve our work significantly. If we have addressed your concerns, we hope you would consider raising your score to a 4** to reflect that you think this paper should be featured at ICML2025. --- Rebuttal Comment 1.1: Comment: I appreciate the hard work of the authors in preparing additional experiments and detailed rebuttals to all reviewers. I have some comments and suggestions in response. ## While perusing other responses: The discussion on sensitivity to longer inputs and outputs requires a bit further analysis. As noted in citation for my original review, transformers can lose information in the middle and so it's not clear whether given non-trivial changes in the output sequence that are generally similar, but include critical differences, the approach will pick them up. > As the input prompt increases in length, so does the amount of information the prompt carries. Naively, longer prompts should decrease output entropy making the H0/1 distributions closer. Increases in prompt length is a good ablation but the case of shorter prompts and longer outputs, eg. 
multi-clause/paragraph/procedures/CoTs, is underexplored. I tried squinting at the new (S4.1?) table with personas and token counts and see a few rows with a consistent trend in pvals/effect sizes, but I also see a few that appear to be uncorrelated with input token count. Of the rows with a trend, I do observe something consistent with my speculation above, but this could be confounded by the output length likely growing commensurately with input length, which is unreported. Please: 1) make this a line chart not a table to highlight any trends clearly 2) please report the output sequence lengths 3) and perform regressions or simple correlation analysis to identify whether the trend in effects can be explained by input length, output length, and whether there are solid groupwise differences between the different personas. ## New embedding model ablation: It appears that Ada is the weakest model (I am assuming we _want_ to reject in the experiment, no description given). The fact that Jasper, Stella, and Kalm yield nearly equivalent, high-significance test results across every test sort of suggests that the entire analysis might need to be re-done with a larger suite of embedding models on the improved evaluation collection. E.g., for the new persona/token count table, what embedding was used? For the new distance function ablation, what embedding was used? You should probably include a few of the embedding models in each experiment to clarify what is the intervention signal, and what is embedding-model-specific noise/confounding. ## "expect people to have embedding functions": My comment had nothing to do with cost (broken imgur link regardless), there are certainly amazing, small open-source embedding models to use. The question wholly revolved around faithfulness of the embedding space to the testing goals as discussed in detail in the review, and in your rebuttal.
(relatedly, saw another broken imgur link in response to jcUj) ## Decision problem TPR/FPR: What was this experiment? Please visualize as a ROC plot. The peak detection/rejection performance suggests this was not one of the stronger embedding models (by the p=0.000 sig levels, I would hope that in certain scenarios, you can get perfect discrimination, but this looks far from that). ## Summary: Overall, I like the work, but I do believe that the new experiments need to be expanded, and the analysis of embedding models and sensitivity in various testing scenarios needs to be made a more central part of the paper to prove out the utility of the approach in real-world settings. As such I will maintain my score of a weak accept, as I would be fine with the paper being published with the updates currently discussed in the rebuttals, but also believe it could benefit from reworking and resubmission. Notably, the work will not get stale over a conference cycle as it is relatively novel and a niche practical application, unlikely to be identical to any other work under submission. --- Reply to Comment 1.1.1: Comment: Dear Reviewer k56Q, Thank you for your thoughtful follow-up and for recognizing the effort in our expanded experiments. We deeply appreciate your constructive critique and are eager to address your remaining concerns to strengthen the paper’s impact. Below, we outline concrete actions taken in direct response to your feedback. First, we'd like to note that we have included *all* the above as visuals in the paper and not as tables. We only put them as tables here, as we thought we could paste the results directly instead of using external links. 1. Sensitivity to input/output lengths. - Your Point: Analyze trends between input/output lengths and effect sizes, visualize via line charts, and perform regression. - Actions taken. (1) We replaced the table in the appendix with line charts (now a Figure in the text) plotting input/output tokens against effect size and p-values.
Here is one such example for the input/output lengths: [link](https://imgur.com/a/53lNIJS) - For our analyses, we found that moving tokens from 100 to 1000 did not impact the perturbation effects on average and we have expanded our regression analysis with these insights in the paper. All experiments now include output token counts (mean ± std. dev.) alongside input lengths. 2. Embedding model robustness. - Your Point: Test multiple embeddings per experiment to disentangle signal vs. model-specific noise. Actions taken: - We have indeed replicated the experiments across four embedding models (Ada-002, Jasper-7B, Stella-v2, Kalm-12B). They are included in the appendix with a description in the main text. They are visualized as a bar chart. 3. TPR/FPR Visualization - Your Point: Visualize decision metrics via ROC plots. - Actions taken: **We agree with you**. We have already made temporary ROC plots right now and will integrate the final ROC plots with uncertainty intervals in the final version of the paper together with a separate discussion section (which has been added) to explain how to quantify and choose $\alpha$ for a given FPR/TPR rate. *We will also release the code for the out-of-the-box selection of TPR/FPR analyses.* 4. Broken links. - Your point: The links are broken - Actions taken. We're not sure why the links are not working. We apologize for this and this is a mishap on our end that we did not take proper care of finding a better alternative to posting links. Of course, all the data and figures are fully integrated into the appendix, the code/artifacts are hosted on a permanent repository that we will fully share upon acceptance. 5. Broader implications Your insights have directly improved the paper’s rigor. By rigorously addressing input/output dynamics, embedding sensitivity, and decision metrics, we now: - Provide actionable guidelines to practitioners (e.g. 
embedding selection, prompt design) - Demonstrate robustness across 8 LLMs and 4 embeddings - Include 8 new figures and significantly expanded the discussion section --- We sincerely hope these revisions alleviate your concerns and demonstrate the framework’s utility in real-world settings. We have done a lot of work to get this work ready for ICML and hope to get your support by asking you to consider increasing your score to a 4. This will help get this work out in higher quality faster; we think this is important given the ongoing discussions about LLM evaluation more generally, since LLMs are stochastic machines. Thank you once more. We truly appreciate your engagement. Warm regards, The Authors
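The FPR/TPR calibration and ROC visualization discussed in this thread can be illustrated with a minimal sketch. The assumed setup (not from the paper): p-values have been collected from runs where the null is known to hold and runs where it is known to be violated; all function names are hypothetical:

```python
def fpr_tpr_at_alpha(null_pvals, alt_pvals, alpha):
    # FPR: fraction of known-null runs rejected at level alpha.
    # TPR: fraction of known-alternative runs rejected at level alpha.
    fpr = sum(p <= alpha for p in null_pvals) / len(null_pvals)
    tpr = sum(p <= alpha for p in alt_pvals) / len(alt_pvals)
    return fpr, tpr

def roc_points(null_pvals, alt_pvals, alphas):
    # Sweep alpha to trace an ROC curve (FPR on x-axis, TPR on y-axis).
    return [fpr_tpr_at_alpha(null_pvals, alt_pvals, a) for a in alphas]
```

Sweeping alpha from 0 to 1 traces the ROC curve the reviewer asks for; a practitioner then picks the alpha whose (FPR, TPR) point matches their tolerance.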
Summary: The paper presents a framework for measuring how input perturbations affect large language model (LLM) outputs. DBPA uses Monte Carlo sampling to construct empirical output distributions and evaluates perturbations in a low-dimensional semantic space, enabling robust, interpretable hypothesis testing. It is model-agnostic, provides p-values and effect sizes, and supports multiple perturbation testing. Case studies demonstrate its effectiveness in assessing prompt robustness, model alignment, and sensitivity to input changes, making it valuable for high-stakes applications like legal and medical domains. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are well-suited for the problem of quantifying the impact of input perturbations on LLM outputs. Theoretical Claims: Correct Experimental Designs Or Analyses: The experiments were sound but lacked in-depth analysis. The results are difficult to interpret. See Weaknesses. Supplementary Material: N/A Relation To Broader Scientific Literature: 1. Proposes DBPA, a novel framework for quantifying the impact of input perturbations on LLM outputs. 2. Designed to work with any black-box LLM, making it broadly applicable without requiring internal model details. 3. Existing methods typically lack rigorous statistical foundations, making it difficult to disentangle meaningful changes in model behavior from intrinsic randomness in the output generation process. The proposed method provides interpretable p-values and scalar effect sizes, facilitating meaningful comparisons of perturbation impacts. Essential References Not Discussed: References are discussed in the related work section. Other Strengths And Weaknesses: Weaknesses: 1. The paper does not compare DBPA with existing methods for evaluating perturbation impacts, making it hard to assess its advantages.
Comparisons with baseline methods should be included. 2. The paper relies on cosine similarity and Jensen-Shannon divergence but does not explore alternative metrics like BLEU or ROUGE scores. 3. The paper uses $p$-values to assess statistical significance but does not clearly explain how these relate to practical concerns like unintended bias or other aspects of the model's behavior. Other Comments Or Suggestions: Provide more context on how p-values and effect sizes translate into conclusions about the model's sensitive behaviors. Questions For Authors: 1. How does DBPA compare to existing methods for evaluating perturbation impacts, such as word-overlap metrics (e.g., BLEU, ROUGE) or log-probability comparisons? Can you provide experimental results or theoretical arguments demonstrating DBPA's advantages over these baselines? 2. Does the choice of metrics significantly influence the results? 3. Have you tested DBPA in real-world, high-stakes applications (e.g., healthcare, legal document drafting)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer 62GP, Thank you for your thoughtful feedback on our work. We appreciate your recognition that our work presents a framework for measuring how input perturbations affect LLM outputs, that our case studies are clear and backed up with convincing evidence, and that our evaluation is well suited for the problem. --- # (A) We have clarified comparisons with existing methods We will answer your concern by breaking this question down into two pieces. **Comparison with other methods**. The primary reason we do not compare against other methods is that DBPA is a proposed framework for quantifying perturbation impacts. There is no *ground truth*, and therefore all evaluations are inherently misleading. We in fact *do* provide a related-work discussion in Table 4. **Could BLEU and ROUGE be compared against?** We do not see much utility in comparing against BLEU and ROUGE as these metrics are not designed to quantify perturbation impacts. Furthermore, even if we did, we could not evaluate which approach is "better" as they serve fundamentally different purposes. That said, we still ran additional studies to evaluate how to incorporate BLEU and ROUGE into our framework. We have performed this analysis for the role-play experiment, which results in the table below. - Please find the experiment [here](https://imgur.com/undefined) **Discussion**. The BLEU and ROUGE metrics are very sensitive to small perturbations. However, any analyses should be taken with caution as they operate on the text space and won't capture the null distribution well. **ACTIONS TAKEN.** We have updated Section 4.2 with an additional discussion and have expanded the Appendix. --- # (B) We have examined how the choice of the metric affects results Motivated by your question about how different metrics impact results, we have varied $\omega$ to different distances and replicated Experiment 4.1.
Specifically, we vary $\omega$ by additionally computing the Euclidean, Wasserstein, and Energy distances. - Experiment results [here](https://imgur.com/ngnWA8P). **Discussion**. Because $\omega$ has a different magnitude for each metric, these measures are not normalized. Some metrics have a much *tighter* null distribution and are therefore *more* sensitive to changes in the prompt. We think this is a useful feature of the framework which we highlight in the updated manuscript. **Takeaway**. In the paper, we compute our results using a baseline JSD $\omega$ and here we have added an enhanced view with different metrics. **ACTIONS TAKEN.** We have enhanced Section 4 with an additional experiment on the metrics and a discussion, and updated the appendix. --- # \(C) Providing more context on p-value and real-world translations We will answer your concern by breaking down the question into two parts: *unintended bias* and *other aspects of the model's behavior*. **p-value and practical significance**. Our paper examines how p-values reveal information about role-play, prompt robustness, and model alignment (sections 4.1-4.3). In this paper, we ask whether the answers of a language model change, which we formulate as a hypothesis-testing problem, and therefore use p-values as a useful way to answer this question. **Measuring bias**. While not intended for it, DBPA could potentially analyze unintended bias through sensitivity analysis – designing scenarios where LLM output should be deterministically influenced by specific input factors, then testing if irrelevant input changes affect output. --- # (D) Comment: Testing the model in real-world applications. To address your concern about how we have tested the framework, we will highlight: (a) what DBPA is used for, and (b) what we have tested.
**(a) What is DBPA used for?** We see DBPA as being useful *at least* in four different scenarios due to the broad nature of the framework: prompt robustness, training stability, model comparison, and adversarial attacks. We explain how DBPA is useful in each scenario below, and describe this in Table 1 of the paper.

As a part of the response, we have run **eight** new experiments. Their descriptions and key findings are presented here and they have now been included in the Appendix.

- Experiment summary [here](https://imgur.com/E9BTCo1)

We believe these new experiments significantly expand the paper's contribution.

---

# Thank you

Thank you for your engagement. **You have helped us improve our work significantly**. We have made revisions to multiple parts of the paper as a result and think it's now in better shape than before. **If we have addressed your concerns, we hope you would consider raising your score to a 4** to reflect that you think this paper should be featured at ICML 2025. We are certain this paper opens doors to multiple new research directions and has clear, practical relevance for researchers in diverse domains who care about model auditing and evaluation.
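For readers who want to see the mechanics of the frequentist formulation discussed in this thread, here is a minimal sketch of an empirical one-sided p-value against a Monte Carlo null; the `null_omegas` and `observed_omega` values are synthetic stand-ins, not actual DBPA outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: omega distances between resampled baseline response
# distributions form the Monte Carlo null; observed_omega is the distance
# between the baseline and perturbed response distributions.
null_omegas = rng.normal(loc=0.05, scale=0.01, size=1000)
observed_omega = 0.09

# Empirical one-sided p-value with the usual +1 correction, so the estimate
# is never exactly zero under a finite number of null samples.
p_value = (1 + np.sum(null_omegas >= observed_omega)) / (1 + len(null_omegas))
```

With the observed value several null standard deviations above the null mean, the p-value comes out small, mirroring how a significant perturbation effect would be reported.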
Summary: The paper introduces Distribution-Based Perturbation Analysis (DBPA), a novel framework for assessing how input perturbations affect the outputs of LLMs by reformulating the perturbation analysis as a frequentist hypothesis testing problem. This model-agnostic approach constructs empirical null and alternative output distributions within a low-dimensional semantic similarity space using Monte Carlo sampling, enabling tractable frequentist inference without restrictive distributional assumptions. DBPA supports evaluating arbitrary input perturbations on any black-box LLM, provides interpretable p-values, facilitates multiple perturbation testing through controlled error rates, and quantifies effect sizes with scalar metrics. Demonstrated across multiple case studies, DBPA showcases its effectiveness in enhancing model reliability and post-hoc interpretability.

Claims And Evidence: All the claims are well-supported.

Methods And Evaluation Criteria: In various scenarios, the author successfully demonstrates that their methods are capable of (1) capturing those answer divergences which are significant as well as those that are not under perturbation, (2) analyzing the robustness of language models against irrelevant changes in the prompt, and (3) evaluating alignment with a reference language model.

Theoretical Claims: The author provides a new perspective on the evaluation of LLM outputs from the viewpoint of frequentist hypothesis testing, and accordingly introduces the DBPA framework.

Experimental Designs Or Analyses: The author conducts experiments on more than 8 diverse open-source and closed-source models, thereby effectively demonstrating the efficacy of their proposed methods.

Supplementary Material: The supplementary material is the same as the paper.

Relation To Broader Scientific Literature: This paper is closely related to model reliability and post-hoc interpretability, with a particular focus on measuring how input perturbations impact LLM outputs.
Essential References Not Discussed: Current related works are well-structured and sufficient for this paper.

Other Strengths And Weaknesses: The author also provides numerous use cases to support the usefulness of the DBPA framework. The paper is well-structured and appears poised to address an immediate practical need within the community. In Section 4.1, the author exemplifies the use of DBPA in medical scenarios, noting that these examples are primarily role-play tasks sharing a common prefix ("act as"), with only slight variations in wording. This raises my question of whether the proposed framework can effectively showcase its analytical capabilities on longer and more complex input sequences. It would be beneficial for future work to explore the framework's performance under such conditions to fully understand its potential and limitations.

Other Comments Or Suggestions: The paper is well-structured and clear.

Questions For Authors: In Section 4.1, the author provides examples from the medical domain, noting that these instances are primarily role-play scenarios sharing a common prefix ("act as"), with only minor variations in wording. This observation raises questions about whether the proposed framework can effectively demonstrate its analytical capabilities on longer and more complex input sequences. Additionally, it remains to be seen if the author's approach can illustrate the correspondence between semantic changes in the input space and those in the output space, essentially attributing semantic meaning to distributions. Exploring these aspects would be crucial to fully assess the framework's potential in handling intricate and nuanced tasks, thereby enhancing our understanding of its applicability and robustness under diverse conditions.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer grUt,

Thank you for your thoughtful feedback on our work. We appreciate your recognition that our work presents a novel framework for assessing how input perturbations affect the outputs of LLMs, that our case studies effectively demonstrate the efficacy of our proposed methods, and that our evaluation is new and insightful.

We'll structure our response as follows:

- (A) Handling complex input sequences. We have examined DBPA across complex input sequences.
- (B) Evaluating input/output semantic correspondence. We have examined DBPA from various semantic perspectives.
- \(C) Eight new experiments

---

# (A) Handling complex input sequences

You noted that our examples in Section 4.1 primarily featured role-play scenarios and you questioned whether DBPA can effectively analyze longer and more complex input sequences. We agree this is an important consideration.

**Can DBPA focus on longer input sequences, theoretically?** While the paper focused on simpler examples for clarity of exposition, DBPA is designed to handle arbitrary input perturbations regardless of sequence length or complexity. The framework operates on distributions of semantic embeddings rather than raw text. Therefore, it is robust to input complexity. In fact, this is precisely one of our motivations for developing this framework, as DBPA provides a new way to distinguish long-form natural language responses.

**Inspired by your comment, we have conducted additional experiments with more complex input sequences.** We have modified the experiment setup in Section 4.1 such that there are now multiple input prompts with different lengths. As the input prompt increases in length, so does the amount of information the prompt carries.

- Find the experiment [here](https://imgur.com/7rMrzv4).

**Comment on the results**. We see that this effect is more pronounced in the shorter prompts and less so in more complex prompts.
We think this explains well-observed empirical phenomena, such as [1], in a quantitative sense.

**Takeaways of these experiments**. We highlight how DBPA extends to longer, more complex input sequences. The experiment shows (i) less consistency and (ii) a larger level of response randomness for longer and more complex input prompts. In addition to [1], we believe this has the potential to open a new avenue of research within the field.

**ACTIONS TAKEN.** We have updated Section 4.1 with a discussion of these experiments.

---

# (B) Evaluating input/output semantic correspondence.

Here, we address your concern on the input/output correspondence between semantic meaning and distributions.

**(a) Can we capture semantic meaning?** DBPA captures semantic meaning through the choice of its $\omega$ function, which operates on the embedding space. It's well understood that text embedding methods map text data to a space where semantic correspondence is similar to distance (see even early works such as [2]). While this in itself is an active area of research [3-4], it's common (and useful) to map text to embedding spaces for such purposes.

**(b) Choice of the metric used**. We can further change which $\omega$ we use. While we use the cosine($\cdot$) function due to it naturally capturing directionality, users of DBPA can map it to their own preferred metrics if they have improved domain knowledge.

**\(c) Existence of a control group**. Lastly, one important reason why we claim our method does, in fact, find semantic meaning is that while we operate in an embedding space, we have a control group (the null distribution) which helps us account for what would have happened had there been no perturbation. Therefore, any changes we observe quantify the probability of observing an event as extreme as the one observed (the definition of the p-value), which implicitly has natural variability as a control.
[1] https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00638/119630 [2] https://arxiv.org/abs/1301.3781 [3] https://arxiv.org/abs/2410.16608 [4] https://arxiv.org/abs/2406.07640 --- # \(C) Eight new experiments As a part of the response, we have run **eight** new experiments. Their descriptions and key findings are presented here and they have now been included in the Appendix. - Experiment summary [here](https://imgur.com/E9BTCo1) We believe these new experiments significantly expand the paper's contribution. --- # Thank you Thank you for your engagement. **You have helped us improve our work significantly**. We have made revisions to multiple parts of the paper as a result and think it's now in better shape than before.
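To make the role of the embedding-space $\omega$ concrete, here is a toy cosine-similarity computation; the 4-dimensional vectors are hypothetical stand-ins for sentence-encoder embeddings, not outputs of any particular model.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-d vectors standing in for sentence-encoder embeddings.
base = np.array([0.9, 0.1, 0.0, 0.4])        # original response
paraphrase = np.array([0.8, 0.2, 0.1, 0.5])  # semantically close response
divergent = np.array([-0.2, 0.9, 0.7, 0.0])  # semantically distant response

print(cosine_similarity(base, paraphrase))  # close to 1
print(cosine_similarity(base, divergent))   # much smaller
```

Because cosine similarity depends only on direction, it is insensitive to embedding magnitude, which is why it is a natural default for directionality-based comparisons in embedding spaces.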
Summary: The paper proposes a Distribution-Based Perturbation Analysis (DBPA) framework to evaluate the sensitivity of LLM outputs to input perturbations. Addressing the limitations of traditional methods, which struggle to distinguish between semantic changes and the inherent randomness of models, this study reformulates the problem as a frequentist hypothesis testing task. It constructs output distributions for both original and perturbed inputs using Monte Carlo sampling and compares these distributions in a low-dimensional semantic similarity space. The main contributions of the article are: 1. Identifying limitations in existing sensitivity-based measures for language models. 2. Introducing distribution-based perturbation analysis, which is a model-agnostic sensitivity measure. 3. Conducting a case study on DBPA. Claims And Evidence: The authors have generally validated their claims: 1. The proposed DBPA method effectively addresses issues related to sensitivity-based measures. 2. The authors design simple case studies to verify the feasibility of the DBPA method. Methods And Evaluation Criteria: 1. The hypothesis testing approach and the rigorous analysis of the metrics adopted by the author are insightful. 2. The proposed method is a novel evaluation approach, lacking existing benchmarks for assessment. Theoretical Claims: The article's analysis and summary of distribution-based perturbation analysis are generally correct, although I have not thoroughly examined all the details. Experimental Designs Or Analyses: The author's experimental design is generally reasonable. However, to be frank, as a study on the impact of input distribution differences on output, I would like to see a more fundamental analysis: 1. In Experiment 4.1, merely examining the significance of differences within a single model across different domains may not be sufficient. 2. In Experiment 4.3, is there a more fundamental analysis of the alignment degree between different models? 
Supplementary Material: I reviewed the appendix, which includes an introduction to MC sampling and examples of prompt inputs. Relation To Broader Scientific Literature: I haven't found it yet. Essential References Not Discussed: I haven't found it yet. Other Strengths And Weaknesses: I don’t quite understand the concrete practical significance of the author's findings on quantifying a model’s sensitivity to input perturbations. Although the author claims that the method has many applications and mentions some in section 4, I find these applications somewhat non-essential. The author should provide clearer and more practical application scenarios and implementation plans. Other Comments Or Suggestions: I haven't found it yet. Questions For Authors: Please refer to **Experimental Designs Or Analyses** and **Other Strengths And Weaknesses**. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer jcUj,

Thank you for your thoughtful feedback on our work. We appreciate your recognition that our work effectively addresses issues related to sensitivity-based measures, that our case studies effectively verify our method, that our approach is insightful, and that our evaluation is novel.

---

# (A) Experiment 4.1. We have examined the significance of DBPA across more models.

To address your concern about a single model being analyzed, we replicate that study with two large, closed-source models (GPT-4 and GPT-3.5) *and* six smaller, open-source models. We replicate the same setup with different "Act as" prompts, following Experiment 4.1. Results are provided below.

- Results for large closed-source models are [here](https://imgur.com/7B9LkFO)
- Results for smaller open-source models are [here](https://imgur.com/undefined)

**Takeaways**: These findings reveal: (1) less consistency and (2) greater response randomness in smaller models. This analysis demonstrates why smaller models are less suitable for these tasks: their baseline responses diverge significantly from expected outcomes.

**Actions taken**: We've added these experiment results and discussion in Section 4.1 and Appendix B.1 "Replicating with more models."

---

# (B) Experiment 4.3. Answering your question on more fundamental analysis of LLM alignment

To address your concern on alignment, we'd like to first answer *what do we mean by alignment?* and then show *how to evaluate alignment more generally*.

**(a) What do we mean by alignment?** In the context of Experiment 4.3, we say two models are *aligned* if their responses (technically, *response distributions*) are the same for a given prompt. This is intuitive because we desire two models to have similar responses to similar questions. Therefore, we measure alignment via $\omega$ as the difference between a baseline model response and the new model response. This is why lower $\omega$ indicates higher alignment.
**In this experiment, our main contribution is showing that DBPA can be used as a way of measuring alignment.**

**(b) Is there a way to measure alignment more fundamentally?** In response to your question, we develop another experiment where we vary our measure $\omega$ to evaluate the degree to which alignment is consistent across these models. Concretely, we vary $\omega$ by computing the Euclidean, Wasserstein, JSD, and Energy distances.

- Experiment results [here](https://imgur.com/ngnWA8P).

**Discussion**: The varying magnitudes of $\omega$ across metrics indicate these measures aren't normalized. However, they reveal alignment patterns across different evaluation metrics. As expected, Child and Reviewer roles show greater divergence from baseline than Doctor or Nurse roles. Some metrics (like energy distances) prove too sensitive to minor changes for practical application.

**Takeaway**: Alignment measures how well response distributions match between two language models for a given prompt. We demonstrate alignment using a baseline JSD $\omega$ and enhance this view with additional metrics. This represents just one alignment perspective; DBPA framework implementation may vary by task, potentially using different embedding functions or distance metrics.

**\(c) Is there future work that can be done to evaluate alignment?** We believe our work opens up a new avenue for alignment referencing in language models. There are some works that could be fruitful as extensions of our work in the future, but they are significantly out of scope for our paper. Examples include: (i) developing concrete real-time model alignment metrics that are cheap to quantify or (ii) evaluating alignment by looking at stability during training (and comparing output consistency).

**ACTIONS TAKEN.** Enhanced alignment discussion with additional results in Section 4.3 and extended discussion in an Appendix.
--- # \(C) Practical implications of our work We see DBPA as being useful *at least* in four different scenarios due to the broad nature of the framework: prompt robustness, training stability, model comparison, and adversarial attacks. This is illustrated in Table 1. --- # (D) Eight new experiments As a part of the response, we have run **eight** new experiments. Their descriptions and findings are presented here and they have now been included in the Appendix. - Experiment summary [here](https://imgur.com/E9BTCo1) We believe these new experiments significantly expand the paper's contribution. --- # Thank you Thank you for your engagement. **You have helped us improve our work significantly**. We have made revisions to multiple parts of the paper as a result and think it's now in better shape than before. **If we have addressed your concerns, we hope you would consider raising your score to a 4** to reflect that you think this paper should be featured at ICML2025. We are certain this paper opens doors to multiple new research directions and has clear, practical relevance for researchers.
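As a rough sketch of how the alternative choices of $\omega$ mentioned in this thread can be computed on 1-D similarity scores (the sample values below are synthetic stand-ins; note that JSD compares histograms rather than raw samples):

```python
import numpy as np
from scipy.stats import wasserstein_distance, energy_distance
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
# Synthetic stand-ins for 1-D semantic-similarity scores of responses under
# the original prompt (null) and under a perturbed prompt.
null_scores = rng.normal(loc=0.80, scale=0.02, size=500)
pert_scores = rng.normal(loc=0.72, scale=0.02, size=500)

# Sample-based distances operate directly on the two score samples.
w = wasserstein_distance(null_scores, pert_scores)
e = energy_distance(null_scores, pert_scores)
euclid = abs(null_scores.mean() - pert_scores.mean())  # distance of the means

# JSD compares probability vectors, so histogram both samples on shared bins.
bins = np.linspace(0.6, 0.9, 31)
p, _ = np.histogram(null_scores, bins=bins)
q, _ = np.histogram(pert_scores, bins=bins)
jsd = jensenshannon(p, q)  # normalizes the histograms internally
```

Each metric lives on its own scale, which matches the rebuttal's point that these measures are not normalized across choices of $\omega$.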
Hierarchical Refinement: Optimal Transport to Infinity and Beyond
Accept (oral)
Summary: This paper proposes a new large-scale hierarchical optimal transport algorithm between distributions with the same number of samples. The authors' algorithm is based on multi-scale partitions of the source and target datasets, as well as recent developments in low-rank optimal transport. Through a series of experiments on synthetic and real-world datasets, the authors show that their approach 1) has better alignment performance than previously proposed methods and 2) can scale to larger datasets.

Claims And Evidence: There are 2 main claims in this paper. 1) The proposed strategy successfully aligns source and target distributions, 2) The algorithm can scale to larger datasets. Both of them are supported in the experiments. Furthermore, there are some theoretical results supporting 1).

Methods And Evaluation Criteria: The experiments are well designed and use relevant benchmarks. The authors could have considered other problems where OT has contributed to, such as generative modeling or domain adaptation, but overall the experiments are comprehensive.

Theoretical Claims: There are 3 main theoretical results (Propositions 3.1, 3.2 and 3.3). I could not assess the validity of these proofs. I have some questions down below my review.

Experimental Designs Or Analyses: The experiments are valid and well designed. As I mentioned previously, they are comprehensive.

Supplementary Material: I did not review the supplementary material.

Relation To Broader Scientific Literature: This paper fits into the broader literature of Hierarchical OT (Chen, Georgiou and Tannenbaum, 2017; Yurochkin et al., 2018; Delon and Desolneux, 2019; El Hamri, Bennani and Falih, 2021), which considers transportation plans at different levels of representations of probability measures through clustering. Arguably, the paper pushes this idea further, as it considers partitions of increasing depth of the source and target datasets.
Essential References Not Discussed: As I mentioned in the previous point, I think the paper could benefit from a broader discussion with hierarchical optimal transport, especially these references:

(Chen, Georgiou and Tannenbaum, 2017) Chen, Yongxin, Tryphon T. Georgiou, and Allen Tannenbaum. "Optimal transport for Gaussian mixture models." IEEE Access 7 (2018): 6269-6278.

(Yurochkin et al., 2018) Yurochkin, Mikhail, et al. "Hierarchical optimal transport for document representation." Advances in Neural Information Processing Systems 32 (2019).

(Delon and Desolneux, 2019) Delon, Julie, and Agnes Desolneux. "A Wasserstein-type distance in the space of Gaussian mixture models." SIAM Journal on Imaging Sciences 13.2 (2020): 936-970.

(El Hamri, Bennani and Falih, 2021) El Hamri, Mourad, Younes Bennani, and Issam Falih. "Hierarchical optimal transport for unsupervised domain adaptation." Machine Learning 111.11 (2022): 4159-4182.

Furthermore, while the authors do mention mini-batch OT, the authors could have cited a few papers on the problem, such as:

(Nguyen et al., 2022) Khai Nguyen, Dang Nguyen, The-Anh Vu-Le, Tung Pham, Nhat Ho. "Improving mini-batch optimal transport via partial transportation." Proceedings of the 39th International Conference on Machine Learning, PMLR 162:16656-16690, 2022.

(Fatras et al., 2021) Kilian Fatras, Thibault Sejourne, Rémi Flamary, Nicolas Courty. "Unbalanced minibatch optimal transport; applications to domain adaptation." Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3186-3197, 2021.

(Fatras et al., 2020) Fatras, Kilian, et al. "Learning with minibatch Wasserstein: asymptotic and gradient properties." International Conference on Artificial Intelligence and Statistics. PMLR, 2020.

including hierarchical approaches:

(Nguyen et al., 2022) Khai Nguyen, Dang Nguyen, Quoc Dinh Nguyen, Tung Pham, Hung Bui, Dinh Phung, Trung Le, Nhat Ho. "On transportation of mini-batches: a hierarchical approach." Proceedings of the 39th International Conference on Machine Learning, PMLR 162:16622-16655, 2022.

Other Strengths And Weaknesses: Here, I make a summary of my review.

**Strengths**

1.
The paper provides a practical and scalable algorithm for large-scale optimal transport
2. The idea of using multi-level partitions is quite interesting
3. The discussion at the end of the paper is quite comprehensive and deals with natural questions that arise from the reading of the paper.

**Weaknesses**

1. The authors could have considered minibatch OT in their experimental comparisons. There could be a deeper discussion of minibatch OT in the main paper

----

As a result, I am leaning towards acceptance, as I think the paper is good.

---

__Post Rebuttal:__ The authors addressed my concerns in their rebuttal. As a result, I raise my score from __3. Weak Accept__ to __4. Accept__

Other Comments Or Suggestions: In Figure 2, the x-axis should be in log-scale

Questions For Authors: Here are a few questions for the authors,

- Does the linear space complexity stem from the assumption that $X$ and $Y$ have the same number of elements? For instance, in principle, if $X$ and $Y$ have the same number of elements and have uniform weights, one can store $\{x_{i}, T(x_{i})\}_{i=1}^{n}$ with classic OT.
- What would be the challenges of adapting their method to handle distributions with an unequal number of samples?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer _9vCU_ for their feedback and careful reading. > As I mentioned ... authors could have cited a few papers ... Thank you for these suggestions, we will include the following sentences in our Background: _"Hierarchical OT (Schmitzer and Schnorr '13) is a variant of OT modeling data and transport across multiple scales, using Wasserstein distances as coarse-scale ground costs. It has been applied to document representation (Yurochkin et al. '19), domain adaptation (El Hamri et al. '22), sliced Wasserstein distances (Bonneel et al. '15; Nguyen et al. '22b) and to give a discrete formulation of transport between Gaussian mixture models (Chen et al. '18; Delon and Desolneux '20) . These works build interpretable, coarse-grained structure into a single transport plan, rather than solving for a sequence of plans at progressively finer scales as in the present work."_ In addition, we have added experimental validation against mini-batch OT. We will add the following sentences to our Introduction: _"Mini-batch OT (Genevay et al. '18) improves scalability, but incurs significant biases (Sommerfeld et al. '19; Korotin et al. '21; Fatras et al. '21a) as each mini-batch alignment is often a poor representation of the global one. Several works have investigated the theoretical properties of mini-batch estimators of the plan (Fatras et al. '20b; '21c), while others have attempted to mitigate bias using partial or unbalanced OT (Nguyen et al. '22a; Fatras et al. '21b) by allowing for mass variation between the mini-batches."_ > ..deeper discussion of minibatch OT.. Thank you for pointing this out. We re-ran the large-scale real data experiments with mini-batch OT without-replacement, which is more conventional as noted in (Fatras et al. '21c), and will include these values in the updated manuscript. 
See the tables for the mouse embryo spatial transcriptomics, ImageNet below and MERFISH experiments above in response to _nCzR_, including mini-batch couplings. We notate with (MB $B$) for $B$ the batch size in each case. One value in the HR-OT table (Cost 14.35 for 12.5-13.5) has been changed, and reflects a slightly higher setting of $r_{max} = 128$ in the rank annealer. We will also add more discussion to our paper on practical considerations for mini-batch OT, as well as existing theoretical results on convergence and complexity (Fatras et al. '20a; '21a; '21c) relative to full-rank OT.

**Table: Cost Values for Embryo**

|Method|9.5-10.5|10.5-11.5|11.5-12.5|12.5-13.5|13.5-14.5|14.5-15.5|15.5-16.5|
|-|-|-|-|-|-|-|-|
|HR-OT|**21.81**|**14.81**|**16.14**|**14.35**|**13.78**|**14.29**|**12.79**|
|MB 128|22.44|15.35|16.69|14.86|14.14|14.75|13.32|
|MB 512|22.15|15.05|16.33|14.54|13.92|14.50|13.01|
|MB 1024|22.05|15.02|16.24|14.45|13.86|14.43|12.91|
|MB 2048|21.98|14.98|16.18|14.39|13.81|14.39|12.85|

**Table: Cost Values for ImageNet**

|Method|HR-OT|MB 128|MB 512|MB 1024|
|-|-|-|-|-|
| |**18.97**|21.89|20.34|19.58|

We will also log-transform Fig. 2's $x$-axis for the next version!

> Does the linear space complexity..?

Great question. While you are correct that bijections can be stored with linear space, our linear space complexity does not rely on $X$ and $Y$ having the same number of elements. For classical approaches tackling OT as an assignment problem, while the final coupling returned is linear in space, the space complexity of the algorithm is quadratic. In comparison, HR-OT returns a solution with linear space and has linear space complexity. In addition, the runtime complexity is log-linear for squared-Euclidean cost $\lVert \cdot \rVert_{2}^{2}$ and remains log-linear for sample-linear approximations of the distance matrix (Indyk et al. '19).
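To illustrate the linear-space structure discussed here, the following is a toy 1-D version of the recursive refinement idea; the median split stands in for a rank-2 low-rank OT step (an illustrative sketch, not the HR-OT implementation), and only index sets plus the final bijection are ever stored.

```python
import numpy as np

def refine(x_idx, y_idx, x, y, pairs):
    # Base case: singleton clusters define one source-target pair.
    if len(x_idx) == 1:
        pairs.append((int(x_idx[0]), int(y_idx[0])))
        return
    # Split each cluster into two halves. In 1-D with squared cost, the
    # optimal rank-2 co-clustering pairs lower half with lower half.
    xs = x_idx[np.argsort(x[x_idx])]
    ys = y_idx[np.argsort(y[y_idx])]
    h = len(xs) // 2
    refine(xs[:h], ys[:h], x, y, pairs)
    refine(xs[h:], ys[h:], x, y, pairs)

rng = np.random.default_rng(0)
n = 8  # a power of two, matching a rank-2 refinement schedule
x = rng.normal(size=n)
y = rng.normal(size=n) + 3.0

pairs = []  # only index sets and this final bijection are ever stored
refine(np.arange(n), np.arange(n), x, y, pairs)
```

In 1-D with squared cost the recursion recovers the sorted (optimal) matching, while the algorithm never materializes an $n \times n$ coupling.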
We refer to the bullet below on your question concerning whether hierarchical refinement could retain its properties with an unequal number of points.

> What would be the challenges of adapting their method to handle distributions with an unequal number of samples?

There are two challenges to extending hierarchical refinement to datasets with $n$ source points and $m$ target points.

1. If $n > m$, the challenge is to extend Proposition 3.1 from 1-1 Monge maps (the assignment problem) to many-to-one Monge maps (i.e. with a smaller target dataset). Specifically, one needs to account for the possibility that each target index $j \in [m]$ is co-clustered with a _set_ $S_{j} \subset [n]$ of indices indexing the preimage of the Monge map $T^{-1}(\mathbf{y}_{j})$.
2. If $m>n$, no Monge map exists from the source to target dataset. While Sinkhorn approaches the Kantorovich problem, hierarchical refinement approaches the Monge formulation of optimal transport. Thus, it inherits any limitations of the Monge framework, including its asymmetry and inability to account for mass-splitting. However, assuming the extension of the proposition discussed in point (1.) holds, one can reverse the role of the two datasets and infer the Monge map from target to source.

---

Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I consider that your comments answer my questions, and I will raise my score accordingly, from __3. Weak Accept__ to __4. Accept__. Congratulations on your work.

In the meantime, I want to raise some discussion for future works on extending the authors' work to $n \neq m$. While the authors are correct in their remark about the challenges of adapting their strategy for $n \neq m$, they could find an approximation of the actual Monge map using the barycentric projection,

$$T_{\gamma}(x_{i}) = \text{argmin}_{x \in \mathbb{R}^{d}} \sum_{j=1}^{n} c(x, y_{j}) \gamma_{ij},$$

which, for the squared Euclidean cost, has a closed-form solution.
This mapping has been widely used, for instance, in domain adaptation. The authors can consult [R1] and the references therein for further information about how this mapping approximates the Monge map. Note that this comment is surely beyond the scope of the current paper, and has nothing to do with the current evaluation.

[R1] Deb, Nabarun, Promit Ghosal, and Bodhisattva Sen. "Rates of estimation of optimal transport maps using plug-in estimators via barycentric projections." Advances in Neural Information Processing Systems 34 (2021): 29736-29753.
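For the squared-Euclidean cost, the barycentric projection mentioned in the comment above reduces to a $\gamma$-weighted average of the target points. A minimal numpy sketch with a toy random coupling (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 4, 6, 2
Y = rng.normal(size=(m, d))  # target points
gamma = rng.random((n, m))
gamma /= gamma.sum()         # a toy nonnegative coupling with unit mass

# Barycentric projection: for c(x, y) = ||x - y||^2 the argmin is the
# gamma-weighted average of the targets, one projected point per source point:
#   T(x_i) = sum_j gamma_ij * y_j / sum_j gamma_ij
T = (gamma @ Y) / gamma.sum(axis=1, keepdims=True)
print(T.shape)  # (4, 2)
```

Each projected point is a convex combination of target points, so the map collapses a many-to-many coupling into a single (approximate Monge) image per source point.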
Summary: This paper proposes a hierarchical framework to obtain Kantarovich plans in Optimal Transport. The authors conceptually build on many prior works of low-rank OT (notably Scetbon 2021, Halmos 2024) and propose a rank annealing schedule to obtain the full rank Kantarovich plan using many low-rank solutions of sequentially partitioned sets. The main premise is founded on a theoretical result that when a Monge map exists between the datasets, it co-clusters the optimal low-rank solution (albeit with minor differences like uniform assumption in Eq 6). Altogether the proposed framework is claimed to be linear in space complexity and log-linear to quadratic in time complexity. The authors contrast this with well-known OT solvers mainly Sinkhorn (which has quadratic space and quadratic time) and ProgOT. Experiments are reported for (1.) Synthetic Datasets (Checkerboard, MAFMoons and Rings, and Half-Moon and S-Curve etc) (2.) Large-scale Matching Problems and Transcriptomics (3.) MERFISHBrainAtlasAlignment and (4.) ImageNet Alignment. The authors compare with Sinkhorn (Cuturi, 2013), ProgOT (Kassraieetal.,2024), as well as low rank OT solvers like (Scetbon et al.,2021) and FRLC(Halmos et al.,2024). The main message from the experiments is to highlight that the proposed approach scales better (in space complexity) than existing full-rank solvers and produces a smaller kantarovich cost, establishing a good operating point between accuracy and complexity. Claims And Evidence: The paper is written well. The main contributions are highlighted nicely and the experiments reflect the nuanced benefits of the proposed approach. The main method is convincing - and I found it nice that the algorithm builds on Proposition 3.1 Methods And Evaluation Criteria: Yes. The experiments are diverse and fairly comprehensive Theoretical Claims: Yes. Propositions 3.1 3.2 and 3.3. Experimental Designs Or Analyses: Yes. All of the 4 experiments reported in the summary. 
Supplementary Material: Yes.

Relation To Broader Scientific Literature: - At a high level, the main message of the paper is to propose the use of low-rank OT solvers to sequentially obtain a full-rank solution. This submission does NOT propose a new low-rank OT solver. If we credit the authors for focusing on this novelty - however, I find that the authors do make specific choices: (1.) use of Halmos et al. in Algorithm 1, (2.) the specific Select subroutine in Algorithm 1. While the choices are not unreasonable, I do find the whole setup arguably lacking generality. For example, I could not place any experiment that validates replacing it with Scetbon et al. 2021, etc. I feel it is most impactful when the novel idea is shown to work for different LROT solvers, of which I am not yet convinced.

Essential References Not Discussed: - I do feel the lack of *any* comparison to Gerber & Maggioni, 2017 to be underwhelming. Even though I agree it is not an apples-to-apples comparison, this is a very strong and similar prior work, and perhaps an experiment highlighting the difference in performance for a reasonable choice of initial partition would be very enlightening

Other Strengths And Weaknesses: Overall, barring some important clarifications (e.g., using the correct Sinkhorn implementation), I am inclined positively. The paper has been compiled well, has an interesting message, and has comprehensive experiments.

Other Comments Or Suggestions:
- Eq. 1 incomplete (missing b)
- Line 119, define $\delta_r$

Questions For Authors:
- Are the point clouds in Figure 3 for comparing the 3 methods the *same*?
- Is the Sinkhorn used in the experiments (ott-jax) with the $\epsilon$ annealer?
- This citation is missing: Cuturi, M., Meng-Papaxanthos, L., Tian, Y., Bunne, C., Davis, G., & Teboul, O. (2022). Optimal transport tools (ott): A jax toolbox for all things wasserstein. arXiv preprint arXiv:2201.12324.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer _j2d6_ for their feedback and careful reading. > At a high level, the main message of the paper is to propose the use of low rank OT solvers to sequentially obtain a full rank solution. This submission does NOT propose a new low rank OT solver. If we credit the authors for focusing on this novelty - however, I find that the authors do make specific choices (1.) Use of Halmos et al in Algorithm 1 (2.) the specific Select subroutine in Algorithm 1. While the choices are not unreasonable, I do find the whole setup arguably lacking generality. For e.g., I could not place any experiment that validates replacing with Scetbon 2021 et al, etc. I feel it is most impactful when the novel idea is shown to work for different LROT solvers, which I am not yet convinced. We agree on the benefit of demonstrating with other LROT solvers. We did not have time to implement this in the short response window, but aim to offer this option in the code and demonstrate results with the Scetbon et al. 2021 solver. Theoretically, our framework will work for the Scetbon et al. solver because our Proposition 3.1 is tailored to the low-rank factorization introduced by (Scetbon et al. '21). Proposition 3.1 hinges on Lemma B1, which shows _optimal_ factors $(\mathbf{Q}, \mathbf{R})$ for the LOT problem (Scetbon et al. '21) are vertices on the transport polytope and thus have $\leq n+r-1=n+1$ non-zero entries for $r=2$. Thus for $n = 2^t$ the solutions correspond to partitions in Proposition 3.1. The key point is these arguments are independent of the solver, and apply to any optimal solution. Regarding (2), our choice of the argmax as the Select sub-routine is motivated by our Proposition 3.1, and is correct if $(\mathbf{Q}, \mathbf{R})$ are optimal, but other choices may be appropriate if the solutions are entropic and do not define exact partitions. > I do feel the lack of any comparison to Gerber & Maggioni, 2017 to be underwhelming. 
Even though I agree it is not apples to apples comparison, this is a very strong and similar prior work, and perhaps an experiment highlighting the difference in performance for a reasonable choice of initial partition would be very enlightening. We agree, Gerber & Maggioni is a seminal work in multi-scale OT, and we have added comparisons with their method MOP for our 2-dimensional datasets where MOP uses the GMRA (Geometric Multi-Resolution Analysis) R package to generate multiscale partitions. In particular, we benchmarked against MOP for our three synthetic datasets and our MERFISH expression transfer task. MOP's performance for MERFISH is given in the table above in response to reviewer _nCzR_. MOP's performance on synthetic data is given in the table below: **Table: Comparison of Coupling-Based OT Methods on Primal Cost $\langle \mathbf{C}, \mathbf{P} \rangle_F$ (Wasserstein-2) on 512 point small instance** | Method | Checkerboard | MAF Moons & Rings | Half Moon & S-Curve | |-|--|--|--| | MOP (Gerber et al. '17) | 0.393 | 0.276 | 0.401 | | Sinkhorn (`ott-jax`) | 0.136 | 0.221 | 0.338 | | ProgOT (Kassraie et al. '24) | 0.136 | 0.216 | 0.334 | | HR-OT | 0.129 | 0.216 | 0.334 | | Dual Revised Simplex Solver | **0.127** | **0.214** | **0.332** | > Eq 1 incomplete (missing b); Line 119, define $\delta_r$ Thank you for catching these mistakes. We will fix them in the final version. > Are the pointclouds in Figure 3 for comparing the 3 methods the same? The left two pointclouds for hierarchical refinement and Sinkhorn are the same, since these methods are able to run on 4096 points. The rightmost figure is for an optimal LP-solver, which scaled only to 512 points and thus is plotted with an identically distributed but smaller dataset. We will add this distinction to the legend. > Is the Sinkhorn used in the experiments (ott-jax) with the $\epsilon$ annealer? 
Regarding your question about the Sinkhorn implementation: we use `ott-jax` as the Sinkhorn solver, but do not use the $\epsilon$-annealer. We rely on the default implementation of Sinkhorn, which uses a fixed value of $\epsilon = 0.05$. This default value produces relatively sparse solutions. We will note this detail in the experimental section. > This citation is missing: Cuturi, M., Meng-Papaxanthos, L., Tian, Y., Bunne, C., Davis, G., \& Teboul, O. (2022). Optimal transport tools (ott): A jax toolbox for all things wasserstein. arXiv preprint arXiv:2201.12324. Thank you for catching this, we will add this citation after our references of `ott-jax`.
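As a concrete aside (an editor's sketch, not code from the paper or rebuttal; only numpy is assumed), the factorization $\mathbf{P} = \mathbf{Q}\,\mathrm{diag}(1/\mathbf{g})\,\mathbf{R}^\top$ of Scetbon et al. '21 discussed in the rebuttal can be instantiated with rank-2 factors that induce a partition, illustrating the block structure behind Proposition 3.1:

```python
import numpy as np

# Rank-2 factors (Q, R) whose columns are mass-weighted cluster indicators,
# mirroring the partition structure referenced in Proposition 3.1.
n, r = 4, 2
Q = np.array([[0.25, 0.0],
              [0.25, 0.0],
              [0.0, 0.25],
              [0.0, 0.25]])
R = Q.copy()
g = Q.sum(axis=0)                 # inner marginal, here (1/2, 1/2)
P = Q @ np.diag(1.0 / g) @ R.T    # coupling P = Q diag(1/g) R^T

assert np.allclose(P.sum(axis=1), 1.0 / n)  # uniform left marginal
assert np.allclose(P.sum(axis=0), 1.0 / n)  # uniform right marginal
assert np.allclose(P[:2, 2:], 0.0)          # block-diagonal: mass stays within clusters
assert np.linalg.matrix_rank(P) == r        # the coupling has exactly rank 2
```

The block-diagonal, rank-2 structure is what allows the Select sub-routine to recursively split each cluster.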
Summary: This paper focuses on the solving of Optimal Transport (OT) problems. To this end, it derives an algorithm, HR-OT, that leverages the invariance under the Monge map and dynamically constructs a multi-scale partition of each dataset using low-rank OT subproblems. By doing that, it uses linear space and achieves runtime ranging from log-linear to quadratic. Experiments have been conducted on several datasets, even including large-scale data with over a million points. The results demonstrate its advantages over the original Sinkhorn. Claims And Evidence: Seems yes. Methods And Evaluation Criteria: Seems yes. Theoretical Claims: Yes. Should be correct. Experimental Designs Or Analyses: Yes. The experimental designs are acceptable. However, it is better to include large-scale real data, e.g., 3DMatch (https://3dmatch.cs.princeton.edu/) or KITTI (https://www.cvlibs.net/datasets/kitti/eval_odometry.php). Moreover, as Sinkhorn has been widely used in point cloud registration algorithms, e.g., CoFiNet (NeurIPS 2021) and GeoTransformer (CVPR 2022), I would like to see the improvement of those methods by replacing the Sinkhorn part with the proposed method. Supplementary Material: Yes, mainly the experimental parts. Relation To Broader Scientific Literature: As an improved version of Sinkhorn, it uses linear space and achieves runtime ranging from log-linear to quadratic. Validation is also provided via theoretical proofs as well as experiments. Essential References Not Discussed: I have no suggestion about the reference list. Other Strengths And Weaknesses: Strengths: 1. The writing is fluent and the organization is also good. 2. The proposed method could be considered novel. 3. The proposed method can be applied to data with over a million points, which is highly valuable, if no prior method could achieve that. Other Comments Or Suggestions: See above. I highly suggest that the authors add the mentioned experiments. Questions For Authors: 1. 
On the MERFISH experiments, the scores of HR-OT are much higher than those of other methods. However, the reason for this should be further analyzed. 2. It has been mentioned that the number of samples should be the same (|X| = |Y| = n) for the proposed method, and if this does not hold, the data could be slightly modified. Could the authors give an example of how this could be done? And in this case, what are the results compared with other methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer _nCzR_ for their feedback and careful reading. > Yes. The experimental designs are acceptable. However, it is better to include large-scale real data, e.g., 3DMatch (https://3dmatch.cs.princeton.edu/) or KITTI (https://www.cvlibs.net/datasets/kitti/eval_odometry.php). Moreover, as Sinkhorn has been widely used in point cloud registration algorithms, e.g., CoFiNet (NeurIPS 2021) and GeoTransformer (CVPR 2022), I would like to see the improvement of those methods by replacing the Sinkhorn part with the proposed method. Thank you for your response. We will investigate using Hierarchical Refinement as a module in CoFiNet and GeoTransformer, but unfortunately have limited time in the review period to benchmark it. We will, however, make sure to cite both of these methods and highlight them as a major application area for hierarchical refinement in our discussion with the following sentence: * _"Optimal transport has also been successfully integrated into deep-learning frameworks for computer vision and point cloud registration, notably in methods such as CoFiNet (Yu et al. '21) and GeoTransformer (Qin et al. '22), suggesting that hierarchical refinement could help scale existing deep-learning methods based on OT."_ On the scalability of hierarchical refinement on real datasets, we note that the sizes of the real datasets used were 85958 and 84172 for the two MERFISH datasets, 5913, 18408, 30124, 51365, 77369, 102519, 113350, and 121767 for the eight Stereo-Seq spatial transcriptomics datasets, and 1.281 million for the ImageNet ILSVRC dataset. > On the MERFISH experiments, the scores of HR-OT are much higher than those of other methods. However, the reason for this should be further analyzed. In the MERFISH experiments, as classical full-rank methods were unable to scale, we benchmarked against low-rank OT methods for a fixed rank. 
Low-rank OT is unable to compute one-to-one alignments in the case that the Monge map exists, and in this case an alignment of spatial coordinates lacks any low-rank cluster structure, which likely explains why full-rank performs significantly better. To address questions raised by other reviewers, we added a comparison to mini-batch OT on this task. Mini-batch OT is also a full-rank method, like hierarchical refinement, and we observe scores which are much closer to it. For example, for the largest batch-size (2048) we found mini-batch cosine scores across 5 genes of (0.7434, 0.7822, 0.7056, 0.4912, 0.5683), compared to the hierarchical refinement scores (0.8098, 0.7959, 0.7526, 0.4932, 0.6015). This implies the gap may be largely explained by the difference in expressivity of full-rank ($r=84,172$) versus low-rank couplings ($r=20$ for LOT, $r=500$ for FRLC). > It has been mentioned that the number of samples ... what are the results compared with other methods? Thank you for your question. To your point, if the datasets are of slightly different sizes one could randomly subsample the number of points in the larger one to ensure an $n$ to $n$ alignment. In all comparisons, the datasets aligned are either $n \times n$ or sub-sampled to be $n \times n$ so that all methods (Sinkhorn, ProgOT, FRLC, LOT, HR-OT, mini-batch) are compared on the exact same point clouds. To gauge the effect of subsampling the MERFISH data, we ran LOT without sub-sampling to $n$ points and then LOT with sub-sampling, comparing the cosine similarities on a downstream task, as the primal OT cost is no longer directly comparable. Without the sub-sampling, the cosine score is only slightly higher than with: (0.3390, 0.2712, 0.3186, 0.1666, 0.1080) vs (0.3241, 0.2279, 0.3029, 0.1653, 0.0719). These scores remain significantly lower than those of hierarchical refinement on the sub-sampled data: (0.8098, 0.7959, 0.7526, 0.4932, 0.6015). 
While sub-sampling incurred little error on this comparison, generalizing hierarchical refinement to directly handle datasets of unequal sizes (i.e. without sub-sampling) is an important direction for future work. **Table: Cosine Similarity Scores for Expression Transfer** | Method | *Slc17a7* | *Grm4* | *Olig1* | *Gad1* | *Peg10* | |-|-|-|-|-|-| | HR-OT | **0.8098**| **0.7959**| **0.7526**| **0.4932**| **0.6015**| | FRLC (Halmos et al. '24) | 0.2180 | 0.2124 | 0.1929 | 0.0963 | 0.0991 | | FRLC, no subsampling | 0.2373 | 0.1896 | 0.1579 | 0.0644 | 0.1550 | | LOT (Scetbon et al. '21) | 0.3241 | 0.2279 | 0.3029 | 0.1653 | 0.0719 | | LOT, no subsampling | 0.3390 | 0.2712 | 0.3186 | 0.1666 | 0.1080 | | MOP (Gerber et al. '17) | 0.5211 | 0.4714 | 0.5972 | 0.3571 | 0.2719 | | Mini-batch (128) | 0.6693 | 0.6637 | 0.6442 | 0.4150 | 0.4932 | | Mini-batch (512) | 0.7089 | 0.7383 | 0.6771 | 0.4562 | 0.5383 | | Mini-batch (1,024) | 0.7256 | 0.7621| 0.6918 | 0.4733 | 0.5557 | | Mini-batch (2,048) | 0.7434 | 0.7822| 0.7056 | 0.4912 | 0.5683 |
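The subsampling step described in this rebuttal amounts to a one-liner; a minimal sketch (the function name and RNG seed are the editor's own, for illustration, using the MERFISH dataset sizes quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_to_match(X, Y):
    """Randomly subsample the larger point set so |X| = |Y| = n,
    enabling an n-to-n alignment as in the rebuttal's comparisons."""
    n = min(len(X), len(Y))
    X = X[rng.choice(len(X), size=n, replace=False)]
    Y = Y[rng.choice(len(Y), size=n, replace=False)]
    return X, Y

# The two MERFISH datasets have 85958 and 84172 points respectively.
X, Y = subsample_to_match(np.zeros((85958, 2)), np.zeros((84172, 2)))
assert len(X) == len(Y) == 84172
```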
Summary: This work concerns the use of a hierarchically refined version of low-rank optimal transport (HR-LOT). In previous works, low-rank transportation plans have been explored to reduce the (high) computational costs of solving the optimal transport problem, and hierarchically refined versions of the problem were also studied. This work combines the two. Claims And Evidence: The authors claim that for optimal transportation problems that do not necessarily possess the low-rank structure as assumed by LOT, HR-LOT enriches the space of transport maps while maintaining the linear runtime when low-rankness is present. Methods And Evaluation Criteria: The examples are illustrative and convincing. Theoretical Claims: Propositions 3.1–3.3 appear to be correct, although the assumptions in the statements are strong. Experimental Designs Or Analyses: The experiments are performed mostly on existing datasets, ranging from synthetic to realistic. Supplementary Material: I took a light look at the mathematical proofs; they appear to be sound. Relation To Broader Scientific Literature: The paper is an advance in LOT by adding the hierarchical partitioning. This in itself is novel, using algorithms both in hierarchical OT and LOT, and putting them together with a rank-annealing schedule in Algorithm 1. Essential References Not Discussed: I'm not aware of papers the authors missed. Other Strengths And Weaknesses: Strengths The paper is very well motivated and clearly written, and arguments are convincing. The experiments are thorough and the discussion is sufficiently detailed. Weaknesses The theoretical result Proposition 3.1 seems to have strong assumptions about the transportation plan, although a more general result might be simply very challenging to formulate and prove. 
Other Comments Or Suggestions: - Hierarchical low-rankness of the transportation map seems to share common properties with so-called Hierarchical Matrices used to solve partial differential equations (most notably the Helmholtz equation). Are there some commonality in their approaches? Questions For Authors: I found the discussion regarding the rank-scheduling and annealing scheduling $\epsilon_n$ in Section 3.2 a bit hard to follow. Is there an alternative explanation for their relation? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank reviewer _dsLw_ for their feedback and careful reading. > The theoretical result Proposition 3.1 seems to have strong assumptions about the transportation plan, although a more general result might be simply very challenging to formulate and prove. Yes, two important generalizations of Proposition 3.1 are: (1) extend to datasets of unequal sizes $n \neq m$; (2) extend to general schedules of factors (as is done in practice in the algorithm) rather than just powers of 2. We briefly discuss (1) in response to reviewer 9vCU below. Regarding (2), we note requiring that $n$ be a power of two is without loss of generality in the Proposition. We will add the following statement to Section 3 to highlight this: _"We note that the assumption that the datasets are of size $2^k$ is without loss of generality. For a dataset of size $n$, let $q = \min \{ 2^{t} \mid t \in \mathbb{N},\ 2^{t} > n \}$ and add $q-n$ 'dummy' points at 'infinite' distance from $\mathsf{X}, \mathsf{Y}$, and mutual distance zero."_ By construction, the optimal mapping for this augmented data is given as the product measure $\gamma = \gamma_{1}^{*} \otimes \gamma_{2}^{*}$, where $\gamma_{1}^{*}$ is the optimal mapping between the original points and $\gamma_{2}^{*}$ pairs the dummy points. Thus $\gamma_{1}^{*}$ agrees with the original optimal coupling when restricted to the datasets $\mathsf{X}, \mathsf{Y}$, and one may thus invoke Proposition 3.1 directly for any $n$. The only computational cost is, in the worst case, one would extend the size of the dataset by a factor of 2. > Hierarchical low-rankness of the transportation map seems to share common properties with so-called Hierarchical Matrices used to solve partial differential equations (most notably the Helmholtz equation). Are there some commonality in their approaches? Yes! Thank you for highlighting this, as there appears to be an interesting parallel between the two. 
The coupling at each iteration $\mathbf{P}^{(t)}$ can be viewed as having a hierarchical block structure according to each partition. Similarly to how hierarchical matrices in PDEs reduce the complexity of dense linear operators from $O(n^{2})$ to $O(n \log{n})$, hierarchical refinement uses a multiscale block structure to reduce the complexity of OT to $O(n)$ in space, and $O(n\log{n})$ in time. For iterations $t \in [0, \log n]$, earlier iterations capture coarser structure, while later iterations capture fine-grained structure. A noteworthy difference is that the linear operators approximated by hierarchical matrices in PDEs are inherently dense, while the optimal solution to OT is guaranteed to be sparse. In OT, dense matrices are prevalent because of the computational value of entropic (or rank) regularization as a means to explore the solution space by annealing from a dense initial condition to an optimal sparse solution. > I found the discussion regarding the rank-scheduling and annealing scheduling $\epsilon_n$ in Section 3.2 a bit hard to follow. Is there an alternative explanation for their relation? Thank you for this feedback. First, to better explain the connection between entropy- and rank-regularization, we refer to Chapter 5 in the thesis of Scetbon, based on the seminal work (Scetbon et al. '21): * _"A key observation when entropy is added to the coupling is that the more entropy is added, the lower the rank."_ (29); * _"a useful parallel can be drawn between (LOT) and that of the vanilla Sinkhorn algorithm, in the sense that they propose different regularization schemes. Indeed, the (discrete) path of solutions obtained by (LOT) when varying $r$ between $1$ and $\min(n,m)$ can be seen as an alternative to the entropic regularization path. 
Both paths contain at their extremes the original OT solution (maximal rank and minimal entropy) and the product of marginals (minimal rank and maximal entropy)."_ (30) Now because of this correspondence of small epsilon with large rank, and large epsilon with low-rank, the analogue of annealing in the parameter $\epsilon$ in entropy-regularized OT, i.e. gradually decreasing $\epsilon \to 0$ according to some schedule, is to initialize at a low-rank plan, and then to gradually increase the rank from low to full. In our case, this gradual rank increase is accomplished implicitly. At each scale $t = 1, \dots, \kappa$ this plan $\mathbf{P}^t$ is made explicit in our supplement, equation (S8). Our rank-annealing schedule $(r_1, \dots, r_\kappa)$ describes the sequence of multiplicative factors by which the rank of this explicit plan will increase at each successive scale. The partial products of these, denoted by $(\rho_1, \dots, \rho_\kappa)$, are the ranks of the plans $\mathbf{P}^1, \dots, \mathbf{P}^\kappa$. We will clarify this relationship in future versions, and distinguish the notion of a rank-annealing schedule from the secondary discussion of how to efficiently choose such a schedule under given memory constraints.
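To make the relationship in this rebuttal concrete, the plan ranks $(\rho_1, \dots, \rho_\kappa)$ are simply the partial products of the rank-annealing schedule $(r_1, \dots, r_\kappa)$; a minimal sketch (the editor's own, not the authors' code):

```python
def plan_ranks(schedule):
    """Partial products rho_t = r_1 * ... * r_t of a rank-annealing
    schedule; rho_t is the rank of the intermediate plan P^t."""
    ranks, rho = [], 1
    for r in schedule:
        rho *= r
        ranks.append(rho)
    return ranks

# With the all-twos schedule on n = 2^4 points (cf. Proposition 3.1),
# the rank doubles at each scale until the full rank n is reached.
assert plan_ranks([2, 2, 2, 2]) == [2, 4, 8, 16]
```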
Algorithms and Hardness for Active Learning on Graphs
Accept (poster)
Summary: The authors study a "Graph Label Selection" (GLS) problem of Blum and Chawla and Guillory and Bilmes, which, given a graph G=(V,E) and parameter k, asks the learner to find a subset |L|=k of vertices maximizing the unnormalized min-cut outside of $L$: $\Psi(L)=\min_{C \subset V \setminus L} \frac{e(C,V\setminus C)}{|C|}$ This problem is related to active learning on graphs, where finding a subset with large $\Psi(L)$ leads to a generalization error upper bound for labeling the vertices of the graph assuming labels between points with edges are similar on average. The authors prove two main results: 1) Via reduction to independent set, it is NP-Hard to determine whether $\Psi(L) \leq 2$ or $\Psi(L) \geq 3$ 2) There is a poly time algorithm for GLS on any graph with *log(|V|) slack*, i.e. it is possible to find L of size $O(k\log(|V|))$ such that $\Psi(L) \geq \max_{|L'|=k} \Psi(L')$ The authors also give some experimental evidence that executing their algorithm with no slack is as good or better than prior heuristics on many datasets. The main new technical contribution of the work is an algorithm for building such $L$ by equating the computation of $\Psi(L)$ to mincut/maxflow in a closely related auxiliary graph. The authors then show that by greedily finding the vertex which maximizes the auxiliary min-cut, one iteratively halves the gap to the true $\Psi(L)$ value every k steps, resulting in the $k\log(|V|)$ slack bound. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. The proofs seem fine. Experimental Designs Or Analyses: No. Supplementary Material: I skimmed Appendices A and B. Relation To Broader Scientific Literature: The key contribution of this paper is a new algorithm for the graph label selection problem (itself related to active learning on graphs) for *arbitrary* underlying graphs with $O(\log(|V|))$ slack. 
Previously, algorithms (with slack) were only known for trees, with heuristic methods suggested for general graphs. The authors also prove the first NP-hardness result for GLS with no slack, though this aspect is not particularly surprising given the typical hardness of such graph-based combinatorial problems. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: It would be helpful if the authors included more discussion of how the studied problem is actually related to learning on graphs. From my understanding looking up previous works, finding $L$ with large $\Psi(L)$ corresponds to a learning strategy of Blum and Chawla getting good generalization error assuming that most pairs of vertices with edges are given the same label. This motivates the GLS (with slack) problem, since it essentially shows that one can achieve low error using $k\log(|V|)$ label queries assuming the $k$-GLS value is small. The algorithm and proofs presented in the paper are simple and natural, which I view as a strength, especially since previously algorithms were only known for trees. Giving an algorithm for general graphs therefore seems like a big advance on the problem, even if prior methods for trees had constant vs. logarithmic slack. I find the hardness result presented a bit weak. While it is formally true the result implies 3/2-approximating GLS is NP-hard, it is only for a very specific setting of parameters. Could one hope to achieve an algorithm with additive error? What about when the GLS value is large (indeed, generally it seems one might hope for this in the downstream learning application)? Can one prove hardness for learning GLS with constant slack for general graphs? Other Comments Or Suggestions: Typo in Lemma 6.1 proof: should say $\Psi(L) \geq \Delta$. Also, "he results" should read "the results". Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer gCbm, We thank you for your work reviewing our paper, we will address your points in the final version. Here we will focus on discussing the points you raised. 1) Motivation of GLS: Thanks for the suggestion. We will include a discussion of the motivation behind the GLS objective in our article to better motivate the question. 2) Hardness of approximation: Hardness of approximation results that are either stronger or capture more parameter regimes are certainly interesting questions for future research. Given the problem definition, it is plausible that the problem might inherit the hardness of sparsest cut, although we were not able to prove it. An additive approximation may also be possible. 3) Thank you for pointing out some typos in our article.
Summary: The authors propose an approximation algorithm (or, more specifically, a resource-augmented algorithm) for the graph label selection problem (GLS). GLS is an abstraction of the active learning task of selecting a small set of data points to label out of a pool of unlabeled data. The main contribution of the authors is an algorithm with theoretical guarantees on the quality of the solution for *general graphs*, whereas previous works propose approximation algorithms on restricted types of graphs. Moreover, the authors present a proof of the hardness of approximation for GLS in general graphs. ## update after rebuttal Not much to add. After reading the other reviews, I maintain my original score and review that this paper is above the acceptance bar. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I checked all the theorems and lemmas to the best of my ability. The theoretical results are the bulk of the contribution. There seem to be typos / inconsistencies in Section 6 and Lemma 6.1. This whole section seems somewhat carelessly written compared to the rest of the paper. In particular, halfway through the proof of Lemma 6.1, the authors switch from talking about $\Delta$-regular graphs to the specific case of $\Delta=3$. Experimental Designs Or Analyses: Yes. I read through the description of the datasets and the metrics. No issues identified. Supplementary Material: Yes, but not very rigorously. Sections A and B. The counterexamples to sub/supermodularity are correct. Section B also seems correct. Even if there are typos, it is a standard extension to this type of algorithm. Relation To Broader Scientific Literature: The key contributions of the paper are clearly novel and provide new insights on the hardness of GLS. Since GLS is a simplified version of the active learning task, the paper contributes theoretical results on the hardness of active learning. This is one core task in machine learning / data science. 
Essential References Not Discussed: No. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: Proofread Section 6 and Section 7. The writing is sloppy compared to the rest of the paper. Questions For Authors: In Figure 4, the heuristic of Guillory and Bilmes has better performance than your proposed method for $k \geq 30$. This is not acknowledged or mentioned in the text. Do you have any explanation as to why the heuristic is better in this case? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer 3ZDj, We thank you for your work reviewing our paper. Here we focus on responding to your question and the points you raised. 1) "In Figure 4, the heuristic of Guillory and Bilmes has better performance than your proposed method for $k \geq 30$. This is not acknowledged or mentioned in the text. Do you have any explanation as to why the heuristic is better in this case?": This behaviour appears because when $k$ approaches the number of vertices, it is no longer necessarily beneficial to select vertices which have a lot of neighbors. This can be illustrated well by considering a star graph on, say, $n$ vertices with budget $k = n - 1$. Counterintuitively, the best solution for this example is to pick all the leaves. This solution achieves score $n - 1$. Since our algorithm follows a greedy approach, it will always pick the star center and then all but one of the leaf vertices. This only achieves score $1$. The heuristic of Guillory and Bilmes, on the other hand, as long as the star center is unlabeled, will always sample a vertex from a set that contains the star center, but also all but one of the currently unlabeled leaves. Some simple calculations show that this means that the algorithm actually has a reasonably large chance of around $1/\log n$ to not include the star center and obtain the optimal solution. Since the gap between the two solutions is $n - 1$, this very significantly influences the expectation in this degenerate scenario. We will add a discussion of this phenomenon to our article. 2) We will proofread and edit Sections 6 and 7 to improve the writing in our article. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my comments. I will maintain my original score.
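The star-graph example from the rebuttal above can be checked by brute force. A small sketch (the editor's own; exponential time, for illustration only) evaluates $\Psi(L) = \min_{\emptyset \neq C \subseteq V \setminus L} e(C, V \setminus C) / |C|$ directly:

```python
from itertools import combinations

def psi(edges, vertices, L):
    """Brute-force Psi(L): minimum of e(C, V \\ C) / |C| over all
    nonempty subsets C of the unlabeled vertices V \\ L."""
    rest = [v for v in vertices if v not in L]
    best = float("inf")
    for size in range(1, len(rest) + 1):
        for C in combinations(rest, size):
            Cset = set(C)
            cut = sum(1 for u, v in edges if (u in Cset) != (v in Cset))
            best = min(best, cut / len(Cset))
    return best

# Star graph: center 0 joined to leaves 1..n-1, budget k = n - 1.
n = 6
vertices = range(n)
edges = [(0, v) for v in range(1, n)]

assert psi(edges, vertices, set(range(1, n))) == n - 1  # label all leaves: score n - 1
assert psi(edges, vertices, {0, 1, 2, 3, 4}) == 1       # greedy labels the center: score 1
```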
Summary: This paper tackles the problem of *active learning on graphs* under a label smoothness assumption. The authors study how to select a set of $k$ labeled vertices in a graph such that the labels can best predict all other vertices’ labels. The core contributions are two-fold: **(1)** a new **approximation algorithm** (with resource augmentation) that is the first to offer theoretical guarantees on general graphs, and **(2)** a **hardness result** proving that no efficient algorithm can achieve high accuracy in general without relaxing the problem. Specifically, they present an algorithm that, using an $O(\log n)$ larger label budget, achieves an objective value at least as good as the optimal set of $k$ labels. In complement, they prove it is NP-hard to distinguish whether the optimal value is small (2) or moderately larger (3), implying a constant-factor hardness of approximation for the exact budget-$k$ problem. Beyond theory, the paper includes proof-of-concept experiments on both synthetic graphs and real-world graph datasets, which indicate that the proposed method outperforms prior heuristics on this task. Claims And Evidence: The paper’s claims are generally well-supported by theoretical arguments and empirical validation. The **approximation guarantee** (the first efficient algorithm with theoretical bounds for general graphs) is clearly stated and backed by rigorous proofs. The **hardness result**, which demonstrates NP-hardness of achieving high accuracy for the considered objective, is convincingly justified by a known reduction from a well-studied NP-hard problem. Experimentally, the authors substantiate their claim of improved performance by comparing the proposed algorithm to baselines (Guillory & Bilmes, 2009; Cesa-Bianchi et al., 2010) on both synthetic and real-world graphs, consistently showing superior results. 
The authors responsibly qualify their contributions by explicitly noting limitations such as resource augmentation and computational complexity, making their claims balanced and credible. Guillory & Bilmes, 2009: Label selection on graphs. Cesa-Bianchi et al., 2010: Active learning on trees and graphs. Methods And Evaluation Criteria: The proposed methodology leverages a reduction of the graph label selection problem to a **flow-based formulation** inspired by the densest subgraph problem. The authors introduce a *flow gadget*, where labeling vertices translates into adding edges to increase the max-flow, enabling a greedy selection process. Though the original objective $\Psi(L)$ is neither submodular nor supermodular, the derived surrogate measure (flow increase) is submodular, allowing established submodular optimization guarantees. They complement this with a binary search approach to efficiently pinpoint feasible threshold values. Technical assumptions (like polynomially bounded edge weights) are clearly stated and justified, with full details provided in appendices. The evaluation methods—worst-case approximation analysis and empirical $\Psi(L)$ values—are appropriate for the theoretical nature of the paper. Empirically, the authors demonstrate improvements over baseline methods through controlled experiments on both synthetic and real-world graphs. While the evaluation does not directly address prediction accuracy, the chosen proxy ($\Psi(L)$) suitably reflects the intended smoothness goal. Overall, the methods and evaluation criteria are thoughtfully designed, effectively validating the proposed approach. Theoretical Claims: The theoretical claims are significant and appear correct based on the provided material. The main results include: - **Approximation Algorithm (Theorem 1.1):** By allowing a factor-$O(\log n)$ more labels than the budget $k$, the algorithm achieves an objective equal to the optimal solution for exactly $k$ labels. 
The authors provide a plausible proof sketch (detailed proofs in Appendix B), relying on standard submodularity and flow techniques. The assumptions (e.g., polynomially bounded integral weights) are clearly stated and reasonable. - **Hardness Result (Theorem 1.2):** The authors show it is NP-hard to distinguish between cases where the optimal value is at most 2 or at least 3, implying constant-factor hardness of approximation. This reduction from known NP-hard problems on 3-regular graphs seems correct and convincing, clearly establishing the problem’s theoretical difficulty. - **Generalization to Weighted Vertex Importance:** The paper extends the approximation results to scenarios with vertex importance weights, offering a generalization that appears logically sound and consistent, though I did not verify proofs exhaustively. Overall, the theoretical contributions are rigorous, clearly presented, and supported by well-known techniques. While the runtime complexity ($(|V|+|E|)^{1+o(1)}$ per iteration) is slightly super-linear and somewhat limits practical scalability, it does not undermine the theoretical validity of the results presented. Experimental Designs Or Analyses: The proposed method uses a flow-based formulation inspired by the **densest subgraph** problem, constructing a flow gadget to guide greedy vertex selection. Although the original label selection objective ($\Psi(L)$) is neither submodular nor supermodular, the authors identify a submodular surrogate measure (flow increase), enabling standard greedy approximation techniques. They combine this with binary search for threshold estimation, clearly outlining assumptions and deferring details to appendices. The evaluation methodology (theoretical approximation ratio, empirical objective values) is suitable for the paper's goals. 
Empirical results on synthetic corner-case graphs and two real-world graphs from SNAP consistently demonstrate better performance over baseline methods (Guillory & Bilmes, 2009; Cesa-Bianchi et al., 2010). A notable limitation is the relatively small scale of experiments (max ~4k nodes), which the authors acknowledge and attribute to computational complexity. Nonetheless, the experiments convincingly support the claims within the tested scope. Reproducibility is reasonably addressed through clear descriptions and standard datasets, though explicit code release is not mentioned. Supplementary Material: The supplementary material substantially enhances the paper:
- **Appendix A** offers helpful clarification about submodularity properties and technical preliminaries, deepening understanding of the main approach.
- **Appendix B** contains complete proofs for key theoretical results sketched in the main text, ensuring rigorous verification.
- **Appendix C** provides additional experimental results on synthetic Watts-Strogatz graphs, supporting robustness claims across different graph structures.

The supplementary material is well-aligned with the main paper and clearly referenced at relevant points (e.g., the main text points the reader to Appendix C for more experiments, and to Appendix A for background on submodularity). This makes the supplement a useful resource rather than an afterthought. In my view, the appendices meaningfully strengthen the submission: they ensure that knowledgeable readers can verify all claims (via full proofs) and see additional evidence (via more experiments). The supplementary content is indeed relevant and helpful in supporting the paper's contributions. 
Relation To Broader Scientific Literature: The paper positions itself clearly within existing literature, referencing both foundational works and recent advancements in relevant fields, specifically:
- **Graph-based Semi-supervised Learning:** (Blum & Chawla, 2001; Zhu et al., 2003; Belkin et al., 2004; Bengio et al., 2006).
- **Active Learning on Graphs:** (Guillory & Bilmes, 2009; Cesa-Bianchi et al., 2010; Dasarathy et al., 2015).
- **General and Deep Active Learning:** (Settles, 2009; Ren et al., 2021; Mac Aodha et al., 2014; Kushnir & Venturi, 2020).
- **Algorithmic Graph Theory (densest subgraph):** (Goldberg, 1984; Boob et al., 2020; Chekuri et al., 2022).

Overall, the paper effectively builds upon previous theoretical frameworks (e.g., Guillory & Bilmes, 2009; Cesa-Bianchi et al., 2010) and clearly outlines how it overcomes prior limitations, such as handling general graphs rather than special cases (trees). While the paper does not deeply explore sequential active learning, this omission is justified given its scope.

### Standard Citations:
- Blum, A., & Chawla, S. (2001). *Learning from labeled and unlabeled data using graph mincuts.* ICML.
- Zhu, X., Ghahramani, Z., & Lafferty, J. (2003). *Semi-supervised learning using Gaussian fields and harmonic functions.* ICML.
- Belkin, M., Matveeva, I., & Niyogi, P. (2004). *Regularization and semi-supervised learning on large graphs.* COLT.
- Guillory, A., & Bilmes, J. A. (2009). *Label selection on graphs.* NeurIPS.
- Cesa-Bianchi, N., Gentile, C., Vitale, F., & Zappella, G. (2010). *Active learning on trees and graphs.* COLT.
- Dasarathy, G., Nowak, R., & Zhu, X. (2015). *S2: An efficient graph-based active learning algorithm.* COLT.
- Settles, B. (2009). *Active learning literature survey.* University of Wisconsin-Madison.
- Ren, P., et al. (2021). *A survey of deep active learning.* ACM Computing Surveys.
- Mac Aodha, O., Campbell, N. D., Kautz, J., & Brostow, G. J. (2014).
*Hierarchical subquery evaluation for active learning on a graph.* CVPR.
- Kushnir, D., & Venturi, L. (2020). *Diffusion-based deep active learning.* arXiv preprint arXiv:2003.10339.
- Goldberg, A. V. (1984). *Finding a maximum density subgraph.* Technical report, UC Berkeley.
- Boob, D., et al. (2020). *Flowless: Extracting densest subgraphs without flow computations.* WWW.

Overall, the literature review is thorough and clearly situates the paper's contribution within the broader field. Essential References Not Discussed: No essential references appear missing. Literature coverage seems thorough, and relevant works are adequately cited. Other Strengths And Weaknesses: **Strengths:**
- Theoretical novelty: providing approximation guarantees for general graphs.
- Creative algorithmic insights: clever use of max-flow reduction and submodular optimization framework.
- Well-presented theoretical results with careful, rigorous reasoning.
- Useful extension to vertex importance scenarios.

**Weaknesses:**
- *Scalability:* A notable weakness is that the proposed algorithm is **computationally intensive**, with a runtime that is super-linear in the number of vertices. The authors admit it doesn't scale to very large graphs. In an era where graphs can have millions of nodes, an algorithm that likely struggles beyond a few thousand nodes has limited immediate application. This is somewhat inherent given the complexity of the problem, but it does mean the practical impact is curtailed until further optimizations are found.
- *Resource Augmentation Requirement:* The approximation guarantee is achieved by using up to $O(k \log n)$ labels instead of $k$. From a theoretical perspective this is fine (and indeed common in approximation algorithms), but from a practical standpoint, it means the method might need substantially more labeled data than one's budget in order to guarantee optimal results. 
In the experiments, they ran it with exactly $k$ labels and it still outperformed others, but there's no guarantee in that regime. In scenarios where the label budget is strict, one might wonder how suboptimal the greedy picks could be if cut off at $k$. This gap between theory (with augmentation) and practice (fixed budget) is worth noting as a weakness, though the empirical evidence suggests the algorithm still performs well without augmentation.
- *Focus on Batch Selection:* The paper addresses the *offline* (batch) selection problem. It does not consider the fully sequential active learning setting where one can query a label, update the model, and then choose the next query. The batch selection is an important scenario on its own (and often needed for parallel labeling), but it's inherently a restricted version of active learning. The contribution is still valuable, but it doesn't solve the entire active learning problem on graphs – only the one-shot selection version. Future work might need to explore how these results extend (or not) to an interactive setting.
- *Minor Clarity Issues:* While generally well-written, some parts of the proof sketches and the description of the approach are dense. For instance, the flow gadget construction might be hard to follow for readers not versed in network flow. The paper relies on the appendix for full clarity. Additionally, some terminology like “constant slack” (used to describe the prior tree algorithm's approximation) or “resource augmentation” could have been defined in a more reader-friendly way early on. These are relatively minor weaknesses in exposition.
- *Experiment Limitations:* The experiments, as mentioned, are on small graphs and a limited number of datasets. The paper might have benefited from at least one medium-scale experiment (if feasible) or some discussion on how the algorithm behaves as graph size grows (e.g., a plot of runtime or objective vs. $n$). 
Reproducibility could also be enhanced by providing pseudocode (the algorithm is described in prose but pseudocode could help implementers) – though perhaps it’s included in the appendix or could be derived from the description. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
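The greedy-with-surrogate scheme described in this review (repeatedly pick the vertex with the largest marginal gain of a submodular surrogate, here the max-flow increase of the flow gadget) can be sketched generically. The coverage function below is only a toy stand-in for the paper's flow surrogate, and all names are illustrative:

```python
def greedy_select(candidates, f, budget):
    """Greedy maximization of a monotone submodular set function f.

    In the reviewed paper the role of f is played by the max-flow
    increase of the flow gadget; here f is any callable mapping a
    set of selected vertices to a number.
    """
    selected = set()
    for _ in range(budget):
        remaining = [v for v in candidates if v not in selected]
        if not remaining:
            break
        # Pick the element with the largest marginal gain f(S + v) - f(S).
        best = max(remaining, key=lambda v: f(selected | {v}) - f(selected))
        selected.add(best)
    return selected


# Toy stand-in: neighborhood coverage, a classic monotone submodular function.
neighborhoods = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
coverage = lambda S: len(set().union(*(neighborhoods[v] for v in S)))
```

Established guarantees for this greedy apply to the submodular surrogate rather than to $\Psi(L)$ directly, consistent with the review's note that the paper's guarantee is obtained together with resource augmentation.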
Rebuttal 1: Rebuttal: Dear reviewer qHp3, Thank you for your thoughtful review; we will address your comments in the final version. Here we will focus on responding to the points you raised.

1) Scalability: We agree that scalability, together with removing the resource augmentation requirement, is the main research question left open by our work. Given the progress on linear time algorithms for the related densest subgraph problem, we hope that further work can remove this deficit. This might come at the cost of further relaxing the theoretical guarantees.

2) Resource Augmentation Requirement: We agree with your assessment that a proper approximation algorithm that does not expand the budget would be interesting from a theoretical and practical point of view. A theoretical reason for the observed practical performance of our algorithm in this regime is that a resource-augmented algorithm is also competitive for many values of $k$. This phenomenon is observed in a classic text by Tim Roughgarden and is called being loosely competitive (https://arxiv.org/abs/2007.1323).

3) Experiment Limitations: Due to the large number of maximum flow computations, we were unable to scale up our experiments to larger graphs. In our implementation (see supplementary material) we already parallelize the algorithm to enable our current experiments in a reasonable timescale. We hope that future work motivated by these preliminary experiments will significantly reduce the time complexity.

4) Focus on Batch Selection: We focus on the batch selection problem as previous work in the area does. We agree that an appropriate model of adaptivity could serve as a starting point for further fruitful research. We will leave it as an open direction for future work in the final version.

5) Minor Clarity Issues: We will go over the writeup again to further improve the exposition and clarity of the proofs. 
6) Experiment Limitations: Using maximum flow as a subroutine, and calling it at least as often as there are vertices, unfortunately means that this algorithm does not scale to large graphs as is. We already parallelized part of the implementation (see supplementary material) to enable the current experiments. As discussed in the first point, improving the scalability is a major open problem, now that a polynomial time solution has been found.
GRU: Mitigating the Trade-off between Unlearning and Retention for LLMs
Accept (poster)
Summary: This paper is concerned with the problem of large language model unlearning, which is the process of removing a specific piece of information from a pre-trained language model. The authors note that unlearning usually comes at the cost of harming the model's performance on other tasks. To mitigate this, they propose `Gradient Rectified Unlearning` (GRU). The key idea of GRU is to project the gradient of the unlearning loss onto a perpendicularly aligned subspace of the gradient of the retention loss. The authors show both theoretical and empirical results to demonstrate the effectiveness of GRU. In addition, they also consider a setting where we only have unlearning data but no retention data, and propose a method called `Task Vector Rectified Unlearning` (TRU) based on an idea similar to GRU. The authors empirically show that TRU can achieve better performance over baseline methods in this setting.

## Update after rebuttal

I am glad to see the authors have incorporated RMU into their experiment design. Overall, I am OK with this paper given that you will incorporate these changes into the final version and have raised my score to 3. However, I won't champion it since the changes in the experiment design are kinda significant.

Claims And Evidence: I find the theoretical claims convincing, while the empirical results are not very strong, mainly because of the choice of baseline methods. See the detailed comments in `Experimental Designs Or Analyses`. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. Theoretical Claims: I read the statements in the paper and skimmed through the proofs in the appendix. I did not check the proofs in detail, but there is no reason to doubt the correctness of the theoretical claims. Experimental Designs Or Analyses: A major concern about the empirical results is the choice of baseline methods. 
The authors consider a set of baselines in the main experiments, including GA, GD, NPO and WGA. I think the choice of baselines is not very strong. Among these methods, NPO is probably the best one, but it is still not considered state-of-the-art. Given that the authors are already using the WMDP benchmark, I would suggest that they consider RMU, which is proposed in the same paper. From my own research experience, RMU performs significantly better than NPO in the trade-off between unlearning and retention. Without considering RMU, the empirical improvements of GRU and TRU over the baselines are not very convincing to me. Supplementary Material: I skimmed through Appendix A without checking it in detail. Relation To Broader Scientific Literature: This paper's idea of projecting the gradient of the unlearning loss onto a perpendicularly aligned subspace of the gradient of the retention loss is potentially useful for many algorithms in the context of unlearning, as the authors demonstrated in the paper. But it remains unclear if this improvement applies to state-of-the-art methods like RMU. Essential References Not Discussed: See the comments in `Experimental Designs Or Analyses`. Other Strengths And Weaknesses: The paper is well-written and easy to follow, and I think the idea of GRU is neat. Figure 3 is a bit confusing. Each algorithm's performance with and without GRU is shown as a segment in the figure. But it is unclear to me which endpoint of the segment is with GRU, i.e., whether GRU makes the performance better or worse. I can only guess from the caption that GRU improves the performance, but I think it would be necessary to make it clearer in the figure or at least in the caption. Other Comments Or Suggestions: Line 254: `datat` should be `data`. Line 255: `eeach` should be `each`. Line 267: $\mathbf T_s$ is not defined. Questions For Authors: What is the rationale behind the choice of baseline methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
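The projection step summarized in this review (dropping the component of the unlearning gradient that opposes the retention gradient) can be sketched as follows. This is an illustration of the stated idea, not the authors' implementation; gradients are flattened to plain lists and the retention gradient is assumed nonzero:

```python
def rectify_gradient(g_unlearn, g_retain):
    """Sketch of GRU-style rectification as described in the summary:
    if the unlearning gradient conflicts with the retention gradient
    (negative inner product), project out the conflicting component
    so the update no longer opposes retention to first order.
    """
    inner = sum(u * r for u, r in zip(g_unlearn, g_retain))
    if inner >= 0:
        return list(g_unlearn)  # no conflict: leave the gradient untouched
    norm_sq = sum(r * r for r in g_retain)  # retention gradient assumed nonzero
    return [u - (inner / norm_sq) * r for u, r in zip(g_unlearn, g_retain)]
```

By construction, the rectified gradient has a non-negative inner product with the retention gradient, so a small step along it does not increase the retention loss to first order.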
Rebuttal 1: Rebuttal: Thank you sincerely for your constructive comments and for helping us identify typos. We hope that the feedback provided below will address your concerns.

> Q1. A major concern about the empirical results is the choice of baseline methods. Given that the authors are already using the WMDP benchmark, I would suggest that they consider RMU, which is proposed in the same paper. From my own research experience, RMU performs significantly better than NPO in the trade-off between unlearning and retention. Without considering RMU, the empirical improvements of GRU and TRU over the baselines are not very convincing to me.

**A1**. We aim to use a general set of representative methods for benchmarking. Regrettably, RMU, possibly due to its implementation complexity, is not widely adopted in earlier studies, such as [1,2] (though both cited WMDP). Although some recent papers incorporated RMU, their evaluations are either limited to WMDP [3], or they reveal its sensitivity to hyperparameters [4]. However, we completely agree that RMU represents the state of the art on WMDP, which merits our particular focus. We present the results using RMU on WMDP, along with its version with GRU. As observed, GRU indeed enhances performance for both retention and unlearning, showing that GRU offers a general way to mitigate the trade-off between unlearning and retention.

|Method|WMDP Bio↓|WMDP Cyber↓|WMDP MMLU↑|
|:-:|:-:|:-:|:-:|
|RMU|0.26|0.31|0.41|
|w/ GRU|0.26|0.28|0.44|

We adjust $\alpha$ from the default 1200 to 100 in the RMU open-sourced code, after finding that the original settings caused the retain term to overly dominate, hindering model updates. Similar sensitivity issues, such as precision setups, have been reported by others in the WMDP GitHub repository. We will study these issues in the future. However, we continue to see RMU as a promising approach, particularly when viewed through the lens of local knowledge perturbation. 
We plan to delve deeper into the RMU concept and explore more specific versions of GRU for RMU (e.g., constraints for masking) in our future work.

[1] Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning.
[2] MUSE: Machine Unlearning Six-Way Evaluation for Language Models.
[3] Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning.
[4] Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond.

> Q2. Figure 3 is a bit confusing. Each algorithm's performance with and without GRU is shown as a segment in the figure. But it is unclear to me which endpoint of the segment is with GRU, i.e., whether GRU makes the performance better or worse. I can only guess from the caption that GRU improves the performance, but I think it would be necessary to make it clearer in the figure or at least in the caption.

**A2**. Apologies for any confusion caused by the figure annotations. As detailed in the figure caption, each pair of scores represents metric values (either FQ or MU) before and after applying the GRU enhancement. For instance, taking Figure 3(a) as an example, the pair of scores (-16.93, -3.52) represents the FQ scores for the GA method, where -16.93 is the score without the use of GRU and -3.52 is the score with GRU. The visual representation using an upward growing grid between these scores emphasizes the improvements achieved by incorporating GRU. We will clarify the meaning of our figures and provide more explicit explanations in our revision to ensure clear understanding.

--- Rebuttal Comment 1.1: Comment: Thank you for your reply.
- I am glad to see you have incorporated RMU into your experiment design.
- To say a bit more about my confusion with the figures, note that you are stating two results - with and without GRU - where it is not necessary that GRU provides an improvement. Therefore, it's better to display the results well. 
Overall, I am OK with this paper given that you will incorporate these changes into the final version and have raised my score to 3. However, I won't champion it since the changes in the experiment design are kinda significant. --- Reply to Comment 1.1.1: Comment: Sincere thanks for your great support and the raised score, which mean a great deal to us! We will absolutely include additional results and better visualization in our revision and consult the authors of RMU to address the hyper-parameter configuration issue. We are committed to conducting further experiments on RMU. Thank you once again for your comments and support!
Summary: This paper addresses the problem of Machine Unlearning in Large Language Models (LLMs). To balance performance between the retain set and the forget set, the unlearning update is adjusted to avoid harming the performance on the retain set during the unlearning process. Starting from a fundamental optimization problem designed to implement this idea, the paper presents its closed-form solution, along with pseudocode and the theoretical foundations supporting the approach. The proposed technique is evaluated on WMDP, MUSE, and TOFU benchmarks, and it demonstrates successful performance improvements when combined with the loss functions of GA, GD, and NPO. Claims And Evidence: The claims of the proposed method are clearly presented. Although there are some weaknesses, the paper is overall well-written and solid. Below, I will provide a more detailed discussion on these points. Methods And Evaluation Criteria: The authors use three benchmarks, which seems reasonable. For the evaluation metrics, they adopt those proposed in each benchmark, which is also appropriate. Regarding the performance on the forget set, they show how much improvement is achieved when combined with existing loss functions, which appears to be a reasonable approach. However, I have some concerns about the evaluation on the retain set. Specifically, I question whether the current way of handling the retain set is sufficient. In my view, unless the utility performance on the retain set (e.g., MU or similar metrics) remains at 90–95% of the original model's performance, the improvements on the forget set may have limited practical meaning because the model's overall performance as a language model would already be compromised. Therefore, as discussed in the Task Arithmetic paper, I believe it is more appropriate to constrain the retain set performance to be at least 95% of the original model and then examine how much forgetting can be achieved under that constraint. 
Including experimental results from this perspective would significantly strengthen the paper. Theoretical Claims: This paper makes a theoretical claim, which appears reasonably sound upon skimming. However, I did not rigorously verify the theoretical development by working through the derivations in detail. Experimental Designs Or Analyses: Although hyperparameter settings, such as learning rate (LR), are specified in the paper, the process by which they were determined is not described. That said, the paper does explain how the hyperparameters for the proposed GRU method were selected. However, some questions remain. In particular, how was the validation data chosen? Since there is no widely established standard for selecting validation data in unlearning settings, I believe the paper should provide a more detailed explanation on this aspect. From a metric perspective, the evaluation feels somewhat insufficient. For instance, only WMDP-Cyber and MUSE-KnowMem results are reported, but VerbMem results are missing. I think that evaluating the proposed method on only a subset of metrics provided by the benchmarks does not fully demonstrate its effectiveness. A more comprehensive evaluation, covering all relevant metrics such as VerbMem, would strengthen the paper's claims. Supplementary Material: I have reviewed the Detailed Experimental Results section and did not find any issues. Relation To Broader Scientific Literature: The paper is well-connected to the existing literature, as it addresses the problem of severe performance degradation on the retain set caused by previous unlearning methods. Essential References Not Discussed: There was nothing in particular. Other Strengths And Weaknesses: The paper was overall well-written and easy to understand. The logical flow of arguments supporting the claims was excellent. Other Comments Or Suggestions: N/A Questions For Authors: I recognize that my perspective on unlearning evaluation differs from that of the authors. 
While it may not be entirely fair to insist that my view is the only correct one, I still struggle to understand the significance of forget set performance when the model's overall capability as a language model is already degraded. From my perspective, if the utility performance drops significantly, any gain on the forget set becomes less meaningful. It would be very helpful if the authors could either provide a counter-argument to this view or present experimental results where the utility performance is controlled to remain at 90–95% of the original model, so that the trade-off between the retain and forget sets can be properly evaluated. For reference, my overall recommendation is borderline. However, since there is no borderline option this time, I selected weak reject. I plan to revisit and potentially update my score after reading the author response letter. Code Of Conduct: Affirmed. Overall Recommendation: 3
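The evaluation protocol suggested in this review (pin utility at 90–95% of the original model, then compare forgetting) could be operationalized by interpolating between the original and unlearned weights and binary-searching the mixing weight. A toy sketch under the assumption that utility decreases monotonically as more weight is placed on the unlearned model; `utility_fn` is a hypothetical callable, not part of the paper:

```python
def calibrate_mixing_weight(utility_fn, target, iters=40):
    """Binary-search the largest weight alpha on the unlearned model
    such that the interpolated model
        alpha * theta_unlearned + (1 - alpha) * theta_original
    still meets the utility target. Assumes utility_fn(alpha) is
    monotonically non-increasing in alpha.
    """
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if utility_fn(mid) >= target:
            lo = mid  # utility still high enough; push toward the unlearned model
        else:
            hi = mid
    return lo
```

With utility pinned this way, forget-set metrics become directly comparable across methods, which is the comparison this review asks for.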
Rebuttal 1: Rebuttal: Sincere thanks for your constructive comments, and we hope the following feedback can address your concerns.

> Q1. Specifically, I question whether the current way of handling the retain set is sufficient. In my view, unless the utility performance on the retain set (e.g., MU or similar metrics) remains at 90–95% of the original model's performance, the improvements on the forget set may have limited practical meaning because the model's overall performance as a language model would already be compromised.

**A1**. We totally agree with your opinion. A large decline in utility performance would indeed render the resulting model ineffective, at which point the process of LLM unlearning also becomes meaningless. On the other hand, carefully tuning the hyperparameters to achieve the goal of preserving 85, 90, and 95% of the original model performance can be tedious. Fortunately, as mentioned in Section 2.2, the UWC method offers a post-unlearning strategy that enables calibrating model performance via model mixing, alleviating the challenges associated with maintaining utility. Due to the relatively high cost of UWC, here we take GA and NPO under the challenging Phi setup as two examples to show UWC's flexibility and GRU's effectiveness. The improvements achieved by GRU are more pronounced. We will add more results in our revision. Many thanks for your suggestion. 

|w/ UWC|FQ 5%↑|MU 5%↑|FQ 10%↑|MU 10%↑|
|:-:|:-:|:-:|:-:|:-:|
|Original|-28.8|0.52|-40.5|0.52|
|GA (85%)|-22.0|0.44|-35.3|0.44|
|w/ GRU (85%)|-8.6|0.44|-20.8|0.44|
|GA (90%)|-28.1|0.47|-36.8|0.47|
|w/ GRU (90%)|-15.2|0.47|-28.8|0.47|
|GA (95%)|-28.1|0.49|-39.8|0.49|
|w/ GRU (95%)|-18.8|0.49|-33.9|0.49|

|w/ UWC|FQ 5%↑|MU 5%↑|FQ 10%↑|MU 10%↑|
|:-:|:-:|:-:|:-:|:-:|
|Original|-28.8|0.52|-40.5|0.52|
|NPO (85%)|-15.7|0.44|-31.9|0.44|
|w/ GRU (85%)|-9.5|0.44|-15.2|0.44|
|NPO (90%)|-20.1|0.47|-35.3|0.47|
|w/ GRU (90%)|-12.9|0.47|-18.2|0.47|
|NPO (95%)|-25.0|0.49|-38.2|0.49|
|w/ GRU (95%)|-14.0|0.49|-20.8|0.49|

> Q2. In particular, how was the validation data chosen? Since there is no widely established standard for selecting validation data in unlearning settings, I believe the paper should provide a more detailed explanation on this aspect.

**A2**. Many thanks for your comments. This perspective has indeed been overlooked by the majority of the community, and we appreciate the opportunity to detail our implementation below. For **unlearn data**, the datasets used for validation are exactly those used for unlearning **in the cases of TOFU and MUSE**, as they focus on privacy and copyright removal and do not require generalization. However, **for WMDP**, which aims to unlearn the entire concepts of harmful subjects, generalization is crucial. Hence, we separate a small set of test data (comprising 200 randomly selected samples) for evaluation, with respect to both the bio and cyber setups. For **retain data**, we separate 200 (20 for MUSE Book due to its small size of retain data) randomly selected samples from the original retain datasets used for unlearning, with respect to all three benchmarks. This ensures that we have a distinct set of data for verifying retention, separate from the test data. We will add a detailed discussion in our revision.

> Q3. Only WMDP-Cyber and MUSE-KnowMem results are reported, but VerbMem results are missing.

**A3**. 
The results for VerbMem are in Appendix B. We did not include these in the main text as their values and ranks are quite similar to KnowMem-U, offering limited new insights. We apologize for any confusion caused and will clarify this choice in our revision.

--- Rebuttal Comment 1.1: Comment: I appreciate that the authors agreed with my perspective. However, they only provided results showing controlled performance on the retain set for the TOFU dataset. Without knowing how the method performs on MUSE and WMDP in the same setting, it is difficult for me to recommend acceptance of the paper. Unfortunately, I will maintain my initial rating.

--- Reply to Comment 1.1.1: Comment: Many thanks for your further comments! In our initial responses, we aimed to show the possibility of controlling retention performance post-unlearning based on the well-established UWC calibration framework. We are more than happy to provide more aligned results (85, 90, and 95%) across other benchmarks (WMDP and MUSE). We present the results below, which further verify our effectiveness in mitigating the trade-off. 

|Method|WMDP Bio↓|WMDP Cyber↓|WMDP MMLU↑|
|:-:|:-:|:-:|:-:|
|GA (85%)|0.44|0.40|0.49|
|w/ GRU (85%)|0.37|0.35|0.49|
|NPO (85%)|0.34|0.39|0.49|
|w/ GRU (85%)|0.25|0.36|0.49|
|GD (85%)|0.31|0.37|0.49|
|w/ GRU (85%)|0.29|0.35|0.49|
|GA (90%)|0.53|0.41|0.52|
|w/ GRU (90%)|0.46|0.39|0.52|
|NPO (90%)|0.45|0.40|0.52|
|w/ GRU (90%)|0.25|0.39|0.52|
|GD (90%)|0.37|0.40|0.52|
|w/ GRU (90%)|0.35|0.37|0.52|
|GA (95%)|0.60|0.42|0.55|
|w/ GRU (95%)|0.55|0.41|0.55|
|NPO (95%)|0.54|0.42|0.55|
|w/ GRU (95%)|0.26|0.41|0.55|
|GD (95%)|0.56|0.43|0.55|
|w/ GRU (95%)|0.44|0.41|0.55|

|Method|MUSE VerbMem↓|MUSE KnowMem-U↓|MUSE KnowMem-R↑|MUSE PrivLeak → 0|
|:-:|:-:|:-:|:-:|:-:|
|GA (85%)|98|41|59|-56|
|w/ GRU (85%)|7|32|59|-39|
|NPO (85%)|55|28|59|-50|
|w/ GRU (85%)|0|21|59|-11|
|GD (85%)|98|42|59|-53|
|w/ GRU (85%)|84|41|59|-51|
|GA (90%)|100|45|62|-52|
|w/ GRU (90%)|97|42|62|-52|
|NPO (90%)|95|39|62|-53|
|w/ GRU (90%)|0|22|62|-15|
|GD (90%)|98|44|62|-54|
|w/ GRU (90%)|96|43|62|-52|
|GA (95%)|100|47|66|-54|
|w/ GRU (95%)|98|45|66|-53|
|NPO (95%)|91|41|66|-53|
|w/ GRU (95%)|2|25|66|-26|
|GD (95%)|100|47|66|-54|
|w/ GRU (95%)|98|46|66|-53|

We fully agree that the aligned performance facilitates a fairer and clearer comparison of unlearning efficacy. We will certainly add more results and the related discussions in our revision. We hope our new results can address your concerns. Please let us know if you need any further information or if there are additional points you would like to discuss with us. We are excited for further discussions with you!
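For context on the retain-data-free variant (TRU) mentioned in these reviews: it builds on task vectors. A minimal sketch of plain task-vector negation, with parameters flattened to lists; TRU's rectification step is deliberately omitted here, so this is background rather than the proposed method:

```python
def negate_task_vector(theta_base, theta_forget_ft, lam=1.0):
    """Plain task-vector negation: fine-tune the base model on the
    forget data, take the weight delta tau = theta_forget_ft - theta_base
    (the "task vector"), and subtract a scaled copy from the base weights.
    """
    return [b - lam * (f - b) for b, f in zip(theta_base, theta_forget_ft)]
```

TRU, as described in the reviews, rectifies this vector in a manner analogous to how GRU rectifies gradients before the subtraction is applied.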
Summary: The paper introduces Gradient Rectified Unlearning, an unlearning framework that constrains the unlearning gradient by projecting it onto the half-space where retention is preserved, using the gradient of the loss computed on mini-batch retain samples. This framework can be applied orthogonally to common objective-based machine unlearning methods, and the authors demonstrated its removal efficacy and retention reliability with both empirical evaluation (on TOFU, WMDP and MUSE benchmarks) and theoretical analysis. The authors also explored the retain-data-free setting by incorporating task vectors, which shows performance gain on the TOFU benchmark. Claims And Evidence: In general, I find the claims supported by clear and convincing evidence.
* The paper claims that "GRU offers enhanced reliability in retention", which matches with intuition, as GRU uses the retention gradient to adjust the original unlearning gradient, ensuring no negative projection conflicting with the retention direction. This claim is supported by empirical evidence: model utility consistently improves when GRU is applied with current unlearning methods, especially for GA-based methods where the retain set was not originally used. The authors also present theoretical evidence with Theorem 3.2, showing that the retention loss with GRU does not surpass that from the original unlearning method. Contingent on its proof being sound (see the `Theoretical Claims` section for concerns), I think this claim is well-supported.
* Regarding the aspect of forget quality, the paper claims that GRU achieves “powerful unlearning capabilities” (L91). Although Theorem 3.1 and the experiments provide some evidence for this, I find this somewhat overclaimed. The improvement in removal effectiveness on WMDP appears not as evident or consistent as on TOFU (as seen in Fig. 4a and the WMDP-cyber results in Table 3). Given that the room for improvement on MUSE with the KnowMem metric is limited (Fig. 
5a), it would be more convincing if the authors included other metrics introduced in MUSE (e.g. PrivLeak). Additionally, it would be helpful if more baselines could be included, such as RMU [1] and FLAT [2]. [1]. The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning [2]. LLM Unlearning via Loss Adjustment with Only Forget Data Methods And Evaluation Criteria: The problem is to mitigate the tradeoff between removal and retention in LLM unlearning, and specifically, the side effects or excessive unlearning common among current gradient-based methods. The proposed methods (both GRU and TRU) make sense and are well-motivated by the intuition of projecting out the component opposite to the retain gradient direction, thereby forcing a non-obtuse angle that regulates the gradient updates, leading to a smaller retain loss and potentially better retention or minimal side effects on model utility. The choice of the three commonly used benchmarks is reasonable, and the authors follow the default evaluation settings for each benchmark, even though this requires using different backbones and tuning specific hyperparameters for each dataset. However, additional metrics or baselines could be included, as noted in the previous section. Theoretical Claims: I checked the correctness of the closed-form solution derivation in A.1 and the proof of Theorem 3.1 in A.2. For the proof of Theorem 3.2 in A.3, the authors start by introducing the definition of q-Curvature, with which I am not familiar. Despite a careful literature search (e.g. in differential geometry [3]), I was unable to find a reference for this definition within the machine (un)learning community, or determine how this concept is derived from existing mathematical literature. Therefore, I am not able to assess the soundness of this proof. It would be much appreciated if the authors could provide more background and specify how it is adapted from previous works. [3]. (2010). Q-curvature.
In: Conformal Differential Geometry. Oberwolfach Seminars, vol 40. Birkhäuser Basel. https://doi.org/10.1007/978-3-7643-9909-2_1 Experimental Designs Or Analyses: The provided experimental designs and analyses are valid, and the authors adhere to the default experimental designs and settings from the chosen benchmarks. Supplementary Material: No supplementary material is provided. It would be much appreciated if the authors could provide implementation details (e.g. see `Questions For Authors` section) and evaluation scripts to help validate the reliability of the results. Relation To Broader Scientific Literature: The proposed idea addresses the problem of excessive unlearning, and therefore mitigates the fundamental tradeoff in machine unlearning between adequate removal and model utility. Some results and findings, such as GA-based methods can be greatly improved by augmenting with GRU, and NPO-based methods generally work better, align with broader scientific literature. The exploration of TRU without requiring retain data displays potential for this framework to benefit a wider range of unlearning methods. Essential References Not Discussed: This paper has significant overlap in terms of methodology with [4], which was published in MM '24. [4] uses the same idea of gradient projection when there is a conflict (termed "Gradient Direction Rectification"; see Sec 4, Eq 4, and Algorithm 1 of the paper). Given that this gradient projection idea is indeed intuitively simple, I do acknowledge that it is likely that the authors have developed this idea independently but missed this work unintentionally. Nevertheless, a reference and a discussion on the similarities and differences should be added. Some less significant problems include: * Some related baselines could be added to strengthen the paper, as mentioned in the `Claims And Evidence` section. * WGA [5] was cited and used as a baseline method in the experiments. However, it was not discussed in Section 2.2. 
Notably, this paper also examines the gradient direction of the unlearning objective. A discussion clarifying the differences between the two works should also be included. [4]. GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients [5]. Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond Other Strengths And Weaknesses: Strengths: * The authors extend this idea of rectification to the retain-data-free setting, which is arguably more valuable. Weaknesses: * While the method is simple, intuitive, and well presented with geometric illustration, similar gradient projection ideas for unlearning have been proposed in previous works (see `Essential References Not Discussed`), which impacts its novelty and originality. * The effectiveness of this method could be sensitive to the size, quality, and distribution of the retain set. * There is no analysis of the efficiency and added complexity of this method, such as wall-clock time for a gradient update step. Other Comments Or Suggestions: Minor typos: Theorem 3.2 Remark., "heurstics" -> "heuristics" Section 4, paragraph 2, "eeach" -> "each" Questions For Authors: 1. How are mini-batch retain samples selected for baselines that do not have a retain set, e.g. GA and WGA? 2. For the evaluation setting of TOFU, are you using exactly the TS-test (log) p-values as FQ? And for MU, are you using only the Truth Ratio sub-metric, or the aggregation of Truth Ratio, Prob, and ROUGE-L? It would be helpful if you could clarify these details in the paper. 3. By default, MUSE used LLaMA-2-7B on the news subset and ICLM-7B for the HP books subset, and you are only using the ICLM-7B model. Are you testing on the entire MUSE, or just the books subset? 4. Could GRU be seen as an explicitly-constrained variant of GD that relies on retention gradient estimates from up-sampled retain data? 5. Would GRU/TRU be robust to relearning attacks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Due to strict space limits, we try our best to address the most critical questions as briefly as possible. We sincerely welcome any further concerns and will try our best to respond to them. > Q1. The improvement in removal on WMDP appears not as evident or consistent as on TOFU. **A1**. WMDP uses a "Choose 1 from 4 QA accuracy," where **random guessing (totally unlearned) would result in about 0.25**. As observed, QA accuracies are already near 0.25, leaving little room for further declines. Our efficacy on WMDP is actually reflected by retention, with a 2-7 percentage point increase over that without GRU. These results confirm that **GRU maintains strong unlearning while improving retention**, aligning with our primary goal. > Q2. It would be helpful if more baselines can be included, such as RMU and FLAT. **A2**. Many thanks for your suggestion. We present results using RMU and FLAT on the WMDP, along with their versions with GRU. As observed, GRU indeed enhances performance for both retention and unlearning.

|Method|WMDP Bio↓|WMDP Cyber↓|WMDP MMLU↑|
|:-:|:-:|:-:|:-:|
|RMU|0.26|0.31|0.41|
|w/ GRU|0.26|0.28|0.44|
|FLAT|0.25|0.25|0.27|
|w/ GRU|0.24|0.25|0.28|

We adjust $\alpha$ from the default 1200 to 100 in the RMU open-sourced code, after finding that the original settings caused the retain term to overly dominate, hindering model updates. Similar sensitivity issues, such as precision setups, have been reported by others in the WMDP GitHub repository. We also expect the potential to further improve RMU and FLAT. For RMU, which perturbs localized representations, we aim to develop mask-based constraints for better knowledge localization. FLAT uses a preference-based learning framework with gradient behaviors distinct from methods like GA and NPO. Thus, we need to devise a new retain risk for more appropriate constraints, which we will explore in the future. > Q3. This paper has significant overlap in methodology with GDR.
WGA also examines the gradient direction of the unlearning objective. **A3**. **Regarding GDR**, its requirement to store gradients across epochs is infeasible for LLMs due to memory costs and fewer epochs (e.g., 1 for MUSE). GRU avoids extensive caching and epoch-dependent operations, which is more practical for LLMs. Also, for LLMs, the retain data can lead to overfitting when directly guiding updates as in GDR. GRU, by not directly merging retain gradients into its updates, avoids this issue. Moreover, our enhanced version, TRU, explicitly addresses this concern and is superior to GDR. **Regarding WGA's associated paper**, it employs gradient computations to find reliable unlearning objectives. While conceptually possible, its practical challenges motivate our optimization-focused research. Thus, **our work extends beyond WGA's, further emphasizing the importance of exploring gradients, while focusing on a promising yet orthogonal direction**. > Q4. It would be much appreciated if the authors could provide details to help validate the results. **A4**. The configurations are detailed in Section 5 and Appendix C.1. The batch size for retain samples, set at 32, mirrors the unlearn batch and follows the same random sampling as in GD. We further provide our source codes via the [Anonymous GitHub](https://anonymous.4open.science/r/GRU-664A/). > Q5. For TOFU, are you using the TS-test (log) p-values as FQ? And for MU, are you using the aggregation of Truth Ratio, Prob, and ROUGE-L? **A5**. To enhance readability and ease figure plotting, we uniformly report the $\log p$ values, avoiding dealing with extremely small $p$ like $1e^{-19}$. For MU, we adhere to TOFU, reporting the aggregated metrics. > Q6. By default, MUSE used LLaMA-2-7B on the news subset and ICLM-7B for the HP books subset, and you are only using the ICLM-7B model. Are you testing on the entire MUSE, or just the books subset? **A6**.
Following your suggestion, we further report results on MUSE News with LLaMA-2-7B, taking GA and NPO due to the space limit. UWC is adopted for calibration towards a fair comparison. As observed, there are notable improvements in PrivLeak, especially for GA.

|Method|VerbMem ↓|KnowMem-U ↓|KnowMem-R ↑|PrivLeak → 0|
|:-|:-:|:-:|:-:|:-:|
|GA|31|60|47|-100|
|w/ GRU|11|56|47|-18|
|NPO|43|58|47|-100|
|w/ GRU|35|50|47|-96|

> Q7. Could GRU be seen as an explicitly-constrained variant of GD that relies on retention gradient estimates from up-sampled retain data? **A7**. Yes, your understanding is correct. Using GA as the objective and combining it with GRU corresponds to an explicitly-constrained variant of GD. The benefits of our approach are clear, as it forcibly ensures retention. In contrast, using the original GD can result in unlearning gradients dominating updates, adversely affecting retention. --- Rebuttal Comment 1.1: Comment: Thank you for your reply and additional experiments, which addressed some of my concerns. However, I think a critical issue remains: * Regarding originality, novelty, and comparison with previous works: GDR and this work share the same core idea of gradient rectification, which projects the task gradient onto the orthonormal plane of the conflicting gradient. A similar gradient projection idea, I should note, has been widely adopted in the multi-task learning literature when two or more tasks are in trade-offs. Specifically, Gradient Surgery/PCGrad [1] used the same projection rule. It is also worth noting that Theorems 3.1 (convergence guarantee) and 3.2 (loss improvement guarantee) share great similarity with Theorem 1, Definition 5 and Theorems 2, 3 of [1]. To be clear, I do think it's a fair contribution to adapt similar ideas to LLMs and LLM unlearning (where the two tasks, forget and retain, can be in conflict), but essential citations and discussions should be included. [1].
Gradient Surgery for Multi-Task Learning (2020; citations > 1200) &nbsp; Some further questions: 1. > Regarding GDR, its requirement to store gradients across epochs is infeasible for LLMs due to memory costs and fewer epochs (e.g., 1 for MUSE). GRU avoids extensive caching and epoch-dependent operations, which is more practical for LLMs. Can you elaborate on how you did that? From the code (`dataloader.py`), it appears that you are still caching the unlearning and retain gradients in `compute_loss` for every training step, flattening each gradient and storing its structure map with `flatten_and_store_grads`. To me, this does not seem less complex than the [GDR implementation](https://github.com/RUIYUN-ML/GDR-GMA/blob/main/memory_bank.py). 2. How often is the dot similarity negative (thus requiring gradient projection/adjustment)? Does the frequency of it being negative and its magnitude vary across epochs, or perhaps depend on tasks/datasets (e.g. degree of overlap between retain and forget set)? I will consider updating my score if the above concerns are well-addressed. --- **Edit**: I appreciate the frank and engaging response, and the ablation regarding EMA. Contingent on acknowledging the scope as "adapting existing gradient rectification techniques to LLM unlearning" and including a discussion on related work, I'm OK with this paper. I have raised my score to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your thorough and careful review! We are so happy to have engaged with a rigorous and insightful reviewer like you, whose pointed yet meaningful questions not only play a crucial role in improving the quality of this paper but also contribute a lot to the rigor and professionalism of the ICML community. Kindly see our responses to your questions below. > Q1. Regarding originality, novelty, and comparison with previous works. **A1**.
We totally agree that gradient rectification is a proven strategy for resolving conflicting goals (e.g., in continual learning [1] and multi-task learning, as you suggested) and acknowledge our alignment with this principle for balancing unlearning and retention. However, we remain confident in our contribution, given that **this work pioneers the adaptation of gradient rectification to LLM unlearning**. To support this claim, it is essential to highlight that, between classical machine unlearning (explored by GDR) and LLM unlearning, **the core distinction lies in the retention data**: classical machine unlearning assumes closed-world discriminative models with a well-defined (in-distribution) retention distribution. For general-purpose LLMs, such a precise definition is unattainable because 1) their multi-phase training procedures (pre-training, SFT, and RL) proceed without access to the associated data, and even given these data, 2) their extreme scales make it impractical to revisit all data during unlearning. Therefore, the retention data adopted for LLM unlearning are actually surrogate and biased. This factor motivates our paper to be structured as follows: In Section 3, we first show the possibility of incorporating gradient rectification into LLM unlearning, leading to GRU. Then, we delve into the limitations of current LLM unlearning setups, proposing TRU to better tackle the issue of ill-defined (or biased) retention data. TRU is a preliminary step toward more reliable LLM unlearning, with future work planned to expand its scope. We will carefully highlight these crucial points in our revision. We will **add a section on related work**, where we will review the literature on existing gradient rectification techniques and explore the differences between classical machine unlearning and LLM unlearning. Moreover, for the theoretical derivation, we recognize the similarities with PCGrad.
We will incorporate a detailed discussion and extra remarks to further highlight their contributions. We would like to express our sincere thanks again for your comments, which are critical to improving the quality and rigor of this manuscript. [1] Gradient Episodic Memory for Continual Learning. 2017. > Q2. From the code, it appears that you are still caching the unlearning and retain gradients in compute_loss for every training step, flatten and store the structure map of each gradient with flatten_and_store_grads. **A2**. ``flatten_and_store_grads`` is used to reduce GPU memory usage during gradient rectification (preventing out-of-memory issues). Its values are cleared at the end of each step, and it does not function as a gradient bank. Below, we would like to further highlight the key differences between the implementations of GRU and GDR. In our GRU, we cache an exponential moving average (EMA) to achieve a more accurate estimation of the **average retain gradients**, which are dynamically updated. It is implemented as follows:
```
self.flattened_retain = self.moving_avg * self.flattened_retain_accumulation + (1 - self.moving_avg) * self.flattened_retain_old
```
On the other hand, GDR requires storing each **individual batch of gradients**, as indicated by the following line of code:
```
bank = MemoryBank(size=math.ceil(t_dataset_sizes/args.batch_size))
```
Overall, the memory cost of our GRU is equivalent to the memory required for storing the parameters that require gradients. In contrast, GDR further multiplies this cost by the number of unlearning steps within each epoch. Therefore, a clear advantage of our approach is that **we notably reduce the additional memory costs compared to GDR**, making our method more practical for LLMs. Moreover, our GRU maintains reliable performance even without caching. We conduct experiments below to validate this claim, where GRU is implemented without EMA.
|Method|FQ 5%↑|MU 5%↑|FQ 10%↑|MU 10%↑|
|:-:|:-:|:-:|:-:|:-:|
|GA|-16.93|0.00|-14.37|0.00|
|w/ GRU|-3.52|0.51|-7.34|0.22|
|NPO|-10.91|0.49|-8.70|0.29|
|w/ GRU|-9.96|0.54|-4.12|0.35|
|GD|-13.48|0.55|-13.92|0.39|
|w/ GRU|-12.42|0.56|-10.91|0.53|

> Q3. How often is the dot similarity negative (thus requiring gradient projection/adjustment)? **A3**. Figure 6 in Appendix D visualizes the dot similarity for TOFU, which remains negative during unlearning. Extra visualizations have been conducted for MUSE and WMDP, as detailed in the provided [link](https://anonymous.4open.science/r/GRU-664A/cos.png), leading to the same conclusion.
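The gradient rectification rule discussed throughout this thread (and its relation to PCGrad-style gradient surgery) fits in a few lines of code. The following is an illustrative NumPy sketch of the generic projection rule (project the unlearning gradient onto the half-space where it does not oppose the retain gradient), not the authors' released implementation; the function name is ours, and a nonzero retain gradient is assumed.

```python
import numpy as np

def rectify(g_u, g_r):
    """Project the unlearning gradient g_u onto the half-space
    {g : <g, g_r> >= 0}, where g_r is the (possibly EMA-smoothed)
    retain gradient. Illustrative PCGrad-style rule; assumes g_r != 0."""
    dot = g_u @ g_r
    if dot < 0:  # conflict: strip the component opposing retention
        g_u = g_u - (dot / (g_r @ g_r)) * g_r
    return g_u

# Toy example with conflicting gradients.
g_u = np.array([1.0, -1.0])   # unlearning direction
g_r = np.array([0.0, 1.0])    # retention direction
g = rectify(g_u, g_r)         # -> array([1., 0.])
assert g @ g_r >= 0           # the rectified update never opposes g_r
```

When the two gradients already agree (non-negative inner product), the update passes through unchanged, which matches the rebuttal's description of retain gradients acting only as a denoising mechanism rather than contributing their own update direction.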
Summary: The authors introduce GRU as a flexible framework designed to be integrated with existing unlearning methods to balance the trade-off between knowledge removal and retention in LLM unlearning. GRU constrains unlearning gradients to minimize their negative impact on retention. Theoretical analysis and empirical evaluations across multiple benchmarks and diverse unlearning baselines confirm its effectiveness. Claims And Evidence: The authors present a detailed theoretical analysis explaining how GRU effectively mitigates the negative impact on retention. Additionally, extensive experimental results further validate its effectiveness. Methods And Evaluation Criteria: The authors incorporate the proposed GRU into several well-established baselines, including GA, WGA, NPO, and GD, and conduct extensive experiments on widely used benchmarks such as TOFU, WMDP, and MUSE. Theoretical Claims: The authors present two theorems, Theorem 3.1 and Theorem 3.2, along with detailed proofs and analyses to support their validity. Experimental Designs Or Analyses: The authors present detailed experimental settings, evaluations, and baselines. The design and analysis are well-structured and valid. Supplementary Material: The supplementary materials include experimental results featuring a hyper-parameter sensitivity analysis, along with additional ablation studies and further experimental analyses. Relation To Broader Scientific Literature: This paper introduces a novel regularization method that contributes to the research on LLM unlearning. Essential References Not Discussed: The authors have cited most related works. Other Strengths And Weaknesses: Strengths 1. The authors use two figures, Figure 1 and Figure 2, to effectively illustrate the motivation and functionality of GRU with clarity. 2. In addition to GRU, the authors introduce Task Vector Rectified Unlearning (TRU) to remove the dependency on a retention set. Weaknesses 1. Some concepts lack clarity. 
For example, it is unclear how the constraints on gradients from different samples and mini-batches (i.e., the random mini-batch from $D_r$ and the mini-batch from $D_u$) are enforced in Eq. (7). The authors should provide further explanation or justification. 2. There are too many critical hyper-parameters, such as $\gamma$ and $\tau$. Performing grid search to find the optimal values for these hyper-parameters is computationally expensive and time-consuming. 3. Some claims need further justification, for example, "for each individual data point $s_u \in D_u$ targeted for unlearning, the remaining data points within $D_u$, i.e., $D_u \setminus \{s_u\}$, can offer information for retention if used properly.". 4. The experimental results presented in the figures, i.e., Figure 3 and Figure 4, are not clear. Other Comments Or Suggestions: 1. The preliminaries section takes up excessive space. The authors should consider making this section more concise while ensuring it remains comprehensive. Questions For Authors: 1. You mentioned that "retain data can often be biased." Could you clarify what bias refers to in this context? Does it relate to dataset distribution shifts, representation imbalances, or another specific aspect? A more precise explanation would improve clarity. 2. See weaknesses in the above section. Ethical Review Flag: Flag this paper for an ethics review. Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Many thanks for your great support and constructive comments! Please see our responses below. > Q1. It is unclear how the constraints on gradients from different samples and mini-batches are enforced in Eq. (7). **A1**. As shown in Algorithm 1, **mini-batches $B_{\rm r}$, $B_{\rm u}$ replace $D_{\rm r}$, $D_{\rm u}$**. In Section 3.2, we further discuss the limitations of mini-batch gradients and recommend using the exponential moving average for stable estimation. We will clarify these points in our revised manuscript. Sincere thanks for your suggestion! > Q2. Performing grid search to find the optimal values for these hyper-parameters is computationally expensive and time-consuming. **A2**. Many thanks for raising this concern. We would like to clarify the practicality of our approach from the following two perspectives. 1. **GRU performs well even without these hyper-parameters.** Our basic framework following Eq. (7) operates without any hyper-parameters. Incorporating $\gamma$ (for EMA) and $\tau$ (for clipping) is a practical enhancement that improves empirical results. Below is a table showing results for 3 methods under the LLaMA setup, without using $\gamma$ and $\tau$. The results highlight notable improvements of our method over the baselines.

|Method|FQ 5%↑|MU 5%↑|FQ 10%↑|MU 10%↑|
|:-:|:-:|:-:|:-:|:-:|
|GA|-16.93|0.00|-14.37|0.00|
|w/ GRU|-3.52|0.51|-7.34|0.22|
|NPO|-10.91|0.49|-8.70|0.29|
|w/ GRU|-9.96|0.54|-4.12|0.35|
|GD|-13.48|0.55|-13.92|0.39|
|w/ GRU|-12.42|0.56|-10.91|0.53|

2. **GRU is not very sensitive to these hyper-parameters.** By incorporating $\gamma$ and $\tau$, we explore the potential to further enhance unlearning. In Appendix D, we conduct a hyper-parameter sensitivity analysis, showing stable improvements across a wide range of candidate hyper-parameters. > Q3. Some claims need further justification, for example, "..., the remaining data points within $D_{\rm u}$ can offer information for retention.".
**A3**. Heuristically, retain gradients primarily **act as a denoising mechanism, rather than directly contributing update directions**. By using retain gradients to redirect the original unlearn gradients, we focus on the direction that removes targeted knowledge while limiting side effects on unrelated knowledge. For TRU, let's first consider a simple scenario of removing a single $s_{\rm u}$. Here, it is straightforward to take the complement, i.e., $D_{\rm u} \setminus \{s_{\rm u}\}$, for retention. We then create a rectified task vector targeted at eliminating knowledge associated with $s_{\rm u}$ while minimizing the impact on unrelated knowledge. This process can be applied to each $s_{\rm u}$ within $D_{\rm u}$, with the collective average forming the rectified task vector for the entire $D_{\rm u}$. Notably, **these task vectors remain mutually compatible**, as the retain gradients are exclusively employed for denoising rather than being directly used in the task vectors. We will add the related discussion in our revision. > Q4. The results presented in the figures are not clear. **A4**. Apologies for any confusion caused by the figure annotations. As shown in the captions, each pair of scores represents metric values (either FQ or MU) before and after applying GRU. Taking Figure 3(a) as an example, the pair (-16.93, -3.52) represents FQ for GA, where -16.93 is the score without GRU and -3.52 is the score with GRU. The visual representation using an upward growing grid between these scores emphasizes the achieved improvement. We will provide more explanations in our revision to ensure clarity. > Q5. The preliminaries section takes up excessive space. The authors should consider making this section more concise while ensuring it remains comprehensive. **A5**. We have a relatively extensive preliminaries section to ensure self-containment and maintain clarity.
However, we fully concur with your opinion that a more concise version would enhance the paper's flow and allow more space to elaborate on our core contributions regarding GRU and TRU. We sincerely appreciate your feedback and will carefully refine our paper structure accordingly. > Q6. You mentioned that "retain data can often be biased." Could you clarify what bias refers to in this context? **A6**. Many thanks for your question. Taking TOFU as an example, the current setup involves selectively unlearning specific author profiles while retaining others. However, we recognize that the broader objective of retention should ensure model capacity across diverse domains such as the humanities and sciences. Therefore, the current retain data in TOFU may exhibit bias, **stemming from the distribution shift between the adopted retain data and the broader real data**. We will refine the related discussion in our revision.
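As a companion to the exponential-moving-average retain-gradient estimate mentioned in A1 of this rebuttal (and quoted from the code in the thread above), here is a minimal, self-contained sketch of how a single moving-average buffer of flattened retain gradients can be maintained. The class and attribute names are illustrative, not from the released code, and initializing the buffer with the first mini-batch gradient is our own assumption.

```python
import numpy as np

class RetainGradEMA:
    """Exponential moving average of flattened retain-mini-batch
    gradients. One buffer the size of the trainable parameters is
    kept, rather than a per-batch memory bank. Illustrative sketch."""
    def __init__(self, gamma=0.9):
        self.gamma = gamma  # smoothing coefficient (the paper's gamma)
        self.avg = None

    def update(self, g_r):
        # avg <- gamma * new + (1 - gamma) * old, mirroring the
        # quoted EMA line from the rebuttal above.
        if self.avg is None:
            self.avg = g_r.astype(float).copy()
        else:
            self.avg = self.gamma * g_r + (1.0 - self.gamma) * self.avg
        return self.avg

ema = RetainGradEMA(gamma=0.5)
ema.update(np.array([2.0, 0.0]))
smoothed = ema.update(np.array([0.0, 0.0]))  # -> array([1., 0.])
```

The smoothed buffer can then stand in for a single noisy mini-batch retain gradient when checking for conflicts, at a constant memory cost of one flattened gradient.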
Constrained Online Convex Optimization with Polyak Feasibility Steps
Accept (poster)
Summary: The paper investigates online convex optimization subject to a fixed convex constraint by incorporating Polyak feasibility steps. This leads to sublinear regret and different constraint violation guarantees: no violation if the initial point satisfies the constraint; no violation after $O(\log T)$ rounds; or small cumulative constraint violation. Claims And Evidence: The results support the claims. Methods And Evaluation Criteria: The paper uses standard metrics: regret and the cost associated with constraint violation. Theoretical Claims: All steps were checked. Experimental Designs Or Analyses: The experiment compared an extension of previous work with the current work. Theoretically they can achieve the same bound; empirically the current work seems to be an interpolation between low regret and low violation cost. Supplementary Material: Appendix A Relation To Broader Scientific Literature: A new approach to solve the constrained OCO problem with comparatively the same rate as the extension of previous work. (The extension was done by the authors.) Essential References Not Discussed: Well referenced. Other Strengths And Weaknesses: The paper is well written and easy to follow. The explanation of incorporating Polyak feasibility steps is sensible and theoretically supported. Other Comments Or Suggestions: typo: line 316: $y_{t}$ instead of $y_{t+1}$. Section 3.4 begins by using eqn (9) and quotes Fact 1, which refers to eqn (2) or line 5 of Algorithm 1; $\rho$ in different places means different things, and although this does not affect the result, between line 382 left and line 331 right it causes unnecessary confusion. Result reporting: As Theorem 1 is already a consequence of plugging in hyperparameters, it does not show how parameter changes yield the corollaries.
It only became clear how the hyperparameters change after reading sections 3.3 and 3.4. Questions For Authors: The regret scales with the multiplicative Lipschitz constants $G_f G_g$; from the current analysis it does not seem possible to have the additive dependence $G_f + G_g$. Is this common? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your detailed review and positive appraisal of the work. We discuss each of your points below. **Q.1. typo: line 316 : $y_{t}$ instead of $y_{t+1}$** You are correct. Thank you for pointing this out. **Q.2. section 3.4 begins with using eqn (9), and quoted Fact 1 which was with reference of eqn (2) or line 5 of Algorithm 1, $\rho$ in different place means different things, although it does not affect the result, but in between line 382 left - line 331 right causes unnecessary confusion.** Thank you for pointing this out. We will adjust our notation to make it clearer. **Q.3. Result reporting: As theorem 1 is already a consequence of plugging in hyper parameters thus it does not allow seeing how parameter changes yield to corollaries. It only became clear how hyper parameters change after reading section 3.3 and 3.4** Thank you for this suggestion. We agree that it would improve the clarity significantly. We will adjust the presentation of the results accordingly. **Q.4. the regret scales with multiplicative Lipschitz constants $G_f G_g$, from current analysis it does not seem to be possible to have additive dependence $G_f + G_g$, is this common?** Similar dependence on the Lipschitz constants appears in the regret bound of prior work (Mahdavi et al., 2012). In particular, their Theorem 8 has the term $G^{5/2}$, where $G = \max\\{G_g, G_f\\}$.
Summary: This paper addresses the problem of Online Convex Optimization (OCO) with constraints, where the goal is to minimize regret while ensuring that the constraints are satisfied at all times. The authors propose an algorithm that combines gradient descent steps with Polyak feasibility steps to achieve $O(\sqrt{T})$ regret and anytime constraint satisfaction when a strictly feasible point is known. They also provide a guarantee when a strictly feasible point is not known. The authors validate their approach through numerical experiments, demonstrating that their algorithm achieves constraint satisfaction without sacrificing regret performance. Claims And Evidence: Yes, the claims are clear and convincing. Methods And Evaluation Criteria: Given the vast literature on OCO with constraints, the paper might consider more baselines. For example, Yuan et al., 2018, could be a good baseline as they studied a strict violation metric. Theoretical Claims: I checked the theoretical claims and proofs, and they seem correct. Experimental Designs Or Analyses: No specific issues. Supplementary Material: No Relation To Broader Scientific Literature: The contribution is marginal compared to the previous work. My major concern is that the algorithm proposed in the paper has already appeared in previous work (e.g., [1] and [2]). [1] leverages Polyak steps to deal with stochastic constraints (which seems more challenging), and [2] already proposed the linear approximation in Algorithm 1 (SHAM), which is the same as the "First-order Approximation" in the paper. The contribution is marginal. [1] Ion Necoara and Nitesh Kumar Singh. Stochastic subgradient projection methods for composite optimization with functional constraints. JMLR, 2022. [2] Nitesh Kumar Singh and Ion Necoara. Stochastic halfspace approximation method for convex optimization with nonsmooth functional constraints. 2024. Essential References Not Discussed: Yes, as discussed above [1] and [2].
Other Strengths And Weaknesses: Weaknesses: - If I understand correctly, when strict feasibility does not hold as in Corollary 1, the algorithm would have a large period of $O(\log T)$ where the constraints could be violated. This might not be appropriate for safety-critical applications. - The paper considered the setting where the constraints are static. This seems less challenging, and it would be more convincing to study stochastic or time-varying constraints. Other Comments Or Suggestions: Justifying the paper against [1] and [2] above and addressing the weaknesses above would be convincing. More baselines are needed in the experiments. Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review. We have responded to each concern below. **Q1. The contribution is marginal compared to the previous work.** We respectfully disagree. The contribution of our paper is an algorithm for constrained OCO that *for the first time* attains anytime constraint satisfaction $g(x_t) \leq 0$ and optimal regret $O(\sqrt{T})$, while only using first-order constraint feedback. Unlike prior methods for this problem, which sacrifice feasibility for efficiency, our results show that it is possible to guarantee both feasibility and efficiency. Our approach is based on the use of Polyak steps with respect to the constraint, which have long been overlooked by the OCO literature. **Q2. My major concern is that the algorithm proposed in the paper has already appeared in previous work (e.g., [1] and [2]).** You are correct that similar algorithms have appeared in the offline optimization literature (see Section 1.1.3). However, the theoretical analysis used in the offline optimization literature is not applicable to the OCO setting. Specifically, the offline optimization works mentioned by the reviewer [1], [2] *do not* provide applicable analysis for the following reasons: - [1], [2] require that the suboptimality gap is non-negative, i.e. $f(\hat{x}_t) - f^* \geq 0$, as used in the proofs of Theorem 7 in [1] and Theorem 4.6 in [2]. In the online convex optimization setting, there is no guarantee that $f_t(x_t) - f_t(x^*) \geq 0$, and therefore we need a different analysis approach. See the discussion on line 151, right side. - [1], [2] only give constraint violation guarantees on the average iterate $\hat{x} = \frac{1}{\sum_{\tau} \alpha_\tau} \sum_{\tau} \alpha_\tau x_\tau$, whereas we provide violation guarantees on every iterate $x_t$, i.e. $g(x_t) \leq 0$ for all $t$, and therefore need a different analysis approach. 
Note that we *cannot* simply choose $x_t = \hat{x}_t$, because we only get feedback at $x_t$ and therefore the algorithm structure would no longer apply. - [1] requires constraint information on the intermediate iterate ($v_k$ in the notation of [1]), whereas we only use constraint information at the played actions $x_t$. - [2] requires that the cost function has Lipschitz gradients ($\beta$-smooth) as used in Lemma 4.1 of [2]. Since we do not assume Lipschitz cost gradients, we need a different analysis approach. Lastly, please note that [1] is already cited and discussed. **Q.3 [2] already proposed the linear approximation in Algorithm 1, which is the same as the "First-order Approximation" in the paper.** Thank you for pointing out that this part of our algorithm design is also used in [2], which studies (offline) optimization. We were not previously aware of this recent work. However, please note that the analysis from [2] does not apply to our setting for the reasons mentioned in the answer to Q.2 above. To clarify our contributions in the paper, we will add the following to Section 1.1.3. "This technique of using a first-order approximation is closely related to the approach used in the recent (offline) optimization work [2]. However, the analysis in [2] is not applicable to our OCO setting because (a) it requires that the suboptimality gap of the iterates $f( x_t ) - f^* \geq 0$ is non-negative, (b) only provides guarantees on the average of the iterates as opposed to our guarantees on every iterate, and (c) requires that the cost function has Lipschitz gradients." **Q4. When the strict-feasibility does not hold, the algorithm would have a large period of $O(\log(T))$ possible violation, which might not be proper for safety-critical applications.** Without knowledge of a feasible point, there is no way for any algorithm to ensure constraint satisfaction from the first round. 
Nonetheless, our results show that in such a setting, we can guarantee that the constraint is satisfied for all but a small number $O(\log(T))$ of rounds. In order to guarantee that the constraint is *always* satisfied, we require the slightly stronger assumption of a known *strictly-feasible* point. This assumption is motivated by the fact that, in safety-critical applications, there is typically a "safety margin" built into traditional methods to account for modeling inaccuracies. For example, in personalized healthcare applications, medical standards specify a drug dosage that will be safe for the patient despite some small variations in the patient's characteristics (e.g. weight, blood pressure), and therefore this dosage can be used as a strictly-feasible point. **Q.5 The paper only considered the setting where the constraints are static.** The problem of OCO with fixed constraints is a major problem in the machine learning community. To illustrate this, Google Scholar shows that the first work on this problem (Mahdavi et al., 2012) has been cited 313 times. **More Experiments:** https://anonymous.4open.science/r/oco_pfs_rebuttal-1050/
Summary: In this work, the authors study online convex optimization with a fixed constraint function. Their method employs Polyak feasibility steps to guarantee constraint satisfaction without compromising regret. Specifically, they introduce an algorithm for constrained OCO that applies Polyak feasibility steps to ensure anytime constraint satisfaction $g(x_t) \leq 0, \forall t$ and $\mathcal{O}(\sqrt{T})$ regret. Claims And Evidence: Yes, the claims are supported by the theorems. Methods And Evaluation Criteria: Yes, the proposed method makes sense for constrained online convex optimization problems. Theoretical Claims: Yes. There are some issues: 1. Issue with $\mathcal{X}_{\rho}$ when $g^* = \min_x g(x) = 0$: In this case, $\mathcal{X}_{\rho} = \emptyset$, which would break the analysis in line 300. 2. The result in Theorem 1 heavily relies on the relationship between $\epsilon$ and $\sigma$. If $\frac{\epsilon}{\sigma}$ is very large, the bound will blow up. Could you provide results justifying that this ratio remains bounded? Experimental Designs Or Analyses: Yes. This paper only provides a toy example. It would be great to see some large-scale experiments. Supplementary Material: Yes. I reviewed the proof in the appendix. Relation To Broader Scientific Literature: The key contribution of using Polyak feasibility step sizes is closely related to the constrained optimization literature. Essential References Not Discussed: The algorithm presented in this paper closely resembles the one proposed in [Liang, 2024] for Variational Inequality Problems with Functional Constraints. It would be beneficial to discuss the similarities and differences between these approaches. Reference: [1]. Liang Zhang, Niao He, and Michael Muehlebach. Primal Methods for Variational Inequality Problems with Functional Constraints. 2024 Other Strengths And Weaknesses: Strength: It is great to see the Polyak feasibility step size used in online convex optimization problems. 
Weaknesses: Assumption 4 is too strong, and it is uncommon in prior literature [Yu, 2017], [Yuan, 2018]. It would be helpful to provide some justification and intuition on when this assumption holds. References: [1]. Hao Yu, Michael Neely, and Xiaohan Wei. Online convex optimization with stochastic constraints. Advances in Neural Information Processing Systems, 30, 2017. [2]. Jianjun Yuan and Andrew Lamperski. Online convex optimization for cumulative constraints. Advances in Neural Information Processing Systems, 31, 2018. Other Comments Or Suggestions: What is $\alpha$ in line 314? Are you referring to $\epsilon$? Questions For Authors: I am curious whether the tightening parameter $\rho$ is essential for this algorithm. It appears that setting $\rho = 0$ would still preserve most of the results. Could you clarify what would change if the tightening parameter $\rho$ were omitted, i.e., if we set $\rho = 0$? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review. We have responded to each concern below. **Q1. The algorithm in the paper closely resembles the one in Liang (2024), which studies constrained variational inequalities.** We were not previously aware of the recent work (Liang et al., 2024), which studies variational inequalities with functional constraints. Similar to our work, the algorithm in Liang et al. (2024) uses a single iterate sequence and only uses constraint information at the chosen iterates. However, the analysis in Liang et al. (2024) *cannot* be applied to our setting for the following reasons: 1. Liang et al. (2024) assumes that the constraint functions $g_i$ have Lipschitz gradients (i.e. are $\beta$-smooth), whereas we do not. This assumption is critical to the analysis in Liang et al. (2024), e.g. in equation (5), and therefore we require a different analysis approach. 2. Liang et al. (2024) requires solving a QP subproblem at each iteration. On the other hand, our approach does not require solving a subproblem. This is critically important in the constrained OCO literature, because the constrained OCO problem was introduced for the sole purpose of avoiding subproblems and their computational burden. This is explicitly stated in Mahdavi et al. (2012), which introduced the constrained OCO problem. 3. Liang et al. (2024) only studies variational inequalities, which *does not* extend to the OCO setting in general. Although our analysis is completely distinct from the one in Liang et al. (2024), there are some similarities in the *algorithm design* for the special case of fixed cost functions $f_t = f$, one constraint function, and $\beta$-smooth constraints. In this case, our algorithm can be written as (with $\rho = 0$), $$ x_{t+1} = \Pi_{R \mathbb{B}} \left( x_t - \eta \nabla f(x_t) - \frac{[g(x_t) - \eta \nabla f(x_t)^\top \nabla g(x_t)]_+}{|| \nabla g(x_t) ||^2} \nabla g(x_t) \right). 
$$ At a high level, this resembles the version of the algorithm in Liang et al. (2024) for the special case of one constraint function (see Algorithm 2 in Liang et al. (2024)). However, there are some key differences: - Liang et al. (2024) applies the decreasing step-size $\eta$ to the entire third term, whereas we only apply it to $\nabla f(x_t)^\top \nabla g(x_t)$. - Liang et al. (2024) applies the scaling $\alpha = G_g/R$ to $g(x_t)$. - Liang et al. (2024) adds an additional constraint to ensure that the iterates are bounded, whereas we use a projection onto $R \mathbb{B}$. These differences in algorithm design further highlight the need for a distinct analysis between our work and Liang et al. (2024). We thank the reviewer for pointing out this work and will add the above discussion to the paper. Liang et al. "Primal Methods for Variational Inequality Problems with Functional Constraints," arXiv:2403.12859 **Q2. Assumption 4 is too strong and uncommon in the prior literature.** We respectfully disagree with the statement that Assumption 4 is uncommon in prior literature, as it is used in the fundamental works on OCO with constraints (Mahdavi et al., 2012; Jenatton et al., 2016). To help put this assumption in context, we note that it is implied by Slater's condition (provided that the other assumptions hold). To see this, suppose that Slater's condition holds, i.e. there exists $y \in \mathbb{R}^d$ and $\xi > 0$ such that $g(y) \leq - \xi$. Let $\epsilon = \xi/2$ and $\sigma = \xi/(4R)$. Then, it holds for any $x \in \mathbb{R}^d$ and $s \in \partial g(x)$ where $g(x) = -\epsilon$, $$ -\xi \geq g(y) \geq g(x) + s^\top(y- x) \geq - \epsilon - || s || || y - x || \geq - \epsilon - 2 || s || R. $$ $$ \implies || s || \geq \frac{\xi - \epsilon}{2 R} = \frac{\xi}{4 R} = \sigma. $$ This is precisely Assumption 4. **Q3. There is an issue with the analysis in that it is possible for $\mathcal{X}_\rho = \emptyset$.** Thank you for pointing this out. 
This is due to a typo in Assumption 4. There is a missing negative sign: $g(x) = \epsilon$ is supposed to be $g(x) = - \epsilon$. The claims of the paper remain unchanged by fixing this typo, as the correct form of the assumption is used throughout the analysis. **Q.4. The result in Theorem 1 heavily relies on the relationship between $\epsilon$ and $\sigma$. Could you provide results justifying that the ratio $\frac{\epsilon}{\sigma}$ remains bounded?** Similar to prior work (Mahdavi et al., 2012; Jenatton et al., 2016), the terms $\epsilon$ and $\sigma$ are assumed to be constants and the regret bound depends on them. **Q.5. Could you clarify what would change if the tightening parameter were omitted, i.e., if we set $\rho=0$?** We kindly point the reviewer to Corollary 2 in the paper. This corollary shows that choosing $\rho = 0$ ensures that $g(x_t) \leq O(1/\sqrt{t})$. **Q.6. What is $\alpha$ in line 314? Are you referring to $\epsilon$?** Thank you for pointing out this typo. It is supposed to be $\rho$ instead of $\alpha$.
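The rebuttal above writes the update in closed form: a gradient step combined with a Polyak feasibility step (here with $\rho = 0$). Below is a minimal numerical sketch of that update; the toy cost sequence, constraint, step size, and horizon are our own illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Sanity check of the closed-form update quoted above: a gradient step plus a
# Polyak feasibility correction, with rho = 0. Toy instance (our own choice):
# drifting quadratic costs f_t(x) = ||x - c_t||^2, constraint
# g(x) = ||x||^2 - 1 <= 0 (the unit ball), and domain R*B with R = 2.
rng = np.random.default_rng(0)
R, eta, T = 2.0, 0.05, 500
x = np.array([0.5, 0.0])            # strictly feasible start: g(x) = -0.75
max_violation = 0.0

for t in range(T):
    c = np.array([1.5, 0.0]) + 0.1 * rng.standard_normal(2)
    grad_f = 2.0 * (x - c)          # gradient of f_t at x_t
    g_val = x @ x - 1.0             # g(x_t)
    grad_g = 2.0 * x                # gradient of g at x_t
    step = eta * grad_f
    num = g_val - eta * (grad_f @ grad_g)
    if num > 0.0:                   # Polyak feasibility correction is active
        step = step + (num / (grad_g @ grad_g)) * grad_g
    x = x - step
    nrm = np.linalg.norm(x)
    if nrm > R:                     # projection onto the ball R*B
        x = x * (R / nrm)
    max_violation = max(max_violation, x @ x - 1.0)

print("final iterate:", x, " max violation of g:", max_violation)
```

On this instance the iterates approach the boundary of the feasible set while any observed violation of $g$ stays small, consistent with the $g(x_t) \leq O(1/\sqrt{t})$ behavior of Corollary 2 quoted in the rebuttal for $\rho = 0$.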
Understanding Mode Connectivity via Parameter Space Symmetry
Accept (poster)
Summary: The paper investigates (linear) mode connectivity of neural networks via symmetries in parameter space. Besides quantifying the number of connected components of invertible linear networks with and without skip connections, it sheds light on when modes can be connected linearly by investigating the difference between the linear path between two modes and the orbit connecting the two points. Claims And Evidence: The paper investigates the connection between symmetries in parameter space and mode connectivity theoretically. All theoretical claims are rigorously proven. Some claims (e.g., connected components w/o skip connections and loss preservation) are supported empirically. Methods And Evaluation Criteria: The paper's contributions are theoretical, yet a few claims are supported by empirical evidence. Where this is the case, the methods and criteria are sound. Theoretical Claims: I have checked the theoretical claims up to my abilities. The proofs presented in the appendix are clear and sound. Experimental Designs Or Analyses: As stated above, the main contributions of the paper are theoretical. When empirical evidence was presented, the experimental design was sound. Supplementary Material: I have reviewed the proofs provided in the appendix and verified them up to my abilities. Relation To Broader Scientific Literature: - It would be interesting to discuss how the studied symmetries of the general linear group relate to the symmetries discussed by Petzka et al. [1]. - How do the loss-preserving orbits discussed in this paper relate to the minima manifold in [2]? - This paper discusses linear reparameterizations. How does it relate to works on non-linear reparameterizations using Riemannian geometry [3]? [1] Petzka, Henning, Martin Trimmel, and Cristian Sminchisescu. "Notes on the symmetries of 2-layer relu-networks." Proceedings of the Northern Lights Deep Learning Workshop. Vol. 1. 2020. [2] Simsek, Berfin, et al. 
"Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances." International Conference on Machine Learning. PMLR, 2021. [3] Kristiadi, Agustinus, Felix Dangel, and Philipp Hennig. "The geometry of neural nets' parameter spaces under reparametrization." Advances in Neural Information Processing Systems 36 (2023): 17669-17688. Essential References Not Discussed: To the best of my knowledge, the paper references all essential works. Other Strengths And Weaknesses: The paper is very well-written, the claims are clearly presented, and the proofs are sound. One thing that could be improved is a discussion on the limitations of the analysis: most of the analysis is for linear neural networks and thus the applicability to modern deep learning is limited. While a detailed empirical analysis of this difference might be out of scope, a discussion of these limitations seems appropriate. Other Comments Or Suggestions: - It is a bit of a long shot, but in learning theory for neural networks, one problem is that some generalization measures change through reparameterizations. Therefore, reparameterization-invariant measures have been proposed [4,5] and some can even be theoretically connected to generalization. Would, for example the proposed relative flatness [5] be invariant under the reparameterizations discussed in this paper? - Prop. 5.4: in the equivalence of weights $W_i$ and $W'_i$ at the end of the paragraph, the permutation $P$ seems to be missing. [4] Tsuzuku, Yusuke, Issei Sato, and Masashi Sugiyama. "Normalized flat minima: Exploring scale invariant definition of flat minima for neural networks using pac-bayesian analysis." International Conference on Machine Learning. PMLR, 2020. [5] Petzka, Henning, et al. "Relative flatness and generalization." Advances in neural information processing systems 34 (2021): 18420-18432. ######### After Rebuttal ############ I maintain my positive assessment and recommend acceptance. 
Questions For Authors: Q1: Cor. 4.2 shows that the number of connected components increases with depth, whereas width plays no role. This is interesting. Can you comment on potential implications for neural network training? It seems related to the fact that deeper networks can be trained easier than wide and shallow ones. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging feedback. We appreciate that they have taken the time to read our proofs. We also appreciate the valuable questions and the many relevant pointers to related work, which we discuss below. **Relation to broader scientific literature** > It would be interesting to discuss how the studied symmetries of the general linear group relate to the symmetries discussed by Petzka et al. [1]. Petzka et al. show that permutation and rescaling are the only transformations that leave a 2-layer ReLU network unchanged, excluding degenerate cases. These symmetries are subgroups of the general linear symmetry group studied in our work. They also identify degenerate-case transformations not present in linear networks, when neurons share identical zero hyperplanes. > How do the loss-preserving orbits discussed in this paper relate to the minima manifold in [2]? Both our paper and Simsek et al. [2] analyze symmetry-induced structures in the loss landscape. Our orbits arise from continuous symmetry groups (e.g., GL(n), rescalings), while their minima manifold arises by embedding smaller networks into overparameterized ones via permutations and neuron duplications. These manifolds can be viewed as unions of affine subspaces, and our orbits form structured subsets within them. The two views are complementary and compatible, especially in linear networks with full-rank weights. > This paper discusses linear reparameterizations. How does it relate to works on non-linear reparameterizations using Riemannian geometry [3]? Kristiadi et al. [3] study reparameterization as a map from the parameter space to a new space, while we focus on symmetry that maps from the parameter space to itself. Both maps can be nonlinear. Studying symmetry-induced orbits from a Riemannian perspective could be a promising direction for future work. We will include a brief discussion of these connections in the final version of the paper. 
**Limitation of the analysis** > most of the analysis is for linear neural networks and thus the applicability to modern deep learning is limited. While a detailed empirical analysis of this difference might be out of scope, a discussion of these limitations seems appropriate. We appreciate the suggestion and agree that a discussion would be valuable. Our results are derived under architectural assumptions that enable precise, mathematically grounded insights into how continuous symmetries shape the loss landscape. While the findings may not directly transfer to all architectures, our methods are adaptable. For example, Section 5.3 applies to any network with layers involving multiplication of two weight matrices – a pattern that appears, for example, in transformer blocks. In higher-dimensional, non full-rank settings, connectivity depends on how orbits defined by different rank combinations intersect. Extensions to nonlinear networks are possible by identifying approximate symmetries (as in Section 6), or by continuously deforming the minima and studying their behavior in the limit as the network approaches a linear regime. Even partial or approximate symmetry can provide useful structural information about the minima and support new applications. **Other comments and questions** > Would, for example, the proposed relative flatness [5] be invariant under the reparameterizations discussed in this paper? Yes, in many cases. The relative flatness measure proposed in Petzka et al. [5] is designed to be invariant under layer-wise and neuron-wise linear reparameterizations, such as rescaling and certain changes of basis. These are precisely the types of continuous architectural symmetries we analyze in our paper. Therefore, for the linear and homogeneous network settings considered in our work, the relative flatness measure would indeed remain invariant under the reparameterizations we describe. > Prop. 
5.4: in the equivalence of weights $W_i$ and $W_i'$ at the end of the paragraph, the permutation $P$ seems to be missing. This is indeed a typo. We will add the permutations. Thank you for the detailed read. > Q1: Cor. 4.2 shows that the number of connected components increases with depth, whereas width plays no role. This is interesting. Can you comment on potential implications for neural network training? It seems related to the fact that deeper networks can be trained easier than wide and shallow ones. We do not have concrete results on whether having more connected components correlates with easier training, but below we share some speculations. A larger number of connected components could mean the minima are distributed more widely in the parameter space, which makes them easier to find during training. However, an exact analysis would require quantifying the area (or volume) and the exact distribution of the minima, as well as the geometry in the surrounding region. Nevertheless, connecting connectedness and optimization seems to be an exciting line of future work. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal. They have answered my questions and clarified several points. I think this is a good paper and therefore maintain my positive rating.
Summary: This paper provides a group theory framework to study the connectivity and the number of components of the zero-loss set in a (non-convex) loss landscape. A precise characterization of the number of components and connectivity of the zero-loss set is given for deep linear neural networks. Some results on the existence of a high-loss barrier are given for non-linear neural networks. Claims And Evidence: It is fairly clear. Some well-known simple things are presented as Propositions (for ex. Propositions 3.4 and 3.5). These are not particular contributions of this paper and hence should not be presented as Propositions. Methods And Evaluation Criteria: NA Theoretical Claims: I did not find a mistake in the Theorems. Experimental Designs Or Analyses: NA. Supplementary Material: No. Relation To Broader Scientific Literature: Linear mode connectivity of deep neural networks is an interesting and active area of research with implications for mode connectivity and model merging. A complete and group-theory-based characterization of the zero-loss set of parameters for deep linear networks is a nice contribution to the literature (but I have some reservations about the symmetry group relevant for deep linear networks). Moreover, the existence of a big barrier is characterized. This is expected for the distant components, which would be rare compared to the close components, which would be more common. SGD likely finds the more common components, which explains the observed phenomenon of linear connectivity. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: Characterization of the zero-loss set of deep linear networks is novel and the group-theoretic background is the way forward. Weakness: The ResNet 1D section should not be called that, as that architecture has no non-linearity. Other Comments Or Suggestions: ```Param``` should be called $\Theta$. 
Questions For Authors: Comment: Proposition 5.2 uses permutations to connect components for a deep ---linear--- network. However, the relevant symmetry group here is $GL_n$ as those matrices can be multiplied by any invertible matrix without changing the loss. In other words, symmetries of deep linear networks have more ---volume--- compared to the deep non-linear networks. Please add a comment along these lines so that the readers are aware of this distinction between linear and non-linear networks. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging feedback and insights on our work’s relation to broader literature. We will incorporate their suggestions into the final version of the paper. > Some well-known simple things are presented as Propositions (for ex. Propositions 3.4 and 3.5). These are not particular contributions of this paper hence should not be presented as Propositions. Section 3 is intended as a background section, and it is not our intention to present these propositions as novel results. We chose to include them in proposition form primarily for completeness and ease of referencing them in proofs. We agree they should not be positioned as contributions of this paper, and we will revise the paper to clarify this. > ResNet 1D section should not be called that as that architecture has no non-linearity. Thank you for pointing this out. We will change the terminology to skip-connection to avoid ambiguity. > Comment: Proposition 5.2 uses permutations to connect components for a deep ---linear--- network. However, the relevant symmetry group here is $GL_n$ as those matrices can be multiplied by any invertible matrix without changing the loss. In other words, symmetries of deep linear networks have more ---volume--- compared to the deep non-linear networks. > Please add a comment along these lines so that the readers are aware of this distinction between linear and non-linear networks. We appreciate this observation and will include a comment along these lines. While permutations are sufficient to connect components in deep linear networks, the full symmetry group in this case is indeed $GL_n$. Also, it is indeed true that deep linear networks have larger symmetry groups than deep non-linear networks.
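The $GL_n$ invariance discussed in this exchange can also be verified numerically: multiplying adjacent weight matrices of a linear network by any invertible $A$ and its inverse leaves the end-to-end map, and hence the loss, unchanged. The sketch below, with sizes and random data of our own illustrative choosing, checks this for a two-layer linear network.

```python
import numpy as np

# Numerical check of the GL(n) symmetry of a linear network: for
# x -> W2 @ W1 @ x, replacing (W1, W2) by (A @ W1, W2 @ A^{-1}) with any
# invertible A computes the same function. Sizes and seed are illustrative.
rng = np.random.default_rng(0)
n, d_in, d_out = 4, 3, 2
W1 = rng.standard_normal((n, d_in))
W2 = rng.standard_normal((d_out, n))
A = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # well-conditioned, invertible

W1_t, W2_t = A @ W1, W2 @ np.linalg.inv(A)          # symmetry transformation

X = rng.standard_normal((d_in, 16))                 # a batch of inputs
out = W2 @ W1 @ X
out_t = W2_t @ W1_t @ X
print(np.max(np.abs(out - out_t)))                  # tiny floating-point error
```

For nonlinear networks the analogous check succeeds only for the much smaller subgroup of permutations (plus positive rescalings for homogeneous activations), which is exactly the "volume" distinction the reviewer asks to highlight.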
Summary: This work provides an interesting perspective on mode connectivity by linking the topology of symmetry groups to the topology of minima. The key technique used in the paper is based on deriving the number of connected components of minima in linear networks (showing $2^{l-1}$ components for a network with $l$ layers), and in addition they show that skip connections reduce this number (although this is done in a very simplified setting, e.g., from 4 to 3 in a scalar ResNet). Based on this, the authors go on to prove mode connectivity up to permutation for linear two-layer networks with invertible weights, and then show how this fails in the multi-layer case. Further, some interesting analysis is carried out to find curves on the minimum. Claims And Evidence: Yes, more or less. But see the later discussion in questions or weaknesses, where there are some questions on how these results fit into the empirical behaviour of LMC. Methods And Evaluation Criteria: See the strengths and weaknesses Theoretical Claims: no Experimental Designs Or Analyses: See the strengths and weaknesses Supplementary Material: No Relation To Broader Scientific Literature: The paper builds on and extends prior work on mode connectivity and loss landscapes. The key contribution—linking symmetry group topology to minima structure—is novel and brings in a complementary and rigorous perspective. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: - The theoretical connections made are quite interesting and might lead to more such works from the topological perspective. - The accompanying setups and figures provide useful intuition to think about the problem. - The paper is quite well written. Weaknesses: - Despite the above, the practical takeaways for practitioners or those working in the area remain vague at best. 
Other Comments Or Suggestions: n/a Questions For Authors: - Is it clear that having a lower number of connected components makes for easier connectivity? I am not so sure, as even within a single connected space, connecting two points might require a complicated path. Maybe it would be nice to have some discussion in the paper as to what would be ideal, at least from a topological perspective, for connecting the network solutions. - How do you reconcile the failure case of multi-layers with the empirically observed, largely well-connected paths for deep networks? - I am curious about the nature of the theoretical bottlenecks that impede a general analysis for ResNets, besides the general case. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and insightful questions. We are encouraged by the recognition of our paper’s novelty, rigor, and intuition. Below we expand on the practical takeaways and respond to the questions. **Practical takeaways** Our work shows that parameter space symmetries, especially continuous ones, explain why and how minima are connected. For practitioners, these results motivate concrete strategies – and cautions – for tasks that navigate the loss landscape, including model merging, ensembling, and fine-tuning: - One can build low-loss curves explicitly using known parameter symmetries. This gives a principled and efficient way to obtain new minima from old ones, potentially useful in (1) generating model ensembles with low cost; (2) improving model alignment by allowing a much larger group of transformations than permutation; and (3) mitigating catastrophic forgetting in fine-tuning by constraining updates to remain on the symmetry-induced manifold of the pretraining minimum. - The connectedness of minima supports the practice of model merging and ensembling, even when models are trained separately. In addition to permutation, many other symmetry transformations can connect solutions that would otherwise appear very different. - Linear interpolation between minima is not guaranteed to lead to better models, despite its widespread use. This highlights the need to evaluate whether the minima found by specific learning algorithms are approximately connected before averaging models directly. Finally, for theorists working in this area, shifting focus from viewing loss landscapes as chaotic to structured, symmetry-rich spaces invites new mathematical tools to explain empirical observations in deep learning. 
This has led to new intuitions ranging from why residual connections lead to more connected minima, why linear mode connectivity can fail, to why model averaging often works when the models are not too far from each other. We also hope that the broader approach – inferring properties of unknown solution sets from known symmetry groups – could inspire new work beyond this field. **Response to questions** > Is it clear that having a lower number of connected components makes for easier connectivity? We agree with the reviewer that connectedness alone does not imply easy connectivity in the sense of short or simple paths between solutions. Being in the same connected component is a necessary condition for connectivity, but a single component may still contain complex geometry necessitating complicated connecting paths. Defining the ease of connectivity is subtle, and we agree that a discussion would be valuable. One natural measure is the parametric complexity of the connecting curves, quantifiable by their degree if polynomial, or the number of segments if piece-wise. Another possible definition for easy connectivity would be low curvature of the minimum manifold or short geodesic distance between two points on it. As we discussed in Section 6.2, low curvature implies that linear interpolation stays near the manifold. Other potential definitions include whether the connecting curve has an analytical expression, or how many points are needed to approximate it within a certain error. It would indeed be interesting to examine these properties for symmetry-induced connecting curves. > How do you reconcile the failure case of multi-layers with the empirically observed largely well-connected paths for deep networks? 
The empirical observation of mode connectivity and linear mode connectivity is likely due to the fact that certain families of optimizers, especially stochastic gradient descent (SGD), tend to explore only a subset of the minimum, a phenomenon often referred to as implicit bias. Regularization techniques and weight decay may further encourage SGD to favor certain regions of the minimum. As a result, the subset of minima that is likely to be reached by SGD can have a very different structure and perhaps be more connected than the full set of minima.

> I am curious what are the nature of theoretical bottlenecks that impede a general analysis for ResNets, besides the general case.

Extending our proof to a general analysis is challenging because the number of orbits grows rapidly with the dimension of the weight matrices. In the 1D case, the minimum consists of only two orbits, so the proof only requires analyzing the connected components of each and checking how they intersect. For higher-dimensional weights, especially when not full-rank, each distinct rank combination defines multiple orbits, and characterizing the intersections among these many orbits, while possible, becomes combinatorially complex. We therefore chose to include only the 1D ResNet example to demonstrate the proof technique. It is possible that future work with more refined tools could extend this analysis to higher-dimensional settings more efficiently.
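To make the symmetry-induced curve construction discussed above concrete, here is a minimal numerical sketch (illustrative only, not code from the paper, and using our simplest symmetry): for a two-layer linear network, the one-parameter rescaling subgroup $t \mapsto (e^{-t} W_2,\ e^{t} W_1)$ inside the GL symmetry leaves the product $W_2 W_1$, and hence the loss, exactly invariant, so it traces a curve of minima-level loss values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer linear network f(x) = W2 @ W1 @ x with a squared loss.
X = rng.normal(size=(5, 20))   # inputs
Y = rng.normal(size=(3, 20))   # targets
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(3, 4))

def loss(W2, W1):
    return np.sum((W2 @ W1 @ X - Y) ** 2)

base = loss(W2, W1)

# One-parameter subgroup of the GL symmetry: g(t) = e^t * I acting as
# (W2, W1) -> (W2 g(t)^{-1}, g(t) W1). The product W2 W1 is unchanged,
# so every point along the curve has the same loss.
for t in np.linspace(-2.0, 2.0, 9):
    assert np.isclose(loss(np.exp(-t) * W2, np.exp(t) * W1), base)
```

Richer curves arise from non-scalar one-parameter subgroups $g(t)$ of the symmetry group, but the invariance argument is identical.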
Summary: The paper studies the topology of the minima of neural networks through their symmetries. The paper's results are derived in different simplified models of neural networks, for example using single units and a single data point, or assuming linear networks. The paper then studies the effect of different types of symmetries – e.g., GL(n) for linear networks, or rescaling symmetry for homogeneous activations – on various properties of the loss landscape – e.g., the number of connected components, linear mode connectivity – in these simplified settings.

Claims And Evidence: Theoretical claims are supported by proofs in the appendix and enough explanation in the main body to give an idea as to why they are correct. Sometimes the scope of the findings is stated inaccurately when a high-level picture is given; for example, the paper says that it shows the number of connected components for full-rank linear regression is reduced through skip connections, but this result is limited to 3-layer networks with a single neuron in every layer.

Methods And Evaluation Criteria: The investigation relies on the symmetries of simplified models of neural networks; for example, the fact that when we remove all activation functions, neural networks have GL(n) symmetry, and the fact that this group has two connected components and subsumes the permutation symmetry. This limits the scope of the findings to toy settings where we know for a fact that the conclusions break down when we move to a simple MLP. Nevertheless, the paper has some interesting insights, for example, the fact that the topology of the symmetry group affects the topology of the loss level sets.

Theoretical Claims: No, I did not check the proofs.

Experimental Designs Or Analyses: Experiments are justifiably quite limited.

Supplementary Material: No

Relation To Broader Scientific Literature: The section on related works has a discussion of prior works on mode connectivity and symmetries of the loss landscape.
I found the coverage quite adequate.

Essential References Not Discussed: Not to my knowledge.

Other Strengths And Weaknesses:

Strengths
– Nice coverage of related literature
– Clear presentation of necessary background on topology and groups
– Clear statement of results in each section (with the exception of section 6)

Weaknesses
– Results rely on strong assumptions, so much so that unfortunately the conclusions tell us nothing about realistic neural networks. Note that, as opposed to many toy models, here, when dealing with symmetries, one can be sure that conclusions do not transfer to more complicated architectures, since these symmetries do not exist.
– The paper reads like a collection of loosely related results; it includes different toy models used to study different phenomena.
– The conclusions made about the role of symmetry in this paper completely ignore (and eliminate) the effect of overparameterization symmetry, which has a much larger degree of freedom than symmetries of the architecture and potentially a bigger role to play in the geometry of the landscape.

Other Comments Or Suggestions: I got lost at the beginning of section 6, after reading equation 6 and proposition 6.1. It would be nice to clarify the material of that section, in particular, why such an arbitrary transformation is used, and what it has to do with a one-parameter subgroup.

Questions For Authors: Could you please explain if you expect any of the findings to “approximately” translate to a realistic/practical setting of an MLP, or even a linear neural network in the overparameterized regime?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. Below, we address concerns about the scope and generalizability of our results, and clarify how our approach can be extended.

**Scope and generalizability of results**

> Sometimes the scope of the findings is inaccurate when a high-level picture is given…

We will clarify the intended scope to reduce ambiguity. For higher-dimensional invertible weights in the skip-connection example, the number of components reduces further to 2. We are extending the analysis by relaxing the invertibility condition and will include full proofs in the final version.

> The investigation relies on the symmetries of simplified models of neural networks… This limits the scope of finding to toy settings where we know for a fact the conclusions break down when we move to a simple MLP.

While our analysis focuses on simplified models, it does not fundamentally break down for more complex architectures. Many modern architectures retain large spaces of continuous symmetry—for example, our results in Section 5.3 apply broadly to layers involving matrix multiplication, such as transformer blocks. When a homeomorphism exists between the symmetry group and the set of minima, topological properties such as connectedness can be inferred directly. In non-invertible settings, such as with skip connections, connectivity can still be analyzed through how orbits intersect, albeit with more care.

> Results rely on strong assumptions, so much so that unfortunately conclusions tell us nothing about realistic neural networks. …one can be sure that conclusions do not transfer to more complicated architectures, since these symmetries do not exist.

We respectfully disagree that our findings are irrelevant to realistic neural networks. We do not claim direct applicability to all architectures; rather, we aim to isolate the role of symmetry in shaping the loss landscape.
Our simplified models allow for precise, mathematically grounded insights into how symmetries – especially continuous ones – can yield connected minima and help explain phenomena like linear mode connectivity. As explained above, our methods provide a framework that can be adapted to a wide range of architectures. Even when we do not know the full set of symmetries, a subset of symmetries or even approximate symmetries can still give useful information on the structure of minima and inform new applications, as demonstrated in Section 6.

> The conclusions made about the role of symmetry in this paper completely ignores (and eliminates) the effect of overparameterization symmetry…

Overparameterization increases the dimension of the minima set, reflecting redundancy or flatness, but does not induce symmetry groups (e.g., permutations or rescalings) in the group-theoretic sense like the architectural symmetries. Thus, our results neither overlook nor contradict existing theory on overparameterization. Exploring interactions between high-dimensional minima and architectural symmetries remains an interesting direction for future work, and our framework may help analyze such cases.

> Could you please explain if you expect any of the findings to “approximately” translate to a realistic/practical setting of an MLP, or even a linear neural network in the overparameterized regime?

Our main multilayer setting (Equation (1)) already allows for overparameterization. While not all MLPs will exhibit the same topology as our simplified models, we believe our approach is generalizable. For higher-dimensional weights, connectivity analysis involves characterizing intersections among multiple orbits defined by distinct rank combinations. Extending our analysis to include nonlinearity could involve approximate symmetries, as in Section 6, or continuously deforming the minima and studying their behavior in the limit as the network approaches linearity.
**Writing and presentation**

> The paper reads like a collection of loosely related results; it includes different toy models used to study different phenomena.

The paper's central theme is that parameter space symmetries, especially continuous ones, can help explain why and how minima are connected. Each model is chosen to isolate a specific aspect of this symmetry-connectivity relationship. Collectively, they build a coherent picture of how architectural symmetries shape the loss landscape and empirical behaviors observed in deep learning.

> I got lost at the beginning of section 6... It would be nice to clarify the material of that section…

We will revise this section to clarify and better motivate the use of one-parameter subgroups. The transformation in Equation 6 is not arbitrary – it is specifically constructed to follow a one-parameter subgroup and defines a smooth curve within the minima. The section analyzes the curvature of such symmetry-induced curves to understand when linear mode connectivity approximately holds. Proposition 6.1 formalizes this by relating interpolation loss to curvature.

---
Rebuttal Comment 1.1: Comment: Thank you for your response. Could you please clarify the implications of the full-rank assumption about X and Y in your results?

---
Reply to Comment 1.1.1: Comment: Thank you for your question. The full-rank assumption ensures that the minimum is non-empty and well-structured. While it simplifies the analysis in some settings, many of our results do not rely on it, and our approach can generally extend beyond this assumption. Specifically:

- **Sections 5.2, 5.3, and 6 (linear mode connectivity, curves on the minimum):** These results do not require $X$ and $Y$ to be full-rank.
- **Section 4 (connected components):** Most results here assume full-rank $X$ and $Y$. Without this assumption, each loss level set may consist of more than one orbit under the group action, corresponding to different rank combinations of the weight matrices.
The number of connected components may increase or decrease depending on the architecture (see Proposition A.8). In such cases, one can apply the same connectivity analysis, with the added step of characterizing intersections among multiple orbits.
- **Section 5.1 (permutation-induced connectivity):** In their current form, these results assume full-rank inputs and outputs. However, they can be easily generalized: permutation still reduces the number of components when (1) an orbit is not connected due to the symmetry group comprising multiple connected components, (2) the orbit does not reside on the same connected component of the minimum, and (3) there exists a permutation that takes a point on one connected component of the group to another.
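For concreteness, the permutation symmetry referenced throughout this thread can be checked numerically in a few lines (our own illustrative sketch, not code from the paper): relabeling the hidden units of a two-layer network via a permutation matrix maps one parameter setting to a functionally identical one, which is exactly why permutations can carry a point on one component of the minimum to another.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer network with an elementwise ReLU: f(x) = W2 @ relu(W1 @ x).
W1 = rng.normal(size=(6, 4))
W2 = rng.normal(size=(3, 6))
x = rng.normal(size=(4, 10))

relu = lambda z: np.maximum(z, 0.0)
f = lambda W2, W1: W2 @ relu(W1 @ x)

# Permutation symmetry: a permutation matrix P sends (W1, W2) to
# (P @ W1, W2 @ P.T). Elementwise nonlinearities commute with P, so the
# function -- and hence the loss -- is unchanged.
P = np.eye(6)[rng.permutation(6)]
assert np.allclose(f(W2, W1), f(W2 @ P.T, P @ W1))
```

The same check works with the identity activation for the linear networks analyzed in the paper, since the argument only needs the nonlinearity to commute with permutations.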
Summary: This paper looks at the topology of loss level sets in order to understand their connectedness, i.e., mode connectivity. It specifically investigates the topology of symmetry groups of the weights under the loss. The authors deduce the number of connected components of full-rank multi-layer linear networks (both with and without skip connections), prove mode connectivity up to permutation for full-rank linear regression, and derive some instructive examples.

Claims And Evidence: It is a very sensible approach to use topology to understand the linear connectedness of minima. In terms of linear networks, the results are reasonable and informative. I appreciate the two-layer examples and the examples of failure cases going from linear networks to more realistic examples. Perhaps a more high-level discussion about the limitations of the work would also be helpful – what are the main difficulties with non-linearities and how do you expect them to affect the results? Looking for some intuition.

Methods And Evaluation Criteria: Yes, it does. I think the paper is quite dense with a fair amount of math. I would appreciate more examples and plots. Especially for an 8-page conference paper, I think that would be helpful.

Theoretical Claims: I have not gone through the formal verification of each proof, so I cannot assess their correctness in detail. However, the paper’s theoretical approach appears internally consistent and is built on standard techniques from topology and group theory.

Experimental Designs Or Analyses: The empirical results appear reasonable and align with the paper’s theoretical statements. I did not inspect the implementation details, so I cannot comment on reproducibility or correctness of the code.

Supplementary Material: I have not reviewed the supplementary results.

Relation To Broader Scientific Literature: It looks OK to me, but I am not very familiar with related work.
I would appreciate some more references to applied topology within machine learning. Currently the related work only focuses on mode connectivity, but topology has been applied to ML problems and to the weights of neural networks for some time, which I think deserves a brief mention, e.g., "Exposition and Interpretation of the Topology of Neural Networks (2019)".

Essential References Not Discussed: I am not aware of any additional critical references beyond those mentioned, though, as stated, a broader selection of topological approaches in ML might enrich the discussion.

Other Strengths And Weaknesses: I think this is a solid paper. Topology is a brilliant and obvious tool for studying the connectedness of spaces. I would appreciate some more high-level intuition about how this might generalize to more realistic NNs, as well as more examples and plots in general. However, I have not checked the proofs, nor am I confident in addressing the novelty as I am not well-versed in this area.

Other Comments Or Suggestions: N/A

Questions For Authors: Please see comments above.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
The Logical Implication Steering Method for Conditional Interventions on Transformer Generation
Accept (poster)
Summary: This paper proposes a method named LIMS to integrate logical implications into transformer models. Specifically, it builds upon the linear representation hypothesis and activation steering techniques, which have been extensively studied in recent years. When a certain “concept vector” is detected in the input prompt, a specific steering vector is added to the activation in the hidden layer in order to alter the behavior of the transformer model. Experiments show that the method can be used to enhance the detection of hallucinations and toxic content, as well as to elicit chain-of-thought behavior on math tasks.

## update after rebuttal

I will maintain my score towards acceptance.

Claims And Evidence: Most claims are well supported by evidence, though there is still one claim that I’m concerned about. The paper claims to have built a form of logical implication into the transformer model. This is done by detecting whether a certain concept “direction” is strongly activated on a given prompt, and adding a corresponding steering vector to alter the model’s hidden layer activation and change the model’s output. However, the examples given in this paper contain only very **simple** and **coarse-grained** logic, such as “when the concept of ‘toxic content’ is detected, steer the model with the ‘refusal’ concept.” In general, I expect to integrate **more fine-grained** symbolic logic into the model (e.g., the rule to perform modular addition). I would suggest the authors add one or two such examples of integrating fine-grained logic into the model. The examples don’t need to be comprehensive, but they need to demonstrate the possibility of doing so. This would make me more convinced.

Methods And Evaluation Criteria: The methods and evaluation setting make sense. However, just as mentioned in **Claims And Evidence**, the evaluation could be conducted on more fine-grained logic.
Theoretical Claims: The most notable theoretical claim is the linear representation hypothesis. This hypothesis has been extensively studied in previous works and is not the focus of this paper, so I will not comment on it.

Experimental Designs Or Analyses: The experiments do not test an OOD scenario, where the testing data comes from the same domain but not exactly the same dataset. For example, on math tasks, the concept vectors are extracted from the GSM8K dataset. I would encourage the authors to test their method on the similar dataset GSM-Symbolic [c1], in which numerical values in the math problems are altered w.r.t. the original GSM8K dataset. Doing so would properly test whether the proposed method truly captures the symbolic logic that is expected to generalize across different datasets in the same domain.

[c1] Mirzadeh et al. GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models. https://arxiv.org/abs/2410.05229.

Supplementary Material: I have briefly gone through the additional experimental details in Appendix B.

Relation To Broader Scientific Literature: The proposed method builds on the extensively studied linear representation hypothesis in modern transformer models, as well as activation steering and knowledge editing techniques. The paper also absorbs the notion of neuro-symbolic logic, aiming to integrate some form of logical implication circuit into the model.

Essential References Not Discussed: One of the core techniques used in this paper is the extraction of “concept vectors” from the hidden activations of a given prompt. However, the similar notions of a “concept activation vector (CAV)” in [c1] and a “feature vector” obtained via sparse dictionary learning in [c2] are not discussed and appropriately compared in the paper.

[c1] Kim et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). ICML, 2018.

[c2] Bricken, et al.
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning. Transformer Circuits Thread, 2023.

Other Strengths And Weaknesses:

Strengths:
1. The paper is well written and easy to follow.
2. The paper clearly presents the underlying hypothesis of the proposed method (i.e., the linear representation hypothesis).
3. Sample efficiency. The paper utilizes a small number of samples to achieve a fairly good performance boost.

Weaknesses:
1. Limited scalability of the proposed method. The current method relies on manually curating datasets P and Q in which some form of “concept annotation” is needed, either explicitly (e.g., human-annotated toxic vs. non-toxic content) or implicitly (e.g., prompts regarding math problems vs. general QA prompts). First, it requires one to carefully craft the datasets for each task, and I’m concerned about the scalability of such an approach. Second, only coarse-grained “concept annotation” can be obtained in this way.

Other Comments Or Suggestions: The current Figure 1 does not show the high-level picture of the paper, e.g., what the paper is about, a brief illustration of the method, or results that are worth highlighting. Figure 1 in this paper is not referenced until the Results section.

Questions For Authors:
1. The paper claims that the concept vector is extracted from a “middle” layer in the transformer model. Which middle layer is it? I did not find a detailed description of this setting. The authors are expected to clarify which layer is chosen and why this layer is chosen. It would also be better to include ablation studies that investigate how different layers influence the performance of the proposed method.
2. I would also like the authors to respond to my previous questions and concerns in each section.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your time and valuable insights. We agree with the relevancy of the interpretability papers you reference and have added them to Section 2. Below, we address your comments and describe the corresponding updates we will make to the paper:

> The current Figure 1 does not …

We agree and will add a new Figure 1 to illustrate a high level of the method. We’ll also revise the figure placement in the camera-ready version.

> … The authors are expected to clarify which layer is chosen and why this layer is chosen …

We agree, please see response section 1 to reviewer bkVz. Specifically, we’ll add a new appendix section analyzing layer choice and revise the main text to specify the exact layer used (layer 17 of 32), replacing “middle layer/block.”

> … I would encourage the authors to test their methods on another similar dataset GSM-Symbolic ...

Thank you for the excellent suggestion, and we fully agree. We will add an appendix section for results on OOD generalization; please see response section 2 to reviewer bkVz, which shows results demonstrating zero-shot generalization of the LIMS circuit from GSM8K to GSM-Symbolic.

> … this paper contains only very simple and coarse-grained logic … I expect to integrate more fine-grained symbolic logic into the model (e.g., the rule to perform modular addition). …

We agree that a deeper discussion of LIMS and more complex logic is valuable and will expand this in the appendix. Please see response section 4 to reviewer bkVz about this (and bkVz S. 3 for results with multi-task logic).
As shown there, the general case is not as far removed from the tested examples as it may initially appear. For an artificial but illustrative example using the templated variables in GSM-Symbolic `first_name: Literal(["James", "Alice", ...])`, `family: Literal(["nephew", "cousin", "brother"])`, the nested behavioral control code

```python
if name == "James":
    if family == "brother":
        say("nice")
        act("chain of thought")
else:
    say("I reject your question.")
```

is expressible in concept-predicate logic as the conjunction of:

$$(P_{\text{James}} \land P_{\text{brother}})\rightarrow Q_{\text{saynice}}$$
$$(P_{\text{James}} \land P_{\text{brother}} \land P_{\text{saynice}})\rightarrow Q_{\text{COT}}$$
$$\neg P_{\text{James}}\rightarrow Q_{\text{reject}},$$

which results in the LIMS model being the sum of the circuits

$$q_{\text{saynice}}f_{p_{\text{brother}}}(x)f_{p_{\text{James}}}(x)$$
$$q_{\text{COT}}f_{p_{\text{saynice}}}(x)f_{p_{\text{brother}}}(x)f_{p_{\text{James}}}(x)$$
$$q_{\text{reject}}f_{\neg p_{\text{James}}}(x).$$

Notably, the resulting logic is arguably simpler than the multitask-LIMS model evaluated in the reply to bkVz section 3. While the example is synthetic, we would be happy to include an empirical LIMS evaluation of it in the paper if the reviewer feels it would be valuable. We are also currently searching for possible datasets that could support testing fixed conceptual reasoning, with the goal of including more natural examples.

Regarding modular addition, we have given it careful consideration: unlike the experiments on modular addition in [n], which construct a theoretical transformer circuit using numerical inputs and trigonometric functions, encoding a modulus in concept-predicate logic likely requires assigning specific numerical values to concepts (e.g., a concept for 1, a concept for 2, etc.). This constrains the circuits to specific inputs, which limits the generalizability of the experiment.
[n] Nanda et al., Progress measures for grokking via mechanistic interpretability, ICLR 2023. https://arxiv.org/abs/2301.05217.

> Weaknesses: Limited scalability of the proposed method. the current method relies on manually curating datasets P and Q ...

We agree that some curation is needed but emphasize the method’s data efficiency. While we used 100 examples per domain for uniformity, this was not informed by behavior labels. For example, hallucination circuits used only 15–17 positive and 33–35 negative examples (line 383-right), suggesting far fewer are needed to adequately extract each concept. In particular, this suggests that extracting the sensing concept would require significantly fewer than the 50 examples used per $P$ and $\neg P$.

Moreover, concept vectors can often be extracted using synthetic labeling or examples. For instance, “Prompted Behavior” (line 322-left, Appendix B.2) yields reasonable steering even without labels:

- On HaluEval, 100 synthetic examples gave 84% accuracy (vs. 100% with labeled)
- On AdvBench, 500 synthetic examples matched labeled performance (100%)
- On the COT task, only prompted behavior was used; no labeling required.

If scalability remains a concern, we propose adding a new appendix section evaluating an extreme few-shot setting (10 positive/10 negative examples per concept). For architectural scaling, see response section 3 to reviewer bkVz, demonstrating scaling over a multitude of LIMS circuits in a single model.

---
Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal. I will maintain my score towards acceptance. The rebuttal contains some new points, such as the integration of more complex logic and a demonstration of sample efficiency. I hope these contents will be added to the final version of the manuscript (if accepted).

---
Reply to Comment 1.1.1: Comment: We thank the reviewer for their response and support for the acceptance of the paper.
As suggested, we will incorporate the discussed clarifications and additional analyses into the final manuscript. We appreciate the valuable feedback, which further enhances the clarity and quality of our paper.
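For readers who want the mechanics in code, the basic $P \rightarrow Q$ circuit shape discussed throughout this thread can be sketched in a few lines (a minimal illustration with placeholder dimensions, vectors, and a hard threshold gate; it is not the paper's exact implementation): sense the concept via an inner product with $p$, and when the sense fires, add the steering vector $q$ to the hidden state.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16

p = rng.normal(size=d)
p /= np.linalg.norm(p)      # sensing direction for concept P
q = rng.normal(size=d)      # steering direction for behavior Q
b = 0.5                     # sensing threshold (illustrative choice)

def lims_circuit(h):
    """Add q to the hidden state h iff concept P is sensed (p^T h > b)."""
    gate = float(p @ h > b)
    return h + gate * q

v = rng.normal(size=d)
h_off = v - (p @ v) * p     # hidden state ~orthogonal to p: concept absent
h_on = h_off + 2.0 * p      # same state with the concept direction added

assert np.allclose(lims_circuit(h_off), h_off)    # no steering off P
assert np.allclose(lims_circuit(h_on), h_on + q)  # steering fires on P
```

Nested implications like the GSM-Symbolic example above then correspond to multiplying several such gates before adding the steering vector.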
Summary: This paper considers the problem of conditionally steering generative LLMs. Conditional steering refers to controlling the behavior of LLMs such that whenever the input prompt satisfies some condition, the model output should follow a specified behavior. The paper also refers to this as Logical Implication Model Steering (LIMS), i.e., enforcing properties of the form $P \rightarrow Q$ on the LLM. To perform such steering, the paper leverages the observation that LLMs learn to represent semantic concepts as directions in their embedding spaces (i.e., the linear representation hypothesis). Accordingly, the basic idea is to find directions corresponding to concepts P and Q in the residual stream of a Transformer model at the last token position. Given these directions, the paper proposes a small circuit that is incorporated in the computation performed at the attention output projection map. The proposed technique is evaluated on four different tasks using the Mistral 7B Instruct model, and it outperforms baselines based on prompting and fine-tuning.

## Update after rebuttal

Thank you for the response! I continue to have some concerns about the accessibility of this paper and about the claim that logical implication circuits are interpretable, so I will maintain my score.

Claims And Evidence: Yes, to an extent. I describe my concerns in detail in the later sections.

Methods And Evaluation Criteria: Yes, to an extent. I describe my concerns in detail in the later sections.

Theoretical Claims: No, because the notation was not clearly explained. I elaborate on this later.

Experimental Designs Or Analyses: I checked the experiments. I discuss the issues in the later sections of the review.

Supplementary Material: Yes, I browsed through the appendix but did not read it as carefully as the main paper.
Relation To Broader Scientific Literature: There is a growing body of literature on mechanistically interpreting LLMs and using the extracted interpretations to control model behavior. The paper makes a contribution on this front. In particular, conditional steering (or Logical Implication Model Steering) is a unique contribution of this paper.

Essential References Not Discussed: The paper does a good job of citing related work to the best of my knowledge. However, there is already a vast amount of literature on mechanistic interpretability and controlling LLM behavior, so it's possible that relevant references haven't been cited. For instance, [A] is a relevant paper that could be cited, as it also discusses controlling LLMs using the extracted representations.

[A] Zou, Andy, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan et al. "Representation engineering: A top-down approach to AI transparency." arXiv preprint arXiv:2310.01405 (2023).

Other Strengths And Weaknesses:

### Strengths
1. The idea of Logical Implication steering is interesting and novel. The method is simple (which is a good thing!), and therefore easy to incorporate into existing models.
2. The empirical results suggest that the approach is effective at controlling model behavior without causing any regressions.
3. The broader vision of leveraging the linear representation hypothesis to incorporate logical circuits into LLMs is promising.

### Weaknesses
1. The paper is not clearly written and is therefore quite hard to follow. I elaborate on this in the next section. The paper needs a significant pass to make the presentation more understandable.
2. The paper claims that the logical implication circuits are interpretable. However, I do not understand the basis for this claim.
3. The empirical results are a little underwhelming. For instance, in many cases, DPO-finetuning significantly outperforms LIMS.
4. It is never explained how the concept labels are obtained.
These are needed for extracting the concept representations.

Other Comments Or Suggestions: None

Questions For Authors:
1. Fig 1 appears on Pg 3 but it is not explained until much later. The same is true for Fig 2. This is not helpful for readability.
2. Line 147, Right: "there is some section of the hidden representation" --> What is "section" referring to?
3. Line 185, Left: What does "corresponding sets" refer to?
4. Line 206, Left: "we first specify datasets P,Q within the domain D" --> How is this done? Manually?
5. Line 219, Left: "we refine $p_{concept}$" --> how?
6. Line 177, 178, Right: "set the sensing circuit threshold $b_p$ to be the maximum value ..." --> this sentence is very confusing
7. Lines 193 - 200, Right: What exactly is a mergeable-LIMS circuit? The paragraph after equation (11) made no sense to me.
8. Lines 214 - 219: Why is the circuit applied at each token position? The comment about the distance from the last token is not clear.
9. Section 3.3: This whole section is very hard to follow. It would already be a big help if all the notation were precisely defined. For instance, what does $Q_q(x)$ mean formally? The one-line description is not very helpful. Overall, I was not able to follow the arguments about LIMS interpretability.
10. Section 4, line 265: What does the "middle transformer block" mean?
11. Line 322, Left: The discussion about prompted behavior is not clear. Also, I think it should be "unmodified inputs are assigned to $\neg Q$"
12. Where is Figure 3 explained in the text?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you dedicated to reviewing our work in detail, and have made the following revisions and clarifications in response. The reference [A] you include is relevant, and we will add it to Section 2.

## Questions

> 1. Fig 1 appears on …

We will revise figure placement in the camera-ready version and add a new Figure 1 illustrating the high-level method overview, as suggested by Reviewer YApn.

> 2. Line 147, Right …

In this sentence, “some section” was meant as existential quantification over neuron subsets. In revision, we will clarify the exact quantification.

> Line 185, Left: …

By "corresponding sets," we refer to the set $\{x: P(x)=1\}$. We have revised the phrase to: “... and their corresponding indicator sets …” for clarity.

> Line 206, Left …

Yes, the datasets $P$ and $Q$ are specified manually or synthetically. We clarify this in Section 4.1 and Appendix A.

> Line 219, Left …

By the section “we refine $p_{\text{concept}}$ by reducing $\neg P$ to negative examples close to $P$ …” we mean to say that it is good practice, when possible, to construct the negative set $\neg P$ such that its examples are as similar as possible to those in $P$, differing only in the presence or absence of the concept being captured. This helps ensure that the vector $p_{\text{concept}}$ isolates the intended concept. For instance, in the HaluEval task, each input in $P$ (with a hallucinated answer) was paired with a corresponding example in $\neg P$ that had identical text except for the answer, which was replaced with a grounded non-hallucinated answer.

> Line 177, 178, Right …

We can reword the sentence with "set the sensing circuit threshold to be a maximum value...".

> Lines 193 - 200 …

Equation (11) defines the m-LIMS circuit, and Equation (12) shows the mergeability into model parameters.
We updated the paragraph following Equation (11) to read: “m-LIMS works well since $|p^T h(x)| < \varepsilon$ off of $P$ for some small positive $\varepsilon$, and models are generally robust to adding small perturbations $\varepsilon q$ to the representation.” > Lines 214 - 219: … Circuits apply per token due to the transformer's uniform architecture. We’ve revised the sentence to: “Notably, the representations of each token rapidly diminish in similarity to $p$ as their position moves further from the final token in the sequence (see Figure 2, top), and so the LIMS circuit is unlikely to cause interference from earlier tokens.” > Section 3.3: … We revise the wording defining $Q_q(x)$: “... on an input $x$, let $Q_q(x)$ denote whether adding the vector $q$ to the last input token position exhibits $Q$.” > Section 4, line 265: … We agree it is ambiguous, and change the text to specify that we use block 17 out of 32. > Line 322, Left: … You are right, thank you. We correct this to $\neg Q$. On this line we point the reader to examples in appendix (B.6). The most straightforward example is described on line 350-right, for the COT task. ## Weaknesses > The paper is not clearly written … While Reviewer YApn explicitly highlighted clarity as a strength and others raised no concerns, we are committed to making the paper as accessible as possible. We’ve refined language throughout the paper and are committed to further improvements. Please let us know if any points remain unclear. > The paper claims … Unlike post-hoc methods that require probing and qualitative validation of latent components, LIMS is interpretable by construction. Each circuit explicitly implements $P\rightarrow Q$, with sensing and steering grounded in defined data splits. Section 3.3 and Figure 2 empirically show that the transformation is attributable to the circuit itself. > The empirical results … We acknowledge the reviewer’s concern regarding the empirical results. 
However, we note that the comparison between LIMS and DPO is inherently somewhat apples-to-oranges. DPO is less interpretable, more compute-intensive, task-overfit to most tasks, and failed to learn SQuAD2 without retraining. Moreover, we only used a single circuit at one layer in a minimal-change case; further gains are possible by stacking circuits across layers, which we leave to future work. We see LIMS not as a direct replacement for fine-tuning methods like DPO, but as a complementary tool for model control with distinct advantages. > It is never explained … For sensing concepts: Labels are provided by the source datasets for HaluEval and SQuAD 2. For the toxicity and COT tasks, labels are derived from the dataset setup, as described in Section 4.1 (lines 366-left and 382-left). For steering concepts: This information is detailed in Appendix B.4. To make this clearer, we will update the paragraph on line 318-left to include an explicit reference to Appendix B.4, which describes how behavior labels are defined and used for extracting steering concepts.
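For concreteness, the sensing-and-steering mechanism discussed in this rebuttal (sense $P$ via $f_p(x)=\sigma(p^\top h(x) - b_p)$, then add $q\,f_p(x)$ to the representation) can be sketched numerically. This is an illustrative toy with random unit vectors and a hand-picked threshold, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 16

# Toy concept vectors (unit norm): p senses condition P, q steers toward behavior Q.
p = rng.normal(size=d); p /= np.linalg.norm(p)
q = rng.normal(size=d); q /= np.linalg.norm(q)
b_p = 3.0  # sensing threshold (chosen by hand for the toy)

def lims_circuit(h):
    """Sense P via f_p = sigma(p^T h - b_p), then steer by adding q * f_p."""
    gate = sigmoid(p @ h - b_p)
    return h + gate * q, gate

# One representation strongly exhibiting P, one with its p-component removed.
h_on = 6.0 * p
h_off = rng.normal(size=d)
h_off -= (p @ h_off) * p  # orthogonal to p, so the gate stays closed

_, gate_on = lims_circuit(h_on)    # sigmoid(3), close to 1: circuit fires
_, gate_off = lims_circuit(h_off)  # sigmoid(-3), close to 0: no interference
```

Because $\|q\|=1$, the norm of the added steering perturbation equals the gate value, matching the rebuttal's point that off of $P$ the model only receives a small perturbation $\varepsilon q$.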
Summary: The paper introduces an interpretable model steering method -- Logical Implication Model Steering (LIMS) -- to steer an LLM to behave according to a $P \rightarrow Q$ rule. The method first extracts concept vectors for $P$ and $Q$, then performs necessary post-processing, and uses the vectors in an activation-patching fashion to steer the model's behavior. The method has the desirable properties of being interpretable, computationally efficient, and having a linearly mergeable version that can be easily deployed (requiring no model change). The authors demonstrated effectiveness in hallucination detection, answer abstaining, reasoning invocation, and toxic instruction rejection tasks. Claims And Evidence: In general, I find the claims are well supported by the empirical evidence presented, in that: 1. the identified steering circuits showed clear positive effect on the model's performance; 2. the method is data-efficient, achieving improvement with fewer data points compared to DPO-tuning. Methods And Evaluation Criteria: The proposed method has a sensible design and evaluation tasks/metrics are relevant and effective. Theoretical Claims: I checked proposition 3.3 and its proof, which looks good to me. Experimental Designs Or Analyses: The experimental design is sound. 1. Tasks are well chosen, with different levels of difficulty and complexity in the concepts associated with the behavior: lack of grounding context (in the SQuAD task) is a relatively simple concept, while invoking reasoning (in GSM8K) is a complicated concept; 2. Appropriate baselines and comparisons: the experiments include base model, 10-shot prompting and DPO-tuned comparisons, covering common approaches for modifying LLMs' behaviors. 3. Analysis of results: included both effectiveness results and the impact on off-task domains. Supplementary Material: I reviewed section A.2. and the proof looks good.
Relation To Broader Scientific Literature: The LIMS method falls into the category of model steering for conditional behaviors, i.e. steering the model to satisfy a $P \rightarrow Q$ property, which is one step forward compared to previous work on single-property steering. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strength/Weakness on decoupling properties: Table 2 discussed an interesting finding that LIMS improved reasoning without causing over-verbosity on non-math prompts, which demonstrated the robustness of the method. However, it is still possible that concepts could be coupled and LIMS could introduce properties that are not in the design. Discussions on how to avoid such concept coupling would be valuable. Other Comments Or Suggestions: N.A. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and review. > Table 2 discussed an interesting finding that LIMS improved reasoning without causing over-verbosity on non-math prompts, which demonstrated the robustness of the method. However, it is still possible that concepts could be coupled and LIMS could introduce properties that are not in the design. Discussions on how to avoid such concept coupling would be valuable. Thank you for highlighting this important point. We agree that unintended coupling between concepts is a potential risk when extracting concept vectors. However, since LIMS uses single, explicit vectors, added for steering and activated by similarity for sensing, their effect on model behavior is directly attributable. This allows for more confident causal analysis in evaluating whether a concept introduces unintended properties, both qualitatively (e.g., through example activations or generations) and quantitatively. A good practice to follow, when feasible, is to construct a negative set $\neg P$ or $\neg Q$ that closely matches $P$ or $Q$ in all aspects except for the presence of the target concept. This encourages the learned concept vector to isolate the intended signal. For example, in the HaluEval task, inputs in $P$ (hallucinated answers) were paired with inputs in $\neg P$ that were identical except for replacing the hallucinated answer with a grounded one. For sensing, we only use the component orthogonal to $\neg P$ to further remove confounding attributes. As we show in Appendix B, introducing additional data can also help, as increased variability across examples in irrelevant factors ensures that the only consistent difference between positive and negative examples is the intended concept. These practices help ensure that the concept vector captures only the targeted concept and not spurious correlates like surface phrasing or topic.
We also believe your point is highly related to the OOD generalization of concepts and LIMS, and the potential for interference when composing multiple LIMS circuits; As such please also see our response to reviewer bkVz, where we include out-of-distribution (OOD) tests for the math circuit, and have added preliminary results combining all task-specific circuits into one model, and evaluating this across all domains.
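The de-confounding recipe described in this rebuttal (matched negatives plus keeping only the component orthogonal to $\neg P$) can be illustrated with synthetic representations. A minimal numpy sketch, assuming a difference-of-means extractor and invented concept/confound axes (the paper's actual extraction code is not shown here):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 32, 200

# Synthetic stand-ins: a ground-truth concept axis plus a shared confound
# (e.g., topic or phrasing) present in both P and its matched negatives.
concept_axis = np.eye(d)[0]
confound_axis = np.eye(d)[1]

H_pos = 0.1 * rng.normal(size=(n, d)) + 2.0 * concept_axis + confound_axis
H_neg = 0.1 * rng.normal(size=(n, d)) + confound_axis  # matched, concept absent

# Difference of means: matched negatives cancel the shared confound.
p = H_pos.mean(axis=0) - H_neg.mean(axis=0)

# Keep only the component orthogonal to the negative set's mean direction,
# mirroring "we only use the component orthogonal to not-P".
neg_dir = H_neg.mean(axis=0)
neg_dir /= np.linalg.norm(neg_dir)
p_concept = p - (p @ neg_dir) * neg_dir
p_concept /= np.linalg.norm(p_concept)
```

In this toy, `p_concept` aligns almost entirely with the intended concept axis and has negligible overlap with the confound, which is the property the matched-negatives practice is meant to encourage.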
Summary: This paper proposes Logical Implication Model Steering (LIMS), a novel method to embed logical implication circuits into pre-trained transformer models. LIMS leverages the linear representation hypothesis, which posits that high-level concepts are represented as directions in activation space. By identifying concept vectors corresponding to conditions (P) and desired behaviors (Q), LIMS builds a neuro-symbolic logic gate into the model’s activations. This enables interpretable, modular, and computationally inexpensive behavior modifications of large language models (LLMs). The paper validates LIMS across multiple language tasks — hallucination detection (HaluEval), handling unanswerable questions (SQuAD 2), refusal of toxic prompts (AdvBench), and chain-of-thought generation for math problems (GSM8K). The authors show that LIMS achieves strong performance with as few as 100 training examples, without needing gradient-based updates. They also introduce a mergeable variant (m-LIMS) which integrates directly into model weights. Compared to fine-tuning (e.g., DPO), LIMS is more efficient, interpretable, and less prone to overfitting. Claims And Evidence: The claims are generally well-supported: - LIMS enables interpretable and efficient conditional interventions. - LIMS achieves strong performance with low data and no backpropagation. - LIMS enables interpretable reasoning over internal model components. Methods And Evaluation Criteria: The methods are well-motivated and fit the problem. Theoretical Claims: Yes, I think the theoretical foundations are correct. Experimental Designs Or Analyses: The design of experiments is sound. Supplementary Material: Yes, the supplementary material was reviewed, particularly: Appendix A provides a useful deeper dive into the logic formalism, adding clarity to the predicate-based reasoning; Appendix B describes experiments for larger training sets and gives insight into failure modes.
Relation To Broader Scientific Literature: This work sits at the intersection of mechanistic interpretability (e.g., concept vectors, direction-based steering), memory editing and neuro-symbolic reasoning. Essential References Not Discussed: Essential references are discussed. Other Strengths And Weaknesses: ### Strengths - Highly interpretable and modular method. - Efficient and lightweight, applicable even in low-resource settings. - Strong performance with minimal data and no gradient updates. - Clear mathematical and empirical grounding. ### Weaknesses - Evaluation remains narrow: all tasks are on text classification/generation; broader generalization (e.g., vision, cross-modal, multi-hop logic) is only discussed. - Interpretability is claimed but could be validated with user studies or more qualitative examples. - Scaling to more complex logic or long-range dependencies remains an open question. - Some experimental claims, such as broad generalization or robustness of concept vectors, are difficult to fully validate on such small-scale datasets without wider tests. Other Comments Or Suggestions: - Clarify if LIMS can be applied across layers, or if there are preferred layers for concept vector extraction. - Some typos (line 60 second column, line 134 first column) Questions For Authors: - Did you experiment with inserting the LIMS circuit at different layers? How sensitive is performance to layer selection? - Can multiple LIMS circuits (e.g., $P_1 \rightarrow Q_1$ and $P_2 \rightarrow Q_2$) coexist without interference? How do you anticipate scaling this approach? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review and encouraging feedback. We also appreciate you pointing out the typos, which we have corrected, and hope our clarifications and additions address your points. We will add a section in the appendix for each of the following: # 1. Clarification on layer selection for LIMS circuits We will add layer-by-layer performance plots across domains. LIMS is generally robust to layer choice, provided layers near the input and output layers are avoided. Concept norms peak in middle layers and near the output, and we uniformly used layer 17 of 32, without optimization of layer choice. We’ve updated the main text accordingly. LIMS circuits can also be applied at multiple layers, though we used a single-layer setting for clarity and minimal parameter change. # 2. Assessment of OOD generalization While concept vector generalization is supported in prior work, we agree assessing generalization of full LIMS circuits would further strengthen the work. Following the suggestion of reviewer YApn, we tested zero-shot generalization on GSM-Symbolic using the original LIMS circuit trained on GSM8K. It generalizes without modification, recovering even more performance than on GSM8K. ## GSM-Symbolic LIMS Generalization | Model | COT-prompt Normalized Accuracy | Avg. Tokens | |--------------|------------|--------------------| | LIMS | 72.8 % | 172.2 | | m-LIMS | 77.3 % | 180.9 | | Base | 43.8 % | 161 | | Base + COT-prompt | 100 % | 218.6 | # 3. Analyze the potential for scaling LIMS to a larger number of circuits. We tested scalability by plugging in all unchanged circuits from the main text into one LIMS and one m-LIMS model. 
Letting subscripts 0–3 denote the tasks HaluEval, SQuAD 2, AdvBench, and COT reasoning, respectively, the full LIMS model encoded the logic: $$ \bigwedge_{i=0,2} (P_i \rightarrow Q_i) \land (\neg P_i \rightarrow \neg Q_i) \land \bigwedge_{j=1,3}(P_j \rightarrow Q_j),$$ while the m-LIMS variant encoded: $$ (P_0 \rightarrow Q_0) \land (\neg P_0 \rightarrow \neg Q_0) \land \bigwedge_{i=1,2,3} (P_i \rightarrow Q_i).$$ We observed minimal or no degradation across tasks. ## Accuracy of Multi-task LIMS models | Model | HalluEval | SQuAD 2 | AdvBench | GSM8K (normalized to COT-prompt) | |------------------|-----------|---------|----------|----------------| | Base Model | 53.3 % | 61.8 % | 61.7 % | 55.5 % | | single-task LIMS | 83.0 % | 79.6 % | 85.0 % | 72.2 % | | multitask LIMS | 81.3 % | 78.5 % | 85.0 % | 72.2% | | single-task m-LIMS| 84.7 % | 79.9 % | 94.8 % | 76.8 % | | multitask m-LIMS | 81.7 % | 79.8 % | 92.0 % | 77.5% | # 4. How to enact complex / multi-hop logic with LIMS We add an appendix section showing how LIMS naturally supports arbitrary logic via compositions of implication circuits in detail. First note that we enact conjunctions of implications by summing the associated LIMS circuits, as we did for the experiments with circuits $(P \rightarrow Q) \land (\neg P \rightarrow \neg Q)$, since each contributes its steering effect additively when its sensing condition is active. The core logical form underlying LIMS “If sensing $P$, then behave as $Q$”, is more expressive than it may first appear. In fact, any propositional formula can be rewritten as conjunctions of clauses in the following “implicative form”: $$(P_{0} \land ... \land P_{n}) \rightarrow Q,$$ which can be directly implemented within the LIMS framework since it is logically equivalent to the nested implication: $$(P_{0} \rightarrow (... \rightarrow (P_{n} \rightarrow Q)... 
)).$$ With sensing circuits defined as $f_{p_{i}}(x)=\sigma(p_{i}^T h(x) - b_{p_{i}})$, the clause above is directly implemented as a product of sensing activations composed with the steering transformation for $Q$: $$\big(\cdots\big((q f_{p_{n-1}}(x))\, f_{p_{n-2}}(x)\big) \cdots f_{p_{0}}(x)\big) = q\, f_{p_{n-1}}(x) \cdots f_{p_{0}}(x).$$ This embodies the implicative form within the existing LIMS framework. To build more complex logic, one identifies all desired behaviors $Q_i$ and their exact corresponding preconditions $\psi_i$, forming a formula of the form: $$\phi = \bigwedge_i \psi_i \rightarrow Q_i.$$ Each $\psi_i$ can be expressed in disjunctive normal form, and using the equivalence $(A \lor B)\rightarrow C \equiv (A \rightarrow C) \land (B\rightarrow C),$ we rewrite $\phi$ as (re-indexing as necessary) $$\phi = \bigwedge_i(P_{i,0} \land ... \land P_{i,n_i}) \rightarrow Q_i,$$ where each $P_{i,j}$ is a concept (or its negation). Each such clause is implemented using a “product” LIMS circuit as above, and the full logic is realized by summing these circuits. This construction is fully general: any propositional formula can be represented this way and thus encoded using LIMS. We illustrate this with an artificial example in response to reviewer YApn, showing how a nested multi-step behavioral control code could be implemented with LIMS.
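The "product" circuit for a conjunctive clause such as $(P_0 \land P_1) \rightarrow Q$ can be sketched in a few lines. The toy below uses hand-picked orthogonal basis vectors as concepts and a shared threshold, purely for illustration of the construction, not the paper's trained circuits:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d = 8
# Toy orthogonal concept directions (illustrative, not learned from a model).
p0, p1, q = np.eye(d)[0], np.eye(d)[1], np.eye(d)[2]
b = 3.0  # shared sensing threshold, chosen by hand

def clause(h):
    """(P0 and P1) -> Q, as q scaled by the product of sensing activations."""
    return q * sigmoid(p0 @ h - b) * sigmoid(p1 @ h - b)

h_both = 6.0 * p0 + 6.0 * p1  # both preconditions active
h_one = 6.0 * p0              # only P0 active

steer_both = np.linalg.norm(clause(h_both))  # product of two open gates: near 1
steer_one = np.linalg.norm(clause(h_one))    # one closed gate kills the clause
```

Summing several such `clause` terms (one per implicative clause) realizes the conjunction of clauses described above.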
One-Step Diffusion Policy: Fast Visuomotor Policies via Diffusion Distillation
Accept (poster)
Summary: The authors introduce One-Step Diffusion Policy (OneDP), which distills pre-trained diffusion policies into single-step generators for robotic control. OneDP achieves 42× faster inference (62.5Hz vs 1.5Hz). Evaluation is performed on six simulation and four real-world tasks. Claims And Evidence: The results are promising, but I am not entirely convinced. See below. Methods And Evaluation Criteria: Ok Theoretical Claims: N/A Experimental Designs Or Analyses: Ok-ish Supplementary Material: Ok Relation To Broader Scientific Literature: Ok Essential References Not Discussed: N/A Other Strengths And Weaknesses: - My main concern with the method is that it should lose partly/fully the benefit of multimodality that is given by diffusion policy, which is perhaps the biggest strength of diffusion policy. It would have been interesting to see experiments where multimodality is required, and to see how the distilled one-step policy would deal with them. - The idea is simple, but not novel (same trick used, e.g., in image generation). Other Comments Or Suggestions: - Figure 1, training cost: I think this is highly misleading. I understand that the distillation part is much cheaper than the full training of the diffusion policy, showing that oneDP adds little overhead. However, it still requires training the original diffusion policy first Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We find it disheartening that Reviewer Bfs2 rated our paper a 1, accompanied by a very brief review and a complete dismissal of our contributions, despite our focus on a critical problem in learning fast visuomotor policies for robotic control and our efforts to advance the state of the art in this area. We respectfully urge Reviewer Bfs2 to reassess our paper more carefully and, if time permits, consider the evaluations provided by the other reviewers. We address your concerns in detail below: 1. Mode-Seeking Behavior in Reverse KL Minimization: While reverse KL minimization encourages mode-seeking, this does not necessarily lead to mode collapse. In our real-world coffee machine manipulation experiments, we deliberately collected data for closing the lid from two different angles. After distillation, the distilled policy retained this multimodal behavior. Additionally, mode-seeking can be beneficial in reinforcement learning tasks: once the policy identifies an optimal solution, it helps eliminate noisy low-density actions and enhances stability. In our case, imitation learning data is human-demonstrated and high-quality, ensuring that the pretrained policy converges to optimal solutions. Therefore, mode-seeking is not a critical concern for our approach. 2. Our submission's primary area is `Applications -> Robotics`. Diffusion distillation has been applied across various domains, including 2D/3D generation, video generation, and diffusion-based language models. However, our contribution focuses on demonstrating its effectiveness in robotics, a field where efficient policy execution is critical. We put significant effort into demonstrating the effectiveness of diffusion distillation in robotics and its wide potential for the field. We also kindly request Reviewer Bfs2 to consider the potential directions mentioned by Reviewer kXET. 3.
The pretraining + distillation paradigm is widely adopted across AI research, including language models (e.g., GPT-4o with its 4o-mini and o1-mini variants). Distillation plays a crucial role in making inference efficient, reducing computational costs, and enabling deployment on constrained hardware. For robotic control, where policies must run onboard with limited compute, achieving fast inference without sacrificing performance is highly practical and necessary. Given its broad adoption in other domains, distillation should not be viewed as a drawback but rather as a crucial technique for real-world applicability. We will update Figure 1 to make it clearer.
Summary: This paper adapts diffusion distillation techniques from text-to-3D generation to achieve one-shot diffusion policies for robotics. The authors compare two types of distillation: 1) distilling to a stochastic one-step diffusion policy, and 2) distilling to a deterministic one-step policy. They show that the stochastic policy has a task performance slightly superior to that of the original DDPM teacher, with a ~40x inference time speedup. The method significantly outperforms a consistency policy baseline, both in terms of final performance and training speed. Claims And Evidence: The main claim is that the OneDP dramatically speeds up inference: "that OneDP not only achieves state-of-the-art success rates but also delivers an order-of-magnitude improvement in inference speed, boosting action prediction frequency from 1.5 Hz to 62 Hz." This is well-supported by the results (although see the caveat in "Experimental Designs Or Analyses"). Methods And Evaluation Criteria: The evaluation settings are thorough, involving both simulated and real-world experiments. Theoretical Claims: N/A Experimental Designs Or Analyses: Overall, the experiments look sound. The one comment I'd have is that it seems that the underpowered hardware might accentuate performance differences. A DDPM frequency of 1.5 Hz is much lower than Diffusion Policy (10Hz diffusion), Octo (10Hz diffusion), or $\pi_0$ (~6Hz flow matching). Nevertheless, I think that ultimately these visuomotor policies will need to be executed onboard with resource-constrained hardware, and this work is correspondingly more valuable. Supplementary Material: I didn't review the code. Relation To Broader Scientific Literature: I think that the importance of faster action inference is pretty clear for robotics, especially for systems where compute has to be entirely onboard. I'd like to comment on another reason I see this paper as being important to the field that I don't see mentioned by the authors.
One important challenge with these diffusion policies lies in transitioning from imitation learning pre-training to finetuning with online RL. Obvious online RL candidates like PPO don't immediately apply, as they require the gradient of the action probability w.r.t. the policy parameters; this isn't available in closed-form because the diffusion action probability is a) defined implicitly, and b) involves iterative denoising which would be hard to differentiate through anyways. This paper solves problem (b), and I think opens up a few interesting possibilities for addressing (a), such as having the policy network output a normal distribution centered at the distilled $G_{\theta}(O)$, or perhaps this idea can be adapted to flow matching where the density at a point can be evaluated. While understandably this extension isn't in scope for the current paper, I think the paper opens up some exciting next steps. Essential References Not Discussed: I'd say that $\pi_0$ is a fairly important paper that should be discussed: Black, Kevin, et al. "$\pi_0 $: A Vision-Language-Action Flow Model for General Robot Control." arXiv preprint arXiv:2410.24164 (2024). Other Strengths And Weaknesses: The primary weakness is that the paper is a little lacking in novelty, as it largely adapts ideas from the generative AI literature to robotic policies. But I still think that the paper is a valuable contribution for both its practical impact on the feasibility of diffusion policies and it setting the groundwork for incorporating online RL. The paper is overall well-written and easy to follow. Other Comments Or Suggestions: "Initializaiton" in Algorithm 1. Questions For Authors: 1. In Algorithm 1, should not the sampling of $A_{\theta}$ be deterministic in the OneDP-D case? 2. Can the authors elaborate on "distillation occurs over [2, 95] diffusion timesteps to avoid edge cases" in line 248? I haven't seen this before. 3. 
I'm somewhat surprised that the stochastic policy outperformed the deterministic one. I get that the stochasticity during pre-training is valuable, as human demonstrations aren't deterministic and thus it's difficult to fit a deterministic policy to them. But when we're distilling, it seems that converging to the action mode should be beneficial (as briefly discussed in the "Distillation Discussion" section). Could the authors elaborate on why OneDP-S outperforms OneDP-D? The authors write that: "A stochastic policy, which encompasses deterministic policies, is more versatile and better suited to scenarios requiring exploration, potentially leading to better convergence at a global optimum." In RL, I see why exploration is important; but here we're doing imitation learning so I don't see how this applies. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer kXET's recognition of our work's value and your insightful suggestion on extending our distillation technique to facilitate online RL fine-tuning. We agree this is a promising future direction. Below, we address your remaining concerns and questions. ### **References** We will include a discussion of *π₀: A Vision-Language-Action Flow Model for General Robot Control* in the related work section. ### **Questions & Responses** 1. **Deterministic Sampling in OneDP-D (Algorithm 1)** Thank you for pointing this out. In the OneDP-D case, $z$ is indeed fixed at zero. We will correct the typo in Algorithm 1 accordingly. 2. **Distillation Over [2, 95] Diffusion Timesteps** The reason for avoiding the extreme timesteps (near 0 and 100) is that the distilled policy’s distribution may initially differ significantly from the pretrained diffusion policy. - If nearly no noise is added, the distilled policy may generate actions that fall in low-density regions of the pretrained policy, leading to unstable distillation loss. - If excessive noise is added, the score guidance from the pretrained policy becomes nearly uniform across actions, reducing its utility for training. This aligns with observations from *DreamFusion*, where extreme noise levels were avoided. We quote the original sentence here: “We sample t ∼ U(0.02, 0.98), avoiding very high and low noise levels due to numerical instabilities”. 3. **Why OneDP-S Outperforms OneDP-D** This is because the pretrained diffusion policy excels at learning multi-modal distributions, and the distilled stochastic policy inherits this multi-modality. As discussed in our response to Reviewer 5rTc, reverse KL divergence encourages mode-seeking behavior, allowing the stochastic policy to converge to multiple optimal action modes.
- In our coffee machine manipulation tasks, we designed scenarios with two distinct successful strategies. The stochastic policy successfully captured both, whereas the deterministic policy followed only one. - Exhibiting multi-modality enables the robot to solve the request in broader regions with multiple potential solutions so it is more stable than pure deterministic policy that can only follow one specific path. We appreciate your thoughtful feedback and will incorporate these clarifications in the final revision. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their reply. I am comfortable maintaining my accept recommendation. --- Reply to Comment 1.1.1: Comment: We again thank Reviewer kXET for the thoughtful comments and the recommendation to accept our paper.
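The timestep-range point above (sample intermediate noise levels and use the pretrained score as guidance) can be illustrated with a one-dimensional toy. Everything here is an invented stand-in, not the OneDP objective: the "teacher" is an analytic Gaussian $\mathcal{N}(\mu^*, s_0^2)$ whose noise prediction is known in closed form, the noise schedule is a hypothetical linear map of $t \sim U(0.02, 0.98)$, and the one-step "generator" is a single scalar parameter. The SDS-style gradient $\hat{\epsilon} - \epsilon$ nonetheless drives the student toward the teacher:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_star, s0 = 2.0, 0.5   # toy teacher: actions ~ N(mu_star, s0^2)
theta = -1.0             # one-step generator parameter (outputs theta directly)
lr, steps, batch = 0.05, 300, 1024

for _ in range(steps):
    # sample t in [0.02, 0.98] as in the quoted DreamFusion range,
    # mapped to a made-up linear noise scale
    t = rng.uniform(0.02, 0.98)
    sigma_t = 2.0 * t
    eps = rng.normal(size=batch)
    x_t = theta + sigma_t * eps            # noised generator samples
    # analytic teacher noise prediction at noise level sigma_t
    eps_hat = sigma_t * (x_t - mu_star) / (s0**2 + sigma_t**2)
    grad = np.mean(eps_hat - eps)          # SDS-style distillation gradient
    theta -= lr * grad

# theta is driven toward the teacher's mode mu_star
```

In expectation the gradient equals $\sigma_t(\theta - \mu^*)/(s_0^2 + \sigma_t^2)$, so the update contracts $\theta$ toward $\mu^*$; near $t \approx 0$ or $t \approx 1$ this signal either becomes noisy or vanishingly small, which is the intuition behind avoiding extreme timesteps.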
Summary: The paper introduces the One-Step Diffusion Policy, a novel approach that distills a pre-trained multi-step diffusion-based visuomotor policy into a single-step action generator for robot control. This one-step policy greatly accelerates inference (boosting action output frequency from ~1.5 Hz to 62 Hz) while maintaining state-of-the-art task success rates. **Strengths**: - Order-of-magnitude speed improvement in action inference without performance loss. - The distillation process adds only 2%–10% additional training cost. - Efficient task completion: the policy not only reacts faster but also completes tasks in significantly less time. **Weaknesses**: - The study does not assess long-horizon tasks, where cumulative decision-making plays a larger role. - Some mode-seeking behaviors inherent in reverse KL minimization could lead to suboptimal exploration in stochastic tasks. - No analysis of failure cases. Claims And Evidence: - The paper presents quantitative results comparing the inference speed of OneDP (62 Hz) vs. Diffusion Policy (1.49 Hz) - A wall-clock time comparison (Table 5) shows that OneDP's single-step action generation is much faster than the iterative denoising process in traditional diffusion models. --- - While the improvement is clear, the robot’s control frequency was limited to 20 Hz in real-world settings - Success rates are evaluated on a fixed set of tasks. The results may not generalize to more complex or unseen environments. Methods And Evaluation Criteria: **Strengths**: - OneDP effectively distills a slow diffusion policy into a single-step generator, achieving ~42× faster inference while maintaining high success rates. - Benchmark evaluation on Robomimic and real-world tasks with meaningful metrics (success rate, inference speed, task time). - Fair comparison with Diffusion Policy and Consistency Policy, demonstrating superior speed and efficiency.
**Weaknesses**: - KL divergence distillation may limit policy diversity; alternative techniques are unexplored. - Evaluation is limited to short-horizon tasks; long-term planning and generalization to unseen environments are not tested. Theoretical Claims: - The paper’s theoretical foundation is based on KL divergence minimization for distillation. - The mathematical derivation follows from established diffusion model literature (Ho et al., 2020; Song et al., 2020), ensuring theoretical validity. - No formal proof or bound is given for the error introduced by the single-step approximation compared to multi-step diffusion policies. Experimental Designs Or Analyses: - Evaluations conducted on six Robomimic tasks (simulation) and four real-world robotic tasks (Franka arm), ensuring relevance to visuomotor learning. - While OneDP theoretically achieves 62 Hz, real-world experiments cap execution at 20 Hz for stability. Supplementary Material: Most demos show that the policies will work in a real-world setting. Relation To Broader Scientific Literature: 1. Diffusion Models in Robotics & Policy Learning: 2. The work is related to Consistency Policy (CP) (Prasad et al., 2024), which also attempts to accelerate Diffusion Policies. 3. KL-Based Distillation in Generative Models Essential References Not Discussed: No. Other Strengths And Weaknesses: No other strengths and weaknesses. Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: 1. Would it be possible to include experiments on other imitation learning methods cited in the paper for comparison? 2. Is there a more in-depth discussion on the insights behind why the distilled policy can achieve better performance compared to the original diffusion policy? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer 5rTc's positive feedback and address your remaining concerns below: ### **Weaknesses:** 1. **Mode-Seeking Behavior in Reverse KL Minimization** While reverse KL minimization encourages mode-seeking, this does not necessarily lead to mode collapse. In our real-world coffee machine manipulation experiments, we deliberately collected data for closing the lid from two different angles. After distillation, the distilled policy retained this multimodal behavior. Additionally, mode-seeking can be beneficial in imitation learning tasks—once the policy identifies an optimal solution, it helps eliminate noisy low-density actions and enhances stability. In our case, imitation learning data is human-demonstrated and high-quality, ensuring that the pretrained policy converges to optimal solutions. Therefore, mode-seeking is not a critical concern for our approach. 2. **Evaluation on Long-Horizon Tasks** We emphasize that our study explores both simulation and real-world tasks. However, long-horizon tasks require a diffusion policy trained explicitly for long-horizon planning. We acknowledge the importance of extending our approach to such tasks and consider it a promising direction for future work. ### **Questions:** 1. **Comparison with Other Imitation Learning Methods** Implementing all cited imitation learning baselines within our framework is highly resource-intensive and beyond the scope of this work. Instead, we selected Diffusion Policy and Consistency Policy as our primary baselines, as they are the most representative diffusion-based approaches. 2. 
**Why the Distilled Policy Outperforms the Pretrained Policy** The distilled policy demonstrates slightly better performance than the original diffusion policy due to two main factors: - **Error Accumulation in Multi-Step Sampling:** Traditional diffusion policies suffer from accumulated errors over multiple denoising steps, while our one-step approach eliminates this issue. - **Mode-Seeking Behavior Enhancing Stability:** As previously mentioned, the distilled policy focuses more on successful trajectories, reducing the influence of low-density noisy actions. This results in a more stable and confident policy during evaluation.
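The mode-seeking property of reverse KL discussed in this rebuttal can be illustrated numerically: when a single narrow Gaussian q is fit to a bimodal target p, KL(q‖p) strongly prefers placing q on one mode rather than in the low-density region between modes. The grid-integration sketch below is illustrative only and is unrelated to the actual policy-distillation implementation:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Bimodal target p: equal mixture of N(-2, 0.5^2) and N(+2, 0.5^2).
def p(x):
    return 0.5 * normal_pdf(x, -2.0, 0.5) + 0.5 * normal_pdf(x, 2.0, 0.5)

def reverse_kl(q_mu, q_sigma=0.5, lo=-8.0, hi=8.0, n=4000):
    """Midpoint-rule approximation of KL(q || p) for q = N(q_mu, q_sigma^2)."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        qx = normal_pdf(x, q_mu, q_sigma)
        if qx > 1e-12:  # skip regions where q has negligible mass
            total += qx * math.log(qx / p(x)) * dx
    return total

# q centered on one mode scores far better than q centered between the
# modes (where p has little mass): this is the mode-seeking behavior.
print(reverse_kl(2.0) < reverse_kl(0.0))  # True
```

This is the sense in which reverse KL "eliminates noisy low-density actions": the q covering a single mode incurs a small divergence, while any q forced to put mass between modes is heavily penalized.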
Generative Social Choice: The Next Generation
Accept (oral)
Summary: The paper investigates the scope of generative social choice, aiming to generate a slate of statements (usually from a large set such as textual information) representing the voters. Queries (usually implemented by LLMs) can be made on agent utilities and on generating a good statement among a group of agents. The problem is naturally related to traditional committee voting and large language models, which have strong generative power. The paper extends previous literature by taking the overall budget and the inaccurate queries into consideration. The paper proposes a new algorithm that generates the slate (and assigns agents to statements). Theoretically, the paper gives worst-case guarantees on the performance of their algorithm reaching the balanced justified representation axiom parameterized by the errors in the queries. They also somewhat show that the guarantee is close to the optimum. Secondly, they show in synthetic experiments that the performance converges way faster than the theoretical guarantee under the same setting. Finally, they implement the generative social choice system PROSE by GPT-4o and test it on real-world data. It outperforms all the benchmarks in both keeping the BJR axiom and maintaining a high utility of the generated slates among the agents. ## After Rebuttal The author(s) address my questions well. I am happy to maintain the current score. Claims And Evidence: Yes. All the claims are supported by theories or experimental results. Methods And Evaluation Criteria: In general, yes. This paper uses LLM to evaluate the average utility of the outcome slate in real-world experiments. The evaluation method is seemingly similar to the implementation of the DISC query (utility query) in their algorithm, but they run validation experiments to show that they are likely independent. Theoretical Claims: I check Theorem 3.1 and browse Theorem 3.2. I think they are correct. 
Experimental Designs Or Analyses: I am not very familiar with examining experiments, but in general the experiments, especially the validation experiments, look good to me. Supplementary Material: I look at the Lemmas and the proof of Theorem 2. I also look at the LLM implementations for PROSE. Relation To Broader Scientific Literature: The paper follows a newly emerged and active topic of generative social choice [Fish et al., 2024], which has the potential to open an entirely new research direction in this area. This paper extends the idea to more practical settings, making the prospect more realistic. The paper is also closely related to the combination of LLMs and social choice/multi-agent systems, bringing insights into AI-augmented voting and decision making. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: I think this is in general a very good paper. It studies a topic full of potential (generative social choice) and gives a rigorous and practical solution. They aim to address the budget constraint and the inaccuracy of the queries (usually arising from the instability of LLMs) and solve the problem with a newly designed, theoretically guaranteed algorithm. The algorithm and the theories are not direct extensions of previous literature. Impossibility results are also given to illustrate the (near-)tightness of the worst-case bound. Moreover, they implement the algorithm in practice and demonstrate its capability on real-world data. The paper significantly relaxes the assumptions and broadens the application of generative social choice, making this AI-augmented voting scheme more promising. The presentation is also good, with most of the important assumptions or designs explained (such as BJR) or validated via experiments. Weaknesses: I don't have major complaints about the paper, yet some points are worth improving. 1. Proof for Theorem 3.1 is a bit dense for now. 
As this is the only proof in the main paper, I suggest giving a clear explanation of every important step to convey your key theoretical techniques. 2. The paper can benefit from a clearer discussion of its relationship to [Fish et al., 2024], on, for example, why its result is not a natural extension of the existing work. 3. The impossibility results consist of multiple theorems in which some parameters are fixed to 0 or 1 (the exact version). Are there more general results? 4. The implementation of the GEN queries in PROSE seems different from those in the theory. Instead of finding a statement with max support in a given subset, it seems to cluster agents and find statements for clusters. How do you justify this? Other Comments Or Suggestions: Typo: Line 2546-252: I suppose $\zeta$ and $\alpha^*$ refer to the same statement? Questions For Authors: 1. How do you justify your implementation of GEN queries in PROSE? Does your implementation still follow your theory? 2. Are there more general impossibility results where parameters are not fixed to special values? 3. You assume the cost is only on the output. Given that the implementation requires a considerable number of LLM calls, do you think it is more reasonable to also take query costs into consideration? 4. Your theoretical results only consider the satisfaction of BJR, while your experiments have positive results on the average utility of the outcome slate. Does your theory also show something about the utility? 5. The error parameter $\mu$ only works on comparing the query output with other potential statements. I think this needs a justification. Why can the query output a statement with an accurate cost, but the comparison is made under $\mu$ error? 6. Is there any runtime evaluation on PROSE (and probably other benchmarks) and between different procedures in PROSE? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We will improve the clarity of the proof of Theorem 3.1 and expand the discussion on its relation to Fish et al.; see also our response to Reviewer dWvn. > How do you justify your implementation of GEN queries in PROSE? Does your implementation still follow your theory? Our theoretical results are agnostic to the implementation of the generative query. In this sense, we view different query implementations as approximate methods for solving the optimization problem defined by Equation (1). In practice, we experimented with various implementations and ultimately adopted a two-step procedure. First, we identify agents $S' \subseteq S$ who are likely to approve the same statement at level $\ell$, using embeddings and clustering or nearest-neighbor techniques. Second, we prompt the LLM to generate a length-bounded statement that would be approved by all agents in $S'$. That is, we first identify a potential supporter group and then generate a statement intended to satisfy them. We adopted this approach because LLMs were unreliable at discovering such groups. Ultimately, when presented with a generative query, we always produce multiple candidate statements, from which we return the one with the highest number of supporters at level $\ell$. > Are there more general impossibility results where parameters are not fixed on special values? Let us consider Theorem 3.4 as an example. The result is stated under the assumption of no error in how statements are evaluated (i.e., $\beta = \delta = 0$). Conditioned on this, the theorem shows that no algorithm can guarantee $(p,\frac{1}{\mu}\frac{|W|}{|W|\gamma+1})$-cBJR for any constant $p\in \mathbb{N}_0$. The immediate implication of this is that such a guarantee is also impossible for any other value of $\beta$ and $\gamma$ (adding error will never help). 
One might ask whether Theorem 3.4 can be strengthened for non-zero $\beta$ or $\gamma$, but this seems unlikely: Increasing them intuitively only drives up $b$ in $(b,d)$-cBJR. However, Theorem 3.4. already shows that we cannot get a guarantee for any fixed $b$ even for $\beta=\gamma=0$. We will make this implication more explicit in the paper. > Do you think it is more reasonable to also take query costs into consideration? This is an interesting point, but somewhat orthogonal to the primary focus of this paper. Our goal is to ensure that the output slate proportionally reflects user opinions – a challenging task in itself. That said, we agree this raises a possible direction for future work: one could consider imposing proportionality not only on the slate but also on the cost of the generation process. For example, one might impose that each agent "deserves" that $1 is spent on trying to generate consensus statements they like. > Does your theory also show something about the utility? Yes and no. We do not provide formal guarantees on average utility across all instances. However, our BJR guarantees ensure that sufficiently large and cohesive agent groups are mapped to statements in the slate from which they derive a high utility. For highly homogeneous instances, this would imply a non-trivial average utility. > Why can the query output a statement with an accurate cost, but the comparison is made under $\mu$ error? The role of the parameter $0 \leq \mu \leq 1$ is best understood in the absence of other errors. In this case, Equation (1) reduces to: $$ \text{sup}(\alpha^*, S, \ell) \geq \max_{\alpha \in \mathcal{U} : c(\alpha) \leq \lceil \mu x \rceil} \text{sup}(\alpha, S, \ell) $$ This means that the returned statement $\alpha^*$, even if it has cost $x$, must only be as well supported as any statement of cost up to $\mu x$. We introduced this parameter because LLMs often undershoot the provided word budget in practice. 
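The reduced condition above (the returned statement need only beat candidates of cost at most $\lceil \mu x \rceil$) can be sketched as a simple check over a candidate set. All names and numbers below are illustrative and not from the paper:

```python
from math import ceil

def satisfies_mu_guarantee(returned, candidates, mu, x):
    """Check the error-free generative-query guarantee: the returned
    statement must have at least as much support (at the fixed level ell,
    assumed precomputed in 'support') as any candidate whose cost is at
    most ceil(mu * x). Illustrative sketch only."""
    best_cheap = max(
        (s["support"] for s in candidates if s["cost"] <= ceil(mu * x)),
        default=0,
    )
    return returned["support"] >= best_cheap

# Hypothetical universe of statements (cost = word length).
candidates = [
    {"cost": 5,  "support": 12},
    {"cost": 8,  "support": 20},
    {"cost": 10, "support": 25},  # best overall, but over the mu-budget
]
returned = {"cost": 10, "support": 20}

# With mu = 0.5 and budget x = 10, only statements of cost <= 5 compete,
# so the returned statement satisfies the guarantee.
print(satisfies_mu_guarantee(returned, candidates, mu=0.5, x=10))  # True
```

With $\mu = 1$ the same returned statement would fail the check, since the cost-10 candidate with support 25 would then enter the comparison set.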
Intuitively, the model would internally only search for statements in a more conservative space (shorter than allowed). As a result, we can only expect to identify the best statement among those of length at most $z$ (for some $z\leq x$). This behavior is captured by $\mu$, which quantifies this budget undershooting. > Is there any runtime evaluation on PROSE (and probably other benchmarks) and between different procedures in PROSE? The runtime of PROSE is dominated by the response times of the LLM used in our query implementations. Our algorithm makes $\mathcal{O}(r \cdot n)$ generative queries and $\mathcal{O}(r \cdot n^2)$ discriminative queries. In our experiments, across the four datasets, PROSE used 9.6M–25.4M input and 53.5K–96.1K output tokens, with runtimes of 31–65 minutes on a single Intel i7-8565U CPU @ 1.80GHz. However, given the rapidly improving inference speeds of modern LLMs, we expect these runtimes to significantly decrease in the future. By contrast, PROSE-UnitCost required around five times fewer resources: between 2.1M and 4.4M input tokens and 15.4K to 20.2K output tokens, with runtimes between 7 and 12 minutes. --- Rebuttal Comment 1.1: Comment: Thank you for the explanation. I am happy to maintain the current score.
Summary: The authors consider the problem of generating a set of statements that is representative of a collection of agent opinions on some topic, motivated by participatory budgeting. Extending an earlier definition of proportionality for such a setting, balanced justified representation (BJR), the authors introduce a version with costs and approximate proportionality, (b, d)-costBJR. The authors consider an algorithm that has access to two types of (approximate) queries, (1) discriminative queries, which give the utility of agent $i$ for statement $\alpha$; and (2) generative queries, which for a set of agents, a utility threshold $\ell$, and a cost $x$, finds the statement with cost at most $x$ that maximizes the number of agents in the set who have utility for the statement at least $\ell$. These types of queries were introduced in prior work, but the authors extend them to the approximate case. They provide an algorithm which approximates $(b, d)$-cBJR if given access to approximate discriminative and generative queries. They also prove lower bounds matching the utility approximation error and nearly matching the proportionality approximation error. The authors demonstrate their algorithm's guarantee and impossibility results in a synthetic data experiment. Then, they propose an LLM-based approach, which they call PROSE, to heuristically implementing generative and discriminative queries for their algorithm (i.e., DISC = ask GPT-4o to estimate agent utilities for a statement given some text describing their opinion, GEN = ask GPT-4o to write a statement approved by a set of agents). The authors apply PROSE to several semi-synthetic real-world datasets, using a chain-of-thought version of their DISC prompt to estimate agent utilities for generated statements. PROSE performs substantially better in terms of (GPT-estimated) agent utilities and BJR violations than LLM baselines using clustering and zero-shot generation. 
### Update after rebuttal Thanks for the very clear answers! Incorporating this info into the paper/appendix would be great. I'm happy to continue recommending acceptance. Claims And Evidence: The theorems are all supported by proofs in the main text or appendix, and the experiments do a good job of demonstrating the effectiveness of the proposed algorithm. Methods And Evaluation Criteria: I think the methods and evaluation criteria are reasonable. Theoretical Claims: The definitions and claims are very clearly presented, and the proofs I checked are well-written and convincing (Thms 3.1 and 3.4), although I have not checked every detail. Experimental Designs Or Analyses: I have some moderate quibbles with some details of the experiment, but given the difficulty of evaluating generated statements, I think the approach the authors took is reasonable. I would have liked a little more spelling out of experimental weaknesses in the text, as described below. An important comparison that's missing: it seems like it should be possible to run the algorithm from Fish et al. (2024) and compare it to DemocraticProcess when used in PROSE. Is this new algorithm that accounts for approximate queries a meaningful improvement when used with LLMs? - There is clearly some self-confirmation bias in using GPT-4o to compute "true" utilities against which the proposed methods are evaluated as well as the DISC queries in the execution of PROSE. But the fact that the baselines are given the advantage of using the true CoT utilities is a nice touch that alleviates this concern. It would be good for the paper to more explicitly highlight this source of bias and the mitigation strategy used to strengthen the baselines by using CoT utilities. Currently this point is relegated to footnotes. Then again, without an actual experiment with human subjects, I don't see a better way of evaluating the utility of generated statements. 
- Section D.5 of the appendix presents two seemingly contradictory claims, that (1) both the DISC query and the CoT utilities have high correlation with true thumbs up/down ratings and (2) the two utilities are very weakly correlated. I can see why the authors want both of these things to be true, as (1) supports the validity of GPT-4o utility estimates and (2) mitigates the earlier issue of self-confirmation bias, but there's also a tension between those claims. The measurement of correlation (line 1113-1117) is very strange and hard to interpret. Some more intuitive measures that I would have liked to see: the inter-rater reliability score, the fractions of agent-statement pairs where the difference in labels is 0 and 1, and, most comprehensively, the full distribution of label discrepancies for each agent-statement pair. Supplementary Material: The appendix is great, with all of the prompts, example statements from the experiments, and all proofs. Minor: I noticed that Proposition 3.5 is labeled Proposition B.4 in the appendix. Relation To Broader Scientific Literature: This work builds very explicitly on the work of Fish et al. (2024), which is even reflected in the title. It is a clear improvement in the handling of approximate queries and having an algorithm with approximate guarantees. One important thing that seems to be missing is a discussion of how the DemocraticProcess algorithm is similar or dissimilar to the algorithm of Fish et al. introduced for exact DISC and GEN queries. How do the algorithms differ? Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Overall, I think this is a very nice paper, but with a few areas to improve. To summarize what I've written in other sections: Strengths: - The paper is extremely well-written and clearly presented. Overall quality is great. 
- The paper has all of the components I like to see in a combination theory/applied paper: upper and lower bounds and application to real data. - The problem is very interesting and the results are strong. Weaknesses: - I'm not convinced by balancing on word length of statements. This seems convenient but is a very odd choice for real-world applications. - The lack of comparison to the algorithm of Fish et al. (2024) either merits solid justification or it should be added. - The text of the paper could do a better job highlighting the weaknesses of LLMs rather than only focusing on their advantages (and I say this as someone who thinks an LLM is the best tool for this job). Other Comments Or Suggestions: 1. If the cost of a statement is its length, it seems quite odd to require that the fraction of agents assigned to statement $\alpha$ is proportional to the length of $\alpha$. It seems more natural to have weighted statements. E.g., with 10 agents, 3 of whom dislike ice cream and 7 of whom like it, it feels odd to require a word-length-balanced set of statements like {"hate ice cream", "love love love love love ice cream"} rather than: {"I really like ice cream": 0.7, "I don't like ice cream": 0.3} (same word budget of 10 in both cases). In other words, I feel like the unit cost version of the model makes more sense, but with the addition that statements should be labeled with the proportion of agents who support them at some meaningful level $\ell$. 2. lines 351-352: "PROSE also does not require any tuning of hyperparameters". This is technically true, but only because the approach uses a trillion-parameter black-box LLM with no guarantees for its responses to GEN and DISC queries. Generating statements uniformly at random also requires no hyperparameter tuning--this is obviously an unfair comparison, and I'm sure GPT-4o does a remarkably good job of generating statements based on agent text. 
But highlighting the lack of hyperparameters feels a bit disingenuous. 3. I would find 4.1 less objectionable if it highlighted both the strengths and limitations of using an LLM for DISC and GEN queries: yes, massive flexibility in agent preference format (free text, survey responses, ...) and cutting-edge text generation quality, but also no guarantees due to the nature of LLMs and the possibility of errors/hallucinations. I think the Impact Statement does a great job of acknowledging the limitations of using an LLM for statement generation and discrimination, but I would like to see some of these points incorporated earlier into the paper, when PROSE is introduced. 4. Minor: in lines 751-757, this algebra is correct, but I think it's preferable to avoid the appearance of "affirming the consequent" (i.e, the fallacy that x is true because x implies true). Verifying that this indeed establishes the desired inequality requires being sure every chain is an if-and-only-if rather than merely an implication. I would reorder the argument to arrive at the desired conclusion rather than start with it Questions For Authors: 1. for the drug review datasets, why artificially rebalance the dataset to make the score distribution uniform or highlight extreme and central ratings? It seems like the more natural thing to do based on the motivation for proportionally representative statements is to use the full distribution of reviews (or at least a representative random sample if the dataset is too large). 2. Out of curiosity, any reason why the budgets were 160 in the drug review experiments and 164 in the Bowling Green experiment? Not important to the results, just seems like an arbitrary inconsistency. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their comments! We will revise the paper to include a detailed discussion of the limitations of our experimental setup and LLM-based query implementations. We will also add a dedicated section elaborating on the relationship to Fish et al. (2024). > for the drug review datasets, why artificially rebalance the dataset to make the score distribution uniform or highlight extreme and central ratings? The main reason for this design decision is that the original score distributions are quite degenerate, e.g., in the obesity dataset, over 70% of users give a score of 9 or 10. This skewness makes the summarization task quite simple, as nearly all users would support the same kind of statements. > Any reason why the budgets were 160 in the drug review experiments and 164 in the Bowling Green experiment? The budget in each case was chosen to be divisible by the number of agents (n = 80 for drugs and n = 41 for Bowling Green). This avoids rounding artifacts in the Clustering baseline. > The measurement of correlation (line 1113-1117) is very strange and hard to interpret. Some more intuitive measures that I would have liked to see: the inter-rater reliability score, the fractions of agent-statement pairs where the difference in labels is 0 and 1, and most comprehensive, the full distribution of label discrepancies for each agent-statement pair. The inter-rater reliability score (via Cohen's kappa) for the two implementations is 0.41. For 39/51 users (76%), both implementations return higher mean scores on upvoted statements than downvoted statements; for 8/51 users (16%), one implementation returns higher mean scores and the other lower mean scores on upvoted statements (each implementation is correct 4/8 times). 
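For reference, the inter-rater agreement statistic quoted in this rebuttal (Cohen's kappa) can be computed for two label sequences with a short stdlib sketch. The labels below are synthetic and are not the paper's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement rate.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independent marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical up/down judgments from two query implementations.
a = ["up", "up", "down", "up", "down", "down", "up", "down"]
b = ["up", "down", "down", "up", "down", "up", "up", "down"]
print(round(cohens_kappa(a, b), 2))  # → 0.5
```

Kappa of 0 means agreement no better than chance and 1 means perfect agreement, so the reported 0.41 indicates moderate agreement between the two query implementations.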
Disaggregating by user and looking at all 1275=51\*5\*5 (agent, upvoted statement, downvoted statement) pairs, both queries agree on 53% of pairs, CoT is correct and PROSE is incorrect on 12% of pairs, PROSE is correct and CoT is incorrect on 13% such pairs, and both are incorrect on 21% such pairs. > *Differences in democratic processes compared to Fish et al. (2024) and missing experimental comparison* Our democratic process is related to the one of Fish et al. but differs in several aspects: 1. We account for statements having varying costs (e.g., word lengths). 2. Fish et al.’s generative query, given a set of agents $S$ and an integer $r$, returns the statement maximizing the $r$-th highest utility among users in $S$. In contrast, our query takes a utility threshold $\ell$ and returns a statement approved by the maximum number of agents in $S$ at level $\ell$. 3. Related to 2., our algorithm iteratively considers decreasing utility levels to decide which statement to add next, which is important for deriving our proportionality guarantees under approximate queries. As for the absence of a direct experimental comparison: unfortunately, the implementation by Fish et al. (2024) is not applicable to our datasets, as it requires highly structured user input (e.g., ratings of predefined statements and answers to survey questions). By contrast, our work focuses on settings with unstructured textual input. We opted not to reimplement their queries for unstructured data, as this would introduce substantial ambiguity due to subjective implementation choices. Instead, we included the PROSE-UnitCost baseline, which aligns with the core assumption of Fish et al. (2024) – namely unit-cost of statements – but uses our own query implementations to ensure comparability. 
As a side note, we highlighted that PROSE does not require hyperparameter tuning because it flexibly adapts to varying problem instances without requiring dataset-specific adjustments (such as setting the number of output statements, clustering granularity, or query wording). However, we agree that the term is misleading and we will reword it. > In other words, I feel like the unit cost version of the model makes more sense, but with the addition that statements should be labeled with the proportion of agents who support them at some meaningful level $\ell$. We appreciate this suggestion and believe that the answer is dependent on the intended application. Our model is particularly well-suited for settings where the reader’s attention is limited. For instance, in online democratic deliberation platforms (c.f. Bowling Green dataset), larger groups may reasonably expect a more detailed representation of their opinions: That is, a greater share of the limited attention budget of the reader is devoted to articulating their positions in more depth. Moreover, controlling the total slate length while leaving the number of statements flexible is a more general advantage of our approach. That said, we note that our theoretical framework also applies to the proposed model. Further, PROSE could be adapted to return, along with each statement, a support value, and then use that value to define the cost of the statement.
Summary: The paper addresses the task of producing a slate of statements representative of users' opinions. The framework is based on social choice and supported by large language models (LLMs). Theoretical guarantees are provided about the accuracy of the LLM output. The case studies revolve around city improvement measures and drug reviews, and showcase the effectiveness of LLMs in generating concise statements representative of users' opinions. **Update after rebuttal** I am satisfied with the authors' answer and confirm my positive evaluation of the paper, thus recommending its acceptance. Claims And Evidence: Yes. Statements are supported by theoretical proofs of the accuracy and robustness of their method. Methods And Evaluation Criteria: Yes. The authors integrate LLMs in the pipeline of social choice. Theoretical Claims: No, I did not check the proofs in detail. Experimental Designs Or Analyses: Yes. I checked the implementation details in Appendix D. Supplementary Material: Appendices C, D, E. Relation To Broader Scientific Literature: I think the impact is very broad and the paper is timely as it relates to LLMs and how they can be used in the context of social choice to represent public opinions. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: Strengths: providing theoretical guarantees next to language output. Weaknesses: not clear to me how to measure the accuracy of the language output. Other Comments Or Suggestions: - In the introduction, "Related Work" paragraph: I invite the authors to be more precise in the statement "Another differentiating aspect of our contribution is the focus on deriving mathematical guarantees." Explicitly mention what kind of guarantees you will be deriving. - In the "General Problem Statement" paragraph I find the notation a bit confusing, as you refer to $\mathcal{U}$ as the set of all slates and $u_i$ as the utility of agent $i$; change the letter for one of the two. 
Questions For Authors: How do you quantify the tolerance and error made by the GPT-4o output? How do you measure the accuracy from the language? Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)'] Ethical Review Concerns: Their application is about social choice, but their study can potentially be applied for unethical reasons. E.g., I would not let a political decision be taken or summarized by an LLM. But for what they are showing, this could very much be happening. However, I am not saying they are promoting that. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their review. > How do you quantify the tolerance and error made by Chat GPT 4o output? How do you measure the accuracy from the language? The general approach taken in this paper is to design a mechanism (Algorithm 1) that is agnostic to the specific implementation of the underlying queries. Crucially, we show that the mechanism continues to satisfy an approximate notion of proportional representation even when the answers to the queries are subject to error. Importantly, these theoretical guarantees hold without the mechanism needing to know the magnitude of the error in the query answers. Turning to the empirical evaluation of our implementation using GPT-4o: we assess the quality of the discriminative query implementation in Appendix D.5. Specifically, we use a dataset in which users have written comments and cast upvotes or downvotes on comments written by others. We provide our discriminative query implementation with access to a user’s own comments and task it with predicting how the user rated comments they voted on. This allows us to measure prediction accuracy against known ground truth. Unfortunately, a direct evaluation of the generative query is more challenging, as it would require knowledge of the optimal statement among a virtually infinite set of possibilities – a task that is infeasible in practice. Finally, we assess the quality of the resulting slates–which represent the ultimate output of our process–through experimental comparison with baselines (see Table 1). Here, we report (i) the average utility that agents derive from their assigned statement in the slate and (ii) the fraction of BJR violations, which serves as a proxy for the number of underrepresented groups. > Their application is about social choice but their study can be potentially applied for unethical reasons. E.g. I would not leave a political decision be taken or summarized by an LLMs. 
But for what they are showing this could be very much happening. However, I am not saying they are promoting that. Thank you for raising this concern. As we mention in our impact statement, "the use of LLMs to rate and generate statements introduces specific risks that must be carefully addressed before deployment in real-world settings," including "bias," (lack of) "transparency," and "manipulation." This is doubly true in the context of political decision-making, and we will expand the impact statement to make this absolutely clear.
Summary: The paper proposes a method for AI assisted democratization, aka using a model to select and aggregate representative candidate statements from social participants. Main contribution of the work: - Adding control for summary length instead of number of representative responses, allowing for direct control on the expected cognitive load to read the summary - Make the system more robust by introducing approximate queries, aka fault tolerant to inaccuracies in utility prediction and popular statement generation - Experiment demonstrates its effectiveness with GPT-4o as a judge on the user utility, outperforming listed baselines Claims And Evidence: Claim: The proposed algorithm effectively improves upon previous AI for democracy algorithms. Evidence: - Optimal user utility on four datasets (Birth Control-Uniform, Birth Control-Imbalanced, Obesity, Bowling Green), as presented in Table 1. - The cost-budget analysis framework allows for more direct control on the perceived cognitive workload for consuming the aggregated statement - Sec 3 proves the near optimality of the algorithm in terms of user utility when the approximated queries are in use Methods And Evaluation Criteria: The paper is well written and easy to follow. Recommending for weak accept for the following reasons: - The proposed method allows for an extra degree of freedom in controllability (length instead of number of inputs) and shows best result on its benchmark. - Regarding the evaluation method: Using total user utility as the main metric in evaluation does not immediately come across as the best way to quantify democracy, but neither was I sure how to make it better -- underrepresented groups are still going to be largely neglected in final result generation and their voices deserve to be heard. Also want to learn how the algorithm addresses moral dilemmas such as would it choose to kill one to save a hundred. 
I understand the design of such metrics would never be perfect but still want to learn the trade offs being considered here and how does the pareto surface look like. - In AI for governance works we need to think very critically with regard to what kind of value coordinates are we essentially subjecting the AI to because at the end of the day that's the core added value from human if one day say 99% of the work are automated by AIs. Theoretical Claims: Been skimming through theorem 3.1-3.4 which ensures near optimality of the proposed algorithm, nothing stands out yet. Experimental Designs Or Analyses: See above Supplementary Material: n/a Relation To Broader Scientific Literature: It's mainly relevant to the following: - AI for governance - AI for democracy - AI for policy making Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review. > Regarding the evaluation method: Using total user utility as the main metric in evaluation does not immediately come across as the best way to quantify democracy, but neither was I sure how to make it better –underrepresented groups are still going to be largely neglected in final result generation and their voices deserve to be heard. We fully agree that giving minorities a voice is a fundamental aspect of democratic decision-making. This is precisely why our algorithm is explicitly designed with proportional representation in mind – if a group of agents constitutes x% of the electorate, it should be able to exert control over x% of the slate. For example, if the budget allows for 200 words and we have 100 agents, even a minority group of 5 agents will control 10 words, thereby ensuring that a brief summary of their perspective will be present in the final slate. To formally capture this ideal, we adopt the axiom of Balanced Justified Representation (BJR). Our theoretical analysis demonstrates that our method satisfies an approximate form of this axiom, even under noisy query implementations (Theorem 3.2). In our experimental evaluation, we measure both total utility but also the frequency of BJR violations (see Table 1, fourth column), which can be interpreted as the fraction of groups whose voices are not adequately reflected in the produced slate, i.e., they are "underrepresented". Across all datasets, PROSE consistently yields fewer BJR violations than the baselines, empirically confirming its ability to give a proportional voice to all groups. > Also want to learn how the algorithm addresses moral dilemmas such as would it choose to kill one to save a hundred. I understand the design of such metrics would never be perfect but still want to learn the trade offs being considered here and how does the pareto surface look like. 
Our algorithm does not impose any specific normative stance on moral dilemmas. Instead, it aims to proportionally summarize the diversity of user opinions, even if these are mutually contradictory. For instance, if the electorate is divided on whether it is ever justifiable to harm one person to save many, the resulting slate might include both: “Harming a person is never ethically justified.” and “Saving many can justify harming an individual.” In this sense, our method captures the full spectrum of views rather than resolving ethical trade-offs by adhering to a particular notion. > In AI for governance works we need to think very critically with regard to what kind of value coordinates are we essentially subjecting the AI to because at the end of the day that's the core added value from human if one day say 99% of the work are automated by AIs. We fully share the concern about normative alignment in AI systems. In our setting, however, PROSE is not designed to encode or enforce specific value judgments; rather, it is a tool for faithfully summarizing the input opinions. In particular, the quality and normative content of the output slate largely depend on the user-provided statements. (Broader concerns regarding bias in the underlying LLMs used for query answering are important and discussed in our impact statement.)
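The proportionality arithmetic in the example above (100 agents, a 200-word budget, a 5-agent minority) can be sketched in a few lines. This is an illustrative helper, not part of the PROSE mechanism itself; the function name is ours:

```python
def group_word_budget(group_size: int, n_agents: int, total_words: int) -> int:
    """Words a cohesive group is entitled to control under proportional
    representation: x% of the electorate controls x% of the word budget."""
    return group_size * total_words // n_agents

# The rebuttal's example: a 200-word budget shared by 100 agents.
# Even a minority group of 5 agents controls 10 words of the slate.
print(group_word_budget(5, 100, 200))  # 10
```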
Perceptually Constrained Precipitation Nowcasting Model
Accept (poster)
Summary: This paper proposes a model called PercpCast for precipitation nowcasting, aiming to predict future rainfall patterns more accurately while also improving how realistic those predictions appear. The authors use a two-stage approach in a single end-to-end framework: first, they generate a "posteriori mean" sequence of future precipitation using a ConvLSTM-based estimator, then refine those estimates through a "rectified flow" module that better aligns the predicted distributions with the ground truth. To address the challenge of longer-horizon forecasting, the authors introduce a frame-sampling strategy that assigns more weight to frames further in the future. The model incorporates an LPIPS-based loss function to enforce perceptual consistency. Experiments on radar datasets (SEVIR and MeteoNet) show that the proposed method outperforms various existing approaches in terms of accuracy metrics and visual quality metrics. Claims And Evidence: The paper's main claims about improved accuracy and perceptual quality are largely supported by experiments on two types of datasets. However, some points could benefit from clearer evidence or explanation. Specifically, it is not entirely clear why the authors stop the gradient transfer between the precipitation estimator and the rectified flow model. Additionally, it is unclear how the authors decided on the specific weight for the LPIPS loss. Methods And Evaluation Criteria: The authors use recognized datasets (SEVIR, MeteoNet) and metrics (CSI, HSS, MSE, SSIM, LPIPS) that directly relate to precipitation forecasting and capture both accuracy and perceptual realism. Theoretical Claims: In the appendix, the authors include a theoretical derivation that connects the precipitation nowcasting objective to an optimal transport framework. The derivation appears logically consistent with prior work on rectified flows. 
Experimental Designs Or Analyses: The experiments use known radar datasets and standard precipitation metrics, aligning with typical nowcasting research. The train/validation/test splits are standard, and the comparisons with multiple baselines are appropriate. However, ablation studies on the LPIPS loss weight or frame sampling would further validate the design choices. Supplementary Material: I reviewed the supplementary appendix. It includes a theoretical derivation of the flow-based approach and several ablation results. For example, varying the scale factor (K) on SEVIR, testing 1-rectified flow, and comparing precipitation estimators (like SimVP) on SEVIR. There are also additional visual results on both SEVIR and MeteoNet showing comparisons with different baseline models. Relation To Broader Scientific Literature: The paper extends ongoing research in precipitation nowcasting, which traditionally separates into deterministic approaches that focus on reducing mean-squared error, and probabilistic approaches that aim for more realistic detail. By combining a precipitation estimator with a rectified flow model, this work bridges both views: it maintains the long-term accuracy of deterministic models while incorporating the realistic detail of generative methods. The introduction of a frame-sampling strategy also connects to broader ideas in temporal modeling tasks. Essential References Not Discussed: No additional foundational works appear to be missing. Other Strengths And Weaknesses: Strength: PercpCast integrates a precipitation estimator with a rectified flow model to achieve forecasts that are both accurate and visually realistic. The model's frame-sampling strategy, which emphasizes distant frames, makes it particularly effective for long-horizon predictions. The proposed work is supported by thorough evaluations on established public datasets and comparisons with multiple baselines. 
Additionally, the paper combines theoretical explanations with visual demonstrations, providing a well-rounded view of its strengths. Weakness: One notable weakness is the lack of clarity regarding why gradient transfer is stopped between the precipitation estimator and the rectified flow model. The paper does not clearly explain how this decision aligns with the assumption that the predicted and true frames are independent, which leaves an important theoretical justification underexplored. Additionally, the paper omits details about the hardware specifications used during training, making it challenging to assess the method's computational requirements compared to simpler baselines. Other Comments Or Suggestions: Suggestions are addressed in the "Questions for Authors" section. Questions For Authors: 1. The authors often label the ConvLSTM's output as a "posteriori mean sequence", but do not clearly explain why it represents the average of future rainfall. A brief note on how minimizing MSE leads the model to predict an "average" outcome would make this point clearer. 2. In Section 5.3, the authors state "During end-to-end training, the gradient transfer between the precipitation estimator and the rectified flow model is stopped....". Could you clarify why this is done and how it aligns with the assumption that the predicted and true frames are independent, given the historical data? 3. Please include details about the hardware specifications on which the model was trained. It would help readers compare the resource requirements of the proposed method with simpler baselines. 4. In Figure 4, PercpCast appears to overestimate precipitation in certain areas (indicated by the pink colors), compared to the ground-truth maps. Could you clarify what might cause these overestimations? 5. The paper mentions using LPIPS loss to make the predictions look more realistic, but it's not clear why a specific weight was chosen for it in the loss function. 
Did the authors run any experiments to figure out the best value, or was it selected based on intuition? 6. The frame sampling strategy gives different importance to frames depending on how far they are in time. Could the authors share any experimental results or tests that show how this choice affects the model's accuracy, especially for longer predictions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's detailed feedback. We will address their concerns and are eager to engage in a more detailed discussion with the reviewer. ### **Q1.** Thank you for the comment. In precipitation nowcasting, the high uncertainty in short-term evolution means a single historical observation can correspond to multiple future scenarios. ConvLSTM minimizes the mean squared error loss $\mathcal{L}_{pe}=\mathbb{E}\left[\|Y-\hat{Y}^*\|^2\right]$, causing predictions $\hat{Y}^*$ to converge to the conditional expectation $\mathbb{E}[Y|X]$. This produces a posterior mean sequence – a statistical average of all possible future precipitation outcomes under given input conditions. ### **Q2 & W1** In Section 3 and Appendix A, we outlined the gradient-stopping operation. To clarify: Due to diverging learning objectives between the precipitation estimator and the rectified flow model, gradient stopping is applied to prevent the rectified flow model from interfering with the estimator's acquisition of physical motion dynamics. This constraint forces the rectified flow model to actively learn physical consistency (e.g., motion continuity and distribution alignment) directly from input data. Meanwhile, end-to-end training improves model robustness and alleviates suboptimal solutions inherent in two-stage frameworks. Regarding independence assumptions, our approach aligns with Freirich et al. (2021) in its theoretical framework but diverges in the training methodology. Under the condition of stopping gradient propagation from $\hat{Y}^*$ to $\hat{Y}$, the generation of $\hat{Y}$ does not influence $\hat{Y}^*$. This establishes a Markov process: $\hat{Y} \leftarrow \hat{Y}^* \leftarrow X \rightarrow Y$. Consequently, given historical data $X$, $Y$ remains independent of both $\hat{Y}$ and $\hat{Y}^*$, while $\hat{Y}$ depends solely on $\hat{Y}^*$. 
This satisfies the predictive independence assumption with theoretical justification, achieving causal decoupling between the variables.

### **Q3 & W2.** Thank you for the comment. Our model employs mixed-precision training (FP16) on a single NVIDIA A100 80GB GPU, with supporting hardware including an Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz. Resource utilization metrics of our method compared to the ConvLSTM on the SEVIR dataset are summarized in the following table.

| Method | Parameters (M) | GPU Memory (Batch Size=1, GB) | Training Time (hours) | Inference Time (Batch Size=1, seconds) |
|---------------------|---------------:|----------------------:|------------------------:|------------------------:|
| Proposed Model | 55.87 | 4.1 | 16 | 2.47 |
| ConvLSTM | 17.81 | 3.4 | 11 | 1.92 |

### **Q4.** To understand the issue, we can see in Figure 9 that short-term predictions are relatively easier due to closer alignment between predicted and real frame distributions, while long-term predictions suffer significant distribution drift - a discrepancy amplified by frame sampling strategies that prioritize learning long-term variations, thereby compromising short-term prediction accuracy. As shown in Figure 9, the issue can be alleviated by using a moderately small $k$. However, a smaller $k$ may reduce long-term prediction accuracy. Hence, it is a trade-off.

### **Q5.** We present the experimental results of different LPIPS loss weight configurations in Table 4, Table 5, and Figure 6. These demonstrate that incorporating the LPIPS loss effectively suppresses checkerboard artifacts in the rectified flow model. Notably, within a reasonable range (0.5–1), the specific weighting configurations do not significantly affect the experimental outcomes. Detailed results for additional weight configurations will be supplemented as in the table below. 
| LPIPS weight | CSI | HSS | SSIM | LPIPS | MSE |
|------|------:|------:|------:|------:|-------:|
| 0.0 | 0.256 | 0.328 | 0.701 | 0.324 | 0.0102 |
| 0.2 | 0.260 | 0.342 | 0.714 | 0.287 | 0.0098 |
| 0.5 | 0.267 | 0.360 | 0.722 | 0.268 | 0.0092 |
| 0.7 | 0.265 | 0.357 | 0.718 | 0.265 | 0.0095 |
| 1.0 | 0.265 | 0.358 | 0.711 | 0.272 | 0.0094 |

---

### **Q6.** The experimental results and visualizations (Tables 4 and 6; Figures 5 and 7–9) show our exponential sampling strategy, where $K$ determines the range of variation of the sampling probability. Figure 5 shows the $k$–probability relationship, while Table 4 evaluates different $k$ settings. Figure 7 confirms that a larger $k$ improves distant-frame prediction accuracy. Figure 8 compares linear and exponential sampling. Experiments show that with more iterations, moderate increases in distant-frame sampling probability improve long-term prediction. However, over-amplification undermines learning of other frames, causing performance drops.
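The exponential sampling strategy discussed in this rebuttal can be sketched as follows. This is a minimal NumPy illustration; we assume the sampling probability of the frame at lead time $t$ grows as $e^{Kt}$, which matches the qualitative behavior described for Figure 5 but may differ from the exact parameterization used in the paper:

```python
import numpy as np

def frame_sampling_probs(n_frames: int, k: float) -> np.ndarray:
    """Per-frame sampling probabilities that grow exponentially with lead
    time, so distant frames are trained on more often; k = 0 is uniform."""
    weights = np.exp(k * np.arange(n_frames))
    return weights / weights.sum()

probs = frame_sampling_probs(36, 0.05)  # 36 output frames, K = 0.05 as in the ablation
assert probs[-1] > probs[0]             # distant frames get higher probability
assert np.isclose(probs.sum(), 1.0)     # valid distribution
```

Larger `k` shifts probability mass toward the end of the horizon, which is the trade-off discussed under Q4: better long-term accuracy at some cost to short-term frames.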
Summary: This work proposes PercpCast, integrating both a Precipitation Estimator (video prediction model) and a Rectified Flow module. The Rectified Flow module learns the transmission from the distribution of the posterior mean predicted by the Precipitation Estimator to the distribution of the ground truth. Further, LPIPS regularization is introduced in addition to the two typical loss terms for the Precipitation Estimator and Rectified Flow modules. Besides, temperature-distance weighted scheduling is implemented to ensure the model focuses on the later frames. With all these techniques, PercpCast showcases its effectiveness in the outlined evaluation setting across two radar echo datasets: SEVIR and MeteoNet. Claims And Evidence: Most claims are supported by the quantitative results in Tables 2 and 3. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for this problem. To verify the generalization of PercpCast, its performance is evaluated across two datasets and compared with several SOTA methods. Theoretical Claims: The use of Rectified Flow is well grounded by ample previous works. There is no concern about the attempt here. However, I am quite confused about what the authors attempt to show in Appendix A by proving $\mathbb{E}[\|\hat{Z}_1 - \hat{Y}^{*}\|^2] \leq \mathbb{E}[\|Y - \hat{Y}^{*}\|^2]$. The said MSE is between the final prediction and the **precipitation estimation's output**, not the ground truth. Do the authors intend to use this to show that Equation 20 is small enough so that Equation 2 can also be satisfied? Experimental Designs Or Analyses: The experimental design is generally fine, except for a minor problem: - LPIPS is chosen to be one of the evaluation metrics. Meanwhile, it is introduced as a regularization term during the training of PercpCast. This might have fairness issues compared with other baselines. The authors can consider using FVD or pooled CSI as an alternative to LPIPS, like the PreDiff paper. 
Supplementary Material: The appendix is read and reviewed. Relation To Broader Scientific Literature: This work adopts a common strategy of using a precipitation estimator and fine-tuning module for precipitation nowcasting like DiffCast and CasCast. Different from previous works like DiffCast and PreDiff which utilize diffusion models in the second stage, the use of the Rectified Flow Module presents a similar but novel idea to model the difference in distribution. I believe this will very much benefit future studies. Essential References Not Discussed: Most essential references are discussed. It would be better to compare a few more diffusion-based models with PercpCast, such as PreDiff and CasCast (Gong et al., ICML 2024), since they have a closer structure to PercpCast. Other Strengths And Weaknesses: This section summarizes the strengths and weaknesses discussed above. **Strengths:** - Using Rectified Flow to learn the distribution difference between the posterior mean and the ground truth is quite a new idea in this task. - It showcases remarkable performance compared with SOTA in terms of both perceptual quality and accuracy. **Weaknesses:** - The current evaluation scheme (LPIPS) might be unfair. - A few minor but confusing parts in the appendix. Overall, this paper delivers an interesting solution to precipitation nowcasting. Judging from the good performance results, I am inclined to accept the paper. --- ### Update after rebuttal The authors mostly addressed my concerns and adopted my suggestions. I will keep the recommendation. Other Comments Or Suggestions: - Some information is not very consistent. In Table 1, the output sequence length is shown to be 49, but in the main text it is described as 36. Does that mean the PercpCast model also reconstructs the input besides forecasting the future? - The writing in the appendix is quite messy, especially in Appendix C. Please proofread and fix. 
- A lot of previous works (PreDiff, DiffCast, CasCast, etc.) also report a pooled CSI with different thresholds to evaluate the “skillfulness” of the forecasts. Observing the tables in the papers, realistic and clear forecasts tend to have higher pooled CSI. This will also be a good indicator to replace LPIPS. Questions For Authors: Questions are asked in the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for the reviewer's acknowledgment of our work and their detailed feedback, which will help us refine our research.

### **Theoretical Claims.** Equation (2) can be solved through either Equation (20) or our proposed method, which have different error bounds. Freirich et al. showed that the theoretical optimal solution for Equation (20) is 2MMSE. Here we show that the error bound of our method is smaller than 2MMSE by proving $\mathbb{E}\left[\|\hat{Z}_1-\hat{Y}^*\|^2\right] \leq \mathbb{E}\left[\|Y-\hat{Y}^*\|^2\right]$ in Appendix A. Since $\hat{Z}_1$ is the final output, under the independence assumptions we can reach the conclusion using the following equation: $\begin{aligned} \mathbb{E}\left[\|Y-\hat{Z}_1\|^2\right] &= \mathbb{E}\left[\|Y-\hat{Y}^*\|^2\right]+\mathbb{E}\left[\|\hat{Z}_1-\hat{Y}^*\|^2\right] \\ &\leq 2\,\mathbb{E}\left[\|Y-\hat{Y}^*\|^2\right]=2\,\mathrm{MMSE}\end{aligned}$

### **W1 & S3.** Thank you for the advice. We incorporated additional experiments with CasCast and report the pooled CSI prediction results in the following table. The experimental results further validate the effectiveness of our method. The results will be updated in the revised manuscript. 
| Method | SEVIR | | | MeteoNet | | |
|----------------|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| | Pool1 | Pool4 | Pool16 | Pool1 | Pool4 | Pool16 |
| MAU | 0.241 | 0.268 | 0.285 | 0.197 | 0.231 | 0.260 |
| ConvLSTM | 0.240 | 0.266 | 0.292 | 0.192 | 0.236 | 0.264 |
| SimVP | 0.241 | 0.263 | 0.283 | 0.165 | 0.196 | 0.214 |
| Earthformer | 0.214 | 0.254 | 0.265 | 0.158 | 0.189 | 0.207 |
| Earthfarseer | 0.209 | 0.252 | 0.267 | 0.161 | 0.193 | 0.212 |
| STRPM | 0.213 | 0.236 | 0.271 | 0.154 | 0.190 | 0.203 |
| CasCast | 0.238 | 0.262 | 0.289 | 0.183 | 0.207 | 0.231 |
| DiffCast | 0.244 | 0.270 | 0.294 | 0.199 | 0.235 | 0.265 |
| PercpCast | **0.267** | **0.287** | **0.299** | **0.209** | **0.240** | **0.268** |

### **W2 & S2.** Thanks for your careful review. We have reviewed Appendix C and corrected the experimental results in Tables 5–6, which will be updated as follows:

| (Lpe, Lrf, Llpips) | CSI | HSS | SSIM | LPIPS | MSE |
|---------------------|------:|------:|------:|------:|-------:|
| (0, 1, 0.5) | 0.044 | 0.312 | 0.311 | 0.369 | 0.0217 |
| (1, 0, 0.5) | 0.240 | 0.307 | 0.663 | 0.233 | 0.0085 |
| (1, 1, 0.0) | 0.256 | 0.328 | 0.701 | 0.324 | 0.0102 |
| (2, 1, 0.5) | 0.266 | 0.360 | 0.717 | 0.269 | 0.0091 |
| (1, 2, 0.5) | 0.264 | 0.355 | 0.712 | 0.270 | 0.0093 |
| (1, 1, 0.5) | 0.267 | 0.360 | 0.722 | 0.268 | 0.0092 |
| (1, 1, 1.0) | 0.265 | 0.358 | 0.711 | 0.272 | 0.0094 |

| $K$ | CSI | HSS | SSIM | LPIPS |
|-------|------:|------:|------:|-------:|
| 0.00 | 0.262 | 0.348 | 0.703 | 0.278 |
| 0.02 | 0.263 | 0.343 | 0.709 | 0.276 |
| 0.05 | 0.267 | 0.360 | 0.722 | 0.268 |
| 0.07 | 0.266 | 0.352 | 0.716 | 0.265 |
| 0.1 | 0.266 | 0.346 | 0.705 | 0.280 |
| 0.2 | 0.250 | 0.327 | 0.682 | 0.292 |

### **S1.** Thank you for identifying this issue. 
The precipitation estimator reconstructs the input 13 frames and predicts 36 future frames, resulting in a total sequence length of 49. To eliminate ambiguity, we will revise Table 1 as follows:

| Dataset | Size | | | Seq Len | | Spatial Resolution |
|-----------|----------|--------|--------|-------|-----------|---------|
| | Train | Valid | Test | In | Out | H × W |
| SEVIR | 13020 | 1000 | 2000 | 13 | 36 | 128 × 128 |
| MeteoNet | 8640 | 500 | 1500 | 13 | 36 | 128 × 128 |
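For reference, one common way to compute the pooled CSI reported earlier in this rebuttal is to binarize both fields at a threshold, max-pool them, and then form CSI = hits / (hits + misses + false alarms). The sketch below is illustrative only; implementation details (pool stride, threshold handling) may differ from the exact evaluation code:

```python
import numpy as np

def max_pool(x: np.ndarray, s: int) -> np.ndarray:
    """Non-overlapping s x s max pooling (H and W assumed divisible by s)."""
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).max(axis=(1, 3))

def pooled_csi(pred: np.ndarray, truth: np.ndarray, thr: float, s: int = 1) -> float:
    """CSI = hits / (hits + misses + false alarms) after binarizing at `thr`
    and max-pooling with window `s` (s = 1 recovers the plain CSI)."""
    p = max_pool(pred >= thr, s)
    t = max_pool(truth >= thr, s)
    hits = np.sum(p & t)
    misses = np.sum(~p & t)
    false_alarms = np.sum(p & ~t)
    denom = hits + misses + false_alarms
    return float(hits / denom) if denom else 1.0

# Toy 4x4 fields binarized at threshold 0.5: the prediction is shifted one
# pixel relative to the truth, so plain CSI is 0 but pooled CSI forgives it.
pred = np.array([[0.6, 0.0, 0.0, 0.0]] * 4)
truth = np.array([[0.0, 0.7, 0.0, 0.0]] * 4)
print(pooled_csi(pred, truth, 0.5, s=1))  # 0.0: no exact-pixel overlap
print(pooled_csi(pred, truth, 0.5, s=2))  # 1.0: near-misses count after pooling
```

This is why, as the reviewer notes, realistic and sharp forecasts tend to score higher pooled CSI: small spatial misplacements are tolerated within each pooling window.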
Summary: This article proposes a new precipitation forecasting model, PercpCast, which introduces perceptual constraints into precipitation forecasting tasks. The method first uses ConvLSTM as a precipitation estimator to obtain the posterior mean sequence of future frames. Then, a module based on "rectified flow" is used to adjust the distribution of the posterior mean sequence toward the distribution of the real target frames. Finally, a distance-weighted frame sampling strategy is used to further enhance attention to future frames. The experiments were thoroughly validated on two public datasets, SEVIR and MeteoNet, and the results showed that the method exhibits clear advantages in perceptual quality (LPIPS, SSIM) and event detection metrics (CSI, HSS) while maintaining a low mean squared error (MSE). Claims And Evidence: Yes. The problem being solved is the inaccurate precipitation placement ("landing point") of GAN- and diffusion-based models, that is, the inability to balance CSI and the image quality index LPIPS. Methods And Evaluation Criteria: Yes, they do. Theoretical Claims: The paper states that the images are rescaled to the range [0, 1] and binarized. Are you sure about "binarized"? Because the forecast is based on values in the range 0-255, binarization does not look right. Experimental Designs Or Analyses: 1. In the paper's claimed innovation, it is mentioned that current refinement approaches based on GANs and diffusion involve random sampling, which cannot balance CSI and the image quality metric LPIPS (poor CSI, good LPIPS). Under the perceptual constraint of LPIPS, the second-stage flow matching can follow a deterministic path toward the target distribution, reducing the error in high-echo placement. The obvious difference between this method and diffusion is that in the second stage, flow matching is used instead of diffusion to refine the model; however, there is no ablation comparing diffusion against your Rectified Flow Model when using ConvLSTM as the precipitation estimator. This makes it difficult to verify the advantages of flow matching in balancing CSI and LPIPS, and it is unclear whether it is temperature weighting, LPIPS loss, or flow matching performance that leads to the advantages in balancing CSI and LPIPS. 2. The quantitative comparisons use MAU and Earthfarseer; there is no visual comparison with these two models in the visualizations (Figure 3 and Figure 4, as well as the visualizations in the supplementary materials). Supplementary Material: Yes, it contains analysis and proof, datasets, more experimental analysis, and more precipitation cases. Relation To Broader Scientific Literature: The problem being solved is the inaccurate precipitation placement of GAN- and diffusion-based models, that is, the inability to balance CSI and the image quality index LPIPS. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: The innovation lies in the use of flow matching in the second stage, which assigns greater weights to frames with longer lead times as the forecast progresses, resulting in better forecast performance after 1 hour compared to other models. The motivation is clear. Weakness: Some comparison methods are relatively old, and comparisons with some newer typical SOTA methods are lacking. Other Comments Or Suggestions: Fig. 4 with 'preparation study (in the blue box)' appears to be a black box, not a blue box. Questions For Authors: See the above questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the reviewer's valuable suggestions. We will try to address the reviewer's concerns and are eager to engage in a more detailed discussion with the reviewer. ### **Theoretical Claims**. Thank you for pointing out this issue. We perform normalization (not binarization) to rescale images to [0, 1]: SEVIR (0–255) is divided by 255, and MeteoNet (0–70) by 70. We will replace 'binarized' with 'normalized' in the revised manuscript. ### **Experimental Designs Or Analyses 1** We would like to clarify that unlike diffusion models reconstructing precipitation predictions via conditional integration, our method employs end-to-end learning to directly optimize the posterior mean sequence distribution from the precipitation estimator. To enhance this framework, we introduce two key components: temperature-weighted scaling and LPIPS perceptual loss. Comprehensive ablation studies demonstrate: (1) LPIPS regularization successfully suppresses checkerboard artifacts in rectified flow, enhancing visual coherence (Tables 4 & 5); (2) Temperature weighting significantly improves long-term frame prediction accuracy (Tables 4 & 6); (3) The rectified flow module achieves exceptional modeling of data distributions, generating meteorologically plausible precipitation patterns that effectively address issues such as high-echo attenuation and missing details (Table 5, Figures 4 & 6). To further compare the performance of diffusion models and Rectified Flow in precipitation prediction, we conducted experiments by replacing the Rectified Flow module with a diffusion model. Specifically, due to the instability caused by adapting end-to-end training to diffusion models, we first constructed a pre-trained precipitation estimator. While keeping other configurations unchanged, we then utilized noise and the predicted frames as inputs for diffusion modeling during the frame sampling process. Additionally, we employed CasCast as a baseline comparison. 
CasCast is a non-end-to-end precipitation prediction framework where the first stage originally uses a Vision Transformer (ViT) for precipitation estimation, followed by a second stage that applies diffusion models for distribution refinement. In our implementation, we replaced CasCast's ViT-based precipitation estimator with a ConvLSTM model. The experimental results, presented in the following tables, further validate the effectiveness of the rectified flow model. The results will be supplemented in the revised manuscript.

| Method | SEVIR | | | | | MeteoNet | | | | |
|-----------|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|
| | CSI | HSS | SSIM | MSE | LPIPS | CSI | HSS | SSIM | MSE | LPIPS |
| with Diffusion | 0.223 | 0.288 | 0.697 | 0.0135 | 0.297 | 0.177 | 0.269 | 0.797 | 0.0065 | 0.268 |
| CasCast | 0.238 | 0.301 | 0.709 | 0.0120 | 0.285 | 0.183 | 0.274 | 0.810 | 0.0062 | 0.252 |
| Proposed Model | 0.267 | 0.360 | 0.722 | 0.0092 | 0.268 | 0.209 | 0.305 | 0.820 | 0.0049 | 0.237 |

| Method | SEVIR | | | | | MeteoNet | | | | |
|-----------|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|
| | CSI74 | CSI133 | CSI160 | CSI181 | CSI219 | CSI16 | CSI24 | CSI32 | CSI36 | CSI40 |
| with Diffusion | 0.437 | 0.185 | 0.075 | 0.054 | 0.021 | 0.299 | 0.215 | 0.098 | 0.035 | 0.022 |
| CasCast | 0.440 | 0.193 | 0.105 | 0.067 | 0.023 | 0.315 | 0.228 | 0.108 | 0.043 | 0.020 |
| Proposed Model | 0.496 | 0.251 | 0.134 | 0.099 | 0.037 | 0.354 | 0.276 | 0.132 | 0.068 | 0.027 |

### **Experimental Designs Or Analyses 2** These materials will be supplemented in the revised manuscript. ### **W** Our experiments have included the SOTA methods DiffCast (CVPR 2024) and Earthfarseer (AAAI 2024). Following your suggestion, we also compare our method with CasCast (ICML 2024), as shown in the tables above, and the results will be updated in the revised manuscript. This ensures the necessary comparisons with 2024 conference benchmarks. 
[1] Yu, D., Li, X., Ye, Y., Zhang, B., Luo, C., Dai, K., Wang, R., and Chen, X. DiffCast: A unified framework via residual diffusion for precipitation nowcasting. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.

[2] Wu, H., Liang, Y., Xiong, W., Zhou, Z., Huang, W., Wang, S., and Wang, K. Earthfarseer: Versatile spatio-temporal dynamical systems modeling in one model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 15906–15914, 2024.

[3] Gong, J., Bai, L., Ye, P., Xu, W., Liu, N., Dai, J., Yang, X., and Ouyang, W. CasCast: Skillful high-resolution precipitation nowcasting via cascaded modelling. In International Conference on Machine Learning, pp. 15809–15822. PMLR, 2024.

### **S**

Thank you for identifying this inconsistency. We will correct the description from "blue box" to "black box" in the revised manuscript.
Summary: This paper proposes a precipitation forecasting model based on perceptual constraints. Its main contributions include: proposing a new perspective on the precipitation forecasting problem, namely converting it into a posterior mean squared error problem under specific constraints; designing a model architecture based on a precipitation estimator and rectified flow to predict precipitation sequences while maintaining their realism and continuity; and proposing a weighted sampling strategy for long-distance frames to improve the model's prediction ability on long-term sequences. Experimental results show that the model achieves better prediction accuracy than existing state-of-the-art models. Claims And Evidence: The precipitation forecasting model proposed in this paper is based on a new perspective, and its effectiveness is demonstrated through experiments. Methods And Evaluation Criteria: The methods and evaluation criteria proposed in this paper are meaningful for the current precipitation forecasting problem. The paper proposes a new perspective to address the problems existing methods face when predicting long sequences, and adopts appropriate evaluation metrics to measure the accuracy and perceptual quality of the model. Theoretical Claims: This paper argues that introducing perceptual constraints can improve the performance of current precipitation forecasting models. Specifically, the model transforms the precipitation forecasting problem into a posterior mean squared error problem and implements perceptual constraints by constructing a transport between distributions. The experimental results show that its performance is better than that of the current state-of-the-art models. Experimental Designs Or Analyses: From the paper, it is evident that the authors considered multiple factors in their experimental design and analysis, conducting detailed comparisons and evaluations. 
They selected several representative baseline models for comparison and tested the model performance under different parameter settings. Additionally, the authors provided a thorough explanation of the hyperparameter selection process and presented concrete experimental results, including specific data and figures. Therefore, I believe the experimental design and analysis in this paper are sound. Supplementary Material: The supplementary material of this paper provides an introduction to the dataset and more experimental analysis. Relation To Broader Scientific Literature: The main contribution of this paper is the proposal of a perceptually constrained precipitation prediction model, which improves prediction accuracy and image quality by introducing perceptual constraints. This is different from the current precipitation prediction methods that only focus on minimizing the mean square error (MSE). This model addresses the limitations of existing methods by reconstructing the precipitation prediction problem and using perceptual constraints. The model also uses a sparse sampling strategy based on the attention mechanism and a residual flow structure to enhance the ability to focus on distant frames and capture future changes. These methods have better performance and stability than existing precipitation prediction methods. Therefore, the research results of this paper are meaningful for improving related research in the field of precipitation prediction. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: A new perceptually constrained precipitation prediction model is proposed, which can effectively improve the prediction accuracy and image quality. The residual flow structure and sparse sampling strategy are used to enhance the ability to focus on distant frames and capture future changes. Experimental verification is carried out on two public datasets, and better performance and stability are achieved than existing methods. 
Weaknesses: The prediction effect in some extreme cases has not been analyzed in detail and needs further discussion. The experimental results do not provide detailed parameter settings and hyperparameter adjustment processes, making it difficult to reproduce the experimental results. Other Comments Or Suggestions: N/A Questions For Authors: Please refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our ideas and theory.

### **W1**

Thank you for your question. Owing to the introduction of the perceptual constraint, our model has the advantage of accurately preserving high-value regions in the predicted images, which indicate extreme weather storms. As shown in Figures 4 and 13, our model accurately predicts the evolution of the heavy precipitation band (above 160) and gives reliable intensity estimates. Despite this advantage, our model may still fail to predict sudden convective storms that develop precipitation abruptly where no storm signals appear at the beginning. Improving such predictions requires incorporating atmospheric variables, including temperature, humidity, and wind patterns during precipitation formation, which is a key objective for our subsequent research. We will add the necessary discussion in our final version.

### **W2**

We have elaborated on the impact of the weight configurations for the loss functions and distance sampling (Tables 4–6) in both the experiments and the appendices. For other hyperparameters (e.g., learning rates), we identified appropriate values within the range of 1e-3 to 1e-5 and documented them in the main text (Section 5.1, Implementation Details). All hyperparameters of the model have been fully specified, and the experimental code will be made publicly available on a community platform shortly.
Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency
Accept (spotlight poster)
Summary: The paper introduces three new equivariant neural networks (adapted from three non-equivariant models: ResMLP, ViT and ConvNeXt) for the purpose of showing that the equivariant models maintain a comparable number of floating-point operations (FLOPs) per parameter versus their non-equivariant counterparts. The main idea behind the authors' work is that they replace linear layers with block diagonal equivariant linear layers using irreps of the group C_2 (the flopping group), which ultimately parameterise feature spaces in terms of mirror-symmetric and mirror-antisymmetric features. They describe how to obtain invariant and flopping equivariant features using a patch embedding layer, how to implement the non-linear equivariant layers, and how to adapt self-attention to obtain equivariance. The authors evaluate their models on the ImageNet-1K dataset and compare them based on three measures: number of parameters, FLOPs and throughput. Claims And Evidence: The claims appear to be well-founded, with well-explained supporting evidence. However, as stated below, I question the extent of the contribution made, given that the analysis that appears only considers equivariance to the group C_2. Methods And Evaluation Criteria: Given the question that the authors posed throughout, I thought that the proposed methods and benchmarks made sense for the problem at hand. Theoretical Claims: I checked the correctness of the derivation of the linear layers in Section 3.1, and deemed it to be correct. I also checked the equivariant version of the pointwise activation in (3), and confirmed that it is correct. Experimental Designs Or Analyses: I read about the design of all of the experiments, and found them to be sound, with fair comparisons being made. In particular, I liked the explanation behind the authors' decision to consider the three measures that they did, and thought that their argument was well-considered. 
Supplementary Material: I read the entirety of the supplementary material and found it, in the main, to be complementary to the main paper. However, I have two main issues with it. ~~1) I thought that the version of Schur's Lemma (A.2) provided by the authors was a bit overkill, given, as they state, that they are "satisfied with finite-dimensional real vector spaces in this paper".~~ 2) I strongly felt that the discussion in lines 676-689, 1st column, needed to be given instead in the main paper and discussed further, given that the authors only consider C_2 equivariance in the main paper. I will add more comments on this in my answer to a later question. Relation To Broader Scientific Literature: I found that the authors set their paper well in the context of the broader scientific literature, in particular, with a well-framed related work section. They set their work in the context of geometric deep learning (Bronstein et al. 2021), considered the numerous works behind steerable equivariant ConvNets, and highlighted the differences behind their ViT networks and those used by Kundu and Kondor (2024). I also enjoyed the discussion behind equivariant networks for other input than images, although I note that the group considered in these related works is different from the one under consideration in this paper (there is, however, no issue with this). Essential References Not Discussed: No, I didn't pick up any important ones. Other Strengths And Weaknesses: Strengths: I thought that the paper was well structured and easy to read, and I commend the authors for achieving this. The question of achieving scalable equivariant models that do not dramatically increase computational costs is an important one that needs further study in the equivariant community. The authors clearly demonstrate that the equivariant models that they have considered achieve a comparable number of floating-point operations per parameter to the baseline ordinary neural networks. 
I commend the authors for providing a clear example where equivariant models can be scaled whilst maintaining their usefulness. Weaknesses: Despite this, I question the depth of the originality behind these results, given that the paper targets only one type of equivariance whose symmetries, beyond perhaps the data studied (image data), are maybe not so important - this is my biggest concern behind this paper. However, I am open to the authors addressing this point of mine in their rebuttal if they can provide me with convincing evidence to the contrary. Complementary to my previous point, I think that the commentary given in lines 676-689, 1st column, needs to be given instead in the main paper and discussed in much more detail. In particular, I think that the authors need to discuss whether they think that the conclusion that they give in this paper (about the comparability of FLOPs per parameter) would extend to groups with the more "difficult" irreps. If what they have presented here is only a special case, then that would, in my view, reduce the significance of the present contribution. Finally, the way that the equivariant linear layers are constructed use an equal number of invariant and flopping equivariant features: as the authors themselves say in lines 181-184, 2nd column, it is not obvious that this is optimal. What happens if they are not equal? Does the entire analysis/construction of the models break down? It seems to me that it might do, but again, I am open to the authors clarifying this for me. Other Comments Or Suggestions: I have a big gripe about the sentence given in lines 46-48, 2nd column: "In a nutshell, our case is that equivariant neural networks are simple models that scale well." For me, this claim as stated is far beyond what is presented in this paper: indeed, there are many more equivariant neural networks that exist for far many more groups beyond C_2 in the literature. 
I would strongly recommend that the authors adjust this sentence in a camera-ready version of this paper should it be accepted. I think that the Patch embedding layer could be described better - it is not entirely clear from Figure 3 how this works. Could the authors provide a clear example of this procedure, perhaps even in the appendix? This is a small point but I would personally move Figures 1 and 2 to the second page - it looks a bit messy on the first page. Questions For Authors: I have provided the questions that I would need answering to be able to adjust my score in the weaknesses section. To repeat them here, they focus on: 1) Could the authors provide other examples where their C_2 equivariant construction might be useful beyond the image data they have experimented with? 2) Do the authors think that the FLOPs results that they have presented here for C_2 equivariant models extend to groups with more difficult irreps? I am concerned, given the way that the authors have presented their work, that the results only work for this group (and this situation) as a special case; however, as stated above, I am open to being told otherwise. 3) What happens to the analysis provided in the paper if the number of invariant and flopping equivariant features is not equal? Can their construction still be used, or not? If not, how likely would one use an equal number of these features? EDIT: Following the authors' rebuttal, I have decided to raise my score from 3 to 4. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review. We address their concerns here, which will sharpen the paper. > [...] the version of Schur's Lemma (A.2) provided by the authors was a bit overkill [...] We are unaware of simpler versions of Schur’s Lemma for real irreps of finite groups than the one given. If the reviewer has a simpler form in mind, we welcome any suggestions. > Q1: Could the authors provide other examples where their C_2 equivariant construction might be useful [...]? Since $C_2$ is the simplest possible symmetry group, it appears in many situations. Examples include - 3D tasks: Many natural and man-made objects (and parts of objects) are equally likely to appear mirrored. Thus, tasks involving these are often mirror equivariant/invariant. This includes data such as meshes, point clouds, or 3D images. - Many games have mirror symmetry. E.g., for training neural networks to play Go, it is common to apply reflection augmentation. - Mirror equivariance/invariance is also common in 1D data. E.g., measurements such as soil samples along a path should be classified the same regardless of the path's direction. We believe that experiments on images are sufficient to demonstrate the idea and that the new architectures are a valuable contribution to the literature on vision models. > Q2 [and earlier comments]: Do the authors think that the FLOPs results that they have presented here for C_2 equivariant models extend to groups with more difficult irreps? The paper makes two different points. The first point is that the FLOPs-per-parameter ratio is exactly the same for non-equivariant and equivariant linear layers with the same input and output feature dimensions. This only holds when the irreps are one-dimensional, as mentioned in Appendix A. We will go through the main paper and make it clearer there. 
For instance, as pointed out by Reviewer s6u9, there is a formulation in Section 5 where “equivariant networks can be designed to have the same number of FLOPs…” will be changed to “flopping equivariant networks…”. We do not think we have over-claimed the contribution since the paper's title already clarifies that we mainly consider the flopping symmetry in this work. The second point is that hard-coding equivariance results in computational savings, due to block-diagonalization of the linear layers. This argument generalizes straightforwardly to any other groups due to Schur’s lemma. In particular, for groups with more than two irreps, layers mapping between feature spaces transforming according to (multiples of the isotypical decomposition of) the regular representation will decompose into more blocks, resulting in larger computational savings. To rephrase slightly, the FLOPs-per-feature is much reduced in linear layers with hard-coded equivariance, which generalises to all finite groups. We think the Reviewer’s suggestion to include a discussion around the generalisation to other groups in the main paper is a good idea, and we will do so. > Q3: What happens to the analysis provided in the paper if the number of invariant and flopping equivariant features is not equal? The block decomposition still works if we do not have an equal amount of invariant and (-1)-equivariant features. The 0-blocks in equation (2) will be smaller and so the computational savings per feature will be smaller. The number of FLOPs per parameter will still be the same. We have included an extra experiment in the rebuttal to Reviewer s6u9 (Q1) to test whether having more invariant features is helpful for the classification task. Having an equal amount corresponds to having features that transform according to the regular representation of $C_2$, but in a diagonal basis (the isotypical decomposition). Weiler & Cesa (2019) found that the regular representation is generally a good choice. 
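To make the FLOPs accounting in this answer concrete, here is a small standalone sketch (our illustration for this response, not code from the paper; `dense_cost` and `block_diag_cost` are hypothetical helpers) counting parameters and multiply-accumulates for a dense linear layer versus a block-diagonal layer with an equal split of invariant and (-1)-equivariant features, as in equation (2):

```python
# Hypothetical helpers illustrating the FLOPs-per-parameter argument.
# One multiply-accumulate (MAC) is counted per weight applied to one input vector.
def dense_cost(d):
    """Ordinary d x d linear layer."""
    params = d * d
    macs = d * d
    return params, macs

def block_diag_cost(d):
    """Equal split into invariant and (-1)-equivariant features:
    two independent (d/2) x (d/2) blocks; the off-diagonal blocks vanish."""
    half = d // 2
    params = 2 * half * half
    macs = 2 * half * half
    return params, macs

d = 768  # e.g. a ViT-B-sized embedding dimension
dense_p, dense_m = dense_cost(d)
block_p, block_m = block_diag_cost(d)
assert dense_m / dense_p == block_m / block_p == 1.0  # same FLOPs per parameter
assert block_m / dense_m == 0.5  # total FLOPs halved at fixed feature width
```

With a group having k one-dimensional irreps, the same accounting gives k blocks of size d/k, so the total saving grows to a factor of k while the FLOPs-per-parameter ratio stays at one MAC per weight.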
> I have a big gripe about [...] "In a nutshell, our case is that equivariant neural networks are simple models that scale well." We thank the reviewer for pointing this out. Indeed, the intended meaning is not that all equivariant neural networks are simple and scale well. We will revise the sentence: "In a nutshell, our case is that equivariant neural networks *can be* simple models that scale well.” > I think that the Patch embedding layer could be described better [...] Consider the case of 1D-signals: We have a signal [a, b, c, d] and its flopped version is [d, c, b, a]. If we use a symmetric filter [v, v] with stride 2, we get [v(a + b), v(c + d)] and [v(d + c), v(b + a)] respectively. Thus, when the input is flopped, the feature map is flopped too. If we use an antisymmetric filter [w, -w], we get [w(a - b), w(c - d)] and [w(d - c), w(b - a)] respectively. Thus, when the input is flopped, the feature map is flopped and changes sign. In the PatchEmbed-layer we use equally many symmetric and antisymmetric filters to obtain invariant and (-1)-equivariant features. As mentioned above, this is a common choice. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal. I have read all of the reviews and corresponding rebuttals. I retract my comment about their statement of Schur's Lemma - I must have misread it in my original reading - and am happy with what they have presented in their paper. I would particularly like the authors to include the additional examples in the main text - this will help demonstrate the potential impact of their work. I am glad the authors will include a discussion around the generalisation to other groups in the main paper - again, for potential impact, I think that this is important. The authors wrote: "We do not think we have over-claimed the contribution since the paper's title already clarifies that we mainly consider the flopping symmetry in this work." 
I agree that although in principle they have not overclaimed the contribution, people easily forget the title, and so I think it is important, especially in making any type of claim, that the authors are precise in what they are saying. However, I do note that they are going to edit the "gripe" I picked up and the one that Reviewer s6u9 picked up, which I welcome. In summary, I have decided that, so long as the commitments to making the proposed changes appear in a revised version of this paper, I am happy to raise my score from 3 to 4. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response and confirm that we commit to making the proposed changes in a camera-ready version of the paper if it is accepted to the conference.
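As a side note, the 1D patch-embedding argument given in the rebuttal above can be checked numerically. The sketch below (our illustration with hypothetical helper names, not the authors' implementation) verifies that a symmetric stride-2 filter yields features that flop with the input, while an antisymmetric filter additionally flips their sign:

```python
# Standalone sketch of the rebuttal's 1D example (hypothetical helper names).
# A "flop" reverses the signal; a strided correlation plays the role of the
# patch-embedding layer.
def patch_embed_1d(x, filt, stride=2):
    """Strided 1D correlation of signal x with filter filt."""
    k = len(filt)
    return [sum(f * v for f, v in zip(filt, x[i:i + k]))
            for i in range(0, len(x) - k + 1, stride)]

def flop(x):
    return list(reversed(x))

x = [1.0, 2.0, 3.0, 4.0]
sym, antisym = [0.5, 0.5], [0.5, -0.5]  # symmetric / antisymmetric filters

# Symmetric filter: flopping the input flops the feature map (invariant type).
assert patch_embed_1d(flop(x), sym) == flop(patch_embed_1d(x, sym))
# Antisymmetric filter: the feature map flops AND changes sign ((-1)-equivariant type).
assert patch_embed_1d(flop(x), antisym) == [-f for f in flop(patch_embed_1d(x, antisym))]
```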
Summary: The paper suggests a clever implementation of flopping-equivariant linear layers - together with some strategies to adapt other layers too - which makes it possible to achieve flopping equivariance in the most popular vision architectures while halving the computational cost relative to their non-equivariant counterparts. The paper is very well written and motivated and includes extensive empirical analysis. Claims And Evidence: All claims seem well supported by empirical evidence and theoretical arguments. Moreover, all the assumptions are backed up by extensively citing many previous works which provided relevant insights. Overall, this makes the story of the paper very convincing. Methods And Evaluation Criteria: See above Theoretical Claims: See above Experimental Designs Or Analyses: See above Supplementary Material: I quickly read the appendix but I might have missed some details. Relation To Broader Scientific Literature: The manuscript properly places itself in the literature, extensively citing the long line of works which provided the insights used to motivate this work. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is really well motivated, with a very convincing story and introduction. In particular, I really enjoyed reading Sec 3.5, which includes some very elegant arguments and provides in a clear and concise way many practical insights about the design of equivariant architectures. I would even suggest that the authors mention some of these points in the introduction as part of the contribution. Sec 3.1 also manages to give an intuitive yet precise description of equivariant layers and the potential computational gain, making the main ideas in the paper accessible to a wider audience. I do not see any significant weaknesses in this work. 
Of course, the empirical evaluation could be made stronger by scaling further and performing hyper-parameter tuning of the equivariant models, as already argued by the authors, but I agree this is beyond the scope of this paper. I am looking forward to seeing the community pick up on these ideas and further extend the empirical validation. Other Comments Or Suggestions: The block-diagonal decomposition of the linear layers in Sec 3.1 does not directly work for convolution layers unless one only considers flop-invariant filters, since antisymmetric filters can map between the two types of features. This is briefly discussed later in Sec 4.3, but I think the authors could expand this point earlier in the paper for improved clarity. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the review and for highlighting Section 3.5. We are pleased to read that the reviewer finds the paper valuable. > The block diagonal decomposition of the linear layers in Sec 3.1 doesn't directly work for convolution layers, unless one only considers flop-invariant filters since anti-symmetric filters can map between the two types of features. This is briefly discussed later in Sec 4.3 but I think the authors could expand this point earlier in the paper for improved clarity. We agree with this comment and will modify Section 3.1. We thank the reviewer for the suggestion. Finally, we have included one extra experiment in the rebuttal to reviewer s6u9 (Q1), which could interest all reviewers.
Summary: In this paper, the authors present flopping equivariant variants of known vision models: ConvNeXt, ViTs, and ResMLP. The key idea is to parameterize the feature space in terms of mirror-symmetric and mirror-antisymmetric features. This approach reduces FLOPs and wall-clock time, giving an efficient and scalable equivariant architecture. The empirical experiments are done on the ImageNet-1K dataset; FLOPs, throughput, and the number of parameters are reported for three different architectures. ## Post rebuttal I will keep my score as the authors gave a satisfactory rebuttal. Claims And Evidence: The claim in section 5: *equivariant networks can be designed to have the same number of FLOPs...* seems to lack evidence, as the paper only shows this for flopping equivariance, and it might not be true for higher dihedral groups. Methods And Evaluation Criteria: Yes, ImageNet-1K makes sense, and the evaluation criteria are reasonable for the problem. Theoretical Claims: Yes, they seem correct. Experimental Designs Or Analyses: Yes, all of the experiments were presented. The experimental design seems reasonable, although there are a few questions regarding the broader use and implications of the choices made in the paper. The relation to contrastive learning (with equivariance) as well as learned canonicalization, especially applied post hoc to models, would be interesting comparisons to the proposed approach, as they are more commonly used for images. [1,2] show that augmentation helps with performance even for equivariant models, and given the task of image classification, augmentations are not used in the pipeline. Only the classification task, which is invariant to transformations, is considered; tasks like image editing, which are equivariant to transformations, are not considered, without any justification. 1. Structuring Representation Geometry with Rotationally Equivariant Contrastive Learning, Gupta et al. 2. 
On the Utility of Equivariance and Symmetry Breaking in Deep Learning Architectures on Point Clouds, Vadgama et al. Supplementary Material: Yes, sections B and C. Relation To Broader Scientific Literature: The key contribution of the paper, scaling equivariant models using mirror symmetry and mirror antisymmetry, is interesting and novel, although the idea of using invariants to design equivariant networks is not uncommon. Important links to existing works in slightly different domains (point clouds, contrastive learning approaches) are missing, and the effective use for image classification tasks seems superfluous, as a ViT with post-hoc canonicalization or a contrastive learning transformation pipeline can achieve similar results. The important takeaway from the paper is that equivariant models can be scaled in the proposed manner, and there could be more use cases beyond the ones mentioned in the paper. Essential References Not Discussed: Yes, most of the related works are covered in the paper, although a few important works are missing. 1. Structuring Representation Geometry with Rotationally Equivariant Contrastive Learning, Gupta et al. 2. On the Utility of Equivariance and Symmetry Breaking in Deep Learning Architectures on Point Clouds, Vadgama et al. 3. On genuine invariance learning without weight-tying, Moskalev et al. 4. Equivariant Adaptation of Large Pretrained Models, Mondal et al. 5. Equivariance with learned canonicalization functions, Kaba et al. Other Strengths And Weaknesses: ## Strengths - This paper presents a way to scale equivariant models by considering flopping equivariant neural networks. - The proposed approach is applied to three different architectures: ViT, ConvNeXt, and ResMLP, which provides a good understanding of image classification with these architectures and their equivariant and hybrid counterparts. 
## Weaknesses - The paper does not include suggestions for effective improvements to the pipeline, such as augmentation, or for generalizability to other groups. - The lower performance of the smaller equivariant models is not justified. Other Comments Or Suggestions: - In Section 3.5, Limitations: it is unclear what limitations are described in the first paragraph. - In Section 3.5, important works on relaxing equivariance are not mentioned. A lot of symmetry-breaking and relaxed-equivariance work has been done in the domain of point clouds (molecules, etc.). See the Essential References section. Questions For Authors: 1. An equal split of invariant and (-1)-equivariant features is used in the paper. How would the framework's performance change if this ratio were adjusted based on task-specific learned symmetries, and is there a way to learn the optimal division? 2. Is it possible to generalize this approach beyond D2 (flopping)? How does that affect performance at similar FLOPs? 3. Methods like learned canonicalization and its adaptation to large models improve performance without huge computational costs [1, 2]. How does the proposed approach compare to these techniques? 4. Does data augmentation like rotations or flips (commonly used in contrastive learning) help with performance in the proposed approach? 5. With libraries like cuEquivariance [3], which provide improved irreps computation with CUDA implementations, how is the impact of the proposed work affected? [1] Equivariant Adaptation of Large Pretrained Models, Mondal et al. [2] Equivariance with learned canonicalization functions, Kaba et al. [3] [cuEquivariance](https://github.com/NVIDIA/cuEquivariance). See also the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful review and relevant questions.

> The claim in section 5: equivariant networks can be designed to have the same number of FLOPs... seems to lack evidence [...].

We thank the reviewer for the pointer and will refine it to "flopping equivariant networks".

> Essential References Not Discussed

The suggested references are relevant and will be included. Note that our idea for speeding up equivariant networks has not been applied in any of these papers.

> In section 3.5, Limitations: It is unclear what the limitations described in the first paragraph are.

One argument against hard-coding equivariance is that it limits what features can be learned. We argue in the first paragraph that this limitation is weak. We will make it clearer.

> Q1. [...] How would the framework's performance change if this ratio were adjusted based on task-specific learned symmetries, and is there a way to learn the optimal division?

To answer this question, we have implemented a ViT that has an equal split of invariant and (-1)-equivariant features for the first half of the network and only invariant features for the second half (the (-1)-equivariant features from the first half are invariantized by taking their absolute value; incidentally, this is a "complete invariant"). For layers with only invariant features, the block decomposition in equation (2) turns into a dense matrix $W_{1,1}$, since $W_{-1,-1}$ does not exist. So there are no FLOP savings compared to ordinary layers, and the FLOPs-per-parameter ratio is also the same. The accuracies on ImageNet are as follows: ViT-S: no convergence, ViT-B: 82.5%, ViT-L: 84.1%, ViT-H: 84.7%. Comparing these with the results in Table 1, we see that they consistently outperform the equivariant ViT with equal amounts of invariant and (-1)-equivariant features. This aligns with prior work, as discussed in Section 3.5. In terms of total FLOPs and parameters, the new networks lie between the previous equivariant and the baseline networks; for ViT-H the values are almost exactly the same as the previously considered hybrid network. We are not aware of a method to guarantee learning the optimal division of irreps, but as outlined in Section 3.5, we regard this as promising and interesting future work.

> Q2 [and earlier comments]. Is it possible to generalize this approach beyond D2 (flopping)? How does that affect the performance with similar FLOPs?

Yes, this is possible. We discuss this extension in Appendix A. The FLOPs-per-parameter ratio depends on the group, and it is always possible to achieve computational savings. Please refer to the rebuttal to Reviewer RXrY (Q2) for a detailed discussion.

> Q3. Methods like learned canonicalization and its adaptation to large models improve performance without huge computational costs [1, 2]. How does the proposed approach compare to these techniques?

Canonicalization would mean training an extra network that maps an image to a canonical orientation and then feeding that image into a pretrained network. This is useful when a pretrained network exists which is not invariant. A common example would be a network trained on upright images not generalising to rotated images. If the data that the pretrained network was trained on already contains the symmetry (e.g., upright images are equally common in flopped format), then canonicalization cannot improve performance, because the pretrained network is not better on any particular orientation of the images. Also, highly symmetric images can pose a problem for canonicalization-based methods (see e.g. Appendix A of the mentioned [2]). Finally, canonicalization methods incur the extra overhead of running the canonicalization network, while we instead obtain networks that run faster than the baseline networks.

> Q4. Does data augmentation like rotations or flips (commonly used in contrastive learning) help with performance in the proposed approach?

Yes, we use heavy data augmentation like the baseline methods, including Mixup, Cutmix, etc. Refer to Table 2 in the appendix.

> Q5. With libraries like Cu-equivariance [3], which provide improved irreps computation with CUDA implementations, how does this affect the impact of the proposed work?

Libraries with faster equivariant layers further strengthen the broad case for equivariant networks made in Section 3. Cu-equivariance is specialised for processing 3D graphs with rotation ambiguity, such as molecular data, and thus it is orthogonal to our implementations for images.
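To make the FLOP-halving mechanism discussed in this thread concrete, here is a minimal NumPy sketch (an illustration only, not the paper's implementation): splitting channels into invariant and (-1)-equivariant features forces a flopping-equivariant linear layer into block-diagonal form, so one dense multiply over all channels becomes two smaller ones, roughly half the FLOPs when the split is equal.

```python
import numpy as np

rng = np.random.default_rng(0)
c1, c2 = 4, 4  # invariant / (-1)-equivariant channel counts (arbitrary)

# Equivariance under flopping forces the weight matrix into block form
#   W = [[W11, 0], [0, Wmm]],
# so one dense (c1+c2) x (c1+c2) multiply becomes two smaller ones.
W11 = rng.standard_normal((c1, c1))  # mixes invariant features
Wmm = rng.standard_normal((c2, c2))  # mixes (-1)-equivariant features

def layer(f_inv, f_eqv):
    return W11 @ f_inv, Wmm @ f_eqv

# Flopping the input leaves invariant features unchanged and flips the sign
# of (-1)-equivariant features; the layer commutes with this group action.
f_inv = rng.standard_normal(c1)
f_eqv = rng.standard_normal(c2)
out_inv, out_eqv = layer(f_inv, f_eqv)
flopped_inv, flopped_eqv = layer(f_inv, -f_eqv)
assert np.allclose(flopped_inv, out_inv)
assert np.allclose(flopped_eqv, -out_eqv)
```

The asserts check the equivariance property: applying the flop before the layer equals applying it after.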
Summary: The paper's main goal is to develop new equivariant vision models that scale effectively. It presents equivariant networks that preserve mirror symmetry while keeping FLOPs comparable to non-equivariant models. The focus is on simple image symmetries for more efficient computation. The proposed network divides all feature maps into two types: flopping-invariant features and flopping-equivariant features; the latter undergo a sign flip when mirroring occurs. This split architecture enables smaller matrix multiplications while maintaining equivariance, cutting the number of FLOPs in half. The paper discusses a patch embedding layer along with considerations for non-linearities and attention. It examines three successful vision architectures: ResMLP (Touvron et al., 2023), ViT (Dosovitskiy et al., 2021), and ConvNeXt (Liu et al., 2022). Each architecture is explained simply, with adaptations proposed using the flopping-equivariant architecture. The experiments compare: 1) model size versus FLOPs, 2) network throughput, and 3) memory usage. Results for all architectures are presented on ImageNet, showing that the proposed models scale well compared to state-of-the-art non-equivariant vision models.

Claims And Evidence: The main motivation of the paper is to encourage work on scaling equivariant neural networks. The main claims are: 1) flopping-equivariant networks can achieve accuracy comparable to state-of-the-art vision networks; 2) the FLOPs-versus-model-size tradeoff is comparable; 3) the memory footprint is reduced. The results in Figure 5 and Table 1 accurately support the main claims of the paper.

Methods And Evaluation Criteria: Yes, the evaluation criteria (comparison in terms of accuracy, throughput, memory, FLOPs, model size) are sensible. The three baseline architectures are well-optimized in the literature on ImageNet, so the comparison actually puts the proposed model at a slight disadvantage.

Theoretical Claims: Not applicable.

Experimental Designs Or Analyses: The experimental design and the choice of metrics are sensible. The appendix describes experimental recipes in more detail.

Supplementary Material: Yes, I read through the supplementary material. It consists of: 1) a review of the necessary background on GCNNs -- this is to ensure only an intuitive presentation in the paper itself without too much mathematical detail; 2) details on implementation and experimental protocols.

Relation To Broader Scientific Literature: Exploring the scaling behaviour of equivariant models is only a recent area of interest. This paper considers specific vision models and then proposes equivariant counterparts that scale well. I think the key contribution is well-motivated and of broader interest.

Essential References Not Discussed: I think references are discussed adequately. However, I would like to suggest the authors also take a look at the following paper: "Symmetry-Based Structured Matrices for Efficient Approximately Equivariant Networks" by Samudre et al. This paper explores a similar theme, even though it is mainly theoretical and about approximate equivariance. Note that the experiments seem to have a typo (FLOPS instead of FLOPs).

Other Strengths And Weaknesses: As such, the ideas used in the paper are technically straightforward. However, its main goal is to encourage designing equivariant models that have favourable scaling properties. In that goal, I believe the paper does a decent job.

Other Comments Or Suggestions: N/A

Questions For Authors: Could the authors expand on why implementing the Winograd scheme might be involved on GPU?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the clear review, which accurately summarises our main contributions.

> I think references are discussed adequately. However, I would like to suggest the authors also take a look at the following paper: "Symmetry-Based Structured Matrices for Efficient Approximately Equivariant Networks" by Samudre et al. This paper is about exploring a similar theme, even though the paper is mainly theoretical, and about approximate equivariance. Note that the experiments seem to have a typo (FLOPS instead of FLOPs).

This is a good suggestion for an additional reference, which we will add. Combining the structured matrix approach with the irrep-decomposition approach could be interesting for future work.

> As such, the ideas used in the paper are technically straightforward. However, its main goal is to encourage designing equivariant models that have favourable scaling properties. In that goal, I believe the paper does a decent job.

We hope the technical straightforwardness can be regarded as a strength in this context.

> Could the authors expand on why implementing the Winograd scheme might be involved on GPU?

Implementing a naive version of the scheme is quite straightforward. What is challenging is making the implementation competitive with highly optimised GPU kernels for ordinary convolutions, such as the ones in the widely used closed-source cuDNN library. It requires careful handling of data shuffling within the GPU as well as proper utilization of hardware specifics such as tensor cores. Finally, we have included one extra experiment in the rebuttal to reviewer s6u9 (Q1), which may interest all reviewers.

---

Rebuttal Comment 1.1: Comment: Thank you for the concise clarifications, which I found useful. I would like to keep my recommendation of accept.
The Complexity of Learning Sparse Superposed Features with Feedback
Accept (poster)
Summary: In this work, the authors study how well the features learned by a neural network can be retrieved by means of some agent, e.g., an LLM, in the form of *relative triplet comparisons*. Formally, leveraging the linear feature decomposition that encodes features in a dictionary, the authors investigate how well a learner provided with sparse triplets given by an agent can identify the relevant features (up to normal transformation). After introducing the protocol with which a learner receives triplets from an agent, the authors provide bounds on the feedback efficiency, i.e., the minimal number of interactions needed to learn feature matrices (normal transformations of the feature dictionary). The authors provide tight bounds in sparse and standard settings with an agent that gives either constructive or distributional sparse triplets. Finally, the authors provide empirical validation of their results on synthetic and large-scale models.

Claims And Evidence: The theoretical claims are well supported by clear and detailed proofs, and the authors also provide experimental validation of their theory on synthetic tasks and large-scale models.

Methods And Evaluation Criteria: The main contributions of this work are theoretical, and the authors provided detailed proofs as well as experimental computation to validate the theory in both synthetic and large-scale settings. I believe the methods and evaluation criteria make sense for the problem at hand.

Theoretical Claims: The proofs are well thought out, detailed, and clear. I find them elegant, and in my opinion, these are the main strengths of this work. One does not need to be an expert in the research field of the current submission to understand the proofs and the theoretical results.

Experimental Designs Or Analyses: The authors provided the code to reproduce their results. A quick inspection of the code seems to indicate that the experiments were well-conducted, although I did not re-run the experiments. The experimental setup is well introduced; however, I have some issues with understanding how well the experiments confirm the theoretical results. The current submission would be improved by clearly stating in more detail what empirical confirmation can be seen from the figure. A comparison between the theoretical bound and the one observed in practice could also be of interest.

Supplementary Material: I read the proofs and the experiments left in the appendix, and reviewed the code provided in an anonymous link.

Relation To Broader Scientific Literature: I find that related work and prior works are well introduced and compared. The submission's contributions are part of a growing interest in better understanding the internal mechanisms at play in neural networks, especially in LLMs (e.g., prior works on Sparse Autoencoders, mechanistic interpretability). The protocol to retrieve features learned by a neural network using an agent such as a human or an LLM, along with the bounds on the feedback complexity, seems novel.

Essential References Not Discussed: To the best of my knowledge, there are no essential references not discussed in the current submission.

Other Strengths And Weaknesses:

**Strengths**
- The proofs are clear and well-detailed, and notations are defined early for the reader
- The problem considered seems original and is well explained
- I find the proof techniques elegant, with the null space analysis and cardinality computations. I find it refreshing to see the use of algebra.

**Weaknesses**

I list below what I think are weaknesses, but I would be happy to be corrected if I misunderstood some important aspects of the authors' contributions.
- In my opinion, the main weakness of the current work is the contextualization of the theoretical results. More discussion regarding their impact and comparison to the existing literature could greatly improve the current submission (e.g., benefits of the bounds for the feature learning community, numerical impact on the computational resources needed to retrieve neural network features, etc.).
- Moreover, it is not clear to me how the experiments validate the theoretical findings, and since there is no extended intuition/explanation of them, it is quite hard to grasp their impact, which makes the overall work lack some coherence. An idea to improve the submission would be to add some comparison between theoretical vs. empirical bounds.
- A conclusion/discussion section is missing to summarize the contributions and discuss limitations and future work.

Overall, I find the paper well written, and the framework studied is interesting and quite original. I value the technical contributions for the bounds; however, I have trouble seeing the benefits and practical takeaways of the approach in the context of SAEs and feature learning because of the lack of explanations/intuition. This is the reason for my current score. I remain open to modifying my score, provided the authors clarify the points mentioned in the weaknesses section.

**Update after rebuttal**: increased score from 3 to 4.

Other Comments Or Suggestions: I list below some potential typos:
- Table 1 / first cell: "Stardard Constructive" --> "Standard Constructive"

Questions For Authors: Related to weaknesses:
1) How do the bounds translate in terms of computational resources needed to retrieve features learned by neural networks in practice? The theorems indicate the convergence rate, but it could be interesting to compare it to the naive baseline or common methods used in practice. Could the authors elaborate on that?
2) Could the authors add more discussion on the takeaways of the theoretical results and empirical validation?
3) Could the authors compare the theoretical and empirical bounds to better understand how well the theory predicts the practice (for instance, on some simple synthetic task)?
4) Could the authors add a conclusion/discussion section at the end of the paper to summarize the contributions and discuss the limitations and room for future work?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful questions and suggestions. Below, we provide detailed responses to the main concerns. --- **1. Can the authors compare theoretical vs empirical bounds?** We perform experiments based on RFM (cf Section 6, p.8) on a monomial regression task to establish a high correlation of theoretical bounds to empirical bounds. A pdf showing plots and a single Jupyter Notebook can be found here: https://anonymous.4open.science/r/rebuttal_ICML_featurelearning-2E0B/ --- **2. Conclusion/discussion section missing.** We will complete a discussion section in the main text to summarize the contributions and discuss the limitations and room for future work. --- **3. Benefit to the feature learning community and numerical impact on computational resources for feature retrieval.** First, we compare some known empirical methods for feature retrieval: - **Numerical Impact:** - **SAEs:** SAEs are widely used to recover interpretable feature dictionaries. However, training SAEs is computationally expensive and typically scales as $\mathcal{O}(Tnpd)$, where $T$: number of iterations, $n$: number of samples, $p$: activation dimension, and $d$: input space dimension. [1] E.g.: > "We take a one-layer transformer with a 512-neuron MLP layer, and decompose the MLP activations into relatively interpretable features by training sparse autoencoders on MLP activations from 8 billion data points..." [2] - **CRAFT:** CRAFT uses non-negative matrix factorization (NMF) to extract concepts from hidden activations. It solves a dense matrix factorization problem via ADMM with per-iteration complexity $\mathcal{O}(npr)$, where $r$ is the number of latent components [4]. This method requires access to full activations. - **Probing:** [5] studies how semantic properties are linearly decodable from hidden representations. 
While not aimed at full feature recovery but rather interpretable linear concepts, these methods include: - **Mass-Mean:** $\mathcal{O}(np^2 + p^3)$ - **Logistic Regression:** $\mathcal{O}(Tnp)$ - **Contrast-Consistent Search:** $\mathcal{O}(mp^2)$ ($m$ is the number of contrast pairs) - **Our Method:** Compared to the empirical performance of these baselines, our feedback-based method achieves these provable bounds: - **Random/Sparse Sampling:** $\mathcal{O}(p^2)$ or $\tilde{\mathcal{O}}(p^2)$ ($\tilde{\mathcal{O}}$ hides log and sparsity-related factors) - **Eigendecomposition:** $\mathcal{O}(r^2 + p)$ ($r$: the rank of the feature matrix) - **Relevance to Feature Learning:** - **Radhakrishnan et al. (2024):** [3] posits a Neural Feature Ansatz, showing a strong correlation between weight outer products and a neural network's Average Gradient Outer Product (AGOP). Our method complements this line of work by providing insights into how efficient feature learning could be possible, leveraging low-dimensional structure of task-specific relevant directions (in AGOP). - **Distillation:** Our results are applicable to model distillation, suggesting that more efficient algorithms can be designed by exploiting the low-dimensional structure of the features of larger models. - **Feature Formation:** Our work highlights that learning certain classes of features can have inherent statistical and computational bottlenecks, especially in high dimensions. Even with feedback, quadratic dependence on dimension is generally unavoidable. --- **4. Takeaways of the theoretical results and empirical validation.** Key takeaways: - Our results suggest potential sample complexity lower bounds on feature formation in the activation of a layer or a trained SAE for standard learning frameworks, e.g., active learning, iid learning, machine teaching. 
- There is a clear trade-off between the expressiveness of high-dimensional feature matrices and the difficulty of recovering them: more complex or expressive dictionaries generally require more feedback or data. - In the general case, the required feedback scales quadratically with dimension, though this can be reduced under structural assumptions (e.g., low rank). - Empirically, we find recovery becomes harder in high dimensions, and results suggest leveraging structure may improve efficiency, motivating future works to dimensionality reduction. --- ### **References** [1] [Open Problems in Mechanistic Interpretability](https://arxiv.org/abs/2501.16496) [2] [Towards Monosemanticity: Dictionary Learning](https://transformer-circuits.pub/2023/monosemantic-features/index.html) [3] [Mechanism for Feature Learning](https://www.science.org/doi/10.1126/science.adi5639) [4] [CRAFT: Concept Recursive Activation FacTorization](https://arxiv.org/abs/2211.10154) [5] [The Geometry of Truth](https://arxiv.org/abs/2310.06824) --- We hope these responses address your concerns. Please let us know if you have additional feedback or comments, and we will be happy to provide further clarification. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed rebuttal and additional experiments. This addresses my concern regarding the validation of theoretical findings. I also appreciate the explanation on the relevance to feature learning community. I believe the additional experiments and discussion should be included in the paper to understand the contributions better along with conclusion section discussing limitations and future work. Given the authors's rebuttal, I will increase my score to 4 (accept). Best, Reviewer 8xsq --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for the comments and suggestions and for increasing the score positively. Some of the concerns about relevance and comparison to the literature have led us to rethink the contribution. 
In the revision of the work, we will include the additional experiments with discussion in the main paper and also write a conclusion section detailing the contribution, future directions, and limitations of the work. Thanks, Authors
Summary: This paper proposes a new problem, inspired by recent works in ML interpretability, specifically in the literature that posits that activations *linearly* encode concepts. (This is known as the linear representation hypothesis.) The authors suggest that we can use an agent (potentially an LLM) to generate labeled data in representation space. These labels are for triples, and indicate which of two elements of the triple is closest to the third (or whether they are equal in the case of a tie). The goal is essentially to learn a PSD matrix for which the induced distance metric matches all the labeled data. The authors consider 4 different constraints on the agent's data-generating process.
(1) Standard Constructive: In this case, if the matrix is rank $r$, we can first provide labels that enumerate the nullspace, and then provide the remaining $O(r^2)$ points to learn the low-rank $\Phi$.
(2) Sparse Constructive: The unit vectors and their pairwise sums suffice, but we cannot exploit the rank.
(3) Sampled Activations: Under the assumption that each sampled representation defines a corresponding rank-1 matrix, and all are a.s. linearly independent, the $p(p+1)/2$ bound can be achieved to determine $\Phi$.
(4) Sparse Sampled Activations: We only get samples of representations that are at most $s$-sparse, as we would expect in practice. I didn't fully grasp the proof -- maybe the exposition can be improved.

Claims And Evidence: The main point of this paper is a new theoretical formulation and a solution to that formulation. Given that the proofs in the appendix are correct (the sketches for at least Thm. 1-3 are quite convincing), there is sufficient evidence for the claims. Experiments also seem to validate the claims.

Methods And Evaluation Criteria: Yes. Proofs for formal claims + some small experiments for additional verification.

Theoretical Claims: I did not check the proofs in the appendix, but the proof sketches for Theorems 1-3 are sufficiently detailed (at least for the upper bounds) that I am convinced the statements are relatively easily proved. For Theorem 4, I did not follow the proof sketch, and did not check the proof.

Experimental Designs Or Analyses: Experiments are quite limited. For the final version, the authors might try to work on some additional experiments with greater scale/complexity. However, I don't think the experiments are needed to verify any of the claims, and I recognize that the approaches considered (esp. for Sparse Sampled Activations) might not scale well and may require many samples, making them unsuited for real models.

Supplementary Material: I looked over Appendix I thoroughly. I quickly looked at Appendix E to see if the proofs were in line with my expectations.

Relation To Broader Scientific Literature: This paper has value in its contribution to the theoretical foundation of mechanistic interpretability. The theory and ideas are suggestive of potential new approaches for learning representations, and establish bounds on the complexity of learning from simple comparison data.

Essential References Not Discussed: Perhaps some limitations of linear representations could be discussed. See [1].

[1] Recurrent Neural Networks Learn to Store and Generate Sequences using Non-Linear Representations, Róbert Csordás, Christopher Potts, Christopher D Manning, Atticus Geiger

Other Strengths And Weaknesses: I think the unique perspective of this work is in its interesting problem framing, using agents to generate labeled data for learning. An additional weakness is the potential lack of practicality of the first three theorems.

Other Comments Or Suggestions: I think the "additional experiments" section is interesting and worthy of inclusion at the end of the paper. I understand the challenge of presenting it, though, as it deviates significantly from the approaches discussed in the theory due to practicality.

Questions For Authors:
1. What is $p_s$? Can you provide some more information in the text?
2. Why is sparse constructive also not $\Theta$ (given the argument)?

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful questions and suggestions. Below, we provide detailed responses to the main concerns. --- **1. Potential lack of practicality of the first three theorems.** As discussed in our response to Reviewer 2 (gr2L), our framework considers general feature learning. The results in Theorems 1–3 apply broadly across domains such as machine teaching, program synthesis, and student–teacher models, where best-case complexity is a central focus. These bounds provide foundational insights and can guide the design of more efficient teaching or interaction systems in these areas. --- **2. Discussion of additional experiments in the main text.** In the revised version of the paper, we will incorporate a substantial discussion of the methodology and findings from the additional experiments directly into the main text. --- **3. What is $p_s$? Can you provide more information in the text?** As detailed in Appendix H, $p_s$ is the probability of the event where $P$ sparse activations (with sparsity $s$) are sampled with a pattern such that the design matrix $\mathbb{M}$ has non-zero determinant (see proof outline of Theorem 4, pp. 7–8). This ensures that at least $P = p(p+1)/2$ linearly independent rank-1 matrices can be constructed from these activations. We will highlight and define this quantity more clearly in the main paper. --- **4. Why is sparse constructive also not $\Omega$?** As noted in the remark following Theorem 2, the lower bound of $\Omega(p^2)$ holds when feedback sparsity is of constant order, i.e., $s = O(1)$. This is sufficient for cases where feedback is restricted to constant-sparse vectors. However, it remains open whether this lower bound remains tight when $s$ grows with $p$, for instance, if $s$ is strongly sublinear in $p$. The upper bound in Theorem 2 is stated for 2-sparse feedbacks, and it applies more generally to higher sparsity values as well. 
--- We hope these responses address your concerns. Please let us know if you have additional feedback or comments, and we will be happy to provide further clarification.
Summary: The paper demonstrates that complex features encoded in sparse superposed representations can be effectively learned through a surprisingly minimal and indirect form of feedback. More specifically, the authors show that there is low feedback complexity required to learn sparse superposed features using relative triplet comparisons.

## update after rebuttal

I have carefully reviewed your rebuttal and found that most of my concerns are likely addressable and trust that the authors will be able to improve the draft. I do suggest the authors move the experimental information in Appendix 1 to the main text to demonstrate applicability. I will be increasing my score for the submission.

Claims And Evidence: The authors demonstrate the following differences in complexity when going from a fully general (worst-case) setting to an assumption of low-rank structure:

**Worst-case scenario:** $$\frac{p(p+1)}{2}$$
- $p$ denotes the dimension of the representation space

**Improved complexity (low rank):** $$\frac{r(r+1)}{2} + (p - r) - 1$$
- $r$ represents the rank of the feature matrix. Since $\Phi^*$ is effectively $r$-dimensional, one can exploit that low-rank structure to reduce the number of constraints needed to recover $\Phi^*$ up to a positive scalar factor. Therefore, you only need to describe how the matrix acts on its $r$-dimensional subspace (where it's nonzero).

Methods And Evaluation Criteria: The authors take two approaches to evaluate their claims:

**Theoretical Analysis** They derive upper and lower bounds on the number of feedback constraints required to identify a target feature matrix up to a positive scale factor. By contrasting constructive feedback (where any activation can be chosen) with sampling-based feedback (where activations are drawn from known distributions), the authors show how assumptions like low-rank structure dramatically reduce the needed constraints.
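The gap between the two feedback counts quoted in this review can be made concrete with a short snippet (illustrative only; the values of $p$ and $r$ are arbitrary choices, not taken from the paper):

```python
# Feedback counts implied by the two bounds quoted above.
def worst_case(p):
    # Fully general setting: one constraint per degree of freedom of a
    # symmetric p x p matrix.
    return p * (p + 1) // 2

def low_rank(p, r):
    # Low-rank setting: describe the action on the r-dim subspace plus
    # (p - r) - 1 extra constraints for the nullspace.
    return r * (r + 1) // 2 + (p - r) - 1

p, r = 512, 16  # arbitrary illustrative sizes
print(worst_case(p))   # 131328
print(low_rank(p, r))  # 631
```

For these sizes, exploiting rank reduces the required feedback by more than two orders of magnitude, which is the point of the improved bound.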
**Empirical Evaluation** They implement four feedback strategies -- Eigendecomposition, Sparse Constructive, Random Sampling, and Sparse Sampling -- and test them on synthetic tasks (Recursive Feature Machines) and large-scale sparse autoencoders (e.g., Pythia-70M). Key metrics include the mean squared error between the reconstructed matrix and the ground truth, along with the total number of constraints each method requires.

Theoretical Claims: As a reviewer who does not work in the domain of theory, I did not carefully assess the correctness of the proofs in this work.

Experimental Designs Or Analyses: Outside of reviewing this paper, the soundness/validity was not checked.

Supplementary Material: The supplementary material outside of the PDF of the draft was not reviewed.

Relation To Broader Scientific Literature: The key impact this paper makes appears to be towards the interpretability community. From this perspective, many of the formulations presented in the main text may not apply. For example, the constructive setting is not possible to apply to models (like LLMs), since the activation features cannot be controlled in such a manner.

Essential References Not Discussed: I do not have strong familiarity with the theoretical work in this area to know if essential references are missing.

Other Strengths And Weaknesses: The reduction in complexity is a desirable property in the space of model interpretability. Without a discussion section or conclusion, it was very difficult to properly evaluate this paper. In its current form, the paper feels incomplete. Also, the flow of the paper is very difficult to parse given all the different constructions in the derivations.

Other Comments Or Suggestions: Table 1: the row header says "Stardard Constructive" instead of "Standard Constructive."

Questions For Authors: What is the value of the constructive setting outside of theoretical considerations? How robust are the recovery guarantees when the feedback is noisy or inconsistent?
How sensitive are the results to the assumptions on activation sparsity and the underlying distribution?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful questions and suggestions. Below, we provide detailed responses to the main concerns. --- **1. Evaluation of the paper.** We emphasize the broad relevance of our theoretical work - They apply to feature extraction in LLMs (see Section 6 and Appendix I for additional experiments for LLMs). - Our results bound sample/feedback complexity for any class of dictionaries, e.g., sparse ones. - Furthermore, they inform directions for mechanistic interpretability based on Neural Feature Ansatz. [1], [2] **2. Lack of a discussion section or conclusion.** In the revision, we will include a dedicated discussion section that highlights the main contributions, draws connections to related approaches in feature learning, and outlines the limitations of our work. --- **3. What is the value of the constructive setting outside of theoretical considerations?** Our framework encompasses general feature learning and applies to both standard and sparse samples/activations. The constructive setting can be useful in several practical contexts: - **Machine teaching and program synthesis:** There is growing interest in understanding the *teachability* of a hypothesis space. Analyzing the best-case complexity of teaching is essential in settings where an agent selects samples to guide a learner. Similarly, in program synthesis, constructive feedback or targeted examples can be valuable for efficiently steering a learner to a fixed goal. [3], [4] - **Student-teacher models:** The bounds in the constructive setting translate directly into guarantees for interactive learning in student-teacher models. While the setting may appear contrived, the resulting bounds clarify the gap in informativeness between different types of feedback. [5] --- **4. How robust are the recovery guarantees when the feedback is noisy or inconsistent?** This is an interesting direction for future work. 
There are two settings worth exploring: - **Weak teaching agent:** Here, the agent provides feedback with some noise (e.g., Gaussian). This leads to an additional error-dependent factor in the feedback complexity, on average. Note that the reduction in Lemma 2 (p. 4) assumes that feedback vectors are equivalent in norm under the feature map, so exact recovery is not possible in the noisy setting. The increase in feedback complexity is due to difficulties in reliably sampling low-error feedbacks, which could be addressed via concentration bounds under specific noise models. - **Learner receives noisy feedback:** When the learner only receives relative (and noisy) feedback on activations, standard information-theoretic arguments imply that, in the worst case, the feedback complexity is unbounded. Specifically, the differences between rank-1 matrices generated by noisy feedbacks must lie in the orthogonal complement of the target feature matrix in the space of symmetric matrices—an event that occurs with zero probability in general (see related argument in Section 5). It would be interesting to study whether this can be improved in interactive settings or under structured assumptions on the feature matrix. --- **5. How sensitive are the results to the assumptions on activation sparsity and the underlying distribution?** We analyze two sampling setups: - **Random sampling:** We assume the input distribution follows a general Lebesgue measure, and we apply Sard's Theorem to derive tight bounds. Thus, unless one considers more exotic measures, the result is robust across a wide class of distributions. Since Lebesgue measures cover most distributions of practical and theoretical interest, the bounds provide a meaningful characterization of feedback informativeness in this setting. - **Sparse sampling:** For sparse activations, we follow the distributional assumptions common in the dictionary learning literature (cf. Gribonval et al., 2015). 
The upper bound in Theorem 4 (p. 7) does not depend on a specific distribution. Instead, it uses a pattern-matching argument and Hoeffding's inequality. The bound depends on the sparsity level and the probability of non-zero coordinates in the activation vectors. Thus, feedback complexity is lower for less sparse activations and for distributions with higher probability mass on more dimensions. --- ### **References** [1] [Mechanism for Feature Learning](https://www.science.org/doi/10.1126/science.adi5639) [2] [Aggregate and conquer](https://arxiv.org/pdf/2502.03708) [3] [An Overview of Machine Teaching](https://arxiv.org/pdf/1801.05927) [4] [Program Synthesis](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/10/program_synthesis_now.pdf) [5] [Teacher-Student Architecture for Knowledge Distillation: A Survey](https://arxiv.org/abs/2308.04268) --- We hope these responses address your concerns. Please let us know if you have additional feedback or comments, and we will be happy to provide further clarification. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I have carefully reviewed your rebuttal and found that most of my concerns are likely addressable, and I trust that the authors will be able to improve the draft. I do suggest the authors move the experimental information in Appendix 1 to the main text to demonstrate applicability. I will be increasing my score for the submission. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for raising your score. We will include a dedicated discussion section and the experimental information in the main text. Thanks, Authors
Summary: This paper investigates theoretical bounds on feedback complexity for learning feature matrices through triplet comparisons. The authors analyze both constructive settings (where agents select activations) and distributional settings (with sampled activations). In particular, for a rank-r feature matrix in p-dimensional space, they prove bounds of Θ((r(r+1)/2) + (p-r)) comparisons in the constructive setting and Θ(p(p+1)/2) for general activations sampled from a Lebesgue distribution (alongside bounds for sparse sampling settings). They validate their findings using Recursive Feature Machine-trained models and dictionaries from sparse autoencoders trained on language models like Pythia-70M and Board Game models. Claims And Evidence: The paper's theoretical claims are supported by mathematical development in the main text, with detailed proofs referenced in appendices. The authors establish a learning framework using triplet comparisons, which they reduce to pairwise comparisons with equality constraints in Lemma 2. This allows them to reformulate the learning problem in terms of matrices that annihilate a specific subspace, providing a geometric interpretation. For constructive feedback settings, Theorem 1 proves tight bounds of Θ((r(r+1)/2) + (p-r) ) for rank-r matrices through a decomposition approach. For general sampling, Theorem 3 establishes bounds of Θ(p(p+1)/2) using Lebesgue measure properties. For sparse sampling, Theorem 4 provides an upper bound based on probability theory. The experimental validation shows alignment with theoretical predictions. Figure 1 demonstrates that for a 10×10 feature matrix, eigendecomposition-based and sparse constructive methods achieve the predicted performance. Figure 2 shows how sparsity probability affects feedback requirements, with higher sparsity requiring more samples as predicted, with some additional results on language models like Pythia-70M and Board Game models in Appendix. 
Methods And Evaluation Criteria: The paper establishes a well-defined mathematical framework for analyzing feedback complexity. The authors use an oblivious learner model (Definition 3) that randomly selects a feature matrix satisfying all feedback constraints, allowing them to focus on information-theoretic aspects. For evaluation, they use mean squared error (MSE) between learned and ground truth feature matrices to assess reconstruction quality. The experiments employ Recursive Feature Machines (RFMs) to generate controlled test cases with known ground truth matrices. The monomial regression task f*(z) = z₀z₁1(z₅ > 0) provides a controlled environment for testing. For large-scale experiments, the authors mention using batch-wise gradient descent for optimization with dictionaries from language models. Theoretical Claims: The paper presents several theoretical results that I reviewed at a high level, though I did not work through all the mathematical details: 1. The key theoretical contribution is establishing bounds on the minimum number of triplet comparisons needed to learn feature matrices under different settings. The authors develop a framework where triplet comparisons are reduced to pairwise equality constraints, then analyze the problem through the lens of orthogonal complements in matrix spaces. 2. Their approach decomposes feature matrices into eigenspace and null space components, allowing them to derive tight bounds for the constructive setting. For distributional settings, they leverage measure theory to establish bounds for general activations and probability theory for sparse activations. 3. These theoretical developments build upon established results in linear algebra, measure theory, and probability, though a complete verification would require detailed examination of the appendices. Experimental Designs Or Analyses: The experimental design includes both synthetic and language model-based evaluations: 1. 
RFM Experiments: The authors use monomial regression with RFMs to generate controlled test cases. They compare four feedback methods (eigendecomposition, sparse constructive, random sampling, sparse sampling) on 10×10 matrices of rank 4. 2. Sparsity Analysis: Figure 2 investigates how sparsity probability affects feedback requirements, showing increased requirements with higher sparsity. 3. Language Model Dictionaries: The paper mentions experiments on dictionaries from Pythia-70M and Board Game models with dimensions of 32k×512 and 4096×512 respectively, using batch-wise gradient descent for optimization. The authors are encouraged to include experiments and discussions based on LLM dictionary learning in the main paper to demonstrate how the ideas work in realistic settings and at scale. Supplementary Material: No Relation To Broader Scientific Literature: The paper connects to several research domains: 1. Dictionary learning: The work extends dictionary recovery approaches by focusing on learning from feedback rather than direct samples, building on prior work by Gribonval & Schnass (2010) and Arora et al. (2013). 2. Mechanistic interpretability: The paper contributes to understanding feature representation in neural networks, relating to work on sparse autoencoders for interpretability by Bricken et al. (2023). 3. Superposition theory: The work addresses neural network superposition as described by Elhage et al. (2022), where models represent more features than dimensions through sparse linear combinations. 4. Mahalanobis distance learning: The triplet comparison framework connects to work on metric learning by Kulis (2013) and Schultz & Joachims (2003). 5. Recursive Feature Machines: The experiments leverage recent work on RFMs by Radhakrishnan et al. (2024), which use the Average Gradient Outer Product for feature learning. Essential References Not Discussed: Some relevant recent works that could strengthen the paper: 1. Lan et al. 
(2024) "Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models" (arXiv:2410.06981), which shows how SAEs transform LLM activations into interpretable spaces where feature universality can be studied. 2. Radhakrishnan et al. (2024) "Linear Recursive Feature Machines provably recover low-rank matrices" (arXiv:2401.04553), which provides theoretical guarantees for RFMs in low-rank matrix recovery. 3. Recent work from Anthropic on scaling sparse autoencoders to larger models and using them for causal interventions, that demonstrate practical applications for dictionary learning. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - More detailed analysis of language model experiments would strengthen the paper. - Discussion of adaptive feedback mechanisms could enhance future directions. - More explicit connection to interpretability applications would increase impact. Questions For Authors: 1. For the sparse sampling scenario, do you have insights on potential matching lower bounds to complement your upper bound? Not sure if I missed some details here. 2. In your language model dictionary experiments, how well did the empirical feedback requirements align with your theoretical predictions? What specific challenges did you encounter at this scale? 3. How might your theoretical framework extend to adaptive feedback scenarios, where triplets are selected based on previous responses? Could this potentially reduce the number of comparisons needed in practice? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful questions and suggestions. Below, we provide detailed responses to the main concerns. --- **1. Potential lower bound to complement the sparse sampling upper bound (Theorem 4)** We conjecture that the dependence on $p^2$ and $\log \tfrac{1}{\delta}$ in the feedback complexity bound of Theorem 4 is fundamental. A matching lower bound is likely to follow from standard statistical lower bound techniques, such as Fano’s inequality. Specifically, identifying a feature matrix from sparse feedback observations constitutes a high-dimensional hypothesis testing problem, where each feedback vector reveals limited information. Under such settings, Fano-style arguments imply that at least $\Omega(p^2 \log \tfrac{1}{\delta})$ feedbacks are necessary to achieve reliable identification with confidence $1 - \delta$. Additionally, the quantity $p_s$ in Theorem 4, which captures the number of monomial components affected by sparse activations, could potentially be refined. A sharper bound may be achievable by analyzing high-probability events where sampled sparse activations span the full $p^2$-dimensional space of symmetric matrices. We note that deriving a fully rigorous tight lower bound is valuable future work. --- **2. Empirical vs. theoretical feedback complexity in large-scale SAE experiments** In Table 2 (Additional Experiments, p. 29), we report the empirical feedback complexity and the corresponding Pearson Correlation Coefficient (PCC) for various feedback mechanisms on the ChessGPT SAE. For Eigendecomposition, Sparse Constructive, and Random Sampling, the empirical observations align closely with the theoretical bounds stated in Theorems 1–3. For Sparse Sampling, we used 3-sparse activations, which result in a small $p_s$. Consequently, the PCC improves with increasing feedback but remains lower than the other mechanisms due to the fixed sparsity relative to the input dimension $p = 4096$. 
We observe similar patterns on Pythia-70M SAEs. To further support the comparison between theoretical and empirical bounds, we include a synthetic experiment designed to highlight the alignment between the two (see response in bullet point 1. to Reviewer 8xsq). --- **3. Challenges in scaling to large dimensions** For models such as ChessGPT and Pythia-70M, the total number of degrees of freedom $P = p(p+1)/2$ reaches 8.3 million and 512 million, respectively. These scales present significant challenges in terms of storage and computation. Except for Eigendecomposition—which exploits the low-rank structure of the feature matrix—all feedback mechanisms require complexity scaling with $P$. To address this, we implemented a sparse feedback representation and avoided storing full dense vectors. Without constant sparsity, storing even $10^7$ feedback vectors becomes impractical. Furthermore, solving the resulting linear systems via direct constraint satisfaction is infeasible. Instead, we use a batch-wise gradient descent method (Algorithm 3, p. 29) to solve for the feature matrix using the feedback generated by rank-1 outer products. --- **4. Extending to adaptive feedback settings** We agree that extending our framework to incorporate adaptive feedback is a compelling direction for future work. In our current setting, the learner is *oblivious*—it selects a feature matrix uniformly at random from the set of hypotheses consistent with received feedback. As a result, the teacher cannot leverage adaptivity across feedbacks. However, in more expressive settings where the learner uses a *preference function* to select hypotheses (e.g., minimizing Frobenius norm, promoting sparsity, or favoring identity-like solutions), adaptive feedback strategies could substantially reduce feedback complexity. 
Related ideas have been explored in recent work on machine teaching and interactive learning: - [Teaching via Best-Case Counterexamples in the Learning-with-Equivalence-Queries Paradigm](https://openreview.net/forum?id=Ee7IOrpLwT) - [Preference-Based Batch and Sequential Teaching: Towards a Unified View of Models](https://papers.nips.cc/paper_files/paper/2019/hash/4dc3ed26a29c9c3df3ec373524377a5b-Abstract.html) Incorporating such learner models into our framework would allow for interaction-aware teaching agents and could yield tighter feedback complexity bounds. This remains an exciting avenue for future investigation. --- **5. Essential References** We thank the reviewer for highlighting these relevant references. We will incorporate them into the appropriate sections of the revised paper. Additionally, we note that the work of Radhakrishnan et al. (2024) addresses low-rank matrix recovery under exact measurement conditions, whereas our setting involves relative feedback, which introduces additional complexity. --- We hope these responses address your concerns. Please let us know if you have additional feedback or comments, and we will be happy to provide further clarification.
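To make point 3 above more concrete, the batch-wise approach of solving for the feature matrix from rank-1 feedback constraints could be sketched roughly as follows. This is a hypothetical simplification (plain full-batch gradient descent on the squared violation of pairwise equality feedbacks u^T M u = v^T M v), not the authors' Algorithm 3; the function name, step size, and loop structure are illustrative only.

```python
import numpy as np

def fit_feature_matrix(pairs, p, steps=500, lr=0.005, seed=0):
    """Hypothetical sketch: gradient descent on the squared violation of
    pairwise equality feedbacks u^T M u = v^T M v over symmetric M."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((p, p))
    M = (A + A.T) / 2  # random symmetric initialization
    for _ in range(steps):
        grad = np.zeros((p, p))
        for u, v in pairs:
            r = u @ M @ u - v @ M @ v  # residual of one feedback
            # gradient of r^2 w.r.t. M is 2 r (u u^T - v v^T)
            grad += 2 * r * (np.outer(u, u) - np.outer(v, v))
        M -= lr * grad / len(pairs)  # averaged (batch-wise) step
    return M
```

Note that without extra constraints the all-zero matrix also satisfies every equality feedback, so in practice a normalization or the learner's preference must break this scale ambiguity; a sparse representation of the feedback vectors, as the rebuttal describes, keeps the per-step cost manageable at scale.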
Beyond Sensor Data: Foundation Models of Behavioral Data from Wearables Improve Health Predictions
Accept (poster)
Summary: This work provides a foundation model for wearable devices built on behaviour data instead of the raw sensor signals. The model was trained on a large-scale wearable dataset totalling over 2.5B hours of wearable data from 162K individuals. The paper performs extensive experiments on the choice of tokeniser, model backbone and hyper-parameter optimisation. The best-performing model was evaluated on 57 tasks ranging from disease detection and health state monitoring (pregnancy) to behaviour monitoring (sleep). The model showed superior performance against a baseline trained on summary statistics of the selected behaviours. Claims And Evidence: - Strong performance of behaviour data for health detection: not clear. Even though the authors have shown their model's performance across a rich set of downstream tasks, it is not clear how difficult the downstream tasks are, given that there is no competitive benchmark; details of the case/non-case distributions were also not provided, which could make the binary classification tasks very hard or very easy. - Integrating behavioural and sensor data: yes; when combining the proposed behaviour model with a PPG model, the authors showed that the integration increased performance on the majority of the downstream tasks - Developing a foundation model for wearables behaviour data with irregular sampling: yes, this is one of the first foundation models for behavioural time series data at this scale. However, the authors have not discussed sharing the model weights or codebase, making it much harder for others to reproduce the work. 
Methods And Evaluation Criteria: This paper explored combinations of three tokenisers and three backbone model architectures: - Two types of tokeniser classes were selected, one producing a dense representation and another in the form of a tuple, allowing for a single input token for each behavioural measurement - The choices of model architectures were well motivated, including standard transformers and a state-space model, i.e. Mamba-2 - The choice of pre-training loss has already been shown to achieve good performance in other data modalities, e.g. PPG A key weakness of the method is in incorporating data modalities that are very sparsely sampled, particularly those that have samples for ≤10% of the time, such as fall count, body mass index and 6-minute walk distance. Even though these metrics are highly relevant for health, treating them like time series requires strong motivation, as the model always just sees some constant input, making it difficult to leverage the temporal dynamics. Your results in Table 10 paint the same story: the linear probing results on the learnt embedding have an $R^2$ of 0 with body mass index and 0.096 with number of times fallen. So at least a portion of the input data with low sampling frequency and low variation could possibly be removed for computational efficiency. The evaluation is the area that requires further justification. - The disease labels for the data used come from self-report, which might under- or over-report different clinical outcomes, introducing differential bias in the ground truth. It would be good to discuss how this can be handled. - The case/non-case distributions are not shown for each disease, so it is challenging to know the difficulty of each task. It would be great if the authors described the case/non-case distributions for each disease and how the non-case subjects are selected. 
- On the ascertainment of the sleep ground truth, the authors did not explain what sort of quality control is done on the sleep labels, for example the removal of outliers, minimum wear time, etc. Without careful quality control, this could lead to high measurement error, which can potentially explain why the PPG embedding has a low $R^2$ of 0.1-0.3 with the sleep statistics. Theoretical Claims: None Experimental Designs Or Analyses: I’ve checked the experimental setup for training and evaluation, which is reasonable for obtaining performance metrics with confidence intervals for comparisons across different models. Supplementary Material: I’ve read most of the supplementary materials, as many of the results are in the supplement. Relation To Broader Scientific Literature: The key contributions of this paper are: 1. It explores the concept of behaviour-level modelling instead of lower-level sensor representation for health inference, which makes it easier to leverage longitudinal wearable time series. Previous foundation models for wearable signals are mainly for low-level sensor representation. 2. It nicely paves the way for integrating behaviour-level information with an additional model like the PPG model, and demonstrates how this multi-modal approach could aid health inference. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - The authors have provided extensive and clear reports on their definitions of the behaviour metrics and downstream task creation Other Comments Or Suggestions: 1. What sort of quality control did you apply to your input data for each modality? 2. Can you provide a model card for your optimal model covering model size, number of layers and training config? Questions For Authors: Will the model weights and codebase be made open source? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive feedback and helpful comments towards improving our work. We focus on responding to the major themes of your comments: **Measuring Difficulty of Downstream Tasks** We agree that it is important to contextualize the difficulty of the tasks. To overcome this, we included baseline statistics of the input data and an existing PPG foundation model as a strong baseline to contextualize the performance of WBM. The performance of these models shows that no task is particularly easy or hard, but the strong performance of WBM relative to these baselines helps show the efficacy of the proposed model. Per your suggestion, we will include the case/non-case distribution for all binary outcome labels in the camera ready version of the paper to improve our evaluation report. However, importantly, prevalence does not immediately define the difficulty of the downstream task. **Using Highly Sparse Input Modalities** This is a key characteristic of an observational study such as AHMS collected under real-world conditions where these variables are less often logged due to their nature and behavior of the participants. It is true that given the unique nature of these variables, it is harder for the model to leverage their temporal dynamics and changes, but our goal here is to model the data as-is but use state-of-the-art architectures such as state space models and Transformers that can leverage any potential temporal dynamics or inter-variable relationships. We experimented with tokenizers such as “Tuple“ tokenizer that does not do any form of imputation (therefore no constant input for sparse variables); we observed degraded performance with the ”tuple“ tokenizer in our hyperparameter search experiment in Appendix Tables 8 & 9. We do find that the learned embeddings retain information for several variables that are natively sampled at a weekly frequency (e.g. 
six minute walk distance and walking steadiness score, see Table 10 in the Appendix). However, as you point out, there are some variables that we do not capture well in our embeddings. This could be due to their low prevalence or the use of the contrastive loss for training, which we will discuss further in the camera ready. As you suggest, we could remove such variables for computational efficiency, and we will add a line explaining this to the camera ready version of the paper. **Quality Control of Inputs and Labels** For input behavioral data, we do not do a per-modality or per-variable quality control and we use the data as is. However, for turning these behavioral data into week-level segments for training WBM, we perform the following cleaning steps: 1) z-score each variable then clip any outliers to [-5,5], 2) drop weeks whose number of variables are in bottom 5% percentile of all weeks, 3) drop weeks that have less than 5 days of data, 4) drop subjects with less than 5 weeks of usable data, or who were enrolled in the study for less than 90 days. For segment-level labels of baseline history and medication, you bring up an important point in that we are working with self-reported labels. In the camera ready version, we will discuss the caveat of parsing the labels from self-reported surveys (although for some of our evaluations we do use more rigorous lab measurements, eg, diabetes). Finally, we clarify the quality control for defining sleep labels. To obtain sleep labels, a participant must wear the watch overnight in order to get sleep metrics. We further limited to weeks where 5/7 days in the week had sleep metrics, meaning the watch was on overnight. This means that the PPG was recorded overnight as well, ensuring a fair comparison between both techniques. We will be sure to clarify all of these points in the camera ready version. 
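For concreteness, the four week-level cleaning steps described above (z-score and clip, drop sparse weeks, drop short weeks, drop short-history subjects) could be sketched as follows. This is an illustrative reconstruction under assumed data layout, not the authors' actual pipeline; the function and column names (`clean_weeks`, `days_with_data`, `subject_id`) are hypothetical.

```python
import pandas as pd

def clean_weeks(df, value_cols):
    """Illustrative sketch of the week-level cleaning steps described
    in the rebuttal; all column names here are hypothetical."""
    out = df.copy()
    # 1) z-score each variable, then clip outliers to [-5, 5]
    for col in value_cols:
        z = (out[col] - out[col].mean()) / out[col].std()
        out[col] = z.clip(-5, 5)
    # 2) drop weeks whose observed-variable count falls in the bottom 5%
    n_vars = out[value_cols].notna().sum(axis=1)
    out = out[n_vars >= n_vars.quantile(0.05)]
    # 3) drop weeks with fewer than 5 days of data
    out = out[out["days_with_data"] >= 5]
    # 4) drop subjects with fewer than 5 usable weeks
    usable = out.groupby("subject_id")["subject_id"].transform("size")
    return out[usable >= 5]
```

The rebuttal additionally drops subjects enrolled for less than 90 days, which would require an enrollment-date column not shown here.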
**Making Model Weights Open Source** Unfortunately the model weights cannot be shared due to the specifics of the informed consent for participants in the study. We will provide all necessary details in the camera ready and will put a note for interested parties to reach out to the authors for more details. However, for completeness, we will include a model card with the exact details of model size, number of layers, and other necessary training parameters in the camera ready.
Summary: This paper develops a foundation model for behavioural data from wearables to improve health predictions. The authors process 2.5 billion hours of wearable data and compare different tokenization strategies and model architectures. They find a Mamba-2 architecture with TST tokenization performs best. The model is tested on 57 health-related tasks including demographic prediction, disease classification and health state detection. Results show that the behavioural foundation model outperforms a statistical baseline on most tasks. The authors also compare WBM to a PPG foundation model, finding WBM performs better on some behavior-driven tasks like sleep prediction, while PPG excels at others. Combining WBM and PPG embeddings yields the best performance across most tasks, indicating complementary information between behavioral data and raw sensor signals. ## Update after rebuttal The authors acknowledged most of my concerns. There were two points I added further clarification on: A) the ways in which the signals are different for PPG and WBM B) the label leakage. I am well aware that these are monumentally difficult to correct for but they nonetheless affect the results interpretation. A demographic classification model trained using pure physiological signals is not the same claim as one using physiological signals + demographic label conditioning, even if the two models perform to identical levels. However, these limitations are not reasons for rejection, and I improved my recommendation from 3 to 4 and do urge the authors to discuss these limitations, as they have done in their most recent response, in the paper. Claims And Evidence: - The main claim is substantiated with good results of the behaviour model. - The claim of the model learning from behaviour might be overstated (see methods). - The smaller claims based on comparison of “sensor data” vs “behaviour data” do not appear to be backed up sufficiently. 
- The paper presents a very optimistic narrative about WBM's performance, but the results reveal that WBM only outperforms the PPG model in 18 out of 47 baseline disease and medication outcomes, and of these, only 4 results are statistically significant. A more accurate framing would acknowledge that behavioral data provides valuable information for specific types of tasks (e.g. sleep and mobility), while low-level sensor data appears more broadly effective across the majority of tasks. - Related to the above, while WBM outperforms the simple baseline on a majority of tasks, the performance improvements are quite modest (median AUROC improvement of only 0.017). This modest gain raises questions about whether the additional complexity of foundation models is justified compared to simpler statistical approaches for many applications, particularly when it comes to interpretability, sensitivity and failure modes. These tradeoffs are not discussed sufficiently. Methods And Evaluation Criteria: - The WBM is pre-trained from 27 variables from the wearable device, an Apple watch, to predict age and biological sex. These variables include estimated active energy (calories burned) and basal energy, which can be problematic. I am fairly certain the apple watch uses the individual’s age and biological sex directly to estimate these values, causing label leakage. This would then mean that the model is trained to predict age and biological sex not only on pure behaviour signal, but based on values that are conditioned on the age and biological sex. This could also be the case for VO2max to a lesser extent, typically calculated using the person’s estimated max heart rate, which is in turn a direct function of the person’s age. Although label leakages are almost impossible to avoid in real-world healthcare research, I urge the authors to check if these can be minimised further or, at least, better acknowledged. - The approach used for combining WBM and PPG is not discussed. 
Theoretical Claims: The theoretical claims about the complementary nature of behaviour and signal data are mostly supported with the combination of the two models. However, this theoretical aspect is not a major claim of the paper. Experimental Designs Or Analyses: - The authors make claims on the signal strength of behaviour vs low-level sensor signals that are based on comparisons of WBM and PPG. However, these approaches differ fundamentally in more than just signal types. The sampling frequency, data processing, and model architectures appear very different. So this seems to be as much a comparison of sampling frequency and architecture as of data signals. It seems a better way to make these claims would have been to ablate the WBM by removing behaviour features and leaving in the “low level” features, i.e. heart rate. - There is possible selection bias in the data representativeness, i.e. limited to Apple watch users. This limitation isn’t sufficiently discussed. Supplementary Material: No separate supplementary material. Appendix includes various implementation details. Relation To Broader Scientific Literature: This is a weak point of the paper as the discussion of the results in broader context is limited. The discussion does not place the findings in the context of other digital health interventions or wearable technologies beyond a narrow set of foundation models. The authors miss opportunities to connect their work to broader healthcare trends, the potential clinical impact of wearable-based health predictions or how these models might integrate with existing healthcare systems. Essential References Not Discussed: - There are not many prior works in this field apart from the Merrill and Althoff (2023) already cited and an earlier SSL wearable paper that also uses behaviour signals: Kolbeinsson et al. "Self-supervision of wearable sensors time-series data for influenza detection." (2021). 
- If the combination of WBM and PPG is ensemble-style, then some references to prior work on ensemble models in health would be appropriate. - It would also be clearer to move the seminal citations for rotary transformers and mamba to directly after the paragraph headers where they are named. Other Strengths And Weaknesses: Strengths: - The systematic approach to architecture selection is thorough and usually well-justified - The dataset size is impressive and allows for robust foundation model development - The diversity of downstream tasks provides a comprehensive evaluation framework - The behaviour model shows particular promise for sleep and mobility predictions Weaknesses: - Limited discussion of computational requirements and model efficiency - Lack of evaluation across different demographic groups to assess fairness - The model's interpretability is not discussed, which is important for healthcare applications - No discussion of how the model might perform on non-Apple devices with different sensors - Limited exploration of more sophisticated fusion techniques when combining WBM and PPG Other Comments Or Suggestions: - On line 216 “1-layer multi-layer perceptron”, does this mean a single-layer-perceptron or an MLP with one hidden layer? The current phrasing is a bit clumsy - The paper switches between “wearables data” and “wearable data”, the former seems more semantically correct but I do not have a preference as long as it is consistent. - L169 “Hourly aggregation ensures consistency across variables” using “ensures” here seems overclaimed, as it is not guaranteed. “Supports” or “promotes” might be more accurate. - L295 The split is 80/20 train/test. Should this be 80/10/10 to match with L183? - L205 TST is not properly defined and citation could be clearer - Caption for table 1 and table 2: WHB → WBM Questions For Authors: - L294 How is the decision to fit to week or participant made? 
- Are some of the features estimated using the individual’s age and biological sex? What implications might that have? - How are WBM and PPG combined? * What is the computational cost of training and inference for the WBM model compared to traditional approaches? * Were any analyses done to model performance across different demographic groups to assess potential biases? * Have there been any tests on the model's robustness to different wearing patterns and compliance levels? * What privacy-preserving techniques could be implemented alongside this approach for real-world deployment? How might they affect performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive feedback and suggestions aimed towards improving our work! **Contextualizing Comparisons between WBM and Baseline/PPG** We appreciate your feedback on tempering our claims. First, we clarify the WBM vs baseline comparison. The subject-level tasks in Figure 3 are intentionally simple, as they aggregate a subject’s full history. Basic aggregate feature statistics and demographics may perform well, explaining the small median improvement of WBM. However, WBM significantly outperforms the baseline in a few key cases (e.g. smoking status and anti-psychotics usage). Its real strength lies in the more difficult time-varying tasks, where it consistently surpasses the baseline in detecting changes in health state on all tasks. Next, we emphasize that WBM and PPG are complementary. WBM excels in some tasks (e.g., sleep duration and infection), while PPG is stronger in others (e.g., diabetes). However, combining both achieves the best subject-level performance in 42/47 tasks (with a majority being significant), and in all but 1 of the segment-level tasks (where it’s within margin of error). Behavior data should complement, not replace, sensor data when building prediction models from wearables. We will edit the language in the camera ready based on your feedback and our response above to better clarify our contributions. **Combining WBM and PPG Representations** We apologize that this was unclear. We will clarify in the camera ready that we combined WBM and PPG embeddings by concatenating the two 256D embedding vectors into one 512D embedding vector. There are many better ways to build multimodal representations using fusion techniques either at the input or representation level that we did not explore. We will add a discussion of these as future work in the camera ready. **Relation to Broader Wearable Community** Thanks for raising this point. 
We will improve the discussion by connecting our work with the broader space of digital health and wearables, and mention the potential clinical impact such wearables-based health predictions might have in the future if safely deployed at-scale. **Label Leakage in Downstream Tasks** This is a subtle but important point, as label leakage is a major challenge in building foundation models, and you are correct that a small number of our input variables (e.g. basal energy) rely on age and sex as inputs. However, we clarify that age and sex prediction are not meant to showcase the value of our model. We view these tasks as sanity checks that our model is able to encode information that we already expect should be partially available. We will make this caveat clearer in the camera ready. We will also emphasize the importance of the other tasks, especially the segment-level tasks as mentioned above. Label leakage should not be a major concern for the 55 other downstream tasks, as none of those labels are used as part of the input variables. **Computational Cost of Training and Inference** The final WBM was the result of 6 epochs of training which took 16 hours of training time on 8 A100 GPUs. The learned model can quickly perform inference, and embeddings can be used easily across many tasks. We will add these details in the camera ready. **Robustness to non-Apple devices and other wearing patterns** You bring up an important point about generalizing to non-Apple devices and other wearing patterns. During training, we opted to remove weeks with low wear time, so we expect performance will degrade when applied to participant weeks with limited wear time. Evaluation on non-Apple devices is not possible, as much of the data can only be collected on Apple devices (particularly behavioral features derived via proprietary algorithms). However, our training details and insights provide a useful framework for others to train models on other wearables.
We will discuss these limitations in the camera ready. **Typos and Writing Suggestions** We appreciate the feedback on typos, writing and citation improvements; we will fix these in the camera ready. We clarify that “1-layer multi-layer perceptron” means there is one hidden layer in addition to the input/output layers. **Miscellaneous Responses** Thank you for bringing up important topics regarding interpretability, selection bias, and evaluation across different demographic groups. Due to space constraints, we point you to responses to R1 and R2 respectively for these topics. We will also clarify in the camera ready version that the choice to fit models weekly vs. at a participant-level was made based on the task. For time-varying tasks where we make predictions at every week, we fit models at a weekly level. For static targets, we fit models at a participant level. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I appreciate their willingness to address my concerns and their commitment to improving the paper. The additional details provide important practical context that strengthen the paper. However, I have two points to reiterate. A) On the comparison between WBM and PPG, this conflates differences in data signals with differences in sampling frequency, data processing and model architectures. This makes it difficult to isolate whether performance differences truly reflect the relative importance of behavioural versus low-level sensor data or simply approach variations. B) On label leakage, I agree that this is not a critical issue and primarily affects, and devalues to a small extent, the sanity checks. However, it does also affect the interpretation of the main tasks. With the foundation model conditioned on demographic labels (age, sex), it will be able to learn these more directly than a model that only has access to behavioural data. 
Although behavioural data is often heavily influenced by demography, the signal will be very different from one provided by direct demographic labels/conditions. This means the model is not trained on behavioural signals, with their natural demographic influences, alone, but also on indirect age and sex metadata. With the extensive changes the authors have promised, I will not let these two points prevent me from raising my recommendation from 3 → 4. However, I do urge the authors to mention these two limitations in the paper to better place the results in context. --- Reply to Comment 1.1.1: Comment: Thank you so much for the thoughtful response, your willingness to engage with us, and for increasing your score! We appreciate that you are helping us produce a much stronger final paper. A few last comments below, and we’ll add a discussion of these limitations to the camera ready. Re A — one point we would emphasize is that a major part of the difference in the data signals for PPG vs WBM is the difference in the native sampling frequency of these modeled quantities. E.g., PPG is generally observed at 64Hz for 60 second intervals, whereas the health/behavior data for WBM has sampling frequencies that vary from every few minutes (e.g. heart rate, step count, active energy burned) to daily or weekly measurements, which we then project onto a fixed hourly grid. This makes it near impossible to disentangle the effect of differences in the underlying quantities being measured vs differences in the sampling frequencies. Another important difference we will mention is that the quantities modeled by WBM cover most periods of time during the week, whereas PPG is only opportunistically captured a handful of times during the day, depending on how often someone wears a watch and is at rest.
We also did use different data processing and design decisions for each data type, and in this work we only used a frozen pre-trained PPG encoder and did not explore the same architectures used for WBM (e.g. Mamba-2). We will mention these points in our discussion. Re B — one point that may help clarify the role that age/sex play in our modeling is to consider, as a thought experiment, what might happen if we had access to gold-standard reference values for some of the health/behavioral quantities that strongly depend on demographics. Take VO2max as an example — it is well known that VO2max declines with age, and tends to be lower for females than for males. The FRIEND study (https://www.mayoclinicproceedings.org/article/S0025-6196(15)00642-4/pdf) provides useful population distributions for VO2max by age/sex subgroups — for instance, the median VO2max for age 20-29 males is 48, whereas for 70-79 females it is 18.3. In fact, the upper 95th percentile for females 70-79 (24.1) is still lower than the lower 5th percentile for 20-29 males (29), so there is near perfect separability for VO2max between older females and younger males. In order to provide as accurate an estimate of VO2max from submaximal exercise data as possible, Apple Watch uses demographics as input to the VO2max algorithm, but even if we were to use gold-standard, invasively collected VO2max values, this strong demographic signal would still exist. In either case, using our wearables-derived and demographics-conditioned estimate of VO2max or the gold-standard value, we would expect learned representations of either data type to be strongly predictive of demographics, although they might have different performances and make different errors. We would also expect to see similar issues for other health/behavioral variables that are estimated using demographics as an explicit input (e.g. basal and active energy). 
To be clear — we agree that this is a super important point, and we’ll add it to our limitations in the discussion! We wanted to point out that there is no simple solution here, as many different health and behavioral quantities that we can collect via wearables will have strong correlations to underlying demographics; the distinction is that only some of the time are demographics explicitly used as inputs to estimate these quantities in the first place.
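As a toy numeric illustration of the near-perfect separability discussed above, the snippet below draws synthetic normal samples whose quantiles roughly match the quoted FRIEND percentile values (the normality assumption and the fitted standard deviations are ours, purely for illustration; this is not study data):

```python
import random

random.seed(0)

# Percentiles quoted from the FRIEND study discussion above:
# males 20-29: 5th pct ~29, median ~48; females 70-79: median ~18.3, 95th pct ~24.1.
# Standard deviations are back-solved from those percentiles (an assumption).
young_males = [random.gauss(48.0, 11.5) for _ in range(10_000)]   # (48 - 29) / 1.645 ≈ 11.5
older_females = [random.gauss(18.3, 3.5) for _ in range(10_000)]  # (24.1 - 18.3) / 1.645 ≈ 3.5

# Fraction of older females whose VO2max exceeds the young-male 5th percentile.
overlap = sum(v > 29.0 for v in older_females) / len(older_females)
print(f"overlap fraction: {overlap:.4f}")
```

Under these assumptions the overlap fraction is well below one percent, matching the intuition that the two subgroups are nearly separable on VO2max alone.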
Summary: This manuscript considers the problem of health condition tracking using pretrained foundation models trained on the Apple health movement dataset. In contrast to past work that used raw sensor signals from PPG and ECG, they leverage higher-level ‘behavioral’ metrics that are extracted from IMU (eg steps), user input (BMI) or intermittent sampling (VO2Max). They survey several architectures, noting special challenges in irregularly sampled data. Following network architecture comparison, they used a dense matrix of features per hour that were passed through bidirectional mamba2 and trained using a contrastive loss with pairs of users as positive samples. They examine demographic classification tasks, inter-subject tasks predicting health states, and intra-subject classification of demographic information like age and biological sex, showing impressive performance that is fairly competitive with PPG on most tasks done using linear probing. They also examine combinations with PPG, and discuss discrepancies. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: Yes, read relevant sections. Relation To Broader Scientific Literature: Appropriate work is cited and related, see below. Essential References Not Discussed: The work is well contextualized. Other Strengths And Weaknesses: **Strengths:** - The manuscript is polished and clear with a full description and documentation of experimental details and methods and clear interpretation of data - There are essential baselines included (including a null baseline and fairly SOTA PPG), multiple interesting tasks considered, and ablations. - The approach provides fairly large advantages over a baseline in many tasks. **Weaknesses** Overall I think the manuscript is a strong accept, but I am offering some directions that would improve its utility to myself and the field.
- Scaling laws of performance with amount of pretraining data would be helpful. - The importance of the 27 features presented is unclear, and they vary quite a bit in their missingness. It would help to delineate which were the most important for the prediction. The R2 from model reconstruction is a start, but not a full interpretability analysis. - Clarifying the statistical significance of results. Comparisons in Figure 3 for instance are presented without error bars, and throughout the differences in models are so small that significance should be conveyed. It is also unclear to me how the bootstrap was calculated specifically. - For the sleep metrics in particular, my understanding is these are trained from an algorithm consuming the same information as behavior/PPG, except perhaps a raw version of the IMU. Can you comment on how including raw IMU would impact these results? This sensor stream is conspicuously absent from all of these papers. Other Comments Or Suggestions: - L172 “Driven by our goal of detecting health states at a temporal resolution of human behavior” unclear what behavior means here. Behavior is overloaded throughout the manuscript. - As the authors point out, contrastive pairs from the same user don’t necessarily make the most sense in this task, especially for intra-subject tests like sleep staging. Are there other tasks that might be useful they can propose? Questions For Authors: Minor - L365 “PPG will not provide the same holistic view of an individual’s week, since it is only captured opportunistically a few times each day” How frequently is this captured? If so, why is the PPG readout of deep sleep so good while normal sleep so bad. - Statistical significance of different comparisons e.g. classifier performance vs PPG. Should be listed per comparison. - Ablations over feature importance. Given the uneven coverage, would be valuable to inform simplest set of features to construct. May obviate need for masking. Code Of Conduct: Affirmed.
Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your positive feedback and useful comments for enhancing our work. We respond to specific suggestions below: **Interpretability of WBM** Interpretability of foundation models is an active area of research that remains extremely important. Unfortunately, it remains non-trivial to understand how input features affect the learned representation in order to ascertain feature importance for any given downstream task. As you suggest, one technique might involve independently perturbing each input sensor to understand its effect on the learned representation. However, understanding the correct way to perturb these irregular data in a meaningful and scalable way remains an important open problem. We will discuss the importance of interpretability and some potential next steps in the camera ready version of the paper. **Sleep Metrics and IMU** When evaluating using sleep metric labels, our goal was to showcase one example where we expect behavior data to be much more predictive than PPG. Sleep metrics are only estimated when a subject wears their watch overnight, ensuring that we have some amount of PPG overnight alongside the other behavior data fields. In general, a passive measurement of PPG is attempted roughly every 2 hours for most subjects, and the measurement is only retained if the subject is sufficiently quiescent ensuring that the PPG data has low noise. The sleep metrics on the Apple Watch are derived only from a continuous stream of IMU (3-axis accelerometer) during a sleep session, and PPG/behavior is not used. Processing such continuous IMU streams involves the use of complex data pipelines that were not available to us; the volume of such data would make it impossible to scale to using most days and subjects from across the study. Therefore, including such IMU data in our modeling was out of scope for our study. 
However, given that sleep labels are derived from IMU, we expect markedly stronger predictions if we include IMU in the input of our models. **PPG prediction of deep sleep & "PPG will not provide the same holistic view of an individual’s week" phrasing** It is generally the case that total sleep duration, sleep efficiency, and deep sleep in particular decrease with age. Since PPG (collected roughly every 2 hours - see above) contains strong age-related signals, we would expect that it should be able to leverage such information to make decent predictions about average sleep metrics for an individual. Note that the baseline model (which explicitly includes demographics) also performs better on deep sleep prediction, suggesting that demographics plays a more important role. We will rephrase this to “PPG does not provide as comprehensive a view of an individual’s week, since it is only measured a few times each day”. We will also add additional clarification and caveats around the sleep analyses. **Improvements on Contrastive Learning Framework** We agree that the contrastive framework could be improved upon. We explored the use of a masked autoencoder approach, but found that this resulted in poor performance (see Appendix A.5.3). We hypothesize that this may be due to the high degree of noise and irregularity in the behavior data, making complete reconstruction of the input an overly challenging task that leads to representations that do not generalize well to new tasks. We will expand on this hypothesis further in the camera ready, as well as discuss other techniques that future work could consider adapting to this type of data to improve upon our framework, such as joint-embedding predictive architectures (JEPA). Even though our contrastive learning approach is not set up to capture intra-subject changes, empirically based on our results it still has some ability to do so.
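For clarity, the same-subject contrastive objective described above is essentially a standard InfoNCE loss with positives drawn from the same participant; a minimal numpy sketch (synthetic embeddings and illustrative shapes, not our training code):

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE loss where row i of z_a and z_b come from the same subject
    (positive pair); all other rows in the batch act as negatives."""
    # L2-normalize embeddings so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Cross-entropy with the diagonal (the same-subject pair) as the target.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
n, d = 8, 256                                  # batch of 8 subjects, 256-D embeddings
z_a = rng.normal(size=(n, d))
z_b = z_a + 0.05 * rng.normal(size=(n, d))     # second view: a perturbed copy
print(f"loss: {info_nce(z_a, z_b):.4f}")
```

When the two views of each subject are aligned, the loss is near zero; with unrelated views it rises toward log(N), which is what the optimizer exploits during pre-training.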
**Clarifying “Human Behavior” Throughout** Thank you for finding this unclear sentence; we will rephrase in the camera ready. We agree that the use of the term “behavior” may be confusing — we will be sure to carefully go through the manuscript and only use behavior in the intended use-case (i.e., when discussing behavior data) and avoid overloading the term. **Details on Statistical Significance and Bootstrap Performance** Thank you for the great suggestion to include bootstrap CIs and p-values in the manuscript; we will add these in the camera-ready version. We will also clarify how we calculate bootstrap confidence intervals: we resample the test set 1,000 times and recompute performance metrics on each resampled test set for each method. The confidence intervals and p-values are then computed empirically on this bootstrapped set of performance metrics.
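For concreteness, the bootstrap procedure we describe can be sketched as follows (a stdlib-only toy with synthetic labels and predictions, not our actual evaluation pipeline):

```python
import random

random.seed(0)

# Synthetic test set: binary labels and model predictions (illustrative only).
labels = [random.randint(0, 1) for _ in range(500)]
preds = [y if random.random() < 0.8 else 1 - y for y in labels]  # ~80% accurate

def accuracy(ys, ps):
    return sum(y == p for y, p in zip(ys, ps)) / len(ys)

# Resample the test set with replacement and recompute the metric each time.
n_boot = 1000
scores = []
for _ in range(n_boot):
    idx = [random.randrange(len(labels)) for _ in range(len(labels))]
    scores.append(accuracy([labels[i] for i in idx], [preds[i] for i in idx]))

# Empirical 95% confidence interval from the bootstrap distribution.
scores.sort()
lo, hi = scores[int(0.025 * n_boot)], scores[int(0.975 * n_boot)]
print(f"accuracy ≈ {accuracy(labels, preds):.3f}, 95% CI ≈ ({lo:.3f}, {hi:.3f})")
```

The same resampled scores can be reused across methods to compute empirical p-values for paired comparisons.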
Summary: This paper proposes WBM, a foundation model trained on wearables dataset to improve health predictions. The paper states that behavioral signals including physical activity and mobility metrics align better with physiologically relevant timescales than raw sensor data. The proposed model is trained on over 2.5 billion hours of data from 162k individuals, and is evaluated across 57 health-related tasks. The results suggest that the proposed model has improved performance on behavior-driven tasks like sleep prediction compared to existing models based on raw sensor data. Claims And Evidence: The claims made in the submission such as behavioral data provide valuable insights into health conditions beyond raw sensor data, and that the proposed model outperforms baselines in various health detection tasks is supported by experimental results. However, the reason that Mamba-2 is chosen to be the backbone for behavioral data modeling is not stated, nor is it rigorously compared against other deep learning architectures. Methods And Evaluation Criteria: The methods are well-described in Sections 3-5, including dataset preprocessing, model architecture, and evaluation metrics. Evaluation is performed on a broad set of tasks. Inclusion of more baseline models would improve the rigor of the evaluation. Theoretical Claims: No theoretical claims in this paper. The parameter details in appendix look solid. Experimental Designs Or Analyses: The experiments provide reasonable baselines and comparisons. The evaluation on 57 downstream tasks is impressive, but some analyses lack deeper breakdowns on task-specific performance. Supplementary Material: I reviewed the appendix. It contains details on dataset, model architecture, and additional results. The pretraining loss details and ablation studies are well-documented. Relation To Broader Scientific Literature: The work contributes to wearable-based health monitoring and references important works in this field. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Major contributions include systematic evaluation across 57 health tasks with large datasets. Clear description of model architecture and experiments. Comparison to SOTA deep learning models beyond Mamba-2 is limited. Other Comments Or Suggestions: The paper is well-written and has clear and meaningful visuals. Questions For Authors: How does WBM compare against state-of-the-art transformer models? Have you analyzed potential biases in the dataset, particularly in terms of demographic representation? How does the model's performance vary across different demographic subgroups? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your positive comments and constructive feedback to help us improve this work! We focus on responding to the major themes of your comments: **Choice of Mamba-2 and comparison to SOTA deep learning models:** This is an important point to clarify. As stated in Sections 4.2 and 4.3 and Appendix Section A.5.1, we compared the Mamba-2 model architecture with two alternative architectures: Self-attention Transformer and Rotary Transformer. These two are among the most commonly used state-of-the-art Transformer architectures in various domains. In addition, for a fair comparison, we did a full grid search over these 3 architectures, and over 3 different tokenizers, as well as sweeping other hyperparameters. Within our hyperparameter search experiment, we observed that Mamba-2 with the TST tokenizer generally outperformed the other alternatives, including the 2 Transformer architectures. Refer to Table 9 in Appendix Section A.5.2 for a full comparison of how often TST+Mamba-2 achieved the best performance compared to the other 8 architecture+tokenizer combinations. **Inclusion of more baseline models would improve the rigor of the evaluation:** We have included comparisons of WBM with baseline architectures/tokenizers, and a competitive PPG baseline. In addition, we have discussed a baseline comparison with respect to other pre-training methods such as masked autoencoding (Narayanswamy 2024) in Appendix A.5.3, as well as a simple baseline of turning behavioral data to its mean and standard deviation statistics. However, we do agree that including more baselines can improve our evaluations, and we will discuss this as a caveat in the camera ready version. We welcome any feedback on specific baselines that you feel would particularly strengthen our work. **Potential demographic biases and performance in demographic subgroups:** We agree that characterizing demographic biases is essential for health applications.
The Apple Heart and Movement Study has its own limitations and biases (Truslow et al. 2024); for example, the dataset is biased towards a younger male population. However, given the large scale of this study compared to other studies, our models are still trained and evaluated on a large cohort including participants from diverse demographics. In the camera ready version of the paper, we will include distributions of demographic statistics from our pre-training data. We will also add to the appendix the performance of our models within demographic subgroups on a representative subset of the full set of tasks considered, focusing on tasks where the combination of WBM+PPG performs best. Specifically, we will show demographic subgroup performance on a representative set of targets: heart failure, active smoker, and calcium-channel blocker baseline tasks, as well as on the pregnancy and infection tasks. We will add some discussion around potential fairness concerns and demographic biases to the camera ready version of the paper, noting that a complete fairness investigation into our final models was out of scope for this work.
FIC-TSC: Learning Time Series Classification with Fisher Information Constraint
Accept (poster)
Summary: The paper introduces a novel framework for time series classification (TSC) that addresses domain shift issues by leveraging Fisher information as a constraint. Main Contributions: Domain Shift Problem in TSC: The paper highlights the challenge of domain shifts in time series classification, where the test set distribution deviates from the training set, leading to reduced classification accuracy. It examines the limitations of existing normalization-based solutions, such as Reversible Instance Normalization (RevIN), which were effective in regression tasks but ineffective in classification. Fisher Information Constraint (FIC-TSC): The proposed method incorporates Fisher Information Constraint (FIC) into training, which guides neural networks toward flatter minima, enhancing generalization under distribution shifts. Direct computation of Fisher Information Matrix (FIM) is computationally expensive, so the authors use a diagonal approximation and gradient re-normalization to efficiently impose the Fisher information constraint. Theoretical Insights: The method is justified mathematically, linking Fisher information to sharpness-aware minimization and showing that networks trained with FIC achieve flatter minima. Theoretical convergence guarantees are maintained while improving generalization. Empirical Validation: The method is evaluated on 30 UEA multivariate and 85 UCR univariate time series classification datasets. Results show superior performance over 14 state-of-the-art models, including TsLaNet, GPT4TS, ROCKET, and TimesNet. The method is computationally efficient, achieving better generalization without requiring an additional backward pass per iteration. Main Findings: FIC-TSC outperforms prior approaches in handling domain shifts and achieves higher classification accuracy across diverse time series datasets. The proposed Fisher Information Constraint reduces sharpness in loss landscapes, leading to better robustness and generalization. 
The approach is efficient, requiring only a single backward pass per iteration, unlike prior sharpness-aware methods such as SAM. Key Takeaways: FIC-TSC introduces a novel constraint-based optimization approach that significantly improves time series classification under domain shifts. It is both theoretically grounded and practically effective, making it a promising advancement in time series analysis. Claims And Evidence: 1. Claim: “Domain shifts in TSC degrade classification performance.” Evidence & Analysis: The authors illustrate distribution discrepancies in the UEA datasets by plotting histograms for selected train/test sets and computing Wasserstein-1 distances. They also show Reversible Instance Normalization (RevIN) helps reduce shift in regression but not in classification, motivating the need for an alternative method. Verdict: These examples, while drawn from a subset of datasets, convincingly demonstrate that train–test divergence is common enough to undermine classification accuracy. 2. Claim: “Constraining Fisher information leads to flatter minima and improved generalization.” Evidence & Analysis: Theoretical Foundation: The authors use the known relationship between the Fisher Information Matrix (FIM) and the Hessian at a local optimum, arguing that bounding Fisher information fosters lower sharpness. Diagonal Approximation & Gradient Renormalization: To keep computation feasible, they approximate the FIM by its diagonal and rescale gradients if the FIM norm exceeds a threshold. They prove the approach retains an O(1/T) convergence rate. Empirical Sharpness Reduction: Post-training, they compare the sharpness of baseline vs. Fisher-constrained models, showing the latter achieves consistently flatter minima. Verdict: Although off-diagonal elements are ignored, the approximation appears effective. Empirical results suggest that the constraint indeed decreases sharpness. 3.
Claim: “FIC-TSC outperforms 14 state-of-the-art methods on 30 UEA and 85 UCR datasets.” Evidence & Analysis: The authors compare their approach (with both universal and dataset-specific hyperparameters) to a broad slate of methods, including ROCKET, InceptionTime, PatchTST, and TimesNet, showing average accuracy gains in the 1–3% range. They incorporate Wilcoxon signed-rank tests to highlight the statistical significance of improvements. Verdict: The large-scale experiments are thorough. Since these are standard splits, additional “real shift” setups (e.g., temporal splits) could further confirm the model’s shift-handling capability. However, the reported gains over many baselines look convincing. 4. Claim: “Our Fisher-based method is more efficient than other sharpness-aware solutions.” Evidence & Analysis: FIC-TSC uses only one backward pass per mini-batch, while methods like SAM require two, roughly doubling computation. The authors compare iteration-level runtimes on selected datasets to demonstrate FIC-TSC’s speed advantage. Verdict: Given standard deep-learning frameworks, a single-pass approach should indeed be faster. This is plausible and supported by iteration-time measurements. Potential Weaknesses: Diagonal Approximation While standard in large-scale second-order methods, ignoring off-diagonal terms could limit accuracy of curvature estimates. An ablation or discussion on how robust performance is under this approximation might enhance confidence. Realistic Domain Shift Splits Most experiments still rely on standard train/test divisions from UCR/UEA. Inducing controlled or temporal shifts in the data could help illustrate exactly how robust FIC-TSC can be in true real-world drift scenarios. Hyperparameter Sensitivity The paper briefly explores 𝜖 settings (the Fisher constraint threshold) but does not deeply map how different ϵ values influence final accuracy or model stability across varied tasks. 
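To make the constraint mechanics concrete, here is a minimal numpy sketch of a diagonal-FIM estimate and a gradient renormalization step in the spirit of the reviewed method (hypothetical shapes and rescaling rule; not the authors' implementation, whose exact renormalization may differ):

```python
import numpy as np

def diagonal_fim(per_example_grads):
    """Diagonal Fisher approximation: mean of squared per-example gradients.

    per_example_grads: (batch, n_params) array of d log p / d theta.
    """
    return np.mean(per_example_grads ** 2, axis=0)

def constrained_step(theta, grad, per_example_grads, lr=0.01, eps=1.0):
    """One SGD update with a Fisher-norm constraint: if the trace of the
    diagonal FIM exceeds the threshold eps, shrink the gradient accordingly.
    Only one backward pass is needed, unlike SAM's two."""
    fim_trace = diagonal_fim(per_example_grads).sum()
    if fim_trace > eps:
        grad = grad * np.sqrt(eps / fim_trace)  # renormalize (assumed rule)
    return theta - lr * grad

rng = np.random.default_rng(0)
theta = rng.normal(size=16)
g_per = rng.normal(size=(32, 16))   # synthetic per-example gradients
g = g_per.mean(axis=0)              # mini-batch gradient
theta_new = constrained_step(theta, g, g_per, eps=1.0)
```

The per-example gradients needed for the diagonal estimate are available from the same backward pass used for the mini-batch gradient, which is where the claimed efficiency over double-backward sharpness methods comes from.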
Final Verdict Overall, the paper’s key contributions—(i) identifying domain shift issues in TSC, (ii) introducing Fisher-based constraints to improve generalization via flatter minima, and (iii) empirically surpassing a range of strong baselines—are supported by both theoretical arguments and extensive benchmark results. While points like diagonal approximation, real-world shift tests, and hyperparameter exploration merit deeper discussion, the evidence is generally strong enough to validate the authors’ central claims. Methods And Evaluation Criteria: 1. Proposed Methods 1.1 Fisher Information Constraint (FIC) Core Idea: The authors introduce a Fisher Information Constraint to encourage flatter minima. They approximate the Fisher Information Matrix (FIM) by its diagonal and rescale gradients if the overall Fisher norm surpasses a threshold 𝜖 Implementation: Unlike double-backward techniques (e.g., SAM), FIC-TSC needs only a single backward pass each iteration. This keeps overhead low while preserving benefits of second-order information. Why It Makes Sense: Time series data often exhibit domain shifts, where distributions differ between training and testing. FIC encourages the model to be less sensitive to small input perturbations, boosting robustness. The diagonal approximation is a practical trade-off, capturing enough curvature to reduce sharpness without excessive computation. 1.2 Addressing Domain Shifts Non-Stationarity: Many real-world TSC tasks—sensor data, medical signals—can shift over time. A sharper minimum can lead to overfitting. By bounding the Fisher norm, the paper aims to reduce such overfitting, achieving better generalization under unseen conditions. 2. Evaluation Criteria 2.1 Datasets UEA (Multivariate) and UCR (Univariate) Cover 30 and 85 datasets respectively, spanning a wide array of TSC problems. They are standard archives in the TSC community. 
While standard splits may not always replicate “hard” domain shifts, they do exhibit real differences between train/test distributions.

2.2 Metrics and Comparisons
Accuracy: The paper primarily uses classification accuracy, supplemented by additional metrics like F1 scores on some datasets.
Statistical Tests: Wilcoxon signed-rank tests assess the significance of accuracy improvements.
Baselines: Comparison with 14 methods, including ROCKET, InceptionTime, TimesNet, and PatchTST. This broad coverage clarifies where FIC-TSC stands versus both older and newer TSC models.

2.3 Suitability
The authors’ approach aligns with typical TSC evaluation practices—large-scale experimentation on UCR/UEA is widely recognized as a benchmark standard. Reporting both mean accuracy and statistical tests is methodologically sound, ensuring that improvements are not mere artifacts of specific datasets or chance.

3. Overall Alignment with the Problem
Focus on Distribution Shifts: TSC often suffers from training–testing mismatches, so a method that systematically curbs parameter sensitivity is well-justified.
Efficiency: FIC-TSC’s single backward pass makes it more practical than other sharpness-aware methods, especially for larger networks or frequent online training updates.
Comprehensive Benchmarking: Evaluating on 115 total datasets (30 UEA + 85 UCR) ensures results are not tied to one domain. The authors also highlight relevant statistics (precision, recall, F1) and run significance tests, reinforcing the robustness of their comparisons.

Conclusion: FIC-TSC directly targets a core challenge in time series classification: the impact of domain shifts on model performance. By constraining the Fisher Information via a diagonal approximation, the method encourages learning a flatter solution that is less sensitive to distributional changes.
The authors’ evaluation strategy—using well-known UCR/UEA benchmarks, reporting standard metrics, and conducting statistical significance tests—effectively demonstrates the method’s advantages over existing TSC models. Hence, the proposed methods and evaluation align well with the problem, showing both conceptual appropriateness (mitigating domain shifts) and empirical thoroughness (large-scale comparisons on standard archives).

Theoretical Claims:

1. Overview of Theoretical Claims
Equivalence of Fisher Information and Hessian: The paper states that the Fisher Information Matrix (FIM) is asymptotically equivalent to the Hessian of the negative log-likelihood at a local optimum. This is a well-established result in information geometry, and the authors’ statement aligns with known derivations (e.g., under regularity conditions, the expected Hessian and FIM coincide).
Sharpness Reduction via Fisher Constraint: They link Fisher-based constraints to reduced sharpness, arguing that smaller FIM norms imply flatter minima. The proof involves approximating sharpness around a local minimum using a Taylor expansion and relating the Hessian to the FIM. The steps appear consistent with prior sharpness-aware minimization work.
Convergence Rate: The paper claims that imposing the Fisher constraint preserves an O(1/T) convergence rate under standard smoothness assumptions. The authors outline a gradient-based convergence analysis, treating the constraint as a re-normalization step. While not exhaustive, the argument follows typical first-order proof templates and does not introduce obvious contradictions.

2. Checked Details and Issues
FIM–Hessian Equivalence: The authors’ derivation is standard, relying on well-known results in maximum likelihood theory. No major oversights were found.
Flatness and Taylor Expansion: The paper’s link between a smaller diagonal FIM and flatter minima uses the logic that the Hessian’s diagonal dominates local curvature in a diagonal approximation.
Though it omits cross-terms, the rationale is mathematically sound for large-scale deep networks that rely on diagonal approximations. One potential limitation is that ignoring off-diagonal terms may underestimate curvature in certain directions, but this is disclosed as a simplifying assumption rather than a full second-order analysis.
Convergence Proof: The authors provide a high-level sketch showing that re-normalizing gradients when the FIM norm exceeds a threshold does not break standard Lipschitz-based convergence arguments. A full, line-by-line formal proof with all constants and step sizes is not fully detailed, but the outline is in line with existing gradient-descent proofs.

3. Conclusion
Overall, the theoretical claims appear largely correct and adhere to established principles in optimization and information geometry:
- FIM–Hessian Relationship: Properly stated and well-known in the literature.
- Sharpness Reduction: Reasonably extended from the FIM–Hessian link and standard Taylor expansions.
- Convergence Rate: Based on recognized gradient-descent proofs, with a plausible argument that enforcing an upper bound on the Fisher norm does not slow the asymptotic O(1/T) rate.
While the diagonal approximation and partial discussion of cross-terms mean the proofs do not capture every nuance of a full second-order method, they are consistent with standard practices and do not exhibit fundamental errors.

Experimental Designs Or Analyses:

1. Scope of Datasets and Baselines
UEA (30 datasets) and UCR (85 datasets): These are highly regarded open-source repositories covering both univariate and multivariate TSC, offering a broad distribution of domains (e.g., sensor signals, ECG data, image outlines).
Validity: Using these archives is widely accepted for benchmarking TSC models, thus the selection is appropriate and representative.
Baseline Comparisons: The paper benchmarks against 14 diverse methods, including both classic (e.g., ROCKET, InceptionTime) and modern (e.g., TimesNet, PatchTST) approaches.
Validity: This large set of baselines, covering a variety of algorithmic designs, strengthens the credibility of the results.
Statistical Tests: They employ Wilcoxon signed-rank tests to compare model accuracies across multiple datasets.
Validity: The Wilcoxon test is a standard, non-parametric choice in multi-dataset settings. It bolsters confidence that reported gains are not merely coincidental or dataset-specific.

2. Experimental Protocols
Train/Test Splits: They follow the standard UCR/UEA data partitions rather than generating alternative splits.
Potential Issue: These standard splits do not always strictly reflect real-world distribution shifts (e.g., chronological shifts). However, the authors do point out that some train–test differences already exist (shown via histograms/Wasserstein distances). More explicit or controlled shift scenarios might have provided stronger evidence of domain-robustness.
Metrics: Accuracy is the primary metric, supplemented by F1, precision, and recall for selected experiments.
Validity: Accuracy is typical for classification; adding more metrics helps address class imbalance or multi-class comparisons. This is consistent with established TSC practices.
Hyperparameter Selection: The authors discuss two strategies: a “universal” one (same hyperparameters for all datasets) and a “full” one (dataset-wise tuning).
Validity: Showing both strategies clarifies how stable the method is when hyperparameters are not exhaustively tuned, and how much improvement is possible with targeted tuning. This split is a fair approach and demonstrates practical applicability.
Computational Efficiency Reporting: The paper compares runtime per iteration with Sharpness-Aware Minimization (SAM), illustrating that the proposed Fisher-based approach is lighter due to its single-pass design.
Validity: Presenting runtime differences is important for real-world feasibility. They show consistent speed gains over SAM, which seems credible given SAM’s two backward passes.

3. Analysis of Results
Average Accuracies: The authors list comprehensive tables of accuracy scores for all 115 datasets, often showing 1–3% gains over competitive methods. They use statistical significance tests (Wilcoxon) to confirm whether these gains are systematic.
Validity: Reporting means plus significance metrics is standard in large-scale TSC evaluations. The improvement margins appear reasonably consistent across many datasets.
Sharpness Measures: They measure and visualize sharpness or curvature in the parameter space to show how the proposed method yields flatter minima.
Validity: These additional visual/quantitative analyses back up the theoretical claims. While each sharpness metric is an approximation, it does support their notion that the method reduces sensitivity to domain shifts.
Ablation Studies: They vary ϵ (the Fisher norm threshold) to see how it affects performance.
Potential Issue: Some readers might want deeper exploration of the hyperparameter’s sensitivity across more datasets. The paper gives partial insights but could further detail the trade-offs between different ϵ values in a more systematic manner.

4. Overall Soundness
Positives:
- Large benchmark coverage (115 datasets), multiple baselines, and recognized statistical tests.
- Clear demonstration of both universal and full hyperparameter settings, indicating robustness.
- Reported run-time comparisons for efficiency claims.
Minor Caveats:
- Standard train/test splits in UCR/UEA do not precisely mirror real-world domain shifts, although the authors do show partial evidence of distribution differences.
- The hyperparameter search could be more comprehensively documented to fully assure reproducibility, though the current studies do give enough for a fair comparison.
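The Wilcoxon signed-rank comparison this review repeatedly refers to is easy to illustrate on paired per-dataset accuracies. A minimal sketch — the accuracy values are made up, and for brevity only the W statistic is computed, assuming no zero or tied differences (a real analysis would add tie handling and a p-value):

```python
import numpy as np

def wilcoxon_w(acc_a, acc_b):
    """W statistic of the paired Wilcoxon signed-rank test: rank the absolute
    accuracy differences, then take the smaller of the positive-rank and
    negative-rank sums. Simplification: assumes no zero or tied differences."""
    d = np.asarray(acc_a, dtype=float) - np.asarray(acc_b, dtype=float)
    d = d[d != 0]
    ranks = np.argsort(np.argsort(np.abs(d))) + 1  # ranks 1..n of |d|
    w_pos = ranks[d > 0].sum()
    w_neg = ranks[d < 0].sum()
    return min(w_pos, w_neg)

# Hypothetical per-dataset accuracies for two methods on five datasets.
method_a = [0.91, 0.78, 0.83, 0.94, 0.85]
method_b = [0.90, 0.80, 0.80, 0.90, 0.80]
w = wilcoxon_w(method_a, method_b)
```

In practice one would call `scipy.stats.wilcoxon`, which also returns the p-value used for the significance claims in the paper; the sketch only makes the underlying statistic concrete.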
Conclusion: The paper’s experimental designs and analyses generally adhere to standard TSC practices and provide robust support for the proposed method. Although more targeted shift experiments or extended hyperparameter sensitivity tests could strengthen the domain-robustness argument, the large-scale evaluations, statistical comparisons, and runtime analyses all appear fundamentally sound and valid.

Supplementary Material: Yes.
Extended Theoretical Proofs: The supplementary material included a more detailed derivation of how the Fisher Information Matrix relates to the Hessian of the loss, confirming the main text’s claims. The additional steps—particularly around the local equivalence of FIM and Hessian—were consistent with known results in information geometry.
Algorithmic Details: The paper provided an algorithmic summary of the proposed Fisher Information Constraint (FIC) optimization procedure, clarifying the diagonal approximation and gradient re-normalization steps. This supplemented the main text by showing pseudo-code that helps reproduce the method.
Extended Experiments and Ablations: Some additional ablation studies on the threshold ϵ and batch-size variations were included. These experiments corroborated the main paper’s findings: smaller ϵ encourages flatter minima but can slightly slow early-stage optimization, while larger ϵ yields quicker training but less sharpness reduction.
Overall, the supplementary material offered further technical and empirical details. I found it valuable for confirming the completeness of the theoretical arguments, clarifying the FIC algorithm’s implementation, and illustrating the sensitivity of hyperparameters across a broader range of settings.

Relation To Broader Scientific Literature:

1. Fisher Information and Sharpness-Aware Optimization
Prior Work: Sharpness-aware methods like SAM (Sharpness-Aware Minimization) introduce a second backward pass to encourage flatter minima.
In parallel, Fisher Information has long been connected to curvature in information geometry, relating to the Hessian of log-likelihoods.
Paper’s Advancement: This paper combines these strands by using Fisher information as a direct constraint, making the optimization process single-pass rather than double. It thus situates itself alongside second-order and sharpness-focused approaches but does so with a diagonal FIM approximation, keeping computational overhead minimal.

2. Time Series Classification under Domain Shifts
Prior Work: The TSC literature often centers on specialized architectures (InceptionTime, ROCKET, various Transformers) or similarity-based methods (DTW). However, handling non-stationarity or train–test distribution shifts in TSC has received comparatively less attention beyond methods like Reversible Instance Normalization (RevIN), which primarily aids forecasting/regression tasks.
Paper’s Advancement: By directly addressing distribution shifts within TSC (rather than purely focusing on new architectures), the paper provides a relatively novel perspective: it treats domain shifts as a fundamental optimization/robustness issue, not merely a data-augmentation or normalization challenge. This complements prior TSC work on normalization techniques by proposing that regularizing curvature (via Fisher Information) can outperform, or at least fill the gap left by, normalization-based solutions in classification settings.

3. Empirical Benchmarks and Method Comparisons
Prior Work: Many TSC papers rely on the UCR/UEA archives for evaluation, but most emphasize raw accuracy improvements with domain-specific innovations (CNN designs, ensemble methods, or transform-based approaches). Ensemble methods like HC2 or shapelet-based frameworks have historically been robust but often with higher computational cost.
Paper’s Advancement: The authors show that a general optimization-level technique—FIC—can match or surpass specialized TSC architectures on the same standard benchmarks. This demonstrates that certain general-purpose enhancements (aimed at stability or curvature control) can be as crucial as sophisticated network designs in pushing performance bounds.

4. Connection to Information Geometry and Generalization
Prior Work: Fisher Information has been used to study generalization, but mostly in contexts such as Bayesian inference, elastic weight consolidation, or advanced network compression. These approaches often require extra memory or partial second-order updates.
Paper’s Advancement: By framing Fisher Information as a constraint on gradient norms (rather than storing or inverting any matrix blocks), the paper contributes a simpler, more scalable variant of these second-order ideas. It thereby extends the principle that controlling curvature (through the FIM) can guard against overfitting, especially in domains—like TSC—where distribution shifts are common.

5. Overall Positioning
Bringing Second-Order Insights to TSC: The paper merges second-order optimization insights with TSC challenges, emphasizing domain-shift resilience rather than purely augmenting classification architectures.
Bridging Normalization Gaps: Techniques like batch or instance normalization have been central in other time series tasks (especially forecasting). The paper positions itself as a complementary or alternative approach for classification scenarios where direct normalization sometimes obscures class differences.
Potential for Future Extensions: This approach could be extended or integrated with other robust classification methods (e.g., adversarial training, domain adaptation), indicating synergy with broader machine learning methods beyond TSC.
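For contrast with the constraint-on-gradient-norms framing, the elastic-weight-consolidation line of work mentioned in this section uses a stored diagonal Fisher as a quadratic penalty anchoring parameters to a previous optimum. A minimal sketch of that standard EWC penalty (all arrays and the λ value are illustrative, not from either paper):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher_diag, lam):
    """EWC-style regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.
    fisher_diag is the diagonal Fisher estimated at the old optimum theta_star;
    parameters with large Fisher values are anchored more strongly."""
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

theta_star = np.zeros(4)                    # parameters after a previous task
fisher_diag = np.array([4.0, 1.0, 0.0, 0.25])
theta = np.array([0.5, 0.5, 0.5, 0.5])      # candidate parameters for a new task
pen = ewc_penalty(theta, theta_star, fisher_diag, lam=2.0)
```

The design contrast is visible here: EWC must store `theta_star` and `fisher_diag` across tasks, whereas a constraint on the current gradient (as FIC is described) keeps no such state.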
Conclusion: In summary, the paper’s key contributions—a Fisher-constraint–based approach for flatter minima, tailored specifically to TSC’s domain shift challenge—fit well into existing second-order/sharpness-aware frameworks while tackling a recognized but less-explored problem in the TSC literature. It bridges general-purpose optimization insights (FIM-based regularization) with domain-specific concerns (non-stationarity in time series), making it a noteworthy addition to both communities.

Essential References Not Discussed:

1. Prior Work on Fisher Information / Second-Order Methods
Kronecker-Factored Approximate Curvature (K-FAC)
Reference: Martens & Grosse, ICML 2015, “Optimizing Neural Networks with Kronecker-Factored Approximate Curvature.”
Relevance: Demonstrates a practical way to approximate second-order information using Kronecker-factored matrices, thus improving training efficiency without resorting strictly to diagonal approximations. While K-FAC is mainly used for faster convergence, it also relates directly to the Fisher Information Matrix. The current paper’s diagonal-Fisher approach could be contrasted with or informed by this more elaborate approximation.
Elastic Weight Consolidation (EWC)
Reference: Kirkpatrick et al., PNAS 2017, “Overcoming catastrophic forgetting in neural networks.”
Relevance: Uses Fisher Information to preserve previously learned knowledge when new tasks arrive. Even though EWC focuses on continual learning, its reliance on Fisher-based constraints parallels this paper’s method for controlling sharpness. Citing EWC would acknowledge foundational uses of Fisher Information in deep learning optimization and illustrate that the notion of stabilizing parameter updates via FIM is an active line of research.

2. Domain Adaptation / Distribution Shift in Time Series
Deep Domain Adaptation for Time Series
Example Reference: Purushotham et al., KDD 2017, “

Other Strengths And Weaknesses:

Strengths
Novel Combination of Existing Ideas: The paper blends known second-order optimization insights (i.e., Fisher Information, Hessian relationships) with a specific target of distribution shifts in time series classification. While Fisher-based methods are not new, applying them as a one-pass constraint to flatten minima for TSC is an original twist.
Significance and Broad Applicability: Domain shift problems are pervasive in real-world time series tasks (e.g., evolving sensor conditions, non-stationary signals). Demonstrating a general solution that can integrate with both convolution- and transformer-based TSC models enhances the paper’s practical value.
Clarity in Writing and Structure: The paper is generally well-structured. The motivation (domain shift in TSC), main idea (constrain the Fisher norm), and experimental evaluation (comprehensive benchmarking on UCR/UEA) are laid out in a logical progression. Key points—like the contrast to Reversible Instance Normalization—are relatively clear and straightforward to follow.
Extensive Experiments and Statistical Significance: The authors present results on 115 datasets, with multiple baselines, and use Wilcoxon signed-rank tests. This thorough coverage supports the claim that the method is widely applicable and not narrowly tuned to a handful of problems.

Weaknesses
Limited Exploration of Realistic Shift Scenarios: Although the paper shows distribution differences in standard train/test splits, it does not deeply investigate scenarios like chronologically separated training and testing. A controlled shift experiment (e.g., training on earlier time frames, testing on later frames) could further validate real-world domain-robustness.
Diagonal Approximation Restricts Full Second-Order Information: While efficient, ignoring off-diagonal terms may limit capturing interactions among parameters. An ablation comparing diagonal vs. block-diagonal approximations (or referencing Kronecker-factored approaches) would strengthen the paper’s second-order argument.
Hyperparameter Sensitivity Analyses: The paper briefly discusses setting the threshold ϵ but does not provide a thorough grid-based sensitivity exploration across multiple datasets. More systematic experimentation could clarify how much ϵ-tuning is required for robust performance.
Somewhat Limited Theoretical Details: While the authors offer a high-level sketch of convergence arguments, more detailed proofs or bridging steps could help. In particular, it would be helpful to see how quickly the algorithm converges in practice under typical TSC conditions (e.g., smaller sample sizes, higher-dimensional signals).

Overall Assessment
Originality: The idea of enforcing Fisher constraints in a single-pass framework, specifically tailored to time series classification, stands out for handling domain shifts—an underserved area in TSC compared to the often-explored architecture innovations.
Significance: Given that domain shifts are a prevalent real-world challenge, the approach has practical and theoretical importance.
Clarity: The paper is mostly coherent and well-motivated, though a deeper discussion of certain hyperparameters or more elaborate shift scenarios might enhance understanding.
Despite some noted weaknesses, the paper’s strengths—an innovative re-interpretation of Fisher-based constraints for TSC, extensive comparative experiments, and strong performance—indicate a valuable contribution to time series research.
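The grid-based ϵ sensitivity study requested above could be prototyped cheaply before any full training sweep, for instance by logging Fisher norms during one run and checking, for each candidate threshold, how often the constraint would activate. A small synthetic sketch (the log-normal norms and the grid values are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic per-step Fisher norms, as might be logged during one training run.
fisher_norms = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

# For each candidate threshold, record how often the constraint would fire.
grid = [0.1, 1.0, 10.0, 100.0]
activation_rate = {eps: float((fisher_norms > eps).mean()) for eps in grid}
```

By construction the activation rate decreases monotonically in ϵ, so a plot of this curve immediately shows the regime where ϵ switches the method from "almost always constrained" to "effectively unconstrained" — the region a per-dataset grid search would need to resolve.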
Other Comments Or Suggestions: Below are additional suggestions and minor observations:

Hyperparameter Tuning Explanation: It would be helpful to have a more detailed discussion in the text of how you searched for or selected the threshold ϵ for the Fisher Information Constraint. Although the paper mentions a small grid search, providing explicit rationale or heuristics would benefit readers trying to replicate or extend the approach.
Results Section Organization: You might consider splitting the main performance tables into (a) univariate (UCR) and (b) multivariate (UEA) subtables for clarity. The current combined presentation is still understandable, but separating them could help readers compare methods more intuitively by domain type.
Highlighting Real-World Case Studies: If any of the UCR/UEA datasets map closely to real industrial or medical shift scenarios, explicitly mentioning them might further strengthen the practical motivation. A short example could illustrate why flatter minima matter in that setting.

Questions For Authors: Below are several questions that address points where additional clarification or detail could potentially change the evaluation of the paper:

Explicit Domain Shift Scenario
Question: Have you conducted any experiments using a chronologically split setup or another explicit shift scenario (e.g., training on earlier data, testing on later data) to confirm the method’s robustness beyond standard UCR/UEA splits?
Why It Matters: If FIC-TSC demonstrates notable improvements under controlled real-world shifts, that would further validate the claims about domain-shift resilience. Conversely, if such tests haven’t been done, it’s possible that the standard splits do not fully capture how well the method handles non-stationary data.

Hyperparameter Sensitivity
Question: Could you provide more systematic results on the impact of different ϵ values across multiple datasets, and explain how a practitioner might choose ϵ for new tasks?
Why It Matters: The threshold ϵ appears central to the Fisher constraint’s effectiveness. Detailed guidance, or an ablation over a broader range of datasets, would clarify how sensitive the method is to this parameter and whether it needs per-dataset tuning.

Diagonal Approximation Trade-Off
Question: Have you tested any partial or block-based FIM approximations to see whether ignoring off-diagonal terms significantly affects performance or memory demands?
Why It Matters: This would reveal whether the diagonal approximation sufficiently captures curvature or whether a more nuanced approximation could yield further improvements (albeit at higher computational cost).

Comparison to Domain Adaptation Methods
Question: How does FIC-TSC compare with existing domain adaptation or transfer learning approaches for time series, especially methods that explicitly align distributions (e.g., adversarial alignment techniques)?
Why It Matters: Although your focus is on robust optimization, the domain-adaptation community also tackles shifts in time series. Clarifying these potential synergies or differences may strengthen the positioning of FIC-TSC in the broader literature.

Runtime and Memory Scaling
Question: Beyond single-batch iteration timing, have you benchmarked FIC-TSC’s training speed and memory use on large-scale datasets (e.g., tens of thousands of training samples) or with very deep networks?
Why It Matters: Demonstrating consistent scaling properties would solidify the paper’s claim that FIC-TSC is more efficient than other sharpness-aware approaches (e.g., SAM) in practical, large-scale scenarios.

Ethical Review Concerns: Based on the content and scope of this paper—an algorithmic and methodological contribution focused on time series classification—I see no obvious ethical concerns regarding data misuse, participant harm, or similar issues.
The datasets involved (UEA/UCR) are publicly available and widely used in research, and the paper does not suggest any problematic data collection or privacy violations. Thus, there is no apparent need to flag the paper for ethics review.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's effort in carefully reviewing our paper and giving very constructive suggestions. We also thank the reviewer for recognizing our novelty, significance, and theoretical and empirical analysis. --- ### **Explicit Domain Shift Scenario.** We consider the following two scenarios. - **Healthcare dataset**: Splitting data by patient introduces explicit domain shift due to physiological variability. We use four popular public datasets (anonymized in accordance with ethical standards): TDBrain, ADFTD, PTB-XL, and SleepEDF. - **Online Handwriting Recognition (OnHW-Chars)**: The training set contains characters from **right-handed** writers, while the test set features **left-handed** writers, who often differ in stroke direction, pressure, slant, and orientation. Additionally, the two datasets were released at different times, introducing potential temporal shifts. **Our method outperforms the baseline with accuracy gains of 1.6%–6.8% and F1 improvements of 0.7%–4.3%.** **View full results at https://github.com/AnonymousUserss/ICML2025-4119-Response.** |Dataset|Method|Acc.| |-|-|-| |TDBrain|Baseline|93.0| ||**+FIC**|**96.2**| |ADFTD|Baseline|46.0| ||**+FIC**|**52.8**| |PTB-XL|Baseline|73.9| ||**+FIC**|**75.1**| |SleepEDF|Baseline|85.1| ||**+FIC**|**86.7**| |OnHW-Char (R→L)|Baseline|44.1| ||**+FIC**|**46.8**| [1] Wang, Yihe, et al. "How to evaluate your medical time series classification?.". --- ### **Systematic Results on $\epsilon$.** Please refer to our response to **Reviewer kJBN Q2**. --- ### **Diagonal Approximation Trade-Off Question.** Yes, we did. Computing the full Fisher Information Matrix is theoretically and practically expensive, substantially growing the time complexity from $O(n)$ to $O(n^2)$. As noted in Appendix G.4, our empirical analysis confirms that diagonal approximation is essential for practical use. 
**We tested the computation of the full FIM using a mini-batch size of 64 in a single iteration of InceptionTime (about 0.6M parameters) on an NVIDIA A100 40GB GPU. This operation took over 15 minutes per iteration, rendering full FIM computation infeasible for real-world training.** In contrast, with the diagonal approximation, the same computation can be completed in approximately 0.1 seconds (see Table 5), demonstrating a significant improvement in efficiency. --- ### **Domain Adaptation/Transfer Learning.** Domain adaptation and transfer learning address distribution shifts by enabling models trained on a source domain to generalize to a related target domain. Common strategies include aligning feature distributions by minimizing statistical distances [2] or using adversarial training to make features indistinguishable across domains [3,4]. In contrast, our method is orthogonal to these approaches. It does not require access to target domain data or labels during training, making it suitable when the target distribution is unknown or unavailable. **Importantly, domain adaptation/transfer learning could be applied as a post-training or downstream enhancement once target domain data becomes available.** In this sense, our method and these techniques can be complementary. [2] HoMM: Higher-order moment matching for unsupervised domain adaptation AAAI 2020 [3] Purushotham, Sanjay, et al. "Variational recurrent adversarial deep domain adaptation." ICLR 2017. [4] Jin, Xiaoyong, et al. "Domain adaptation for time series forecasting via attention sharing." International Conference on Machine Learning. PMLR, 2022. --- ### **Runtime and Memory Scaling Question.** We scale PatchTST by setting different numbers of blocks. The test is conducted on InsectWingbeat (30k training samples). 
**Our method reduces memory usage by 5–8% and cuts runtime by more than 50%.** |Model|#Params|Method|Memory(MB)|Time/Epoch(s)| |:-:|:-:|:-:|:-:|:-:| |PatchTST-1|4.5M|Ours|**706**|**1.8**| |||SAM|768|4.3| |PatchTST-5|17.1M|Ours|**1022**|**5.2**| |||SAM|1106|12.3| |PatchTST-20|64.4M|Ours|**2254**|**18.4**| |||SAM|2370|43.3| --- ### **Essential References to Be Included.** EWC preserves prior knowledge by penalizing changes to important weights, using a Gaussian posterior centered at previous weights with precision from the observed Fisher information (Laplace approximation). Notably, **EWC uses a diagonal approximation, aligning with and supporting the efficiency goals of our work.** K-FAC addresses the high computational cost of the FIM by approximating large blocks of it, corresponding to entire layers, as the Kronecker product of two much smaller matrices. We consider this a promising direction for future work to achieve more accurate and efficient FIM approximations. [5] Overcoming catastrophic forgetting in neural networks [6] Optimizing Neural Networks with Kronecker-Factored Approximate Curvature --- **We will include the above discussion in our final version.**
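The rebuttal's O(n) vs. O(n²) argument about the Fisher Information Matrix is easy to make concrete. A small sketch with made-up per-sample gradients (this illustrates the empirical-Fisher bookkeeping only, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 128, 50
g = rng.normal(size=(n_samples, n_params))   # per-sample gradients (illustrative)

# Full empirical FIM: average outer product, O(n_params^2) storage.
full_fim = (g[:, :, None] * g[:, None, :]).mean(axis=0)

# Diagonal approximation: average squared gradient, O(n_params) storage.
diag_fim = (g ** 2).mean(axis=0)

# The diagonal approximation keeps exactly the diagonal of the full matrix,
# while storage shrinks from n_params**2 entries to n_params entries.
```

For the 0.6M-parameter InceptionTime model cited in the rebuttal, the full matrix would hold roughly 3.6 × 10^11 entries, which is consistent with the reported infeasibility of computing it per iteration.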
Summary: FIC-TSC introduces a novel training framework for time series classification that enforces a Fisher information constraint to guide the optimizer toward flatter minima, aiming to improve robustness against domain shift. The method leverages two key approximations—a diagonalized Fisher information matrix and a gradient normalization strategy that requires only one backward pass—to achieve computational efficiency. Experiments on standard UEA and UCR datasets demonstrate that FIC-TSC outperforms several state-of-the-art methods in terms of classification accuracy and runtime, while also reducing the sharpness of the loss landscape.

## Update After Rebuttal
I raised my score as a result of a discussion with the authors (see below).

Claims And Evidence:

### Unjustified motivation.
The primary motivation of the paper is the assumption that "a flat minimum is less sensitive to small perturbations of parameters, and hence, is more robust to domain shifts" (p. 2), as conceptually illustrated in Fig. 1. However, this assumption is not supported by rigorous theoretical or experimental evidence. The authors do not cite prior work that empirically or theoretically validates this assumption. Moreover, while Fig. 9 compares the loss landscapes of the baseline and the proposed method, both appear fairly flat, and there is no clear demonstration of how domain shifts would transform these landscapes.

### Questionable assumption.
The assumption above says that if domain shift occurs, the loss landscape plotted in a 2-D plane, where the x- and y-axes are neural network weight parameters and loss value respectively, "slides" without changing its shape (Fig. 1). However, it is unclear why a distribution shift in the *data domain* leads to a slide in the *weight domain*. As I mentioned above, this nontrivial hypothesis is verified neither theoretically nor experimentally.

### Sharpness in data domain is not considered.
The paper's analysis and visualizations (e.g., Fig. 1) are confined to the loss landscape in the weight parameter space. It implicitly assumes that under a domain shift, the loss landscape "slides" without changing its shape—a nontrivial claim that connects a shift in the data distribution to a simple translation in the weight domain. This assumption is neither theoretically justified nor empirically verified. Moreover, since domain shifts occur in the data domain, for the hypothesis that flat minima yield better cross-domain generalization to hold, the authors need to demonstrate that: (i) existing approaches produce sharp minima when visualized in the data domain, (ii) FIC-TSC results in flat minima in the data domain, and (iii) FIC-TSC outperforms baselines in cross-domain experiments. Without such evidence, the connection between parameter-space flatness and robustness to domain shifts remains unproven.

### Questionable link between flatness and generalization.
The relationship between flat minima and generalization is still a matter of debate. Works such as Dinh et al. (2017) ("Sharp Minima Can Generalize For Deep Nets") have shown that conventional flatness or sharpness measures are not invariant under reparameterizations—meaning that a flat minimum can be transformed into an arbitrarily sharp one without changing the network's function. Similarly, Petzka et al. (2021) ("Relative Flatness and Generalization") argue that generalization is more closely tied to feature robustness than to absolute flatness in parameter space. The failure to discuss these critical findings raises concerns about the validity of the paper's central premise.

### Concern on reproducibility.
The experimental results are reported as scalar values without error bars or confidence intervals, making it challenging to evaluate the stability and robustness of the proposed approach. This lack of statistical rigor raises doubts about the significance of the observed performance gains.
Furthermore, without a public release of the code, it is even more difficult for reviewers and future researchers to assess the reproducibility and practical limitations of the method. ### Technical simplifications are not adequately evaluated. FIC-TSC introduces two technical simplifications for computational efficiency: (1) the diagonal approximation of the Fisher information matrix, and (2) enforcing the Fisher constraint using a single backward pass rather than a costly double back-propagation. While these tricks reduce computational load, the paper does not provide a rigorous ablation study comparing FIC-TSC with a variant that uses the full Fisher matrix and/or double back-propagation—even on simple artificial datasets or small networks. Without such controlled experiments, it is difficult to assess whether these approximations negatively impact the method’s robustness or generalization performance. Methods And Evaluation Criteria: Please see the above Claims And Evidence section. Theoretical Claims: I have reviewed the high-level theoretical claims but there remains a possibility that oversights exist. Experimental Designs Or Analyses: Please see the above Claims And Evidence section. Supplementary Material: The authors do not provide supplementary material. Relation To Broader Scientific Literature: The paper builds on a long-standing debate in the literature regarding the relationship between loss landscape flatness and generalization. Early works (e.g., Hochreiter & Schmidhuber, 1997; Keskar et al., 2017) linked flat minima to improved generalization, but subsequent studies (e.g., Dinh et al., 2017; Petzka et al., 2021) have shown that conventional flatness measures can be manipulated through reparameterizations, challenging their direct connection to generalization. 
FIC-TSC contributes by proposing a computationally efficient Fisher information constraint to enforce flatness, aiming to enhance robustness to domain shifts in time series classification, and thereby adds to the ongoing discussion by attempting to operationalize flatness in a practical setting. Essential References Not Discussed: Please see the above Claims And Evidence section. Other Strengths And Weaknesses: FIC-TSC's main strength lies in its computational efficiency—it leverages a diagonal Fisher information approximation and a single backward pass to enforce its constraint, making it potentially scalable and applicable to real-world time series classification scenarios. Other Comments Or Suggestions: The paper was an enjoyable read. Please note that my comments represent my initial impressions and may include misunderstandings. I welcome further discussion on these points and am open to revising my score once my questions and concerns are adequately addressed. (Thank you for further clarification during the discussion period, I raised my score accordingly.) Questions For Authors: I have incorporated my questions into Claims And Evidence section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for these insightful suggestions, and we are very glad to hear that you enjoyed the reading. --- ### **Unjustified motivation/Questionable assumption/Sharpness in data domain.** - Our primary motivation is that time series data often suffers from domain shift between the train and test sets, and we propose to constrain the FIM during learning to alleviate the issue, which potentially results in a flatter minimum and improves generalization. - Regarding the concern about citing prior work for the claim that "a flat minimum is less sensitive ...", we have referenced studies earlier in the sentence (Keskar et al., 2016; Neyshabur et al., 2017; Zhang & Xu, 2024). Notably, in Keskar et al. (2016), the authors explicitly define sharpness as a measure of sensitivity (see p.5, Section 2.2.2). We will cite them at a more appropriate place in the final version. - **Fig. 1** is a conceptual illustration of landscapes with domain shifts. **We do not assume that the shapes of the landscapes on test and train data remain the same.** - **The landscape is related to both the data domain $\mathcal{D}$ and the weight domain $\Theta$, i.e., calculating the loss $\mathcal{L}(\mathcal{D};\Theta)$ requires both a dataset and the network weights**, so when training data and test data have a domain shift, the landscapes with the same weights can be different, i.e., $\mathcal{L}(\mathcal{D}_{train};\Theta)$ and $\mathcal{L}(\mathcal{D}_{test};\Theta)$. **In Fig. 1, we use different colors to denote different data domains $\mathcal{D}$**. We provide an improved version at https://github.com/AnonymousUserss/ICML2025-4119-Response. - **Fig. 9 is a visualization of the landscapes to better illustrate the concept and show the relative flatness reduction.** The **quantitative results** are presented in **Fig. 8** (not Fig.
9) and **Fig. 5**, as the key evidence supporting our claim: **Compared with the baseline, our method obtains an average 40% reduction in sharpness (see (i) and (ii)) across all datasets, which finally translates to a ~4% accuracy gain (iii).** Please refer to our response to **R#YXQa** for explicit domain-shift experiments. --- ### **Flatness and Generalization.** We thank the reviewer for highlighting these important works. We fully acknowledge that the relationship between flat minima and generalization remains an open and nuanced research question. **Rather than taking a definitive stance in this ongoing debate, our work aims to contribute to this conversation by demonstrating that a regularization strategy informed by Fisher information and sharpness can lead to improved robustness and generalization in real-world time series tasks.** **Importantly, we have taken care to avoid overclaims in the paper, using qualified language such as "potential" and "achievable" to reflect the limitations inherent in this area.** While Dinh et al. (2017) and Petzka et al. (2021) raise concerns about its limitations, **these results are derived under specific assumptions** (e.g., fully connected ReLU networks and carefully constructed reparameterizations). Their applicability to general architectures and practical training setups remains limited. Moreover, recent empirical studies [1-2] suggest that in practical settings, where such reparameterizations are not applied, **sharpness (as commonly measured) can still correlate meaningfully with generalization**. These observations support the idea that sharpness-based metrics, while theoretically imperfect, can still provide **practical value**.
In addition, as discussed in our related work section, several recent papers (Zhang & Xu, 2024; Foret et al., 2020; Andriushchenko & Flammarion, 2022; Kim et al., 2022a; Yun & Yang, 2024; Ilbert et al., 2024) have - supported the utility of sharpness-related methods; - successfully leveraged them to improve learning outcomes. **We believe our results add to this growing body of evidence, particularly in the underexplored domain of time series data, and we remain cautious yet optimistic about the promise of these methods.** [1] Fantastic generalization measures and where to find them [2] Towards Understanding Sharpness-Aware Minimization, ICML2022 --- ### **Reproducibility.** **We have made the code and the trained weights available at https://github.com/AnonymousUserss/ICML2025-4119-Response.** - **standard deviation (%)** on two benchmarks. ||Acc.|Bal. Acc|F1|P|R| |:-:|:-:|:-:|:-:|:-:|:-:| |UEA 30|1.1|1.2|1.4|1.5|1.2| |UCR 85 |0.5|0.8|0.9|1.4|0.8| --- ### **Technical Simplifications.** - Please see our response to **Reviewer 3cHV Q4** regarding diagonal approximation. - Double-backward vs single-backward with FIC is presented as follows. **Our method is on par with double-backward in accuracy but substantially reduces the runtime. Full comparison is at the same repo**. |Metrics|Accuracy(%)| |Runtime(s)| | |:-:|:-:|:-:|:-:|:-:| |10 datasets|Double|Ours|Double|Ours| |**Avg.**|77.4|76.8|0.147|0.070| --- Rebuttal Comment 1.1: Comment: Thank you for your detailed and thoughtful response. I appreciate the hard work you put into addressing my concerns, especially your efforts in ensuring reproducibility by making your code publicly available and by providing standard deviation metrics for key benchmarks. However, my primary concern remains regarding the "sharpness in the data domain."
While your explanation that the loss landscape $\mathcal{L}(\mathcal{D};\Theta)$ depends on both the data domain $\mathcal{D}$ and the weight space $\Theta$ is conceptually sound, the rebuttal does not offer explicit quantitative or visual evidence comparing the sharpness of the loss landscapes computed on training data versus those on shifted (test) data. My concern specifically pertains to the data-loss plane, which is critical for understanding the model's robustness to domain shifts, rather than the weight-loss plane, which is the primary focus of your current analysis. Given the importance of this issue to your central claims, I must maintain my original score until further evidence is provided that directly addresses the sharpness in the data domain. --- Reply to Comment 1.1.1: Comment: ## Thank you for taking the time to review our response. We appreciate your recognition of our efforts. Below, we address the remaining concerns. --- **Since sharpness values across datasets can differ in scale, we report the sharpness reduction, $reduction = \frac{Baseline- FIC}{Baseline}$.** ### Sharpness Reduction in Weight-Loss Plane We first show that our method can reduce sharpness (w.r.t weight) in both the training domain and the test domain. (We originally only showed a sharpness reduction in training data; as suggested, we added sharpness reduction on test data here). |dataset|EC|FD|HW|HB|JV|SCP1|SCP2|SAD|UW|PS|Avg.| |---|---|---|---|---|---|---|---|---|---|---|---| |Weight Sharp. Reduction on Training|0.31|0.12|0.45|0.13|0.82|0.10|0.44|0.84|0.32|0.34|0.41| |Weight Sharp. Reduction on Testing|0.11|0.10|0.22|0.31|0.24|0.73|0.34|0.31|0.16|0.33|0.29| As shown in the table, **our method can reduce sharpness by 41% and 29% w.r.t weight on training and test data, respectively**. 
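The sharpness numbers above are obtained by perturbing within a Euclidean ball; a minimal numpy sketch of how such a ball-constrained sharpness can be estimated by Monte Carlo sampling (the quadratic toy loss, radius, and sample count are illustrative assumptions, not our exact protocol):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_loss(theta):
    # Illustrative quadratic loss standing in for L(D; theta).
    return float(theta @ theta)

def sharpness(loss_fn, theta, rho=0.05, n_samples=200):
    """Monte Carlo estimate of
    max_{theta' in B2(rho, theta)} (L(theta') - L(theta)) / (1 + L(theta))."""
    base = loss_fn(theta)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.standard_normal(theta.shape)
        d *= rho / np.linalg.norm(d)  # sample on the sphere of radius rho
        worst = max(worst, loss_fn(theta + d) - base)
    return worst / (1.0 + base)

flat = sharpness(toy_loss, np.zeros(10))  # at the quadratic's minimum
off = sharpness(toy_loss, np.ones(10))    # away from the minimum
```

As expected for this toy loss, the estimate is larger away from the minimum than at it.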
--- ### Sharpness Reduction in Data-Loss Plane Similar to the sharpness defined on the weight domain, the **sharpness on the data domain** at a data point $x$ with weights $\Theta$ is defined as $\text{Sharpness}(x) = \max_{x' \in \mathcal{B}_2 (\rho, x)}\frac{L(x', y;\Theta) - L(x, y;\Theta)}{1 + L(x, y;\Theta)}$, which measures the sensitivity of the loss within a local neighborhood of the input data, i.e., data sensitivity. Here, $\mathcal{B}_2 (\rho,x)$ is a Euclidean ball with radius $\rho$ centered at $x$. We present the data sharpness reduction on both the training set and the test set below. |dataset|EC|FD|HW|HB|JV|SCP1|SCP2|SAD|UW|PS|Avg.| |---|---|---|---|---|---|---|---|---|---|---|---| |Data Sharp. Reduction on Training|0.42|0.59|0.89|0.72|0.87|0.63|0.88|0.54|0.49|0.37|0.64| |Data Sharp. Reduction on Testing|0.22|0.40|0.01|0.67|0.15|0.61|0.83|0.30|0.10|0.36|0.37| Similarly, **our method can reduce sharpness by 64% and 37% w.r.t data on training and test data, respectively.** --- ### Landscape Mismatch: Quantifying Generalization Gap To better understand why a flatter minimum helps generalization, we define a metric, **landscape mismatch**, to measure **the difference between the training landscape and the test landscape** around the local minimum $\Theta$: $\Delta M = \int_{\Theta^{\prime}\in \mathcal{B}_2(\alpha, \Theta)} |L(\mathcal{D}_{test};\Theta^{\prime}) - L(\mathcal{D}_{train};\Theta^{\prime})| \, \text{d} \Theta^{\prime}$. Here, again, $\mathcal{B}_2(\alpha, \Theta)$ is a Euclidean ball with radius $\alpha$ centered at $\Theta$, and we use a Monte Carlo method to approximate $\Delta M$. The results are presented as follows. Again, the reduction is calculated as $reduction = \frac{Baseline - FIC}{Baseline}$.
|dataset|EC|FD|HW|HB|JV|SCP1|SCP2|SAD|UW|PS|Avg.| |---|---|---|---|---|---|---|---|---|---|---|---| |Train-Test Mismatch Reduction|0.42|0.08|0.19|0.23|0.19|0.28|0.47|0.36|0.16|0.41|0.28| The results suggest that **Our method can reduce the mismatch of landscape between test and training datasets** by 28%. --- In summary, these new analyses demonstrate that our method: - Reduces sharpness in both weight and data domains; - Achieves this reduction on both training and test data; - Mitigates the landscape mismatch between training and test data. Together, these results support the claim that our method encourages the model to converge to a flatter minimum through FIC, thereby translating this flatness into improved generalization performance. --- ## We hope our response addresses your concern. If you have any further questions, please let us know. Thanks!
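As a supplement, the Monte Carlo approximation of the landscape-mismatch metric $\Delta M$ can be sketched in a few lines of numpy; the linear model, synthetic scale-shifted test domain, and uniform-in-ball sampling below are illustrative assumptions, not the actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_loss(X, y, theta):
    # Stand-in for L(D; theta): mean squared error of a linear model.
    return float(np.mean((X @ theta - y) ** 2))

def landscape_mismatch(train, test, theta, alpha=0.1, n_samples=500):
    """Monte Carlo proxy for Delta-M: the mean of |L_test - L_train|
    over parameter points sampled uniformly in the ball B2(alpha, theta)."""
    (Xtr, ytr), (Xte, yte) = train, test
    n = len(theta)
    diffs = []
    for _ in range(n_samples):
        d = rng.standard_normal(n)
        d *= alpha * rng.random() ** (1.0 / n) / np.linalg.norm(d)  # uniform in the ball
        tp = theta + d
        diffs.append(abs(mse_loss(Xte, yte, tp) - mse_loss(Xtr, ytr, tp)))
    return float(np.mean(diffs))

# Synthetic domain shift: test inputs drawn at a different scale.
w = rng.standard_normal(5)
Xtr = rng.standard_normal((64, 5)); ytr = Xtr @ w
Xte = 1.5 * rng.standard_normal((64, 5)); yte = Xte @ w
dm = landscape_mismatch((Xtr, ytr), (Xte, yte), theta=np.zeros(5))
```

A lower value of this estimate corresponds to better agreement between the training and test landscapes near the minimum.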
Summary: This paper addresses the failure of RevIN in out-of-distribution (OOD) scenarios for time series classification and proposes a constraint method based on the Fisher Information Matrix to enable smoother model optimization, thereby improving the generalization ability of the classification model. Furthermore, considering the quadratic complexity of computing the Fisher Information Matrix and the need for two rounds of backpropagation, this paper introduces a diagonal approximation information matrix and a Fisher information constraint method to reduce the computational cost. The effectiveness of the proposed method is validated through performance evaluation experiments on the UEA 30 and UCR 85 datasets, case studies, sharpness evaluation, and landscape visualization. ## update after rebuttal The authors' responses addressed my concerns, and I chose to maintain my original score. Claims And Evidence: Yes, the claims are convincing. Methods And Evaluation Criteria: Yes, the evaluation criteria and benchmark datasets are appropriate for the problem and application at hand. Theoretical Claims: Yes, I have checked. Experimental Designs Or Analyses: Yes, I have checked. Supplementary Material: Yes, I have reviewed all supplementary material. Relation To Broader Scientific Literature: No. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. This paper clearly articulates the limitations of RevIN in out-of-distribution (OOD) scenarios for time series classification and innovatively leverages Fisher information constraints to mitigate OOD issues in time series classification. 2. To address computational cost concerns, the paper proposes a diagonal approximation of the Fisher Information Matrix and a Fisher information constraint method. 3. 
The experiments are comprehensive, including performance evaluation on UEA30 and UCR128 datasets, case studies, sharpness evaluation, and landscape visualization, which thoroughly validate the effectiveness of the proposed method. Weaknesses: 1. Although the authors suggest that using Fisher information can smooth the model's optimization space, the paper lacks a thorough analysis of whether and why time series data itself or the model inherently causes sharpness phenomena in time series classification. 2. There is insufficient justification for the rationality of the diagonal approximation of the Fisher Information Matrix and the Fisher information constraint method. Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: 1. It would be better to clarify why time series data lead to sharpness, improving the rationale and persuasiveness of the proposed method. 2. Considering that Epsilon is a crucial hyperparameter, please conduct a sensitivity analysis and discussion of Epsilon. 3. Compare with other advanced time series normalization methods, such as SAN [R1], Numerically Multi-scaled Embedding in NuTime [R2]. 4. Explain the rationality of the diagonal approximation of the Fisher Information Matrix under the premise of significant parameter correlation or strong model non-linearity (which is common in deep learning). 5. Provide a reasonable explanation of how the Fisher information constraint reduces the Fisher Information Matrix. 6. The authors state in the experimental section: "To fully explore the ability of our method, we perform a grid search for hyperparameters for each dataset." However, for the UCR 85 archive and the UEA 30 archive, each time series dataset only contains a training set and a test set, without a validation set. In the absence of a validation set, how did the authors conduct grid search to select hyperparameters? [R1] Liu Z, Cheng M, Li Z, et al. 
Adaptive normalization for non-stationary time series forecasting: A temporal slice perspective. Advances in Neural Information Processing Systems, 2023, 36: 14273-14292. [R2] Lin, Chenguo, et al. "NuTime: Numerically Multi-Scaled Embedding for Large-Scale Time-Series Pretraining." Transactions on Machine Learning Research, 2024. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and insightful comments, especially for recognizing our contributions and empirical validation. We address the main concerns as follows. --- ### **Q1. Data and Sharpness.** Time series datasets are often small in size (e.g., UW has 120 samples, SCP2 has 200 samples), which increases the risk of overfitting. In such low-data regimes, models often exhibit low training error (low bias) but high variance, making them prone to overconfidence and instability. This instability is reflected in the geometry of the loss landscape: models may converge to sharp minima, where small perturbations in input or parameters cause large changes in loss. Sharpness thus captures this sensitivity, and sharp minima are empirically linked to poor generalization. Additionally, time series tasks frequently involve domain drift, a shift between the distributions of training and test data, due to factors like temporal changes or varying sensor conditions. Even when the dataset is moderately sized, this distributional shift can exacerbate the generalization gap. Encouraging the model to find flatter minima, i.e., solutions that are less sensitive to such shifts, can improve robustness and performance on unseen data. --- ### **Q2. Analysis of Epsilon.** The sensitivity analysis of $\epsilon$ is presented in Section 6 and Fig. 5. While some datasets exhibit performance fluctuations, setting $\epsilon = 2$ provides a generally reasonable balance. To be more comprehensive, we have extended the analysis to include all 26 datasets (consistent with Table 1). |$\epsilon$|0.02|0.5|2|4|20|100| |-|-|-|-|-|-| |Acc.|69.7|75.2|76.9|76.2|75.5|72.6| We observe that when $\epsilon$ is set too high, the gain is marginal. This is because, in this case, the Fisher information of the network is very likely below $\epsilon$, and accordingly, the renormalization mechanism is rarely activated.
Conversely, if $\epsilon$ is too small, model updates become challenging because we always renormalize its gradient. This may lead to performance degradation. The optimal choice of $\epsilon$ depends on both the dataset and model architecture. While $\epsilon = 2$ serves as a strong default, we also report the results obtained using the best-matched $\epsilon$ for each individual dataset. |Uniform $\epsilon$| Best-matched $\epsilon$| |-|-| |76.2|**77.5**| For a new dataset, $\epsilon$ should be treated as a hyperparameter. However, **its search space can be efficiently narrowed by considering the initial range suggested by the Fisher information estimated during the first few iterations.** --- ### **Q3. More Comparison.** We compare our method with NuTime (29 datasets selected by NuTime) as follows. SAN is a variant of RevIN that may have similar issues to RevIN, while NuTime is a self-supervised method. Our method demonstrates better overall performance than both of them. **Full comparison is available at https://github.com/AnonymousUserss/ICML2025-4119-Response.** ||Ours (UNI.)|NuTime (self-supervised)|ITIME+SAN| |-|:-:|:-:|:-:| |Avg.|**78.3**|77.8|74.4| We will include it in our final version. --- ### **Q4. Rationality of the diagonal approximation.** We argue that this is a necessary trade-off, i.e., sacrificing the precision of FIM to achieve a feasible computational cost. As mentioned in G.4, we have tested computing the full FIM with a mini-batch size of 64 in **one iteration** on InceptionTime ($\sim$ 0.6M parameters) on an A100 40GB GPU, and it has been running for more than 15 minutes and is still ongoing. We realized that even in such a small-scale network, the computation required for one iteration is prohibitively heavy, and we would need thousands of iterations for full training; hence, applying full FIM is infeasible in broader scenarios with larger networks and/or larger-scale datasets. 
- We also found related works that consistently apply the diagonal approximation to tackle a similar computational issue, and they note that the diagonal elements contain sufficiently important information. We will include these discussions in our final version, but indeed, this is a potential limitation of our work, and we plan to investigate it in the future. [1] Overcoming catastrophic forgetting in neural networks [2] Overcoming Catastrophic Forgetting by Incremental Moment Matching [3] Fedfisher: Leveraging fisher information for one-shot federated learning --- ### **Q5. How FIC works.** In FIC-TSC, whenever the Fisher information exceeds some threshold $\epsilon$, the parameter gradients are downscaled accordingly. By “capping” how large the Fisher information can grow, the algorithm actively steers the model away from sharp or overly sensitive parameter regions and potentially reduces the overall Fisher information. --- ### **Q6. Grid Search.** Following TsLaNet and ConvTran, we split 20% of the data from the training set as a validation set. --- Rebuttal Comment 1.1: Comment: The authors' replies address most of my concerns, and I hold a generally positive view of the paper. However, I consider a score of 3 to be appropriate and do not intend to revise it. While the proposed method is supported by a coherent motivation, the paper lacks model design considerations specific to time-series data characteristics. Lastly, two minor points remain to be addressed: **Q4**: Concern solved. Please include the mentioned relevant references in the paper. **Q5**: Reporting only averaged accuracy is insufficient. With a large number of datasets, extremely high or low values can skew the average. For instance, based on results from the authors’ shared anonymous link, NuTime shows a better average rank than UNI on UEA 29 datasets. Also, the p-value suggests no significant difference between the two. Thus, highlighting only UNI’s higher averaged accuracy offers limited insight.
| Dataset | UNI | NuTime | |---------------------------|-------|--------| | ArticularyWordRecognition| 99.3 | 99.4 | | AtrialFibrillation | 56.7 | 34.7 | | BasicMotions | 100 | 100 | | CharacterTrajectories | 99.7 | 99.4 | | Cricket | 100 | 100 | | DuckDuckGeese | 65.0 | 55.2 | | EigenWorms | 85.5 | 91.0 | | ERing | 91.9 | 98.6 | | Epilepsy | 98.9 | 99.3 | | EthanolConcentration | 39.2 | 46.6 | | FaceDetection | 68.4 | 66.3 | | FingerMovements | 65.0 | 61.2 | | HandMovementDirection | 49.3 | 53.2 | | Handwriting | 61.6 | 22.8 | | Heartbeat | 81.0 | 78.4 | | JapaneseVowels | 99.1 | 98.3 | | Libras | 79.4 | 97.6 | | LSST | 65.3 | 69.3 | | MotorImagery | 65.0 | 62.2 | | NATOPS | 98.9 | 94.0 | | PEMS-SF | 79.2 | 92.5 | | PenDigits | 97.6 | 98.8 | | PhonemeSpectra | 31.3 | 32.0 | | RacketSports | 89.8 | 93.4 | | SelfRegulationSCP1 | 90.1 | 89.9 | | SelfRegulationSCP2 | 59.4 | 60.3 | | StandWalkJump | 63.3 | 66.7 | | SpokenArabicDigits | 100 | 99.3 | | UWaveGestureLibrary | 90.2 | 95.5 | | **Average Rank** | 1.52 | **1.41** (small is better) | | **P-value** | | 0.3981 | --- Reply to Comment 1.1.1: Comment: **We sincerely extend our gratitude to the reviewer for the thoughtful comments and the effort in the response. We greatly appreciate your generally positive view of our paper and your acknowledgment that most concerns have been addressed.** Below, we address the two remaining minor points: --- **Model design considerations specific to time-series data characteristics.** We thank the reviewer for pointing this out. While our method primarily focuses on addressing domain shift in general time series data via Fisher Information Constraint, we agree that explicit modeling of time-series characteristics such as temporal locality, seasonality, or autocorrelation could further enhance performance. 
In future work, we plan to explore integrating time-series-specific inductive biases (e.g., temporal convolutions or frequency-domain features) into our framework while maintaining the FIC regularization for improved generalizability. --- **Include the mentioned relevant references in the paper.** We will definitely incorporate the relevant references as suggested into the final version of the paper to ensure proper attribution and contextual completeness. --- **Compare with NuTime.** To better reflect the relationship between UNI and NuTime, we will revise our wording in the manuscript to explicitly state that our method (UNI.) is **on a par with** NuTime, rather than implying any definitive superiority. This adjustment will more accurately convey the results in light of the average accuracy, average rank, and p-value. --- ## We thank you again for the time and effort, and please let us know if you have any further questions.
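To make the constraint mechanism from Q5 of the rebuttal above concrete, here is a toy numpy sketch of one Fisher-constrained update; the diagonal-FIM estimate from per-sample gradients and the square-root rescaling rule are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def fisher_constrained_step(grads, epsilon, lr=0.1):
    """One toy Fisher-constrained update. `grads` holds per-sample
    gradients, shape (batch, n_params). The diagonal FIM is approximated
    by the mean of squared per-sample gradients; if its trace exceeds
    epsilon, the averaged gradient is downscaled before the update."""
    fim_diag = np.mean(grads ** 2, axis=0)  # diagonal FIM approximation
    fisher_info = float(np.sum(fim_diag))   # scalar summary (trace)
    g = grads.mean(axis=0)
    if fisher_info > epsilon:               # constraint active: renormalize
        g = g * np.sqrt(epsilon / fisher_info)
    return -lr * g, fisher_info

rng = np.random.default_rng(0)
sharp_grads = 5.0 * rng.standard_normal((32, 4))  # large gradients -> high Fisher information
step, fi = fisher_constrained_step(sharp_grads, epsilon=2.0)
```

When the estimated Fisher information is below epsilon, the update reduces to plain gradient descent; above it, the step is shrunk, which is the "capping" behavior described in Q5.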
When Will It Fail?: Anomaly to Prompt for Forecasting Future Anomalies in Time Series
Accept (poster)
Summary: This paper formulates the anomaly prediction problem in time series, aiming to forecast specific future time points where anomalies will occur. Accordingly, the authors propose Anomaly-Aware Forecasting and Synthetic Anomaly Prompting to address the problem. ## update after rebuttal The authors adequately addressed my concerns, so I raised the score to 3. Claims And Evidence: I think the results mostly support the claim. Methods And Evaluation Criteria: Yes. The paper employs some commonly used datasets, but more datasets and more baselines are expected. Dataset: - TimeSeAD: Benchmarking Deep Multivariate Time-Series Anomaly Detection Baselines: - TranAD: deep transformer networks for anomaly detection in multivariate time series data - BeatGAN - TimeMixer - DiffusionAD: Imputation-based time-series anomaly detection with conditional weight incremental diffusion models Theoretical Claims: There are no theoretical claims. It would be better to include some, but this is not mandatory to me. Experimental Designs Or Analyses: Yes, I checked. The numerical results do not seem very high to me. I am not sure if this is caused by the nature of the task, but anyway, the overall gain looks very substantial. I am curious about how the authors split the datasets. Do they use non-overlapping windows or overlapping windows to generate samples? I am also curious about the average number of anomalies per sample. As shown in Fig. 8, there seems to be only one anomaly pattern per sample. Supplementary Material: Yes, I briefly checked every part of the ablation, hyperparameter, evaluation, etc. Relation To Broader Scientific Literature: The paper builds upon prior works in time series forecasting and anomaly detection. Essential References Not Discussed: Yes. Other Strengths And Weaknesses: The Synthetic Anomaly Prompting seems like a retrieval-based approach for anomaly detection, while integrating it into the framework looks novel to me.
My main concern is that the problem is very naive in real-world applications, and it can be very common in trajectory modeling in autonomous driving. The cross-attention design is also very intuitive, as one input serves as the query and the other as the key/value. Another limitation is that there are too many loss terms and hyper-parameters, making tuning and optimization more complex. Other Comments Or Suggestions: N/A Questions For Authors: - How does the method determine the threshold for the abnormal pattern? - Can you clarify at which stage which parameters are trainable? Is the AAFN trainable in the second stage? I feel like all components can be trained simultaneously. If this is not optimal, what is the potential reason for that? - What is the [cls] token used for? Is there any specific reason for employing it? Code Of Conduct: Affirmed. Overall Recommendation: 3
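For reference, the cross-attention pattern the review mentions (one input supplying the queries, the other the keys/values) can be sketched minimally in numpy; all names, shapes, and weights below are illustrative, not the paper's AAFN:

```python
import numpy as np

def cross_attention(query_seq, context_seq, Wq, Wk, Wv):
    """Single-head cross-attention: one sequence supplies the queries,
    the other supplies the keys and values."""
    Q = query_seq @ Wq                  # (Tq, d)
    K = context_seq @ Wk                # (Tc, d)
    V = context_seq @ Wv                # (Tc, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax over context positions
    return w @ V

rng = np.random.default_rng(0)
d = 8
prior = rng.standard_normal((10, d))    # e.g. prior signal -> queries
posterior = rng.standard_normal((6, d)) # e.g. posterior signal -> keys/values
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
out = cross_attention(prior, posterior, Wq, Wk, Wv)
```

The output has one row per query position, each row a context-weighted mixture of the value vectors.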
Rebuttal 1: Rebuttal: Thank you for giving us meaningful feedback! **More Datasets and Baselines** - TimeSeAD |Model|Exathlon-Avg.F1|SMD-Avg.F1| |:-:|:-:|:-:| |P-TST+AT|14.76|30.58| |**A2P(Ours)**|**15.28**|**39.72**| - TranAD, BeatGAN, TimeMixer, DiffusionAD |F model|AD model|Avg.F1| |:-:|:-:|:-:| |P-TST|TranAD|42.63| ||BeatGAN|43.08| ||DiffusionAD|30.08| |MICN|TranAD|41.22| ||BeatGAN|45.04| ||DiffusionAD|31.04| |GPT2|TranAD|43.40| ||BeatGAN|38.40| ||DiffusionAD|29.89| |iTransformer|TranAD|43.97| ||BeatGAN|43.29| ||DiffusionAD|30.45| |FITS|TranAD|42.57| ||BeatGAN|40.17| ||DiffusionAD|27.09| |TimeMixer|AT|42.72| ||DC|34.40| ||CAD|17.16| ||TranAD|41.85| ||BeatGAN|43.55| ||DiffusionAD|26.10| |**A2P(Ours)**||**46.84**| Our method outperforms all existing baselines across widely used benchmark datasets, demonstrating its effectiveness and generalizability. **Numerical Modesty of Results** AP is challenging as it requires predicting rare anomalies. Even small metric gains are meaningful. Table 1 shows our method consistently outperforms baselines, proving its effectiveness. **Dataset Splitting** Following prior works [1], we used non-overlapping windows for AP. [1] Xu et al. “Anomaly transformer: Time series anomaly detection with association discrepancy.” ICLR 2022. **Dataset Statistics** |Dataset|Avg.AnomalyRatio(%)|Avg.AnomalyLen|#AnomalySeg|#Batches|AnomalySeg/TotalSample|AnomalySeg/AnomalySample| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |MBA|33.80|29.48|86|75|1.14|1.18| |Exathlon|12.69|91.94|1091|8911|0.12|1.00| |SMD|4.15|47.26|623|7083|0.08|1.00| |WADI|4.88|62.19|27|344|0.07|1.04| The visualized sample in Fig. 8 typically contains a single anomaly pattern, but our method also handles subtle anomalies effectively, as shown in the table. **Trajectory Modeling** AP focuses on forecasting rare anomalies, while trajectory modeling predicts paths under normal conditions [1,2]. 
Some works [3] focus on anomaly detection, not future predictions, so AP remains underexplored in this context. A2P’s core idea could be explored in trajectory modeling in future research. [1] Tang et al. “HPNet: Dynamic Trajectory Forecasting with Historical Prediction Attention.” CVPR 2024. [2] Phan-Minh et al. “CoverNet: Multimodal Behavior Prediction using Trajectory Sets.” CVPR 2020. [3] D'amicantonio et al. “uTRAND: Unsupervised Anomaly Detection in Traffic Trajectories.” CVPR 2024. **Cross-attention** While the cross-attention design may seem intuitive, the core contribution of AAF lies not in the architecture itself but in the idea of learning the inherent patterns of prior and posterior signals to predict future anomalies. By modeling the relationship between these signals, AAF captures temporal dependencies that are critical for AP, going beyond traditional forecasting or anomaly detection approaches. **Loss terms and Hyperparameters** Our method introduces two unique loss terms, $L_{AAF}$ and $L_{D}$, with others extended from traditional time series loss functions. A2P shows strong robustness across hyperparameter settings, as shown in Section E of the supplementary material. This suggests that they do not significantly complicate the tuning process. **Anomaly Threshold** The anomaly threshold follows the widely accepted protocol from [1], adjusting for a percentage of anomalies in the test data. This approach ensures consistency with established standards for anomaly detection tasks. [1] Shen et al. "Timeseries anomaly detection using temporal hierarchical one-class network." NeurIPS 2020. **Trainable Parameters** In the pre-training stage, only AAFN and APP parameters are trainable. In the main training phase, only the backbone parameters are trained.
**Simultaneous Training** |Method|MBA|Exathlon|SMD|WADI|Avg.F1| |:-:|:-:|:-:|:-:|:-:|:-:| |Simultaneous Training|45.70|17.90|33.26|59.86|39.18| |**Ours**|**67.55**|**18.64**|**36.29**|**64.91**|**46.84**| We experimented with training all components simultaneously and found it suboptimal, as shown in the table. This may be due to the Anomaly Probability output from the Anomaly-Aware Forecasting Network, which, when not fully trained, hinders proper forecasting. By using a two-stage training strategy, we ensure that the anomaly probability is learned first, allowing it to enhance future time series prediction during the main training phase, leading to better performance. **[CLS] Token** The [CLS] token is a learnable embedding used to capture global representations, similar to its role in BERT [1]. It helps select the most relevant anomaly prompt in A2P, enabling effective abnormal signal synthesis. [1] Devlin et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." NAACL 2019. We greatly appreciate your helpful comments, and we will be sure to include the above-mentioned details in our revision. This will provide a more thorough explanation and address the points you raised in a comprehensive manner, enhancing the overall quality of the paper. --- Rebuttal Comment 1.1: Comment: Sorry for the late response. I thank the authors for addressing most of the concerns. I decided to raise my score, and the paper can definitely be accepted. I still think predicting future anomalous or risky scenarios has been explored in fields such as autonomous driving and motion planning, where the aim is to anticipate and avoid potential hazards or obstacles (these tasks are a kind of spatial-temporal analysis, which is related to time series analysis). I suggest the authors include more discussion in their final version.
**Given the limited time, no response here is totally fine with me :)** --- Reply to Comment 1.1.1: Comment: Thank you for the thoughtful follow-up and for raising your score. While we acknowledge related efforts in autonomous driving and motion planning, we would like to clarify key differences between those and our proposed Anomaly Prediction (AP) task. The closest setting in autonomous driving is early accident anticipation [1, 2], which focuses on detecting the possibility of an accident as early as possible within a short video clip, without predicting when it will happen. In contrast, our AP task aims to forecast both **if** and **when** an anomaly will occur, often requiring longer lead times and careful consideration of subtle signals, making it more general and more challenging. We believe these aspects highlight the novelty and difficulty of AP, setting it apart from the related work. Thank you again for your valuable feedback, and we will incorporate this discussion into our revised paper. [1] When, Where, and What? A Novel Benchmark for Accident Anticipation and Localization with Large Language Models (ACM MM 2024) [2] Graph(Graph): A Nested Graph-Based Framework for Early Accident Anticipation (WACV 2024)
Summary: This paper proposes a novel framework, Anomaly to Prompt (A2P), to address the Anomaly Prediction (AP) task in time series analysis, which aims to predict future anomalies. The framework integrates two key components: Anomaly-Aware Forecasting (AAF), which learns relationships between anomalies and future signals, and Synthetic Anomaly Prompting (SAP), which generates synthetic anomalies through signal-adaptive prompting. Experiments on real-world datasets demonstrate A2P’s superiority over SOTA methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense. Theoretical Claims: I think the theoretical claims of this paper are correct. Experimental Designs Or Analyses: Yes, the experimental designs are reasonable. Supplementary Material: Yes. I have read supplementary materials for training details and more experiments on hyperparameters and ablation study. Relation To Broader Scientific Literature: The key contributions of this paper bridge the gap between time series forecasting and anomaly detection, which provides insights for future study in anomaly prediction rather than the popular anomaly detection tasks. Essential References Not Discussed: No. Related works are well discussed. Other Strengths And Weaknesses: Strengths: 1. The paper fills a critical gap between traditional anomaly detection (AD) and forecasting by defining Anomaly Prediction (AP) as a distinct task requiring precise localization of future anomalies. 2. The proposed A2P achieves state-of-the-art F1 scores across different datasets. 3. The proposed model is easy to follow and code is provided. Weaknesses: 1. The initialization and optimization details of the proposed APP are unclear. For instance, how are prompts initialized? 2. Computational costs for pre-training AAF and APP are not quantified, which is critical for real-world deployment. Other Comments Or Suggestions: Please refer to the weaknesses.
Questions For Authors: 1. Since the anomaly prediction (AP) task fundamentally differs from anomaly detection (AD) methods, the authors construct baselines with combinations of forecasting and anomaly detection models. Could the proposed model be compared with existing models that were specifically designed for AP tasks? 2. Can the model detect precursors of impending anomalies? For example, before an anomaly occurs (the status of the device is still normal), the model can identify early warning signals in the data, enabling interpretable anomaly prediction. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are sincerely grateful for your positive comments and for acknowledging the contribution of our proposed method A2P for tackling the challenges of AP. **Initialization and Optimization Details** All additional parameters introduced for APP are initialized using a standard uniform initialization method in the PyTorch library. We will include this detail in the revision. **Computational Complexity** - To investigate the impact of additional computational complexity of our proposed methods, we measured GFLOPs per iteration (including both pre-training and main training) and the total number of parameters of the baseline (combination of PatchTST and AnomalyTransformer) and our proposed A2P. As shown in the figure (URL:https://ifh.cc/v-SRor7L), A2P does require additional computation compared to the baseline. However, since our model unifies both forecasting and anomaly detection within a single framework, it significantly reduces the overall parameter footprint. More importantly, this additional cost is only incurred during training, with no extra overhead at inference time. Despite the modest increase in training complexity, A2P achieved an average 10% improvement in performance over the baseline, demonstrating a favorable trade-off between cost and accuracy. Moreover, the training time is at most 1 hour on the WADI dataset, which is negligible and not a heavy burden for applying A2P in real-world scenarios. - Regarding scalability, the four datasets in Table 1 cover various ranges of dataset scales, as well as dynamic environments. To further validate A2P’s effectiveness, we conducted additional experiments on the KPI dataset (representing large-scale data) and the NeurIPS-TS dataset (characterized by highly dynamic anomaly patterns across all five anomaly types specified in [1]). A2P achieved state-of-the-art F1-score on both datasets, highlighting its robustness and scalability in large and dynamic settings.
|Model|NeurIPS-TS|KPI| |:-:|:-:|:-:| |P-TST+AT|23.09|24.49| |**A2P(Ours)**|**34.18**|**33.46**| [1] Lai et al. "Revisiting time series outlier detection: Definitions and benchmarks." NeurIPS 2021. **Existing Method for AP** To the best of our knowledge, there are currently no existing methods specifically designed for the AP task. However, a related study [1] conducted AP experiments using existing forecasting models, though these approaches do not directly address the unique challenges of AP. For a fair comparison, we constructed baselines by systematically combining established forecasting and anomaly detection models. This setup allows us to benchmark our method against the most relevant alternatives and highlights its distinct advantages. [1] You et al. "Anomaly Prediction: A Novel Approach with Explicit Delay and Horizon." ICCP 2024. **Precursor Detection** Our proposed model A2P can detect precursors of impending anomalies. Since there are no labels indicating ‘precursor of anomalies’ in the dataset, we cannot explicitly measure the numeric performance of precursor detection. Instead, we visualized the attention map of the cross-attention in the Anomaly-Aware Forecasting Network to investigate whether the model can relate specific parts of the prior signals when identifying future anomalies, as shown in the figure (URL:https://ifh.cc/v-cQD6Ka). When identifying the future anomalies (red-shaded area), which is the main purpose of AP, the model focuses on specific areas of the prior signals (circled area). While not perfect, our proposed model can provide further explainable information to infer which prior time steps contribute to predicting future anomalies. This approach can be effective in many real-world scenarios; for example, medical doctors can scrutinize the normal data of patients to give them appropriate instructions to prevent possible diseases.
We will incorporate the additional details such as initialization and computational costs in the revision, and thank you again for your careful attention. --- Rebuttal Comment 1.1: Comment: The authors have resolved my concerns. In my opinion, anomaly prediction is a more interesting and meaningful task than traditional anomaly detection, and the authors have provided a solution to it. Thus, I have raised my rating. --- Reply to Comment 1.1.1: Comment: We truly appreciate your kind reassessment and are glad that your concerns have been resolved. Thank you also for your thoughtful review and the time you devoted to evaluating our manuscript. Should any further questions arise, we would be grateful for your continued feedback.
Summary: The paper introduces a novel framework called A2P designed to forecast future anomalies in time series data. Unlike traditional forecasting models, which are typically trained on standard signals and consequently fail to accurately predict abnormal events, the proposed method integrates anomaly-aware components into the forecasting process. A2P comprises two key elements: - **Anomaly-Aware Forecasting (AAF):** This component pre-trains a forecasting network to learn the relationships between past anomalies and future trends, enabling the model to predict signals more accurately and reflect potential abnormal events. - **Synthetic Anomaly Prompting (SAP):** This technique employs a learnable Anomaly Prompt Pool (APP) to inject synthetic anomalies at the embedding level during training. This process helps the model simulate and recognize various anomaly patterns. The authors validate their approach with extensive experiments on multiple real-world datasets, demonstrating significant improvements in anomaly prediction and forecasting accuracy over existing baselines. Claims And Evidence: **Evidence Supporting the Claims:** The paper provides extensive empirical evaluations on several real-world datasets, including MBA, Exathlon, SMD, and WADI, which strongly support the core claims. The experimental results, including detailed ablation studies, convincingly demonstrate that the proposed A2P framework, through its components of Anomaly-Aware Forecasting (AAF) and Synthetic Anomaly Prompting (SAP), achieves significant improvements in both forecasting accuracy and anomaly prediction performance compared to established baselines. **Problematic Aspects:** The integration of AAF and SAP introduces additional computational complexity. The paper does not provide a detailed evaluation of the computational trade-offs or scalability of the proposed approach in large-scale or highly dynamic environments.
In addition, given that similar work has been done before, the author's claim that “we first propose a method to deal with the problems of Anomaly Prediction” is suspect. Methods And Evaluation Criteria: The proposed methods are well-aligned with anomaly prediction in time series data. Integrating Anomaly-Aware Forecasting and Synthetic Anomaly Prompting directly addresses the challenge of forecasting future anomalies, a problem where traditional forecasting methods often fall short. The experimental evaluation, which uses benchmark datasets such as MBA, Exathlon, SMD, and WADI, is appropriate for the application, as these datasets represent diverse real-world scenarios in domains like medical monitoring and industrial systems. Additionally, relevant metrics (e.g., F1-score with tolerance for anomaly detection and Mean Squared Error for forecasting) provide clear quantitative evidence of the framework's performance. The problem is that the description of the newly introduced evaluation metric is too brief, given how controversial F1-related evaluation criteria are. The evaluation metrics should be explained in more detail to avoid confusing readers. Theoretical Claims: The submission primarily focuses on algorithmic innovations and empirical validation rather than on formal theoretical proofs. While the paper does provide mathematical formulations of its loss functions and training objectives (such as $L_{AAF}$, $L_{D}$, and $L_{F}$), it does not include formal proofs for theoretical claims. I verified that the provided formulations are logically consistent with the intended design of the A2P framework; however, since no formal proofs were presented, there was no need to check the correctness of theoretical proofs in a rigorous sense. Experimental Designs Or Analyses: **Experimental Design Validity and Analysis:** I reviewed the experimental setups detailed in the submission, which include evaluations of both forecasting and anomaly prediction capabilities.
The experiments are conducted on multiple real-world datasets (MBA, Exathlon, SMD, and WADI), ensuring diverse and representative scenarios. The following points summarize my assessment: - **Soundness of Evaluation Metrics:** The use of Mean Squared Error for assessing forecasting accuracy and F1-score (with a tolerance window for anomaly detection) is appropriate and well-justified for the tasks at hand. - **Ablation Studies:** Comprehensive ablation experiments are presented to isolate the contributions of the two key components: Anomaly-Aware Forecasting (AAF) and Synthetic Anomaly Prompting (SAP). These studies effectively demonstrate the individual and combined benefits of each component. Supplementary Material: The author has not provided supplementary materials. Relation To Broader Scientific Literature: The paper’s contributions build on and extend several key ideas from the broader literature on time series analysis, anomaly detection, and forecasting. Specifically: - **Integration of Forecasting and Anomaly Detection:** Traditional time series forecasting methods (e.g., PatchTST, GPT2, iTransformer) are designed to predict normal behavior and, thus, often neglect anomalies. In contrast, this work builds on the observation—also noted in recent studies such as Jhin et al. (2023) and You et al. (2024)—that forecasting models need to account for abnormal signals to be truly effective in real-world scenarios. By embedding anomaly awareness into the forecasting process, the paper extends prior work that typically treats forecasting and anomaly detection as separate tasks. - **Anomaly-Aware Forecasting (AAF):** The idea of pre-training a forecasting network to learn the relationship between past anomalies and future trends is novel in its application to anomaly prediction. 
While earlier studies have focused on detecting anomalies from historical data or near-term forecasts, this approach leverages anomaly-aware pre-training to predict future abnormal events, thereby addressing a gap in the literature. - **Synthetic Anomaly Prompting (SAP):** Using a learnable Anomaly Prompt Pool (APP) to inject synthetic anomalies during training is conceptually related to techniques in data augmentation and prompt-based learning seen in other areas of machine learning. However, its application in time series anomaly prediction is innovative. This idea builds on earlier work in synthetic data generation and augmentation but adapts these principles to enhance the representation of anomalies, thereby improving detection accuracy in forecasting future signals. Essential References Not Discussed: The submission provides a solid set of citations for time series forecasting and anomaly detection, including recent works on anomaly prediction. However, some additional related works could further contextualize its key contributions: - **Prompt-Based Learning Literature:** The Synthetic Anomaly Prompting component is reminiscent of prompt tuning techniques that have seen significant development in the NLP community [1]. While the paper adapts the concept to time series data, discussing these foundational works could help readers understand the conceptual lineage and justify the design of the learnable Anomaly Prompt Pool. - **Synthetic Data Generation for Anomalies:** Although the authors reference anomaly injection schemes (e.g., Darban et al., 2025), other works focus on synthetic anomaly generation and data augmentation in time series that might offer additional insights or alternative approaches. A broader review of such methods could strengthen the rationale for the proposed synthetic anomaly prompting strategy. 
- **Unified Architectures for Forecasting and Anomaly Detection:** The paper contributes by merging forecasting and anomaly detection within a shared architecture. There is an emerging literature on integrated approaches in this space. While the submission cites several relevant studies, additional references that discuss unified or multi-task learning frameworks for time series analysis could further clarify how the proposed method fits into and extends current trends. [1]Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Other Strengths And Weaknesses: - **Originality:** The paper exhibits originality by creatively combining ideas from time series forecasting, anomaly detection, and prompt-based learning. The introduction of Synthetic Anomaly Prompting (SAP) via a learnable Anomaly Prompt Pool (APP) is a novel adaptation of concepts initially developed in the NLP domain. Additionally, integrating Anomaly-Aware Forecasting (AAF) to bridge the gap between regular signal forecasting and anomaly detection provides a fresh perspective on the long-standing challenge of forecasting anomalies. - **Significance:** Addressing the problem of forecasting future anomalies has considerable practical significance, particularly in applications such as medical monitoring and industrial system maintenance. The proposed framework demonstrates improved predictive performance over existing baselines and has the potential to influence the design of early-warning systems in critical domains. This application-driven impact underscores the broader importance of the contributions. - **Clarity:** The paper is technically rigorous and detailed, clearly describing its methodology and experimental setups. 
However, the presentation can be dense and highly technical in certain sections, which might impede accessibility for a broader audience. Improving the narrative flow or providing additional intuition behind complex components could enhance clarity without sacrificing depth. - **Additional Strengths:** The extensive empirical evaluations across multiple benchmark datasets and comprehensive ablation studies lend strong support to the proposed approach. These experimental results validate the effectiveness of the individual components (AAF and SAP) and demonstrate the unified architecture's benefits in learning robust representations for forecasting and anomaly detection. - **Additional Weaknesses:** No code was provided for verification. Other Comments Or Suggestions: **Other Comments and Suggestions:** Overall, the paper is well-structured and presents its ideas in a technically detailed manner. Nonetheless, a few minor points could further enhance its clarity and presentation: - **Typographical and Formatting Consistency:** Although a search for explicit typographical errors did not reveal any major issues, I observed occasional minor inconsistencies in formatting. For example, there are instances where spacing around mathematical symbols and figure captions could be more uniform. Additionally, the notational switch between “AAF” and “AAFN” (when referring to the anomaly-aware forecasting components) might confuse readers; ensuring consistency in the notation throughout the manuscript would improve readability. - **Clarity in Complex Sections:** The technical sections are dense, particularly those describing the Synthetic Anomaly Prompting (SAP) module and its associated loss functions. Consider incorporating additional intuitive explanations or visual aids to help demystify these complex components for a broader audience. 
- **Proofreading for Minor Errors:** A careful proofreading to check for any minor grammatical errors or inconsistencies in punctuation could further polish the manuscript. Questions For Authors: 1. Can you elaborate on the sensitivity of the Synthetic Anomaly Prompting (SAP) component to hyperparameter settings (e.g., the size of the anomaly prompt pool and the number of prompts attached)? 2. Could you discuss the integrated A2P framework's computational complexity and scalability, especially compared to standard forecasting methods? 3. Are there specific scenarios or types of datasets where the A2P method underperforms relative to existing baselines? If so, can you provide an analysis of these cases and potential strategies for improvement? 4. This work would be more convincing if a reproducible verification code were provided. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your meaningful feedback, and your insights are invaluable as we continue to improve our work! **Computational Complexity** Please refer to the Computational Complexity section in Reviewer ADJk. **AP Task** The only prior work on Anomaly Prediction in time series is [1], which only formulates the problem without proposing concrete solutions. In contrast, our work is the first to introduce practical methods for AP, specifically through SAP and AAF, which directly address the core challenges and advance the field. [1] You et al. "Anomaly Prediction: A Novel Approach with Explicit Delay and Horizon." ICCP 2024. **Underspecified Metrics** The conventional point adjustment for evaluation, as discussed in [1], has known limitations. Since the goal is to predict anomalies within a reasonable time window rather than at exact points, using raw prediction outputs is not ideal. For instance, medical doctors are more concerned with detecting abnormal symptoms over a time window, rather than at specific seconds or minutes. To address this, we use the F1-score with tolerance, a modified version of the traditional F1 with point adjustment, which allows error tolerance with a set time window around predicted anomalies, offering a more realistic evaluation, as shown in the figure (URL: https://ifh.cc/v-ZLV6Kf). [1] Kim et al. "Towards a rigorous evaluation of time-series anomaly detection." AAAI 2022. **Data Augmentation** ||NoAug|Spikes|Context|Flip|Noise|CutOff|Scale|Wander|Avg|**A2P(Ours)**| |-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |Avg. across datasets (F1)|38.89|39.18|37.62|39.27|37.60|37.63|39.56|38.94|38.76|**46.84**| We compared A2P with data augmentation methods from [1]. While these methods improve prediction to some extent, they are limited by fixed anomaly injection rules. A2P, with context-aware prompts, generates more realistic anomalies, yielding better performance. [1] Goswami et al. 
“Unsupervised model selection for time series anomaly detection.” ICLR 2023. **Unified Framework** Additional references on unified and multi-task frameworks, such as [1] and [2], will be discussed. However, these models are unsuitable for AP. [2] only focuses on forecasting, and simply combining forecasting and anomaly detection modules, such as in foundation models that handle each task independently, performs poorly, as shown in the Table 1 baselines. AP requires forecasting anomalies and learning temporal patterns, which existing models lack. In contrast, A2P is specifically built for AP, integrating AAF with a learnable prompt pool to extend unified modeling trends into proactive anomaly handling. [1] Gao et al. “UniTS: A unified multi-task time series model.” NeurIPS 2024. [2] Woo et al. “Unified training of universal time series forecasting transformers.” ICML 2024. **Method Flow** Our two methods, SAP and AAF, work together in the A2P framework to bridge anomaly detection and forecasting. SAP addresses the challenge of limited abnormal signals in training data by using trainable anomaly prompts to create realistic synthetic anomalies, enhancing the model's anomaly recognition. AAF, in contrast, forecasts anomalies directly, improving predictions in abnormal conditions. Together, SAP and AAF complement each other: SAP enriches training data, while AAF uses this enriched data to enhance forecasting signals with anomalies. This synergy allows A2P to effectively detect and predict anomalies. **SAP Explanation** To clarify the SAP module, we include a figure (URL: https://ifh.cc/v-3dyYJd) that illustrates its mechanism. The Anomaly Prompt transforms normal features into anomalies, and the loss function $L_D$ guides the prompt pool to generate meaningful anomalies.
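As a rough illustration of the prompt-selection-and-injection idea described in the SAP explanation above, the sketch below picks the pool prompt most similar to a query vector (standing in for the [CLS] representation) and adds it to a normal embedding to synthesize an "abnormal" feature. Both the cosine-similarity selection and the additive injection are our assumptions for illustration; the actual A2P implementation may differ.

```python
import numpy as np

def attach_anomaly_prompt(embedding, prompt_pool, query):
    """Toy sketch of SAP-style prompt injection (our assumption, not the
    paper's exact mechanism): select the pool prompt with the highest
    cosine similarity to the query, then add it to the normal embedding."""
    # Cosine similarity between the query and each prompt in the pool.
    sims = prompt_pool @ query / (
        np.linalg.norm(prompt_pool, axis=1) * np.linalg.norm(query) + 1e-8
    )
    best = int(np.argmax(sims))
    # Additive injection: normal feature + selected anomaly prompt.
    return embedding + prompt_pool[best], best
```

In a full model the pool and the query projection would be learnable parameters, with a loss such as $L_D$ pushing the injected features toward anomaly-like representations.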
**Grammatical Errors** We will carefully proofread the manuscript to correct any minor grammatical errors, e.g., “we formulate called Anomaly Prediction” or “are jointly pre-train”, and inconsistencies in punctuation, ensuring a more polished and professional presentation. **Hyperparameter Sensitivity** Please refer to the Loss terms and Hyperparameters section in Reviewer Xvbm. **Extra Anomaly Scenarios** |Model|Point Global|Point Context|Contextual Global|Contextual Seasonal|Contextual Trend|Avg| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |P-TST+AT|2.79|3.04|54.59|59.28|55.80|35.10| |**A2P(Ours)**|**8.29**|**8.97**|**75.65**|**72.32**|**61.93**|**45.43**| We evaluated A2P on five univariate NeurIPS-TS datasets, covering various anomaly types, as outlined in [1]. A2P outperformed the baseline in F1-score, especially on contextual anomalies. However, it struggles with point anomalies due to their short duration. Future work could explore adding context or using multi-step forecasting to better detect point anomalies. [1] Lai et al. "Revisiting time series outlier detection: Definitions and benchmarks." NeurIPS 2021. **Code Provision** We provide the reproducible code in anonymous repository https://anonymous.4open.science/r/A2P-E2FC. --- Rebuttal Comment 1.1: Comment: The author's reply, to some extent, answers my doubts. Additionally, something that needs to be considered is the description of evaluation indicators on page 6, lines 281-287: "In addition, Fl-score was calculated without point adjustment introduced in (Audibert et al., 2020). Instead, we used F1-score with tolerance t ...". It is widely known that the point adjustment evaluation method is already a very lenient and easy-to-exaggerate indicator of the actual performance of the model, and the description in this work seems to adopt an even more lenient evaluation indicator. Therefore, the practicality and accuracy contribution of this work may be questioned. 
For this point, I hope to receive further detailed explanation and clarification. --- Reply to Comment 1.1.1: Comment: Thank you for the valuable feedback and the opportunity to further clarify our evaluation strategy. As you pointed out, the original point adjustment evaluation method can exaggerate the performance of anomaly detection, which is a well-known issue in the field. To address this, in our evaluation, we adopted the F1-score with tolerance $t$, which uses **a fixed time window** ($\pm t$) around each predicted anomaly point, rather than considering the entire anomaly segment window, the main limitation of the point-adjusted F1-score (please refer to the following figure: https://anonymous.4open.science/r/A2P-E2FC/png/tolerance.png). **Therefore, we would like to emphasize that the F1-score with tolerance $t$ is an even stricter, more application-aligned, and practical evaluation metric compared to the point-adjusted F1-score, rather than being a more lenient metric.** We will revise the paper to clearly describe the motivation and advantages of our evaluation metric, with a more detailed explanation. We thank you again for your thoughtful feedback, and please do not hesitate to reach out with any further questions regarding the evaluation metric.
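To make the tolerance-based metric concrete, here is a minimal sketch of an F1-score with tolerance $t$: a predicted anomaly point counts toward precision if a ground-truth anomaly lies within $\pm t$ steps of it, and a ground-truth point counts toward recall if a prediction lies within $\pm t$ of it. These exact matching rules are our assumption and may differ in detail from the authors' protocol.

```python
def f1_with_tolerance(y_true, y_pred, t):
    """F1 with a fixed +/- t matching window around anomaly points.
    y_true, y_pred: binary sequences of equal length.
    Matching rules are an illustrative assumption, not the paper's code."""
    true_idx = [i for i, v in enumerate(y_true) if v]
    pred_idx = [i for i, v in enumerate(y_pred) if v]
    if not true_idx or not pred_idx:
        return 0.0
    # Precision: fraction of predictions within t of some true anomaly.
    tp_p = sum(any(abs(p - g) <= t for g in true_idx) for p in pred_idx)
    # Recall: fraction of true anomalies within t of some prediction.
    tp_r = sum(any(abs(g - p) <= t for p in pred_idx) for g in true_idx)
    precision = tp_p / len(pred_idx)
    recall = tp_r / len(true_idx)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unlike point adjustment, which credits an entire anomaly segment once any point in it is flagged, this scoring only credits matches inside the fixed window, which is what makes it stricter.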
Distributionally Robust Multi-Agent Reinforcement Learning for Dynamic Chute Mapping
Accept (poster)
Summary: This paper addresses the dynamic chute mapping problem, aiming to improve throughput and reduce recirculation rates, by formulating it as a multi-agent RL problem. The authors then extend the vanilla MARL framework by introducing concepts from group distributionally robust optimization and further improve the computational efficiency of the proposed framework by using a contextual-bandit based worst-case reward predictor. Experimental results on a simple toy problem and large scale simulations show the efficacy of the proposed method. ## Update after rebuttal I thank the authors for addressing my comments and I find the response sufficient. As such, I will maintain my previous recommendation of accept. Claims And Evidence: Yes, claims are supported Methods And Evaluation Criteria: Yes, proposed method and evaluation criteria make sense Theoretical Claims: Yes, the proofs of Lemma 3.1 and 3.2 Experimental Designs Or Analyses: Yes, experiments are sound Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: I believe this paper is generally related to the area of robust MARL as well as relevant to the area of operations research. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: Overall, I believe this is a strong paper. The paper is well-written, organized and structured. The idea of applying Group DRO concepts to MARL and the introduction of a contextual bandit-based reward predictor to improve efficiency is novel to the best of my knowledge. The experimental results also support the claims with sufficient theoretical justification, and I believe the method introduced in this paper may also translate beyond the application of dynamic chute mapping. Weakness: Certain parts of the paper could be better organized in order to improve the readability for readers not familiar with RL/dynamic chute mapping.
The paper also lacks additional experiments and ablation studies for different parameters of the environment. Other Comments Or Suggestions: In Eq. 1, it is not clear why there is an additional $-2a^i_t$ term in the reward function. I would suggest mentioning in the main paper the detail listed in the appendix about using an external tool to resolve the feasibility of joint actions, as it seems like a critical part of the framework. Additionally, a slightly more detailed explanation of the problem (dynamic chute mapping) would also improve the readability of the paper for readers not familiar with this application. It is not clear why the observation spaces are different for the small scale problem vs the large scale problem, as listed in the appendix. I understand that the large scale experiments are already conducted on real world data, but as always, additional experiments on different simulation parameters, for example, varying the number of destination points/number of chutes and demonstrating that the conclusions still hold would make the paper stronger. Questions For Authors: 1. Given that the contextual bandit-based worst-case reward predictor selects the most adverse induction distribution at each step, have the authors observed scenarios where the worst-case reward signal is excessively negative, leading to instability in learning dynamics or premature convergence to a suboptimal policy? Additionally, how does the framework mitigate the risk of overly pessimistic policy updates that may arise from extreme worst-case scenarios? 2. The paper compares the proposed method primarily against MARL and exhaustive search, but how does it perform relative to other robust RL approaches, such as adversarial training methods that explicitly perturb the environment to simulate worst-case scenarios? 3. Are there any foreseeable drawbacks of the proposed method over existing methods? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful comments, thoughtful questions, and encouraging feedback. Your suggestions greatly improve the paper. Below are our responses, following the order of the reviewer comments, with references to tables and papers prefixed by “R-” for clarity. We address all readability and presentation concerns in the revision. In (1), actions are introduced as a penalty to prevent MARL from over-allocating chutes to a few destinations, which could block incoming packages from other destinations. However, MARL still fails to generalize to OOD induction patterns. We will clarify this in the revision. For small-scale problems, the observation space excludes recirculated packages, making the transition probability independent of the induction $X$ and aligning with the assumptions in Lemma 3.2. For large-scale problems, as shown in Appendix C.5, the DR Bellman operator in Lemma 3.2 acts as an accurate estimator when the transition probability depends on $X$, which does not hinder DRMARL from learning an effective robust policy. Ablation studies were performed under various warehouse layouts, primarily varying the number of chutes and unique induction destinations, which directly affect the action space and transition probability. STDs are omitted due to character limits. |# of Chutes|50|100|115|125|135|150|187| |-|-|-|-|-|-|-|-| |DRMARL|**-499.00%**|**-197.33%**|**-115.54%**|**-60.59%**|**-28.64%**|**50.52%**|**79.97%**| |MARL|-730.97%|-358.70%|-190.91%|-138.85%|-84.25%|-47.42%|0% (baseline)| **TABLE R-3**: Relative recirculation rate improvement (↑) compared to the baseline MARL. 
|% of Destinations|50|60|70|80|90|100 (max)| |-|-|-|-|-|-|-| |DRMARL|83.58%|**82.82%**|**88.60%**|**91.57%**|**91.88%**|**79.97%**| |MARL|**86.99%**|81.71%|75.66%|58.81%|37.31%|0% (baseline)| **TABLE R-4**: Relative recirculation rate improvement (↑) compared to the baseline MARL across different percentages of the 120 destinations remaining in the induction. In Table R-3, DRMARL consistently outperforms MARL, demonstrating its effectiveness and robustness across different environments. In Table R-4, we examined the performance of both policies while gradually reducing the number of induction destinations. DRMARL maintained its performance, but MARL outperformed it when only 50% of destinations remained. This occurs because DRMARL conservatively reserves available chutes for potential future inducts, even for destinations that have not yet appeared and may never appear when only 50% of destinations remain. In contrast, MARL aggressively allocates chutes to existing destinations without considering the risk of blocking newly inducted destinations. Overly pessimistic decisions are a common risk in robust policies. We mitigate this by grouping induction data from multiple days rather than treating each day separately, reducing the impact of extreme outliers. Our experiments did not exhibit instability from extreme worst cases. However, if such cases arise, one can remove extreme outliers to stabilize training and switch to a heuristic policy in practice when extreme induction patterns are detected. This is a foreseeable drawback compared to MARL, since the policy may become overly conservative under extreme inductions. We compare DRMARL with three Robust RL (RRL) approaches: <1> an adversarial agent deliberately blocking available chutes [R-4], <2> an adversarial environment with perturbed transition probabilities [Pinto et al., 2017], and <3> a (non-distributionally) robustified reward function [Wiesemann et al., 2013]. 
Implementation and result details are provided in the revision. While a certain robust policy (<2>) achieves comparable cumulative recirculation performance to DRMARL, DRMARL additionally guarantees improved worst-case performance by explicitly optimizing for it. To validate this, we analyze the empirical distributions of recirculated packages across all time steps and groups. In Table R-5, we report the relative reduction in both the worst case and the CVaR (5% confidence level) of recirculated packages. DRMARL outperforms all RRL baselines due to its distribution awareness, leading to more accurate worst-case reward estimation and robust action value functions, ultimately optimizing worst-case performance. This is crucial, as sudden surges in recirculation can cause robot congestion, delaying the entire sorting process. Since congestion is not explicitly modeled in the simulation, DRMARL’s advantage over RRL is expected to be even greater in practice. |Method|CVaR reduction(↑)|Worst-case recirc reduction(↑)|Recirc rate reduction (↑)| |-|-|-|-| |<1>|61.11%|53.57%|66.19%| |<2>|58.48%|23.68%|77.09%| |<3>|58.97%|60.23%|70.43%| |DRMARL|**77.09%**|**76.61%**|**79.97%**| **TABLE R-5**: Relative step-wise worst-case recirc amount, CVaR, and recirc rate reduction compared to the baseline MARL. **References:** [R-4] Mandlekar et al., Adversarially Robust Policy Learning: Active Construction of Physically-Plausible Perturbations, IROS 2017.
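For readers unfamiliar with the CVaR metric reported in Table R-5, a minimal sketch (ours, under the assumption that CVaR at level alpha means the average of the worst, i.e. largest, alpha fraction of per-step recirculated package counts):

```python
import numpy as np

def cvar(samples, alpha=0.05):
    """CVaR at level alpha: mean of the worst (largest) alpha fraction of samples."""
    s = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * s.size)))  # number of tail samples to average
    return float(s[-k:].mean())

# 95 quiet steps and 5 surge steps with 10 recirculated packages each:
print(cvar([0.0] * 95 + [10.0] * 5, alpha=0.05))  # -> 10.0
```

Unlike the overall mean, this tail statistic is sensitive only to the surge steps, which is why two policies with similar cumulative recirculation can differ sharply in CVaR.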
Summary: The paper addresses the “dynamic chute mapping” task in robotic warehouses, where packages must be assigned to chutes in the face of uncertain and shifting arrival (induction) patterns. It proposes a distributionally robust multi-agent reinforcement learning (DRMARL) framework, combining group distributionally robust optimization with MARL to prepare for worst-case variations in package flow. The authors introduce a contextual bandit–based predictor that selects the likely worst-case distribution group for each state-action pair, reducing computational overhead compared to exhaustively checking every group. They demonstrate in both a simplified and a large-scale warehouse simulator that their DRMARL approach consistently lowers package “recirculation” (re-sorting) and achieves improved throughput across a range of induction scenarios, including out-of-distribution ones. Claims And Evidence: The paper applies a group DRO (distributionally robust optimization) approach to multi-agent reinforcement learning (MARL) for a warehouse “chute mapping” problem. Their claim is that, by anticipating worst-case changes in package arrival rates, the learned policy is more “robust” than a regular MARL policy. They present some experiments in both small and large simulation settings that show lower “recirculation” (which is re-sorting effort) and better throughput when induction rates deviate from the usual pattern. The evidence is mainly the simulation results comparing their DRMARL approach to standard MARL or simpler baselines. While the simulation outcomes look decent, the evidence is somewhat specialized to their chute mapping scenario. They also have some basic group DRO math (like rewriting the Bellman operator) to explain how they handle uncertain reward distributions. But it’s not a deep theoretical contribution—more of an application. 
Methods And Evaluation Criteria: They start from a standard MARL setting, add group DRO for different induction patterns, and then do a contextual bandit trick to pick the “worst-case” induction group. They evaluate performance by how many packages get recirculated, how many total packages get sorted (“throughput”), and how stable the policy is across multiple induction distributions. The method is essentially an existing robust RL idea (DRO + MARL) applied to their warehouse environment. Their evaluation metrics make sense in that domain (recirculation rate, throughput). But it doesn’t push the boundaries for new RL or new optimization ideas. It’s more about showing that existing robust RL methods can handle uncertain arrival distributions. Theoretical Claims: The paper does not have a theoretical contribution. Experimental Designs Or Analyses: The main experiment is to train a policy on multiple “groups” (induction patterns), then test on new patterns. They show that their approach has better worst-case performance than normal MARL. This setup is straightforward and in line with standard RL or robust RL experiments. The chute system is a bit niche, so it’s unclear if this generalizes to bigger or different warehouse tasks. Also, they only compare to baseline MARL or a naive robust policy, so we don’t see how it might stack up against other specialized robust RL frameworks. The experiments seem valid but narrow. Supplementary Material: Experiments. Relation To Broader Scientific Literature: They cite work in robust RL, distributionally robust optimization, and multi-agent resource allocation, and they connect it to existing RL-based sorting approaches (like the MARL method from Shen et al. 2023). The main novelty is combining group DRO and multi-agent RL with a “contextual bandit” step for computational speed. But conceptually, these are known techniques; there’s not a big new theory angle here. They haven’t proven new theorems that break ground on robust MARL. 
Essential References Not Discussed: Nothing noteworthy to me. Other Strengths And Weaknesses: Strength: 1. The paper tests an actual warehouse scenario, which is relevant to industry. Weakness: 1. Conceptually, it’s mostly applying known ideas (group DRO, multi-agent RL, contextual bandits) rather than introducing a fundamentally new theory. 2. The chute problem is a narrow application, so broader impact could be limited unless it’s widely adopted in automated warehouses. 3. It’s not fully clear if the “worst-case distribution group” approach is truly necessary if the environment doesn’t vary that much in practice. Other Comments Or Suggestions: It would help if you provided concrete evidence or references showing that real warehouses do experience large swings in induction rates, especially since many readers at ICML may not be as familiar with that domain. Could you cite any operational research studies, internal data, or industrial reports that quantify these variations and explain why they're significant enough to justify a robust RL approach? Questions For Authors: 1. Do you see a path for applying the same idea to other resource-allocation tasks beyond warehouse sorting? If so, have you tested on simpler domains? 2. Could a standard robust RL approach (without group DRO or contextual bandit) handle moderate variability in induction rates almost as well? It’d be helpful to see a direct comparison. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your insightful comments and thoughtful questions. Your suggestions greatly improve the paper. Below are our responses, following the order of the reviewer comments, with references to tables prefixed by “R-” for clarity. If any referred content is missing here, please find it in the responses to other reviewers. The proposed framework builds on existing methods, but their integration is novel and strategically bridges gaps in DRRL, particularly related to scalability for large-scale multi-agent RL problems. In the robotic sortation warehouse, the reward (recirculation) function is implicit and highly nonlinear. Consequently, unlike typical well-structured DRO formulations with finite-dimensional convex equivalent formulations, applying similar methods to the complex, large-scale chute mapping problem is infeasible. With millions of packages sorted daily, estimating the worst-case reward for each state-action pair becomes computationally impractical using existing DRRL techniques. Although group DRO makes worst-case reward evaluation finite-dimensional, the modeling complexity of warehouse sortation and the large number of groups still render exhaustive search infeasible. Unlike supervised learning, where soft reweighting can avoid exhaustive search, such techniques cannot be directly applied to RL, as the worst-case group and reward vary across state-action pairs. An important contribution of our work is to leverage a contextual bandit to significantly reduce the exploration cost of estimating the reward of the worst-case group. This approach effectively bridges the gap in **scalability** and **applicability**. We conducted extensive experiments comparing DRMARL with naive Robust RL. Please kindly find the detailed results in Table R-5. Compared to naive Robust RL, DRMARL benefits from its distribution awareness, yielding more accurate estimation of the worst-case reward and robust action value functions. 
Even though a certain naive Robust RL policy (<2>) achieves comparable cumulative recirculation throughout the day, DRMARL still significantly outperforms it in terms of the step-wise recirculation amount (both worst-case and CVaR), which is the direct metric for robustness when comparing the performance of robust policies. The proposed DRMARL framework is designed to address general RL problems where the reward function undergoes distribution shifts. Since Lemmas 3.1 and 3.2 are not specifically tailored to resource allocation problems, the framework is broadly applicable to other DRRL settings. It can also benefit large-scale RL problems with group DRO, particularly when an exhaustive search is impractical. We see strong potential for extending this approach beyond sorting problems, as DRMARL directly addresses reward function ambiguity (i.e., environmental uncertainty). The core DRMARL framework (Algorithms 1 and 2) remains unchanged when applied to other resource allocation problems, with only the environment varying. We are currently working on extending this framework to general RL problems beyond resource allocation, with promising early-stage results, and we will report our findings in a separate manuscript. Major e-commerce companies have publicly reported double-digit year-over-year (YoY) growth in retail sales in their Q4 earnings press releases across multiple years, which translates, in the package sortation context, to large YoY swings in induction rates. Due to the double-blind policy, we will provide the reference in the final paper. Package volume and distribution exhibit substantial seasonal and yearly shifts. Seasonality drives fluctuations during sales events and holidays, while yearly shifts reflect evolving customer behavior and business growth. To support our claims, we quantify distribution changes using the empirical Type-1 Wasserstein distance [R-3], as shown in Table R-2. 
By comparing four selected weeks to Week 1 within each year, we reveal significant seasonal variations, while the Wasserstein distance between consecutive days is typically on the order of 20. The last column highlights yearly shifts by comparing the same period across years to Year 1, showing that yearly changes are even more pronounced than seasonal ones. Figure 5 and Table R-1, evaluated on production data, demonstrate that MARL struggles with large induction pattern changes, underscoring the necessity of DRMARL. |Week #|2|3|4|5|Avg Dist to Year 1| |-|-|-|-|-|-| |Year 1|76.87|202.96|147.17|241.80|0.0| |Year 2|36.46|70.47|301.25|123.45|1172.85| |Year 3|25.81|23.87|72.91|78.82|585.98| |Year 4|72.99|45.04|24.92|62.39|288.86| **Table R-2**: The first four columns represent Type-1 Wasserstein distances for each week compared to Week 1 of the same year. The last column represents the average Wasserstein distance of each year to Year 1. **References:** [R-3] Villani, Cédric. Optimal Transport: Old and New. Vol. 338. Berlin: Springer, 2008.
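As a minimal sketch of how the empirical Type-1 Wasserstein distance mentioned above can be computed for one-dimensional samples of equal size (the paper's exact computation over per-destination induction vectors may differ), W1 reduces to the mean absolute difference of sorted order statistics:

```python
import numpy as np

def w1_empirical(x, y):
    """Type-1 Wasserstein distance between two equal-size 1-D empirical samples.

    For equal-size samples, the optimal transport plan matches sorted order
    statistics, so W1 is the mean absolute difference after sorting.
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "samples must have equal size"
    return float(np.abs(x - y).mean())

# A constant shift of c yields a distance of exactly c:
print(w1_empirical([10.0, 20.0, 30.0], [15.0, 25.0, 35.0]))  # -> 5.0
```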
Summary: This paper proposes the Distributionally Robust Multi-Agent Reinforcement Learning (DRMARL) framework for dynamic chute mapping in robotic warehouses. The integration of group Distributionally Robust Optimization (DRO) with a contextual bandit-based predictor to handle induction rate variations is the main contribution. However, there are some presentation issues in this paper. I will consider increasing the score if the authors can make proper improvements. Claims And Evidence: This paper proposes the DRMARL algorithm, which is validated in a simulated and a large-scale environment. Methods And Evaluation Criteria: The proposed algorithm makes sense. Theoretical Claims: The theoretical part is weak, but the current statements are reasonable. Experimental Designs Or Analyses: The experiments are reasonable. Supplementary Material: I have read through the supplementary. Relation To Broader Scientific Literature: The paper proposes a new method for chute mapping tasks in the field of operations research. This method has some potential to be applied to broader fields. Essential References Not Discussed: The paper provides an extensive literature review. Other Strengths And Weaknesses: Strengths • The proposed algorithm that combines group DRO with MARL is novel and reasonable. • The literature review is extensive and well organized. • The experimental results are comprehensive. • The experiments show significant improvement over the baseline MARL algorithm. See weaknesses below. Other Comments Or Suggestions: • Overall, the presentation is not very clear in several places. • For the theory part, the two lemmas provided are not very strong. Could the authors provide, e.g., a convergence analysis or the contraction property of the distributionally robust Bellman operator? • The induction rate is a key concept of this paper, but the paper never gives a clear description of it. This makes the paper inaccessible to the ICML audience. 
Is it a real number or does it contain other information? • A similar problem applies to DRO. • What are the features of the observation? What are the differences between state and observation in this chute mapping problem, since the paper models both state and observation in L162? • L176 refers to Figure 5, which appears several pages after the description. • For Lemma 3.2, is it a definition or a theoretical result? What is the definition of the distributionally robust Bellman operator? What is the benefit of using it in the algorithm? • How many times do you run the policies for each group in Table 1? Why do the authors not show the std of throughput and recirculation amount? • The notation X that appears in L372 seems to conflict with the notation $X$ that appears in, e.g., Eq. 11~13. • The authors define the ambiguity set as a convex combination of past patterns. Could this modeling capture patterns that gradually change year by year (e.g., ever more packages during each year's promotion season)? The authors try to address this in the experiment results shown in Figure 6. But how far is the test distribution from the ambiguity set? • In Figure 9, why do the rewards decrease during training? • For Figure 5, are Years 1-4 sorted from past to current? A natural test would be training the model on Year 1 and testing it on the later years. Why do the authors reverse the order? • From the statistical values, DRMARL improves significantly over the MARL baseline. But could the authors provide detailed cases where the DRMARL policy outperforms the baseline, e.g., its strategies in extremely busy scenarios? Questions For Authors: See the above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful comments, thoughtful questions, and encouraging feedback. Your suggestions greatly improve the paper. Below are our responses, following the order of the comments, with references to tables and papers prefixed by “R-” for clarity. If any referred content is missing here, please find it in the responses to other reviewers. The contraction property of the DR Bellman operator is shown in the proof of Lemma 3.2 (Line 668). We explicitly highlight this in the revision for clarity, and replace the paragraph starting from Line 681 with: Since $\gamma \in [0,1)$, this establishes that the DR Bellman operator is a contraction mapping under the $\ell_\infty$ norm. By Banach’s Fixed Point Theorem [R-1], there exists a unique fixed point $\tilde{Q}^*$ such that $\tilde{\mathcal{T}}_{\mathcal{G}}(\tilde{Q}^*) = \tilde{Q}^*$. Consequently, iteratively applying the operator ensures convergence to $\tilde{Q}^*$, proving the stability of the robust Q-learning algorithm. This contraction property of the DR Bellman operator was also addressed in [R-2] when the ambiguity set is defined over transition probabilities. Due to the character limit, the induction rate, in short, is a vector containing the (integer) number of packages inducted per destination per hour. A comprehensive definition of the induction rate/pattern is provided in the revision. We provide a more detailed explanation of DRO in the revision. In this formulation, the state space $\mathcal{S}$ represents the global state, while each agent $i$ has a local observation $\mathcal{O}_i \subset \mathcal{S}$ containing partial information about the global state. The observation features are detailed in Appendix C.1, Line 739. While the state provides full system information, each agent’s observation is limited to local aspects relevant to its decision-making. This partial observability requires agents to act based on their own experiences and available information. 
We will refine our explanation in Line 162 to better highlight this distinction. We place Figure 5 near Line 176 in the revision. Lemma 3.2 defines the DR Bellman operator, derives its explicit form for group DRO with MARL, and establishes its contraction property. Its formal definition appears in Equation (20), Line 655, with benefits discussed in Line 220. Notably, minimizing the worst-case Bellman error among groups does not necessarily yield a policy optimal under worst-case rewards. This is because the worst-case Bellman error reflects the worst-case deviation from the target Q-function but does not guarantee convergence to the optimal robust Q-function. Instead, the DR Bellman operator and its corresponding DR Bellman error ensure robustness. We refine Lemma 3.2 and the corresponding text for clarity in the revision. In Table 1, each group is tested 100 times. STDs are omitted for table readability and are included in the revision. In Line 372, $X$ denotes the induction pattern (packages per destination per hour) and is a realization of a random variable following the induction-generating distribution. Thus, the immediate reward function (recirculation) depends on $X$. Convex combinations of distribution groups aim to span the space of potential induction distributions; using available distribution groups to model the space of potential target distributions is a common approach in DRO. As we collect more production data, the distribution groups will expand to better capture year-to-year shifts without increasing the DRMARL training complexity. Figure 6 shows that the robust policy generalizes well to test distributions specifically designed to lie outside $\mathfrak{M}$, even without theoretical guarantees. Its Type-1 Wasserstein distance to $\mathfrak{M}$ is 818.19, while the average distance among distributions within $\mathfrak{M}$ is 542.96. In Figure 9, the reward is indeed increasing, as the y-axis is flipped. Fixed in the revision. 
The years are ordered from past to present. Year 4 was chosen arbitrarily. In Table R-1, training the MARL policy on Year 1 also results in significant performance degradation on OOD data, with outcomes similar to those of MARL trained on Year 4 when compared with DRMARL. |Year|2|3|4| |-|-|-|-| |Recirc Degradation(↓)|51.01% ± 0.17%|74.45% ± 0.22%|42.89% ± 0.64%| **Table R-1**: Relative recirculation rate degradation compared to the baseline MARL (trained on Year 1) on induction data from Years 2-4. The detailed chute allocation actions and strategy differences are omitted due to the character limit and are presented in the revision. Please kindly refer to the text under Table R-4 and Table R-5 for the high-level strategy differences and step-wise recirculation outcomes between MARL and DRMARL. **References:** [R-1] Rudin, W. Principles of Mathematical Analysis (3rd ed.). McGraw-Hill, 1976. [R-2] Iyengar, G. Robust dynamic programming. Mathematics of Operations Research, 2005.
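In symbols, the contraction property invoked in this rebuttal can be summarized as follows (our paraphrase of the standard argument; $\tilde{\mathcal{T}}_{\mathcal{G}}$ denotes the DR Bellman operator from Lemma 3.2):

```latex
\|\tilde{\mathcal{T}}_{\mathcal{G}} Q_1 - \tilde{\mathcal{T}}_{\mathcal{G}} Q_2\|_\infty
  \le \gamma \, \|Q_1 - Q_2\|_\infty , \qquad \gamma \in [0,1),
```

so that, by Banach's fixed-point theorem, the iterates $Q_{k+1} = \tilde{\mathcal{T}}_{\mathcal{G}} Q_k$ converge to the unique fixed point $\tilde{Q}^*$ at a geometric rate, $\|Q_k - \tilde{Q}^*\|_\infty \le \gamma^k \|Q_0 - \tilde{Q}^*\|_\infty$.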
Inverse Flow and Consistency Models
Accept (poster)
Summary: This work addresses the challenge of denoising corrupted data in the absence of ground truth observations with two proposed methods: Inverse Flow Matching (IFM) and the Inverse Consistency Model (ICM). IFM leverages a reverse flow process modeled by an ODE to transform noisy data towards a cleaner state, while ICM offers a more efficient solution by using a consistency function. The key idea is to train a denoiser $\theta$ for both the denoising and noising processes; e.g., in IFM, the KLD between $p_\theta(x_t)$ and $q(x_t | g_\theta (x_t))$, where $g_\theta$ provides an estimated $x_0$. However, I have some minor reservations with respect to ICM. Claims And Evidence: In the authors’ proof, the ODE appears to fulfill the claim made in the paper, namely, that it recovers the distribution of x_0. However, I question whether this is truly the case. In the context of unconditional generation, the prediction for x_0 could simply be the mean of all the observed noisy data, which would fail to recover an effective distribution. Would it be possible for the authors to conduct a set of control experiments, where in Algorithm 1, Line 4, x_0 is not derived from sg(ODE), but is instead the mean of all noisy observations? I believe this experiment could be crucial in determining the broader effectiveness of the proposed approach. Methods And Evaluation Criteria: This work has been tested on toy datasets as well as some real-world datasets related to biology. However, I believe these tests fail to convincingly demonstrate the effectiveness of the proposed approach. The scale of these datasets is minimal compared to that of existing datasets, and the baseline comparisons are based on relatively old work. Theoretical Claims: Please check my concerns about Claims and Evidence. Experimental Designs Or Analyses: As previously mentioned, the experimental design in this paper fails to convincingly demonstrate its scalability. 
Supplementary Material: The supplementary material provides the relevant proofs for the algorithms, the detailed experimental setup and results, as well as some necessary preliminary proofs. Relation To Broader Scientific Literature: This paper primarily addresses the denoising of simple images, achieving acceptably high PSNR without relying on ground truth data. This approach could potentially influence a broad spectrum of image processing applications. Essential References Not Discussed: n/a Other Strengths And Weaknesses: The novelty of the proposed approach is somewhat limited. Both IFM and ICM are essentially predictions based on existing frameworks, focused on refining the estimate of x_0. Could I also expect the authors to conduct denoising experiments on RGB datasets, such as ImageNet, to further demonstrate the effectiveness of the proposed approach? The last concern is about the effectiveness of the generated x_0, as mentioned in the previous section. Other Comments Or Suggestions: n/a Questions For Authors: In this part I repeat my major concerns: 1. Would it be possible for the authors to demonstrate the validity of predicting x_0 by replacing the prediction with the mean of the noisy images, thereby supporting the claim that the approach recovers an effective data distribution rather than a shifted one? 2. Would it be possible for the authors to present denoising results on RGB images? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's detailed feedback. We address these questions one by one below: 1.**Regarding recovering the distribution of $x_0$**: our theoretical analysis (see Theorem 1 and Appendix A.2.1) establishes that, under appropriate assumptions, our ODE-based inverse flow does indeed recover the full distribution $p(x_0)$ rather than merely its mean, which is well supported by empirical results, including the additional experiments described below. • By “the mean of all the observed noisy data”, if the reviewer means the overall dataset mean $E(x_1)$ (averaging all noisy observations into one single point), this would indeed fail to capture any structure in the data. In our experiments (we are not allowed to upload figures, so we describe the results in text and will include them in the final revised manuscript), with a toy model where the clean data form two distinct circles, our method successfully recovers the two separate circles rather than converging to one average point. • If the reviewer refers to the conditional expectation $E(x_1|x_0)$: under a centered noise distribution this equals $x_0$ and will perfectly recover the clean data and its distribution, but when the noise is non-centered (as in a Jacobi process), the conditional expectation does not recover the clean distribution. In our toy experiment with a 1D distribution perturbed by a Jacobi process, our method was able to recover the clean data distribution rather than simply outputting the mean of the noisy observations, which would produce a shifted distribution. It is also important to emphasize that in our setting each sample has only one noisy observation, so averaging multiple observations to denoise is not an option. 
2.**Regarding the novelty of our method**: While our method indeed builds on the rich framework of continuous-time generative models, its key contributions lie in the formulation of the Inverse Flow framework and the development of the Inverse Flow Matching and Inverse Consistency Model algorithms. These contributions extend the scope of existing methods by enabling denoising without ground truth, accommodating any continuous noise distribution, and reframing the denoising task as an inversion of the forward noise process. It was not obvious that flow matching or diffusion can recover data from an earlier timepoint without requiring data for that timepoint, and we presented, to our knowledge, the first general solution to this problem applicable to the entire family of continuous-time generative models. Other works have attempted to address related questions without achieving a general solution for all noise distributions. Thus, this novel perspective provides both theoretical insights and practical benefits that are not achieved by simply refining estimates of $x_0$. 3.**Regarding validation on RGB datasets**: We would like to clarify that our evaluation spans a wide range of datasets, including RGB images. In Appendix A.5.2 (Table 4), we report RGB denoising results on the BSDS500 dataset, which clearly demonstrate the method’s capability in handling natural images. Moreover, we have now extended our experiments by conducting denoising on the ImageNet validation set. In these experiments, our method consistently outperformed the compared approaches in terms of PSNR, further underscoring both its effectiveness and scalability to larger, more challenging RGB datasets. We will include the results in the revised manuscript. We believe these additional results provide more robust evidence for the broad applicability of our approach. 
| Dataset | Input | Noise2Void | Noise2Self | Ours | |----|----|----|----|----| | ImageNet | 20.17 | 28.95 | 26.34 | 29.65 | Finally, we sincerely invite the reviewer to share any further comments or questions. We greatly value constructive feedback and are more than willing to provide additional clarifications or revisions in the manuscript. Thanks! --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarification. I slightly increased my rating (from 2 to 3). --- Reply to Comment 1.1.1: Comment: Thank you again for your thoughtful feedback and for increasing your rating! We truly appreciate your constructive insights and are glad that our clarifications helped address your concerns.
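As a purely illustrative toy sketch of the kind of bootstrapped training-target construction discussed in this review thread (ours, not the authors' code; assumptions: additive Gaussian observation noise, linear interpolation paths $x_t = (1-t)x_0 + t x_1$ whose target velocity is $x_1 - x_0$, and the current denoiser treated as a fixed stop-gradient estimate of $x_0$):

```python
import numpy as np

rng = np.random.default_rng(0)

def ifm_target(x1_noisy, denoise_fn, sigma=1.0):
    """One bootstrapped flow-matching regression target (toy sketch).

    (i)   estimate a clean point with the current model (stop-gradient);
    (ii)  re-noise it under the assumed Gaussian noise model;
    (iii) form the interpolant x_t and the velocity target x1 - x0.
    """
    x0_hat = denoise_fn(x1_noisy)                                      # (i)
    x1_renoised = x0_hat + sigma * rng.standard_normal(x0_hat.shape)   # (ii)
    t = rng.uniform()
    x_t = (1.0 - t) * x0_hat + t * x1_renoised                         # (iii)
    v_target = x1_renoised - x0_hat  # velocity for linear paths
    return x_t, t, v_target

# With an identity "denoiser" on zeros, x_t = t * v_target holds exactly:
x_t, t, v = ifm_target(np.zeros(4), lambda x: x)
assert np.allclose(x_t, t * v)
```

A learned vector field $v_\theta(x_t, t)$ would then be regressed onto `v_target`; the consistency-model variant would instead enforce agreement of predicted clean points along the same path.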
Summary: The authors present two novel methods, inverse flow matching (IFM) and inverse consistency models (ICM), for unsupervised denoising based on flow matching and consistency models. For training, both methods require only noisy data as well as a statistical model of the measurement noise. IFMs learn a vector field describing an ODE that can, through iterative application, move a noisy data point to the clean data manifold. IFMs are trained in a bootstrapped fashion by (i) using the current vector field model to derive a clean data point from a noisy one, (ii) applying noise according to the noise model, and (iii) using interpolation to derive a vector field which is then regressed by the learned vector field. In ICMs, a consistency function mapping noise and an intermediate data point to clean targets is trained in a similarly bootstrapped way, using the current model to produce clean data and the noise model to create noisy versions and intermediate steps, which then serve as training data for the consistency loss. Both methods are evaluated on a number of different denoising problems and perform favourably compared to baselines. ## Update after rebuttal I appreciate the insightful response of the authors. I will increase my rating to accept. Claims And Evidence: I will discuss the most important claims below: **"A main contribution of our approach is generalizing continuous-time generative models to inverse generation problems such as denoising without ground truth"** The authors provide the theoretical justification, derive two such methods, and validate them experimentally. **"IF can be seamlessly integrated with generative modeling to generate samples from the ground truth rather than the observed noisy distribution."** Unfortunately, this claim is not validated. It is not clear how exactly this would work. In order to function as true generative models, the presented methods would have to map pure noise at x_1 to clean data points at x_0. 
While it is conceivable that this works, it is not obvious, and it is not discussed or experimentally validated.

Methods And Evaluation Criteria: I think the selected datasets make sense. I especially appreciate the fact that the authors show the applicability of the methods in so many different denoising tasks. With respect to the denoising task, the selected evaluation metric (PSNR) makes sense.

Theoretical Claims: I checked the background, derivations of IFM and ICM as well as the algorithms. I don't see any problems on the theoretical side.

Experimental Designs Or Analyses: I believe the experiments are valid and sound. However, there are some open points and questions:

* In the experiment regarding denoising of RNA-sequence data, it is not clear what noise model is used. How could this be determined? Maybe I overlooked this?
* I am missing a quantitative comparison of the two presented methods (IFM and ICM). Most experiments are only done using ICM. Is this due to the computational cost of IFM? I am missing a clearer analysis and discussion of pros and cons. It seems IFM is generally more expensive, but does it perform better? How much more computation does it require? Does it perform better than ICM if time is not an issue?

Supplementary Material: I have checked the additional experimental details and results presented in the supplement. I appreciate the detailed description of experimental details, except for the missing info on the RNA data noise model.

Relation To Broader Scientific Literature: The methods are based on Flow Matching and Consistency models. This connection is clearly identified and described. The presented methods allow denoising without clean GT; this is related to self-supervised methods like N2V and N2S. However, I think the relationship is not completely clear.
N2V/S regress an MMSE prediction for the clean image given the noisy input; that is, they try to find the center of mass of the posterior of clean images given the noisy input. Even though this should be optimal with respect to PSNR, it can lead to blurry results for images that are very noisy, as the prediction is essentially a compromise between possible solutions. I believe this approach is quite different from the presented methods. I would love to see a theoretical discussion of this. How should we interpret the denoising result of IFM or ICM? As a MAP solution, as a sample from the posterior, or as MMSE? I believe the authors missed a line of work ([1,2,3] in next section) on unsupervised denoising. These methods allow sampling from an approximate posterior instead of producing a single output.

Essential References Not Discussed: The authors missed a line of VAE-based works [1,2,3,4]. These works are trained using only noisy data and a model of imaging noise (or learn it on the fly) and then provide a generative model for the clean data distribution, the noisy data distribution, and the posterior of clean images given a noisy image. I hope the relationship to this line of work can be discussed in the final version of the paper.

[1]: Prakash, Mangal, Alexander Krull, and Florian Jug. "Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders." International Conference on Learning Representations.
[2]: Prakash, Mangal, et al. "Interpretable Unsupervised Diversity Denoising and Artefact Removal." International Conference on Learning Representations.
[3]: Iwamoto, Yuichiro, et al. "High throughput analysis of rare nanoparticles with deep-enhanced sensitivity via unsupervised denoising." Nature Communications 16.1 (2025): 1728.
[4]: Salmon, Benjamin, and Alexander Krull. "Unsupervised Denoising for Signal-Dependent and Row-Correlated Imaging Noise." WACV 2024.

Other Strengths And Weaknesses:

**Other Strengths:**
* The problem of unsupervised denoising without clean ground truth is highly important, especially for scientific imaging problems.
* I think the approaches are an elegant extension of the existing methods, turning a generative model into a denoiser.
* I think the paper is well written. All concepts are introduced to the point in the background chapter.

**Other Weaknesses:**
* The computation time for IFM seems to be an issue. A comparison of the computation times would be good.

Other Comments Or Suggestions:
* The authors claim that their method is only applicable to "continuous noise models", but could it not, for example, be applied to discrete noise such as pure Poisson shot noise as well?
* Typo after Eq. 2: should have a ',' at the end, not '.'

Questions For Authors:
* I am missing a direct comparison of the two presented methods.
* How does the method relate to the references mentioned above?
* Would it make sense to use the method e.g. for deconvolution/deblurring?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
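As a reading aid, the bootstrapped IFM training loop summarized in this review (denoise with the current model, re-noise, interpolate, regress) can be sketched in a few lines of NumPy. All names here (`euler_denoise`, `ifm_training_pair`, `add_noise`) are our own illustrative assumptions, not the authors' code:

```python
import numpy as np

def euler_denoise(vector_field, x1, n_steps=10):
    """Integrate the learned ODE from t=1 (noisy) to t=0 (clean) with Euler steps."""
    x, dt = x1.copy(), 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - i * dt
        x = x - dt * vector_field(x, t)  # step along the learned field toward the data manifold
    return x

def ifm_training_pair(vector_field, x1, add_noise, rng):
    """One bootstrapped IFM step: (i) the current model denoises x1,
    (ii) the noise model re-corrupts the result, (iii) linear interpolation
    between the two defines the regression target for the vector field."""
    x0_hat = euler_denoise(vector_field, x1)   # (i) pseudo-clean point
    x1_new = add_noise(x0_hat, rng)            # (ii) re-noised sample
    t = rng.uniform()
    xt = (1 - t) * x0_hat + t * x1_new         # interpolant between clean and noisy
    target = x1_new - x0_hat                   # (iii) conditional vector field dx_t/dt
    return xt, t, target                       # regress vector_field(xt, t) onto target
```

The training loss would then regress `vector_field(xt, t)` onto `target` over minibatches; the real models are neural networks rather than the generic callables used here.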
Rebuttal 1:
Rebuttal: Thank you for your encouraging feedback on our approach. We address each of your points in detail below.

1. **For the RNA-seq experiment**, we performed denoising in a linear latent space transformed by PCA. We expect the noise in the latent space to be very close to a Gaussian distribution due to the central limit theorem, and we have empirically verified this by bootstrap resampling of the scRNA-seq reads. We will add more details about the noise model to the revised manuscript.

2. **"Seamlessly integrate with generative modeling"**: we have now backed up this statement with new experimental results. We have conducted experiments on CelebA where training with this extended timepoint results in the generation of clean samples from pure noise. Specifically, we extend the training timepoint to a higher $t_{max}$ so that $x_{t_{max}}$ becomes pure noise. For timepoints $t$ below the noise level $\sigma$, the training follows inverse flow, whereas for higher $t$, the training is identical to regular continuous-time generative models like flow matching. These experimental results validate our claim that IF can be integrated with generative modeling. We will include this additional experiment and visualization of results in the revised manuscript to further support this point.

3. **Interpretation of the denoising result**: Indeed, our method is distinct from methods regressing an MMSE prediction of the clean image like N2V/S. Specifically, we learn an ODE that simultaneously performs denoising and serves as a mapping between $p(x_0)$ and $p(x_1)$; therefore, the denoising output has to be a valid sample from $p(x_0)$ and a clean image rather than a blurry image. This property is similar to generative sampling from flow matching or probability flow ODE models.
For the interpretation of the denoising result, which is not MMSE, we can provide two perspectives. The first perspective is proven in our Section 3.1 Lemma 1, where we showed that the learned ODE vector field at any time $t$ points straight toward the expectation of the ground truth $x_0$. At high $t$ the expectation is similar to an MMSE prediction, which is a blurry image, but as $t$ reduces, the expectation gradually becomes a clean image. The second perspective is by transport cost. Our method leverages a linear interpolation to define the conditional ODE vector field, which has been proven [1] to induce an ODE-based coupling between $p(x_0)$ and $p(x_1)$ that reduces the transport cost between the noisy and the clean data distribution (compared to the training coupling based on the conditional noise distribution). Therefore, our method reduces the transport cost in the mapping from noisy to clean data, and thus we obtain a sample from the clean image distribution that is close to the noisy image.

4. **Comparison between IFM and ICM**: Based on our experiments, we observed that IFM tends to yield slightly higher denoising performance than ICM (PSNR in the table below), while ICM is 2x faster during training and 10x faster during inference. In our setup, IFM requires solving the ODE at every training step, which makes it slower, whereas ICM uses a simulation-free consistency function that bypasses the need for iterative ODE evaluations. In other words, if computation time is not a major constraint, IFM might offer marginal performance improvements. However, for most practical applications where efficiency is important, ICM offers nearly equivalent performance at a substantially lower cost.
| Dataset | Input | ICM | IFM |
|:-------:|:-----:|:-----:|:-----:|
| BSDS500 | 20.17 | 28.16 | 28.33 |
| Kodak | 20.18 | 29.08 | 29.25 |
| Set12 | 20.16 | 29.19 | 29.34 |

5. **Apply to discrete noise**: While our theoretical derivations assume continuous noise distributions for analytical clarity, our method can be extended to discrete noise models. In practice, one can apply dequantization techniques to convert discrete measurements into a continuous representation. In fact, even without an explicit dequantization step, treating the discrete data as if it were continuous already yields strong denoising performance. As evidenced in Appendix A.5.2 (Table 4), our experiments on Poisson noise demonstrate that the method performs robustly under these conditions.

6. We thank you for highlighting the VAE-based approaches, which represent another direction for utilizing generative models for denoising. We will discuss the distinctions of our approach in the final manuscript.

7. We thank you for bringing up applications to deconvolution and deblurring. Our method is indeed well-suited for deconvolution and deblurring, with and without additional correlated noise, as these operations can be easily represented via a conditional ODE.

We genuinely appreciate the reviewer's time and insights. We welcome any additional feedback or questions. Thank you again for your thoughtful evaluation.

[1] Liu et al., Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow, 2022

---

Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply! I have a follow-up question regarding the interpretation of the denoised results. The authors write in their response: "At high $t$ the expectation is similar to MMSE prediction which is a blurry image, but as $t$ reduces, the expectation gradually becomes a clean image." Is this clean image to be interpreted as a random sample from the posterior distribution? Is it the maximum a-posteriori estimate?
---

Reply to Comment 1.1.1:
Comment: Thank you for the insightful follow-up question! To clarify, the clean image recovered by our method is neither a random sample from the posterior distribution nor a maximum a posteriori (MAP) estimate. The learned mapping from the noisy image $x_1$ to the clean image $x_0$ via the ODE is a one-to-one deterministic mapping function; thus it does not generate a random sample. The mapping also differs from the MAP estimate. For example, in the extreme case where $x_1$ is pure noise (i.e., the large-noise limit), the posterior $p(x_0|x_1)$ becomes independent of $x_1$, and the MAP estimate would always be the mode of $p(x_0)$, i.e., a single point regardless of the input. This differs from the behavior of our model, which will still provide a one-to-one mapping and generate outputs that remain close to the input in transport cost. Thus, our denoising output is not the MAP, the MMSE, or a random sample from the posterior, and we provided two different perspectives above to help understand it.
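The distinction drawn in this reply (a deterministic transport-style map versus a MAP estimate in the large-noise limit) can be illustrated with a toy 1-D sketch. The symmetric two-mode prior and the sign map below are our own illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy clean prior: equal mass at -1 and +1; "observations" are pure Gaussian
# noise, mimicking the large-noise limit discussed in the reply.
x1 = rng.normal(size=10_000)

# A deterministic, monotone (transport-style) map sends each noise sample to
# the clean mode matching its quantile -- here simply the sign function.
x0_hat = np.sign(x1)

# The map populates both modes, so the output reproduces the clean
# distribution, and each input gets its own output (one-to-one).
frac_pos = (x0_hat > 0).mean()

# By contrast, a MAP estimate under this symmetric prior would send every
# input to a single mode, ignoring x1 entirely.
```

Here `frac_pos` is close to 0.5: the deterministic map is distribution-preserving, unlike a MAP estimate that would collapse all outputs to one point.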
Summary: The authors provide a method to do inverse sampling of clean data p(x0) given access to a distribution (data) of corrupted data p(x1) and assuming knowledge of p(x1 | x0). They do not assume access to p(x0) at training time. They propose an approach similar to consistency models to achieve this. They provide theoretical results and empirical evidence showing that their approach works better than some baselines.

Claims And Evidence: I expand on claims in "Theoretical Claims" and in "Experimental Designs or Analyses". On a high level:

* The authors claim that their approach is theoretically justified (which might not be true). The assumptions for their method seem to be too strong, and one line in the proof might not be valid.
* The authors claim that their method outperforms the baselines empirically, but the choice of baselines and validity of experiments raise some doubts.

Methods And Evaluation Criteria: The main evaluation criterion the authors chose is to run experiments on inverse problems (i.e., denoising) and compare the empirical results to baselines. I have significant doubts about the choice of baselines and about the validity of the results (see "Experimental Designs or Analyses" below): the chosen baselines are not built for the same setting, there is no information about how the baselines were optimized, and there is no information about hyperparameter optimization. The authors also have some theoretical claims, but I also doubt whether the assumptions are realistic and whether the theoretical claims are actually valid (see below).

Theoretical Claims: In the proof for the theorem, in line 697 in the Appendix, the authors say that "Since the solution of ODE is unique,....". This is not obvious at all. In the ODE literature, proving that the solution exists and is unique is the core of theoretical analysis. Moreover, the exact form of the ODE is not specified.
In Theorem 1, the authors require that `p(x_1|x_0)` satisfy the condition that for any noisy data distribution `p(x_1)` there exists only one probability distribution `p(x_0)` that satisfies p(x_1) = \int p(x_1 | x_0) p(x_0) dx_0. This assumption seems very strong. The authors should at least discuss how restrictive it is for the model class they try to study. For example, if I take a noise distribution p(x_1 | x_0) which is factorised, p(x_1 | x_0)=\prod_{i} p(x^i_1|x^i_0), such that for some i we have p(x^i_1|x^i_0)=p(x^i_1), it would not uniquely determine p(x_0). The other theoretical claims seem valid (I haven't checked the proofs in detail), but they are not as significant as Theorem 1.

Experimental Designs Or Analyses: I have a few doubts about the experimental validity and the chosen baselines.

Major:
* Experimental validity: The authors do not provide any experimental details on how the baselines were tuned. Moreover, the authors do not provide any information on how the hyperparameters for their method were chosen and how many experiments they ran to make it work.
* Choice of baselines: In the conclusion, the authors write "A limitation of inverse flow is assuming prior knowledge of the noise distribution, and future work is needed to relax this assumption". This raises the question of whether the baselines were chosen appropriately. The authors focus their comparison on Noise2X approaches which, as far as I understand, do not require knowledge of p(y | x). The authors focus their efforts on comparing performance to these approaches, which is fundamentally unfair, because the authors' approach uses more information (i.e., p(y|x)). The authors in A.3 claim that "The most comparable approaches to our method are those that explicitly consider a noise distribution, including Stein's Unbiased Risk Estimate (SURE)-based denoising methods (Soltanayev & Chun, 2018; Metzler et al., 2020) and Noise2Score (Kim & Ye, 2021).".
Moreover, they mention "Ambient diffusion and GSURE-diffusion" baselines but do not compare against them. They could add a comparison to these, even if only for linearly corrupted data, to see whether the proposed method behaves similarly as in the other experiments.

The authors write: "Further analysis revealed that the supervised method encountered overfitting during the training process, which led to suboptimal performance. In contrast, our method did not exhibit such issues, highlighting the superiority of our approach." This claim is not supported by any empirical evidence, and there are no links to the appendix.

The results of the proposed method are very close to "Supervised" and to Noise2X baselines depending on the noise process. It would be useful to have a more detailed explanation of why this method performs better than the baselines. It is also not clear what the computational cost of the proposed method is compared to the baselines.

Supplementary Material: I read through the supplementary material, but it is missing some information; for example, it doesn't discuss how the hyperparameters are chosen, how many experiments were conducted, how the baselines are tuned and how their hyperparameters are chosen, or why the "supervised method encountered overfitting".

Relation To Broader Scientific Literature: I think the paper adequately discusses existing literature.

Essential References Not Discussed: No concerns.

Other Strengths And Weaknesses: No other comments.

Other Comments Or Suggestions: For an unfamiliar audience, it would be useful to add an example of an inverse problem on Page 3 when Section 3 begins, especially one where it makes sense to have access to both x1 and the noising distribution p(x1|x0). The list of noises (1-3) seems a bit out of place with the rest of the text (in the Experimental section).

Questions For Authors: No further questions.

Ethical Review Concerns: No concerns.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:
Rebuttal: We thank you for raising these important points. Below we address each point in the same order and numbering as in the review.

1. **Regarding "Since the solution of ODE is unique…"**: In our framework we assume that the learned ODE (typically a neural ODE) satisfies standard conditions (e.g., Lipschitz continuity) that guarantee both existence and uniqueness of solutions. In the Appendix, we have clarified this by explicitly stating the necessary conditions [1][2]. We note that these are standard assumptions throughout the neural ODE literature, including flow matching [3][4].

2. **Assumption in Theorem 1**: This assumption is natural in denoising, since without it, if any dimension of $x_1$ contained no information about $x_0$, no method could recover the clean signal. For example, consider the Gaussian noise model $p(x_1|x_0)=N(x_1;x_0,\sigma^2)$. The observed distribution is given by $p(x_1)=\int N(x_1;x_0,\sigma^2)p(x_0)dx_0$. In the Fourier domain this becomes $\Phi_{x_1}(\omega)=\Phi_{x_0}(\omega)\cdot \exp(-\frac{1}{2}\sigma^2\|\omega\|^2)$, and since the noise kernel's transform is strictly positive for all frequencies, the mapping is injective. Further, considering that errors in estimating the transform may be amplified by $\exp(\frac{1}{2}\sigma^2\|\omega\|^2)$, if we assume that the clean distribution is band-limited to frequencies $\|\omega\|\le W$, then the worst-case error amplification factor is $\exp(\frac{1}{2}\sigma^2 W^2)$. This bound shows that for moderate noise levels and naturally smooth signals, the recovery remains stable. In practice, many real-world denoising problems involve noise distributions, such as Gaussian noise with finite variance or other well-behaved noise models, where the Fourier transform of the noise kernel does not vanish. Thus, the assumption required by Theorem 1 is not restrictive.

3. **Experimental Validity**: We have now provided additional details on how all baselines were tuned. We use the same model architecture for all methods (from [5]).
In our experiments, hyperparameters for both our method and the baselines were chosen via systematic grid searches and 5-fold cross-validation on the BSDS500 dataset. Therefore, the performance comparison is fair.

4. **Choice of Baselines**: We argue that the baselines were carefully chosen and are indeed the most comparable methods to our setting. All Noise2X methods also make assumptions about $p(y|x)$. Except for Noise2Score, all other methods assume independence across dimensions and centered (i.e., zero-mean) noise distributions. Noise2Score further requires exact knowledge of $p(y|x)$ and supports certain families of distributions such as Gaussian and Poisson, and we showed that we outperform Noise2Score, as shown in Appendix A.5.2. We originally put the comparison in the Appendix because the Noise2Score loss function is specific to the distribution and not available for the SDE (Jacobi diffusion). We will move the comparison to the main text. In response to your comment, we have also added comparisons to SURE (assuming Gaussian noise) and GSURE-diffusion (assuming linear corruption followed by additive Gaussian noise, the same as ambient diffusion). We outperform both SURE and GSURE-diffusion in terms of PSNR, as shown below.

| Dataset | SURE | G-SURE | Ours |
|:---:|:---:|:---:|:---:|
| BSDS500 | 27.58 | - | 28.16 |
| Kodak | 28.23 | - | 29.08 |
| Set12 | 28.95 | - | 29.19 |
| CelebA | - | 36.40 | 38.86 |

There are currently no alternative methods that support arbitrary noise distributions with the same generality as our approach, and we have now compared with most methods that do not require ground truth data and instead rely on assumptions about $p(y|x)$. Therefore, we feel that our choice of baselines is both fair and justified.

5. **Computational Cost**: We use the same model architecture for all methods. In our experiments, the training and inference cost of ICM is equal to that of the baselines. IFM, which simulates an ODE, incurs a higher computational cost.
For a detailed comparison, please refer to our response to reviewer t72D. This part will be included in the revised manuscript.

6. We clarify that "supervised method encountered overfitting" was offered as a possible explanation of the inferior performance in our benchmark. We agree that there are alternative possibilities and will remove this statement.

7. We thank you for the suggestions on improving the text; we will add an inverse problem example at the start of Section 3 and will improve the flow in the Experimental section.

We hope that these clarifications and additional experimental results address your concerns.

[1] Kidger, On Neural Differential Equations, 2022
[2] Song et al., Consistency models, 2023
[3] Lipman et al., Flow matching for generative modeling, 2023
[4] Tong et al., Improving and Generalizing Flow-Based Generative Models with Minibatch Optimal Transport, 2024
[5] Lehtinen et al., Noise2Noise, 2018
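The Fourier-domain injectivity argument in point 2 of this rebuttal can be checked numerically on a toy 1-D example: convolving with Gaussian noise multiplies the characteristic function by a strictly positive factor, so dividing that factor out recovers the clean characteristic function. The mixture distribution, noise level, and sample size below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 0.5, 200_000

# Toy clean samples from a two-mode mixture, then Gaussian corruption.
x0 = rng.choice([-1.0, 1.0], size=n) + 0.1 * rng.normal(size=n)
x1 = x0 + sigma * rng.normal(size=n)

omega = np.array([0.5, 1.0, 2.0])
ecf = lambda s, w: np.exp(1j * np.outer(w, s)).mean(axis=1)  # empirical characteristic function

# Phi_{x1}(w) = Phi_{x0}(w) * exp(-sigma^2 w^2 / 2). The Gaussian factor is
# strictly positive, so dividing it out recovers the clean characteristic
# function (with error amplified by the inverse factor, as noted above).
cf_x0_est = ecf(x1, omega) / np.exp(-0.5 * sigma**2 * omega**2)
err = np.abs(cf_x0_est - ecf(x0, omega)).max()
```

At moderate frequencies the recovered characteristic function matches the one estimated directly from clean samples up to sampling error, consistent with the stability bound in the rebuttal.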
Towards an Explainable Comparison and Alignment of Feature Embeddings
Accept (poster)
Summary: The authors propose the Spectral Pairwise Embedding Comparison (SPEC) framework for comparing feature embeddings in an explainable manner. The goal is to identify differences in how two embeddings cluster data points, rather than relying solely on downstream performance metrics. The main contributions are: 1) a spectral analysis approach leveraging eigendecomposition of differential kernel matrices to detect mismatches in clustering behavior, 2) a scalable implementation that reduces computational complexity, 3) the SPEC-align method for aligning embeddings by minimizing clustering differences, and 4) numerical results demonstrating the effectiveness of SPEC on benchmark datasets such as ImageNet and MS-COCO. The study shows that SPEC can reveal embedding discrepancies and improve cross-modality alignment, such as enhancing CLIP embeddings with single-modality features. Claims And Evidence: The authors claim that the SPEC framework enables explainable comparisons of feature embeddings by identifying clustering differences between them. They support this claim through theoretical derivations using spectral analysis. They show that the eigendecomposition of the differential kernel matrix highlights mismatches in clustering behavior. The paper also introduces SPEC-align, which aligns embeddings by minimizing these differences. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem of explainable embedding comparison. Furthermore, the evaluation is robust. The authors used well-established benchmark datasets such as ImageNet and MS-COCO. The scalability of SPEC is validated through efficient computation techniques, making it feasible for large datasets. Theoretical Claims: The theoretical claims presented in the paper appear well-founded. 
Specifically, the authors provide rigorous proofs demonstrating that the eigendecomposition of the differential kernel matrix effectively identifies clustering differences between embeddings. Moreover, they establish the scalability of their method by proving that the computational complexity grows linearly with the sample size. Experimental Designs Or Analyses: 1. The authors evaluated the SPEC framework using well-known benchmark datasets, including ImageNet and MS-COCO. 2. The authors compared different feature embeddings using the eigendecomposition of differential kernel matrices to identify differences in clustering behavior. 3. The scalability of the approach is validated by implementing an optimized computation strategy for large datasets. Supplementary Material: I reviewed the supplementary material, focusing on the detailed theoretical derivations, experimental details, and extended discussions on the SPEC-align optimization process. Relation To Broader Scientific Literature: The SPEC framework proposed by the authors leverages the eigendecomposition of differential kernel matrices to compare embeddings. This aligns with spectral clustering and diffusion maps, which are widely used in manifold learning within the existing scientific literature. Essential References Not Discussed: Most of the essential references were cited in the paper. Other Strengths And Weaknesses: Strengths: 1. The paper presents a novel spectral framework (SPEC) for comparing embeddings. 2. The SPEC-align method provides a structured way to align embeddings, resulting in improved cross-modality performance. 3. The scalability of the approach is well-addressed. This makes it feasible for large datasets. 4. The authors demonstrated the effectiveness of SPEC in detecting clustering differences across various embeddings and experiments. Weaknesses: The paper could benefit from direct comparison with existing embedding alignment techniques. Other Comments Or Suggestions: 1. 
Overall the paper is well-written. 2. "eigendecomposin" → "eigendecomposition" 3. consistently use "difference kernel matrix" or "differential kernel matrix" to avoid confusion. Can you check this? Questions For Authors: 1. The paper uses both "difference kernel matrix" and "differential kernel matrix." Are these terms interchangeable, or do they refer to distinct concepts? 2. The SPEC framework focuses on spectral analysis for comparing embeddings. How would it perform against non-spectral clustering methods such as hierarchical clustering or DBSCAN? 3. The paper discusses the computational efficiency of SPEC, but how does SPEC-align scale in high-dimensional spaces with very large datasets? 4. The paper introduces SPEC-align for embedding alignment, but how does it compare to existing alignment techniques such as Procrustes analysis or Wasserstein distance-based alignment? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We thank Reviewer KUAe for the thoughtful and constructive feedback on our work. Below is our response to the comments and questions in the review. ([Our numerical results are shown in this link](https://github.com/ICML6204/ICML6204/blob/main/ICML_Rebuttal.pdf))

**1- Comparison with existing embedding alignment techniques**

A distinction of our proposed alignment approach is that SPEC-align operates directly on kernel matrices, enabling cluster-level comparison between encoders without requiring pointwise embedding alignment. The standard Procrustes and Wasserstein distance-based alignments usually enforce alignment at the sample level, which is a stronger alignment requirement compared to SPEC-align. This cluster-centric perspective is particularly advantageous when the embeddings have different topologies that should not be forcefully aligned, or when only cluster-assignment consistency (rather than pointwise matching) is needed for the downstream task. We will include this discussion in the revised text.

**2- Difference kernel matrix and differential kernel matrix**

We thank the reviewer for pointing this out. We will update the paper and consistently use the term "difference kernel matrix" to avoid any confusion.

**3- Non-spectral clustering methods such as hierarchical clustering or DBSCAN**

While the current SPEC framework follows spectral clustering for kernel-based embedding comparison, extending non-spectral clustering methods such as hierarchical clustering or DBSCAN [1] to perform cluster-based embedding comparison would be an interesting direction for future exploration. We will discuss this future direction in the conclusion section.
[1] Ester et al., "A density-based algorithm for discovering clusters in large spatial databases with noise" KDD 1996 **4- SPEC-align computational complexity and scalability in sample size** Following Proposition 5.1, we showed that SPEC-align can be computed using the power method on a matrix with size $(d_1+d_2)\times (d_1+d_2)$, and the gradient calculation can be run with only $O(\max\lbrace n_B, (d_1+d_2)^2\rbrace)$ computations ($O((d_1+d_2)^2)$ is the cost of one run of the power method), considering a batch size $n_B$ and embedding dimensions $d_1$ and $d_2$. We will explain the complexity of SPEC-align in the revised text.
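The power-method computation referenced in point 4 (Proposition 5.1) can be sketched as follows. This is our own minimal illustration on a generic difference kernel matrix, using a plain RBF kernel; the actual SPEC construction (kernel choice, normalization, covariance form, and batch handling) may differ:

```python
import numpy as np

def rbf_kernel(X, bandwidth):
    """Plain RBF kernel matrix for an n x d embedding matrix X (illustrative)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def top_spec_direction(K1, K2, iters=200, seed=0):
    """Power method on the difference kernel matrix K1 - K2. The returned
    vector weights the samples driving the largest-magnitude spectral
    mismatch between the two embeddings' kernels."""
    D = K1 - K2
    v = np.random.default_rng(seed).normal(size=D.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = D @ v
        v /= np.linalg.norm(v)
    return v, v @ D @ v  # (direction, Rayleigh quotient ~ largest-magnitude eigenvalue)
```

Each power-method step costs one matrix-vector product, which is what makes this kind of computation scalable compared to a full eigendecomposition; in SPEC-align the analogous iteration runs on a $(d_1+d_2)\times(d_1+d_2)$ matrix as described above.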
Summary: The work proposes the Spectral Pairwise Embedding Comparison (SPEC) framework to allow an interpretable, clustering-based comparison of feature embeddings from different methods, and also proposes an approach to align one embedding with another. The work is relevant for a wide audience where interpretability and flexibility with embeddings are desired.

Claims And Evidence: The contributions are centered around the proposed SPEC method for comparing two embeddings, an O(n) implementation of the same, and an alignment method to align one embedding with another. Evidence has been provided using methods, experiments and insights which are discussed in other comments.

Methods And Evaluation Criteria:
- The study uses four widely used image-based benchmark datasets, a dataset constructed by overlaying text labels on the ImageNet-1k dataset, and text-based datasets generated by GPT-4o.
- Embeddings used for the evaluation are fairly representative of the commonly used embedding methods (CLIP, DINOv2, etc.)
- The methods are described in detail: the algorithm has been described for spectral pairwise embedding comparison (SPEC) using the kernel covariance matrices, along with the loss for aligning the embedding maps of one approach with another by adding a penalty term based on the SPEC-diff term that penalizes the mismatch with the reference embedding.

Theoretical Claims: I have not validated the proofs for the theoretical claims. However, the framework definition, formulation and proofs are consistent.

Experimental Designs Or Analyses:
- Experiments for embedding comparisons have been performed in Section 6 (and supplementary material), where the experiment settings have been reported. Figure 1 shows the top-3 clusters based on approach 1 (DINOv2, CLIP, SWAV) in comparison to approach 2 (CLIP, DINOv2, DINOv2), which is validated through the tSNE representation of each embedding individually.
The pairwise SPEC embedding approach provides insights like 'CLIP ranking random samples higher as compared to DINOv2', which is qualitatively shown in Figures 8 and 9. Experiments have also been performed using a typography attack to highlight that CLIP clusters are based on the overlaid text whereas DINOv2 focuses on the underlying image content.
- The alignment of CLIP with DINOv2 to improve CLIP's performance on MS-COCO 2017 is highlighted in Figures 3 and 12, where the alignment of the kernel matrices of CLIP to DINOv2 is performed to obtain the SPEC-align CLIP kernel using the penalty-based alignment objective discussed in the methods.

Supplementary Material: Detailed proofs and additional experiments on text and image embeddings have been provided in the supplementary material. Further insights, like CLIP being more effective at identifying groups of children as compared to DINOv2, have been shared.

Relation To Broader Scientific Literature: Text and image embeddings are widely used in many applications to perform downstream tasks or simply to store objects for retrieval purposes. The interpretability of these embeddings is sought after in multiple areas like healthcare, where situations like domain shift or bias in embeddings are undesirable. The pairwise embedding comparison approach could allow users to make a well-informed decision when choosing embeddings for a specific application, and the embedding-alignment approach could be beneficial in adapting embeddings to obtain specific outcomes. This work is (probably) a first step towards providing insights on the commonly used embedding methods through experiments on the selected datasets.

Essential References Not Discussed: I am not aware of any essential references that have not been discussed.

Other Strengths And Weaknesses: The paper is well-written and readable. References have been provided to point to the existing literature where needed.
The framework of embedding comparison and alignment is defined well and has been theoretically and experimentally validated extensively. Other Comments Or Suggestions: - It would be good to provide the API endpoints for the embeddings used in this work. The methods have been cited but the endpoints would help in reproducibility of the approach. - As there have been several insights provided by comparing embedding methods like CLIP and DINOv2, highlighting cases where the embeddings have some limitations, it would be good to provide some general guidelines about how to use the SPEC approach for the broader audience which may be interested in comparing their embeddings on general tasks. For instance, the insight provided in Subsection "SPEC comparison of embeddings on different image and text datasets." regarding the RoBERTa clustering based on gender and profession could have fairness implications for the wider audience. It would be good to provide a detailed implementation for further validation of the approach in other domains (and datasets) and ensure reproducibility of the results. - In Section B.4.1, does the 'alignment_loss_weight' parameter refer to the β term in Eqn. 6? It may be good to clarify the difference between 'clip_contrastive_alignment_loss_weight' and 'alignment_loss_weight'. I can see that the 'coca_contrastive_loss_weight' is mentioned in the OpenCLIP GitHub repository but the alignment terms may need some clarification. - In the last line of B.1, 'was' can be removed from '..observe that E5 was managed to cluster captions...' - For the alignment approach, there could be some explanation on the benefits of alignments other than CLIP->DINOv2. For instance, it would be good to see some experiments on text embedding alignment and/or to see the behavior of embeddings upon performing other types of alignment. It would also be good to discuss the role of the β penalty in the alignment. 
Questions For Authors: I have mentioned my comments in other sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
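The SPEC procedure this review describes — eigendecomposing the difference of the two kernel matrices so that the top eigenvectors flag clusters captured by one embedding but not the other — can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function name, the cosine-similarity kernel choice, and the normalization are assumptions.

```python
import numpy as np

def spec_compare(emb_x, emb_y, top_k=3):
    """Eigendecompose K_x - K_y; large positive eigenvalues flag clusters
    that embedding X captures more strongly than embedding Y."""
    # Row-normalize so that k(x, x) = 1 for every sample (cosine kernel).
    fx = emb_x / np.linalg.norm(emb_x, axis=1, keepdims=True)
    fy = emb_y / np.linalg.norm(emb_y, axis=1, keepdims=True)
    n = fx.shape[0]
    diff = (fx @ fx.T - fy @ fy.T) / n   # normalized kernel difference matrix
    vals, vecs = np.linalg.eigh(diff)    # the difference is symmetric
    order = np.argsort(vals)[::-1]       # largest eigenvalues first
    return vals[order[:top_k]], vecs[:, order[:top_k]]

# Toy usage with random stand-ins for two embedding matrices (n samples x dim).
rng = np.random.default_rng(0)
vals, vecs = spec_compare(rng.normal(size=(50, 8)), rng.normal(size=(50, 8)))
```

The entries of each returned eigenvector act as soft cluster memberships over the n samples, which is what the tSNE visualizations in Figure 1 are built from.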
Rebuttal 1: Rebuttal: We thank Reviewer 7vC9 for the thoughtful and constructive feedback on our work. Below is our response to the comments and questions in the review: ([Our Numerical results are shown in this link](https://github.com/ICML6204/ICML6204/blob/main/ICML_Rebuttal.pdf)) **1- API endpoints for the embeddings used in the work** We would like to clarify that we have used open-source embeddings in our numerical experiments from their main repositories. Specifically, the embeddings discussed in the paper are in the following repositories, for which we will include the links in the revised paper: DINOv2 downloaded from: https://huggingface.co/docs/transformers/en/model_doc/dinov2 CLIP downloaded from: https://huggingface.co/docs/transformers/en/model_doc/clip SWAV downloaded from: https://huggingface.co/lixiangchun/imagenet-swav-resnet50w2 RoBERTa downloaded from: https://huggingface.co/docs/transformers/en/model_doc/roberta E5 downloaded from: https://huggingface.co/intfloat/e5-base-v2 Inception V3 downloaded from: https://huggingface.co/docs/timm/en/models/inception-v3 **2- alignment_loss_weight parameter in Section B.4.1** We thank the reviewer for pointing this out. The alignment_loss_weight parameter refers to the $\beta$ hyperparameter in Equation 6, which we have set as $0.1$ (mentioned in the Appendix) in addition to the parameters of the OpenCLIP Github repository. We will make this clear in the revision. **3- Additional experiments for text embeddings’ alignment** We thank the reviewer for the suggestion. To further address the reviewer's comment, we aligned the CLIP text embedding with the T5-XL model. In Figure 6 of the rebuttal link (last page), we can observe that the CLIP kernel has become more similar to T5-XL, and the SPEC-diff is also decreasing. 
T5-XL model huggingface: https://huggingface.co/google/t5-v1_1-xl **4- role of $\beta$ penalty on the alignment** In our experiments, we conducted a grid search for $\beta$ values ranging from 0.05 to 1. During our experiments, we observed that when $\beta$ is large ($\beta$ > 0.5), the consistency between CLIP text and image alignment decreases, and aligning should occur gradually. For our experiment, we ultimately used $\beta$ = 0.1. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns in detail. I believe that the work is valuable to the community and should be accepted.
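A minimal sketch of the β-weighted alignment objective discussed in this rebuttal, assuming (per the review's description) that the total loss adds β times a kernel-mismatch penalty to the usual contrastive loss. The Frobenius form of `alignment_penalty` is an illustrative stand-in, not the paper's Eqn. 6.

```python
import numpy as np

def alignment_penalty(feats, ref_feats):
    """Mismatch between the trainable kernel and a frozen reference kernel
    (a simple stand-in for a SPEC-diff-style penalty term)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    return float(((f @ f.T - r @ r.T) ** 2).mean())

def total_loss(contrastive_loss, feats, ref_feats, beta=0.1):
    # beta = 0.1 is the value the authors report using after their grid search.
    return contrastive_loss + beta * alignment_penalty(feats, ref_feats)
```

The authors' observation that large β (> 0.5) hurts the CLIP text-image consistency corresponds here to the penalty term dominating the contrastive term.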
Summary: This paper proposes a method for comparing the embedding spaces of pairs of models by constructing PSD kernel matrices of each embedding space and studying the differences between these PSD matrices. The theoretical section discusses how, under some assumptions, the eigendecomposition of this difference between the PSD matrices carries the information about which clusters differ between the embeddings. The authors then perform an experimental analysis of their results. Claims And Evidence: The claims in the paper are supported by evidence, although the evidence could be made stronger. It seems that the authors' primary mechanism for verifying their method is via tSNE plots of those points which are in line with the principal eigenvectors of the difference matrix. The idea here is that if these points are in tight clusters in the tSNE plot, then they must have a similarly nice structure in the embedding space. Although I agree with this argument in principle, I have two concerns with it. First, this is a qualitative measure and therefore cannot tell the full story. For example, Figure 1's third tSNE plot is not very clear. Which side is supposed to be better clustered? My second concern is a more theoretic one about relying on tSNE outputs. As far as I know, given a tSNE output, it's not clear what can be confidently said about the corresponding input. Thus, if the tSNE output is clusterable, this does not necessarily guarantee that the input has a similarly nice structure. In short, the tSNE plots are not particularly convincing and, even if they were, I'm not sure what information can be gleaned from them. Similarly, I am not sure that Figures 8 and 9 are the best format in which to make the point the authors are making. I agree that if the nearest neighbors of a cluster's center belong to the same cluster, then the cluster potentially has a nicer structure than one whose centroid has nearest neighbors from other clusters. 
However, I again have two concerns with this being a definitive statement. First, it could certainly be the case that it is purely by luck that this structure emerges. For example, suppose there is a cluster, $C_1$, which is perfectly separable from the other ones except for two points which land within it from another cluster $C_2$. Then $C_1$ is precisely the kind of cluster the authors are looking for. However, these two points which belong to $C_2$ could be in the center of $C_1$, thereby skewing the authors' nearest-neighbors-to-the-center metric. (I also have slight concerns about figures showing results being relegated to the appendix but discussed in the main body of the paper, but it's not a big deal). Rather than the metrics the authors utilized, I would instead be much more interested in metrics which are (a) quantitative and (b) global. For example, one could fit k-means to the clusters found by SPEC and evaluate the cost of the k-means clusters. Additionally, one could visualize the variance of the embedding space by showing violin plots of the distances of points in the clusters to the center. These kinds of plots would unambiguously make the authors' point by showing that SPEC finds clusters which are compact and separable in one embedding space but have poor structure in the other one. As it stands now, this point (which is the premise of the paper) has not been shown unambiguously by the results. Methods And Evaluation Criteria: The datasets and models used in the paper are reasonable for proving the authors' point. Theoretical Claims: The theoretical claims are reasonable and align with what I would expect, having worked in kernel-based ML for a bit of time now. However, I would be curious whether the authors can evidence experimentally that their assumptions (Conditions 1 and 2) are reasonable to expect in practice. Specifically, what are the bounds on $\varepsilon_1$ and $\varepsilon_2$ on the datasets the authors used in the paper? 
What is the value of $\zeta$ which the authors obtain, and is the bound in Theorem 4.1 supported experimentally? Experimental Designs Or Analyses: My thoughts on the experimental design and analysis are outlined above. Supplementary Material: I have looked through the supplementary material and its contents seem reasonable. I do think the authors are a bit too comfortable referring to the supplementary material in the main body of the paper, though. The main text should be entirely self-contained. However, there are many references to figures and experiments in the appendix throughout Section 6. This isn't a huge concern, but something that would be best to remedy. Relation To Broader Scientific Literature: I am not well-acquainted with the broader scientific literature regarding explainability via studying the clusterability of the embedding spaces. However, this work is clearly in line with the themes from kernel-based ML literature. I appreciate the authors' reference to Kernel PCA, as they are essentially doing PCA using the difference matrix rather than the kernel matrices themselves. Perhaps some references to explainability via principal components of the embedding distribution would be nice to add. Essential References Not Discussed: All of the essential references seem present. I would, however, note that the way the references are included is slightly strange. There are 0 references on the first page and then a litany of them in the related work section. It would seem appropriate to include some references in Sections 1, 3 and 4 to back up the statements which are made. Here's an example sentence which would do well to have a few references to support it: "Understanding these differences can aid in interpreting and debugging embeddings and can also be leveraged to align multiple embeddings. 
Furthermore, interpreting the discrepancies between embeddings can be utilized to select representation models for downstream applications such as generative model evaluation." Other Strengths And Weaknesses: A strength of this paper that I have not focused on enough in the review is its clear description and intuitive idea. I am surprised this has not been done before and it feels like a very natural way to establish the differences between learned representations. As soon as it was described in the intro, I understood the point and it was obvious to me that it should work reasonably well. One weakness which I feel needs to be mentioned is that I am not convinced that this actually aids in explainability. Certainly, the method makes it clear which samples one model has clustered better than another model, but couldn't this be evaluated using the classification accuracy on those samples? Why is this method necessary in a way that distinguishes it from other standard explainability measures? Other Comments Or Suggestions: No other comments come to mind. Questions For Authors: The questions for the authors can be found scattered throughout the review. Perhaps the most pressing one is in finding quantitative metrics and analysis which support the claims. I would be interested in seeing analyses of the distances from points to the cluster centers across the sets of points identified by SPEC. The absence of quantitative supporting evidence leaves the paper less convincing than it otherwise would be. The second question that would be good to address would be how this differs in a measurable way from other explainability measures which compare between models. What does SPEC tell the user which is complementary to, for example, evaluating the classification accuracy of various samples between models? This would again require quantitative evidence to back it up. Code Of Conduct: Affirmed. Overall Recommendation: 4
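The quantitative and global check this review asks for — how compact SPEC-identified clusters are in each embedding space relative to the overall spread — can be sketched as below. The function name and normalization are illustrative; the cluster labels are assumed to come from SPEC's top eigenvectors.

```python
import numpy as np

def normalized_cluster_distances(emb, labels):
    """Average pairwise distance inside each cluster, divided by the average
    pairwise distance over all samples; values well below 1 indicate a
    cluster that is tight relative to the embedding's overall spread."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    overall = d[np.triu_indices_from(d, k=1)].mean()
    out = {}
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        if len(idx) < 2:
            continue
        sub = d[np.ix_(idx, idx)]
        out[int(c)] = sub[np.triu_indices_from(sub, k=1)].mean() / overall
    return out

# Toy usage: one tight cluster and one diffuse cluster.
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(scale=0.01, size=(20, 3)),
                 rng.normal(scale=5.0, size=(20, 3)) + 10.0])
ratios = normalized_cluster_distances(emb, np.array([0] * 20 + [1] * 20))
```

Comparing these per-cluster ratios between the two embedding spaces (or plotting the underlying distance distributions as violin plots) would make the compact-in-one, diffuse-in-the-other claim unambiguous.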
Rebuttal 1: Rebuttal: We thank Reviewer iKx5 for the thoughtful and constructive feedback on our work. Below is our response to the comments and questions in the review: ([Our Numerical results are shown in this link](https://github.com/ICML6204/ICML6204/blob/main/ICML_Rebuttal.pdf)) **1- Quantitative evaluation of the SPEC method** We thank the reviewer for the suggestions. Following the suggestion, we have analyzed the cluster distributions using violin plots to visualize normalized distances between data points. The plots also suggest that one embedding can cluster the points more strongly. Additionally, we performed KMeans clustering on the embedding features and computed normalized mutual information (NMI) between SPEC labels and KMeans clusters. The results indicate that one embedding reaches a stronger alignment and correlation with the KMeans labels. **2- Referring to the Appendix figures in the main text** We thank the reviewer for pointing this out. We will move the references to Figures 8, 9, and 12 to the Appendix in the revision. **3- Experimental Validation of Assumptions: Conditions 1 and 2** We analyzed Conditions 1 and 2 in the experiment of Figure 2 in the main text. The numerical results are presented in the Rebuttal Figures 1 and 2. **4- References in Sections 1 and 3** We thank the reviewer for pointing this out. We will cite references related to spectral clustering, kernel PCA, alignment of embeddings, and the explainability of the PCA approach to embedding analysis in the revised Sections 1 and 3. **5- SPEC method vs. using classification accuracy for embedding comparison** We appreciate the reviewer’s point on using the labeled samples by embedding models to explain their differences. We note that the selection of the label set can significantly influence the result of this approach. On the other hand, the SPEC method performs unsupervised learning to identify the soft-labels (i.e., eigenvectors) for performing the comparison of embeddings. 
Therefore, the SPEC method can avoid any biases that might be introduced by using a given label set. Also, we highlight that the SPEC method performs a soft clustering, which is different from a hard labeling of the samples assigning each data point to only one label. We will discuss these points in the revised introduction and conclusion. --- Rebuttal Comment 1.1: Comment: I thank the authors for their hard work. Disclaimer: I will not open any links as it would be unfair to authors of other papers I'm reviewing who stuck to the 5K character limit. The authors are welcome to describe their results in text if they wish. Regarding point (5) in the rebuttal, I'm still not convinced. Yes -- the clustering results are unsupervised and therefore cannot be biased. But... the labels DO correspond to the class distinctions, almost by definition. Even if it's not a perfect fit. So wouldn't the labels still give signal to which things were grouped together differently by the different models? The suggestion that one shouldn't use labels towards explainability seems contrived to support the authors' proposed method. In either case, I am keeping my score and believe this paper should be accepted. These are simply details. ------ EDIT: Thank you for the comment! I didn't realize links were allowed. I looked through the outputs and they indeed look quite convincing. I feel that my concerns have been addressed appropriately. I appreciate the hard work. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer iKx5 for the thoughtful feedback on our response. We appreciate the reviewer’s point on the 5000 character limit for the response. We would like to point out that our provided anonymized link follows the rules in the conference website (https://icml.cc/Conferences/2025/PeerReviewFAQ). 
In the following, we provide a summary of the URL's related numerical results in text format:

**Quantitative evaluation of the SPEC method (Violin plots)**

Following the reviewer’s suggestion, we have analyzed the cluster distributions using violin plots to visualize normalized distances between data points. In the tables below, we report the mean and standard deviation of pairwise distances of embedded data in each cluster (normalized by the average pairwise distance over all the data pairs for each embedding to ensure a fair comparison between the embeddings).

Table 1: The averaged pairwise distance of embedded data pairs in the clusters of Figure 2 in the main text.

| | Inter-Clusters | Cluster #1 | Cluster #2 | Cluster #3 | Cluster #4 | Cluster #5 |
|-------------------------------------|-----------------|------------------|-----------------|-----------------|------------------|------------------|
| **CLIP Clusters (CLIP - DINOv2)** | 0.96 $\pm$ 0.01 | 0.82 $\pm$ 0.186 | 0.53 $\pm$ 0.09 | 0.52 $\pm$ 0.16 | 0.55 $\pm$ 0.12 | 0.57 $\pm$ 0.13 |
| **DINOv2 Clusters (CLIP - DINOv2)** | 0.95 $\pm$ 0.02 | 1.00 $\pm$ 0.13 | 0.89 $\pm$ 0.13 | 0.85 $\pm$ 0.01 | 0.88 $\pm$ 0.02 | 0.92 $\pm$ 0.01 |
| **CLIP Clusters (DINOv2 - CLIP)** | 1.01 $\pm$ 0.09 | 0.95 $\pm$ 0.14 | 0.94 $\pm$ 0.09 | 0.89 $\pm$ 0.09 | 0.86 $\pm$ 0.12 | 0.85 $\pm$ 0.15 |
| **DINOv2 Clusters (DINOv2 - CLIP)** | 1.02 $\pm$ 0.01 | 0.39 $\pm$ 0.06 | 0.37 $\pm$ 0.05 | 0.51 $\pm$ 0.04 | 0.42 $\pm$ 0.04 | 0.52 $\pm$ 0.07 |

Additionally, we performed KMeans clustering on the embedded data vectors and computed normalized mutual information (NMI) between SPEC labels and KMeans-cluster labels. The results indicate that the source embedding reaches a stronger correlation with the KMeans labels. 
NMI results for Figure 1 of main text:

| Embedding$_X$ - Embedding$_Y$ | K-means (Emb$_X$) & SPEC NMI | K-means (Emb$_Y$) & SPEC NMI |
|-------|-----------|------|
| DINOv2 - CLIP | 0.94 $\pm$ 0.0003 | 0.65 $\pm$ 0.0014 |
| CLIP - DINOv2 | 0.78 $\pm$ 0.0006 | 0.44 $\pm$ 0.0002 |
| SWAV - DINOv2 | 0.74 $\pm$ 0.0003 | 0.49 $\pm$ 0.0002 |

NMI results for Figure 2 of main text:

| | **CLIP** KMeans & SPEC NMI | **DINOv2** KMeans & SPEC NMI |
|-------------------|----------------------------|------------------------------|
| **DINOv2 - CLIP** | 0.33 $\pm$ 0.0016 | 0.96 $\pm$ 0.0005 |
| **CLIP - DINOv2** | 0.68 $\pm$ 0.0005 | 0.16 $\pm$ 0.0001 |

**Regarding Item 5 in our response**, we appreciate Reviewer iKx5's feedback on the response. We would like to clarify that we did not mean that the unsupervised SPEC method is universally better than a supervised comparison, including the idea mentioned by Reviewer iKx5. In fact, we agree that in scenarios with available fine-grained labeled datasets, the supervised comparison can perform more efficiently compared to an unsupervised approach. On the other hand, datasets with sufficiently fine-grained labels are not always available for a general application. Also, even a seemingly fine-grained label set (like the labels of ImageNet) may not be comprehensive enough and lack some required details in a general case. In settings with unlabeled data or labeled samples lacking fine-grained labels, the unsupervised nature of SPEC can be beneficial. One future direction could be to propose a semi-supervised comparison approach that utilizes both the labeled and unlabeled samples. In the revision, we will include this discussion to better compare the unsupervised SPEC method vs. supervised and potential semi-supervised approaches that utilize labeled data.
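For reference, the NMI reported in these tables can be computed as follows. This self-contained sketch uses the arithmetic-mean normalization (the default of scikit-learn's `normalized_mutual_info_score`); it is illustrative, not the authors' evaluation code.

```python
import numpy as np

def nmi(labels_a, labels_b):
    """Normalized mutual information between two label assignments,
    normalized by the arithmetic mean of the two label entropies."""
    a_vals, a = np.unique(labels_a, return_inverse=True)
    b_vals, b = np.unique(labels_b, return_inverse=True)
    n = len(a)
    cont = np.zeros((len(a_vals), len(b_vals)))   # contingency table
    np.add.at(cont, (a, b), 1)
    p = cont / n
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    mi = (p[nz] * np.log(p[nz] / (pa[:, None] * pb[None, :])[nz])).sum()
    ha = -(pa[pa > 0] * np.log(pa[pa > 0])).sum()
    hb = -(pb[pb > 0] * np.log(pb[pb > 0])).sum()
    return mi / ((ha + hb) / 2) if (ha + hb) > 0 else 0.0
```

As Reviewer hTAA later notes for related results, the Adjusted MI (which corrects for chance agreement) is preferable when the two clusterings use different numbers of clusters.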
Summary: The authors derive a method for identifying clusters which are strongly clustered by one encoder and weakly clustered by another encoder. The run time of the method scales linearly with the number of samples in the dataset. This is deployed on a few image datasets for some established image foundation models, and a collection of text samples for some established language foundation models. The authors also propose and briefly demonstrate that the metric can be deployed for fine-tuning to transfer the clustering of one encoder into another. ## update after rebuttal I am happy to increment my score 3->4 after clarifications and requested changes from the authors during the rebuttal period. *Note on Adjusted Mutual Information for new results* It would make sense to either use NMI or AMI for all the results; it is redundant to show both, and inconsistent to swap between them. AMI is generally better than NMI because it accounts for the coincident information measurements due to chance. It's especially important when the number of clusters differs between clusterers to compare, hence that's when it becomes critical to use AMI instead of NMI. If you're in the situation where this sometimes happens, I recommend you swap to using AMI for *all* results. Claims And Evidence: Okay. Methods And Evaluation Criteria: The text dataset used is a bit opaque. It is not clear why GPT-4o was used to generate the text samples instead of using an existing text dataset, and it is also unclear what prompting was used to generate this text data. It would be advantageous to include more quantitative evaluation. For example, does low SPEC-diff on a dataset imply that two models have similar performance at the task established by the labels for that dataset? Theoretical Claims: The proofs are in the appendix. I checked "A.2. Proof of Proposition 4.3", which I take to be correct. Some steps in the main text eluded me and could be made clearer. 
For instance, L156 makes sense having read Proof of Proposition 4.3, but wouldn't otherwise. Also, it was unclear why a Gaussian RBF kernel needed to be used when other parts of the work were happy to support the kernel used being cosine-similarity. Experimental Designs Or Analyses: Okay. Supplementary Material: A.2. Proof of Proposition 4.3 and skimmed App B and its figures. Relation To Broader Scientific Literature: The work is novel and relevant. This tool has the potential to be useful for the model interpretability community. Essential References Not Discussed: The relationship to Laplacian spectral clustering (e.g. Ng, et al, NeurIPS 2001) wasn't really discussed. I think it would be helpful to articulate the similarities and differences between the SPEC method and spectral clustering to the reader. Other Strengths And Weaknesses: **Strengths** It was not initially obvious that this eigen-based clustering comparison would scale linearly with dataset size, but the proof and algorithm made it clear this was the case. This scaling is important for the utility of the method, since it is desirable to be able to deploy at scale. **Weaknesses** - The work on fine-tuning using SPEC seems a bit rushed within the paper, possibly it was a late addition and space was limited. I think the paper would benefit from further experiments exploring this component of the work. The fact that this can be done to induce clusterings from one encoder into another *without having to align their embedding spaces* is certainly a strength, but it is not highlighted in the paper. - The paper should emphasize clearly that the distance metric is asymmetric. - Fig 1 caption should show more detail about which classes and samples are selected to show in the tSNE plots. Other Comments Or Suggestions: - L133 left. The matrix is positive semi-definite because k is an inner product, so $k(\cdot,\cdot)\in[0,1]$. 
However, the more general statement for a kernel function only constrains its output to the reals. I think the implication could be made more clear by adding the missing step, e.g. on L131 by specifying $\langle\phi(x),\phi(x')\rangle\in[0,1]$. - L068 right. should be \citet not \citep if the citation is part of the text, as it is here. It is bad English grammar to write that something is in (a parenthetical description). As author names appear in the text, you should replace this with "has been studied by \citet{ref1,ref2,ref3,ref4,ref5}." - L077 right. in this case the citation of Gao (2021) should be parenthetical, \citep, instead of textual. The Muennighoff citation is correctly textual \citet. However this sentence doesn't make sense anyway - how is Muennighoff (2023) offering something within the work of Gao (2021)? - L080, L099, L103, L107, L369 citations should also be \citet - L145 left. Bracket should go around the whole fraction (use `\left(` and `\right)`) - L164 left. Should be $\phi_1$ and $\phi_2$, not $\phi_1$ twice. - L180 left. Please hyperlink here to the proof within the Appendix. Similarly for other deferred proofs. - L190 right. Missing subscript command on $\Gamma\Psi\Psi$. - L191 right. if -> of - L215 right. shift-invarian -> shift-invariant - L266 left. Consider replacing *Embeddings’* with *Embedding* - L380 right. No citation for ImageWoof dataset. (There's no paper for it so usually people cite the imagenette github repo.) - L711 Need to adjust so the 1 and T in the sub/superscripts above one another don't run together. At the moment they look like one big, combined dagger symbol! **References** Be careful with casing (often messed up in the bibtex file and has to be manually corrected). Some names need to be title cased: Fréchet, Vendi, ImageNet. Some initialisms need to be changed from lower to upper case: GAN, PCA, COCO, BERT, CLIP. 
But the worst citation formatting is `Ro{bert}a`, which has the wrong casing and literal curly braces (also for `{bert}`)! Questions For Authors: Why use this generated text data instead of an established dataset? Why change from CLIP to OpenCLIP for Table 1? Code Of Conduct: Affirmed. Overall Recommendation: 4
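The O(n) scaling this review highlights rests on replacing the explicit n x n kernel with finite-dimensional feature maps; for shift-invariant kernels such as the Gaussian, the authors' Theorem 4.4 invokes random Fourier features for this purpose. A minimal sketch of that approximation under those assumptions; names and dimensions are illustrative.

```python
import numpy as np

def rff_features(x, r=256, sigma=1.0, seed=0):
    """Map x (n, d) to 2r-dimensional features whose inner products
    approximate the Gaussian kernel exp(-||x - x'||^2 / (2 sigma^2)),
    so eigen-analysis can run on a small 2r x 2r covariance matrix."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=1.0 / sigma, size=(x.shape[1], r))
    proj = x @ w
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(r)

# Sanity check: the feature inner products track the exact Gaussian kernel.
x = np.random.default_rng(1).normal(size=(100, 5))
f = rff_features(x)
approx = f @ f.T
exact = np.exp(-((x[:, None] - x[None, :]) ** 2).sum(-1) / 2.0)
```

With r features, eigendecomposition costs O(r^3) after an O(n r d) feature pass, instead of O(n^3) on the full kernel matrix — which is what makes the method deployable at scale.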
Rebuttal 1: Rebuttal: We thank Reviewer hTAA for the thoughtful and constructive feedback on our work. Below is our response to the comments and questions in the review: ([Our Numerical results are shown in this link](https://github.com/ICML6204/ICML6204/blob/main/ICML_Rebuttal.pdf)) **1- Experiments on GPT-4o generated text data** We generated a dataset using GPT-4o, covering categories including professions, emotions, genders, actions, and objects. We used the prompt below for generation. In the revised manuscript, we will explain the generation process. The prompt for generating the text dataset: “You are an expert in text-to-image models. Text-to-image models take a text prompt as input and generate images. Your task is to generate a prompt describing a person in [Profession] [Emotion], and [Gender] performing [Action] with [Object].” **2- Additional experiment for real text datasets** To further address the reviewer’s comment, we also validated our approach on a large-scale real text dataset: WikiText-2. We split the dataset into 10K samples, each containing 100 tokens. Then, we used SPEC to compare CLIP and RoBERTa embeddings. The results can be found in Figure 2. We observed that RoBERTa better clustered Military Operations, Species Biology, Historical Figures, and Music, while CLIP embeddings more strongly clustered Sports and Science. We also examined the distribution of pairwise distances within each cluster to verify that one embedding successfully captured these clusters while the other was less inclined to do so. Also, we ran the K-means clustering algorithm 50 times on each of the embedding's features and computed the averaged (across the 50 runs) Normalized Mutual Information (NMI) between the K-means labels and the SPEC-identified labels. The results demonstrate that one embedding achieved considerably stronger alignment with KMeans labels. 
**3- Evaluation on SPEC-diff** To show that SPEC-diff effectively measures embedding similarity in SPEC-Align, we considered the SPEC-Align of CLIP to DINOv2 and tracked the SPEC-diff scores while plotting their kernel matrices. The results in Figure 3 show that as the kernel matrices look more similar, the SPEC-diff value decreases. The results suggest SPEC-diff's utility for quantifying embedding alignment in SPEC-Align. **4- Explanation on the statement in Line 156** We will make the explanation clear: since the matrices $\frac{1}{n}\Phi^\top \Phi$ and $\frac{1}{n}\Phi \Phi^\top$ differ only in the order of multiplication, they share the same non-zero eigenvalues, and their eigenvectors are in one-to-one correspondence. **5- The choice of Gaussian vs. cosine similarity kernel functions** The choice of kernel function determines the similarity measure used for the clustering algorithm. The cosine similarity looks only at the angle between the vectors, while the Gaussian kernel concerns the Euclidean distance between the input points. The choice of the kernel function depends on the input embeddings and their induced geometry in the embedding space. As we discussed in the text, our analysis can be efficiently applied to both kernels. **6- Relationship to Laplacian spectral clustering** In the revised draft, we will make the connection to Laplacian spectral clustering more clear. One main difference between the proposed SPEC clustering and Laplacian spectral clustering is the usage of the kernel matrix in SPEC (similar to Kernel PCA) vs. the Laplacian (kernel minus the diagonal degree matrix) in spectral clustering. We will explain this difference in the draft. **7- Additional results on embedding alignment using SPEC-Align** We aligned the CLIP text embedding with the T5-XL model. In Figure 6 of the rebuttal link (last page), we observe that the CLIP kernel has become more similar to T5-XL, and the SPEC-diff is also decreasing. 
**8- Symmetry of SPEC-diff** We would like to clarify that SPEC-diff is a symmetric measure because it is defined as the *spectral radius* (eigenvalue with maximum absolute value) of the difference of the two kernel matrices. Since the definition concerns the spectral radius (and not the maximum eigenvalue), the SPEC-diff is symmetric with respect to the embedding order. We will make this point clear in the writing. **9- Clarification on kernel values** We would like to clarify that the paper’s main results suppose a normalized kernel function where for every $x$, we have $k(x,x)= \langle \phi(x), \phi(x) \rangle=1$, i.e. $\Vert \phi(x) \Vert = 1$. Following Cauchy-Schwarz inequality, $| k(x,x’)|\le 1$ holds for every $x,x’\in\mathcal{X}$. We will make this point clear in the revised draft. **10- Typos and writing improvements** We thank the reviewer for pointing them out. We will correct them in the revised draft. **11-Change from CLIP to OpenCLIP** For alignment experiments, we utilized the widely-cited open source OpenCLIP github repository. We note that in Table 1, the CLIP backbones are the same. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. 1-2. Thank you for adding the additional details on the GPT-4o prompt, and experiments on WikiText-2. This addresses my concern on this point. N.B. If the number of clusters which can be identified by SPEC and KMeans are not the same, I recommend using the Adjusted Mutual Information instead of NMI since the former corrects for the chance level agreement between clusterings. 3,4,6,7,9,10. Thank you for the clarifications and for adding this. 5. For sure, one should mention the relationship between the Gaussian RBF kernel and Euclidean norm in the text, to explain the use of a Gaussian beyond it merely being well known. 
I don't recall the specifics now, but it seemed that some of the theorems assumed a Gaussian kernel and others assumed cosine similarity, and it wasn't clear why this was inconsistent. This was what I was referring to. 8. I think I was too brief and unclear before, so to clarify: SPEC(A,B) involves taking the difference between the kernels for A and B and is asymmetric, resulting in graphs which are different for DINOv2-CLIP than for CLIP-DINOv2. The methodology doesn't find differences; it finds clusters that are prevalent in A but not B, thus SPEC(A,B) is different from SPEC(B,A). Consequently, SPEC-diff(A,B) is also not the same as SPEC-diff(B,A). This asymmetry with respect to their arguments means neither SPEC nor SPEC-diff is a distance metric, just like how KL-divergence is a useful measurement but isn't a distance metric. The authors don't describe either SPEC or SPEC-diff as a distance metric in the paper, but they do refer to it as a **distance measure** several times, e.g. the paragraph starting L064 left. Hence I think it would be prudent to point out, at the point in the paper which says "distance measure", that SPEC is asymmetric and hence isn't a distance metric, to forestall potential confusion on this point. > 11-Change from CLIP to OpenCLIP > > For alignment experiments, we utilized the widely-cited open source OpenCLIP github repository. We note that in Table 1, the CLIP backbones are the same. 11. No, in Table 1 it says the OpenCLIP backbone is the same as the OpenCLIP backbone and the CLIP backbone is not mentioned. Having looked at it again, the methodology for this section (L425, Aligning embeddings using SPEC-align) is still very unclear. I really don't understand what experiment has been done and shown in Table 1. If I have to guess, I can only assume the reason for the switch to OpenCLIP for this experiment was you needed to know the training data and script to use, since that is publicly known for OpenCLIP but not CLIP. 
Otherwise the experiment could have been done with CLIP since it is an open-weight model. Did you train your own version of OpenCLIP from scratch with SPEC-align throughout? Or did you fine-tune OpenCLIP with SPEC-align? If it is a fine-tuning experiment, why couldn't you fine-tune the CLIP model for consistency with the rest of the paper and the first half of this section, and why does Table 1 only say "LAION 400M" for SPEC-align[ed] OpenCLIP, instead of both LAION 400M and MS-COCO 2017? The methodology provided in this paragraph (L412, right) is "we conducted an experiment similar to (Oquab et al., 2024)", a citation of DINOv2, which I assume is only intended to indicate the evaluation methodology and not any of the training methodology. Is this "SPEC-aligned OpenCLIP" model shown in Table 1 the same as the model for which the "SPEC-align CLIP Kernel" is shown in Figure 3? If so, why do they have different names? If not, why does the first paragraph (L436, left) link to Appendix B.4.1, which exclusively discusses OpenCLIP? The only thing I'm sure of from this section is that you need to add a citation to OpenCLIP to the reference list, either the [Zenodo for the repo](https://zenodo.org/badge/latestdoi/390536799), or their [scaling paper](https://openaccess.thecvf.com/content/CVPR2023/html/Cherti_Reproducible_Scaling_Laws_for_Contrastive_Language-Image_Learning_CVPR_2023_paper.html). 12. Additional point: t-SNE (2002) is overly sensitive to the choice of the perplexity parameter, which can result in misleading plots when trying to observe by eye the strength of clustering in plots. I recommend considering using [PaCMAP (2021)](https://arxiv.org/abs/2012.04456) instead, as this is a newer and more stable/reliable dimensionality reduction method which does a better job at retaining the structure of data when projecting it down to 2d, without needing as much fine-tuning of its parameters.
I should have mentioned this before, but didn't want to nitpick unnecessarily. Other. I mentioned before that the subscript was missing for $\Gamma\Psi\Psi$ at L190. Note that it is also incorrect at L231 right. As there are still snags which I feel should be addressed, I will keep my score for now. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer hTAA for the thoughtful feedback on our response. Regarding the points in the reviewer's feedback: **Adjusted Mutual Information** We thank Reviewer hTAA for the suggestion. In the revision, we will report the Adjusted MI in the cases where the KMeans clustering is run to find fewer clusters than the number of SPEC-identified clusters. **The types of kernel functions in the theorems** We would like to clarify that only Proposition 4.3 holds exclusively for finite-dimensional kernels, and hence does not apply to the Gaussian (RBF) kernel. Theorem 4.4 aims to address this gap by using proxy (finite) random Fourier features, which apply to any shift-invariant kernel such as the Gaussian kernel. Except for these results, the rest of our theoretical discussion applies to a general kernel function. We will make this point clear in the revised text. **Symmetry in SPEC(A,B) vs. SPEC-diff(A,B)** The reviewer is absolutely right that the kernel difference matrix in SPEC$(A,B) = K_A - K_B$ is not symmetric in the order of the arguments, and the eigenvalues can have a non-symmetric distribution around 0. On the other hand, regarding the (scalar) SPEC-diff measure, we indeed have SPEC-diff(A,B) = SPEC-diff(B,A). This is because SPEC-diff is the spectral radius of $K_A - K_B$, and the spectral radius is invariant under negation, so it equals the spectral radius of $K_B - K_A$. However, since the spectral radius does not satisfy the triangle inequality, SPEC-diff does not provide a metric distance. In the revision, we will clarify that SPEC-diff is a symmetric pseudo-distance which does not satisfy the triangle inequality.
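The symmetry property discussed above is easy to check numerically. Below is a minimal NumPy sketch (with random unit-norm features standing in for the actual embeddings, so the kernel matrices are hypothetical): the spectral radius of $K_A - K_B$ equals that of $K_B - K_A$, since negating a matrix only negates its eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_normalized_kernel(n, d):
    # Hypothetical embedding with unit-norm features, so k(x, x) = 1.
    X = rng.normal(size=(n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return X @ X.T

K_A = random_normalized_kernel(6, 4)
K_B = random_normalized_kernel(6, 4)

def spec_diff(K1, K2):
    # Spectral radius (max |eigenvalue|) of the kernel difference matrix.
    return np.max(np.abs(np.linalg.eigvalsh(K1 - K2)))

# rho(M) = rho(-M), so the scalar SPEC-diff is symmetric in its arguments,
# even though the eigenvalue distribution of K_A - K_B itself need not be
# symmetric around 0.
assert np.isclose(spec_diff(K_A, K_B), spec_diff(K_B, K_A))
```

This only demonstrates symmetry of the scalar SPEC-diff value; as the response notes, the sign pattern of the eigenvalues of $K_A - K_B$ (used by SPEC itself) still depends on the argument order.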
**Experiments in Figure 3 and Table 1** We thank the reviewer for pointing this out. We would like to clarify that Figure 3's alignment results of CLIP to DINOv2 have been obtained on the ImageNet dataset (not the MS-COCO dataset used to align OpenCLIP [1] in Table 1). A better positioning of Figure 3 would be right after Figure 2, because the two figures share the same problem setting. We will revise and position Figure 3 right after Figure 2 to avoid any confusion. **Results on aligning OpenCLIP in Table 1** We appreciate the reviewer's comment on the switch to the OpenCLIP model in Table 1. First, we would like to clarify that the experiment in Table 1 uses the **MS-COCO 2017 dataset** for the kernel-based alignment to DINOv2. The reason we reported the OpenCLIP results in Table 1 is the considerable gain in the linear-probe ImageNet accuracy results for the aligned OpenCLIP. Although the fine-tuning of OpenCLIP has been performed on MS-COCO, the linear-probe ImageNet accuracy gain was significant, as we reported in Table 1. On the other hand, in our experiments, the same MS-COCO-dataset fine-tuning of the (OpenAI) CLIP model did not lead to a significant gain in ImageNet accuracy. The original CLIP ImageNet linear-probe accuracy was 67.2%, which only changed to 67.4% accuracy in the fine-tuning on the MS-COCO dataset. Still, we would like to highlight that the cluster similarity between the aligned (OpenAI) CLIP and DINOv2 improved significantly in this experiment, as the SPEC-diff value decreased from 0.49 to 0.03 during the alignment. However, for the OpenCLIP case, both the SPEC-diff and ImageNet accuracy simultaneously improved, as reported in the text. Regarding these numerical observations, we hypothesise that the different ImageNet accuracy of CLIP vs. OpenCLIP is due to the (unknown) training data of CLIP and the distribution mismatch between ImageNet (testing data) and MS-COCO (fine-tuning data).
To validate this hypothesis, we used SPEC-Align on the ImageNet data (instead of the MS-COCO 2017 dataset) for aligning CLIP to DINOv2, and we observed that the ImageNet linear-probe accuracy could jump from 67.2% to 73.9% after only 4 epochs of the alignment fine-tuning. We will include the above discussion in the revised paper. Also, we will only discuss the (OpenAI) CLIP model results in the revised main text, and will defer the numerical results of OpenCLIP in Table 1 to the Appendix to ensure the main text's experimental results have sufficient consistency. **Reference of OpenCLIP model** We will include references to both the GitHub repository and the paper of OpenCLIP [1] in the revised text. **Using PaCMAP for dimensionality reduction and visualization of clusters** We thank Reviewer hTAA for the suggestion. We will include the PaCMAP visualizations in the revised text. **Missing subscript** We thank the reviewer for catching the missing subscript. We will fix it in the revision. [1] Cherti et al., "Reproducible Scaling Laws for Contrastive Language-Image Learning" CVPR 2023
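Regarding the Adjusted Mutual Information suggestion earlier in this thread, a small synthetic illustration of why AMI is preferable when the two clusterings have different numbers of clusters (a sketch assuming scikit-learn is available; the labelings are random stand-ins, not actual SPEC or KMeans outputs):

```python
import numpy as np
from sklearn.metrics import (adjusted_mutual_info_score,
                             normalized_mutual_info_score)

rng = np.random.default_rng(0)
# Two unrelated labelings with different numbers of clusters,
# e.g. a 3-cluster KMeans result vs. 10 hypothetical SPEC-identified clusters.
labels_a = rng.integers(0, 3, size=500)
labels_b = rng.integers(0, 10, size=500)

nmi = normalized_mutual_info_score(labels_a, labels_b)
ami = adjusted_mutual_info_score(labels_a, labels_b)

# Chance agreement inflates NMI above zero as the cluster counts grow;
# AMI subtracts the expected mutual information and stays near zero.
assert ami < nmi
assert abs(ami) < 0.1
```

For genuinely unrelated labelings, NMI rises simply because more clusters create more incidental overlap, whereas the chance-corrected AMI reports (approximately) zero agreement.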
Provable Maximum Entropy Manifold Exploration via Diffusion Models
Accept (poster)
Summary: The authors consider the problem of exploration in planning and decision-making problems. This problem has many applications, including the exploration-exploitation paradigm in reinforcement learning. While in most applications the exploration step is performed by sampling from a Gaussian process, the authors consider more general exploration distributions modeled by neural networks. Specifically, for datasets which lie near lower-dimensional manifolds, the authors aim to sample from entropy-maximizing distributions, which maximize entropy over the lower-dimensional manifold. The main challenge encountered by the authors is that while generative models are able to generate points, the authors wish to explore "atypical" subsets of this distribution which may have convenient properties for the given application. In order to compute this entropy-maximizing distribution and sample from it, the authors use the fact that the score function in diffusion models is (in the limit as $t \to 0$) given by the gradient of the log-density of the distribution from which the data was sampled. This allows the authors to explore the space of distributions in an entropy-maximizing manner. Next, the authors apply their exploration method to design an algorithm based on mirror descent for exploring the manifold by sampling from a maximum-entropy distribution on the manifold. ## update after rebuttal Thank you for the helpful clarifications. Claims And Evidence: The authors provide theoretical guarantees which show (i) that their entropy-maximizing exploration step is optimal and (ii) that their algorithm (Algorithm 1) converges to the maximum entropy density after infinitely many iterations of their algorithm. Visually, the designs from their method appear (subjectively) to be more unusual/less conventional than the baseline model, but possibly at the cost of being less realistic and of lower quality.
Empirically, the authors implement their algorithm on an architecture dataset where the goal is to explore atypical architectural designs. Methods And Evaluation Criteria: The theoretical results prove the optimality of the authors' exploration method (Theorem 5.2), and the "asymptotic correctness" of their algorithm (Theorem 7.1). However, they do not provide any non-asymptotic guarantees for their algorithm. For empirical criteria, the architecture dataset used by the authors provides a good illustration of the exploration problem, and allows the authors to compare their algorithm to baseline methods on a compelling application. Visually, the designs from their method appear (subjectively) to be more unusual/less conventional than images sampled without any iterations of their mirror descent algorithm (which serve as a baseline), but possibly at the cost of being less realistic and of lower quality. Thus, it would be good to discuss the apparent tradeoff in visual sample quality. Theoretical Claims: I did not carefully check the proofs in the appendix. Experimental Designs Or Analyses: Please see "Methods And Evaluation Criteria" question above. Supplementary Material: I did not carefully check the proofs in the appendix. Relation To Broader Scientific Literature: Empirically, the authors compare to the Stable Diffusion model as a baseline. However, it may be good to include additional baselines to compare to, if this is feasible. Also, it is a bit unclear from the captions in Figure 3 and Table 1 which models are the authors' and which are the baselines (I believe that $\pi_{pre}$ is the authors' and $\pi_3$ is a baseline model, but this could be made a bit more clear). Essential References Not Discussed: N/A Other Strengths And Weaknesses: The main strengths of the paper are in proposing a method for entropy-maximizing exploration, and for providing an algorithm with one-step optimality and (asymptotic) convergence guarantees for this problem.
The main weaknesses are (i) that the convergence guarantees are asymptotic (there is no guarantee on how fast their method converges) and (ii) the empirical results could be made a bit more clear. Specifically, I found it a bit unclear what the baseline model(s) being compared to empirically are, and what the tradeoff is between image quality and "creativity" or "entropy" when the authors apply their method. If I understand correctly, from Figure 3 there appears to be a tradeoff, so it would be good for the authors to discuss this in more detail. Other Comments Or Suggestions: Please see "Other Strengths And Weaknesses" question above. Questions For Authors: Can the authors clarify in the empirical section which baseline model(s) they are comparing to? I believe the baseline model is the Stable Diffusion model, but this is a bit unclear to me. Also, if I understand correctly, from Figure 3 there appears to be a tradeoff, so it would be good for the authors to discuss this in more detail. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for appreciating our work and asking interesting questions. In the following, we address several important points mentioned within the review that can hopefully let the Reviewer appreciate more the content of this work. **Asymptotic convergence guarantees** We thank the Reviewer for raising these concerns and would like to provide a clarification: We can in fact establish convergence guarantees under three different assumptions, ordered by increasing generality: - Perfect fine-tuning: If fine-tuning is exact, our algorithm terminates in a single step (see Section 5). - Unbiased noise oracle: When the noise oracle is unbiased (i.e., $ b_k = 0$ in Eq. (16)), standard mirror descent analysis yields a convergence rate of $\mathcal{O}(k^{-1/2})$ (see [3]). - General bias term: If the noise oracle in Eq. (16) includes a general bias term, then under the arbitrarily slow decay assumption in Eq. (18), a polynomial-time guarantee is no longer feasible [3]. In this case, stochastic approximation techniques are required, and the best achievable rate is $\tilde{\mathcal{O}}((\log\log k)^{-1})$, which follows from our proof. We chose to present an asymptotic result rather than explicitly stating this rate, as the difference is negligible and in this case it is conventional to present the asymptotic result [3,4]. Among these, the third setting is the most practical, which is why we focused on it in Section 7. We will incorporate these clarifications into the revision. **Empirical results clarification** Within the Experimental Evaluation section, we evaluate qualitatively (i.e., visually) and quantitatively (i.e., via the metrics within Fig. 2-d and Table 1) the performance of models $\pi_k$ obtained by fine-tuning a pre-trained model $\pi^{pre}$ for $k$ iterations of S-MEME. In the case of text-to-image experiments, Fig.
3 shows on the top row a set of images obtained by sampling the pre-trained model $\pi^{pre}$, which in this case corresponds to Stable Diffusion 1.5 [1] trained on the LAION-5B dataset [2], while the bottom row shows images sampled via $\pi_3$, which is the diffusion model obtained after $3$ iterations of our algorithm. Qualitatively, one can visually notice that the images within the top row appear mostly gray and similar to each other, while the images from the bottom row show more diversity both among them and compared with the ones above, while preserving semantic meaning. Quantitatively, ideally we would want to estimate the entropy of the marginal density induced by the fine-tuned model, but this is hard in practice, as explained within Sec. 8.2. As a consequence, within Table 1 we show several proxy metrics to give numerical insights about the performances of the fine-tuned models. Within this table, the label 'S-MEME 1' refers to the model obtained after one single iteration of S-MEME, 'S-MEME 2' after two iterations, and 'S-MEME 3' after three iterations. Crucially, we show that the FID and cross-entropy scores, which aim to capture the degree of diversity of the marginal density induced by a model $\pi_k$ from the pre-trained model, increase over the iterations of S-MEME, while the CLIP score, which assesses the naturalness, or image quality, of the generated samples, is kept high across increasing iterations of S-MEME. Clearly, the trade-off between surprise maximization (i.e. diversity from the pre-trained model) and naturalness due to regularization with the pre-trained model can be chosen arbitrarily by tuning the $\{\alpha_k\}$ parameters of the algorithm, which manage this trade-off as shown in Eq. (9). The experimental results in Sec. 8 aim to evaluate a specific parameter choice to show practical relevance of the proposed scheme.
We thank the Reviewer for these questions and hope that the explanations given can help the Reviewer better appreciate our work. We will upload a revised version of the paper with better clarification of these aspects. **References** [1] Rombach et al., High-resolution image synthesis with latent diffusion models. CVPR 2022. [2] Schuhmann et al., An open large-scale dataset for training next generation image-text models. NeurIPS 2022. [3] Karimi et al., Sinkhorn flow as mirror flow: A continuous-time framework for generalizing the Sinkhorn algorithm. AISTATS 2024. [4] Borkar et al., The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization 2000.
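To make the mirror-descent mechanism in the response above concrete, here is a toy sketch on a discrete probability simplex (a simplified stand-in, not the paper's diffusion fine-tuning): entropic mirror ascent on $H(p)$ updates $p_{k+1} \propto p_k \exp(\eta \nabla H(p_k))$ with $\nabla H(p) = -\log p - 1$, which reduces to the tempering step $p_{k+1} \propto p_k^{1-\eta}$.

```python
import numpy as np

def entropy(p):
    return float(-np.sum(p * np.log(p)))

# Entropic mirror ascent on H(p) over the simplex:
# p_{k+1} ∝ p_k * exp(eta * (-log p_k - 1)) ∝ p_k^(1 - eta).
p = np.array([0.70, 0.20, 0.06, 0.04])
eta = 0.5
history = [entropy(p)]
for _ in range(20):
    p = p ** (1.0 - eta)
    p /= p.sum()
    history.append(entropy(p))

assert all(b >= a for a, b in zip(history, history[1:]))  # entropy is monotone
assert np.allclose(p, 0.25, atol=1e-3)  # iterates approach the uniform maximizer
```

Entropy increases monotonically along the iterates and the limit is the uniform (maximum-entropy) distribution, mirroring the asymptotic convergence discussed in the rebuttal; the diffusion setting replaces this exact tempering step with approximate fine-tuning oracles.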
Summary: This paper introduces a maximum entropy manifold exploration problem. They propose a modification to the pretrained diffusion model to maximize an entropy objective function. They also proposed an algorithm to solve this optimization problem. They supported their results with numerical experiments. Claims And Evidence: Yes. Methods And Evaluation Criteria: Their problem is new, so I feel this point is not applicable here... Theoretical Claims: No. Experimental Designs Or Analyses: I didn't check the code. Supplementary Material: No. Relation To Broader Scientific Literature: It is related to diffusion guidance, an important technique used in the diffusion model community. The paper also mentioned applications in molecular generation. Essential References Not Discussed: Essential references are discussed. Other Strengths And Weaknesses: The problem is novel, while the motivation is not that clear to me Other Comments Or Suggestions: 1. Isn't the maximizer of the entropy function just the uniform distribution over $\Omega^{pre}$? 2. How related are the two objectives (7) and (9)? 3. I feel Section 4.3 is not presented very clearly: What is the motivation for using $s^{pre}$ in the actual implementation instead of $s^{\pi}$? 4. I am not very familiar with RL, but I feel in general we not only want to do exploration but also want to do exploitation and maximize a reward function. This is also the case for diffusion guidance: practitioners want to guide the sample generation towards a direction that maximizes a certain reward function. Can the authors discuss extensions in this direction? Questions For Authors: 1. Isn't the maximizer of the entropy function just the uniform distribution over $\Omega^{pre}$? 2. How related are the two objectives (7) and (9)? 3. I feel Section 4.3 is not presented very clearly: What is the motivation for using $s^{pre}$ in the actual implementation instead of $s^{\pi}$? 4.
I am not very familiar with RL, but I feel in general we not only want to do exploration but also want to do exploitation and maximize a reward function. This is also the case for diffusion guidance: practitioners want to guide the sample generation towards a direction that maximizes a certain reward function. Can the authors discuss extensions in this direction? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the Reviewer for reading our work. In the following, we address several fundamental points and questions mentioned within the review that can hopefully let the Reviewer appreciate more the content of this work. **Motivation of the work** In the following, we aim to make clear the main motivation of this work. Pre-trained generative models can generate plausible objects of a certain data type, e.g. valid images or molecules. Nonetheless, the probability of sampling rare, novel objects will be very low. This is because the model has been trained on a dataset of existing objects to approximate the data distribution. Therefore, rare objects in the data will rarely be sampled. In this work, we adapt the generative model so that its induced distribution is not proportional to that of existing data, but is shifted towards lower-probability regions where novel designs can be sampled, while preserving plausibility (i.e., staying within the approximate data manifold). To the best of our knowledge, this is a fundamental (open) problem for discovery via generative models. **Question 1** Yes, it is. In fact, if $\Omega_{pre}$ were a known set representable in a computer, one could simply define a uniform distribution on such a set and then sample from it. But what if $\Omega_{pre}$ is an unknown and possibly very complex set, e.g., the space of valid molecules or images, that cannot be represented in a computer except implicitly via a generative model? How can we explore such a set? This work tackles exactly this problem, which is both fundamental and non-trivial. We thank the Reviewer for asking this question as we believe it is of central importance, and we will make sure to further clarify this point in a revised version of the work. **Question 2** Equation (7) describes an optimization problem over the set of densities supported over a set $\Omega_{pre}$ represented only implicitly by a pre-trained diffusion model.
This fully captures the manifold exploration problem. On the other hand, Eq. (9) is an unconstrained KL-regularized optimization problem of a reward function obtained by linearizing the entropy, as explained in detail within Sec. 4. As explained in Sec. 6, a concave function like entropy can be maximized via a Mirror Descent scheme [1], which relies on a sequence of simpler optimization problems. In this context, Eq. (7) represents the concave optimization problem, while Eq. (9) captures each simpler optimization problem. Crucially, via this viewpoint, we can use scalable methods for each smaller problem in order to solve the complex exploration problem. We will make sure to make this more explicit within an updated version of the work. **Question 3** Within this work, $s^{pre}$ is the score of a pre-trained diffusion model that one wishes to fine-tune by optimizing a regularized measure of surprise (Eq. (9)). This quantity has to be estimated with respect to the previous model, which at the first iteration of S-MEME corresponds to the pre-trained model with score $s^{pre}$. In practice, as mentioned within Sec. 4.3, the score of the fine-tuned model $s^\pi$ can be initialized as $s^\pi = s^{pre}$. After this step, there is no difference in using $s^\pi$ or $s^{pre}$ to estimate the induced marginal density as these score networks are equal. We will make sure to make this more clear in order to prevent any doubt. **Question 4** Reinforcement learning spans several problems, including pure-exploration ones, e.g., [2,3], where the goal is to explore a certain space, typically via maximization of an intrinsic reward, which in the case of this work would correspond to surprise (i.e., the entropy first variation) in Eq. (9) and (10). Pure exploration problems are often relevant on their own.
In this case, surprise maximization is clearly relevant for discovery applications, where it makes it possible to sample surprising designs, or, as mentioned by Reviewer WBXm, to de-bias a given pre-trained model by inducing a more balanced distribution than the data distribution. Nonetheless, ideas from pure exploration can be used for exploration schemes in exploration-exploitation settings where there is an unknown quantity (e.g., a reward function) to be learned and optimized. In this case, the exploration principle introduced within Eq. (9), which makes it possible to scalably maximize a measure of surprise, could be used as a way to regularize an exploratory scheme for reward learning in black-box optimization settings, e.g., [4]. **References** [1] Nemirovski et al., Problem complexity and method efficiency in optimization, 1983. [2] Hazan et al., Provably efficient maximum entropy exploration. ICML 2019. [3] Mutny et al., Active exploration via experiment design in markov chains. AISTATS 2023. [4] Uehara et al., Feedback Efficient Online Fine-Tuning of Diffusion Models. ICML 2024.
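The relation between the entropy objective (Eq. (7)) and the KL-regularized subproblem (Eq. (9)) described in this rebuttal can be illustrated on a finite simplex (a hedged toy sketch, not the diffusion-model implementation): $\max_p \langle p, r\rangle - \alpha\,\mathrm{KL}(p\|q)$ is solved in closed form by $p^* \propto q\, e^{r/\alpha}$, and with the surprise reward $r = -\log q$ (the entropy first variation up to a constant) and $\alpha = 1$, a single exact step returns the uniform entropy maximizer.

```python
import numpy as np

# Pre-trained density q over a finite support (a stand-in for the manifold).
q = np.array([0.50, 0.30, 0.15, 0.05])

alpha = 1.0
r = -np.log(q)  # surprise reward: the entropy first variation, up to a constant

# Closed-form solution of max_p <p, r> - alpha * KL(p || q):  p* ∝ q * exp(r / alpha)
p = q * np.exp(r / alpha)
p /= p.sum()

assert np.allclose(p, 0.25)  # one exact step reaches the uniform maximizer
```

This matches the statement elsewhere in the thread that, under exact fine-tuning, the algorithm converges in one iteration with $\alpha_1 = 1$; smaller $\alpha$ weights the surprise term more heavily, larger $\alpha$ weights the KL regularizer toward the pre-trained model.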
Summary: The paper presents a framework to perform optimal exploration of the data manifold defined by a pre-trained diffusion model. This can be useful whenever one wants to sample using diffusion models and explore the full data region within the learned data manifold. The approach that the paper proposes is based on self-guided exploration using the density estimated by the diffusion model itself. Using a connection between the entropy of the density learned by the diffusion model and its score function, they propose a sequential fine-tuning procedure that results in a fine-tuned version that should lead to optimal exploration. The fine-tuning procedure is derived using the connection between diffusion models and reinforcement learning and by using mirror descent theory. They evaluate the proposed method on a toy dataset and a text-to-image task. Claims And Evidence: The theoretical claims done in the paper seem supported by proofs. Claims regarding the scalability and effectiveness of the proposed method are supported by two (a toy and a text-to-image) experiments. (More on the experiments below). I have just one minor comment, and I might be wrong here, so I ask the authors to correct me if I am wrong. There is a constant repetition that the method proposed in the paper does not rely on explicit uncertainty quantification. I agree it's not explicit, but as the method is framed in terms of density estimation, it seems to rely strongly on a form of uncertainty of the score function. Because the constant repetition seems to give the message that the method wants to be completely detached from the concept of uncertainty in any of its forms, which might be a bit misleading. Methods And Evaluation Criteria: As mentioned above, to show the effectiveness of the proposed framework, they consider two different experiments: a 2D-toy experiment and a text-to-image experiment involving stable diffusion.
- The first toy experiment is a nice and intuitive way to present the problem the paper wants to tackle and the results of the proposed method. It would be helpful to have all the plots with the same x-range and maybe a different `cmap`, as in (b) some samples are barely visible. I think it would also be interesting to have a plot of the true density and the samples used to train the pre-trained model. Also, there are no details in the appendix on how the authors trained the diffusion model for that specific example. - Regarding the second experiment: the authors consider a pre-trained text-to-image diffusion model based on stable diffusion and perform experiments for two different prompts. They measure results in terms of FID, CLIP, and distance between the two marginal distributions. You mention that you are showing samples for every iteration of the fine-tuning in Figure 9, but it is not clear if that is the case. Also, while the evaluation makes sense, finding a task where the exploration is needed is difficult and I am not fully convinced that the image domain is the best application. Also, by comparing Fig. 4 and Fig. 5 it is difficult to understand that the fine-tuned model is exploring the manifold more, as all the samples look similar and none of them resemble those of the pre-trained model. Those results might be similar to getting samples by using two different guidance strengths, but I might be wrong. Maybe having an additional experiment on molecules with specific properties (as the authors mentioned in the paper) would make the paper stronger. Theoretical Claims: I checked the theoretical claims the authors have in the paper, but I cannot guarantee that the proofs are completely correct. Experimental Designs Or Analyses: I have checked the experimental details provided in the appendix. As mentioned above, there are not so many details on the training set and how the score was trained in the first experiment.
It would also be interesting to know how expensive it is to perform the fine-tuning procedure the authors describe in Algorithm 1, as it seems to consist of a nested loop. Also, how did you usually choose the number of $K$ refinement iterations and the number of $N$ iterations in the inner loop? Additionally, in line 844 you mention 4 trajectories and then you mention a batch size of 8. Are these two related? Supplementary Material: I have checked the experimental details section, the pseudocode and the visual examples. Relation To Broader Scientific Literature: They present connections to broader scientific literature in the related works section, which is nicely written and very detailed. Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Line 831 Wince instead of Since - There might be a possible mistake in the pseudocode in the appendix, I guess. In the adjoint ODE, there is a $k$ subscript whose origin is unclear (if it comes from the outer loop, then you might have to include it as an input). Also, if the initial $t=T$, then the first $\bar{a}$ considered is $T+1$, which does not exist. Questions For Authors: - Where does the $\eta \in [0,1]$ come from in the reverse process in Eq. 2? It's not justified and it does not appear in the forward process. Can the authors comment on this? - Why, in Proposition 1, is the noise distribution $p_0$ a truncated Gaussian? What's the bounding interval in this case? For standard diffusion models, the noise distribution at time T and not 0 approximately converges to a standard Gaussian for variance-preserving processes. Also, in the section "Score matching and generation" the author is using $p_0$ for the data distribution. Indeed, at some point the author starts describing everything in terms of the time going backward, and when $t=T$ we get that the backward marginal $p_T$ corresponds to the data distribution.
Therefore, the notation used in the paper can be improved to make the paper clearer. - It seems that the algorithm depends at each iteration on a different $\{\alpha_k\}_{k=1}^K$ that weighs exploration and exploitation. How should one decide the values of $\{\alpha_k\}_{k=1}^K$? - I might have a naive question: is Assumption 7.1 always satisfied? I understand that if we assume that both models have support on $\mathbb{R}^d$ this is true, but can it happen that by maybe forcing the model to explore too much, due to approximations and noise, the algorithm will end up sampling from regions of the space where there were no data initially and where the pre-trained model was not able to sample from? Similar to the case of using too strong a guidance strength? Like in Fig. 2, it seems that the tuned model gets samples on the true data support but in regions where the pre-trained model gets no samples. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the interesting questions. In the following, we address several points that can hopefully let the Reviewer appreciate more this paper, and which will be included in a revised version. **Uncertainty quantification** We agree, our method does not employ *explicit* uncertainty quantification, but arguably relies on a form of *implicit* uncertainty quantification, although not the classic Bayesian one. **Toy experiment** - The y-range of plot 2.a should be the same as plots 2.b and 2.c. - Since Fig. 2.b and 2.c are histogram plots, for a smooth cmap, points with low probability have a similar color to points with zero probability. We will try to overcome this issue by testing non-smooth cmaps. - The original density is in Fig. 2.a, namely two uniform distributions. - Pre-training was performed via standard denoising score matching on uniform samples (namely 10K) from the two distributions in Fig. 2.a. **Text-to-Image experiment** - The references to Fig. 9 in Sec. 8 should be references to Fig. 3. - We agree with the Reviewer that the method presented in this work is relevant for molecular spaces. Nonetheless, applications in scientific discovery go beyond the scope of this paper, which captures mathematically a novel and relevant problem, proposes fundamental algorithmic advances, and provides a novel type of theoretical analysis for diffusion fine-tuning. We believe that exploration in the visual domain is relevant for multiple applications, and particularly amenable for evaluation as it does not require specific background knowledge. - Since the method proposed directly maximizes a gradient of entropy, one can show that for $\alpha \geq 1$ (in Eq. 9), the entropy value increases monotonically. One could attempt to increase entropy by reducing guidance strength, but this would result in being closer to the unconditional distribution, thus changing the support significantly.
On the contrary, our algorithm (provably) explores within the support of the conditional distribution via entropy maximization, which is substantially different. **Computational complexity and hyperparameter selection** Under exact fine-tuning one iteration is sufficient (see Sec. 5). In practice, we show in Sec. 8 that very few iterations (e.g., $K=3$) can lead to useful fine-tuned models. Therefore, the computational cost aligns with typical fine-tuning, e.g. [3]. The parameter $N$ is significantly problem dependent, and as in the case of Adjoint Matching [3], it requires experimental tuning. **Question 1** It is common to express the reverse process of a diffusion process via an SDE parametrized by a parameter $\eta \in [0,1]$ (e.g., in [1]). The processes obtainable for any valid $\eta$ induce the same marginal distributions. **Question 2** - The truncation in Prop. 1 is merely a technical aspect to build a theoretical formulation that matches the real sampling process, which always gives finite values. The bounding interval can be arbitrary as long as it is finite. Therefore, it can fully capture any real scenario. - Diffusion notation uses $0$ for the time-step corresponding to data (e.g., [1]), while fine-tuning works denote by $0$ the noise level (see, e.g., [2]). This is formally not fully correct, since to compute the noise distribution in closed form one has to take the limit to infinity. A recent rigorous solution (see, e.g., [1]) is the one used in our paper. Nonetheless, we agree with the Reviewer that the sign flip might create confusion and have already added a drawing that clarifies this in a new version. **Question 3** Sec. 4 shows that given exact fine-tuning, the algorithm converges in $1$ iteration with $\alpha_1 = 1$. It should be set to lower values to prioritize the surprise term in Eq. 9, and to higher values to prioritize regularization, as discussed in Sec. 4.4. When considering approximate
fine-tuning oracles, Theorem 7.1 gives necessary conditions on $\alpha_k = 1/\gamma_k$ for the guarantees to hold. In practice, $\alpha_k$ should be experimentally tuned for the specific application. **Question 4** Ass. 7.1 could be difficult to verify in some cases. However, it can be relaxed as $\mathrm{supp}(p_T^{\pi_k}) \subset \tilde{\Omega}$ for all $k$, and $\mathrm{supp}(p_j^{\pi_k}) = \tilde{\Omega}$ for some $j$. Using the same analysis, we can show that the algorithm can solve the exploration problem on $\tilde{\Omega}$. In particular, if $\tilde{\Omega}$ approximates the true support, as in Fig. 2, our analysis guarantees exploration of the approximate support. **References** [1] Zhao et al., Scores as Actions: a framework of fine-tuning diffusion models by continuous-time reinforcement learning. [2] Uehara et al., Feedback Efficient Online Fine-Tuning of Diffusion Models. ICML 2024. [3] Domingo-Enrich et al., Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control. ICLR 2025. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for answering my questions. I have adjusted my score accordingly. I still think that having an additional experiment where exploration is really needed will make the paper and the method stronger; therefore, I would be super interested in seeing that in the updated version.
Summary: This paper considers the problem of exploring the underlying data manifold learned with a diffusion model. It formulates the problem as maximizing the entropy of the probability distribution, and proposes to solve it with a KL-regularized optimization of the first variation of the entropy. It proves that the optimum is obtained given perfect training optimization. Furthermore, it considers the fact that in practice the training and optimization have errors, and proposes an iterative fine-tuning algorithm. It proves that this algorithm converges to the optimal solution under general and realistic assumptions. The method is theoretically connected to continuous-time RL. The method is empirically validated with an illustrative example and by fine-tuning a Stable Diffusion model for a text-to-image task. ## update after rebuttal Thank you for providing the theoretical results and rationales for the evaluation metrics. I agree with these rationales. I think it would be interesting to explore applications of this method, such as drug discovery, as future directions. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The method makes sense, and is practical. The toy example is evaluated with entropy, the target, and illustration, which makes sense and is intuitive. For the text-to-image experiment, the CLIP score shows that the p.d.f. still has the same support, and the FID and cross entropy show it deviates from the pretrained model. For the latter, it is implied that the p.d.f. might have higher entropy because it deviates from the original one with low entropy. I understand that entropy is hard to estimate in this space, but would other diversity metrics such as Inception Score or Vendi score be a better surrogate?
Another interesting experiment would be to first fine-tune a Stable Diffusion model on a biased dataset to introduce imbalance, and see whether the proposed method can recover from it. Theoretical Claims: The theoretical claims make sense to me. I skimmed through the proofs in the appendix and I think they are correct. Experimental Designs Or Analyses: The experimental designs are sound. See **Methods And Evaluation Criteria.** Besides, for future work, it would showcase the utility of the method if tested in broader contexts, such as the molecule generation problems mentioned in the paper. Supplementary Material: I briefly read the proofs, algorithm, experimental details, and additional image results. I think they make sense. Relation To Broader Scientific Literature: The paper deals with the data manifold exploration problem, a common problem in real data. The paper is related to diffusion models, and can be applied as a fine-tuning step to existing models. As potential future work, the method might be useful for improving the exploration of drug discovery and other generative-discovery tasks. Essential References Not Discussed: This is not directly comparable, but an interesting line of work that might be related, for the problem of manifold exploration/sample bias reduction: Geometry-Based Data Generation (Lindenbaum et al., 2018, NeurIPS), Geometry-Aware Generative Autoencoders for Warped Riemannian Metric Learning and Generative Modeling on Data Manifolds (Sun et al., 2025, AISTATS). These methods solve the problem from a geometrical perspective. Other Strengths And Weaknesses: ## Strengths: 1. The paper is very well written and theoretically solid, with proofs of convergence to the proposed objectives. 2. The paper does not stop at the ideal case of perfect pretraining and optimization; admitting that this case is not attained in practice, it further proposes a solution, making the method more practical.
The theoretical assumptions for the method are justified. 3. The method is directly usable on pretrained diffusion models and has potential applications in tasks requiring exploration. ## Weaknesses: 1. The evaluation metric for the image generation task can be improved (see **Methods And Evaluation Criteria**). 2. The method could be showcased on tasks with a stronger motivation for manifold exploration (such as drug discovery). The current example of image generation is not very intuitive. Other Comments Or Suggestions: Typo in the formula in line 95, right column: $x$ should be $x_t$; also, it should be minimizer, not maximizer. Questions For Authors: 1. Are there empirical or theoretical results on the scalability of this method, especially on larger pretrained models? 2. In Assumption 7.3 and Thm. 7.1, do we assume infinitely many steps $k$ (an infinite sum)? Then is there an estimate of the convergence rate? 3. In terms of connection to RL, is it just notational? It seems that the problem is formulated and solved without relying on properties or algorithms of RL. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the Reviewer for recognizing our work as very well written, theoretically solid, and practical. In the following, we address several points and questions mentioned within the review. **Inception Score (IS) and Vendi Score (VS).** IS: To the best of our understanding, the Fréchet Inception Distance (FID) has been introduced as a way to mitigate the inability of the IS score to recognize intra-class mode collapse [1], which is related to what we are aiming to measure in our setting. In particular, in our setup with images, the generated points would all be part of one class. As a consequence, IS might not be a better evaluation measure than FID. VS: We agree with the Reviewer that this measure of diversity might be relevant in this context. Nonetheless, choosing a relevant kernel in practice might be non-trivial, and if the feature space is obtained via a non-linear map, which is typically the case, it is unclear how well this measure aligns with entropy. In any case, we are grateful for this suggestion and will certainly explore it in this context. **Relevance to de-bias pre-trained generative models** Although not strictly related to the motivation presented in this work, we agree that the presented method is very relevant to de-bias generative models, and further research could focus on its potential impact on this important problem. **Question 1.** The presented method can run using any state-of-the-art (linear) diffusion model fine-tuning scheme, such as Adjoint Matching [2] or alternative methods, e.g., [3]. To the best of our knowledge, these methods are very scalable and they have been shown to successfully fine-tune large-scale pre-trained models for images [2,3], molecules [3], and proteins [3], among others. Crucially, due to Eq. (12) within Sec. 
4.3, our algorithm becomes effectively as scalable as the state-of-the-art methods mentioned above, which have already proved successful in relevant real-world applications with larger pre-trained models [2,3]. **Question 2.** We can establish convergence guarantees under three different assumptions, ordered by increasing generality: - Perfect fine-tuning: If fine-tuning is exact, our algorithm terminates in a single step (see Section 5). - Unbiased noise oracle: When the noise oracle is unbiased (i.e., $b_k = 0$ in (16)), standard mirror descent analysis yields a convergence rate of $O(k^{-1/2})$ (see [7]). - General bias term: If the noise oracle in Eq. (16) includes a general bias term, then under the arbitrarily slow decay assumption in Eq. (18), a polynomial-time guarantee is no longer feasible [6]. In this case, stochastic approximation techniques are required, and the best achievable rate is $\tilde{\mathcal{O}}((\log\log k)^{-1})$, which follows from our proof. We chose to present an asymptotic result rather than explicitly stating this rate, as the difference is negligible and in this case it is conventional to present the asymptotic result [6,7]. Among these, the third setting is the most practical, which is why we focused on it in Section 7. Notably, our results show that convergence does not require running $k \to \infty$ even with the most general assumptions (i.e., case 3 above). **Question 3.** The presented version of S-MEME relies on optimal control methods as discussed in Sec. 4.2. Nonetheless, it is standard to refer to this recent problem of optimization over the space of admissible state distributions to maximize a known functional as Convex RL [4] or General Utilities RL [5]. This is the main reason why we use the RL notation. Moreover, to the best of our understanding, nothing prevents replacing the control oracle currently used with classic MDP planning methods used in RL.
In conclusion, we agree with the Reviewer that the choice of introducing the problem with RL notation is mostly a notational convention. Ultimately, we want to thank the Reviewer for reporting a typo, which we have corrected, and mentioning several relevant related references of which we were not aware. **References** [1] Lucic et al., Are GANs Created Equal? A Large-Scale Study. NeurIPS 2018. [2] Domingo-Enrich et al., Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control. ICLR 2025. [3] Uehara et al., Feedback Efficient Online Fine-Tuning of Diffusion Models. ICML 2024. [4] Mutti et al., Challenging common assumptions in convex reinforcement learning. NeurIPS 2022. [5] Zhang et al., Variational policy gradient method for reinforcement learning with general utilities. NeurIPS 2020. [6] Karimi et al., Sinkhorn flow as mirror flow: A continuous-time framework for generalizing the sinkhorn algorithm, AISTATS 2024. [7] Borkar et al., The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization 2000.
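As a side note to Question 2 above: the $O(k^{-1/2})$ rate quoted for the unbiased-oracle case is the familiar Monte Carlo / stochastic-approximation rate. The toy sketch below (a hypothetical illustration of that rate, not the paper's algorithm) shows the scaling for a plain running average under unbiased Gaussian noise.

```python
import random

# Toy illustration (not the algorithm discussed above): with unbiased noise,
# a running average of k noisy observations of 0 has error O(k^{-1/2}),
# the rate cited for the unbiased-oracle case.
def sa_error(k, seed):
    """|mean of k unbiased Gaussian observations of 0|."""
    rng = random.Random(seed)
    total = sum(rng.gauss(0.0, 1.0) for _ in range(k))
    return abs(total / k)

# Averaging over a few seeds, the error shrinks roughly like k**-0.5:
err_100 = sum(sa_error(100, s) for s in range(20)) / 20
err_10000 = sum(sa_error(10_000, s) for s in range(20)) / 20
print(err_100, err_10000)  # the second is roughly 10x smaller
```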
Enforcing Latent Euclidean Geometry in Single-Cell VAEs for Manifold Interpolation
Accept (spotlight poster)
Summary: This paper presents a novel training framework for VAEs for count data (i.e., negative binomial likelihood), where it enforces straight lines in latent space to map to geodesic paths on the data manifold, an assumption often made in VAE models for scRNA data but not explicitly enforced (or checked). In doing so, and combined with optimal transport-based modeling of cell fate transitions, FlatVI achieves good data reconstruction while better guiding the mapping between straight latent trajectories and cellular transitions. ## update after rebuttal I thank the authors for their clarifications, and maintain my score. Claims And Evidence: The main claim is that, by adding the novel flattening loss to NB-VAE training, straight latent trajectories that is often used for interpolation would better correspond to minimal paths on the data manifold, e.g., by going through an intermediate cell state. As far as I can tell, the authors successfully demonstrate this on synthetic and multiple real datasets. Methods And Evaluation Criteria: The method is elegant and simple, adding just one additional loss term to achieve the desired result, though requiring an additional regularization hyperparameter (see Questions). The experiments and evaluation criteria chosen appear to be appropriate, and demonstrate FlatVI’s improved performance over existing baselines. Theoretical Claims: I did not go through the loss proof in the amount of detail required to reconstruct it, but superficially everything checks out, aside from some questions in the end. Experimental Designs Or Analyses: The experimental design and results seem comprehensive (checked all), and overall quite performant. Supplementary Material: Yes: D.2, E, H, K.1.4 Relation To Broader Scientific Literature: This paper is a nice addition to the VAE, and more broadly, generative modeling literature. 
Having the correspondence between straight latent trajectories and geodesic trajectories on the data space would be useful in general for interpretability, and the specific negative binomial likelihood VAE would be useful in other applications with (low) count data, like neural recordings with spike counts. Essential References Not Discussed: None I’m aware of. Other Strengths And Weaknesses: - The paper is clearly and concisely written, and overall quite informative. - The exposition from deterministic to stochastic AE was helpful to guide the reader. - Really nice results overall and great visualization. Other Comments Or Suggestions: None Questions For Authors: - can you explain in more detail why alpha (in Eq 8) is a trainable parameter, and how varying a frozen alpha affects the results? - could you clarify why the inverse dispersion parameter is not a function of the latent space (line 232-234), and whether it is estimated through the decoder (e.g., Eq. 2)? The evaluation metric considers reconstruction error on dispersion, so it seems that it is given by the decoder? - What is the scale we should expect for the MSE between geodesic and euclidean distances? Is 0 the theoretical minimum if everything worked correctly? - How should one choose lambda in practice, especially since the optimal for the three metrics do not necessarily coincide in Table 1 (though the differences in mu and theta are minimal)? - Why are there these “streaks” of very high condition number for FlatVI in Fig 2b? While K.1.4 and Figure 7-9 touches on this point, I don’t quite see how the boundary effect highlighted in Fig 9 translate to the high CN regions in Figure 2? - Naive question: in figure 4, imagining paths from initial to terminal state seems quite non-linear given the curved manifold (e.g., for Pancreas). Given that the trajectories should be straight and PCA is a linear reduction, wouldn’t we expect more linear manifolds and “straighter” paths from early to terminal? 
Instead of PCA and taking the first two PCs, is there a way to do dimensionality reduction but preserving the linear paths? Or am I fundamentally misunderstanding something? - Another naive question, more broadly, why is it assumed (or desirable) that straight paths in latent space map to geodesic paths in data space through a cell’s transition? - how would this scheme extend to work with, e.g., beta VAEs with an additional hyperparameter (on the KL loss) controlling the prior spread? It would be interesting to be able to regularize both the latent trajectory straightness and posterior variance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank zkLA for their thorough review and positive assessment of our work, and we welcome the opportunity to address their remaining questions. > A1 $\alpha$ relaxes the strictness of Euclidean regularisation. When the metric tensor $\mathrm{M}(\mathbf{z})$ is the identity matrix $\mathbb{I}_d$, the latent space is strictly Euclidean, and the inner product reduces to the standard Euclidean inner product. Our goal is to enforce a constant local geometry in the latent space. On a Riemannian manifold, distances are computed as in Eq.5 in the manuscript. If $\mathrm{M}(\mathbf{z}) = \mathbb{I}_d$ everywhere, the shortest path is a straight line. By setting $\mathrm{M}(\mathbf{z}) = \alpha \mathbb{I}_d$, we preserve straight geodesics while uniformly scaling tangent vectors. The trainable $\alpha$ provides flexibility in latent metric scaling, improving reconstruction; freezing it restricts optimal geometry learning to an assumed scale. > A2 In single-cell analysis, $\theta$ is often modelled as a gene-specific parameter, capturing overdispersion from technical or biological effects. These are systematic gene-level patterns reflecting detection rates or expression variability. Thus, $\theta$ is learned as a free parameter by the likelihood model, not as a function of the latent space. Gradients from the likelihood loss update both the VAE and gene-wise $\theta$. Though $\theta$ is independent of latent states, its estimation reflects reconstruction quality by interacting directly with the likelihood loss. Hence, we include it in Tab.1. > A3 In Tab.1, we measure how closely straight paths align with geodesics by computing their MSE. Since geodesics on the latent manifold (equipped with the pullback metric) lack closed-form solutions without perfect regularisation, we approximate them numerically.
Specifically, we use cubic splines that minimise the total Riemannian displacement between points, with local distances weighted by each model’s learned pullback metric (see l. 995-1023). When $\alpha=1$, the pullback metric reduces to the identity and the minimum MSE achievable is 0. In practice, our spline-based method introduces numerical errors, so the MSE rarely reaches 0. This metric remains robust for comparing how models deviate from ideal Euclidean behaviour. > A4 Our simulations focus on understanding model behaviour and validating our claims. On real data, where selection matters, we recommend choosing $\lambda$ based on the downstream task. For trajectory inference, we tested $\lambda$ and selected, for each dataset, the highest possible $\lambda$ that improved trajectory reconstruction. For a detailed discussion, see App.G. > A5 The mismatch occurs because Fig.9 displays a UMAP projection of the same latent encodings shown directly in Fig.2. While Fig. 2 presents the raw 2D latent space, we applied UMAP in Sec. K.1.4 to better visualize how unflattened regions are enriched in decision boundaries. Both figures represent identical data, the UMAP simply offers an alternative view for better visualisation. We will clarify this in the Appendix. > A6 In Fig.4, we present data point embeddings rather than trajectories. We believe the non-linear structure observed, particularly for the Pancreas dataset, arises because PCA is a linear method that captures the global variance but does not preserve the local geometry of the data. Although our regularisation aims to approximate Euclidean space locally, some residual non-Euclidean structures may remain since we trained the model with $\lambda=1$. PCA, being a global method, emphasises this non-linearity when reducing the data to 2D. This is less apparent in the MEF dataset, where flattening appears to better unfold the natural temporal structure (Fig.4b). 
To better preserve the local structure, non-linear methods such as Isomap could be considered. We chose PCA for its interpretability and to analyse dimensionality across latent spaces. > A7 Many single-cell tools assume that Euclidean distances in the representation space reflect biological relationships. FlatVI explicitly links the VAE's latent manifold to a locally Euclidean latent space, ensuring these distances better match biological proximity in terms of single-cell counts. This creates a principled approach for downstream analysis (kindly refer to the 1st answer to NJwg). > A8 To extend FlatVI to $\beta$-VAEs with hyperparameters controlling the prior spread, we can incorporate the flattening loss alongside the modified KL term. This allows us to regularise both the latent trajectory straightness and posterior variance. Crucially, this extension does not disrupt the core guarantees of FlatVI, as the FlatVI loss only influences the latent space geometry shaped by the decoder model (Eq.7), while the KL term balances the posterior with the prior. This ensures that geometry regularisation and posterior variance are controlled without compromising the underlying structure of the latent space.
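A minimal numerical sketch of the point made in A1 above (hypothetical code, not the authors' implementation): under $\mathrm{M}(\mathbf{z}) = \alpha \mathbb{I}$, the discrete Riemannian length of any path is exactly $\sqrt{\alpha}$ times its Euclidean length, so straight lines remain shortest paths and only the scale changes.

```python
import math

def path_length(points, metric):
    """Discrete Riemannian length: sum of sqrt(dz^T M dz) over segments."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        # quadratic form dz^T M dz for a constant 2x2 metric M
        q = metric[0][0]*dx*dx + 2*metric[0][1]*dx*dy + metric[1][1]*dy*dy
        total += math.sqrt(q)
    return total

alpha = 4.0
scaled = [[alpha, 0.0], [0.0, alpha]]     # M(z) = alpha * I_2
identity = [[1.0, 0.0], [0.0, 1.0]]       # M(z) = I_2

path = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]  # a straight polyline
L_euc = path_length(path, identity)
L_alpha = path_length(path, scaled)
print(L_alpha / L_euc)  # ≈ 2.0 == sqrt(alpha): uniform rescaling only
```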
Summary: The manuscript focuses on learning a specific representation intended for understanding dynamics, specifically arising from single-cell RNA sequencing (scRNA-seq). The authors regularize the latent space of a Variational Autoencoder (VAE) using the geodesics of a statistical manifold. This approach ensures that straight lines in the latent space correspond to geodesics on the statistical manifold. The authors demonstrate the advantages of their method on toy datasets with known geodesics, as well as on real-world scRNA-seq datasets. Claims And Evidence: The claims are supported by theoretical and experimental results. Methods And Evaluation Criteria: The method is theoretically grounded and is explained clearly. It has been rigorously tested on synthetic datasets with known geodesic properties, as well as on scRNA-seq datasets that exhibit temporal dynamics. Theoretical Claims: I checked the derivation of Proposition 4.1 and it seems correct. Experimental Designs Or Analyses: I reviewed all the experiments and found them to be reasonable. However, I find Figure 2 difficult to interpret, and the conclusions drawn from the first two columns are unclear. That said, I appreciate the quantitative experiments presented in Table 2. Supplementary Material: I only read Section K. Relation To Broader Scientific Literature: The key contributions focus on enhancing the interpretability of the latent spaces in Variational Autoencoders (VAEs). Other studies have also attempted to regularize these latent spaces. The authors reference these related works and provide a comparison with one of them. Essential References Not Discussed: I believe there are no essential references missing. Other Strengths And Weaknesses: The manuscript is well written, and the method is clearly explained. I find most of the experiments to be of high quality; however, I find Figure 2 difficult to interpret. I believe the experimental section could benefit from slight revisions.
Specifically, I suggest moving some tables from the supplementary material, such as Table 4 and Table 8, into the main body of the text, as they present compelling qualitative results. Other Comments Or Suggestions: I don't have any other comments or suggestions. Questions For Authors: I don't have any other questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank Reviewer 5Fan for taking the time to review our work and for highlighting the aspects they found positive. We also appreciate their constructive suggestions regarding structural improvements and areas that could benefit from further clarification. > Interpretability of Fig. 2 The first two columns of Fig. 2 illustrate how the values of the Condition Number (CN) and the Variance of the Riemannian Metric (VoR) are distributed across latent cell observations. We offer a brief, intuitive description of these metrics: * VoR: Measures how the Riemannian metric at a given latent point varies relative to its immediate neighbours. In a perfectly Euclidean latent space, this metric would remain constant throughout, as Euclidean manifolds exhibit uniform geometry. Therefore, we expect this metric to take low values in the regularised space. * CN: Quantifies how uniformly distances are scaled in different directions within the latent space geometry. Specifically, it represents the ratio between the largest and smallest eigenvalues of the metric tensor. In a perfectly Euclidean space, this ratio would be 1, meaning distances are preserved equally in all directions. A higher CN indicates greater distortion, with distances being stretched more in some directions than others. In the first two columns of Fig. 2, we show that the latent manifold produced by FlatVI consistently exhibits lower VoR and CN values compared to its unregularised counterpart, suggesting a closer approximation to Euclidean manifold properties. These results should be considered alongside Tab. 1, which demonstrates that stronger regularisation does not compromise reconstruction accuracy. In other words, FlatVI enforces the Euclidean constraint for downstream tasks while enabling precise trajectory reconstruction. Despite these improvements, some regions with elevated VoR and CN values remain in FlatVI’s output. 
We discuss these regions in Section K.1.4, where we hypothesise that they correspond to rapidly changing manifold areas located at the intersections between different cell-type categories. > Moving tables 4 and 8 into the main body. We agree with the reviewer and thank them for their suggestion. In case we get the chance to present a revised version with an extra page, we will add Tab.4 to Section 5.1 and Tab.8 to Section 5.4.
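The CN and VoR diagnostics described in this rebuttal can be written down in a few lines. The sketch below is hypothetical helper code, not the authors' estimator (in particular, the Frobenius-based VoR proxy is my assumption for "variance of the metric across neighbours"): CN is the eigenvalue ratio of the 2x2 metric tensor, computed here in closed form.

```python
import math

def condition_number(M):
    """lambda_max / lambda_min of a symmetric positive-definite 2x2 metric."""
    a, b, d = M[0][0], M[0][1], M[1][1]
    mean, det = (a + d) / 2.0, a * d - b * b
    gap = math.sqrt(max(mean * mean - det, 0.0))  # half the eigenvalue gap
    return (mean + gap) / (mean - gap)

def variance_of_metric(M, neighbours):
    """Proxy for VoR: mean squared Frobenius distance to neighbouring metrics."""
    def frob2(A, B):
        return sum((A[i][j] - B[i][j]) ** 2 for i in range(2) for j in range(2))
    return sum(frob2(M, N) for N in neighbours) / len(neighbours)

I2 = [[1.0, 0.0], [0.0, 1.0]]
stretched = [[4.0, 0.0], [0.0, 1.0]]
print(condition_number(I2))         # 1.0: isotropic, Euclidean-like
print(condition_number(stretched))  # 4.0: distances stretched 4x in one direction

# VoR proxy is 0 when the metric is constant across neighbours:
print(variance_of_metric(I2, [I2, I2]))  # 0.0
```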
Summary: The paper proposes FlatVI, a VAE training strategy that enforces local Euclidean geometry in the encoder’s latent space by regularizing the pullback metric of the decoder to approximate local Euclidean geometry in latent space. It validates the proposed method on simulated data and on the single-cell trajectory inference task. It also demonstrates two other use cases of the locally Euclidean geometry in latent space, including latent vector fields and single-cell latent space visualization. Claims And Evidence: The authors make claims in the paper about straight paths and enforcing locally Euclidean geometry. This is only the case for globally Euclidean data. Indeed, curved manifolds can be locally Euclidean, and a lot of work has gone into showing how to derive geodesics there (see Neural FIM, Fasina et al., ICML 2023; GAGA, Xu et al., AISTATS 2025; Metric Flow Matching, Kapusniak et al., 2024). Methods And Evaluation Criteria: I think the idea of using the decoder’s pullback metric is very interesting, but I am unsure about the benefits of enforcing Euclidean geometry in latent space, especially for single-cell applications. To the best of my knowledge, single cells are often hypothesized to lie on a manifold (manifold hypothesis), and the geometric and topological analysis of single-cell data has proven very useful (Rizvi, A., Camara, P., Kandror, E. et al. Single-cell topological RNA-seq analysis reveals insights into cellular differentiation and development; Moon, K.R., van Dijk, D., Wang, Z. et al. Visualizing structure and transitions in high-dimensional biological data). Enforcing Euclidean structure might not be ideal for capturing the intrinsic non-Euclidean geometry often present in single-cell data.
The paper uses OT Conditional Flow Matching (OT-CFM) in its experiments, and this makes sense since OT-CFM does straight-path interpolation, but there have been many recent works leveraging the non-Euclidean geometry of the underlying manifold for trajectory inference tasks, such as Metric Flow Matching (Kapusniak et al.), Geometry-Aware Generative Autoencoder (Sun et al.), WLF-SB (Neklyudov et al., 2024), etc. Theoretical Claims: The paper derives the Fisher Information Metric for the negative binomial distribution. I scanned the proof and it seems reasonable. Experimental Designs Or Analyses: The paper showcased four experiments: 1) sanity check on simulated data, 2) single-cell trajectory inference, 3) latent vector field, and 4) single-cell representations. The experiments showed that locally Euclidean geometry is preserved in the latent space of FlatVI, but they do not lend strong support for why FlatVI is better for modeling single-cell data. Overall, the experiments are interesting but not compelling enough. Indeed, they did not compare to Neural FIM, GAGA, or Metric Flow Matching. The experiment on simulated data is more like a sanity check for whether the latent space is regularized as expected. The experiment on single-cell trajectories shows that FlatVI performs better than a VAE without the flatness regularization and than GAE. However, the interpolation method used in the experiment is OT Conditional Flow Matching (OT-CFM), which assumes straight-path interpolation, so OT-CFM could confound the experimental results, making FlatVI look more favorable. The paper would also benefit from benchmarking against more methods than just these two. The same can be said for the latent vector field experiment. Here, OT-CFM was used to compute the velocity fields, and then the vector fields were used to learn the terminal states of the Pancreas data.
The experiment showed that FlatVI has more consistent velocity fields compared to the other two methods, but it’s not clear to me why consistent velocity is more ideal for single-cell data. For the data representation, the paper claims that “FlatVI effectively represents the biological structure in the latent space as illustrated by the separation between initial and terminal states.” However, in Figure 4, I do not see a clear advantage of FlatVI compared to the other methods, especially on EB and MEF. Supplementary Material: Scanned the entire supplementary material. Relation To Broader Scientific Literature: The key contributions of this paper are about regularizing the latent space of a VAE to approximate a locally Euclidean geometry through the pullback metric of the decoder. However, I am not convinced of the benefits of enforcing Euclidean geometry on the latent space, especially for single-cell data, considering there have been many works showing the benefits of leveraging the non-Euclidean characteristics of single-cell data (Rizvi, A., Camara, P., Kandror, E. et al. Single-cell topological RNA-seq analysis reveals insights into cellular differentiation and development; Moon, K.R., van Dijk, D., Wang, Z. et al. Visualizing structure and transitions in high-dimensional biological data). They need to cite and discuss these as well as the other papers I mentioned above. Essential References Not Discussed: This paper uses the pullback metric of the decoder to regularize the latent space but lacks citations of previous works on geometrically regularizing the latent space, such as Geometry-Aware Generative Autoencoder (Sun et al.), which regularizes the local latent space to match geodesic distances in data space, Geometry Regularized Autoencoders (Duque et al.), Gromov-Monge Embedding (Lee et al., 2024), and Neural FIM (Fasina et al., ICML 2023).
Other Strengths And Weaknesses: Overall, I think deriving a metric through the decoder pullback is very interesting, but the paper did not provide a compelling case for enforcing latent Euclidean geometry on single-cell data. The empirical experiments could benefit a lot from benchmarking against more interesting methods rather than only the VAE baseline and GAE. Other Comments Or Suggestions: Please put the contributions in relation to related work, and offer more insight on the global Euclidean assumption. Questions For Authors: Are there regularizations needed to make the geometry globally Euclidean? For example, can L_flat be modified? Code Of Conduct: Affirmed. Overall Recommendation: 2
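As a cross-check on the Theoretical Claims above — a standard result stated here independently of the paper's appendix, possibly in different notation: for a negative binomial with mean $\mu$ and fixed inverse dispersion $\theta$, the Fisher information in the mean parametrization is

$$I(\mu) = -\mathbb{E}\left[\partial_\mu^2 \log p(x \mid \mu, \theta)\right] = \frac{1}{\mu} - \frac{1}{\mu + \theta} = \frac{\theta}{\mu(\mu + \theta)} = \mathrm{Var}(x)^{-1},$$

consistent with $\mathrm{Var}(x) = \mu + \mu^2/\theta$. When the likelihood factorizes over genes, a decoder-pullback derivation should reduce to this form per coordinate.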
Rebuttal 1: Rebuttal: We sincerely thank oZ7a for their elaborate review. Their informed criticism offers us an opportunity to improve our submission. We will refer to two additional rebuttal figures stored at https://figshare.com/s/74ca822781c60b2a85f2. > Global regularisation Similar to GAE, we propose to: - Sample a couple of cells and encode them. - Approximate the latent geodesic distance between representations using the pullback. One could fit an energy-minimising cubic spline weighing local shifts by the pullback. - Minimise the difference between the latent Euclidean and the geodesic distances. This approach is expensive because it contains an unstable inner optimisation loop requiring decoding and gradient computation. > Local vs global While our flattening loss enforces local Euclidean geometry in the latent manifold, we emphasise that this regularisation is applied specifically in the regions where the data lies. Under the assumption that the latent manifold is sufficiently densely sampled, these local constraints can collectively approximate a globally Euclidean structure for the data-supported manifold. This assumption aligns with common practices in single-cell manifold estimation, where smooth transitions between neighbouring states define the manifold structure. We will add a clarifying assumption statement. > Contextualisation Apologies for the missing citations; we will add them upon revision. * We contextualize our model in App. E (ref. in l.239, col.2). Methodologically, FlatVI is a representation learning model, creating a space where tractable interpolations map to meaningful data-space paths. In contrast, MFM is an interpolation model, operating in a fixed PCA space and focusing on FM regularisation to align with a neighbourhood-based manifold. Unlike FlatVI, MFM does not account for probabilistic aspects of high-dimensional single-cell data. * More related are GAGA and NeuralFIM. 
Both methods perform latent geodesic regularisation to reflect neighbourhood structures in single-cell datasets, using k-NN graph-based approaches. Different from FlatVI, both models: * Only consider low-dimensional and continuous input data (e.g., PCA). * Do not enforce a tractable latent manifold, hence, they have to learn geodesics between latent observations with a NeuralODE. These aspects may hinder scaling to the gene space, as sparse expression vectors are unsuitable for Euclidean distances, and geodesic NeuralODEs are unstable (see below). > Manifold assumptions There is a distinction between the goals of the cited studies and our paper. For instance, PHATE produces a geometrically sound embedding of the data, exploiting neighbouring structures to **discover** patterns in the dataset. We implement a strategy that connects distances on a latent manifold to the geometry of high-dimensional and noisy discrete data to replace the standard VAE logic in combination with distance-based tools. FlatVI operates through the decoder function. When the decoder is expressive and assuming geodesic convexity, it reconstructs the data manifold's geometry and aligns our latent structure to the statistical manifold. Interpolating simpler proxy manifolds is established [1]. > FlatVI's benefits Kindly refer to A1. to NJwg and A7. to zkLA. > Benchmark We focus on FlatVI’s representation aspect and how our guarantees improve downstream tasks. Our baselines, NB-VAE and GAE, reflect this. Comparing to NB-VAE serves as ablation to highlight our regularisation’s effects. GAE, like FlatVI, enforces Euclidean geometry via a k-NN-based approach, making the comparison fair. We validate our claims above by comparing FlatVI with GAGA (concurrent work under ICML guidelines). Both methods encode single-cell data into a latent space, where we interpolate between Multipotent and mature cells—using geodesic interpolations for GAGA and linear for FlatVI. 
Decoding the results, we assess marker gene reconstruction along lineages (Reb. Fig. 1a). FlatVI’s interpolations align better with true gene trends, whereas GAGA shows unnatural oscillations (Reb. Fig. 1a). GAGA’s simulations are also more unstable (Reb. Fig. 2a) and slower (Reb. Fig. 2b). Due to memory issues, we couldn’t run geodesic interpolations with NeuralFIM, but its training was slower than FlatVI (Reb. Fig. 2c). Since the FM algorithm is not the novelty of our work, we did not compare it with MFM. > Consistency Kindly refer to A2 to NJwg. > PCA - MEF and EB We agree that the sentence may sound ambiguous and overstated. In the text, we only mention the MEF and Pancreas datasets, where FlatVI better unrolls the dynamics. We quantified the separation between initial and terminal states in latent spaces using clustering metrics in Tab. 8. While our method outperforms competitors on MEF and Pancreas, the reviewer correctly noted that it does not show a clear advantage on EB. We will clarify this in the text. [1] de Kruiff et al., Pullback Flow Matching on Data Manifolds
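The flattening regularisation discussed in this thread — driving the decoder's pullback metric toward a scaled identity at sampled latent points — can be sketched in a toy setting. This is an illustration only, not the paper's implementation: it uses a hand-coded tanh decoder and the Euclidean data-space metric in place of the FIM pullback, so the metric is simply $J^\top J$.

```python
import numpy as np

def decoder(z, W, b):
    """Toy decoder mapping a latent point z to data space."""
    return np.tanh(W @ z + b)

def decoder_jacobian(z, W, b):
    """Analytic Jacobian of the toy decoder at z: d tanh(u)/du = 1 - tanh(u)^2."""
    u = W @ z + b
    return (1.0 - np.tanh(u) ** 2)[:, None] * W

def flattening_penalty(z, W, b, alpha=1.0):
    """Frobenius deviation ||J^T J - alpha I||_F^2 of the (Euclidean) pullback
    metric from a scaled identity -- the quantity a flattening-style loss
    drives toward zero at sampled latent points."""
    J = decoder_jacobian(z, W, b)
    M = J.T @ J
    return float(np.sum((M - alpha * np.eye(M.shape[0])) ** 2))

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 2))   # data dim 5, latent dim 2
b = rng.normal(size=5)
z = rng.normal(size=2)
print(flattening_penalty(z, W, b))  # > 0 for a generic decoder
```

For a decoder whose Jacobian has orthonormal columns at a point (e.g., `W` from a QR factorization, evaluated at `z = 0` with `b = 0`), the penalty vanishes there, matching the intended "locally flat" behaviour.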
Summary: The paper proposes FlatVI, a novel training framework for variational autoencoders (VAEs) applied to single-cell RNA-seq (scRNA-seq) data. Its central goal is to enforce Euclidean geometry in the latent space of discrete-likelihood VAEs—specifically, negative binomial VAEs commonly used for modeling gene expression counts—so that straight lines in the latent space correspond more closely to geodesics on the decoded statistical manifold. The authors develop a flattening loss based on the pullback of the Fisher Information Metric (FIM) from the decoder, which they regularize toward a scaled identity matrix. The method is designed to retain data reconstruction performance while improving manifold geometry. Empirical evaluations include: - Simulations showing improved alignment between geodesic and Euclidean distances in latent space. - Application with OT-CFM for scRNA-seq cellular trajectory reconstruction - Application with CellRank for latent vector field and lineage mapping - Some qualitative PCA-2D plots ## update after rebuttal I appreciate the authors’ rebuttal, and rebuttal Figure 1 serves as a new case study showing that the trends of three genes can be better estimated by FlatVI on the Pancreas dataset. Therefore, I raised my score by 1 from borderline reject to borderline accept. Nonetheless, I wish to clarify my stance: (1) I agree this is a very interesting work extending the VAE with a regularization to enforce a Euclidean metric; (2) I think Table 2 serves as a good benchmark to show the usefulness of this method on two real datasets; (3) I am okay with using the qualitative results in Section 5.4 to show the quality of the FlatVI representation at the end of the main text to further enhance the work. The main shortcoming from my perspective is the limited heuristic results for Section 5.3, given that the authors only use one OT method as the basis for the analysis.
Would the velocity consistency be enhanced at different levels of complexity across datasets (perhaps some datasets with a simple induced differentiation process and some complex organism developmental datasets like mouse or C. elegans)? I feel Figure 3 is a case study, but not comprehensive enough as the evaluation of a task. To this end, if the full Section 5 consists of only one simulation, one task evaluation with two datasets, one case study, and one qualitative representation, the overall workload falls below my expectations. Moreover, I think rebuttal Figure 1 is a very nice addition to the results section. But again, to demonstrate the advantage of a method, we may consider a more systematic analysis showing the usefulness across all the genes (or a subset of genes identified as time-variant by a set of time analysis methods) in the dataset instead of curating three genes with better-fitted trends. In the end, I raised my score by one based on the rebuttal and the comments from zkLA and 5Fan. But I think ideally, Figure 3 and rebuttal Figure 1 should be expanded into two task evaluations to not only give a comprehensive heuristic evaluation but also better define this task to facilitate future work building upon it. Claims And Evidence: The paper makes three central claims: 1. Latent geometry regularization via flattening loss 2. Improved trajectory inference when combined with OT-CFM 3. Improved velocity field consistency in the latent space I find the evidence for Claim 1 to be strong and clearly demonstrated through simulation experiments. The use of the pullback metric and flattening loss is well-motivated and effectively shown to induce Euclidean-like geometry in the latent space. Claims 2 and 3 are supported by results in Table 2 and Figure 3, respectively. However, I believe these claims highlight improvements that are largely specific to a narrow set of use cases.
The improvements shown for trajectory inference rely on OT-CFM, which assumes Euclidean latent geometry. Many commonly used trajectory inference methods (e.g., pseudotime algorithms, diffusion-based or graph-based methods) operate in non-Euclidean or PCA-based spaces, where the benefits of FlatVI may not hold or may be less impactful. As such, the broader significance of the method is unclear when considering the wide variety of available alternatives in the single-cell analysis space. For Claim 3, while Figure 3(b) reports improvements in “velocity consistency,” this metric reflects self-consistency of the learned vector field rather than ground-truth accuracy. It would strengthen the paper if the authors provided a clearer discussion of the biological interpretability and utility of velocity consistency in practical single-cell analyses. Overall, I appreciate the authors’ effort and find the core idea to be well-presented. The work is technically sound and the writing is clear. That said, the methodological contribution is somewhat incremental, and the scope of demonstrated utility is limited to scenarios where specific assumptions (e.g., Euclidean latent geometry) hold. From my perspective, this work could be a useful enhancement to tools like scVI or CellRank, but may fall short of the bar for a standalone ICML paper in its current form. Methods And Evaluation Criteria: I generally agree with the metrics and criteria. Moreover, I think Table 2 is a good choice of benchmark showing that OT-CFM is improved by FlatVI. It would be a good reference for later work on the trajectory reconstruction task. Theoretical Claims: The proofs for the theoretical claims look good to me. Experimental Designs Or Analyses: As discussed in Claims And Evidence. Supplementary Material: I reviewed the whole supplementary material. Relation To Broader Scientific Literature: . Essential References Not Discussed: . Other Strengths And Weaknesses: . Other Comments Or Suggestions: .
Questions For Authors: - I would love to hear the authors' thoughts about the significance of FlatVI in the trajectory inference task, considering those alternative methods. - I would love the authors to explain/discuss a bit more their thoughts about the usefulness of velocity consistency in single-cell analysis. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank NJwg for thoroughly reviewing our paper and providing constructive criticism to improve our submission. We kindly point the reviewer to the anonymous link https://figshare.com/s/74ca822781c60b2a85f2 containing the figures we refer to in some of our answers. > A1. Limited use/comparison with other algorithms. While our specific examples focus on OT-CFM, the benefits of FlatVI extend to a broader range of single-cell analysis techniques. * Many of the trajectory inference methods mentioned by the reviewer still fundamentally use distance calculations between representations of cells to define relationships and biological state proximity. For example, algorithms that build k-NN graphs (including pseudotime and diffusion) require a distance metric, and the Euclidean distance is often the default or a common choice. * By better aligning latent cell state Euclidean distances with biological distances in discrete gene expression counts (see our simulations), FlatVI's representation offers stronger guarantees for k-NN graph construction, which can be used by diffusion-based methods to assign cellular ordering and infer biological processes. * In other words, FlatVI provides a representation space where biological distances are easy to compute and suitable for building temporal graphs and inferring cell ordering. * The latent Euclidean assumption is also key in batch integration [1] and perturbation prediction [2], broadening the use of our regularisation. * In Rebuttal Fig.1b and 2b-c (addressing oZ7a), we extend our comparisons with GAGA [5], a geodesic autoencoder, on the additional task of gene trajectory reconstruction across lineages. Here, we show that our simple Euclideanisation approach is faster, more stable, and more accurate than the most recent geodesic interpolation approach in capturing the dynamics of gene features on cellular manifolds. > A2. 
Velocity consistency Importantly, we evaluate our model with ground truth terminal state labels in Fig. 3a, showing that FlatVI’s representations guide Markov paths computed by CellRank to the correct terminal states, unlike competing methods. Consistency, introduced by VeloVI [3], measures vector field smoothness in a cellular representation space. Higher consistency means nearby latent profiles have correlated velocities, resulting in a smoother vector field. When estimating trajectories, related cellular states should have similar gradients, with gradual changes in vector field direction along the manifold. Regularising geometry before vector field estimation, as suggested in [3], enables smoother state transitions with Euclidean latent geometry. > A3. Contribution Our work presents a novel methodological formulation that, to the best of our knowledge, has not been explored before: - **Manifold learning:** Most manifold learning methods rely on continuous approximations of single-cell data. We are the first to define the cellular manifold as the negative binomial manifold spanned by the decoder of a discrete VAE. This means our geometric regularisation is directly informed by the statistical properties of single-cell data rather than assuming an arbitrary continuous structure. Our simple approach leads to more biologically meaningful interpolations (Rebuttal figures). - **Pullback-based regularisation:** Unlike most existing methods that rely on k-NN neighbourhood estimation, we introduce pullback-based local geometric regularisation from the decoder of a VAE. This is a new approach to single-cell machine learning. - **Geometry-aware VAEs:** Our work extends existing geometry-aware VAEs by connecting latent Euclidean interpolations with geodesic paths on discrete statistical manifolds with tractable likelihoods. 
Similar geometric regularisation techniques have primarily been explored in deterministic autoencoders, but not in the probabilistic single-cell setting. - **Beyond trajectory inference:** While we focus on trajectory inference, the improved correspondence between latent Euclidean distances and statistical manifold geodesics has broader implications (see our first answer). Finally, we highlight that existing standalone ML conference papers in single-cell geometry [4, 5] often introduce new simulation-driven insights while targeting specific downstream tasks. FlatVI combines novel representation approaches with modelling assumptions in single-cell analysis. [1] Lücken et al., Benchmarking atlas-level data integration in single-cell genomics [2] Eyring et al., Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation [3] Gayoso et al., Deep generative modelling of transcriptional dynamics for RNA velocity analysis in single cells [4] Huguet et al., Manifold Interpolating Optimal-Transport Flows for Trajectory Inference [5] Sun et al., Geometry-Aware Generative Autoencoders for Warped Riemannian Metric Learning and Generative Modeling on Data Manifolds
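The consistency score described in A2 — correlated velocities among nearby latent profiles — is commonly operationalised as an average cosine similarity over k-nearest-neighbour velocities. A minimal illustration (our own sketch, with the details of the VeloVI definition assumed rather than quoted):

```python
import numpy as np

def velocity_consistency(X, V, k=3):
    """Mean cosine similarity between each cell's velocity and the velocities
    of its k nearest neighbours in the representation space X. One common way
    to operationalise the 'consistency' score discussed above."""
    Vn = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
    per_cell = []
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]  # skip the cell itself
        per_cell.append(float((Vn[nbrs] @ Vn[i]).mean()))
    return float(np.mean(per_cell))

# Identical velocities everywhere -> a perfectly smooth field, score ~1.0
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
V = np.tile([1.0, 0.0], (4, 1))
print(velocity_consistency(X, V, k=2))  # ~1.0 (up to the 1e-12 guard)
```

A field with alternating velocity directions among neighbouring cells scores strictly lower, which is what the smoothness argument in A2 relies on.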
Monte Carlo Tree Search for Comprehensive Exploration in LLM-Based Automatic Heuristic Design
Accept (poster)
Summary: This article discusses the limitations of population-based LLM-based automatic heuristic design, which make it difficult to fully explore the heuristic space. This paper introduces MCTS for heuristic exploration and planning. Overall, in experiments, MCTS-AHD achieves significant performance advantages in various typical applications. Claims And Evidence: Generally clear and convincing. Methods And Evaluation Criteria: Makes sense. Theoretical Claims: This paper does not include theoretical claims. Experimental Designs Or Analyses: Some baseline results are missing on some testing problems. I think it is convincing to introduce HSEvo in TSP-GLS and ACO. Meanwhile, according to Figure 3, it seems that ReEvo and HSEvo can run for construction-based TSP. Can you demonstrate the results of the construction heuristics designed by these two algorithms on TSP? Supplementary Material: I have read the supplementary experiments and discussions in the Appendix. I think Appendix F.8 provides a good explanation of the possible situations in which MCTS can lead in LLM-based AHD. Relation To Broader Scientific Literature: Ideas: Using MCTS to better explore and plan heuristics is an interesting and reasonable idea. Results: Generally achieves better results on plenty of CO and non-CO problems compared to recent methods. Essential References Not Discussed: This paper has included prior related findings. Other Strengths And Weaknesses: As strengths: 1. Using MCTS in LLM-based AHD is reasonable and novel. 2. This paper conducts comprehensive evaluations of MCTS-AHD on both CO problems and non-CO problems. MCTS-AHD seems to have a wide range of application scenarios and can achieve outstanding performance on NP-hard problems with clear descriptions. 3. This paper is well-written and clearly discusses the proposed method MCTS-AHD.
There are no major weaknesses in this paper. Please refer to **Experimental Designs Or Analyses**, **Other Comments Or Suggestions**, and **Questions For Authors** for my concerns and questions. Other Comments Or Suggestions: 1. In addition to heuristic performance, the heuristics' execution efficiency could be another metric. I wonder about the efficiency of the algorithms designed by MCTS-AHD. 2. There is no clear definition of the Gap and the highlights in Table 2. Please provide more descriptions. 3. Appendix F.3 presents well the significant lead of MCTS-AHD compared to EoH, and I believe this significance analysis will be a good supplement to the existing work. Can other baselines such as ReEvo and HSEvo be introduced to calculate the p-value? 4. The ``MCT Root`` in Figure 3 should be the ``MCTS Root``. Questions For Authors: 1. The Q-value design in MCTS is important. Can you conduct another ablation study on the normalization of the Q value in MCTS-AHD? 2. This paper only involves GPT-4o-mini and GPT-3.5-turbo for OpenAI GPT. Will more advanced GPT models perform better with MCTS-AHD? 3. The MCTS you are using seems to have structural differences from the MCTS used in LLM reasoning. Can you further explain why it does not include multi-step exploration processes such as self-evaluation or rollback? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you so much for your time and effort in reviewing our work. We are glad to know that you find the proposed MCTS-AHD achieves significant performance advantages in various typical applications, that using MCTS in LLM-based AHD is reasonable and novel, and that this paper is well-written. We address your concerns as follows.

>**Experimental Designs Or Analyses. Baselines:** Thanks for your suggestions. We have complemented the ReEvo and HSEvo results in TSP-GLS and ACO TSP as follows.

|Methods|TSP-ACO $N$=50|TSP-ACO $N$=100|CVRP-ACO $N$=50|CVRP-ACO $N$=100|MKP-ACO $N$=100|MKP-ACO $N$=200|BPP-ACO $N$=500|BPP-ACO $N$=1000|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|DeepACO|0.71%|1.26%|**0.00%**|**0.00%**|0.79%|1.20%|**0.00%**|**0.00%**|
|HSEvo|0.14%|0.50%|5.19%|9.04%|**0.00%**|0.01%|1.20%|1.38%|
|MCTS-AHD (Ours)|**0.00%**|**0.00%**|4.48%|5.70%|0.03%|**0.00%**|0.48%|0.53%|

**TSP-GLS**
|Methods|$N$=100|$N$=200|$N$=500|
|:-|:-:|:-:|:-:|
|KGLS|**0.003%**|0.227%|0.958%|
|HSEvo|0.006%|0.223%|0.972%|
|MCTS-AHD (Ours)|0.006%|**0.211%**|**0.949%**|

**TSP-Construction**
|Methods|$N$=50|$N$=100|$N$=200|
|:-|:-:|:-:|:-:|
|POMO|**0.39%**|**3.01%**|20.45%|
|HSEvo|12.70%|14.32%|15.68%|
|MCTS-AHD (Ours)|9.69%|11.79%|**13.19%**|

As the results above show, on almost all these test sets, MCTS-AHD demonstrates advantages over HSEvo and ReEvo.

>**Other Comments Or Suggestions 1. Heuristics' efficiency:** MCTS-AHD employs MCTS to comprehensively explore the heuristic space. Therefore, compared to population-based baselines, MCTS-AHD tends to design more complex and less efficient heuristic algorithms with superior performance. As future work, we will consider introducing the idea of multi-objective LLM-based AHD [1] to balance the efficiency and performance of heuristics.

>**Other Comments Or Suggestions 2. Clear descriptions and typos:** Thanks for your suggestion. The Gap refers to the distance to the best-performing algorithm on the current dataset.
In each dataset, the best-performing LLM-based AHD method is highlighted.

>**Other Comments Or Suggestions 3. p-values for HSEvo and ReEvo:** Thanks for your suggestions. We have complemented the ReEvo and HSEvo results on Constructive TSP, TSP ACO, and MKP ACO for p-values as follows (the computation of avg, std, and p-value is the same as in Appendix F.3). Results show that in designing constructive heuristics for TSP, MCTS-AHD demonstrates clear superiority (with more than 90% significance) over EoH, ReEvo, and HSEvo. In designing ACO heuristics for TSP and MKP, MCTS-AHD demonstrates clear superiority over ReEvo and slight advantages over HSEvo.

|CO Problem|Methods|avg|std|p-value|
|-|-|-|-|-|
|**General Framework:**|**Step-by-step Construction**||||
|TSP50|EoH|6.386|0.080|**0.002855655**|
||ReEvo|6.400|0.128|**0.009529994**|
||HSEvo|6.397|0.069|**0.000796727**|
||MCTS-AHD|**6.280**|0.071||
|**General Framework:**|**ACO**||||
|TSP50|EoH|5.828|0.003|**0.039230447**|
||ReEvo|5.844|0.029|**0.02477005**|
||HSEvo|5.819|0.018|0.115934978|
||MCTS-AHD|**5.795**|0.036||
|MKP100|EoH|23.199|0.083|**0.037268388**|
||ReEvo|23.258|0.048|0.20630356|
||HSEvo|23.277|0.008|0.433650083|
||MCTS-AHD|**23.279**|0.025||

>**Question 1. Ablation on normalization:** To demonstrate the importance of normalization, we present an additional ablation experiment. According to the table below, where values are the optimality gaps, the normalization in MCTS-AHD is highly important.

||TSP50|KP100|
|-|-|-|
|MCTS-AHD (10 runs)|10.661%|0.059%|
|*w/o* Normalization|11.977%|0.083%|

>**Question 2. Advanced GPT:** We complement the results of MCTS-AHD in designing constructive heuristics for TSP and KP with more advanced GPT LLMs as follows. Values in the table are gaps to the optimal. More advanced GPT LLMs do not produce superior performance, and GPT-4o-mini may be the best choice among GPT LLMs.
|Method|TSP $N$=50|TSP $N$=100|KP $N$=100|KP $N$=200|
|-|:-:|:-:|:-:|:-:|
|MCTS-AHD (GPT-4o-mini)|**9.69%**|11.79%|**0.05%**|**0.04%**|
|MCTS-AHD (GPT-4o)|10.24%|**11.69%**|0.08%|0.10%|
|MCTS-AHD (GPT-4)|10.35%|12.22%|0.10%|0.09%|

>**Question 3. MCTS structure:** Thanks for your comment. Generally, MCTS-based LLM reasoning methods involve self-evaluation or rollout to better estimate the quality value $Q$ of each state, but these methods may lead to biased evaluation results [2]. Optimization tasks provide reliable performance scores for the $Q$ value, so we believe MCTS-AHD does not need self-evaluations or rollouts for $Q$-value estimation.

>**References**

[1] Yao, Shunyu, et al. "Multi-objective evolution of heuristic using large language model." arXiv preprint arXiv:2409.16867 (2024).

[2] Xu, Wenda, et al. "Pride and prejudice: LLM amplifies self-bias in self-refinement." arXiv preprint arXiv:2402.11436 (2024).

---

Rebuttal Comment 1.1: Comment: Thanks, it is clear now. I will raise my score.

---

Reply to Comment 1.1.1: Comment: Thanks for your support and for raising the score.
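For reference, the Q-value normalization ablated in Question 1 above is typically paired with a UCT-style selection rule, with min-max scaling keeping the raw objective and the exploration bonus on comparable scales. A generic sketch (our own illustration with assumed field names and exploration constant, not the authors' released code):

```python
import math

def uct_select(children, c=1.4):
    """Select the child maximizing UCT with min-max-normalized Q values."""
    qs = [ch["q"] for ch in children]
    lo, hi = min(qs), max(qs)
    span = (hi - lo) or 1.0                 # all-equal Q: avoid division by zero
    total = sum(ch["n"] for ch in children)
    def uct(ch):
        q_norm = (ch["q"] - lo) / span      # raw objective -> [0, 1]
        return q_norm + c * math.sqrt(math.log(total + 1) / (ch["n"] + 1))
    return max(children, key=uct)

# Raw objectives here are negative tour lengths (higher is better); the
# rarely visited second child wins once its Q is normalized.
children = [{"q": -6.2, "n": 8}, {"q": -5.8, "n": 2}, {"q": -7.0, "n": 1}]
print(uct_select(children)["q"])  # -> -5.8
```

Without the normalization step, raw objective magnitudes (e.g., tour lengths in the hundreds) would swamp the exploration bonus, which is consistent with the degradation reported in the *w/o* Normalization ablation above.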
Summary: This paper introduces MCTS-AHD, a novel method that leverages MCTS to enhance the evolution of heuristic functions generated by LLMs for solving complex optimization tasks. The key contributions include the use of MCTS to organize and evolve heuristics in a tree structure, allowing for comprehensive exploration of the heuristic space and avoiding local optima. The method employs a set of LLM-based actions, including initialization, mutation, crossover, and tree-path reasoning, to iteratively refine heuristic functions. Experiments across various NP-hard combinatorial optimization problems and Bayesian Optimization tasks demonstrate that MCTS-AHD outperforms existing LLM-based AHD methods and handcrafted heuristics. ## update after rebuttal I have raised my score to accept. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors provide extensive experimental results across multiple tasks, showing that MCTS-AHD consistently outperforms existing methods. The use of MCTS to explore the heuristic space and the proposed LLM-based actions are well-justified and demonstrated through both qualitative and quantitative analyses. The results indicate that MCTS-AHD can escape local optima and develop high-quality heuristics. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The authors use a variety of NP-hard combinatorial optimization problems and Bayesian Optimization tasks to evaluate MCTS-AHD, ensuring that the method is tested across diverse scenarios. The evaluation criteria, including optimality gaps and performance improvements, are standard and relevant for these tasks. Theoretical Claims: The paper does not present any formal theoretical claims or proofs. The focus is on the empirical evaluation of the proposed method. Experimental Designs Or Analyses: The experimental designs are sound and valid. 
The authors conduct experiments on multiple datasets and tasks, comparing MCTS-AHD against various baselines, including handcrafted heuristics, neural combinatorial optimization methods, and existing LLM-based AHD methods. The results are statistically significant and demonstrate the effectiveness of MCTS-AHD. The authors also provide ablation studies to validate the necessity of the proposed components and actions. Supplementary Material: I reviewed the supplementary material, including the detailed descriptions of tasks, general frameworks, and additional experimental results. The supplementary material provides comprehensive support for the main claims and results presented in the paper. Relation To Broader Scientific Literature: The key contributions of the paper are well-related to the broader scientific literature. The use of MCTS for heuristic evolution is novel and builds upon existing work in evolutionary algorithms, combinatorial optimization, and LLM-based heuristic design. The paper cites relevant prior work and clearly positions its contributions in the context of existing research. Essential References Not Discussed: The paper provides a thorough review of related work. However, it might benefit from discussing recent advancements in the application of reinforcement learning techniques for heuristic optimization, which could provide additional context for the proposed method. Other Strengths And Weaknesses: * Strengths: 1. The proposed method demonstrates significant improvements over existing LLM-based AHD methods and handcrafted heuristics. 2. The use of MCTS for heuristic evolution is novel and provides a promising direction for future research. 3. The paper includes comprehensive experiments and ablation studies, which validate the effectiveness of the proposed method and its components. * Weaknesses: 1. The convergence speed of MCTS-AHD could be improved. 
Future work could explore hybrid methods combining MCTS with population-based approaches to enhance efficiency. 2. The paper does not provide detailed discussions on the computational complexity and scalability of MCTS-AHD for very large-scale optimization problems. Other Comments Or Suggestions: The paper is well-written and presents its contributions clearly. A minor suggestion would be to include a discussion on the potential applications of MCTS-AHD in other domains to further highlight its significance. Questions For Authors: 1. How does the performance of MCTS-AHD compare to recent reinforcement learning methods like DeepSeek-R1-Zero, which have shown promise in enhancing LLMs for complex tasks? 2. Can the authors provide a detailed comparison of the time and computational resource requirements of MCTS-AHD versus previous population-based methods? This would help in understanding the trade-offs involved in using MCTS for heuristic evolution. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you so much for your time and effort in reviewing our work. We are glad to know that you find the proposed method demonstrates significant improvements and that the paper includes comprehensive experiments and ablation studies. We address your concerns as follows.

>**Weakness 1. Convergence speed:** The convergence speed of MCTS-AHD could be improved. Future work could explore hybrid methods combining MCTS with population-based approaches to enhance efficiency. Thanks for your suggestion. Yes, as listed in the Conclusion, future work could explore hybrid methods combining MCTS with population-based approaches to enhance the convergence speed of MCTS-AHD.

>**Weakness 2. Computational complexity and scalability:** The paper does not provide detailed discussions on the computational complexity and scalability of MCTS-AHD for very large-scale optimization problems. Thank you very much for your comment. We would like to clarify that, unlike learning-based methods, heuristics designed on small-scale optimization instances often have high generalization abilities for large-scale optimization tasks [1]. As shown in the table below, constructive TSP heuristics designed by MCTS-AHD generalize well to very large-scale instances with lower objective values.

|Scale|N=500|N=1,000|N=2,000|N=5,000|
|-|:-:|:-:|:-:|:-:|
|Nearest Neighbor|20.2259|28.5305|39.2111|64.3547|
|Construction heuristic designed by MCTS-AHD|18.7596|26.4281|37.2608|59.1260|

Besides effectiveness, only heuristics with low time complexity can be easily applied to very large-scale instances. Current MCTS-AHD and other LLM-based AHD baselines do not include mechanisms to confine the algorithm complexity. To achieve trade-offs between heuristic algorithm efficiency and effectiveness, we should consider multi-objective LLM-based AHD methods [2]. We will consider this as future work and provide this discussion in our manuscript.

>**Suggestion.
Potential applications of MCTS-AHD in other domains:** Thanks for your valuable suggestion. As an advanced LLM-based AHD method, as discussed in [3,4], we believe MCTS-AHD has the potential to be applied to design heuristics for optimization tasks, machine learning tasks (in response to ``Weakness 3 by Reviewer ywZb``, we show the application of MCTS-AHD on the policy optimization task Gym-CarMountain-v0), and some scientific discovery tasks. We will include this discussion in our manuscript.

>**Question 1. Comparison to recent reinforcement learning methods:** Thanks for your comment. Recent test-time scaling methods, such as the reinforcement-learning-based reasoning models Deepseek-r1 and Openai-o1, cannot directly solve the AHD task with their thinking process alone. As shown in the table below, we prompt Deepseek-r1 and o1-preview to design heuristics and report the best-designed heuristic over $100$ LLM calls. These advanced test-time scaling methods can only find low-quality heuristics, which demonstrates the significance of using MCTS-AHD in AHD tasks.

|Task|TSP $N$=50||TSP $N$=100||KP $N$=100, $W$=25||KP $N$=200, $W$=25||
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Method|Obj.↓|Gap|Obj.↓|Gap|Obj.↑|Gap|Obj.↑|Gap|
|Optimal|5.675|-|7.768|-|40.271|-|57.448|-|
|Deepseek-r1 (100 trials)|6.916|21.85%|9.595|23.52%|40.225|0.12%|57.395|0.09%|
|o1-preview (100 trials)|6.959|22.62%|9.706|24.94%|40.225|0.12%|57.395|0.09%|
|MCTS-AHD (GPT-4o-mini)|**6.225**|**9.69%**|**8.684**|**11.79%**|**40.252**|**0.05%**|**57.423**|**0.04%**|
|MCTS-AHD (DeepSeek-v3)|6.348|11.85%|8.859|14.04%|40.233|0.10%|57.402|0.08%|

>**Question 2. Time and computational resource requirements:** Thanks for your suggestion. As shown in the table below, we provide a detailed comparison of the time and token consumption of MCTS-AHD and population-based baselines in five application scenarios. We calculate the token numbers based on GPT-4o-mini.
Results demonstrate that, compared to population-based methods, MCTS-AHD has a slight efficiency decrease and maintains a similar level of token consumption compared to LLM-based AHD baselines.

|Methods|Consumption|Construction-TSP|Construction-KP|ACO-TSO|ACO-MKP|BO-CAF|
|-|-|:-:|:-:|:-:|:-:|:-:|
|**EoH**|Time|2h|2h|4h|4h|15h|
||InputToken|0.8M|0.7M|1M|1M|1.2M|
||OutputToken|0.2M|0.2M|0.5M|0.5M|0.5M|
|**ReEvo**|Time|2h|-|5h|5h|-|
||InputToken|1.1M|-|1.3M|1.3M|-|
||OutputToken|0.4M|-|0.6M|0.5M|-|
|**MCTS-AHD**|Time|4h|3h|8h|4h|14h|
||InputToken|1M|1M|1.2M|1.3M|1.3M|
||OutputToken|0.3M|0.2M|0.5M|0.6M|0.6M|

>**Reference:**

[1] Liu, Fei, et al. "Algorithm evolution using large language model." arXiv preprint arXiv:2311.15249 (2023).
[2] Yao, Shunyu, et al. "Multi-objective evolution of heuristic using large language model." arXiv preprint arXiv:2409.16867 (2024).
[3] Liu, Fei, et al. "A systematic survey on large language models for algorithm design." arXiv preprint arXiv:2410.14716 (2024).
[4] Liu, Fei, et al. "Llm4ad: A platform for algorithm design with large language model." arXiv preprint arXiv:2412.17287 (2024).

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed response. While I appreciate the analysis of the quality of heuristics found by advanced test-time scaling methods such as Deepseek-r1 and Openai-o1, I believe it is also crucial to evaluate their performance in terms of time efficiency and token consumption. Specifically, I would like to see a comparison of these metrics between MCTS-AHD and the advanced test-time scaling methods (Deepseek-r1 and Openai-o1). This comparison would provide a more comprehensive understanding of the trade-offs involved in using MCTS-AHD versus these other methods.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your valuable comment. We collect the time and token consumption of test-time scaling methods (i.e., Deepseek-r1 and Openai-o1) on AHD tasks as follows.
Since the specific tokenizer for Openai-o1 is closed-source, we use the tokenizer of GPT-4o-mini to obtain all the token counts.

| Method | Consumption | Construction-TSP| Construction-KP|
|:-:|-|:-:|:-:|
| MCTS-AHD (1,000 GPT-4o-mini code generations) | Time | 4h | 3h |
| | Input Token | 1M | 1M |
| | Output Token | 0.3M | 0.2M |
| Deepseek-r1 (per trial) | Time | 0.92 min/trial | 2.65 min/trial |
| | Input Token | 0.15K/trial | 0.16K/trial |
| | Output Token | 2.3K/trial | 1.0K/trial |
| Openai-o1 (per trial) | Time | 0.279 min/trial | 0.375 min/trial |
| | Input Token | 0.15K/trial | 0.16K/trial |
| | Output Token | 0.14K/trial | 0.14K/trial |

Taking the design of step-by-step construction heuristics for KP as an example and referring to the price of GPT-4o-mini, a full MCTS-AHD design run costs approximately \\$0.3. For the same budget (\\$0.3), according to the official prices of Deepseek-r1 and Openai-o1, one can obtain approximately 150 trials with Deepseek-r1 and 30 trials with Openai-o1. Considering time consumption, Openai-o1 has a relatively fast response and can generate 500 trials within the time MCTS-AHD consumes. Therefore, we believe that Deepseek-r1 and Openai-o1 are more feasible for obtaining relatively low-quality heuristics under limited cost and time budgets, while LLM-based AHD algorithms such as MCTS-AHD are better choices when better-performing heuristics are needed.
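The budget arithmetic above can be reproduced with a minimal sketch. The per-million-token rates below are illustrative assumptions chosen to be consistent with the approximately \\$0.3 figure discussed in the rebuttal, not quoted provider pricing, and the function name is hypothetical:

```python
# Rough API-cost estimate for a heuristic-design run from token counts.
# Rates are assumed USD prices per 1M tokens (illustrative only; actual
# provider pricing varies over time).
RATES = {"gpt-4o-mini": {"input": 0.15, "output": 0.60}}

def run_cost_usd(model: str, input_tokens_m: float, output_tokens_m: float) -> float:
    """Cost in USD given input/output token counts in millions."""
    r = RATES[model]
    return input_tokens_m * r["input"] + output_tokens_m * r["output"]

# MCTS-AHD on construction-KP: ~1M input and ~0.2M output tokens
cost = run_cost_usd("gpt-4o-mini", 1.0, 0.2)  # ≈ 0.27 USD, matching ~$0.3
```

Under these assumed rates, the same budget then buys roughly 0.27 USD divided by the per-trial cost of each reasoning model, which is how the trial counts above can be derived.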
Summary: This paper introduces MCTS-AHD, a Monte Carlo Tree Search (MCTS)-based method for automatic heuristic design (AHD) using Large Language Models (LLMs). MCTS-AHD organizes heuristics in a tree structure to enable more comprehensive exploration and refinement. Ultimately, the goal is to generate more efficient, robust, and generalizable heuristic functions for combinatorial optimization (CO) and Bayesian optimization (BO) tasks, enhancing the ability to solve complex optimization problems. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The paper does not provide theoretical claims. Experimental Designs Or Analyses: I have checked all the experiments in the experimental section. Supplementary Material: I have read all the content in the appendix. Relation To Broader Scientific Literature: The key contributions of MCTS-AHD are situated at the combination of automatic heuristic design (AHD), Monte Carlo Tree Search (MCTS), and Large Language Models (LLMs). The paper extends prior research in these domains while addressing specific shortcomings of existing approaches. Essential References Not Discussed: The paper has included the main related works that are crucial for understanding the context. Other Strengths And Weaknesses: Strengths: 1. The topic of leveraging LLMs to automatically enhance heuristics is interesting. 2. The paper is well-written. Weaknesses: 1. There is no formal convergence proof, making it unclear whether MCTS-AHD is guaranteed to consistently find optimal or near-optimal heuristics across different search spaces. Also, there is no computational complexity analysis, which is essential to understand the efficiency and scalability of the approach compared to traditional methods. 2. The method relies heavily on LLMs for both generating and refining heuristics, making its performance sensitive to the quality and stability of LLM-generated outputs. 3.
It is recommended to extend the evaluation to other types of optimization problems, such as code search, to assess the generalizability of the method across different domains. 4. MCTS-AHD requires numerous LLM evaluations during its tree search, leading to higher computational costs compared to traditional population-based methods, which might limit practical applicability in resource-constrained environments. Other Comments Or Suggestions: See weaknesses Questions For Authors: See weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you so much for your time and effort in reviewing our work. We are glad to know that you find MCTS-AHD addresses specific shortcomings of existing approaches and that the paper is well-written. We address your concerns as follows.

>**Weakness 1. Computational complexity analysis:** There is no computational complexity analysis, which is essential to understand the efficiency and scalability of the approach compared to traditional methods.

Thank you for your valuable comment. The time consumption of running MCTS-AHD comes from three parts: evaluating the performance of heuristic algorithms, employing LLMs for heuristic function generation, and employing MCTS to select heuristics. The time complexity of the first two parts cannot be represented explicitly: the time consumption of heuristic evaluation is related to the complexity of generated heuristics and the size of the evaluation dataset $D$, while the time consumption of LLM generation is related to the choice of LLMs. For the third part, the total time complexity of heuristic selection in MCTS-AHD with $T$ evaluations is $\mathcal{O}(HT)$, where $H$ is the maximal height of the MCTS tree. The space complexity is $\mathcal{O}(T)$ for preserving all LLM-generated heuristics. When maintaining an $M$-size population, the time complexity of heuristic selection in EoH and ReEvo is $\mathcal{O}(MT)$, and their space complexity is $\mathcal{O}(M)$.

>**Weakness 1 & 2. Stability and convergence proof:** The method relies heavily on LLMs for both generating and refining heuristics, making its performance sensitive to the quality and stability of LLM-generated outputs. There is no formal convergence proof.

Thanks for your insightful comment. We agree that the effectiveness of all LLM-based AHD methods largely depends on the stability and ability of LLMs to improve heuristics. There is no formal convergence proof for these methods.

>**Weakness 3. Other types of optimization problems, such as code search:** It is recommended to extend the evaluation to other types of optimization problems, such as code search.

Thanks for your comments. As an AHD method, MCTS-AHD aims at designing and evolving high-quality heuristics for optimization problems within a given solving framework. Code search [1,2] is indeed a typical optimization problem. However, there are no compelling heuristics for solving code search tasks, so MCTS-AHD cannot be directly applied to code search. When directly implementing MCTS-AHD on code search (e.g., the APPS dataset [1]), the current MCTS-AHD cannot handle code bugs and becomes ineffective.

|Methods |EoH|ReEvo|MCTS-AHD|
|-|:-:|:-:|:-:|
|Gym-MountainCar-v0|140.3|117.6|**115.0**|

To further assess the generalizability of MCTS-AHD across different domains, we evaluate MCTS-AHD on the policy optimization task ``MountainCar-v0`` based on the Gym framework. The table above shows the average number of steps needed to reach the goal; the heuristic policy designed by MCTS-AHD is the most effective. We run each AHD method three times and report the average performance, using GPT-4o-mini as the LLM. In conclusion, MCTS-AHD demonstrates superiority across combinatorial optimization, Bayesian optimization, and policy optimization using the same set of hyperparameters, so we believe MCTS-AHD can generalize to a wide range of application scenarios.

>**Weakness 4. Numerous LLM evaluations:** MCTS-AHD requires numerous LLM evaluations during its tree search.

Thanks for your comment. We agree that, due to the proposed thought-alignment procedure, MCTS-AHD requires more LLM calls. For each heuristic design with $T$ performance evaluations, MCTS-AHD conducts $2T$ LLM calls ($T$ LLM calls for code generation and $T$ LLM calls for thought-alignment procedures), while EoH requires $T$ LLM calls and ReEvo requires $\frac{22}{15}T$ LLM calls in this case.
However, we want to clarify that the LLM calls for thought-alignment procedures only consume a limited number of tokens. As shown in the table below, we calculate the total token costs of MCTS-AHD and the baselines on a series of representative heuristic design scenarios (setting $T=1000$ and using GPT-4o-mini as the LLM). The results show that MCTS-AHD does not incur significantly higher token costs while achieving better heuristic performance.

|Methods|Consumption|Construction-TSP|Construction-KP|ACO-TSO|ACO-MKP|BO-CAF|
|-|-|:-:|:-:|:-:|:-:|:-:|
|**EoH**|InputToken|0.8M|0.7M|1M|1M|1.2M|
||OutputToken|0.2M|0.2M|0.5M|0.5M|0.5M|
|**ReEvo**|InputToken|1.1M|-|1.3M|1.3M|-|
||OutputToken|0.4M|-|0.6M|0.5M|-|
|**MCTS-AHD**|InputToken|1M|1M|1.2M|1.3M|1.3M|
||OutputToken|0.3M|0.2M|0.5M|0.6M|0.6M|

>**Reference:**

[1] Zhong, Li, Zilong Wang, and Jingbo Shang. "Debug like a human: A large language model debugger via verifying runtime execution step-by-step." arXiv preprint arXiv:2402.16906 (2024).
[2] Hendrycks, D., Basart, S., Kadavath, S., et al. "Measuring coding challenge competence with APPS." arXiv preprint arXiv:2105.09938 (2021).
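The per-method call-count accounting above can be summarized in a minimal sketch; the per-method factors are taken from the rebuttal, and the function name is a hypothetical helper for illustration:

```python
from fractions import Fraction

# LLM calls per heuristic-design run with T performance evaluations,
# using the per-method factors stated in the rebuttal.
CALL_FACTOR = {
    "MCTS-AHD": Fraction(2),       # T code generations + T thought alignments
    "EoH": Fraction(1),
    "ReEvo": Fraction(22, 15),
}

def llm_calls(method: str, T: int) -> Fraction:
    """Total LLM calls for a run with T performance evaluations."""
    return CALL_FACTOR[method] * T

# For T = 1000: MCTS-AHD uses 2000 calls, EoH 1000, ReEvo about 1467
```

This makes explicit that the extra calls of MCTS-AHD are a constant factor of two over EoH, which is why total token consumption stays comparable when the alignment calls are short.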
Summary: This paper introduces MCTS-AHD, which integrates MCTS into LLM-based AHD to improve heuristic search exploration. MCTS-AHD organizes heuristics in a tree structure, allowing for deeper refinement of temporarily weaker candidates. Key techniques include progressive widening, exploration decay, and tree-path reasoning, enabling more comprehensive heuristic evolution. Experiments on NP-hard optimization tasks (e.g., TSP, KP) and Bayesian optimization demonstrate superior performance over handcrafted heuristics and existing AHD methods. Claims And Evidence: The paper’s primary claims are backed up by reasonably thorough empirical results on multiple NP-hard problems (TSP, KP, BPP, CVRP, etc.). The authors also present ablation studies demonstrating the contributions of individual components (e.g., progressive widening, exploration decay) in MCTS-AHD. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I checked the standard MCTS-related proofs. Experimental Designs Or Analyses: Yes, I checked the validity of the experimental designs and analyses. Supplementary Material: Yes, I briefly reviewed the Appendix. Relation To Broader Scientific Literature: This paper is primarily an extension of previous work on LLMs for AHD, further incorporating the MCTS algorithm to explore the space of heuristic functions, thereby addressing known limitations of earlier methods. Essential References Not Discussed: Not found. Other Strengths And Weaknesses: The paper is very well written and introduces an effective method for overcoming local optima. The experiments are detailed. The approach relies on LLM-generated code, which may sometimes produce heuristics that are either non-executable or logically flawed. Although the paper introduces a thought-alignment process to mitigate this issue, further analysis of the robustness of LLM outputs and the effectiveness of error handling would be beneficial.
Other Comments Or Suggestions: page 6, line 298: "LMM" should be "LLM". Questions For Authors: The paper mentions running time and token cost for the KP task; similar information is missing for other tasks. Could the authors provide more details for those tasks? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you so much for your time and effort in reviewing our work. We are glad to know that you find the proposed MCTS-AHD an effective method for overcoming local optima, the paper very well written, and the experiments detailed. We address your concerns as follows.

> **Weakness. Robustness of LLM output and error handling:** The approach relies on LLM-generated code, which may sometimes produce heuristics that are either non-executable or logically flawed. Although the paper introduces a thought-alignment process to mitigate this issue, further analysis on the robustness of LLM outputs and the effectiveness of error handling would be beneficial.

Thank you for your insightful comment. We agree that LLMs may sometimes produce non-executable code. However, it is worth noting that, unlike tasks involving code generation from scratch [1,2], where debugging plays a crucial role, LLM-based AHD methods only design the key heuristic function within predefined solving frameworks. This setup significantly reduces the likelihood of generating invalid code [3]. Empirically, when using GPT-4o-mini, the probability of MCTS-AHD generating invalid code for step-by-step construction TSP heuristics is only about 15\%–20\%. Therefore, we believe that LLM-based debugging for AHD is not indispensable. To mitigate the potential negative impact of invalid code on AHD, MCTS-AHD discards invalid heuristic function code and relies on conducting numerous LLM calls to obtain valid code samples. We will add this discussion to our manuscript.

> **Question. Time and token costs for other tasks:** The paper mentions running time and token cost for the KP task; similar information is missing for other tasks. Could the authors provide more details for those tasks?

Thanks for your comment. As shown in the table below, we provide more statistics on the time and token consumption of MCTS-AHD and the baselines in more application scenarios.
We calculate the token numbers based on GPT-4o-mini. Compared to LLM-based AHD baselines, MCTS-AHD demonstrates a slight efficiency decrease and maintains a similar level of token consumption.

|Methods|Consumption|Construction-TSP|Construction-KP|ACO-TSO|ACO-MKP|BO-CAF|
|-|-|:-:|:-:|:-:|:-:|:-:|
|**EoH**|Time|2h|2h|4h|4h|15h|
||InputToken|0.8M|0.7M|1M|1M|1.2M|
||OutputToken|0.2M|0.2M|0.5M|0.5M|0.5M|
|**ReEvo**|Time|2h|-|5h|5h|-|
||InputToken|1.1M|-|1.3M|1.3M|-|
||OutputToken|0.4M|-|0.6M|0.5M|-|
|**MCTS-AHD**|Time|4h|3h|8h|4h|14h|
||InputToken|1M|1M|1.2M|1.3M|1.3M|
||OutputToken|0.3M|0.2M|0.5M|0.6M|0.6M|

> **Other Comments Or Suggestions. Typo:** page 6, line 298: "LMM" should be "LLM".

Thank you for pointing out the typo. We will correct it accordingly.

> **References**

[1] Zhong, Li, Zilong Wang, and Jingbo Shang. "Debug like a human: A large language model debugger via verifying runtime execution step-by-step." arXiv preprint arXiv:2402.16906 (2024).
[2] Hendrycks, D., Basart, S., Kadavath, S., et al. "Measuring coding challenge competence with APPS." arXiv preprint arXiv:2105.09938 (2021).
[3] Liu, Fei, et al. "A systematic survey on large language models for algorithm design." arXiv preprint arXiv:2410.14716 (2024).
System-Aware Unlearning Algorithms: Use Lesser, Forget Faster
Accept (poster)
Summary: This paper introduces a system-aware unlearning framework, a new definition of machine unlearning that relaxes the unlearning definition by assuming a weaker attacker without access to the full training data; i.e., the definition only requires the unlearned model to be indistinguishable from a model trained on a subset of the data (called the "core set" in the paper). The paper mainly focuses on linear classification, and the proposed method employs the selective sampling algorithm BBQSampler to select the core set from the training data to conduct unlearning. Claims And Evidence: The claims in the paper are not convincing to me. The paper claims that the new definition provides privacy guarantees by ensuring that the unlearned model using the core set reveals no more information than the traditional unlearned model. However, privacy leakage does not happen because of the definition of unlearning; it can arise from the networks or algorithms themselves, such as through model inversion attacks. Moreover, under the traditional definition, data-free machine unlearning methods [1-4] have been proposed and would provide more protection than the proposed method, which uses partial data.

[1] Foster, J., Schoepf, S., and Brintrup, A. Fast machine unlearning without retraining through selective synaptic dampening. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 12043–12051, 2024.
[2] Tarun, A. K., Chundawat, V. S., Mandal, M., and Kankanhalli, M. Fast yet effective machine unlearning. IEEE Transactions on Neural Networks and Learning Systems, pp. 1–10, 2023.
[3] Bonato, J., Cotogni, M., and Sabetta, L. Is retain set all you need in machine unlearning? Restoring performance of unlearned models with out-of-distribution images. In European Conference on Computer Vision, pp. 1–19. Springer, 2025.
[4] Chundawat, V. S., Tarun, A. K., Mandal, M., and Kankanhalli, M. Zero-shot machine unlearning. IEEE Transactions on Information Forensics and Security, pp. 2345–2354, 2023b.

Methods And Evaluation Criteria: The proposed method using partial training data for unlearning makes sense, but the authors are encouraged to investigate relevant works, such as data-free methods [1-4], to better understand the development of machine unlearning methods. To me, the paper just proposes a method with access only to partial training data and uses a selective sampling algorithm to identify the core set. If the paper only focuses on exact unlearning with partial training data, what is the advantage of the proposed method over approximate (data-free) unlearning methods? For different deletion requests, the proposed method needs to find a core set for the requests, which is impractical in the end. Theoretical Claims: I only checked those up to Section 4. Please see Claims And Evidence. Experimental Designs Or Analyses: The paper mainly focuses on linear classification. However, current machine unlearning methods are mostly evaluated with deep neural networks such as ResNet, ViT, and even the large-scale vision-language model CLIP [1-5].

[5] Poppi, Samuele, et al. "Safe-CLIP: Removing NSFW concepts from vision-and-language models." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.

Supplementary Material: Experiment part. Relation To Broader Scientific Literature: The key contribution of this paper is to use the selective sampling algorithm BBQSampler to select the core set from the training data to conduct unlearning for linear classification. Essential References Not Discussed: [1-4], for instance. The authors are encouraged to investigate and check recent machine unlearning papers. Other Strengths And Weaknesses: The core idea of using sample compression for unlearning is interesting but is restricted to linear classification. The argument that it enhances privacy is not convincing (see above). Other Comments Or Suggestions: Please see the above sections.
Questions For Authors: - In Theorem 2.4, does $S \setminus U=( (S^{'} \setminus U), ( (S \setminus S^{'}) \setminus U ) )$ mean $S \setminus U=(S^{'} \setminus U) \cup ( (S \setminus S^{'}) \setminus U ) $? - What does "measurable sets F" mean in Definition 2.3? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > CE 1 - *The claims in the paper are not convincing to me. The paper claims that the new definition ...* The authors disagree with the claim that data-free machine unlearning methods would provide more privacy protection than the proposed method. Data-free machine unlearning methods still attempt to recover a model approximately equivalent to retraining a model from scratch on the remaining dataset S \ U, which is the traditional definition of unlearning. Furthermore, [1, 2, 3, 4] do not provide any rigorous guarantee that such an unlearning goal is actually being achieved. In fact, [5] has shown that SSD [1] fails to properly unlearn. Recent work [5, 6] has highlighted that many empirical methods for unlearning fail to properly unlearn and do indeed leak information about deleted individuals. This shows a dire need for algorithms that meet theoretical guarantees of unlearning. Additionally, we present a new definition of unlearning, system-aware unlearning, which provides privacy guarantees that are provably stronger than traditional unlearning. Then, we provide an algorithm that provably satisfies system-aware unlearning, and we provably demonstrate that system-aware unlearning algorithms for linear classification are more memory and computation efficient than traditional unlearning algorithms. > MAEC 1 - *The proposed method using partial training data for unlearning makes sense, but the ...* The method in our paper does not only have access to partial training data. The system-aware algorithm initially has access to the entire dataset and then uses selective sampling to select the most important points to form a core set in order to facilitate faster and more efficient unlearning in the future. Furthermore, it is important to note that the core set is computed once during the initial learning phase and fixed before any deletion requests arrive and then never computed again. 
As various deletion requests arrive, Algorithm 1 performs an unlearning update and does not recompute a new core set for every new set of requests. This method is provably more efficient than traditional unlearning algorithms.

> EDOA 1 - *The paper mainly focuses on linear classification. However, current machine unlearning ...*

Even under the traditional unlearning definition, it is still unclear how to perform theoretically rigorous and efficient unlearning in simple models like regression. We focus on algorithms with theoretically rigorous unlearning guarantees.

> QFA 1 - *In Theorem 2.4, does $S \setminus U = ((S’ \setminus U), ((S \setminus S’) \setminus U))$ mean $S \setminus U = (S’ \setminus U) \cup ((S \setminus S’) \setminus U)$?*

In Theorem 2.4, we treat $S \setminus U$ as a vector that can be tensorized into two parts, $((S’ \setminus U), ((S \setminus S’) \setminus U))$, so that we can apply the chain rule of mutual information. We will clarify this in the final version of the paper.

> QFA 2 - *What does "measurable sets F" mean in Definition 2.3?*

$\mathcal{F}$ is the $\sigma$-algebra over the outcome space of possible system-states. Measurable sets $F$ can be thought of as all possible events (subsets of outcomes) over the $\sigma$-algebra $\mathcal{F}$. We note that this notation is standard for all indistinguishability-style definitions. Please let us know if you have any additional questions or concerns.

[1] Foster, J., Schoepf, S., and Brintrup, A. Fast machine unlearning without retraining through selective synaptic dampening. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 12043–12051, 2024.
[2] Tarun, A. K., Chundawat, V. S., Mandal, M., and Kankanhalli, M. Fast yet effective machine unlearning. IEEE Transactions on Neural Networks and Learning Systems, pp. 1–10, 2023.
[3] Bonato, J., Cotogni, M., and Sabetta, L. Is retain set all you need in machine unlearning?
restoring performance of unlearned models with out-of-distribution images. In European Conference on Computer Vision, pp. 1–19. Springer, 2025.
[4] Chundawat, V. S., Tarun, A. K., Mandal, M., and Kankanhalli, M. Zero-shot machine unlearning. IEEE Transactions on Information Forensics and Security, pp. 2345–2354, 2023b.
[5] Machine Unlearning Fails to Remove Data Poisoning Attacks. Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel. ICLR 2025.
[6] Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy. Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, Nicolas Papernot. SatML 2025.

---

Rebuttal Comment 1.1: Comment: Thank the authors for the response. The response addressed some of my concerns, so I raised my score to 2. However, even under the system-aware definition, for data-free methods (that is, S’ \ U = $\emptyset$), the state-of-system $I_A(S, U)$ would not store any remaining data in the system, therefore providing more privacy protection than the proposed method. The paper also claims that S’ \ U cannot reveal additional information about U beyond S \ U; but this does not mean that it provides a rigorous privacy guarantee. So S’ \ U alone may still leak information about deleted individuals. For example, if U shares the same distribution as S’ \ U, then the risk of leakage from S’ \ U could still be high, since the selected S' is a good representative of S. I am currently kind of confused about the definition, it seems to only introduce the state of the system (what is saved in the system by the unlearning algorithm after unlearning) to the DP-like unlearning definition; that is, it takes the perspective of an attacker who only observes the model after unlearning and the stored samples. If so, we just need to design unlearning algorithms with little or even no access to training data, which would mean that the extent of privacy protection is not affected by the definition but by the unlearning algorithms. Please let me know if I misunderstood something here.

---

Reply to Comment 1.1.1: Comment: > *However, even under the system-aware definition, for data-free methods, that...*

The state-of-system is not just the stored samples but everything stored in memory by the unlearning algorithm. At the very least, data-free methods store the model in the system; thus, the model itself is a part of the state-of-system. The model stored in the system by these data-free methods relies on all of the samples in S, even if the samples are not explicitly stored. Thus, the overall state-of-system (which contains the model) relies on the entirety of S. Furthermore, the data-free method from [2] actually requires access to some retain set samples in order to unlearn; these samples would have to be stored in the system.

[2] Tarun, A. K., Chundawat, V. S., Mandal, M., and Kankanhalli, M. Fast yet effective machine unlearning. IEEE Transactions on Neural Networks and Learning Systems, pp. 1–10, 2023.

> *The paper also claims that S’ \ U cannot reveal additional information about U beyond...*

It is true that S’ \ U may leak some information about U. However, Theorem 2.4 shows that any information leakage between S’ \ U and U must also be present between S \ U and U. Since traditional unlearning attempts to recover retraining-from-scratch on S \ U, this privacy leakage about U must have also been present under the traditional definition of unlearning, which is considered the gold standard.
In fact, if no assumptions are placed on the data generation process, absolute privacy guarantees or unlearning guarantees are impossible. > *I am currently kind of confused about the definition, it seems to only introduce the...* We think the confusion is perhaps stemming from the reviewer not thinking of $\mathsf{I}_A$ as a functional. For an unlearning algorithm without access to training samples (either in direct or indirect form), we must have $\mathsf{I}_A(S, U) = \mathsf{I}_A(S, \emptyset)$ for all deletion requests $U$ (or $\mathsf{I}_A(S’, U) = \mathsf{I}_A(S’, \emptyset)$ for all $U$). With this condition, the only unlearning algorithm would be one which has a constant state. (For a quick explanation of why this is true, consider the following. Unlearning requires $\mathsf{I}_A(S, U) = \mathsf{I}_A(S \setminus U, \emptyset)$ for all $U$, and we also know that $\mathsf{I}_A(S, U) = \mathsf{I}_A(S, \emptyset)$ for all $U$. Consider when $U=S$, we have $\mathsf{I}_A(S, U) = \mathsf{I}_A(\emptyset, \emptyset)$, thus we must also have $\mathsf{I}_A(S, \emptyset) = \mathsf{I}_A(S, U) = \mathsf{I}_A(\emptyset, \emptyset)$. Thus, the state after unlearning (which includes the model) cannot depend on the dataset $S$, even if the set of deletion requests is the empty set, $U = \emptyset$.) This concurs with the traditional belief that an unlearning algorithm that throws away the trained model and outputs a constant function (independent of the dataset) is a valid unlearning algorithm. However, while unlearning is preserved in this case, one cannot expect non-trivial performance. We show that the privacy guarantee of system-aware unlearning is at least as strong as traditional unlearning. 
However, through the flexibility of S’ (again it is very important that S’ is chosen only based on S before any unlearning request, and hence S’ captures the essence of the sample set as viewed by the learning algorithm even before any unlearning requests), we can design system-aware unlearning algorithms that are more efficient than traditional unlearning algorithms. We can achieve the same privacy protections using less memory and computation resources. The fact that we can achieve the same privacy protections while gaining additional flexibility in the design of unlearning algorithms is an advantage of system-aware unlearning. We thank the reviewer for their valuable feedback and discussion to help improve our paper.
Summary: The authors propose system-aware machine unlearning, which constitutes unlearning against an attacker who can observe the entire state of the system (including whatever the learning system uses internally). If the system does store the entire remaining dataset, then the system-aware unlearning definition becomes as stringent as traditional unlearning, but otherwise it is a relaxation constrained only to the data stored by the system. The paper only provides theoretical insights about the proposed system-aware machine unlearning setting.

___

## update after rebuttal

I would like to thank the authors for their response. However, my fundamental concerns still remain -- despite the claim that the work aims to make theoretical contributions in defining system-aware unlearning for linear classifiers, I believe an empirical validation is necessary, given that unlearning is a practical problem and existing baselines can be adapted to the system-aware definition. There are also issues in the current version regarding inconsistent phrasing/statements about non-convexity (which the authors mentioned they will rewrite) and the fact that SISA outperforms their method for a significant duration of the deletion phase. While there are minimalistic experiments in the paper, I will keep my score, largely given the non-existent evaluation against relevant baselines.

Claims And Evidence: - Under the definitions/assumptions made by the authors in Section 2 for linear classification, the theoretical claims are well justified. But the overall motivation of this work and the claims in the context of existing work in the machine unlearning domain are unclear, significantly limiting its usefulness. - To fully verify the claims being made (as unlearning is a task with practical real-world use cases), the paper needs extensive evaluation against existing methods and approaches for unlearning.
Machine unlearning methods need to be evaluated on both unlearning efficiency and post-unlearning model utility, across a number of existing methods. Currently, the few experimental results present are deferred to the appendix and only consider the SISA approach and exact retraining (both over 4 years old). Moreover, SISA seems to perform better than the proposed algorithm for a significant duration of the deletion phase. The paper is currently lacking a proper Results and Experimental Evaluation section. - Given the authors' statement in the Introduction (line 50, right hand column): _"For large datasets, this makes unlearning under the traditional definitions impractical"_. Yet, there exist approximate methods that scale to large datasets and large parameter counts (e.g. LLMs) and perform well in the limited-data-access regime (e.g. [1,2]). The paper focuses on linear classifiers, limiting its applicability to models that actually require a very large number of data samples. It would also be good to compare empirically with more recent approximate methods along the aforementioned metrics (authors can refer to [3] for more details) and on large datasets. Without these necessary experiments and analyses, the proposed method cannot be fully evaluated from an empirical perspective. - The authors state in the Introduction (line 43, right hand column): _"This is evidenced by a dire lack of efficient exact or approximate unlearning algorithms beyond the simple case of convex loss functions."_ I am not sure if this claim is correct. Is the SISA method the authors compare with itself not designed for non-convex losses? And given the large number of approximate approaches [1-2,4-5] that work with LLMs and deep neural networks (inherently non-convex), I believe the authors need to rewrite this statement to reflect a correct and nuanced perspective. References: 1. Huang, James Y., et al.
"Offset unlearning for large language models." arXiv preprint arXiv:2404.11045 (2024). 2. Ji, Jiabao, et al. "Reversing the forget-retain objectives: An efficient llm unlearning framework from logit difference." NeurIPS (2024). 3. Wang, Weiqi, et al. "Machine unlearning: A comprehensive survey." arXiv preprint arXiv:2405.07406 (2024). 4. Liu, Sijia, et al. "Rethinking machine unlearning for large language models." Nature Machine Intelligence (2025): 1-14. 5. Liu, Zheyuan, et al. "Machine unlearning in generative ai: A survey." arXiv preprint arXiv:2407.20516 (2024). Methods And Evaluation Criteria: There is very limited (and almost non-existent) evaluation. Please see above (Claims and Evidence) for more issues regarding the claims made in the context of methods and evaluation criteria. At the moment, there are very few experimental results that are present in the appendix and only consider the SISA approach and exact retraining (both over 4 years old). Moreover, SISA seems to perform better than the proposed algorithm for a significant duration of the deletion phase. The paper needs an extensive Evaluation section for a clear evaluation. Theoretical Claims: Yes, from a cursory verification of the theoretical proofs, I did not find any errors. Experimental Designs Or Analyses: As mentioned in the Methods And Evaluation Criteria section, there is very limited (and almost non-existent) evaluation. Please see above (Claims and Evidence) for more issues regarding the claims made in the context of methods and experiments. At the moment, there are very few experimental results that are present in the appendix and only consider the SISA approach and exact retraining (both over 4 years old). Moreover, SISA seems to perform better than the proposed algorithm for a significant duration of the deletion phase. The paper needs an extensive Evaluation section for a clear evaluation. Supplementary Material: Yes, I went through the Appendices. 
Relation To Broader Scientific Literature: The paper can be of interest to the machine unlearning community. However, the limited evaluation limits its effectiveness and scope, and could lead to a reduced impact in the field. Essential References Not Discussed: There are many references that are missing; primarily because the paper is lacking a Related Works section in the main text. There is some discussion of related papers in the appendix, but these need to be moved to the main paper and more works should be discussed. This is not an exhaustive list, but some references (as mentioned in above sections), include: 1. Huang, James Y., et al. "Offset unlearning for large language models." arXiv preprint arXiv:2404.11045 (2024). 2. Ji, Jiabao, et al. "Reversing the forget-retain objectives: An efficient llm unlearning framework from logit difference." NeurIPS (2024). 3. Wang, Weiqi, et al. "Machine unlearning: A comprehensive survey." arXiv preprint arXiv:2405.07406 (2024). 4. Liu, Sijia, et al. "Rethinking machine unlearning for large language models." Nature Machine Intelligence (2025): 1-14. 5. Liu, Zheyuan, et al. "Machine unlearning in generative ai: A survey." arXiv preprint arXiv:2407.20516 (2024). Other Strengths And Weaknesses: While the ideas in this paper have merit, the limited evaluation makes the work somewhat incomplete. For more details, please see the Methods And Evaluation Criteria and Claims and Evidence sections above. Other Comments Or Suggestions: N/A. Questions For Authors: Please see the Methods And Evaluation Criteria and Claims and Evidence sections above. Each of the points can be considered as questions and points of concern. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: > CE 2 - *To fully verify the claims being made (as unlearning is a task with practical real-world ...* > MAEC 1 - *There is very limited (and almost non-existent) evaluation...* We emphasize that our primary contribution is theoretical. We focus on unlearning algorithms with provable unlearning guarantees. Recent work [1, 2] has demonstrated that many empirical methods fail to properly unlearn. This shows a dire need for algorithms that provide theoretical guarantees of unlearning; thus, we focus specifically on unlearning algorithms that meet theoretical guarantees of certified unlearning. Our paper is a theoretical paper focused on developing a new definition for unlearning that is not as pessimistic as the traditional unlearning definition but is still principled. We provably demonstrate that system-aware unlearning algorithms for linear classification are more memory and computation efficient than traditional unlearning algorithms. > CE 3 - *Given the authors' statement in the Introduction (line 50, right hand column): "For large ...* In this work, we demonstrate that for linear classification, system-aware unlearning leads to algorithms that are provably more efficient. Furthermore, even under the traditional unlearning definition, it is still unclear how to perform theoretically rigorous and efficient unlearning in simple models like regression. Although the exact methods may not translate directly, the flexibility of system-aware unlearning could also lead to significantly more efficient algorithms for more complex model classes. An empirical evaluation of unlearning with larger models and datasets is beyond the scope of this work, as we focus on system-aware unlearning and exact unlearning algorithms for linear classification with rigorous theoretical guarantees.
> CE 4 - *The authors state in the Introduction (line 43, right hand column): "This is evidenced by ...* SISA requires the storage of many intermediate models and the entirety of the dataset to facilitate unlearning, which is extremely memory inefficient. We wanted to draw attention to the memory and computation inefficiencies of current unlearning algorithms, particularly those for nonconvex function classes. We will rewrite this statement for clarity. > ERND 1 - *There are many references that are missing; primarily because the paper is lacking a ...* In the final version, we will move the Related Works section to the main text and expand the references and discussions. Please let us know if you have any additional questions or concerns. [1] Machine Unlearning Fails to Remove Data Poisoning Attacks. Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel. ICLR 2025. [2] Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy. Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, Nicolas Papernot. SatML 2025. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. However, my fundamental concerns still remain -- despite the claim that the work aims to make theoretical contributions in defining system-aware unlearning for linear classifiers, I believe an empirical validation is necessary given that unlearning is a practical problem and existing baselines can be adapted to the system-aware definition. There are also issues in the current version regarding inconsistent phrasing/statements of non-convexity (which the authors mentioned they will rewrite) and the fact that SISA outperforms their method for a significant duration of the deletion phase. While there are minimalistic experiments in the paper, I will keep my score largely given the non-existent evaluation against relevant baselines.
Summary: The authors propose a new definition for unlearning that they refer to as “system-aware unlearning” where the aim for the unlearned model is to be indistinguishable from a model that was trained on *any* subset of the training data excluding the forget set (rather than a model specifically trained on exactly the retain set), and where indistinguishability here is with respect to the “internal state” required by the model (e.g. any data that must be stored, any intermediate checkpoints that are required, the unlearned model for that request, or the ingredients required to produce it). The idea is that this definition enables the model developer to control what information is placed in the “internal state” (and that consequently an “attacker” would have access to). One could then design systems that support system-aware unlearning by requiring less information to be stored in the first place. The authors then discuss core set algorithms, and how they facilitate connections between the standard definition of unlearning and system-aware unlearning. Namely, running a standard unlearning algorithm on a coreset S’ of the original dataset S enables system-aware unlearning (assuming only the coreset needs to be stored and not S itself) where for a large number of examples (those not in S’) unlearning is a no-op. The authors propose an exact learning / unlearning algorithm for linear classification that uses a particular selective sampling strategy called BBQSampler from prior work to effectively get a coreset. Then, to handle a deletion, interventions are only required if the examples to be deleted is part of the “coreset”, i.e. a set of “queried” examples. These interventions required to unlearn a point are efficient in terms of computing the “new coreset” due to a monotonicity property. They prove bounds on deletion capacity, memory requirements, computation time. 
########## update after the rebuttal The authors have clarified several concerns of mine through extensive discussions (see thread below) and I'm confident that reflecting these discussions in the updated manuscript will improve the paper (I read the latest response, too, and while I didn't have time to reply to it directly earlier, I note here that the clarification w.r.t how the divergences are computed is helpful, thanks again). I maintain my recommendation of weak accept. The reason I don't recommend acceptance more strongly is due to the limitation of being applicable only to linear classifiers. I also agree with other reviewers that adding/strengthening the existing empirical investigation would also lead to a stronger contribution even though I understand the nature of the contribution here is theoretical. Claims And Evidence: - “a dire lack of efficient exact or approximate unlearning algorithms beyond the simple case of convex loss functions” – while indeed there is a dire lack of certified unlearning algorithms for nonconvex models that are well performing, there are various (uncertified) ones that are efficient, and there is work on empirically auditing them, see e.g. Hayes et al. in the references below. - “only the privacy of the stored points are at jeopardy as long as the learnt model does not reveal much about points that were not used by the model” – this doesn’t seem to be true in general, since not all data selection mechanisms preserve privacy. Even if only a subset of the training dataset is accessible, this may leak privacy about other training data points (that were used, for instance, for selecting which subset to store). - “Even if an observer or attacker has access to larger public data sets that might include parts of the data the system was trained on, in such a system, we could expect privacy for data that the system does not use directly for building the model to be preserved.” – similarly here, why is this the case? 
In general, an example can influence the final computation even if it isn’t “directly” used for training. And this influence can translate to privacy vulnerability in general, right? - “Traditional unlearning definitions that require the unlearned hypothesis to approximately match A(S ∖ U, ∅) implicitly assume that the information between the deleted individuals U and the remaining dataset S ∖ U is small.” I don’t understand why the authors claim that this assumption is needed. The traditional DP-like unlearning definitions are about measuring the difference between the allowed knowledge one would have on U even if having trained just on S ∖ U compared to the *additional* knowledge about U that stems from having trained on U (and not perfectly removed its influence). But this doesn’t require that the MI between U and S ∖ U is small? - In the definition of system-aware unlearning, how should I interpret the probability Pr(I_A(S, U) ∈ F)? In the standard definition, the probabilities are over the output of running either the “retraining” or the “unlearning” recipe and in either case, that output is a set of model weights. When we replace this with the “state of the system” (which can contain a variety of things like model checkpoints, but also data points, etc), it’s less clear to me how to interpret this probability distribution, or what this divergence exactly means in this case. - “We point out that the traditional definition of unlearning does not require indistinguishability for auxiliary information stored in the system outside of the unlearned model; thus, the traditional definition does not account for system-awareness” – I agree in general, but see the distinction introduced in Neel et al. about “perfect unlearning” relating to whether a secret state (that the attacker can’t access) can be kept or not which is relevant to discuss here.
Methods And Evaluation Criteria: N/A Theoretical Claims: I did not check the proofs in the Appendix, but followed the proof sketches in the main paper. Experimental Designs Or Analyses: N/A Supplementary Material: I did not read the Appendix in detail. Relation To Broader Scientific Literature: See below section. Essential References Not Discussed: - “When machine unlearning jeopardizes privacy” (see references below) shows that, with access to intermediate checkpoints of the model (what can be considered as being part of an “internal state” here), an attacker can even succeed against exact unlearning! This would be interesting to incorporate in the discussion of this paper (since it also relates to the relationship between auxiliary information that the attacker can access vis-a-vis privacy claims or guarantees; even in the case of “exact unlearning”). - Definition 1 in Golatkar et al. (see references below) has a similar flavour of requiring the “existence of a certificate” such that closeness to that verifies an algorithm as successful unlearning (rather than a fixed recipe as a certificate). This is reminiscent of the “there exists an S’ …” modification in the proposed definition. There are various differences between that definition and the one proposed here, but this similarity is interesting to acknowledge. References ========== - Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy. Hayes et al. SatML 2025. - Descent-to-Delete: Gradient-Based Methods for Machine Unlearning. Neel et al. 2020. - When machine unlearning jeopardizes privacy. Chen et al. ACM SIGSAC 2021. - Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks. Golatkar et al. CVPR 2020. Other Strengths And Weaknesses: Strengths - The paper is thought-provoking, proposes an interesting alternative definition of unlearning. 
- The authors propose an efficient algorithm for unlearning in linear classification that they prove satisfies exact system-aware unlearning. They prove bounds for how many unlearning requests their algorithm can handle without too big a drop in performance (deletion capacity), memory requirements and amount of computation required. - The paper is for the most part well-written, giving intuition behind various theoretical results that makes it easier to follow. Weaknesses - It’s unclear to me when is system aware unlearning useful in practice, what application is it most appropriate for? I have concerns (discussed above) about its privacy protection, but maybe the authors can convince me otherwise in the rebuttal. For example, for the compression, if we use all of S as input to a “compression function” that outputs a representative subset S’ on top of which we do further processing, it does not mean that only S’ influenced the final model. S influenced the final model here too and that influence depends on the compression function and whether it is privacy preserving. - The proposed algorithm is only applicable to linear classification. Can it be extended to neural networks? - “Assume that deletions are drawn without replacement [...]” – often not a realistic assumption as in practice unlearning requests may be correlated (e.g. groups of users that are unhappy with the model and share various characteristics with each other, would be more likely to request data deletion). Other Comments Or Suggestions: Minor: - Line 196 (right column): “S′ ∖ U should leak any more information about U as compared to S ∖U” – I think the authors here mean “should *not* leak any more information [...]”. - Initially, e.g. in Definition 2.1, the authors use A to denote a learning algorithm and \bar A to denote an unlearning algorithm. But later, in various places, A denotes the unlearning algorithm (and A(S,U) the unlearned model). This discrepancy in notation can cause confusion. 
- Many new symbols introduced in Algorithm 1, it would be helpful to describe them intuitively. - In Theorem 4.2, define A (also, the symbol A is overloaded – elsewhere in the paper it denotes a learning or unlearning algorithm, here it appears to be some matrix). Questions For Authors: - “Thus, S′ ∖ U leaks no more information about U than S ∖ U” – what exactly is the privacy guarantee that this implies? Can you help me understand how this statement translates to a privacy guarantee for U which is what subsection 2.1 is trying to argue I think? - “under traditional unlearning, exact unlearning requires storing the entirety of the dataset” – can’t we be satisfied though with approximate unlearning that stores a random (or not random) subset of the dataset? I don’t quite see why this is impossible under traditional unlearning and we must resort to system aware unlearning here. (I can see it being impossible under *exact* traditional unlearning, but approximate might be okay in practice). What is the benefit of applying a corset algorithm and viewing it as exact system-aware unlearning as opposed to approximate “standard” unlearning? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > CE 1 We acknowledge that there are empirical unlearning algorithms that have demonstrated good performance on nonconvex models; however, recent work [1] has demonstrated that many such empirical methods fail to properly unlearn; thus, we focus on algorithms that meet theoretical guarantees of certified unlearning. > CE 2, CE 3 Our definition (Definition 2.3) requires that for any S, there exists S’ such that *for any U*, $\textsf{I}_A(S, U)$ is indistinguishable from $\textsf{I}_A(S’ \setminus U, \emptyset)$. The order of the quantifiers ensures that S’ is chosen before U is selected, and thus the data selection mechanism is independent of U. Thus, S’ \ U cannot reveal more information about U than S \ U does. > CE 4 We remark that this is not a technical assumption used in any of our results. We only mention this to help interpret our results. Indeed, the mutual information (MI) between U and S ∖ U does not need to be small. However, such an assumption is implicitly necessary when considering unlearning for privacy purposes. One can easily construct scenarios where the MI between U and S ∖ U is large; thus, even retraining from scratch (the gold standard) may not preserve the privacy of deleted individuals U (as S \ U reveals U). Unlearning definitions that compete with retraining-from-scratch only provide meaningful privacy for the deleted individuals when the MI between U and S ∖ U is small. To motivate our definition, we show that the MI between U and S’ ∖ U never exceeds the MI between U and S ∖ U (Theorem 2.4). > CE 5 The probabilities are over the outcome space of all possible system-states (model weights, saved gradients, stored samples, etc). > CE 6 We thank the reviewer for this reference. We note that perfect unlearning is equivalent to system-aware unlearning when S=S’. We will add this reference. > ERND 1 Unlearning definitions do not guarantee privacy against adversaries with continuous model observations.
One could incorporate the intermediate checkpoints of the model into the state-of-system definition, resulting in a system-aware unlearning definition that would be robust to such adversaries. For Algorithm 1 specifically, since the core set is stored in memory, observing the system state before unlearning will reveal the deleted individuals in the core set. However, we can always provide privacy guarantees for individuals outside of the core set against such adversaries because they are never stored in the system at any point. > ERND 2 We thank the reviewer for this reference. We agree that the similarity of the “existence of a certificate of forgetting” is interesting and will be sure to cite Golatkar et al. However, it is important to note that the unlearning algorithm in Golatkar et al. (OQSA) uses a certificate that computes the learning algorithm trained on S \ U, equivalent to traditional unlearning. Golatkar et al. did not capitalize on the flexibility of such a certificate. Our Algorithm 1 leverages the flexibility in S’ for more efficient unlearning. > OSAW W1 System-aware unlearning is most useful when an accurate model can be learned with a small number of samples, such as models with small eluder dimension or disagreement coefficient [2]. > OSAW W2 Even under the traditional unlearning definition, it is still unclear how to perform theoretically rigorous and efficient unlearning in simple models like regression. We hope to extend our framework to more complex models, but this seems to be a significant challenge in traditional and system-aware unlearning. > OSAW W3 This model of deletions can capture correlated deletions. For example, the deletion distribution may be such that users sharing certain characteristics have a higher probability of deletion than other users. We do assume that the deletion requests are not adaptive to system updates. 
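The quantifier order emphasized in the CE 2/3 response can be written out explicitly. As a sketch, assuming an (ε, δ)-style indistinguishability notion as is standard in the certified-unlearning literature (the exact metric used in the paper's Definition 2.3 may differ), the condition reads:

```latex
\forall S \;\; \exists S' \subseteq S \;\text{ s.t. }\; \forall U \subseteq S,\ \forall F:\quad
\Pr\big[\mathsf{I}_A(S, U) \in F\big]
\;\le\; e^{\varepsilon}\,\Pr\big[\mathsf{I}_A(S' \setminus U, \emptyset) \in F\big] + \delta,
```

and symmetrically with the two probabilities swapped. Because S' is fixed before any U is chosen, the selection of S' cannot encode information about which points will later be deleted, which is the source of the "S' ∖ U reveals no more about U than S ∖ U" claim.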
> QFA 1 Theorem 2.4 states that the information leakage between U and S’ ∖ U never exceeds the information between U and S ∖ U. This gives a relative information leakage guarantee that system-aware unlearning leaks no more information about the deleted individuals than traditional unlearning. > QFA 2 So far, we do not know of any practical unlearning algorithms that can do approximate unlearning without storing the entire dataset in memory. In fact, [3] already shows a lower bound that even approximate unlearning (in the traditional definition) needs to store the entire dataset (while they focus on the realizability testing problem, we strongly suspect that their result can be extended to unlearning). We will clarify these discussions in the final paper. Please let us know if you have further questions. [1] Machine Unlearning Fails to Remove Data Poisoning Attacks. Pawelczyk et al. ICLR 2025. [2] Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective. Foster et al. COLT 2021. [3] On the unlearnability of the learnable. Cherapanamjeri et al. 2025. --- Rebuttal Comment 1.1: Comment: Hi authors, thanks a lot for the responses and additional discussion. Some follow-up thoughts: - CE 1: yes, I am aware of works showing limitations of existing uncertified methods in certain cases. However, methods that have guarantees instead have the drawback of requiring simplifying assumptions like convexity or smoothness. So it seems to me that neither certified nor uncertified approaches have yet "solved" the problem for non-convex models but both are worth discussing, and there has been a lot of work in both directions. So the claim “a dire lack of efficient exact or approximate unlearning algorithms beyond the simple case of convex loss functions” is not fully correct. 
I would prefer instead mentioning recent "sota" of each category (certified vs non-certified), and pointing out their respective weaknesses (such as those uncovered in Pawelczyk et al and Hayes et al for the case of non-certified methods). - CE 2/3: OK, I understand now about the order of picking S' before U (and it must work for any U) means that S’ \ U can not reveal additional information about U than S \ U. Thanks for clarifying. - CE 4: Thanks for clarifying that the MI assumption isn't required for any of your results. I actually disagree that it is implicitly necessary when considering unlearning for privacy purposes. It can be in some cases, but it depends on your definition of privacy. For Differential Privacy, we are simply concerned with the *additional* knowledge that the model has on a data point due to having trained on it (we are not concerned with knowledge that it might be able to infer about it due to having trained on other, similar, examples). And analogously for the usual definition of unlearning that required indistinguishability between the unlearned and retrained distributions: we are only concerned about the *additional* knowledge on the forget set that is due to having trained on it. Any knowledge that would have been obtained about it even under "retraining from scratch" is allowed, and doesn't deduct points from the estimated "unlearning quality" of the algorithm in question. This is the reason that I found these MI arguments confusing. I suggest the authors clarify in the revised text, at least to state, as they did in my response, that the results of the paper don't require an MI assumption. - CE 5: Thanks. Right, I understand conceptually that the probabilities are over the outcome space of all possible system-states. But how would one instantiate this / operationalize computing divergence between such distributions? It sounds practically much harder than when the probabilities are only in terms of model weights.
Perhaps adding some discussion for this is helpful. - ERND 2: Agreed w.r.t that difference. It's only partially related due to the "certificate" aspect, and thought that was an interesting connection to discuss. But of course agree about the other key axis of difference there. - OSAW W1: OK. Can you think of a specific practical application where this condition would be met? - QFA 2: "So far, we do not know of any practical unlearning algorithms that can do approximate unlearning without storing the entire dataset in memory." Again, I think here you are referring to algorithms with guarantees specifically. In general, for the non-certified case, it is possible to do approximate unlearning without storing the entire dataset. Of course, the "unlearning quality" would need to be measured separately, empirically, if there is no certification. But I do believe this is an important direction too, and one that should not be discarded from the narrative. Overall, I maintain my opinion that this work is thought-provoking and a nice contribution and I thank the authors for the interesting discussion. I am inclined to keep my score of weak accept and to not raise it further because of concerns of limited (or unclear) practical applicability over other definitions (which applications is this definition most appropriate for compared to the standard one? how can one audit for this in practice if we don't have certification (which we don't for non-convex)?) as well as some clarification issues and the need for additional more well-rounded discussion, though I recognize some of that is personal preference. But I will read and take into consideration any additional follow-up comment from the authors. 
--- Reply to Comment 1.1.1: Comment: > *CE 1: Yes, I am aware of works showing limitations of existing uncertified methods in...* We will update this discussion in the paper to discuss both the strengths and weaknesses of certified and uncertified unlearning methods and add a comprehensive discussion about both the performance (or lack thereof in certified methods) and vulnerabilities of uncertified methods with appropriate citations. We agree that both directions are interesting to explore. > *CE 4: Thanks for clarifying that the MI assumption isn't required for any of your results...* We will update the paper to state and clarify that the results of the paper do not require a MI assumption. We will emphasize that system-aware unlearning provides a relative privacy guarantee, as is the case with differential privacy and traditional unlearning definitions. We thank the reviewer for pointing out this confusion. > *CE 5: Thanks. Right, I understand conceptually that the probabilities are over the...* One can compute divergences between system-states similarly to how one computes divergences across model weights. Many unlearning algorithms store “sufficient statistics” of the sample, approximately update these statistics when unlearning, and then compute an unlearned model as a function of these statistics [1, 2]. In fact, we point out that the way these works argue closeness after unlearning is by actually arguing the closeness of these approximate “sufficient statistics” to the true statistic of S \ U, rather than directly arguing closeness of the model weights. We can think of these stored statistics as exactly the state of the system. For example, [1] stores an approximate Hessian matrix of S \ U, $\hat{H}$, to assist with unlearning along with the model weight vector $w$. We can think of $\hat{H}$ and $w$ together as the state of the system. 
[1] first argues closeness between the approximate Hessian matrix of S \ U, $\hat{H}$, and the true Hessian matrix of S \ U, $H$. Since the unlearned model weight vector $w$ is simply a function of $\hat{H}$ and the retrained-from-scratch model is the same function of $H$, we immediately have closeness between the unlearned model and the retrained-from-scratch model. One can think of this as arguing that the system-states are close in order to conclude that the model weights are close. Thus, we do not believe that computing divergences across system-states poses an additional challenge compared to computing divergences across model weights. We have included a discussion of this in line 250 of the paper, but we will expand this discussion in the final version of the paper. [1] Sekhari, A., Acharya, J., Kamath, G., and Suresh, A. T. Remember what you want to forget: Algorithms for machine unlearning. NeurIPS 2021. [2] Guo, C., Goldstein, T., Hannun, A., and Van Der Maaten, L. Certified data removal from machine learning models. ICML 2019. > *OSAW W1: OK. Can you think of a specific practical application where this condition...* One could use influence functions [3] and data attribution techniques [4] to identify a subset of points that have the largest influence on the model weights and treat these “high-influence” samples as a core set. These techniques have been practically applied to deep learning models [3]. By only updating or retraining when one of these “high-influence” samples is deleted, we achieve a fast average deletion time. While it is not clear how to give performance guarantees when one of these “high-influence” points is deleted, this provides a practical framework for system-aware unlearning that can be applied to general model classes. [3] Understanding Black-box Predictions via Influence Functions. Pang Wei Koh and Percy Liang. ICML 2017. [4] TRAK: Attributing Model Behavior at Scale. Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry.
ICML 2023. > *QFA 2: "So far, we do not know of any practical unlearning algorithms that can do...* We do refer to certified algorithms here, as that is the focus of our paper. However, we will add an expanded discussion of uncertified approximate unlearning algorithms in the Related Work, including data-free empirical methods, which do not require the storage of the entire dataset. We thank the reviewer for their valuable feedback and perspective. We appreciate the time that the reviewer has taken to engage with our paper. We greatly value these helpful discussions, which have improved our paper.
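The “sufficient statistics as system state” idea discussed in this thread can be illustrated with regularized least squares, where exact unlearning reduces to updating stored statistics rather than touching raw data. This is a minimal sketch under that assumption; the class name is hypothetical, and it is not the algorithm of [1], which stores an *approximate* Hessian and argues closeness of statistics rather than exactness.

```python
import numpy as np

class RidgeWithUnlearning:
    """Stores sufficient statistics (H, b) instead of the raw dataset.

    The "state of the system" here is (H, b, w). Deleting a sample
    updates these statistics directly, so the unlearned model matches
    retraining from scratch on the remaining data exactly.
    Illustrative sketch only, not a published algorithm.
    """

    def __init__(self, X, y, lam=1.0):
        d = X.shape[1]
        self.H = X.T @ X + lam * np.eye(d)  # regularized Hessian of the loss on S
        self.b = X.T @ y                    # gradient-related statistic of S
        self.w = np.linalg.solve(self.H, self.b)

    def unlearn(self, x, y):
        # Remove one sample's contribution from the stored statistics,
        # then recompute the model as a function of the new state.
        self.H -= np.outer(x, x)
        self.b -= y * x
        self.w = np.linalg.solve(self.H, self.b)
        return self.w
```

Because the retrained-from-scratch model is the same function of the true statistics, closeness (here, equality) of system states immediately gives closeness of model weights, mirroring the argument in the reply above.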
Summary: The paper introduces a new system-aware unlearning setting, where attackers have access to only a partial dataset stored in the system rather than the entire dataset. The authors argue that this relaxation enables a more efficient and practical unlearning framework. To address this setting, they propose a selective sampling-based algorithm to identify the core set and provide theoretical analysis of deletion capacity, excess risk, etc. Claims And Evidence: theoretical contributions: The work rigorously proves that system-aware unlearning generalizes traditional unlearning, providing bounds on deletion capacity, memory requirements, and excess risk. Methods And Evaluation Criteria: Under the partial-storage setting, the proposed method's use of selective sampling to select core-set points whose labels were queried makes sense to me and provides an interesting perspective on saving computation time and memory. Theoretical Claims: Lemma 4.7: Let the deletion distribution μ be the uniform distribution. Is it always valid to assume this scenario? Wouldn't there be data-dependent deletion distributions? Experimental Designs Or Analyses: * The experimental setup is relatively simple and limited. The current study focuses on linear classification, which, while important, represents a rather basic scenario. How would the method perform on non-convex deep models or, at the very least, on SVMs?
* Regarding the experiments, would it be possible to provide further evaluations of how the selection process performs under different distributions and sample sizes of $S'$? Supplementary Material: experiment sections & related work Relation To Broader Scientific Literature: The paper broadens the scope of machine unlearning from a systems perspective by shifting from the traditional unlearning model to a more practical system-aware framework Essential References Not Discussed: There is also related work in [1] and [2] that emphasizes the importance of system-aware perspectives in machine unlearning; the authors are encouraged to investigate relevant works to better describe their contribution to the development of machine unlearning methods through the lens of system-awareness. [1] Yuke Hu, Jian Lou, Jiaqi Liu, Feng Lin, Zhan Qin, and Kui Ren. Eraser: Machine unlearning in mlaas via an inference serving-aware approach. Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 2024. [2] Gaoyang Liu, Xiaoqiang Ma, Yang Yang, Chen Wang, and Jiangchuan Liu. Federaser: Enabling efficient client-level data removal from federated learning models. In 2021 IEEE/ACM 29th International Symposium on Quality of Service (IWQOS), pages 1–10. IEEE, 2021. Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > TC 1 - *Lemma 4.7: Let the deletion distribution μ be the uniform distribution. Is it always valid to ...* We agree that assuming the deletion distribution to be uniform is not always valid. Our main theorem (Theorem 4.6) applies to general data-dependent deletion distributions. Lemma 4.7 is an instantiation of Theorem 4.6 with a uniform deletion distribution in order to provide an illustrative example of deletion-time savings under system-aware unlearning. For our main result, Theorem 4.6, the key assumption we make is that deletion requests are not adaptive/adversarial to updates in the system. > EDA 1 - *The experimental setup is relatively simple and limited. The current study focuses on ...* We emphasize that our contributions are primarily theoretical. Our main contribution is to provide a new definition of unlearning that is not as pessimistic as the traditional one and to show how this definition can help us obtain much more efficient unlearning algorithms in a principled way. Several unlearning methods validated only via empirical metrics have been shown to leak information about deleted data points [3, 4], illustrating the dire need for a theoretically sound definition of unlearning. Even under traditional unlearning definitions, it is still unclear how to perform theoretically rigorous and efficient unlearning in simple models like regression. > EDA 2 - *Regarding the experiments, would it be possible to provide further evaluations on how ...* In Theorem 4.6, the bound on the sample size of $S’$ from the selection process holds under any data distribution. The sample size of $S’$ can be tuned by properly setting $\kappa$ in the BBQSampler. When $\kappa$ is large, the sample size of $S’$ is large, leading to higher accuracy. When $\kappa$ is small, the sample size of $S’$ is small, leading to fast expected deletion times and low memory requirements. The exact tradeoffs are characterized by Theorem 4.4 and Theorem 4.6.
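The $\kappa$ tradeoff described in the reply can be made concrete with a toy margin-based selective sampler. This is an illustrative stand-in, not the paper's BBQSampler: the function name, the update rule, and the threshold schedule $t^{-1/\kappa}$ are all assumptions, chosen only so that larger $\kappa$ queries more labels (a larger core set $S'$) and smaller $\kappa$ queries fewer, mirroring the tradeoff stated above.

```python
import numpy as np

def selective_sample(X, y, kappa, lr=0.1):
    """Margin-based selective sampling sketch (hypothetical, not BBQ).

    A label is queried (the point joins the core set S') only when the
    current linear predictor is uncertain, i.e., its margin is small.
    The query threshold t**(-1/kappa) shrinks over rounds; larger kappa
    keeps it close to 1 (many queries), smaller kappa collapses it
    quickly (few queries).
    """
    d = X.shape[1]
    w = np.zeros(d)
    core_set = []  # indices of points whose labels were queried
    for t, (x, label) in enumerate(zip(X, y), start=1):
        margin = w @ x
        if margin ** 2 <= t ** (-1.0 / kappa):  # uncertain: query the label
            core_set.append(t - 1)
            w += lr * (label - np.tanh(margin)) * x  # simple online update
    return w, core_set
```

Only queried points need to be stored, so tuning $\kappa$ directly trades accuracy against memory and expected deletion time, as in the reply.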
> ERND 1 - *There are also related work in [1] and [2] that emphasize the importance of ...* These two papers consider unlearning under unique learning architectures, specifically under machine-learning-as-a-service [1] and federated learning [2] settings. Both of these papers work to satisfy the traditional definition of unlearning under these unique learning systems. We agree that it would be interesting to explore how these unique settings could benefit from a system-aware perspective on unlearning. Please let us know if you have any additional questions or concerns. [1] Eraser: Machine unlearning in mlaas via an inference serving-aware approach. Yuke Hu, Jian Lou, Jiaqi Liu, Feng Lin, Zhan Qin, and Kui Ren. ACM SIGSAC 2024. [2] Federaser: Enabling efficient client-level data removal from federated learning models. Gaoyang Liu, Xiaoqiang Ma, Yang Yang, Chen Wang, and Jiangchuan Liu. IEEE IWQOS 2021 [3] Machine Unlearning Fails to Remove Data Poisoning Attacks. Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel. ICLR 2025. [4] Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy. Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, Nicolas Papernot. SatML 2025.
Summary: This paper introduces system-aware unlearning, a novel framework that generalizes traditional machine unlearning by relaxing privacy guarantees to account for realistic attacker access to the system’s internal data. It proposes a general approach using sample compression or core sets, where reduced algorithmic reliance on stored data inherently limits attack exposure. The authors demonstrate this via the first exact unlearning algorithm for linear classification with sublinear memory in the dataset size, enabled by selective sampling for compression. Theoretical analysis establishes bounds on computation, memory, deletion capacity, and excess risk, balancing efficiency and privacy in practical settings. This framework bridges rigorous unlearning guarantees with feasible resource requirements. Claims And Evidence: See Strengths and Weaknesses. Methods And Evaluation Criteria: See Strengths and Weaknesses. Theoretical Claims: The reviewer did not conduct a thorough review of the proofs. Experimental Designs Or Analyses: There are no experimental designs. Supplementary Material: The reviewer has reviewed the supplementary material. Relation To Broader Scientific Literature: This paper presents work whose goal is to advance the field of machine unlearning, which is specifically oriented to improve the trustworthiness of machine learning. Essential References Not Discussed: The paper focuses on the field of certified machine unlearning, but does not discuss the following papers: [1] Eli Chien, Chao Pan, and Olgica Milenkovic. “Certified Graph Unlearning”. [2] Jiaqi Liu, Jian Lou, Zhan Qin, and Kui Ren. “Certified Minimax Unlearning with Generalization Rates and Deletion Capacity”. In NeurIPS 2023. [3] Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li. "Towards Certified Unlearning for Deep Neural Networks." In ICML 2024. Other Strengths And Weaknesses: Strengths 1.
Proposes a practical relaxation of traditional unlearning definitions by introducing system-aware unlearning, which accounts for realistic attacker capabilities. 2. Generalizes traditional unlearning while enabling more efficient algorithms. 3. Provides the first exact unlearning algorithm for linear classification with sublinear memory, breaking prior lower bounds (Cherapanamjeri et al., 2024) under traditional definitions. 4. Rigorous analysis of deletion capacity, excess risk, and computational complexity, with tradeoffs controlled by the sampling parameter $\kappa$. Weaknesses 1. The use of mathematical symbols is not standardized. The font sizes of the symbols $\backslash$, $\in$, and $\subseteq$ differ from those of other symbols, which makes the text difficult to read. 2. Lack of experimental verification. The paper lacks quantification of the unlearning performance, such as the F1 score in [3] and the accuracy of membership inference attacks. Other Comments Or Suggestions: 1. It would be better to place Eq. (1) and Eq. (2) after Definition 2.2, which makes the logic more coherent before Definition 2.3. 2. Grammar error. For example, in Definition 2.3, '... there exists a...' should be '... there exists an...' Questions For Authors: 1. Why does the state-of-system in Definition 3.1 take only one parameter (i.e., $S$), while the state-of-system in Eq. (1) & Eq. (2) takes two parameters (i.e., $S$ and $U$)? Both use the symbol $\mathrm{I}$, but the definitions are not consistent. Code Of Conduct: Affirmed. Overall Recommendation: 3
Stochastic Control for Fine-tuning Diffusion Models: Optimality, Regularity, and Convergence
Accept (poster)
Summary: This paper proposes a discrete-time stochastic control framework with linear dynamics and KL regularization for fine-tuning diffusion models. It establishes well-posedness, proves the regularity of the optimal value function, and develops a policy iteration algorithm (PI-FT) with guaranteed regularity and stability. The framework extends to parametric settings for efficient high-dimensional implementation. Claims And Evidence: The claims in this manuscript are supported by clear and convincing evidence, such as 1. The illustration of the connection between DDPM and stochastic control 2. The regularity and well-posedness of the optimal value function and the optimal control policy 3. The framework/algorithm for fine-tuning diffusion models (PI-FT) and the analysis of its convergence 4. The linear parameterization enabling practical and efficient updates for the above algorithm Methods And Evaluation Criteria: Both the regularity & well-posedness and the convergence analysis give the proposed method a strong theoretical foundation. However, none of the above theoretical results, nor the convergence of the algorithm, have been demonstrated on a particular dataset or task. I think adding some experimental results would make this approach more convincing. Theoretical Claims: This paper provides strong theoretical foundations for fine-tuning diffusion models, with rigorous derivations supporting its key claims. It establishes a clear connection between Denoising Diffusion Probabilistic Models (DDPM) and stochastic control, ensuring a principled approach to fine-tuning. The authors prove the well-posedness and regularity of the optimal value function and control policy, guaranteeing stability. They develop a policy iteration algorithm (PI-FT) and provide a detailed convergence analysis. Additionally, the framework extends to a parametric setting with linear parameterization, enabling efficient and practical updates for high-dimensional applications.
Experimental Designs Or Analyses: Unfortunately, the method proposed in this paper has not yet been demonstrated on a specific dataset or task. Incorporating experimental results would further validate the approach and enhance its credibility. Supplementary Material: Supplementary material includes the proofs of three lemmas or theorems, including 1. the identification of the KL-divergence between two conditional distributions with the squared loss between the control and the pre-trained score 2. regularity and well-posedness 3. the convergence of the algorithm Relation To Broader Scientific Literature: Recent empirical results provide compelling evidence to support the proposed framework: recent studies implement reinforcement learning (RL) or control-based fine-tuning algorithms closely aligned with this approach. These empirical studies have demonstrated the practical efficacy of KL-regularized control formulations in fine-tuning diffusion models. Essential References Not Discussed: The paper provides a comprehensive discussion of related works. All essential references needed to understand its key contributions are adequately cited and analyzed. Other Strengths And Weaknesses: I have to say that the theory in this paper is very solid. However, the lack of empirical results is also the biggest drawback of this paper. Other Comments Or Suggestions: Including experimental results would strengthen the paper by demonstrating the practical effectiveness of the proposed approach. Empirical validation would also make the theoretical contributions more compelling and applicable to real-world scenarios. Questions For Authors: 1. In Lines 90–96 of the introduction, the authors mention that recent empirical results provide compelling evidence to support their framework. Could the authors provide more details on the comparisons and similarities between those empirical methods and their own? 2.
Actually, the authors could show the convergence and the regularity of the optimal value function & the optimal control policy of their approach under the above-mentioned empirical settings. 3. The parameterization trick in the proposed algorithm yields a linear convergence rate. Does the linear convergence rate mean faster convergence and better performance with the same time/resources? I am curious whether the conditions in the above lemmas and theorems can be realized in empirical settings; external experimental results demonstrating this point would be better. Code Of Conduct: Affirmed. Overall Recommendation: 3
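As background for question 3 above (standard optimization terminology, not specific to this paper): a linearly convergent method contracts the error by a constant factor at each iteration,

```latex
e_{m+1} \le \rho\, e_m \quad (\rho \in (0,1))
\quad\Longrightarrow\quad
e_m \le \rho^{m} e_0 ,
```

so reaching accuracy $\varepsilon$ requires only $O(\log(1/\varepsilon))$ iterations. Whether this translates into better performance for the same time/resources also depends on the per-iteration cost, which is one reason same-compute-budget comparisons are informative.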
Rebuttal 1: Rebuttal: Thank you for your feedback. We are glad that you found our theoretical results strong and rigorous. Below please find our response to your questions. ``` I have to say that the theory in this paper is very solid. However, the lack of empirical results is also the biggest drawback of this paper. ``` Thank you for your suggestion. To demonstrate the practical efficiency and effectiveness of the proposed PI-FT method, we conduct thorough numerical experiments, which are included in https://anonymous.4open.science/r/ICML-rebuttal-741C/figures.pdf. **Experiment Set-up.** We fine-tune Stable Diffusion v1.5 for text-to-image generation using LoRA and ImageReward. Our prompts—“A green-colored rabbit,” “A cat and a dog,” “Four wolves in the park,” and “A dog on the moon”—assess color, composition, counting, and location. During fine-tuning, we generate 10 trajectories (50 transitions each) to calculate the gradient with 1K gradient steps (AdamW, LR = 3e-4), and KL regularization ($\beta= 0.01$). **Evaluation.** We first compare ImageReward scores of images generated by the pre-trained model, DPOK and PI-FT. To ensure a fair evaluation, we perform 10 gradient steps with LR = 1e-5 per sampling step in DPOK. Each gradient step is computed using 50 randomly sampled transitions from the replay buffer. Consequently, 1K sampling steps and 10K gradient steps in DPOK result in a computational cost similar to that of PI-FT. As shown in Figure 1 in the link, PI-FT consistently **outperforms** both the pre-trained model and DPOK across all four prompts. Figure 2 further shows that PI-FT more accurately generates the correct number of wolves, and correctly places the dog on the moon. Additionally, PI-FT avoids mistakes such as depicting a rabbit with a green background as appeared in the pre-trained model and DPOK. The texture of the generated animals is also more natural-looking under PI-FT compared to other baselines. 
**Effect of KL regularization.** KL regularization is known to improve fine-tuning. We analyze its effect in the PI-FT method using the prompt “Four wolves in the park,” varying $\beta \in$ {0.01, 0.1, 1.0}. Figure 3 shows that the gradient norm decreases to 0 in all three settings, illustrating the convergence of our algorithm. In Figure 4, we see that the ImageReward score increases and eventually stabilizes when $\beta$ is small. In contrast, the score exhibits limited improvement with large $\beta$. This observation is consistent with Figure 5, where the KL divergence remains large throughout training for $\beta = 0.01$ and stays at a significantly smaller level for $\beta \in$ {0.1, 1.0}. Figure 6 illustrates that smaller $\beta$ produces images with nearly four wolves, while larger $\beta$ yields fewer. These results underscore the importance of the KL coefficient in our framework. ``` … Could authors show more details about the comparison and similarity of their methods and your method. ``` In the literature, DPOK and DDPO use generic RL algorithms such as PPO or REINFORCE to optimize the policy/score. Despite showing empirical promise on certain tasks, existing methods **overlook** the underlying structure of diffusion models, leaving significant room for improvement in efficiency and lacking any theoretical justification. In contrast, by fully utilizing the problem structure, we formulate fine-tuning as a KL-regularized stochastic control problem (with linear dynamics), and propose the PI-FT algorithm. This structure allows us to derive regularity and convergence guarantees, closing the gap left by prior works. We will revise lines 90-96 to clarify these points. ``` ... show the convergence and the regularity of the optimal value function & the optimal control policy of your approach based on above mentioned empirical settings.
``` In the newly added experiments, we do observe convergence of both the control policy and the value function during training, supporting our theoretical claims. ``` … I am curious could the conditions in above lemmas and theorems be realized in empirical settings… ``` Empirically, we observe numerical results consistent with our theoretical claims. In Figure 3 (in the link), we observe that the algorithm converges **linearly** and the gradient norm of the U-Net stabilizes after approximately 200 sampling steps. Additionally, we compare PI-FT and DPOK under the same computational budgets; as shown in Figure 1, PI-FT achieves consistently higher ImageReward scores across all four prompts. This performance suggests that our approach converges more efficiently in practice. Although our theory assumes Lipschitz conditions and a specific range of the regularization coefficient, our experiments demonstrate that convergence remains robust even for small $\beta = 0.01$ and relatively large learning rates. Thank you for your detailed feedback. We hope our response addresses your concerns. If so, we would appreciate it if you could reflect it in your evaluation of our paper. --- Rebuttal Comment 1.1: Comment: The author's response helped clarify some of the confusion I had about this article. While the experimental results do not fully establish the validity of the proposed method, they serve as strong supporting evidence for the paper’s theoretical framework. I look forward to seeing more experimental results and comparisons of relevant evaluation metrics in the final version. I correspondingly improved my score.
Summary: This paper proposes a stochastic control framework for fine-tuning diffusion models. The key contribution is establishing theoretical properties such as well-posedness, regularity, and linear convergence of a proposed policy iteration algorithm. Claims And Evidence: The paper makes strong theoretical claims about the optimality, regularity, and linear convergence of their proposed algorithm. The theoretical analysis is mathematically rigorous with assumptions clearly presented. Methods And Evaluation Criteria: Proper evaluation is missing: Algorithm 1 is defined and justified clearly but is purely theoretical; however, the practicality of the method is questionable. The algorithm’s iterative nature and associated computational overhead might limit its applicability in realistic high-dimensional problems, but no practical evaluation or even basic numerical simulations have been conducted. Theoretical Claims: I have roughly examined the theoretical analysis, particularly the convergence properties presented in Theorem 3.1 and Theorem 2.8. The proofs appear correct. (On the other hand, all results before Thm 2.8 are standard and well-known in the literature. I have concerns about such a presentation; see below.) Experimental Designs Or Analyses: A significant weakness is the absence of any empirical validation. The paper does not include simulations or practical experiments demonstrating the behavior of Algorithm 1 or verifying the theoretical claims. In diffusion model fine-tuning, it is common practice to test on text-to-image models such as StableDiffusion v1.5. Pipelines for such implementations are also accessible and easy to work with. Can the authors clarify why this was not conducted?
Supplementary Material: NA Relation To Broader Scientific Literature: The paper accurately positions itself within existing literature concerning fine-tuning diffusion models using reinforcement learning and stochastic control. Essential References Not Discussed: Reference discussion is proper and abundant to me. Other Strengths And Weaknesses: As mentioned, Algorithm 1 appears overly idealized, and its practical utility in realistic scenarios remains highly questionable. In practical high-dimensional environments, taking expectations to obtain an accurate value function can be difficult, but the idealized algorithm overlooks this hardness. Other Comments Or Suggestions: I find following the flow of this work pretty easy and smooth. One suggestion would be: all theoretical results before Thm 2.8 are standard in the literature, that is to say, all contents up until Page 5. Results such as Lemma 2.5 and Eq. 8 (many remarks are also well-known) have long been discovered, see [1], [2]. These standard results take up too much space in the main text. I would suggest improving the presentation, for example, simply referring to these works instead of repeating all the details in the main text. Otherwise, it is also possible to only present the most important parts in the main text but defer burdensome calculation details and remarks to the appendix in the next version. More importantly, since this work is purely theoretical and does not include any simulation studies, it is advised to spend more effort demonstrating the technical hardness of Thm 2.8 and Thm 3.1. It is confusing to me what the major difficulty is in presenting these results. It feels like Thm 2.8 is a direct calculation based on the Lipschitzness assumptions, which might not present enough technical novelty. I might be wrong, but can the authors comment on this? [1] https://arxiv.org/pdf/2402.15194 [2] https://arxiv.org/abs/2403.06279 Questions For Authors: See above Code Of Conduct: Affirmed.
Overall Recommendation: 4 Ethics Expertise Needed: ['Legal Compliance (e.g., GDPR, copyright, terms of use)']
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We are delighted that you found our theoretical results rigorous and our presentation clear. Below is our point-by-point response to your comments: ``` Proper evaluation is missing: Algorithm 1 is defined and justified clearly but is purely theoretical; however, the practicality of the method is questionable… ``` ``` A significant weakness is the absence of any empirical validation… ``` ``` As mentioned, Algorithm 1 appears overly idealized, and its practical utility in realistic scenarios remains highly questionable. In practical high-dimensional environment, taking expectations to obtain accurate value function can have difficulty… ``` We included a new set of thorough experiments in this rebuttal. Please see https://anonymous.4open.science/r/ICML-rebuttal-741C/figures.pdf for the results and see our response to Reviewer 1H7D for detailed explanations of the experiments. ``` I find following the flow of this work pretty easy and smooth. One suggestion would be: all theoretical results before Thm 2.8 are standard in the literature, that is to say, all contents up until Page 5. Results such as Lemma 2.5 and Eq. 8 (many remarks are also well-known) have long been discovered, see [1], [2]. These standard results take up too much space in the main text. I would suggest improving the presentation, for example, simply referring to these works instead of repeating all the details in the main text. Otherwise, it is also possible to only present the most important parts in the main text but defer burdensome calculation details and remarks to the appendix in the next version. ``` Thank you for your constructive feedback. To the best of our knowledge, we are the first to use KL divergence over transition dynamics (on path space) to control the deviation of the fine-tuned model from the pre-trained model, whereas formulations in the literature consider KL between the terminal state distributions.
For this reason, the results before Theorem 2.8 are novel and valuable to readers, although not the most technically demanding. Therefore, we have decided to follow your suggestion and defer calculation details and remarks to the appendix in the revised manuscript. ``` …since this work is purely theoretical and does not have any kind of simulation studies. It is advised to spend more efforts in demonstrating the technical hardness of Thm 2.8 and Thm 3.1. It is confusing to me what the major difficulty is in presenting these results. It feels like Thm 2.8 is a direct calculation based on the Lipschitzness assumptions, which might not present enough technical novelty. I might be wrong, but can the authors comment on this? ``` We appreciate the opportunity to clarify the technical novelty of Theorem 2.8 and Theorem 3.1. Regarding Theorem 2.8, the core challenge stems from the fact that Eq. (17) is an implicit equation, in which both sides involve the optimal control $u_t^*$. This prevents a direct application of Lipschitz assumptions on model parameters. In contrast, we prove the Lipschitz property of $u_t^*$ through the detailed and nontrivial derivations in Lines 726–736, where the choice of $\beta_t$ ensures the invertibility of the coefficient. Additionally, the differentiability of $u_t^*$ relies on the Gaussian smoothing effect; see Eq. (30) – (32), which is a novel analysis in the RL literature. The Lipschitz condition of $\nabla u_t^*$ is especially non-trivial as it involves $\nabla^2 V_{t+1}^*$, which may not be Lipschitz. We address this using the integration by parts formula in Eq. (34), which is a key technical step. We will discuss these points in the manuscript. In terms of Theorem 3.1, the convergence analysis of the PI-FT algorithm relies on **maintaining the regularity** of the value function throughout training, which is highly nontrivial.
For example, prior work such as [3] proves convergence under the assumption that the value function remains regular during training (which is non-tractable); see Assumption 2 in [3]. In contrast, our result establishes the desirable regularity through a connection between $V_t^{(m)}$ and the optimal value function $V_t^*$. Thanks to the reviewer’s question, we will revise the manuscript to better emphasize the technical challenges and contributions underlying these results. Thank you for your time and for your valuable feedback. Your suggestions definitely helped us improve the quality of this manuscript and highlight our contributions. We hope our response addresses your questions and clarifies the challenges in our work. We also hope that the effort we invested in conducting thorough numerical experiments demonstrates the promising practical performance of our proposed algorithm. If so, we would be grateful if you could reflect it in your evaluation of our paper. [3] Zhou, Mo, and Jianfeng Lu. "A policy gradient framework for stochastic optimal control problems with global convergence guarantee." arXiv preprint arXiv:2302.05816 (2023).
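As background for the KL-over-transition-dynamics formulation discussed in this thread (a standard Gaussian identity, not the paper's exact derivation): when two transition kernels are Gaussians with the same covariance, the per-step KL is a scaled squared distance between their means,

```latex
\mathrm{KL}\big(\mathcal{N}(\mu_1, \sigma^2 I)\,\|\,\mathcal{N}(\mu_2, \sigma^2 I)\big)
= \frac{\lVert \mu_1 - \mu_2 \rVert^{2}}{2\sigma^{2}} .
```

If the fine-tuned and pre-trained reverse transitions differ only through the control in their means, the path-space KL penalty therefore sums squared deviations of the control from the pre-trained score, consistent with the squared-loss form of the KL term referenced earlier in this discussion.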
Summary: The authors propose a discrete-time stochastic optimal control framework with linear dynamics and KL regularization to model the problem of fine-tuning of diffusion models. They analyze well-posedness and regularity of the control formulation, propose a novel algorithmic scheme based on policy iteration, and they analyze it showing linear convergence. Ultimately they introduce a parametric extension and analyze the convergence of an associated policy gradient method. ## update after rebuttal The rebuttal and further discussion with the authors properly addressed some of my concerns and therefore I have increased the score from 2 to 3. Nonetheless, I still believe that some of the weaknesses mentioned in my review and further comments do still hold. In particular, I am not convinced by the authors' justification for the current experimental evaluation proposed within the rebuttal. Although it certainly makes the work more complete than before, it arguably does not present a clear comparison with common methods in this space and might lead to a distorted/limited viewpoint/evaluation of the presented methods. Claims And Evidence: Yes Methods And Evaluation Criteria: The paper proposes arguably novel Policy Iteration and Policy Gradient schemes for fine-tuning of diffusion models but does not perform any experiment (e.g., comparing with existing methods). This is not inherently problematic, as theoretically relevant papers without any experiment are fine, but as later pointed out, I have some doubts regarding the relevance of the presented theoretical results. And not presenting experimental validations of the proposed algorithms fundamentally makes it impossible to evaluate them beyond the theoretical results. Theoretical Claims: Only limited parts of Lemmas B.2 and B.3. Experimental Designs Or Analyses: There are no experiments. Supplementary Material: Only limited parts of Lemmas B.2 and B.3.
Relation To Broader Scientific Literature: - To the best of my knowledge most works within this literature (of diffusion models fine-tuning) are fundamentally experimental. Overall I believe the authors of this work reported a very clear and honest representation of the current landscape of existing works. I find particularly interesting the focus on discrete-time and linear dynamics, which seems well motivated and renders it possible to connect the problem to common formulations in control theory. Having said so, I have doubts regarding the relevance of the derived theoretical results, expressed within the weaknesses section. Essential References Not Discussed: The work seems to properly mention all relevant references. Other Strengths And Weaknesses: STRENGTHS: 1. The paper focuses on a very timely and relevant problem. 2. The authors analyze well-posedness and regularity properties of a practically relevant control problem, which I regard as a positive contribution. 3. The authors present arguably new algorithms based on policy iteration and policy gradient for fine-tuning of diffusion models and an extension to parametric controls. 4. Formal derivations showing linear convergence guarantees of such algorithms are presented. WEAKNESSES: Unfortunately, I found it quite hard to distinguish between the policy iteration and policy gradient schemes presented, the limitations of each, and whether some limitations are purely artifacts of the theoretical analysis or actual algorithmic limitations. I list several concerns and doubts in the following. 1. It is not clear to me if the policy iteration algorithm presented is supposed to work only on discrete spaces, or, more precisely, how the control $u$ is represented within that algorithm. What settings does this algorithm actually cover? I am asking this since Sec. 4 seems to extend the control representation to be parametrized, but in doing so it seems to propose another algorithm.
Did I misinterpret something? 2. Sec. 4 seems to propose an algorithm for continuous spaces where the control is represented via a linear parametrization. While a linear approximation would make sense to represent the forward process drift, this seems an excessive approximation for the reverse process control, which should have an order of complexity comparable to a score network. It seems to me that this object would be non-linear even in the easiest examples possible. Hence the question: are there some relevant examples (even simple) where this assumption holds? 3. At times it is very unclear whether the algorithms are introduced within this work (e.g., the policy gradient one), or if this work aims only to perform theoretical analysis on them. Please clarify. Not presenting experimental evaluations renders it impossible to evaluate the practical relevance of the proposed algorithmic ideas. Hence I have to evaluate only the theoretical contributions. Other Comments Or Suggestions: 1. Line 434, 'replies' should be 'relies', I guess. 2. Line 397, 'the' is repeated twice. Questions For Authors: Please answer clearly regarding all 3 points mentioned within the weaknesses section. I am very open to changing my vote/decision based on these answers. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed feedback. We are glad to receive your positive feedback on our theoretical contributions and that you found our placement of the work clear and honest. Below please find our response to your questions. ``` It is not clear to me if the policy iteration algorithm presented is supposed to work only on discrete-spaces, or, more precisely, how the control $u$ is represented within that algorithm. What settings does this algorithm actually cover? I am asking this since Sec. 4 seems to extend the control representation to be parametrized, but in doing so it seems to propose another algorithm. Did I misinterpret something? ``` While we consider a **discrete-time** model, the PI-FT algorithm is formulated on continuous state and action spaces, with both state and control taking values in $\mathbb{R}^d$. This setting captures the essence of fine-tuning and indeed sets us apart from the usual RL literature. Specifically, we focus on Markovian control policies $u_t: \mathbb{R}^d \to \mathbb{R}^d$, and our algorithm iteratively updates $u_t(y)$ for each $y \in \mathbb{R}^d$. Section 4 introduces a linear parametrization of the control policy to enable an efficient implementation in practice. This parameterized setting should be viewed as a special version of the PI-FT algorithm, rather than a separate algorithm. Moreover, we believe the theoretical results regarding regularity and convergence established in Section 3 apply to this parameterized setting, and a roadmap is provided in Section 4. Please let us know if this explanation clarifies your concerns. ``` Sec. 4 seems to propose an algorithm for continuous spaces where the control is represented via a linear parametrization. While a linear approximation would make sense to represent the forward process drift, this seems an excessive approximation for the reverse process control, which should have an order of complexity comparable to a score network. 
It seems to me that this object would be non-linear even in the easiest examples possible. Hence the question: are there some relevant examples (even simple) where this assumption holds? ``` We refer to the parameterization in Section 4 as linear in the sense that the control policy is a linear combination, through parameter $K_t$, of (nonlinear) basis functions. Hence the resulting control can be possibly nonlinear in the state variable $y$. This is a flexible, powerful, and tractable parameterization often used in control and RL, where depending on the basis, it can have high expressivity. In particular, our parameterization includes a wide class of score approximators such as random features, kernel methods and even overparameterized neural networks under the NTK regime. Hence, despite being linear in parameter $K_t$, the policy class can be sufficiently rich to approximate highly nonlinear score functions. Moreover, our experimental results (which we discuss below) suggest our principled framework remains valid for generic neural networks. We will clarify these points in our revision. ``` At times it is very unclear whether the algorithms are introduced within this work (e.g., the policy gradient one), or if this work aims to only perform theoretical analysis on them. Please clarify. ``` Apologies for the confusion and thank you for raising this point. To clarify, both the policy iteration method (PI-FT) and its policy gradient variant are new and proposed in this work. Existing methods such as DPOK and DDPO rely on generic RL developments such as PPO or REINFORCE and fail to fully leverage the fine-tuning structure. On the contrary, the specific setting considered, with linear dynamics and entropy-regularization, is tailored towards the development of efficient fine-tuning diffusion models. For this reason, the PI-FT algorithm and its parametric extension directly computes the policy gradient of a KL-regularized control objective. 
This principled design leads to a more efficient implementation in practice compared to prior works; see our experiments below. We will clarify all these points in our final revision. In particular, we will specifically highlight that both algorithms are proposed in this work and are our contribution. ``` Not presenting experimental evaluations renders impossible to evaluate the practical relevance of proposed algorithmic ideas. ``` We included a new set of thorough experiments in this rebuttal. Please see link https://anonymous.4open.science/r/ICML-rebuttal-741C/figures.pdf for the results and see response to Reviewer 1H7D for detailed explanations on the experiments. ``` Line 434, 'replies' should be 'relies', I guess. Line 397, 'the' is repeated twice. ``` Thank you for pointing out these typos. We will fix them in the revised manuscript. Finally, we would like to thank you for your constructive feedback. We hope our response answers your questions. If so, we would greatly appreciate it if you could reflect it in your evaluation of our paper. --- Rebuttal Comment 1.1: Comment: While the rebuttal clarifies some aspects regarding the theory, others are still unclear to me, in particular: 1) How should I interpret the sentence "This parameterized setting should be viewed as a special version of the PI-FT algorithm, rather than a separate algorithm"? It seems to me that this setting is formally tackled with another algorithm in Sec. 4 (namely the PG scheme). What am I missing? 2) While I understand in general the idea of learning a complex state representation and then considering a linear dynamics for the sake of analysis, in the case of diffusion modeling this seems to me particularly controversial as to my understanding they rely on the idea of defining a very complex vector field on the original space in order to implicitly represent complex distributions on the same space. This seems the case also when the diffusion process acts on a learned latent space. 
While I would understand considering a linear or kernelized approximation of the score/dynamics for algorithmic sake, I am not sure how much this is a fair abstraction for theoretical understanding in this context. Why would it be? Moreover, my main concern regarding this work is the following: I would understand building a theoretical analysis for practically successful methods, but the algorithms introduced here are novel and therefore would require experimental comparison with relevant methods. In the context of entropy-regularized control for fine-tuning, there are already several works with successful algorithms based on continuous-time control, e.g., [1,2] among others. Hence the question: why would DPOK, which does not seem to rely on the classic duality for KL-regularized control/RL, be a meaningful baseline compared to these algorithms that rely on entropy/KL-regularized control? [1] Uehara et al., Feedback efficient online fine-tuning of diffusion models. ICML 2024. [2] Domingo-Enrich et al., Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control. ICLR 2025. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the constructive feedback. Below is our response to your concerns. ``` "This parameterized setting should be viewed as a special version of the PI-FT, rather than a separate algorithm"? … this setting is formally tackled with another algorithm in Sec. 4 (namely the PG scheme). ``` Apologies for the confusion and thank you for the opportunity to clarify. The **parameterized setting** in our earlier response refers to the PG scheme in Section 4. We would like to emphasize that Section 4 does not introduce a new algorithm but rather presents a practically implementable **realization/actualization** of the PI-FT algorithm developed in Section 3. 
This realization is closely connected to the PI-FT on two fronts: First, the PG scheme implements the key ideas of PI-FT in a computationally tractable manner by restricting the control policy to a parameterized function class, especially for high-dimensional problems. Second, the convergence of PI-FT serves as the essential building block of the convergence of the (parameterized) PG scheme. In particular, the regularity of the optimal value function and its preservation during training are crucial for the convergence analysis. For this reason, we first derive the suite of results for the PI-FT algorithm in Section 3 and then introduce the PG scheme in Section 4. Building on the results derived in Section 3, we are nearly equipped with the theoretical results for the PG algorithm, with Section 4 providing a clear roadmap for the remaining (straightforward) steps. We will clarify this point in our revision. Please let us know if this explanation clarifies your concerns. ``` …a linear dynamics for the sake of analysis,...diffusion process acts on a learned latent space…a linear or kernelized approximation…a fair abstraction… ``` Thank you for the opportunity to clarify two types of linearity in our work. First, the linearity of the dynamics is not introduced for analytical convenience, but as a natural consequence of the DDPM backward process, which has been widely used in practice. In the pre-trained model, the DDPM sampling dynamics presented in Eq. (1) of our manuscript are generally non-linear in the state $Y_t^{\rm pre}$ due to the presence of the learned score function $s_t^{\rm pre}$. However, once the pre-trained score is replaced by a control variable, the resulting sampling dynamics become **linear in both state and control** with additive Gaussian noise (while the **control can still be a nonlinear function of the state**). This hidden **linear structure** is the key insight from practical fine-tuning applications that enables our effective control formulation. 
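For illustration, this linear structure can be sketched in a minimal simulation. Note that the coefficients `a`, `b`, `sigma` and the toy control below are illustrative placeholders, not the paper's actual DDPM schedule or fine-tuned policy:

```python
import numpy as np

# Hedged sketch: once the pre-trained score is replaced by a control u_t, a
# discrete-time backward step is linear in the state y and in the control
# value u_t(y), with additive Gaussian noise. All coefficients are toy values.
rng = np.random.default_rng(0)
d, T = 2, 10                       # state dimension, number of steps
a = np.linspace(1.0, 1.05, T)      # per-step state coefficient (illustrative)
b = np.full(T, 0.1)                # per-step control coefficient (illustrative)
sigma = np.full(T, 0.05)           # per-step noise scale (illustrative)

def control(t, y):
    # u_t may be an arbitrary (nonlinear) function of the state; the
    # dynamics below nonetheless remain linear in y and in u_t(y).
    return -np.tanh(y)

y = rng.normal(size=d)
for t in range(T):
    y = a[t] * y + b[t] * control(t, y) + sigma[t] * rng.normal(size=d)
print(y.shape)  # (2,)
```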
Although the idea of treating score as an action has appeared in prior work, we are the first to leverage this linear structure to design an efficient algorithm and establish convergence guarantees in the context of fine-tuning diffusion models. Empirical results further validate the efficiency and practicality of our framework. Moreover, the use of linear parameterization for the control policy is motivated by two considerations: First, the linear policy (in parameters) has been widely used in the RL and control literature. As mentioned in our earlier response, while the policy is linear in parameters, it remains nonlinear in the state due to expressive feature mappings, enabling it to capture a broad class of complex score functions. Second, the control policy can also be parameterized using (an overparameterized) neural network, which behaves similarly to a linear model in the NTK regime [3]. We are confident that the analysis can be carried through under such neural network parameterization as well. We will clarify these points in our revision. ``` … experimental comparison…e.g., [1,2] among others. Hence the question: why DPOK, which does not seem to rely on the classic duality for KL-regularized control/RL… ``` We would like to clarify that DPOK [4] **does** have KL regularization; see Sections 4.1 and 5.3 of [4]. Moreover, our work is based on **discrete-time** DDPM dynamics, which are natural settings in practical implementations of diffusion models. While prior works such as [1,2] study KL-regularized control problems in continuous time, their formulations and the additional discretizations may not align with the actual diffusion sampling processes, making it difficult to have a fair comparison to our setting. In contrast, DPOK studies the same discrete-time framework, which is a more suitable empirical baseline for our experiments. We will also include additional experiments in the revision. We hope our response addresses your additional concerns. 
If so, we would greatly appreciate it if you could reflect it in your evaluation of our paper. [3] Han, Y. et al. Neural Network-Based Score Estimation in Diffusion Models: Optimization and Generalization. ICLR 2024. [4] Fan, Y. et al. DPOK: Reinforcement learning for fine-tuning text-to-image diffusion models. NeurIPS 2023.
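To make the "linear in parameters, nonlinear in state" point from the exchange above concrete, here is a minimal sketch assuming a random-Fourier-feature basis; the names (`phi`, `K`), the toy target control, and all sizes are illustrative, not taken from the paper:

```python
import numpy as np

# Hedged sketch: a policy u(y) = K^T phi(y) that is linear in the parameter K
# but nonlinear in the state y, via random Fourier features. The "target"
# below is a toy stand-in for a nonlinear score function.
rng = np.random.default_rng(0)
d, m = 2, 200                        # state dimension, number of basis functions
W = rng.normal(size=(m, d))          # random feature frequencies
b = rng.uniform(0.0, 2 * np.pi, m)   # random phases

def phi(y):
    """Nonlinear basis functions of the state y."""
    return np.sqrt(2.0 / m) * np.cos(W @ y + b)

target = lambda y: np.array([np.sin(y[0]) * y[1], np.tanh(y[0] - y[1])])
Y = rng.normal(size=(500, d))                  # sampled states
Phi = np.stack([phi(y) for y in Y])            # (500, m) feature matrix
U = np.stack([target(y) for y in Y])           # (500, d) target controls
K, *_ = np.linalg.lstsq(Phi, U, rcond=None)    # fit K by least squares

mse = float(np.mean((Phi @ K - U) ** 2))
print(f"training MSE of the linear-in-K policy: {mse:.4f}")
```

Although the fit is a plain least-squares problem in `K`, the resulting policy is a highly nonlinear function of the state, which is the sense in which a linear-in-parameter class can represent nonlinear controls.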
Spectral-Aware Reservoir Computing for Fast and Accurate Time Series Classification
Accept (poster)
Summary: This paper introduces a novel method for time series classification. The approach begins by decomposing the time series into multiple prominent frequencies, each representing different cyclic patterns in the data. For each extracted frequency, a reservoir computing model—called FreqRes—is applied to generate complex hidden states. These hidden states are then used to produce individual readouts, which are subsequently concatenated to form a final representation for classification. Experiments on the UCR dataset demonstrate that this method is effective and outperforms competing approaches. Claims And Evidence: **Key Claims and Comments** - *Incorporating Spectral Information into Echo State Neural Networks:* Ablation studies in Section 5.3 (particularly Figure 8a) demonstrate that integrating spectral insights into BiESN enhances performance across most datasets, providing strong evidence of its effectiveness. However, it would be valuable to see similar analyses for other architectures, such as ESN, LeakyESN, DeepESN, and LSTM/GRU, to determine whether spectral insights benefit all models. - *Computational Efficiency and Performance:* Figure 5 presents a computational cost analysis, highlighting the method’s low computational overhead alongside a time complexity evaluation. Additionally, comparisons of F1-score and accuracy against multiple competitors show a performance improvement, reinforcing the effectiveness of the approach. Methods And Evaluation Criteria: *Evaluation Criteria:* The authors evaluate their method based on computational cost and several widely accepted metrics in the community, including F1-score, accuracy, Win/Tie/Loss analysis, and statistical significance testing using p-values. Additionally, the benchmarks used are standard, ensuring a fair comparison. However, a limitation is that the method only applies to univariate time series, whereas the main challenges often lie in the multivariate setting. 
*Proposed Method:* Incorporating spectral insights into time series analysis is a logical approach, as frequency-based representations are a well-established and widely used method for capturing time series patterns. Theoretical Claims: 1- No theoretical claims in the paper. Experimental Designs Or Analyses: The experimental design is well-executed, with a comprehensive set of benchmarks (128 UCR datasets) used for comparison. The chosen metrics are appropriate, and p-values are computed to assess statistical significance. However, the authors should place greater emphasis on evaluating the impact of spectral insights—the main contribution of the paper. Specifically, they should demonstrate that these insights not only enhance performance beyond BiESN but also apply to other model variants. Supplementary Material: I went roughly through the supplementary material. Relation To Broader Scientific Literature: To the best of my knowledge, this is the first work to incorporate spectral insights into Echo State Neural Networks (ESNs), despite the existence of several studies leveraging frequency information for classification. In the specific context of ESNs, this approach appears to be the first in this direction, making it the key innovation of the paper Essential References Not Discussed: No essential references that I know. Other Strengths And Weaknesses: **Strengths:** - Extensive experimental studies - Well-written article with a clear motivation **Weaknesses:** - The method is limited to univariate time series classification. - A direct comparison should be made between applying classifiers (e.g., Ridge, Random Forest) directly to the extracted frequency components and using reservoir computing. This would clarify whether the reservoir computing truly adds value, especially given that the reservoir size does not appear to be a key factor. 
If the reservoir computing is not significantly improving performance, it may be unnecessary, as a direct classification approach could be more computationally efficient. Other Comments Or Suggestions: 1- Need for incorporating baselines that apply classifiers directly to the concatenation of raw features extracted from each frequency. 2- Need to comment on multivariate extensions **After rebuttal** I increase my score to 4 given the authors' answers Questions For Authors: 1- Could the authors explain, at least experimentally, the need for reservoir computing? 2- Can the authors discuss the extension to multivariate time series classification? 3- Could the authors run some experiments regarding the number of frequencies extracted? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for your time and recognition. Detailed responses to the raised concerns are listed below. We agree with the suggestions mostly and will clarify those details. **1. Impact of Spectral Insights across Different Models** The table below provides a comparison of whether Spectral Insights (SI) are incorporated across various base models (in terms of accuracy). Specifically, Row 4 reports the win/tie/loss counts of "w/ SI" version against "w/o SI", and the p-values between them are from Wilcoxon tests. Results demonstrate that incorporating SI achieves average accuracy improvements ranging from 13.86\% to 18.97\% across all models. Statistics further confirm the significance of these improvements. This validates the consistent effectiveness of our approach across a broader range of models. --- | | ESN | LeakyESN | BiESN | DeepESN | LSTM | GRU | |--|--|--|--|--|--|--| | Avg. Acc. w/o SI | 0.6456 | 0.7030 | 0.7055 | 0.6752 | 0.6212 | 0.6295 | | Avg. Acc. w/ SI | 0.8353 | 0.8416 | 0.8508 | 0.8394 | 0.7896 | 0.7926 | | Wins/Ties/Losses | 122/1/5 | 117/0/11 | 119/3/6 | 117/2/9 | 115/0/13 | 114/0/14 | | P-value | 6.44E-22 | 3.79E-20 | 1.58E-21 | 1.05E-20 | 5.05E-20 | 2.18E-20 | --- **2. The Need for Reservoir Computing** As Eq. (6) shows, the FreqRes modules iterate on raw time series and implicitly utilize frequency information. Thus, an explicit frequency component for direct classification could be unavailable. In response to the reviewer's expectation, the following features in our experiments enable comparison and demonstrate the contribution of reservoir computing (RC): - $O$: Original time series data - $F_1$: For each $f \in \mathcal{F}$ and $p = [L/f]$, compute a differenced series of the raw sequence and concatenate statistical features, including mean absolute value, standard deviation, kurtosis, zero-crossing rate, and first-order autocorrelation coefficient. 
- $F_2$: For each $f \in \mathcal{F}$, extract frequency band components (centered at $f$, 10% relative bandwidth) and concatenate features, including relative band energy ratio alongside the above five statistical features. - $F_3$: Feature vectors composed of FFT amplitude values at each $f \in \mathcal{F}$. The average accuracies using Ridge and Random Forest classifiers with these features are presented below. This comparison highlights the necessity of RC and FreqRes modules, as they achieve significantly better performance by capturing temporal dependencies at multi-scale frequencies. --- | | $O$ | $F_1$ | $F_2$ | $F_3$ | $O+F_1$ | $O+F_2$ | $O+F_3$ | $O+F_{1,2,3}$ | SARC | |--|--|--|--|--|--|--|--|--|--| | Ridge | 0.6475 | 0.7112 | 0.6322 | 0.6178 | 0.7270 | 0.6961 | 0.7048 | 0.7422 | 0.8508 | | RF | 0.7209 | 0.7281 | 0.6920 | 0.6610 | 0.7712 | 0.7680 | 0.7430 | 0.7894 | 0.8391| --- **3. Extension to Multivariate TSC** The proposed SARC is currently adapted to multivariate scenarios in a straightforward manner: it extracts required frequencies from the mean series across variables, then applies FreqRes modules to iterate on the raw multivariate series. This approach proves effective, as evidenced by the submitted supplementary materials, which contain a demo achieving ~98% accuracy on the "ArticularyWordRecognition" dataset from the UEA archive. Another feasible extension involves separately extracting frequencies and modeling patterns for each individual variable before concatenating the derived features. While preserving variable-specific information, it would require more sophisticated implementations to ensure parallelization efficiency. Moreover, inter-variable interactions are unaddressed in this design. Further improvements could focus on the modeling of inter-variable relationships during feature extraction, and we plan to explore this in future studies. **4. 
Impact of Frequency Quantity** As described in Section 4.1.2, the extracted frequency set $\mathcal{F}$ consists of several "root" frequencies and their sub-harmonics. Figure 7(c) in the main paper has visualized the impact of threshold $\kappa$ (which controls the number of sub-harmonics) on average accuracy across 128 datasets. Here, we offer an analysis regarding the impact of "root" frequency quantity. The table below shows SARC's average accuracy when selecting and extending only the top-$\alpha$ smallest "root" frequencies. Similar to the trend with $\kappa$, SARC achieves better performance as $\alpha$ increases (i.e., more frequencies are included). Notably, the number of extracted "root" frequencies is less than 10 for most datasets, resulting in identical accuracy when $\alpha=9$ and $10$ on these datasets. This explains why the average accuracy growth gradually plateaus when $\alpha$ approaches 10. --- | $\alpha=$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | |--|--|--|--|--|--|--|--|--|--|--| | Avg. Acc.| 0.6863 | 0.7641 | 0.7937 | 0.8140 | 0.8279 | 0.8373 | 0.8454 | 0.8503 | 0.8507 | 0.8508 | --- --- Rebuttal Comment 1.1: Comment: Thank you to the authors for providing a detailed response to my comments. I am satisfied with their explanations, and I find their arguments convincing in the univariate case. However, I would encourage the authors to explore the multivariate case further and explain how the extension could be implemented in the paper. Additionally, benchmarking against UEA datasets would enhance the impact of their work. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer for the acknowledgment and valuable suggestion. We will further explore the multivariate case in depth and provide a detailed explanation of its implementation in the paper. We will also make efforts to conduct experiments on UEA datasets to assess the feasibility of the extension. Thanks again for the reviewer's time and helpful comments!
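As a rough illustration of the "root" frequency plus sub-harmonic scheme discussed in the rebuttal above, the following sketch picks the top-$\alpha$ most prominent frequencies from an FFT amplitude spectrum and expands them with sub-harmonics; the paper's actual selection rule (and the role of $\kappa$) is only loosely mirrored, so treat this as a simplified stand-in:

```python
import numpy as np

# Hedged sketch: extract prominent "root" frequencies (in cycles per series)
# from the FFT amplitude spectrum of a series, then add a few sub-harmonics.
L = 256
t = np.arange(L)
# Toy series with two exact cyclic components: 8 and 20 cycles per series.
x = np.sin(2 * np.pi * 8 * t / L) + 0.5 * np.sin(2 * np.pi * 20 * t / L)

amp = np.abs(np.fft.rfft(x))[1:]          # amplitude spectrum, DC dropped
freqs = np.arange(1, len(amp) + 1)        # bin k <-> k cycles per series
alpha = 2
roots = sorted(freqs[np.argsort(amp)[-alpha:]].tolist())
# Sub-harmonics f/2, f/4 of each root, kept if at least one full cycle fits
# (a kappa-like expansion, here hard-coded to two halvings).
F = sorted({f // 2 ** k for f in roots for k in range(3) if f // 2 ** k >= 1})
print(roots)  # [8, 20]
print(F)      # [2, 4, 5, 8, 10, 20]
```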
Summary: Typical reservoir computing (RC) considers recursive updates from adjacent states and has difficulty handling long-term dependencies. To address this issue, this paper proposes a Spectrum-Aware Reservoir Computing framework (SARC) that incorporates spectral insights to enhance long-term dependency modeling. Prominent frequencies are first extracted to reveal explicit or implicit periodic patterns. For each prominent frequency, SARC further integrates a frequency-informed reservoir network (FreqRes) to fully capture the sequential and periodic dynamics, thereby deriving effective dynamic features. By synthesizing these features at different frequencies, SARC provides a multi-scale analysis of temporal dynamics and improves the modeling of long-term dependencies. Extensive experiments on public datasets demonstrate that SARC achieves state-of-the-art results. Claims And Evidence: The claim that SARC provides multiscale analysis of temporal dynamics and improves modeling of long-range dependencies is supported by a large body of experimental evidence, including experimental results on multiple datasets. However, the theoretical analysis and description of the SARC framework could be clearer, especially in how long-term dependencies are captured. Methods And Evaluation Criteria: The proposed SARC framework is suitable for handling long-term dependencies in RC. Besides, the comparison of SARC with existing RC methods helps confirm the effectiveness of SARC. Theoretical Claims: The theoretical claims about incorporating spectral insights to enhance the modeling of long-term dependencies are well-founded. However, the mathematical formulation of the frequency-informed reservoir network (FreqRes) could be more explicitly defined, especially regarding the convergence properties of its cyclical connections. Experimental Designs Or Analyses: The experiments are well designed and the benchmarks are clear. 
The comparison with eight state-of-the-art baselines and the use of statistical significance tests such as the Wilcoxon Signed-Rank Test strengthen the validity of the results. Supplementary Material: The supplementary material is reviewed and includes useful appendices providing implementation details, complexity analysis, and more experimental results. These sections provide valuable insights into the experimental setup and methodology. Relation To Broader Scientific Literature: The paper positions itself within the broader context of time series classification and reservoir computing, comparing the Spectral-Aware Reservoir Computing (SARC) framework to traditional reservoir computing models and deep learning approaches. It highlights the advantages of integrating spectral analysis for improved long-term dependency modeling. However, a more comparison with other spectral-aware approaches would be appreciated. Essential References Not Discussed: The paper cites key references, but would benefit from more discussion on the connection to other spectral analysis techniques in time series modeling, particularly those utilizing advanced wavelet transforms and adaptive frequency decomposition methods. Other Strengths And Weaknesses: Strengths:
 1. This paper conducts comprehensive experiments on 128 UCR datasets to demonstrate the effectiveness and efficiency of the proposed SARC framework. Comparisons with multiple state-of-the-art methods further strengthen its empirical contribution.
 2. This paper clearly points out the limitations of existing reservoir computing methods in dealing with long-term dependencies and periodic patterns. The introduction of spectrum-aware modeling makes a meaningful contribution to time series classification. Weaknesses:
 1. Although this paper introduces the FreqRes component to enhance long-term dependency modeling, its mathematical formulation lacks sufficient clarity, especially in terms of theoretical guarantees. 2. This paper does not fully compare its spectrum-aware approach with other frequency-domain methods. A more in-depth discussion will help position SARC in the broader field of spectrum-based machine learning techniques. Other Comments Or Suggestions: 1. Provide a clearer definition of the frequency-informed reservoir network (FreqRes), especially its theoretical properties, stability, and impact on modeling long-term dependencies. 2. Some hyperparameter choices (e.g., spectral radius, input scaling) appear to be empirically driven. A more systematic sensitivity analysis would enhance experimental validity. Questions For Authors: 1. Could you provide a more formal and detailed mathematical explanation of the frequency-informed reservoir network (FreqRes)? A clearer formulation would help strengthen the theoretical justification of the proposed method. 2. How does SARC compare to other frequency-aware time series models, such as adaptive Fourier-based methods or alternative wavelet-based approaches? A more detailed discussion could better position SARC within the broader landscape of spectral-based time series classification. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s efforts and recognition. Our responses to the raised concerns are listed below: **1. Mathematical Explanation of FreqRes** We formally define a general FreqRes, and then give some theoretical claims for ESN-based FreqRes, as described in Eq. (6). **(1) Definition of FreqRes** Let the following recursive process define a general RC model: $$ \begin{aligned} \mathbf{\hat{x}}(n) &= \mathbf{W} \mathbf{x}(n-1), \\\\ \mathbf{x}(n) &= \sigma\left(\mathbf{\hat{x}}(n), \mathbf{u}(n)\right), \end{aligned} $$ where $\mathbf{x}(n)$ is the hidden state; $\mathbf{u}(n)$ is the input; $ \mathbf{W}$ is the reservoir weight matrix; and $ \sigma(\cdot) $ is the activation. The corresponding FreqRes with a frequency $f$ redefines its recursive process as: $$ \begin{aligned} \mathbf{\hat{x}}(n) &= \mathbf{W}\_{\text{s}} \mathbf{x}(n-1) + \mathbf{W}\_{\text{c}} \mathbf{x}(n-p), \\\\ \mathbf{x}(n) &= \sigma\left(\mathbf{\hat{x}}(n), \mathbf{u}(n)\right). \end{aligned} $$ Here, $L$ is series length ; $p = [L/f]$ is cycle length; $ \mathbf{W}\_{\text{s}}$ and $\mathbf{W}\_{\text{c}}$ are weight matrices modulating sequential and cyclical dependencies. **(2) Lipschitz Continuity Analysis** To analyze state stability, we treat the input influence $\mathbf{W}\_{\text{in}} \mathbf{u}(n)$ as fixed. **Claim 1:** The Lipschitz constant of state update function for the ESN-based FreqRes is bounded by $\sqrt{ \Vert\mathbf{W}\_{\text{s}}\Vert_2^2 + \Vert\mathbf{W}\_{\text{c}}\Vert_2^2 }$. Thus, $\Vert\mathbf{W}\_{\text{s}}\Vert_2^2 + \Vert\mathbf{W}\_{\text{c}}\Vert_2^2 < 1$ ensures a contraction mapping and guarantees state stability. 
Proof sketch: Noting that $\tanh$ is 1-Lipschitz and applying the Cauchy-Schwarz inequality: $$ \begin{align} \Vert \mathbf{A} \mathbf{x}\_{1} + \mathbf{B} \mathbf{x}\_{2} - \mathbf{A} \mathbf{x}\_{1}' - \mathbf{B} \mathbf{x}\_{2}'\Vert\_{2} & \leq \Vert \mathbf{A}\Vert\_{2} \Vert(\mathbf{x}\_{1} - \mathbf{x}\_{1}')\Vert\_{2} + \Vert\mathbf{B}\Vert\_{2} \Vert(\mathbf{x}\_{2} - \mathbf{x}\_{2}')\Vert\_{2}\\\\ & \leq \sqrt{\Vert \mathbf{A}\Vert\_{2}^{2} + \Vert \mathbf{B}\Vert\_{2}^{2}}\sqrt{\Vert\mathbf{x}\_{1} - \mathbf{x}\_{1}'\Vert\_{2}^{2} + \Vert\mathbf{x}\_{2} - \mathbf{x}\_{2}'\Vert\_{2}^{2})} \end{align} $$ **(3) Discussion on Echo State Property** **Claim 2:** Given $p = [L/f]$, an ESN-based FreqRes with reservoir size $s$ satisfies the ESP $\iff$ a specially structured ESN with reservoir size $ps$ does. Proof sketch: Let $\mathbf{U}(n) = [\mathbf{u}(n); \mathbf{u}(n-1); \dots; \mathbf{u}(n-p+1)]$, and $\mathbf{X}(n) = [\mathbf{x}(n); \dots; \mathbf{x}(n-p+1)]$. The state update could be rewritten as: $$\mathbf{X}(n) = \sigma\left(\begin{bmatrix} \mathbf{W}\_{\text{s}} & \mathbf{0} & \mathbf{W}\_{\text{c}} \\\\ \mathbf{I}\_{s} & \mathbf{0} & \mathbf{0}\\\\ \mathbf{0} & \mathbf{I}\_{(p-2)s} & \mathbf{0}\\\\ \end{bmatrix}\mathbf{X}(n-1) + \begin{bmatrix} \mathbf{W}\_{\text{in}} & & \\\\ & \mathbf{0}\_{s} & \\\\ & & \mathbf{0}\_{(p-2)s}\\\\ \end{bmatrix}\mathbf{U}(n)\right)$$ Here, $\sigma$ is 1-Lipschitz, applying $\tanh$ to the first $s$-dimensional part and identity function to the rest, which aligns with the common assumptions used in ESP discussions. Thus, existing theories can be applied to derive constraints on $\mathbf{W}\_{\text{s}}$ and $\mathbf{W}\_{\text{c}}$. **(4) Hyperparameter Analysis** Due to space constraints, we briefly discuss spectral radius (SR) and input scaling (IS) choices. Based on the above analysis, restricting SR $< 1$ is a necessary condition for state stability, justifying the range selection in Fig. 8b. For IS, Fig. 
7a shows that overly small values markedly weaken input influence and degrade performance. With further validation, we find that excessively large IS also reduces performance, likely because states activated by $\tanh$ saturate at $\pm 1$, as weights and historical states remain bounded. **2. Comparison with Spectral-Based Methods** We compare SARC with Fourier- or wavelet-based methods in accuracy using results from their original papers. The table below shows SARC's consistent outperformance in pairwise comparisons, highlighting its competitiveness in spectrum-based TSC. These methods include: - **AMSWR('21):** Learnable multi-scale wavelet decomposition via adaptive CNNs. DOI: 10.3390/info12060252 - **SFCC('22):** Stratified frequency recombination for data augmentation to enhance ResNet. DOI: 10.1007/s11063-022-10965-9 - **TF-Net('22):** Two-stage fusion of ResNets trained on raw series and wavelet representations. DOI: 10.1007/s10489-022-03485-5 - **CoInception('24):** Noise-resilient wavelet contrastive views with cross-view alignment. DOI: 10.1109/ICDM59182.2024.00041 --- | | AMSWR| SFCC|TF-Net|CoIncep.| |--|--|--|--|--| |#Dataset|85|128|85|128| |Avg. Acc. Gain $\uparrow$|2.15%|1.8%|0.53%|0.82%| |Win/Tie/Loss|60/4/21|76/6/46|43/3/39|71/6/51| |P-value|1.17E-05|3.61E-03|6.01E-01|5.75E-02| ---
Summary: The paper enhances reservoir computing (RC) for TSC by incorporating spectral insights. It extracts prominent frequencies to identify cyclical patterns and refines the RC module to capture cyclical dynamics. Features from next-step prediction tasks are used for classification. Extensive experiments on the UCR 128 archive show state-of-the-art performance with superior efficiency, highlighting the potential of RC in time series modeling. Claims And Evidence: The paper is rigorous and main claims are well-supported: 1) Superior performance is shown by comparing with SOTAs across full UCR benchmarks. 2) High efficiency is validated via time and complexity analysis. 3) Ablation studies confirm the effectiveness of spectral insights and adaptability to various RC models. 4) Other minor design and claims are supported by reasonable explanations and experiments. Methods And Evaluation Criteria: The proposed method and evaluation criteria are suitable for TSC. Combining spectral analysis with RC is novel and addresses long-term dependencies effectively. The refined RC module, along with specially designed features, enables fast and accurate classification. The UCR 128 archive and strong baselines like MiniRocket and COTE ensure solid validation. Theoretical Claims: Time complexity analysis appears correct based on appendix proofs, showing linear complexity with respect to sample size and length, supporting the efficiency claim. Experimental Designs Or Analyses: The experimental design is sound and well-structured, with rich datasets and comprehensive baselines. Diverse metric indicators allow convincing comparisons. Additionally, extensive ablation studies and hyperparameter analyses support the claims, and the provided code aids reproducibility. Only a minor clarification is needed on data preparation (see Questions). Supplementary Material: The appendix includes necessary details, results, and analyses that enhance transparency. 
The provided code is also well-documented and demonstrates strong performance. Relation To Broader Scientific Literature: RC techniques have been applied to TSC, but often with conventional structures. The paper advances RC by integrating spectral insights, aligning with trends in time series modeling, such as multi-scale and periodicity analysis. This design improves accuracy while maintaining high efficiency. Notably, the proposed method offers a generalizable extension to various RC models, representing a significant advancement in the field. Essential References Not Discussed: Essential works in both the RC and TSC fields are discussed. Other Strengths And Weaknesses: Besides the above, there are two notable strengths: 1) The derivation of dynamic features is interesting, using the predictive model as a sequence representation. This idea could inspire further cross-task research. 2) The writing is a strong point with high readability. Motivation is well-justified, and technical details are clear. Each step is explained with its purpose, ensuring rigor and logical flow. The paper looks promising in many aspects, and I found no obvious weaknesses. Other Comments Or Suggestions: I have no other comments. Questions For Authors: Data preparation: Some UCR datasets have variable lengths or NaN values. They can greatly affect the performance of RC-based and recurrent models. How were they processed in the experiments? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are thankful for the reviewer’s recognition of the work. The processing of datasets with variable lengths or NaN values is clarified as follows: - For middle NaN values (i.e., real values exist on both sides of the NaN), we employ interpolation for imputation. - Then, we shift real values in each time series sample to the rightmost positions of the tensor. - Finally, the remaining leading NaNs are filled with zeros. Shifting real values to the right is critical due to the recurrent nature of FreqRes. Specifically, this ensures that the initial hidden states start as zeros and transition to normal iteration when real values appear. In addition, the leading zero states do not affect subsequent ridge regression computations.
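These three steps can be sketched in NumPy as follows (the function name and toy series are illustrative, not taken from the released code):

```python
import numpy as np

def right_align(series, target_len):
    """1) interpolate interior NaNs, 2) shift real values to the rightmost
    positions, 3) fill the remaining leading positions with zeros."""
    x = np.asarray(series, dtype=float)
    valid = ~np.isnan(x)
    first = int(np.argmax(valid))
    last = len(x) - 1 - int(np.argmax(valid[::-1]))
    seg = x[first:last + 1]                   # span between first/last real values
    idx = np.arange(first, last + 1)
    nan_in = np.isnan(seg)
    seg[nan_in] = np.interp(idx[nan_in], idx[~nan_in], seg[~nan_in])
    out = np.zeros(target_len)                # leading zeros for the recurrent model
    out[target_len - len(seg):] = seg
    return out

print(right_align([np.nan, 1.0, np.nan, 3.0, 4.0], 7))
# -> [0. 0. 0. 1. 2. 3. 4.]
```

The middle NaN is linearly interpolated to 2.0, and the real-valued segment ends up flush against the right edge so the reservoir's zero initial states precede all real values.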
Summary: The paper proposes SARC, a novel framework that combines spectral analysis with RNN-based models for time series classification. SARC identifies prominent frequencies and captures corresponding temporal dynamics. The framework analyzes time series on multiple scales, achieving state-of-the-art accuracy on the full UCR archive and showing exceptional speed. Claims And Evidence: Main claims of the paper, including the listed contributions, complexity analysis, and design tricks, are supported by reasonable evidence. Methods And Evaluation Criteria: The proposed method uses a novel reservoir-based RNN network to analyze cyclical patterns at multiple scales, achieving high accuracy and very fast speed for TSC. The utilized UCR datasets provide diverse characteristics, and comparisons include the highly efficient MiniRocket. Statistical tests (Wilcoxon Test, CD diagrams) further enhance evaluation reliability. Theoretical Claims: The theoretical claims, involving complexity analysis, are proven correct. Experimental Designs Or Analyses: The experimental design is targeted and thorough, supporting main claims and showcasing the proposed method's effectiveness. It compares several relevant reservoir computing methods and state-of-the-art approaches. The analyses are detailed, evaluating accuracy, F1-score, and time across multiple dimensions. Supplementary Material: The appendix provides additional information and enhances experimental completeness. The code runs successfully, verifying the reliability of accuracy. Notably, it includes a multivariate dataset from the UEA archive, also showing strong performance. Relation To Broader Scientific Literature: The paper adopts a multi-periodicity perspective to analyze time series, which may be influenced by prior work such as [1]. The difference is that it employs an ensemble of reservoir-based models to capture multi-scale temporal dynamics. 
This architecture is rare in the current literature and could offer new perspectives for time series modeling. [1] Wu et al. "TimesNet: Temporal 2D-variation modeling for general time series analysis." ICLR 2023. Essential References Not Discussed: I believe the current references are sufficient. Given the paper's emphasis on efficiency, excluding some large models like RDST and HC2 is acceptable. Other Strengths And Weaknesses: Strengths: 1. The algorithm's lightweight nature is a key highlight, especially for architectures comprising multiple recurrent structures. This encourages further exploration of reservoir computing. 2. Technical details are clearly presented, making the reservoir model's properties easy to grasp. 3. The proposed dynamic features break classification into forecasting + classification, suggesting a new pathway to leveraging forecasting models for classification. 4. The architecture emphasizes accurate representation of time series, showing potential for broader applications like forecasting and imputation. Weaknesses: 1. While described in footnotes and appendix, how the framework adapts to different RC models is not entirely intuitive. 2. Analyzing performance on varying lengths could further enhance evaluation. Despite minor issues, I find this paper acceptable and capable of inspiring new ideas. Other Comments Or Suggestions: None. Questions For Authors: 1. Please provide formulas to further clarify SARC's iteration when based on BiESN, which is the primary model evaluated in the experiments. 2. Since high efficiency is a core contribution, how does SARC achieve such performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s efforts and acknowledgement. Below, we provide detailed responses to your concerns: **1. Comparison by Sequence Length** In the table below, we group 128 datasets by sequence length and report the average rank of different methods across each group. The results show that SARC achieves the best average rank of 2.475 in the (500, 1000] case. In contrast, Rocket performs best in the (200, 500] range, while COTE ranks first in the remaining two groups. Despite this, SARC attains at least the second-best performance across all cases, demonstrating consistent competitiveness. --- | Length | Counts | rmESN | Conv. | Times. | Incep. | COTE | Hydra | Rocket | Mini. | SARC | |-----|-----|-------|----------|----------|---------------|---------|---------|--------|------------|--------| | (0, 200] | 41 | 6.6098 | 5.5366 | 8.1707 | 4.5610 | 3.6463 | 4.4878 | 4.0000 | 4.3171 | 3.6707 | | (200, 500] | 45 | 7.3111 | 6.1222 | 8.5667 | 4.1667 | 4.1333 | 4.0333 | 3.3000 | 3.9556 | 3.4111 | | (500, 1000] | 20 | 5.3750 | 5.6500 | 8.0250 | 4.8250 | 4.7250 | 4.5500 | 4.2000 | 5.1750 | 2.4750 | | (1000, 2844] | 22 | 6.9773 | 5.8636 | 8.8182 | 4.9091 | 2.2955 | 4.0909 | 4.0909 | 4.5909 | 3.3636 | --- **2. SARC’s Iteration Based on BiESN** SARC's iteration based on BiESN has been introduced through Equations (6)-(8) and footnote 3 of the main paper. For further clarification, we formally describe the computation of FreqRes based on BiESN. 
Given an input time series $\mathbf{u}$ of length $L$, a specific frequency $f$, and $p = [L/f]$, this FreqRes module iterates bidirectionally as follows: - Forward hidden state update: $$ \mathbf{x}^{\text{f}}(n) = \tanh \left(\mathbf{W}\_{\text{s}}^{\text{f}}\mathbf{x}^{\text{f}}(n-1) + \mathbf{W}\_{\text{c}}^{\text{f}}\mathbf{x}^{\text{f}}(n-p) + \mathbf{W}\_{\text{in}}^{\text{f}}\mathbf{u}(n) \right),$$ - Backward hidden state update: $$\mathbf{x}^{\text{b}}(n) = \tanh \left(\mathbf{W}\_{\text{s}}^{\text{b}}\mathbf{x}^{\text{b}}(n+1) + \mathbf{W}\_{\text{c}}^{\text{b}}\mathbf{x}^{\text{b}}(n+p) + \mathbf{W}\_{\text{in}}^{\text{b}}\mathbf{u}(n) \right),$$ Here, superscripts f/b denote forward/backward directions; $\mathbf{x}^{\text{f}}(n) = 0$ for $n \leq 0$; and $\mathbf{x}^{\text{b}}(n) = 0$ for $n > L$. After computing all hidden states, the states at the same time step are concatenated as: $\mathbf{x}(n) = \begin{bmatrix} \mathbf{x}^{\text{f}}(n) \\\\ \mathbf{x}^{\text{b}}(n) \end{bmatrix}$, which is then used to compute dynamic features as described in Equations (7) and (8). **3. Discussion on SARC's Efficiency** We attribute SARC’s high efficiency to three main factors: - Based on reservoir computing, SARC eliminates the need for iterative training and time-consuming gradient backpropagation. - Different FreqRes modules operate independently, enabling efficient parallel processing. - FreqRes maintains nearly linear complexity with respect to data size, ensuring scalability.
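For completeness, the bidirectional iteration can be sketched in NumPy as below (toy sizes and random weights, purely illustrative; the sketch exploits the fact that the backward recursion equals the forward recursion applied to the time-reversed input):

```python
import numpy as np

rng = np.random.default_rng(1)
s, L, f = 8, 60, 4                               # toy reservoir size, length, frequency
p = L // f                                       # cycle length p = [L/f]

def reservoir():
    W = rng.standard_normal((s, s))
    return W * (0.6 / np.linalg.norm(W, 2))      # keep each direction contractive

Ws_f, Wc_f, Ws_b, Wc_b = (reservoir() for _ in range(4))
win_f, win_b = 0.5 * rng.standard_normal(s), 0.5 * rng.standard_normal(s)
u = rng.standard_normal(L)

def one_direction(u, Ws, Wc, win):
    """x(n) = tanh(Ws x(n-1) + Wc x(n-p) + Win u(n)) with zero states outside the series."""
    X = np.zeros((len(u) + 1, s))
    for n in range(1, len(u) + 1):
        x_cyc = X[n - p] if n - p >= 0 else np.zeros(s)
        X[n] = np.tanh(Ws @ X[n - 1] + Wc @ x_cyc + win * u[n - 1])
    return X[1:]

Xf = one_direction(u, Ws_f, Wc_f, win_f)              # forward hidden states
Xb = one_direction(u[::-1], Ws_b, Wc_b, win_b)[::-1]  # backward pass via time reversal
X = np.concatenate([Xf, Xb], axis=1)                  # x(n) = [x_f(n); x_b(n)], shape (L, 2s)
```

Substituting $m = L + 1 - n$ into the backward update shows it is exactly the forward update on the reversed series, which is why a single recursion function suffices for both directions.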
CollabLLM: From Passive Responders to Active Collaborators
Accept (oral)
Summary: While state-of-the-art Large Language Models (LLMs) trained with RLHF are good at following user instructions, this paper argues that they are often "passive responders" in that they only passively respond to ambiguous or open-ended user requests. To address this limitation, this paper proposes to train LLMs with multi-turn aware utility through a conversation-level reward and a forward sampling process. The conversation-level reward is composed of an extrinsic reward of task completion and an intrinsic reward that prioritizes user experiences. Experiments have shown that in three simulated tasks, CollabLLM (trained with either PPO or DPO) is able to achieve better performance compared to prompting baselines. A large-scale user study is also carried out and it is shown that CollabLLM can indeed enhance user satisfaction over multiple turns. Claims And Evidence: In general, claims in the paper are well supported through empirical evidence, both through simulated experiments on three tasks and a user study conducted with Mechanical Turk workers. I particularly appreciate the inclusion of the user study with more than 200 participants, which shows positive generalization from model-simulated users to real users. Methods And Evaluation Criteria: Do proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand? Yes, the proposed evaluations are well-suited for the problem. Theoretical Claims: No theoretical claims are made in the paper. Experimental Designs Or Analyses: The experiment designs make sense in general, and I appreciate the ablation of the effect of reward mechanisms in Figure 4 and Figure 6, and the zero-shot generalization experiment in Table 2. However, it is unclear why a different model, GPT-4o, is used as the user simulator compared to the original model Llama-3.1-8B-Instruct. 
It would be nice to have some additional discussion of the effect of using a stronger model as the user simulator, and I would be curious to see if it is possible for the model to improve with "self-play" without relying on a stronger model as a user simulator. Supplementary Material: I reviewed some sections in the appendix, in particular the Related Works section and the User Study section. Relation To Broader Scientific Literature: I find a detailed discussion relating the paper's contribution to the broader literature missing from the main text. Although Appendix B mentions related works, I believe that it is important to address the connection of the paper to the prior literature in the main text. Additionally, in the related works section, the paper mentions that "However, these methods primarily rely on post-hoc trajectory-level data, learning from observed conversations rather than explicitly modeling the causal effect of individual responses on long-term task success (see Appendix A for further details).", but the relative advantage of learning from real-user conversations and explicitly modeling the causal effect of individual responses seems to be missing from the paper as far as I can tell. While I understand that real-user conversations might be hard to obtain, it would be valuable if the authors could provide some quantitative comparisons with the methods used in the prior literature. Essential References Not Discussed: Would be nice to have a comparison with other works that make use of user-simulators to improve LLMs, such as Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations (https://arxiv.org/abs/2311.05584). 
Other Strengths And Weaknesses: See above Other Comments Or Suggestions: See above Questions For Authors: My main questions are mentioned in the previous responses and I summarize them below: 1) While I understand that real-user conversations can be hard to obtain, it may strengthen the paper and better situate the paper in the related literature if the authors can provide quantitative comparisons with prior "Multiturn training for LLMs" as mentioned in the related works section. 2) It would be nice to add discussions explaining the differences between this paper and prior literature that uses LLMs as user simulators, e.g. https://arxiv.org/abs/2311.05584. 3) It would be helpful if the authors can provide some insight in terms of the limitations of applying LLMs as user simulators. In my experience, I found that instruction-tuned LLMs used as user simulators tend to be overly agreeable and always consent to the requests, while real users can be harder to deal with. Curious if such issues are also observed in the experiments. 4) Would be nice to se Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their approval and rigorous comments! Here we address the remaining concerns: --- ### **[Experimental Designs] "It would be nice to have additional discussions about using a stronger model as the user simulator.”** Great catch! A user simulator should follow the language style of previous user turns while exhibiting typical user behaviors like evolving needs and limited background knowledge. This requires LLMs to role-play users with a **basic understanding of real-world user traits and the ability to follow instructions effectively** [1,2,3]. Though not motivated by self-training, we initially tried using `Llama-3.1-8B` as our user simulator to reduce latency. Unfortunately, it performed poorly, frequently getting "confused" and solving problems as an assistant instead of a user. The same observation holds even for `Llama-3.1-70B`. We think this raises an interesting research problem - *while we have increasingly superior LLM assistants trained to solve problems, we lack user models that learn from real-world user behaviors*. Building better user models can be valuable for running simulations in real-world applications. Therefore, "self-play" was difficult to implement within the scope of this work, but this points to potential future work. We've added discussion in future work directions. --- Moreover, regarding the reviewer's Question \#3 - **“It would be helpful if the authors can provide some insight in terms of the limitations of applying LLMs as user simulators.”** We discussed this in our response to `Reviewer 2AZE` (please see the section's beginning), given space constraints. We've added the discussion in the paper to provide insights. --- ### **[Relation To Broader Scientific Literature]** **(1) “(add) prior literature in the main text.”** Yes, we agree. We will and should have space in the final version to accommodate the Related Work section in the main text! 
--- **(2) “it would be valuable if the authors can provide some quantitative comparisons with prior "Multiturn training for LLMs”** Thanks for this suggestion! Previously we looked into MTPO (Shani et al. 2024), but unfortunately we didn’t find an existing implementation. Another relevant method in the category, ArCHer (Zhou et al. 2024), is a hierarchical multiturn RL approach that requires the training of three LLMs: 1) an utterance-level Q-function, 2) an utterance-level value function, and 3) a token-level actor that maximizes the prediction of the Q-model, which exhibits high complexity in our setup. Moreover, this leads to more task-specific training, learning a token-by-token policy within each turn, while CollabLLM offers a more intuitive and generalizable method for multiturn interaction with a single model. --- ### **[Essential References Not Discussed] "(compare with) works that use user simulators to improve LLMs”** We further added the following content to the related work section: ``` Recent works employ user simulators to enhance dialogue systems [4,5] and LLMs [6,7,8]. Tseng et al. improve both dialogue systems and simulators through reinforcement learning during their interactions. Recently, Hong et al. leverage LLMs to create diverse synthetic dialogues with varying user personas, then train smaller dialogue models to optimize conversational outcomes. CollabLLM differs in leveraging user simulators in forward sampling to account for long-term effects in both offline and online training. ``` It is also worth mentioning that our main contribution comes from the key intuition of making the model aware of future outcomes and prioritizing responses with higher long-term impact. The other components are fundamentally aimed at computing long-term effects, with the use of a user simulator for forward sampling being just one, albeit minor, contribution of ours. 
--- ## **Summary** We sincerely hope our answers mitigate your concerns about (1) the user simulator models and their differences from real users, and (2) the comparison with works on "multiturn training for LLMs" and works using user simulators. Given our answers, we would very much appreciate it if you could reconsider your evaluation. Thank you very much! --- ## **Reference** [1] Park et al. Generative Agent Simulations of 1,000 People. arXiv. [2] Wang et al. User Behavior Simulation with Large Language Model-based Agents. arXiv. [3] Yoon et al. Evaluating Large Language Models as Generative User Simulators. NAACL 2024. [4] Shi et al. How to Build User Simulators to Train RL-based Dialog Systems. EMNLP 2019. [5] Tseng et al. Transferable Dialogue Systems and User Simulators. ACL-IJCNLP 2021. [6] Hong et al. Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations. arXiv. [7] Hu et al. Unlocking the Potential of User Feedback: Leveraging Large Language Model as User Simulator to Enhance Dialogue System. CIKM 2023. [8] Faltings et al. Interactive Text Generation. EMNLP 2023.
Summary: COLLABLLM is a new training framework designed to improve multi-turn human–LLM collaboration. Its core idea is to simulate a collaborative conversation setup where a Multiturn-aware Reward (MR) function estimates the long-term impact of the model’s responses, rather than focusing solely on immediate single-turn outcomes (as in standard RLHF). Main Contributions: -Multiturn-aware Rewards (MR): A conversation-level reward function that encourages the LLM to seek and incorporate additional context or clarification from users if it improves overall task success. -New Multi-turn Interaction Benchmark: which covers 3 challenging tasks related to document editing, coding, and mathematics. -COLLABLLM outperforms base (or prompt-engineered) baselines on 3 test sets by boosting task accuracy by 18.5% and interactivity by 46.3%, as judged by LLM evaluators. In a large-scale user study with 201 Amazon Mechanical Turkers, COLLABLLM also increases user satisfaction by 17.6% and saves 10.4% of user time compared to baselines. Claims And Evidence: Yes, they are (although I have concerns/reasons to reject expressed in the strengths/weaknesses section) Methods And Evaluation Criteria: Yes, they are (although I have concerns/reasons to reject expressed in the strengths/weaknesses section) Theoretical Claims: There are no strong theoretical claims in this paper Experimental Designs Or Analyses: Yes, I checked: -simulated experiments -crowdsourcing study Supplementary Material: No except the related work that has been put there (due to space constraint probably) Relation To Broader Scientific Literature: Paper is well positioned relative to the scientific literature, although I'm not a specialist in multi-turn human–LLM collaboration. 
Essential References Not Discussed: They are discussed but in the supp material Other Strengths And Weaknesses: Reasons to accept: -A well-designed framework aimed at improving multi-turn human–LLM collaboration, supported by experimental evaluation. -Introduction of a new benchmark specifically for evaluating multi-turn interactions. -An in-depth case study that goes beyond accuracy metrics, providing deeper insights into model behavior and interaction quality. -Proposed approach implicitly addresses (and casually solves?) the question clarification problem by optimizing for long-term goal achievement, naturally encouraging the model to seek more clarifications when needed, as demonstrated in the paper. Reasons to reject: -The improvements on simulated experiments of tab 1 (35% to 36-38% BLEU, 12.5 to 15) are small between prompt engineering and the proposed method (using PPO or DPO), raising doubts about real impact. With overall small performance improvement and a model size ≤8B parameters, the validation of the method is not 100% convincing to me. What would be the topline obtained w/ gpt4-o and the same prompt engineering for instance ? -It's unclear whether the improvements stem specifically from the multi-turn-aware reward (with w>0, regardless of whether the reward is based on helpfulness, intrinsic, extrinsic, or a mix) or from the reward modification itself (replacing helpfulness with extrinsic + intrinsic rewards), or alternatively is it the interaction between both factors that drives the gains? Other Comments Or Suggestions: Typo: -caption of fig 2: Figure 2: Real examples from COLLABLLM and non-collaborative LLM fine-tuing => fine-tuNing Questions For Authors: -MediumDocEdit-Chat: task performance is evaluated using BLEU, which measures similarity between the extracted document and the original articles. How is the document extracted? It's unclear what exactly is being generated here. Is BLEU the right metric for this task? 
Why not also use LLM judges for a more qualitative assessment? -Interactivity (ITR): engagement is evaluated using an LLM judge (Claude-3.5-Sonnet) with scores rescaled to [0,1] But how exactly is this scoring performed? The methodology needs more clarity. -Figure 4: why does ITR performance decrease when the forward sampling window size increases from w=2 to w=3? This behaviour seems counterintuitive to me. What could explain it? -What about optimizing helpfulness (as assessed by the LLM evaluator) using w>0? Is it feasible? If so, why was this approach not explored? Ethical Review Concerns: none Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's extensive and thoughtful comments! We'll address each comment below: --- ### **[Other Weaknesses] "The improvements (over prompt engineering) on simulated experiments of Tab 1 are small… What would be the topline obtained w/ gpt4-o + prompt engineering?”** Thanks for the comment! We'd like to provide the following justification: - **Task-specific performance upper bound of improving collaborative strategies:** The training of CollabLLMs does not necessarily provide more task-specific knowledge; rather, the primary goal is to explore the best collaboration strategies for the models to understand user requests and deliver their internal knowledge. Therefore, we believe an upper bound constraint exists for the improvement we gain from improving collaboration strategies. - **Challenging Tasks:** In particular on MediumDocEdit-Chat, the task itself is challenging with a large generation space for writing the blogs, making the BLEU metric hard to improve. Given the first reason, comparing with gpt-4o (in general, comparing between models with different base models) may not be fair, as it's difficult to isolate effects from stronger knowledge versus better collaborative strategies. There might also be data contamination risks with gpt-4o, which was likely trained on internet-scale data possibly including our datasets. However, we're willing to run testing if you're curious about the results. --- ### **[Other Weaknesses + Questions] "Where do the improvements stem from? Multiturn-aware reward design, or reward modification, or interaction between both factors?”** This relates to one of the reviewer’s questions - `"What about optimizing helpfulness using w>0?"`. For validation, we extended helpfulness, intrinsic, and extrinsic metrics to $w=1,2,3$ in model training, following the same setting as Figure 4. Please see visualized results at: https://anonymous.4open.science/r/collab-llm/images/ablation.png. 
Previously, we viewed helpfulness as a type of intrinsic reward. Here, we found applying helpfulness alone doesn't work well on all metrics, especially as it encourages lengthy responses. For the other metrics, increasing $w$ generally benefits the performance on the corresponding metric. For example, applying extrinsic rewards improves the BLEU score, while the ITR and token amount underperform the Extrinsic+Intrinsic setting adopted in CollabLLM. Notably, the design of extrinsic and intrinsic rewards is independent of the key design in the MR function, which highlights the estimation of a response’s long-term effect via forward sampling. In fact, one can apply multiple intrinsic rewards including helpfulness and extrinsic rewards in the MR. We hope the additional ablation clarifies the source of improvements; we have replaced `Figure 4` with this more comprehensive study. --- ### **[Questions] Clarify the experiments on MediumDocEdit-Chat** **(1) “How is the document extracted?”** The document is extracted by prompting an LLM to extract the final written content after multiple conversational turns between the user simulator and the model, then comparing this generated document with the original Medium article using BLEU score to evaluate similarity. We added this description to the experiment setup. **(2) “How exactly is this scoring for the ITR metric performed?”** Please see `Appendix D.4`, starting from `Line 915`, where we provide the full prompts used to produce ITR. **(3) “Is BLEU the right metric for this task? Why not also use LLM judges?”** Yes, we can use LLM judges, but we already have human evaluation of document quality in the user study, which, we believe, is perhaps more convincing for assessing document quality. We're happy to reevaluate if the reviewer thinks otherwise. --- ### **[Questions] "Figure 4: why does ITR performance decrease from $w=2$ to $w=3$?”** Great catch! Following the MR formulation in `Eq. 
1`, with increasing $w$, each model response's effect estimation on the final goal should be more accurate, and ITR performance should improve. However, for scalability, we conduct Monte Carlo sampling for future conversations with sample size fixed at 3 `(Appendix C.2)`, inevitably introducing estimation errors. This explains potential fluctuation with increasing $w$. We added this interpretation to the ablation section. Thanks for this question! Lastly, we have fixed the typo identified by the reviewer. --- ## **Summary** We deeply appreciate your detailed and thoughtful comments. We hope our answers address concerns about (1) significance of improvements, (2) source of improvements, and (3) clarity about evaluation and ablation results. Please let us know if you have more questions! Thank you again! --- Rebuttal Comment 1.1: Comment: Thank authors for having addressed my questions and concerns. I have nothing to add here and overall this confirms my positive feedback on the paper. I still think having an upper bound gpt4-o topline (comparing with gpt-4o) would be informative though... --- Reply to Comment 1.1.1: Comment: Thanks to Reviewer YkB9 for the comment and acknowledgement. For sure, we are happy to provide the reference results running gpt-4o with proactive prompting: https://anonymous.4open.science/r/collab-llm/images/gpt-4o-reference.png - For **task-specific metrics**, gpt-4o achieves the best results on math and coding tasks, which is expected since gpt-4o has exhibited much stronger knowledge from pretraining compared to `Llama-3.1-8b`. However, the performance of gpt-4o on the document editing task is particularly low, which emphasizes the positive impact of our multiturn-aware training on open-ended tasks even when compared to a much stronger model. - For **number of tokens**, gpt-4o generates 28.9% more content when compared with `Proactive Base`, and 51.3% more content when compared with our `Online DPO models`. 
We observe that the actual generations from gpt-4o, e.g., on the document editing task, are extremely verbose, especially when the user simulator did not specify a length.

- For **interactivity**, gpt-4o is slightly better than CollabLLMs on the math and coding tasks, while its interactivity is lower than CollabLLMs' on the document editing task.

We hope this reference provides more information (we also apologize for replying a bit late). We have added the results to the paper.
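As background on the BLEU-based document similarity described earlier in this thread, here is a toy pure-Python sketch computing the geometric mean of clipped unigram and bigram precisions with a brevity penalty. This is illustrative only; the actual evaluation presumably uses a standard BLEU implementation such as sacreBLEU or NLTK.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=2):
    # Geometric mean of clipped n-gram precisions times a brevity penalty.
    ref, hyp = reference.split(), hypothesis.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(1, sum(hyp_counts.values()))
        log_prec += math.log(max(clipped, 1e-9) / total)  # floor avoids log(0)
    brevity = min(1.0, math.exp(1.0 - len(ref) / len(hyp)))
    return brevity * math.exp(log_prec / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on mat"), 3))  # ≈ 0.709
```

A shorter hypothesis is penalized by the brevity factor even when all of its n-grams appear in the reference, which is why verbosity alone (as observed for gpt-4o above) does not automatically raise the score.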
Summary: This paper studies how to enhance human-AI collaboration by improving multi-turn conversations. Concretely, the authors propose a learning framework, CollabLLM, that uses a reward function aware of the multi-turn setup in reinforcement finetuning. This multiturn-aware reward takes into account both task performance and user satisfaction, and is shown to be empirically effective in several simulated environments including text editing, code generation, and math reasoning.

Claims And Evidence:
- This work addresses a key limitation of existing LLMs: the tendency to generate single-turn responses without actively engaging in clarifying or guiding user intents.
- The proposed multiturn-aware reward function is an interesting contribution, as it incorporates both extrinsic task success metrics and intrinsic user experience factors (e.g., excessive tokens to read and write).
- This work thoroughly evaluates CollabLLM across multiple tasks, showing substantial improvements in task success and user engagement with simulated users. Additional human evaluation with 201 crowd-worker participants provides empirical validation beyond automated benchmarks, showing increased user satisfaction and reduced time spent on tasks.
- The ablation section provides useful insights into the importance of forward-looking strategies in reinforcement learning.
- This paper is very well written.

Methods And Evaluation Criteria: Three multiturn interaction benchmarks are proposed, covering document editing, code generation, and math problem-solving. The evaluation criteria are diverse, including measurements of task accuracy, interactivity, user satisfaction, user effort, etc.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experimental design and analysis sections look sound and thorough.

Supplementary Material: No

Relation To Broader Scientific Literature: The discussion around the suboptimal performance in handling multi-turn interactions is well-motivated and supported by literature.
The insight of using multiturn-aware reward and forward sampling strategies is shown to be effective, and seems generalizable to other tasks.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
- A comparison of the potential divergence between simulated users and human users during training would further strengthen this work, as prompt-defined simulated LLM users could be substantially biased.
- The multiturn-aware reward function is intrinsically hard to define for ambiguous tasks, which limits its applicability.

Other Comments Or Suggestions: N/A

Questions For Authors: Any thoughts on the computational expense of the forward sampling strategy? This seems nontrivial, especially for long conversations.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely appreciate your approval and insightful comments! We further address your comments below:

---

## **Comment 1: "A comparison of the potential divergence of simulated user and human user during training would further strengthen this work"**

Good point! We agree that prompt-defined user simulators could be biased. Due to the large-scale forward conversations needed for computing Multiturn-aware Rewards, we only used the user simulator during training. While not intended for training, in the user study we collected conversations between real users and our models. Here we provide some insights into the differences and similarities between prompt-defined user simulators and real users' responses.

**Differences:**
- *Real users communicate with shorter, more fragmented sentences often containing grammatical errors, while simulated users typically use more complete sentences.*
- *Real users frequently change direction mid-conversation and add highly specific personal details (like "eight dogs"), while simulated users are more predictable.*
- *Real users express emotional reactions more bluntly ("that's awful," "sounds pretentious") and use more casual language patterns with abbreviations and incomplete thoughts compared to simulated users.*

**Similarities:**
- *Both exhibit iterative content development patterns - gradually revealing requirements rather than providing complete information upfront.*
- *Both prioritize accessibility - consistently requesting simplification of complex topics, actionable advice, and concrete examples that make information more understandable.*
- *Both express preferences about content structure and style, and acknowledge when content meets or doesn't meet their expectations.*

We train models by interacting with simulated LLM users, while conducting user studies to evaluate model performance with real users as a test of generalization. This generalization is validated by the experimental improvements.
With more resources, it would be interesting to further explore how sensitive the trained models are to the choice of simulated users. We are glad that the reviewer raises this interesting question. We have added these insights to `Appendix F`, as well as noting this key gap in `Section 6: Real-world User Study`. Hopefully these can shed light on future human-centered LLM training.

---

## **Comment 2: "Multiturn-aware reward function is intrinsically hard to define for ambiguous tasks"**

Thanks for the comment! Optimizing for ambiguous tasks can be hard even in single-turn settings, let alone the multiturn settings that we study. In general, for ambiguous tasks such as recommendation or consulting, one mitigation is to use LLM judges whose inputs contain the task definition. The assumption is that, since LLMs are powerful at reasoning, they are fairly good at telling whether a task has been completed well. Empirically, this has been commonly adopted in evaluation and benchmarking. For our Multiturn-aware Reward function, we incorporate both extrinsic and intrinsic rewards, where the intrinsic rewards (interactivity and efficiency) should be applicable to most applications. For extrinsic/task-specific metrics, the same design discussed above can be applied over the future conversations. We have added this discussion to our main paper!

---

## **Question 1: "the computational expensive aspect of the forward sampling strategy?"**

Thanks for this question! In online training, the computational overhead in forward sampling comes from 1) generation from the policy model and 2) generation from the user simulator.

- For (1), the computational overhead and cost are fairly low since we have integrated vLLM [1] into model inference.
- For (2), we use gpt-4o-mini as the user simulator, where we expect user responses to be concise, i.e., the number of output tokens to be small.
We compute the average statistics over 100 future conversations with $w=1,2,3$ on MediumDocEdit-Chat, the document editing task, which has the maximum computational overhead among the three tasks. Please see the table in https://anonymous.4open.science/r/collab-llm/images/cost.png

We have added this information to `Appendix C.3: Computational Cost During Training`, and hopefully this provides clear details.

---

## **Summary**

We thank the reviewer for the interesting questions and comments. We hope our responses alleviate the concerns on 1) user simulators, and 2) applicability to ambiguous tasks. We also provide more details about computational expenses, and have improved our manuscript accordingly. Please don't hesitate to let us know if you have more questions!
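To make the Monte Carlo estimation behind the Multiturn-aware Reward concrete: the long-term effect of a response is approximated by averaging rewards over a small number of sampled future conversations. The toy sketch below uses an invented uniform reward sampler as a stand-in for rolling out and scoring a future conversation (it is not the actual training pipeline); it shows that with the sample size fixed at 3, the estimator is unbiased but retains noticeable variance.

```python
import random
import statistics

def sample_future_reward(rng):
    # stand-in for rolling out one future conversation with a user
    # simulator and scoring it; the true expected reward here is 0.5
    return rng.random()

def multiturn_aware_reward(n_samples, rng):
    # Monte Carlo estimate of a response's long-term effect,
    # averaged over n_samples sampled future conversations
    return sum(sample_future_reward(rng) for _ in range(n_samples)) / n_samples

rng = random.Random(0)
estimates = [multiturn_aware_reward(3, rng) for _ in range(200)]
mean_mr = statistics.mean(estimates)
std_mr = statistics.stdev(estimates)
print(f"mean={mean_mr:.3f}, std={std_mr:.3f}")
```

The spread of the estimate is governed by the number of sampled futures (here 3), not by the lookahead window $w$, which is why some fluctuation across $w$ values is expected in the ablation.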
Summary: Existing fine-tuning techniques for LLMs, such as Reinforcement Learning from Human Feedback (RLHF), primarily maximize the reward for immediate, single-turn responses. However, real-world users often do not reveal their intents or preferences until later interactions; thus, to streamline their interaction with users and improve user satisfaction, LLMs must be able to actively guide users to clarify and refine their intents throughout the multi-turn conversation. This paper proposes CollabLLM, a novel training framework that encourages LLMs to collaborate with humans in multi-turn conversations. The collaborative simulation module of CollabLLM samples future conversations with users to estimate how the LLM response would impact future turns. This long-term impact, termed Multiturn-aware Reward (MR), evaluates responses based on both task-specific success and efficiency to assess the multi-turn collaboration quality. Once this MR is computed, CollabLLM employs established RL algorithms to fine-tune the backbone LLM. In addition, the paper releases three multiturn datasets across diverse domains - collaborative document editing, coding problem assistance, and multiturn problem solving - to fine-tune and evaluate LLMs' multiturn conversational capabilities.

Claims And Evidence:

C1. CollabLLM encourages LLMs to collaborate with human users in multiturn conversations -> lacks evidence or clarity
- It is clear that multiturn data are collected with an LLM that is prompted as a user simulator. On the contrary, it is rather unclear how the multiturn reward obtained with multiturn data effectively encourages collaboration.

C2. The reward design of CollabLLM aligns with causal effect estimation. -> somewhat convincing
- Could the authors elaborate more on their claim in lines 685-687 of the Appendix: "existing methods primarily rely on post-hoc trajectory-level data, learning from observed conversations rather than explicitly modeling the causal effect of individual responses on long-term task success"? Is this because the existing methods do not have a user simulator, and thus they lack the ability to probe the long-term impact of LLM responses? Then, how do existing methods create post-hoc trajectory-level data? How are these different from the data created by CollabLLM? These distinctions would make the related works section more comprehensive and improve the paper's quality as a standalone academic paper.

C3. Three datasets for fine-tuning and evaluating LLMs on multiturn conversations are proposed -> lacks evidence
- Please correct me if I am wrong, but I could not find supplementary materials or anonymized links to these datasets. While the authors provide samples in the Appendix, I would have preferred to see the full datasets to get a clearer picture of the dataset quality and quantity.

Methods And Evaluation Criteria: The proposed method is technically sound. The evaluation criteria are aligned with the conventions in this field.

Theoretical Claims: The claim on the cause-effect estimation with CollabLLM is unclear; this concern was raised above.

Experimental Designs Or Analyses: The experimental designs and analyses are valid.

Supplementary Material: I read through the Appendix; no other supplementary material was provided.

Relation To Broader Scientific Literature: This was included in the summary.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths
- The problem is well-motivated. Improving the LLMs' multiturn conversational capability is an important problem.
- The proposed method, which relies on data generation with a user simulator and multi-turn reward design, is technically sound.
- The paper additionally introduces three public benchmarks for multi-turn conversation research.
- The results are strong, and the authors compare the proposed method against strong baselines.

Weaknesses (already mentioned in the Claims and Evidence section)
- How the proposed method encourages collaborative behavior could be better discussed.
- The cause-effect estimation with the user simulator could use some clarification.

Other Comments Or Suggestions: As collecting real data is expensive, and it is commonly believed that most of the data available at hand (e.g., via internet crawling) has been exhausted during LLM pre-training, utilizing LLM-generated data to improve performance on downstream tasks or to encourage a certain behavior is gaining popularity. It appears that this work shares this philosophy of self-training, and has made clever modifications to tailor it specifically for multi-turn conversation capabilities. Therefore, the methodology, while described using fancy technical terms such as "forward-looking strategies" or "user simulator," may not necessarily be as novel as the authors claim it to be. For instance, when explained in plain terms, estimating forward-looking strategies with a user simulator is more or less the same as generating more realistic multi-turn data with an LLM that is prompted to behave like a human user. Therefore, the motivation and design principle behind the methodology must be better conveyed to show that this work goes beyond simply engineering and re-designing the self-training framework for the purpose of multiturn conversations, and does indeed reveal an unknown or discussion-worthy application of LLM-backed data generation. If this concern, raised in the Claims and Weaknesses section, could be addressed, I am willing to raise my score to recommend acceptance of the paper.

Questions For Authors: Please refer to the above section.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We appreciate your approval and useful feedback! Here we address your concerns:

---

## **[Claims And Evidence] "How does the multiturn reward effectively encourage collaboration?"**

- At the **methodology** level, the Multiturn-aware Reward (MR) encourages collaboration by accounting for the long-term effect of each response on future interactions. A more collaborative LLM should achieve higher extrinsic and intrinsic rewards within a fixed number of turns, corresponding to task completion and efficiency/engagement. `=> clarity`
- At the **data** level, since the data is generated by applying MR, supervised training on the multiturn data encourages the model to replicate the behavior. MR is also used for online training, which reinforces the model to optimize MR based on its current generations. `=> clarity`
- At the **experiment** level, we show that in simulated environments and the real-world user study, CollabLLMs achieve the best collaboration with humans along the collaborative aspects we considered. `=> evidence`

---

## **[Claims And Evidence] Could the authors elaborate more on lines 685-687?**

We apologize for this concise statement and provide the following clearer explanation, and we will update the paper accordingly:

- In plain language, our MRs provide **turn-level** signals capturing the long-term effect of each model response, while existing methods such as MTPO [1] apply **trajectory-level** rewards, making it hard to dissect the effect of good/bad responses inside a conversation.
- In depth, MRs answer **`"How does the current model response impact future interactions?"`** We train the model to produce responses that maximize the final reward given the context. In contrast, MTPO leverages preference rewards between two conversations, answering **`"Which conversation should the model prefer?"`**, where multiple responses are entangled, making it hard to dissect their individual influence on the entire conversation.
- Borrowing terms from the causality literature [2], we refer to the first mechanism as **interventional**, and the second as **observational**. The interventional mechanism of MRs offers a more fine-grained estimate of the effect of model responses.

Therefore, to your questions, the difference does not come simply from the user simulator. In fact, existing methods like MTPO can have user simulators while still relying on post-hoc comparison between conversations. In terms of data creation, the data we generated (available in the next answer) comes from turn-by-turn filtering guided by MRs, while MTPO is trained on pairs of good and bad conversations. We hope these explanations convey the distinctions clearly. We have added them to the related works section to improve the paper's quality, as suggested by the reviewer.

---

## **[Claims And Evidence] Request to access the multiturn datasets**

We are happy to provide the datasets! You can click the anonymous link: https://anonymous.4open.science/r/collab-llm/notebooks/load_conv_data.ipynb, where we provide the script that loads the full datasets from `data/`. The notebook displays random samples of the data and can be updated to show different samples. We hope this helps with getting a clearer picture of our dataset!

---

## **[Other Suggestions] "the design must be better conveyed to show that this work goes beyond re-designing the self-training framework"**

Thanks for raising the insight about self-training and our work! Self-training typically involves generating synthetic data to improve model performance. Under this scope, our work indeed 1) conducts synthetic data generation, and 2) leverages this data to improve the model. However, there are many possible ways this design could have been realized. In particular, our key intuition in (2) is that we want the model to **be aware of future outcomes and prioritize responses with higher long-term impact**, which constitutes our main novelty. To achieve this, we consider both extrinsic and intrinsic metrics for a more user-centric estimation of long-term effects. The rest of the components in (1) are fundamentally aimed at **computing long-term effects**, with the use of a user simulator for forward sampling being just one, albeit minor, contribution of ours. Moreover, our models are not merely trained on offline synthetic data; they also leverage MRs for online training to adapt model behavior.

---

## **Summary**

We hope our responses address your concerns on clarity and data accessibility. We acknowledge the previous draft may have been too concise in explaining differences from related works. We have revised thoroughly and added a paragraph in Related Work discussing the connection with self-training. We sincerely appreciate your reconsideration of our work in light of our responses. Thank you for your insights!

---

## **Reference**

[1] Shani et al. Multi-turn reinforcement learning from preference human feedback. arXiv:2405.14655.

[2] Pearl et al. Causal Inference in Statistics: A Primer. 2016.
Summary: This paper introduces CollabLLM, a training framework designed to enhance the capability of large language models (LLMs) to collaborate with humans in multi-turn interactions. The basic idea is to introduce forward-looking behaviors in LLMs to maximize long-term collaborative outcomes. This is achieved through a collaborative simulation module, which samples potential future user interactions to assess the impact of current responses using a new metric called Multiturn-aware Reward (MR). The MR combines both extrinsic factors, such as successful task completion, and intrinsic factors, like interaction efficiency, to comprehensively evaluate response quality. By applying reinforcement learning methods to optimize responses according to MR, CollabLLM improves models' abilities to proactively engage users, leading to superior collaborative task performance. The experimental results show the fine-tuned model actively anticipates user needs, poses relevant follow-up questions, generates targeted content, and offers insightful recommendations.

Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence, specifically in Section 5.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria in this paper make sense for addressing the multiturn collaboration problem. Specifically: The authors clearly identify the limitations of existing methods (like RLHF), which typically reward single-turn responses and do not effectively address long-term user interactions. The introduction of Multiturn-aware Reward (MR) and a collaborative simulation module effectively addresses these limitations by explicitly modeling forward-looking behaviors, thereby ensuring LLMs actively engage in clarifying user intent, leading to better long-term outcomes. The use of both extrinsic (task-specific success) and intrinsic (interaction efficiency and interactivity) evaluation metrics provides a comprehensive and meaningful assessment of collaboration quality.

Theoretical Claims: The paper does not contain explicit theoretical proofs.

Experimental Designs Or Analyses:

Multiturn-aware Reward (MR) Ablation (Section 5.1, Figure 4):
- Validity: The ablation study clearly compares immediate reward methods (Helpfulness, Extrinsic, Extrinsic + Intrinsic) and multiturn-aware reward variants (with window sizes w=1,2,3). The controlled experimental setup is sound because it isolates the effect of reward design clearly and directly evaluates their relative effectiveness.
- Issues: No major issues. However, the authors briefly mention the computational costs associated with larger window sizes, but explicit details about these computational trade-offs are sparse. Including more detail on computational overhead might enhance the practical interpretability of the findings.

Generalization Tests (Section 5.3, Table 2):
- Validity: The authors clearly test model generalization by evaluating on Abg-CoQA, a dataset distinct from the training domains, thereby assessing whether learned collaborative strategies transfer effectively.
- Issues: Generalization tests were limited to a single additional dataset. Including multiple diverse external benchmarks could strengthen claims about generalizability.

Supplementary Material: No

Relation To Broader Scientific Literature:

1. Addressing Limitations of Single-turn Reward Methods (e.g., RLHF)
- In prior work, RLHF significantly advanced LLM fine-tuning using single-turn feedback, optimizing immediate next-turn responses. This method is now a standard baseline but is inherently limited for multi-turn interactions because it neglects the cumulative effects of model responses on long-term user goals.
- CollabLLM introduces Multiturn-aware Rewards (MR), explicitly modeling the long-term trajectory of human-model interactions.
- Unlike traditional RLHF, MR leverages forward sampling to anticipate conversational impact, thus directly overcoming RLHF's known limitations regarding long-term interaction quality.

2. Proactive, Clarification-based Interactions
- Prior work explored using LLMs proactively, especially for clarification questions. However, these methods often rely heavily on predefined interaction patterns or specific domains, limiting adaptability. Prompting-based methods attempt similar proactive strategies but struggle with generalizability across diverse user scenarios.
- CollabLLM generalizes proactive collaboration through reinforcement learning, enabling more versatile interactions that dynamically adapt to user intent across different tasks and scenarios.

Essential References Not Discussed: Some related work not included in the paper:
1. Abdulhai, Marwa, et al. "LMRL Gym: Benchmarks for multi-turn reinforcement learning with language models." arXiv preprint arXiv:2311.18232 (2023).
2. Shani, Lior, et al. "Multi-turn reinforcement learning from preference human feedback." arXiv preprint arXiv:2405.14655 (2024).

Other Strengths And Weaknesses:

Strengths:
- The paper introduces CollabLLM, a novel training framework designed to enhance multiturn human-LLM collaboration.
- The development of Multiturn-aware Rewards (MR) represents a significant advancement over traditional single-turn reward mechanisms, addressing the limitations of models like RLHF in long-term interactions.
- The paper is well-structured, with clear explanations of the methodology, experimental setups, and results.

Weaknesses:
- While the paper introduces innovative concepts, it does not discuss certain related works that have explored similar themes, such as multi-turn reinforcement learning benchmarks and proactive clarification in language models. Incorporating these references could provide a more comprehensive context for the contributions.
- The paper could benefit from a more detailed discussion of the computational overhead associated with the proposed methods, particularly regarding the scalability of Multiturn-aware Rewards. This information would be valuable for practitioners considering the implementation of CollabLLM.

Other Comments Or Suggestions: As suggested in strengths and weaknesses.

Questions For Authors:
- How does CollabLLM integrate with existing reinforcement learning frameworks and algorithms? Are there specific modifications or considerations required to implement MR within standard RL pipelines? (Clarifying the integration process would help us understand the feasibility of adopting CollabLLM in other systems. Complex integration requirements could pose barriers to implementation.)

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your constructive and thoughtful suggestions touching on the practicality of our work! We address your remaining concerns:

---

## **[Experimental Analyses | Weaknesses] "(What are) the computational costs associated with larger window sizes"**

Thanks for this suggestion! In online training, the computational overhead in forward sampling comes from 1) generation from the policy model and 2) generation from the user simulator.

- For (1), the computational overhead and cost are fairly low since we have integrated vLLM [1] into model inference.
- For (2), we use gpt-4o-mini as the user simulator, where we expect user responses to be concise, i.e., the number of output tokens to be small.

We compute the average statistics over 100 future conversations with $w=1,2,3$ on MediumDocEdit-Chat, the document editing task, which has the maximum computational overhead among the three tasks. Please see the table in https://anonymous.4open.science/r/collab-llm/images/cost.png

We have added this information to `Appendix C.3: Computational Cost During Training`, and hopefully this provides clear details.

---

## **[Experimental Designs | Weaknesses] "Including multiple diverse external benchmarks could strengthen claims about generalizability."**

In addition to Abg-CoQA, the model's generalizability is also validated in our user study, where the deployed model was trained on the MediumDocEdit-Chat task with document types restricted to a collection of Medium blogs. In deployment, the model also interacts with users to write personal statements and conduct creative writing.

---

## **[Essential References | Weaknesses] Adding related works**

Thanks for providing more references! The second paper [3] that the reviewer listed is discussed in `Line 631 (Appendix A)` and `Table 4 (Appendix B)`. A difference between CollabLLM and MTPO [3] is that our Multiturn-aware Rewards provide **accurate, turn-level** signals capturing the long-term effect of each model response, while MTPO applies **trajectory-level** rewards when training the LLMs, making it hard to dissect the effect of good/bad responses inside a trajectory/conversation.

We further added the benchmark paper [2] to `Line 682 (Related Work)`, which will be included in the main paper in the final version. Here is the modified content:

```Recent benchmarks~\cite{LMRL,MTEval} evaluate LLMs' performance in multiturn settings, measuring the goal orientation and planning capabilities of models across interactions. Several studies...```

We hope this addresses your concern about the comprehensiveness of our related work.

---

## **[Questions] How does CollabLLM integrate with RL frameworks and what modifications are needed?**

CollabLLM is plug-and-play with two user-defined modifications:

1) **(Optional) User simulator prompt.** We provide the default prompt (the one we used in the paper); in some cases, the user characteristics are known ahead of time for certain tasks. For example, consider an LLM for education where users may be students with a basic physics understanding. Brief instructions for role-playing can better approximate real conversations.
2) **Metrics.** The intrinsic rewards (interactivity and efficiency) should be applicable to most applications, while users can define other task-specific metrics, such as accuracy, correctness, or, for example, bargaining advantage in debating or deal-making tasks.

We are glad that the reviewer raises this question. Our implementation goal is to make CollabLLM easy to use, able to accommodate user-customized tasks, and efficient with a fast inference infrastructure such as vLLM, so as to be compatible with RL training libraries such as TRL [4].

---

## **Summary**

We thank you for the extensive review and insights! We hope our answers address your concerns on 1) computational overhead, 2) generalization, and 3) related works. We further provide clarification about how to integrate existing RL training with CollabLLM. Finally, we greatly appreciate your support, which strengthens the paper in terms of its practical potential. Thank you again!

---

## **Reference**

[1] Kwon et al. Efficient Memory Management for Large Language Model Serving with PagedAttention. SOSP 2023.

[2] Abdulhai, Marwa, et al. LMRL Gym: Benchmarks for multi-turn reinforcement learning with language models. arXiv preprint arXiv:2311.18232 (2023).

[3] Shani, Lior, et al. Multi-turn reinforcement learning from preference human feedback. arXiv preprint arXiv:2405.14655 (2024).

[4] von Werra et al. TRL: Transformer Reinforcement Learning. 2020.
ShieldAgent: Shielding Agents via Verifiable Safety Policy Reasoning
Accept (poster)
Summary: The paper proposes a guardrail/shielding scaffold for LLM-based agents. The core idea is to parse safety documentation such as policies into atomic rules, which are then encoded as probabilistic circuits. The use of probabilistic circuits thus allows for efficient marginal inference of the likelihood that a rule will be violated given some action being taken by the agent. In addition to this main contribution, the paper also contributes an extensive benchmark containing over a thousand examples of unsafe agent trajectories. Empirically, the method proposed in the paper achieves higher accuracies (in terms of predicting whether an action is safe or not, and which rule it is that will be violated) compared to previous work. ## update after rebuttal I have kept my original score of 4 - Accept. I believe the additional experiments and discussion provided by the authors during the rebuttal, including an additional set of ablations, to be very valuable; my reason for not raising my score further is that this is not my area of expertise, and other reviewers, in particular tHcj, have voiced concerns that appear to be valid. I am not sure I share them, for example I do not find the overall framework to be unclear, but nonetheless the concerns of the other reviewers should be taken into account when making the final recommendation. Claims And Evidence: I believe the claims to be clear and well supported by the given evidence. However, I would have liked to see more discussion of the impact of the hyperparameters, of which there appears to be many (e.g. the rule similarity threshold $\theta$ and $\epsilon_\text{tol}$), as this appears to be a crucial part of trading off the accuracy and runtime of the safety harness. Methods And Evaluation Criteria: Yes, the methods and criteria makes sense. 
The additional contribution of a new benchmark for this task strengthens the paper (since to be honest there is not--to the extent of my knowledge--many existing benchmarks out there to evaluate these methods on), and it does not appear to be constructed in a way that would unfairly favor the author's proposed approach. My only critique in the terms of the criteria is that in the context of ensuring safety, false negatives are much worse than false positives. I would thus have liked to see an evaluation in terms of the false negative rate, in addition to the accuracy and false positive rate. Theoretical Claims: N/A; no theoretical claims made. Experimental Designs Or Analyses: The experiments appear to be both sound and thorough. The benchmarks aren't huge, but given the complexity of constructing such datasets this does not feel like a fair criticism of the work, especially as the authors already supplement their main methodological contribution with a novel benchmark themselves. Supplementary Material: Yes, I have reviewed the appendices. Appendix A.3. and B appear to be empty/missing. These missing/empty appendices do not appear to have been referred to at any point in the main text, so it seems to be a rather benign formatting issue. Relation To Broader Scientific Literature: The paper appears to be a significant contribution to the agent literature. Not only is this type of work very important, but the methodological decisions are themselves very interesting and (to my knowledge) novel, such as the use of probabilistic circuits to make exact safety inference tractable. Previous works which I am familiar with, such as ShieldLM by Zhang et al. 
(2024), were much more limited in scope and largely relied on the LLM to implicitly evaluate the risk of a rule violation; in particular, framing the guardrail problem as an MDP and using exact probabilistic inference to compute the probability of a rule being violated appears to be a much more well-founded approach to the problem.

Essential References Not Discussed: None to my knowledge.

Other Strengths And Weaknesses: The paper is very clearly written. While the methodology is not particularly technically involved, it does have a lot of moving parts, and yet they are presented in a way that (in my opinion) is very easy to follow. Besides that, I applaud the authors for focusing their efforts on an issue which will be very important as agentic systems are likely to become more and more popular over the next few years.

Other Comments Or Suggestions: There is a minor typo on line 191.

Questions For Authors:

1. Have you carried out an ablation for the different parts of your system; for example, how often does your system fail because of a poorly tuned probabilistic circuit vs. a misformulated rule?
2. How do you set the hyperparameters for your method? (I didn't see any discussion of this in the appendix, but please correct me if I am wrong.)

Code Of Conduct: Affirmed.

Overall Recommendation: 4
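The efficient marginal inference that the review credits to probabilistic circuits can be illustrated with a toy example. The circuit below (its structure, weights, and variable names) is entirely hypothetical and is not taken from the paper; it only shows why marginalization in such circuits is a single bottom-up pass rather than an enumeration over assignments.

```python
# Toy probabilistic circuit over two binary variables X (e.g. "action risky?")
# and Y (e.g. "context sensitive?"). Leaves are indicators; internal nodes are
# weighted sums (mixtures) and products over disjoint scopes. All weights and
# variable meanings here are hypothetical, for illustration only.

def circuit(x, y):
    # Indicator leaves for X=1, X=0, Y=1, Y=0.
    x1, x0 = x, 1 - x
    y1, y0 = y, 1 - y
    # Product nodes combine independent scopes; the top sum node mixes them.
    return 0.7 * (x1 * y1) + 0.2 * (x1 * y0) + 0.1 * (x0 * y1)

# Joint probability of a full assignment.
p_joint = circuit(1, 0)  # P(X=1, Y=0)

# Marginal P(X=1): set both of Y's indicator leaves to 1 and re-evaluate.
# This is the key trick -- marginalization costs one pass over the circuit.
def marginal_x1():
    x1, x0 = 1, 0
    y1 = y0 = 1  # marginalize Y out
    return 0.7 * (x1 * y1) + 0.2 * (x1 * y0) + 0.1 * (x0 * y1)

print(p_joint)        # 0.2
print(marginal_x1())  # 0.9  ( = P(X=1,Y=1) + P(X=1,Y=0) )
```

Note that the marginal agrees with explicit summation (0.7 + 0.2 = 0.9); for large circuits the single-pass evaluation is what makes inference tractable.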
Rebuttal 1:

Rebuttal: We thank the reviewer for recommending our paper for acceptance and we really appreciate their valuable suggestions! Below we have addressed the questions one by one.

> more discussion of the impact of the hyperparameters

Following the reviewer's suggestions, we have provided a detailed ablation study regarding the hyperparameters, i.e., different components, different PC weights, and different safety thresholds $\theta$ in Table-reb 2, 3, 4 below. The results show that guardrail performance and efficiency are primarily influenced by tool calling and model-checking components, which are critical for detecting rule violations. Additionally, we observe that PC significantly reduces the FPR and remains stable even under large perturbations, demonstrating the robustness of our safety certification algorithm.

Table-reb 2: Ablation study on different components

| Component | ACC@G | FPR@G | FNR@G | ARR@R | Time |
|------------------|:-----:|:-----:|:-----:|:-----:|:-----:|
| w/o Model checking | 85.4 | 7.2 | 9.0 | 80.9 | 24.0 |
| w/o PC | 82.5 | 13.0 | 2.3 | 87.5 | 31.1 |
| w/o History cache | 90.4 | 5.6 | 6.5 | 87.5 | 42.0 |
| w/o Workflow memory | 86.0 | 7.0 | 8.5 | 82.0 | 38.2 |
| w/o Tool calling | 81.4 | 6.2 | 11.3 | 71.3 | 48.9 |

Table-reb 3: Ablation study on PC weights.

| Method | ACC@G | FPR@G | FNR@G | ARR@R |
|------------------|:-----:|:-----:|:-----:|:-----:|
| Learn from real data | 90.4 | 5.6 | 6.5 | 87.5 |
| Pseudo-learning | 88.0 | 6.7 | 8.5 | 87.5 |
| FOL ($\theta \to \infty$) | 82.5 | 13.0 | 2.3 | 87.5 |
| Perturbed ($\epsilon_\theta=10\%$) | 89.7 | 5.9 | 7.7 | 87.5 |
| Perturbed ($\epsilon_\theta=30\%$) | 87.4 | 7.0 | 9.0 | 87.5 |

Table-reb 4: Ablation study on the safety threshold $\theta$ of the barrier certificate. Larger $\theta$ indicates more critical safety needs.
| Threshold $\theta$ | ACC@G | FPR@G | FNR@G | ARR@R |
|------------------|:-----:|:-----:|:-----:|:-----:|
| $0$ | 87.4 | 5.0 | 8.2 | 87.5 |
| $-0.1$ | 84.0 | 4.2 | 12.5 | 87.5 |
| $0.1$ | 90.4 | 5.6 | 6.5 | 87.5 |
| $0.3$ | 86.9 | 7.4 | 4.2 | 87.5 |

> evaluation in terms of the false negative rate

We totally agree with the reviewer that false negatives are even worse than false positives, and we have already included the FNR metric in our above ablation results in Table-reb 2, 3, 4. We will also ensure to update all table results to include FNR in the revised version.

> Appendix A.3. and B appear to be empty/missing.

We enrich the supplementary material by presenting a list of more comprehensive examples in [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/rebuttal.pdf). Specifically, we provide (1) three examples of the extracted policy blocks and LTL rules in Figure-r. 4-6; (2) three examples of ASPM optimization in Figure-r. 7-9; (3) an example of the dataset sample in ShieldAgent-Bench in Figure-r. 10; (4) an end-to-end example of the shielding process in Figure-r. 11-16. We also fix the typos and provide a detailed algorithm for each optimization process in Algorithm 1, 2, 3 of [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/figure_updates.pdf). We will ensure to add corresponding references and fix missing references in the updated version of our paper.

> how often does your system fail because of a poorly tuned probabilistic circuit vs. a misformulated rule?

We present detailed ablation results in Tables reb-2, 3, and 4 above. Regarding the PC weights, the results indicate that the guardrail's performance remains comparably robust even under large perturbations (noise up to 30%), demonstrating the reliability of the probabilistic inference process within the robust ASPM policy model and the safety probability certification algorithm.
Additionally, since the PC learns soft weights for each rule from real-world safety-related datasets paired with ground-truth labels, the impact of any misformulated rule is naturally mitigated, as such rules would receive very low weights during learning.

> How do you set the hyperparameters for your method?

Thank you for raising this insightful question! In our evaluation, we set the hyperparameters based on ablation experiments conducted on a held-out validation set of approximately 400 examples from ShieldAgent-Bench. For instance, we set the safety threshold to 0.1, as it achieved the best trade-off between guardrail accuracy and false positive rate compared to other candidate values.

Once again, we sincerely thank the reviewer for their thoughtful feedback and for recognizing the contributions of our work toward ensuring the safe deployment of LLM agents!

---

Rebuttal Comment 1.1:

Comment: I have reviewed the authors' responses to the reviews. I am pleased to see the additional ablation studies, the inclusion of which in the paper will undoubtedly strengthen its conclusions. I will keep my current score. I do not feel confident raising it further as I do not believe myself sufficiently knowledgeable about other prior and concurrent work in this area, but I will reiterate that it seems like a very solid contribution to me and I would be happy to see the paper accepted.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer GLxp,

We are sincerely grateful for your recognition of ShieldAgent's contribution to ensuring the safety of LLM agents! Your valuable feedback has greatly helped us improve our work, and we deeply appreciate your support in recommending this paper for broader dissemination. Thank you very much!

Submission 16287 Authors
Summary: Main findings:

- The paper introduces SHIELDAGENT, a novel guardrail agent designed for LLM agents to explicitly ensure compliance with safety policies during sequential decision-making through automated probabilistic policy reasoning. It addresses significant vulnerabilities of LLMs to malicious instructions and attacks, which can lead to severe consequences such as privacy breaches and financial losses. Besides, this paper introduces a new benchmark, named SHIELDAGENT-WEB, which contains 2K safety-related instructions under two kinds of attacks.

Main results:

- This paper tests their algorithm on two benchmarks: SHIELDAGENT-WEB and ST-WEBAGENTBENCH. SHIELDAGENT achieves SOTA results on both benchmarks under affordable inference costs.

Algorithmic ideas:

- SHIELDAGENT consists of constructing a structured safety policy model and a shielding model.
  - Structured safety policy model: (1) uses GPT-4o to exhaust all the potential policies from the provided organization handbook, which sets restrictions or guidelines based on four elements: definition, scope, policy description, and reference; (2) converts obtained policies to LTL structures with GPT-4o; (3) iteratively refines the verifiability of the policies and merges similar ones together; (4) trains the probabilistic circuit for policies and clusters similar circuits.
  - Shield model: (1) selects the most related circuits and computes the safety probability; (2) shields the action with a control barrier function (CBF) based on the safety probability; (3) provides the safety label, explanations, or violated rules based on the CBF.

Claims And Evidence: One claim assumes there exists a policy document to do policy extraction. One problem may be how to deal with the case when there is no such document.

Methods And Evaluation Criteria:

1. Yes, this paper tests two datasets. One is created by themselves, and the other is an existing one.
2. SHIELDAGENT outperforms existing baselines in both datasets.
3.
However, all benchmarks are related to web services. It would be better to add results in some other field.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The “Direct” baseline seems to have a smaller FPR with low accuracy in Table 2. The main reason may be that “Direct” tends to predict more “0”, so it may have a high False Negative Rate (FNR). It is better to also show the FNR in the table.

Supplementary Material: Yes. However, the supplementary material is too simple and incomplete. For example, Appendix A.3 and Appendix B have no content, with only titles. If the authors think that their main content is detailed enough, please delete these two parts. Otherwise, please finish these two parts.

Relation To Broader Scientific Literature: 1. Safety: This paper introduces two general kinds of attacks, which are agent-based attacks and environment-based attacks. One contribution is that this paper considers both attacks in the design of SHIELDAGENT. 2. Guardrails: Prior works mainly focus on guarding LLM models in natural languages or images rather than decision-making processes, like LlamaGuard, LlavaGuard, and SafeWatch. One method focuses on guarding LLM agents, named GuardAgent, which mainly relies on the model's reasoning ability instead of explicitly shielding the policies of the target LLM agent. Thus the main contribution of SHIELDAGENT is to achieve explicit safety for LLM agents in making sequential decisions.

Essential References Not Discussed: No; the key contribution is clearly stated, with related works correctly cited.

Other Strengths And Weaknesses:

1. Strengths

- This paper designs a novel pipeline to construct a safety policy model, which can explicitly shield LLM agents.
- This paper provides a comprehensive benchmark, which contains 2K safety-related instructions across seven safety categories and six web environments under two kinds of attacks.

2. Weaknesses

- This paper only conducts experiments on Web services without other conditions.
- The appendix part seems to be incomplete.

Other Comments Or Suggestions:

1. I don't think GuardAgent, in Section 2.2, focuses solely on textual space, since GuardAgent also does experiments on a web service benchmark. However, I agree with the authors for the latter part, where GuardAgent just relies on internal knowledge and reasoning ability to guard safety.
2. In lines 213-214, Page 4, please correct the presentation “Prompts defined in Appendix C and Appendix C, respectively”.
3. In Table 2, the smallest overall FPR comes from the “Direct” baseline, but the bold style is on SHIELDAGENT.
4. In Sections 3.3 and 3.4, how to generate the action shield plan and how to generate the shielding code are not explained, nor are there any examples.
5. The Appendix part seems to be incomplete. Some titles do not have any content.

Questions For Authors:

Q1: How is pseudo training done? Is pseudo training high-time-cost, since the agent trajectories need to be generated?

Q2: How are the action shield plan and the shielding code generated? Are there any explanations or any examples?

Q3: What is the action set in Figure 1? Is it the “Shielding Operations” part in Section 3.3?

Q4: Are the shielding operations and the toolbox general or task-specific? Are they the same for different kinds of tasks, or do they need to be fine-tuned or designed for other tasks?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
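The policy-to-LTL pipeline summarized in this review can be made concrete with a small illustrative check of a "globally" safety rule over a finite agent trace. The rule, predicate names, and trace below are hypothetical examples, not drawn from the paper, which evaluates such rules probabilistically rather than as hard booleans.

```python
# Illustrative check of a simple LTL-style safety rule over a finite agent
# trace: G(request_sensitive_data -> user_consented), i.e. "globally, whenever
# the agent requests sensitive data, the user must have consented". All
# predicates, actions, and traces here are hypothetical.

def holds_globally(trace, antecedent, consequent):
    """True iff the consequent holds at every step where the antecedent holds."""
    return all(consequent(step) for step in trace if antecedent(step))

compliant_trace = [
    {"action": "open_page", "consent": False},
    {"action": "request_sensitive_data", "consent": True},
    {"action": "submit_form", "consent": True},
]

violating_trace = [
    {"action": "request_sensitive_data", "consent": False},
]

def rule(trace):
    # G(request_sensitive_data -> consent) over a finite trace.
    return holds_globally(
        trace,
        antecedent=lambda s: s["action"] == "request_sensitive_data",
        consequent=lambda s: s["consent"],
    )

print(rule(compliant_trace))  # True
print(rule(violating_trace))  # False
```

In the system described by the review, each such rule would additionally carry a learned weight inside a probabilistic circuit rather than producing a hard pass/fail verdict.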
Rebuttal 1:

Rebuttal: We sincerely appreciate the reviewer's recognition of our paper's novel contribution and thoughtful suggestions!

> how to deal with the case when there is not such document.

Thank you for this insightful question! Instead of strictly requiring a document for policy extraction, we mainly aim to achieve explicit agent safety compliance against regulations and specifications defined in **natural language**, which reflects the vast majority of real-world cases. When no explicit documents are available, ShieldAgent can also leverage prompt engineering to extract implicit rules directly from data samples, as long as they can be described in natural language.

> add results in some other field.

To demonstrate the strong generalization capabilities of ShieldAgent in diverse agent tasks other than web services, we further evaluate it on two additional benchmarks, AgentHarm [1], which is a comprehensive agent risk benchmark beyond web agent tasks, and VWA-Adv [2], which involves diverse risk scenarios of GUI-based agents. We provide the detailed evaluation result of AgentHarm in Table-reb 5 below and defer the evaluation of VWA-Adv to Table-r. 2 of [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/rebuttal.pdf) due to the space limit. The results demonstrate that ShieldAgent can effectively generalize and provide robust guardrails across different agent types, environments, and tasks.

Table-reb 5: Guardrail performance comparison on **AgentHarm** across 11 harm categories.

| Guardrail | Fraud | Cybercrime | Self-harm | Harassment | Sexual | Copyright | Drugs | Disinfo. | Hate | Violence | Terrorism | Overall |
|-------------|:-------:|:---------:|:--------:|:---------:|:------:|:--------:|:------:|:-------:|:------:|:-------:|:--------:|:-------:|
| Direct | 75.7 / 5.2 | 82.4 / **3.6** | 76.5 / **3.6** | 80.6 / 3.8 | 82.2 / **3.8** | 72.0 / 3.9 | **82.0** / 7.0 | 76.9 / 4.1 | 71.0 / **3.5** | 75.8 / 4.4 | 71.1 / 5.1 | 76.9 / 4.4 |
| GuardAgent | 82.6 / 4.7 | 66.1 / 4.0 | 75.1 / 4.5 | 75.9 / 3.4 | 82.1 / 6.3 | 69.6 / 4.3 | 76.6 / **3.8** | 80.1 / **3.2** | 77.7 / 3.7 | **92.4** / **3.3** | 83.9 / 4.2 | 78.4 / 4.1 |
| **ShieldAgent** | **89.1** / **4.6** | **92.9** / 4.9 | **82.5** / 3.9 | **92.4** / **2.5** | **94.0** / 4.0 | **89.0** / **2.1** | 80.4 / 5.5 | **81.9** / 4.2 | **81.7** / 3.8 | 83.9 / 4.7 | **88.3** / **3.2** | **86.9** / **3.9** |

> show the FNR in table 2

We totally agree with the reviewer that the direct baseline achieves a lower FPR by predicting more "0". We will ensure to update all table results to include FNR in the revised version.

> the supplementary material is too simple

We enrich the supplementary material by presenting a list of more comprehensive examples in [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/rebuttal.pdf). Specifically, we provide (1) three examples of the extracted policy blocks and LTL rules in Figure-r. 4-6; (2) three examples of ASPM optimization in Figure-r. 7-9; (3) an example of the dataset sample in ShieldAgent-Bench in Figure-r. 10; (4) an end-to-end example of the shielding process in Figure-r. 11-16. We also fix the typos and provide a detailed algorithm for each optimization process in Algorithm 1, 2, 3 of [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/figure_updates.pdf). We will ensure to add corresponding references and fix missing references in the updated version of our paper.

> pipeline not explained with any examples.

We provide an overview of the overall shielding procedure in Figure-r. 11, and provide an end-to-end example of the shielding plan generation process in Figure-r. 12-14 in [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/rebuttal.pdf). Additionally, we provide a detailed example illustrating the shielding code generation and verification process in Figure-r. 15, and also illustrate the safety probability inference process in Figure-r. 16.

> How to do pseudo training?

We follow [3] to conduct pseudo training, which actually lowers the time and data costs by leveraging some heuristics to simulate the labels of the training data. However, it slightly compromises performance, as indicated by the results in Table-reb 3 in our response to reviewer oQJ8.

> What is the action set in Figure 1?

Yes, the action set contains all the shielding operations, which can be further extended to meet diverse guardrail requirements.

> are the shielding operations and the toolbox general or task-specific?

While in our experiments we leverage the same built-in tools to ensure a fair and controllable evaluation, the tool library could be easily extended to include diverse new tools and specialized operations, since our agent is built following the MCP protocol (as shown in Figure-r. 11-16).

[3] Kang, M., & Li, B. R2-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning. ICLR 2025

---

Rebuttal Comment 1.1:

Comment: The authors addressed my concerns and also provided additional experiments. I am glad to raise my recommendation to "accept".

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer sVmH,

We are very glad that our response has addressed your concerns! We sincerely appreciate your recommendation to accept our work, and we are excited to contribute ShieldAgent to the community's efforts toward building safer and more reliable LLM agents. Thank you!

Sincerely, Authors of Submission 16287
Summary: This paper presents ShieldAgent, a new technique for determining whether LLM outputs conform to a given policy. ShieldAgent starts by using an LLM to formalize a policy document and produce a set of rules expressed in LTL. These LTL formulae are embedded into probabilistic circuits which can be used to efficiently estimate the safety of potential actions. These probabilistic circuits can also be used to construct a barrier function which ensures the safety of an LLM agent over a long sequence of decisions. The paper proposes a new dataset designed to test the efficacy of guardrails for LLM agents. On both this new dataset and a preexisting dataset, ShieldAgent outperforms prior work.

Claims And Evidence: The main claim of the paper, that ShieldAgent works better as a guardrail for LLM agents, is well supported with experiments. Across several different risk categories, ShieldAgent is almost universally more accurate than prior methods at identifying potentially unsafe behaviors. The practical impact of this accuracy is shown with additional experiments that use ShieldAgent on a set of internet interaction tasks. These experiments show that ShieldAgent results in safer behavior compared to prior guardrails.

Methods And Evaluation Criteria: The datasets used for evaluation are appropriate and include both an established dataset and a novel dataset designed for the use case targeted by the proposed technique. The experiments include both prior work in this setting as well as a few plausible baseline techniques to provide evidence that ShieldAgent is a useful approach.

Theoretical Claims: This paper does not make any novel theoretical claims.

Experimental Designs Or Analyses: The experiments are generally well-designed. They could be improved with the inclusion of some ablation studies. ShieldAgent includes so many interacting components that it is difficult for me to tell what the impacts of different pieces of the system are.
For example, it would be helpful to understand the costs of omitting the model checker or the KV-cache.

Supplementary Material: Appendix C includes the used prompts, which is useful for reproducibility. However, the supplementary material is unfinished, with very little present in Appendix A and two empty sections (A.3 and B).

Relation To Broader Scientific Literature: The related work section situates this work well within related literature. Because LLMs are often used sequentially, it is important to ensure they are safe even when used repeatedly to generate a sequence of decisions.

Essential References Not Discussed: To the best of my knowledge, the authors discuss all essential relevant literature.

Other Strengths And Weaknesses: The main strength of the paper in my opinion is the experimental results. They are extensive and show an impressive improvement over prior work in terms of accurately classifying potentially unsafe behavior. The main weakness of the paper which is not captured in the other fields of this review is the number of interacting components. This is not inherently a problem, but it makes it difficult to include enough detail to really understand the proposed technique in the paper. See the "other comments or suggestions" section below for a few specific places where I felt this detracted from the paper.

Other Comments Or Suggestions: The paper mentions that many external tools are used by ShieldAgent but gives little detail about what those tools are or how they are used. For example, Section 3.3 says that ShieldAgent uses a formal verification component that "conducts model checking to rigorously validate rule compliance". But model checking can refer to quite a few different techniques, so it would be useful to include some details on how that model checking is conducted. As another example, 3.2.3 describes "probabilistic safety inference" at a very high level, but it's not clear to me how such a computation would actually be carried out.
At the same time, there are some places in the paper where space is used to explain fairly standard concepts. For example, in 3.2.3, there is a definition of the binary cross-entropy loss. In my opinion, this definition is so standard that it could be removed or moved to the appendix in order to save some space for the more novel details of this paper.

Questions For Authors: I'm not quite sure I understand the definition of $P_S(a \mid o)$. As I understand it, it is related to the probability of picking action $a$ (denoted $P(a \mid o)$) as well as "the safety probability when all rules are satisfied", $P_{ub}(o)$. First, why is it the case that when all rules are satisfied, the probability of safety is not one? Second, I'm not sure why the probability of safety is defined with respect to the probability of picking action $a$ rather than the probability that the system remains safe after taking action $a$.

Finally, could the authors elaborate on the soundness of the barrier certificate? It seems to me that the second requirement, $P_S(a \mid o) - P_S(o) \le \epsilon_{tol}$, allows the probability of unsafe behavior to increase by $\epsilon_{tol}$ with each time step. Doesn't this allow the system to behave unsafely over long time horizons?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We really appreciate the reviewer's valuable suggestions and we have accordingly updated the paper to include additional results and examples in [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/rebuttal.pdf).

> inclusion of some ablation studies

We provide a detailed ablation study regarding the different components, different PC weights, and different safety thresholds in Table-reb 2, 3, 4 below. The results show that guardrail performance and efficiency are primarily influenced by tool calling and model-checking components, which are critical for detecting rule violations. Additionally, we observe that PC significantly reduces the FPR and remains stable even under large perturbations, demonstrating the robustness of our safety certification algorithm.

Table-reb 2: Ablation study on different components

| Component | ACC@G | FPR@G | FNR@G | ARR@R | Time |
|------------------|:-----:|:-----:|:-----:|:-----:|:-----:|
| w/o Model checking | 85.4 | 7.2 | 9.0 | 80.9 | 24.0 |
| w/o PC | 82.5 | 13.0 | 2.3 | 87.5 | 31.1 |
| w/o History cache | 90.4 | 5.6 | 6.5 | 87.5 | 42.0 |
| w/o Workflow memory | 86.0 | 7.0 | 8.5 | 82.0 | 38.2 |
| w/o Tool calling | 81.4 | 6.2 | 11.3 | 71.3 | 48.9 |

Table-reb 3: Ablation study on PC weights.

| Method | ACC@G | FPR@G | FNR@G | ARR@R |
|------------------|:-----:|:-----:|:-----:|:-----:|
| Learn from real data | 90.4 | 5.6 | 6.5 | 87.5 |
| Pseudo-learning | 88.0 | 6.7 | 8.5 | 87.5 |
| FOL ($\theta \to \infty$) | 82.5 | 13.0 | 2.3 | 87.5 |
| Perturbed ($\epsilon_\theta=10\%$) | 89.7 | 5.9 | 7.7 | 87.5 |
| Perturbed ($\epsilon_\theta=30\%$) | 87.4 | 7.0 | 9.0 | 87.5 |

Table-reb 4: Ablation study on the safety threshold $\theta$ of the barrier certificate. Larger $\theta$ indicates more critical safety needs.
| Threshold $\theta$ | ACC@G | FPR@G | FNR@G | ARR@R |
|------------------|:-----:|:-----:|:-----:|:-----:|
| $0$ | 87.4 | 5.0 | 8.2 | 87.5 |
| $-0.1$ | 84.0 | 4.2 | 12.5 | 87.5 |
| $0.1$ | 90.4 | 5.6 | 6.5 | 87.5 |
| $0.3$ | 86.9 | 7.4 | 4.2 | 87.5 |

> the supplementary material is unfinished

We enrich the supplementary material by presenting a list of more comprehensive examples in [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/rebuttal.pdf). Specifically, we provide (1) three examples of the extracted policy blocks and LTL rules in Figure-r. 4-6; (2) three examples of ASPM optimization in Figure-r. 7-9; (3) an example of the dataset sample in ShieldAgent-Bench in Figure-r. 10; (4) an end-to-end example of the shielding process in Figure-r. 11-16. We also fix the typos and provide a detailed algorithm for each optimization process in Algorithm 1, 2, 3 of [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/figure_updates.pdf).

> tools and examples used

We provide a comprehensive example of the tools used by ShieldAgent based on the MCP protocol in Figure-r. 11 and explain step-by-step the different tool-calling purposes within the shielding plan in Figure-r. 11-16. Specifically, we leverage Prover9/Mace4 as the model checking tool and provide a detailed example of it in Figure-r. 15. We also illustrate the safety inference process in Figure-r. 16.

> paper writing arrangement

We will ensure to follow the reviewer's suggestion and update the paper presentation to focus more on delivering the key contributions of our method.

> clarification on $P_S(a\mid o)$

We adopt an MLN to model safety probability w.r.t. explicit rule compliance, i.e., $P(\mu)=\frac{1}{Z}\exp(\sum_{r \in R}\theta_r [\mu \sim r])$, where $\mu$ is a possible assignment of predicates, and $Z$ is the partition function which sums up all possible assignments.
Thus, even if all rules are satisfied, the probability of safety only reaches an upper bound instead of approaching one. And since the action predicate $a$ is also part of the MLN state space, $P(a \mid o)$ essentially denotes the probability that the system remains safe after taking action $a$.

> clarification on barrier certificate

We thank the reviewer for noting this typo and we apologize for the confusion. The condition $P_s(a\mid o)-P_s(o)\geq \epsilon$ only applies when the safety probability $P_s(o)$ is smaller than a prescribed threshold, i.e., outside the tolerable safety region. In such cases, the condition enforces that the safety probability must increase by at least $\epsilon$ at each step. Therefore, rather than allowing the probability of unsafe behavior to accumulate, this condition ensures that the system progressively becomes safer when it is in an unsafe state.
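The MLN-style probability and threshold check described in this exchange can be sketched in a few lines. The two predicates, the rules, the weights, and the threshold below are all toy assumptions for illustration; they are not the paper's actual model, which operates over a much larger predicate space via probabilistic circuits.

```python
import itertools
import math

# Minimal Markov-logic-style safety probability, following the form
# P(mu) = (1/Z) * exp(sum_r theta_r * [mu satisfies rule r]).
# Predicates, rules, and weights are toy assumptions, not from the paper.
predicates = ["consent", "action_safe"]
rules = [
    # hypothetical rule: a safe action requires user consent (weight 1.5)
    (1.5, lambda mu: (not mu["action_safe"]) or mu["consent"]),
    # hypothetical weak prior that the action is safe (weight 0.8)
    (0.8, lambda mu: mu["action_safe"]),
]

def unnormalized(mu):
    return math.exp(sum(w for w, r in rules if r(mu)))

assignments = [dict(zip(predicates, vals))
               for vals in itertools.product([False, True], repeat=len(predicates))]
Z = sum(unnormalized(mu) for mu in assignments)

def p_safe(evidence):
    """Marginal probability of action_safe=True given fixed evidence predicates."""
    consistent = [mu for mu in assignments
                  if all(mu[k] == v for k, v in evidence.items())]
    num = sum(unnormalized(mu) for mu in consistent if mu["action_safe"])
    den = sum(unnormalized(mu) for mu in consistent)
    return num / den

# Threshold-style certification: accept the action only if its safety
# probability clears a prescribed bound (toy threshold, not the paper's).
threshold = 0.5
print(p_safe({"consent": True}) >= threshold)   # True
print(p_safe({"consent": False}) >= threshold)  # False
```

Brute-force enumeration over assignments is exponential in the number of predicates, which is precisely why the paper (per the rebuttal) compiles the rules into per-action probabilistic circuits instead.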
Summary: This paper proposes SHIELDAGENT, an LLM-based guardrail agent, to enforce explicit safety policy compliance of the action sequences of other LLM agents via automated probabilistic reasoning. SHIELDAGENT constructs an action-based probabilistic safety model (APSM) by extracting verifiable rules from policy documents, refining and clustering them into action-conditioned probabilistic circuits, and learning rule weights. During inference, it localizes relevant rule circuits, generates shielding plans, and performs probabilistic inference to assign safety labels and report violations. To evaluate it, the SHIELDAGENT-WEB dataset is introduced, which contains 2K safety-related instructions across various risk categories and web environments with paired risky trajectories. Experiments show that SHIELDAGENT achieves state-of-the-art performance on SHIELDAGENT-WEB and existing benchmarks.

Claims And Evidence: The methodology and evaluations presented in this paper lack important running examples for readers to understand. Please refer to Weaknesses for more details.

Methods And Evaluation Criteria: The methods proposed in the paper are relevant to safeguarding LLM agents, which face the challenge of security threats in simulated environments. SHIELDAGENT's approach of constructing an action-based probabilistic safety model (APSM) seems to be a working solution compared with traditional static approaches, and the four key shielding operations of SHIELDAGENT work in concert with the APSM. Please refer to Weaknesses for more details.

Theoretical Claims: In terms of the safety certification, the use of control barrier function (CBF)-inspired conditions $|P_s(a \mid o)| < \epsilon_{tol}$ and $P_s(a \mid o) - P_s(o) < \epsilon_{tol}$ to determine action safety seems to be a valid approach. Despite the multiple equations used to model the policy model decision process, it seems to be a linear classifier based on the rules extracted.
Since no rigorous proof or theory has been provided in the paper, I take this certification as empirical.

Experimental Designs Or Analyses: In this paper, the authors constructed a dataset, named ShieldAgent-Web, consisting of AgentPoison and AdvWeb attacks to evaluate the robustness of the proposed ShieldAgent framework. However, no sample in the dataset has been given in the paper or the supplementary material, making it hard to understand to what extent the proposed dataset can represent threats against agent systems in real-world applications. Please refer to Weaknesses for more details.

Supplementary Material: The pseudo code of the APSM structure optimization algorithm and prompt templates have been provided. But the details can be further refined, e.g., the missing references to equations in the algorithm and other typos.

Relation To Broader Scientific Literature:

- The composition of the probabilistic safety certification module highly resembles [1], [2]. It is suggested that the authors discuss the relationship to these works and the key contribution of this work.

[1] Knowledge Enhanced Machine Learning Pipeline Against Diverse Adversarial Attacks. ICML 2021.
[2] Improving Certified Robustness Via Statistical Learning with Logical Reasoning. NeurIPS 2022.

Essential References Not Discussed: Please see relation to broader literature above.

Other Strengths And Weaknesses:

### Strengths

- SHIELDAGENT offers a comprehensive approach to safeguarding LLM agents.
- The SHIELDAGENT-WEB dataset is proposed for evaluation.

### Weaknesses

- The presentation of this paper can be improved.
- In terms of the methodology, the overall framework is a RAG-based linear classifier which uses safety rules extracted by LLMs to ensure safe operations of LLM-based agents.
The authors put much effort into modeling the linear classifier with the term "Action-based Probabilistic Safety Model", but did not provide essential examples of the rules or of the verification and pruning process. It is also not clear how the policy optimization contributes to the safety performance.
- In terms of the figures, for instance, there are so many elements in Figure 1 that the font of each text label is too small to recognize. The caption of the figure also does not provide enough substantial information to help me understand what operations are to be performed.
- The evaluation of the paper is confusing, maybe questionable.
  - No sample from the ShieldAgent-Web dataset has been provided. It is hard to imagine how realistic the attacks considered in the paper are. It is suggested that the dataset should be open-sourced, or at least typical samples should be provided for reviewing.
  - Although SHIELDAGENT performs well on the datasets used in the experiments, its generalization to new and unseen scenarios may be limited. Evaluation on diverse scenarios is expected.

Other Comments Or Suggestions: Please refer to weaknesses and questions.

Questions For Authors:

- Could you provide a more detailed explanation of Figure 1 (Top)? The text states that rules are clustered in the Redundancy Pruning process, but the graph shows that it is clustering the conditional predicates? Moreover, what does the arrow from one grey circle to another represent? Does it mean the split of a rule into two rules, or a logical connection between the two rules? It seems that its meaning in Redundancy Pruning and Action-based Probabilistic Circuit is different. A clearer explanation of the figure and these questions would be very helpful.
- In the Probabilistic-Constrained Safety Certification part, is the condition $P_s(a \mid o) - P_s(o) < \epsilon_{tol}$ a necessary and sufficient condition for action safety? Could you provide more theoretical basis or experimental evidence?
- In the baselines, why does Rule Traverse exhibit a high FPR? Sequentially verifying rules one by one would seem to yield a low FPR. Could you explain this baseline in more detail? - As stated in Sec. 3.2.1, the policy blocks are extracted by GPT-4o. How do the authors ensure that rules extracted by LLMs align with human values? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! We have followed your suggestions and improved our paper to incorporate more examples and additional experiment results. > lack examples We provide a list of more comprehensive examples in [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/rebuttal.pdf). Specifically, we provide (1) three examples of the extracted policy blocks and LTL rules in Figure-r. 4-6; (2) three examples of ASPM optimization in Figure-r. 7-9; (3) an example of the dataset sample in ShieldAgent-Bench in Figure-r. 10; (4) an overview of the overall shielding procedure in Figure-r. 11; (5) an end-to-end example of the shielding plan generation process in Figure-r. 12-16. > no sample of the dataset We include an example from ShieldAgent-Bench in Figure-r.10 and a comprehensive comparison with existing datasets in Table-r. 4. To ensure that our dataset represents agent threats in real-world applications, we construct it by attacking real-world SOTA agents with practical security-related targets to elicit unsafe trajectories and conduct thorough human review to ensure quality. To ensure transparency and ease of verification, we manually annotate the potential safety violations of each agent action step. > missing references in the algorithm and typos. We have updated the paper to fix all missing references and typos, and also provided the updated versions in Algorithm 1, 2, 3 of [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/figure_updates.pdf). 
> key contribution of the probabilistic safety certification module Our key contribution on top of [1,2] is that we are the first to successfully (1) scale up the probabilistic verification process for a very large predicate space by clustering it by different actions and constructing local action-based circuits; (2) extend the static knowledge rules in previous works to temporal logics and integrate them with barrier certificates to facilitate verifying agent action sequences. > examples of the rules and the verification and pruning process. We present detailed statistics of the number of rules, predicates, and their vagueness score (evaluated by GPT-4o-as-judge) in Table-r. 1 and Figure-r. 1-3; examples of rules during the optimization process are presented in Figure-r. 7-9. > Update Figure 1 and caption We have updated Figure 1 and its captions in Figure-r. 1 of [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/figure_updates.pdf). Due to limited space, we defer a more detailed illustration of the shielding process to Figure-r. 11 and explain step-by-step the shielding plan generation process in Figure-r. 11-16. > evaluation on diverse scenarios is expected. To demonstrate the strong generalization capabilities of ShieldAgent to diverse agent tasks and unseen scenarios, we further evaluate it on two additional benchmarks, i.e., VWA-Adv [1] (diverse risks of vision-based web agents) and AgentHarm [2], a comprehensive agent risk benchmark (not limited to web agents) that involves 11 unseen scenarios not present in our dataset. We provide a summary of the results in Table-reb 1 below and defer the detailed results to Table-r. 2 and Table-r. 3 of [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/figure_updates.pdf). The results demonstrate that ShieldAgent can generalize well to unseen domains and provide robust guardrails across different agent types, environments, and tasks. 
Table-reb 1: Comparison of guardrails on VWA and AgentHarm. | Guardrail | VWA-Adv (ACC ↑ / FPR ↓) | AgentHarm (ACC ↑ / FPR ↓) | |--------------|:----------------------:|:------------------------:| | Direct | 90.3 / 4.2 | 76.9 / 4.4 | | GuardAgent | 89.9 / 4.4 | 78.4 / 4.1 | | **ShieldAgent** | **94.1 / 3.4** | **86.9 / 3.9** | > Explain Figure 1 top The grey arrows in Figure 1 denote logical connections. VR splits a compound rule into two atomic sub-rules, and RP clusters the rule space by semantic similarity to reduce redundancy. After optimization, ASPM represents the rules as a Laplacian matrix and applies spectral clustering to obtain rule circuits. > is the condition necessary and sufficient The condition is a sufficient condition to asymptotically guarantee safety based on the MDP assumption. We will include a detailed proof in the updated version of the paper. > high FPR of Rule Traverse? We found that verifying rules one by one usually makes the guardrail over-cautious and thus induces a higher FPR. > rules align with human values? We guarantee the accuracy and quality of the rules by manually validating them w.r.t. the document source, which is preserved during optimization. [1] Wu, C. et al. Dissecting Adversarial Robustness of Multimodal LM Agents. ICLR 2025 [2] Andriushchenko, M., et al. AgentHarm: A benchmark for measuring harmfulness of LLM agents. arXiv preprint --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response. After reading the rebuttal, some of my concerns have been addressed. However, I still have the following concerns regarding the paper. Firstly, the presentation of the original paper can be largely improved. The overall working mechanism of the proposed ShieldAgent framework remained unclear until I reviewed the samples provided in the rebuttal materials. It is suggested that the theories proposed in the paper should guarantee the safety of the procedure instead of causing difficulties for readers' understanding. 
Secondly, the quality of the extracted rules is doubtful. No public access to the generated rule database is provided in the paper or the rebuttal, beyond several examples. As stated in the rebuttal, no quantitative evaluation has been conducted regarding the quality and alignment of the rules with human values either. The authors claim that one of the main contributions of the paper is scaling up the verification process, but they manually validate the rules. For the reasons above, I will keep my score. I hope the ACs take all the reviewers' perspectives into account and make the final recommendation. --- Reply to Comment 1.1.1: Comment: Dear Reviewer tHcj, We are glad that our response has addressed your previous concerns, and we would like to provide additional details in light of your follow-up questions! > presentation of the original paper Thank you for your suggestions, following which we have largely improved the paper's presentation and summarized our updates as follows (detailed in [[this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/rebuttal.pdf)]). We will integrate all discussions in the rebuttal into the final version. + **Additional Case Studies**: We added various running examples in the appendix to explain each component of ShieldAgent: a) three examples of the extracted policy blocks and LTL rules in Figure-r. 4-6; b) three examples of ASPM optimization in Figure-r. 7-9; c) an overview of the overall shielding procedure in Figure-r. 11; d) an end-to-end example of the shielding plan in Figure-r. 12-16. + **Quantitative Analysis of Policy Optimization**: We provided additional statistics of the rule extraction process in Table-r. 1 and Figure-r. 1-3, as well as a detailed human alignment analysis in Table-reb 2-3 below. + **Dataset Quality Demonstration**: We compared ShieldAgent-Bench with previous works in Table-r. 4 and provided an example in Figure-r. 10, showing that it effectively captures the security threats of real-world agent applications. + **Clarified Methodology**: We further clarified our safety certification methodology and provided a rigorous theoretical proof to demonstrate that it effectively guarantees agent safety via barrier certificates. Due to rebuttal constraints, we will disclose this proof in the camera-ready version. + **Revised Figures and Corrections**: We updated Figure 1 and its captions, fixed all typos and references, and provided detailed pseudocode for each optimization process in Algorithm 1, 2, 3 in [this link](https://anonymous.4open.science/r/shieldagent-icml-rebuttal-30B0/figure_updates.pdf). + **Additional Evaluation**: We evaluated ShieldAgent on two diverse agent safety benchmarks, VWA-Adv and AgentHarm, and provided the results in Table-r. 2-3. We sincerely hope these updates have enhanced the clarity of our paper and can help readers better understand our work. > scalability of rule extraction We really appreciate the reviewer's thoughtful question! We would like to first clarify that since the number of policies is finite (e.g. GitLab Policies, OpenAI Use Policies, EU AI Act), our automatic rule extraction pipeline requires only a **one-time, cost-efficient human-in-the-loop verification process which can be performed completely offline, once and for all for each policy**. And the resulting rule database can then be applied generically across various agents, enabling us to further scale up the verification process through the proposed automatic action tree-based probabilistic circuit and retrieval-based verification method. Following the reviewer's suggestions, we would like to also provide additional evidence that our automatic rule extraction pipeline is **scalable in terms of human effort** and capable of producing rules that **strongly align with human intent**. 
As shown in the verification statistics in Table-reb 2, **on average fewer than 10% of the rules extracted by our pipeline require manual correction**, while the remaining 90% already align well with human values and require no modification, keeping total human verification time under 1 hour across different environments. Compared to traditional policy-based guardrails that require lengthy human curation, our automatic rule extraction pipeline can efficiently produce accurate, large-scale, and human-aligned rules, substantially reducing human effort and achieving scalable rule extraction. Table-reb 2: Human verification statistics for extracted rules. We report the total number of extracted rules, manually corrected rules, the update ratio, and the total human verification time. | Environment | #Total Rules | #Updated Rules | Update Ratio (\%) | Human Working Hours | |--------------|:---------------:|:-------------------:|:-------------------:|:-------------------:| | Shopping | 240 | 27 | 11.3 | $\leq 1h$ | | CMS | 120 | 10 | 8.3 | $\leq 0.5h$ | | Reddit | 178 | 13 | 7.3 | $\leq 1h$ | | GitLab | 198 | 23 | 11.6 | $\leq 1h$ | | Maps | 104 | 5 | 4.8 | $\leq 0.5h$ | | SuiteCRM | 240 | 12 | 5.0 | $\leq 1h$ | Once again, we sincerely thank the reviewer for all the insightful feedback, and we would be truly grateful if you would kindly consider revisiting the score. As also recommended by other reviewers, we believe ShieldAgent could contribute meaningfully to the community in building safer and more capable agents. Thank you!
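For concreteness, the tolerance condition debated in this thread, $P_s(a \mid o) - P_s(o) < \epsilon_{tol}$, can be transcribed as a one-line check. The sketch below uses illustrative placeholder probabilities and function names; it is not the authors' implementation, which derives $P_s$ from an action-based probabilistic circuit over logical rules:

```python
def certify_action(p_safe_given_action: float, p_safe_baseline: float,
                   eps_tol: float) -> bool:
    """Direct transcription of the certification test
    P_s(a|o) - P_s(o) < eps_tol discussed in the review thread.
    Both probability arguments are illustrative placeholders here."""
    return (p_safe_given_action - p_safe_baseline) < eps_tol

# Illustrative numbers only: an action whose estimated safety probability
# stays within the tolerance passes; a large deviation fails.
print(certify_action(0.93, 0.95, eps_tol=0.05))  # True: within tolerance
print(certify_action(0.99, 0.80, eps_tol=0.05))  # False: exceeds tolerance
```

As the review notes, whether this inequality is also a *necessary* condition for safety is a separate question; the rebuttal claims only sufficiency under the MDP assumption.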
Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions
Accept (oral)
Summary: The authors first show masked diffusion models (MDMs) indeed train on computationally intractable subproblems compared to their autoregressive counterparts. Then an adaptive Top-K probability margin inference strategy is proposed to sidestep hard subproblems that are not properly learned in the training time. The proposed inference strategy has proven to be effective in the Sudoku puzzle task. Claims And Evidence: The claim of "Complexity at training time" of MDMs is well shown both theoretically and empirically on text data. The claim of "Planning for the best" of MDMs is less persuasive. For instance, the Top-K probability margin inference strategy proves effective for Sudoku puzzles but does not work for the Zebra puzzle, and its performance on text data remains unknown. Methods And Evaluation Criteria: The evaluation of the imbalanced subproblems during the training time of MDMs and the proposed adaptive inference strategy makes sense in general. Theoretical Claims: I reviewed the proofs in Section 2 and they seemed correct to me. Experimental Designs Or Analyses: The experimental designs in Sections 3 and 4 are mostly correct. Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The findings in the paper may be useful in building the next-generation LLMs based on the diffusion model. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - The analysis of why "MDMs train on hard problems" is sufficient and helps those unfamiliar with MDMs understand their characteristics. - The Top-K probability margin inference strategy is intuitive and effective on Sudoku. Weaknesses: - The proposed Top-K probability margin inference strategy proves effective only for Sudoku puzzles but not for text data, which raises concerns about the technical contribution to the general text domain and diffusion LLMs. Other Comments Or Suggestions: N.A. 
Questions For Authors: Regarding adaptive Top-K probability margins: As it is stated "When multiple values have similar probabilities at a position, Top-K probability margin will provide a better estimate of the uncertainty of a position". Does this mean the Top-K probability margin strategy is only useful when there are multiple possible values for a given position? If so, why is the Top-K probability margin so effective for Sudoku as each position has only one solution in a standard Sudoku puzzle. I would appreciate further clarification on this. Ethics Expertise Needed: ['Other expertise'] Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s valuable questions and comments. Below, we address the main concerns. ## (1) Further experiments on text data In response to the reviewer’s comments, we ran additional experiments and found that **Top-k margin indeed outperforms Top-k on challenging code and math tasks**. Specifically, to examine the effect of different inference strategies on text evaluation tasks, we adapted LLaDA, the 8B MDM model from [1]. We compare three strategies: **Vanilla, Top-k, and Top-k Margin**. The results are presented below. | **Sampler** | HumanEval-Single | HumanEval-Multi | HumanEval-Split | Math | MMLU-Pro | ROCStories | |------------------|-------------------------|------------------------|------------------------|--------|----------|-------------| | **Vanilla** | 31.8% | 16.5% | 14.2% | 28.5% | 33.2% | 21.23% | | **Top-k** | 32.9% | 20.8% | 18.4% | 31.3% | **36.5%** | 21.10% | | **Top-k Margin** | **33.5%** | **25.4%** | **22.3%** | **34.3%** | 35.4% | **21.41%** | As shown in the table, both Top-k and Top-k Margin consistently outperform the Vanilla MDM inference, underscoring the importance of adaptively selecting the decoding order to avoid harder problem instances. Notably, in more challenging tasks, such as HumanEval-Multiline, HumanEval-Split Line, and Math, Top-k Margin shows a clear advantage over Top-k. This is because, particularly in coding and math problems where a fixed answer exists, selecting the correct intermediate token during inference is critical. Moreover, the Top-k margin offers a more reliable estimate of uncertainty when multiple tokens have similar probabilities—a common scenario in these challenging tasks. These results reinforce our claim in Section 4.1: Top-k Margin serves as a better proxy for positional uncertainty than Top-k in such cases. 
**These results also further highlight the potential of the Top-k Margin strategy for challenging infilling tasks.** **We also emphasize that our main contribution lies in the fundamental understanding of token ordering in MDM, rather than in proposing a superior inference strategy.** Nevertheless, as demonstrated in the experimental results on Sudoku puzzles and math/coding tasks, our proposed Top-$k$ Margin inference shows promising potential compared to Top-$k$ in certain scenarios, highlighting practical implications of our work. **Given that the main weakness raised was that we did not demonstrate effectiveness for our inference strategy on text data, we hope the reviewer will consider raising their score.** ## (2) Clarification about Top-K probability margin The Top-K probability margin strategy is not only useful when there are multiple possible correct values for a given position, but can also be effective when there is a single correct value. The reason is that the strategy is used for the **distribution $p_\theta$ estimated by the model** rather than for the **true distribution $p_{data}$.** To understand this distinction, consider the case of Sudoku. Recall that $x_t$ denotes the sequence at decoding time $t$ – in the case of Sudoku, this is the partially filled Sudoku puzzle. The posterior data distribution at the $i$-th location, given the partially filled puzzle, is denoted by $p_{data}(x^i | x_t)$. Since we are dealing with puzzles that have unique solutions, the reviewer is correct that $p_{data}(x^i = v | x_t) = 1$ at the correct value $v$ and 0 otherwise. However, the Top-K probability margin strategy doesn’t rely on $p_{data}(x^i | x_t)$ but instead uses $p_{\theta}(x^i | x_t)$ during the adaptive inference. As explained in Section 3, the learned posterior can be quite different from the true posterior. Intuitively, $p_{\theta}(x^i | x_t)$ reflects the model’s uncertainty about the correct value at the $i$-th location. 
When the model is unsure, it assigns high probabilities to a few candidate values which we refer to as the possible values at that position. The Top-K probability margin is effective in Sudoku because situations often arise where the model is uncertain between two or more possible values — say, $v_1$ and $v_2$ — assigning high probabilities to both. In these cases, the top probability $\max( p_{\theta}(x^i = v_1 | x_t), p_{\theta}(x^i = v_2 | x_t) )$ does not provide a reliable estimate of whether the model knows the correct value at the ith location. However, the Top-K probability margin $| p_{\theta}(x^i = v_1 | x_t) - p_{\theta}(x^i = v_2 | x_t) |$ serves as a more effective measure of the model’s uncertainty.
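To make this concrete, here is a minimal numpy sketch of one step of margin-based adaptive inference, assuming the model exposes the per-position posteriors $p_\theta(x^i \mid x_t)$ as an (L, V) probability matrix. The array contents and function names are illustrative, not the authors' implementation:

```python
import numpy as np

def topk_margin_scores(probs, mask):
    """Per-position top-2 probability margin under the model posterior
    p_theta(x^i | x_t). probs: (L, V) categorical probabilities per
    position; mask: (L,) bool, True where the token is still masked.
    Unmasked positions get -inf so they are never re-decoded."""
    top2 = np.sort(probs, axis=-1)[:, -2:]   # two largest probs per row
    margins = top2[:, 1] - top2[:, 0]
    return np.where(mask, margins, -np.inf)

def adaptive_decode_step(probs, mask):
    """One adaptive step: unmask the position with the largest top-2
    margin and fill in its most probable value."""
    i = int(np.argmax(topk_margin_scores(probs, mask)))
    v = int(np.argmax(probs[i]))
    return i, v

# Toy step: position 0 has the highest top-1 probability (0.50) but is a
# near-tie between v1 and v2, exactly the situation described above, so
# the margin instead selects the genuinely confident position 2.
probs = np.array([[0.50, 0.49, 0.01],   # masked, ambiguous
                  [0.90, 0.05, 0.05],   # already unmasked
                  [0.48, 0.26, 0.26]])  # masked, clear margin
mask = np.array([True, False, True])
i, v = adaptive_decode_step(probs, mask)
print(i, v)  # -> 2 0
```

Note that a plain Top-k (top-1 probability) rule would decode position 0 here (0.50 > 0.48) despite the near-tie, which is the failure mode the margin criterion is designed to avoid.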
Summary: The paper takes a close look at training and inference of masked diffusion models (MDMs), which are a type of discrete diffusion models where the noising process consists of randomly “masking” tokens until all tokens are masked, and training a model to reverse this degradation process. The paper claims that this order-agnostic training is inherently harder than left-to-right next-token prediction, that this difference can at least partly explain the gap in performance between AR models and MDMs, and that we can leverage the trained MDM to avoid difficult denoising orders. To this end, a novel sampling adapter for diffusion models is introduced, which significantly improves performance both on generative PPL for language modeling, as well as solving logical puzzles like Sudoku or Zebra puzzles. Claims And Evidence: The question of whether or not “the benefits of inference flexibility for MDMs is enough to outweigh the drawbacks of training complexity” is claimed to be answered “in the affirmative.” While this does seem to be the case for solving Sudoku puzzles, the answer is not as clear for language modeling. Unfortunately, there is no comparison between AR sampling from an AR model (and/or left-to-right sampling from a MDM as a non-adaptive baseline) and adaptive sampling with the proposed sampling algorithm from a MDM. If AR models still have better sample quality than MDMs with adaptive sampling, which is rather plausible, then the answer to the original question would unfortunately change to “it depends.” L35 ff., Col. 2: I cannot verify the claim that “MDMs can actually be used to decode in any order” based on the provided reference. As far as I can tell, the cited paper does not conduct any experiments about the order-sensitivity of MDMs. Of course, in theory MDMs can decode in any order, but whether or not this is true in practice is a different question. 
Methods And Evaluation Criteria: Besides what is mentioned above, the methods and evaluation criteria seem sound. Theoretical Claims: The paper makes the following two core theoretical claims. Claim 1: > we provide theoretical [...] evidence that the overhead imposed by training complexity quantifiably impacts MDMs’ performance. Claim 2: > We prove that even for simple, benign models of data, there are noise levels at which a large fraction, but not all, of the corresponding subproblems that MDMs must solve are computationally intractable. While the proposed L&O distributions seem very contrived (and calling it a “benign” model of data is, IMO, a stretch when it is known that these problems can be computationally hard), Proposition 3.3 does seem correct. The fact that it does require assuming the “1RSP cavity prediction” conjecture to be true may need to be highlighted more prominently (although I’m personally not familiar with this conjecture and cannot judge whether it is generally accepted to make this assumption). However, the original statement (claim 2) seems somewhat trivially true: Of course there exist distributions where for some noise levels (namely, when all tokens are masked) solving the corresponding subproblem is computationally hard and for some noise levels it is easy (namely, when all tokens are unmasked). A more useful proposition may be that some _orders_ of filling in the missing tokens are harder than others. How might Proposition 3.3 imply that some infilling orders are computationally hard? Similarly, and from what I can tell, the statement(s) proved in Appendix B.2 (Proposition B.5) does/do not necessarily imply that “order-aware training is tractable yet order-agnostic training is hard”. As far as I understand, what is shown is that for some noise level, finding the solution is computationally hard. If this indeed implies that some _orders_ are more difficult than others, further explanation and/or proof is warranted. 
All in all, the original claim (claim 1) of “providing theoretical evidence that [...] training complexity quantifiably impacts MDMs’ performance” is either overstated or stands on shaky ground. The provided hardness proofs seem correct upon skimming, but from what I can tell they do not lead to a conclusion this strong. It is possible that I'm simply not seeing the final logical step, in which case it should be a simple fix of adding a more detailed explanation that leads to the final conclusion. Experimental Designs Or Analyses: Sampling adapters often decrease the diversity of generated samples, which in extreme cases can lead to a collapse of the distribution. For the proposed adaptive inference there does indeed seem to be somewhat of a decrease in entropy, which warrants providing some qualitative examples in the appendix to prove that no catastrophic collapse is occurring. Supplementary Material: I have skimmed the appendix but did not read it in detail. Relation To Broader Scientific Literature: The paper shines a light on the discrepancy between AR models and discrete (masked) diffusion models, which has been observed many times in the literature. It also proposes a novel adaptive sampling technique that drastically improves performance both on text and logic puzzles. Both of these are valuable contributions to the literature of masked diffusion models. Essential References Not Discussed: It is known that auto-regressive models trained with teacher forcing face some fundamental challenges [1], which may (at least partly) be to blame for their poor performance on Sudoku. There has also been work on the directionality of AR language models, finding that there seems to be a slight but consistent left-to-right bias in human language. Finally, a recent study (concurrent work) has applied image-based diffusion models to solving Sudoku [3]. All of these are not strictly necessary to cite, but may help tie the results into the broader literature. 
It is for the authors to decide whether or not to include them. - [1] Bachmann & Nagarajan, 2024. https://arxiv.org/abs/2403.06963 - [2] Papadopoulos et al., 2024. https://arxiv.org/abs/2401.17505 - [3] Wewer et al., 2025. https://arxiv.org/abs/2502.21075 Other Strengths And Weaknesses: The paper makes an important observation on how the infilling order in masked diffusion can have a major effect on both upstream and downstream performance. While the theoretical part seems a bit shaky, the empirical evidence is convincing. Besides the theoretical part, the main weakness of the paper lies in lacking scientific rigor and a tendency to overstate or misrepresent the actual results. For example, the phrase “train for the worst” seems to imply that it is optimal to train on all possible permutations jointly, but this claim is not tested in the paper. Similarly, “planning for the best” implies that there is some sort of planning involved, which there isn’t (the term “planning”, in the context of Machine Learning, generally refers to the act of looking ahead of and beyond the immediate next step). In actuality, the paper proposes a sampling adapter, the likes of which are ubiquitous for autoregressive models. Applying this idea to discrete diffusion models is a novel and valuable contribution, and obfuscating it through a misleading title is not necessary. Despite the concerns about soundness and in light of the strong empirical results, I am inclined to recommend an accepting decision and will be happy to update my score if these concerns can be addressed. Other Comments Or Suggestions: Nits: - L104: $e_{x_0^i}$ should be bold. - Figure 1 (bottom): 2nd line, 2nd step; mask tokens should presumably have a black background. - Definition 3.1: As stated, the vocabulary size is $q+1$, and $p_{data}$ should presumably be $\{1, \dots, q\}^L$. Also, lowercase $n$ is not defined and presumably refers to uppercase $N$. - L216: I think it should be $g(x | S)$, not $g(x |_S)$. 
- Figure 3 is not referenced in the text. - Conjecture B.13 does not have a citation. - Def. 3.1: Overloaded notation: $\pi$ is used for both permutation and latent distribution. Questions For Authors: 1. L189 ff.: If The observations are cryptographic hash functions, is it not true that the observations themselves are also not efficiently learnable? Or, put differently, are cryptographic hash functions efficiently learnable? It seems to me like the answer would be no, in which case this example is not only worst-case, but also violates our assumption. 2. How is the “hardness” of a $\pi$-learner measured, and what is the exact hardness of the quoted “$\pi$-learner-much_closer”, “$\pi$-learner-closer”, and “$\pi$-learner-unif”? How does this compare to the average-case hardness, which would apply for MDM? 3. If different orderings have different inherent difficulty, and MDM is trained on all of them jointly, how does MDM perform on inherently easy (e.g. left-to-right) orders? According to the claims in the paper, we would expect this to be quite close to AR models and would be a nice addition to Figure 2. 4. As an alternative to the top-k probability margin: Could the per-token entropy be a better proxy for uncertainty? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful review and address the comments below. ## Soundness of theoretical claims There are several misunderstandings, so we'd like to clarify them. The statement "There exist distributions where for some noise levels (namely, when all tokens are masked) solving the corresponding subproblem is computationally hard and for some noise levels it is easy (namely, when all tokens are unmasked).” **is incorrect**. The subproblem we investigate is to estimate the *coordinate-wise marginals* of the posterior distribution, not **full posterior sampling.** If all tokens are masked, full posterior sampling can indeed be hard: take any hard-to-sample distribution. However, estimating marginals isn't necessarily difficult. For example, take a hard-to-sample Ising model with density $\propto e^{-x^TAx},x\in \\{-1,1\\}^n$. Thanks to sign-symmetry, the coordinate-wise marginals are uniform, so estimating them is trivial—even if full sampling is hard! Our theory emphasizes scenarios where some intermediate masking fractions are computationally harder than either extreme (fully masked or unmasked). In vanilla MDM inference, at each step, a random subset of positions is selected to be unmasked. Consequently, the masking patterns encountered correspond to randomly sampled mask indices---precisely the setting considered in Proposition 3.3! In contrast, decoding in a fixed left-to-right order (as in ARM) leads to encountering only left-to-right subproblems. **These hopefully address the confusion on why our results imply the hardness of sampling under certain token orderings.** Regarding 1RSB, it is a widely accepted conjecture from statistical physics, with extensive literature support. For an introduction, see “Notes on computational-to-statistical gaps” by Bandeira, Perry, and Wein. ## Comparison to AR baseline We ran generative perplexity experiments using an ARM baseline. 
A 1.1B ARM achieved perplexity 11.745, lower than the 1.1B MDM’s 13.396 with adaptive inference. We acknowledge that our phrasing may have given the impression that adaptive MDM inference outperforms ARM. **However, our claim is more nuanced: adaptive MDM inference helps avoid hard problem instances.** Absolute ARM performance isn't directly relevant. To clarify, for Sudoku puzzles, we included ARM to demonstrate adaptive MDM’s advantage through flexible reasoning orders. ## Other comments - *On decoding in any order in MDM*: For “MDMs can actually decode in any order”, we only meant that theoretically, when all the infilling problems are perfectly solved, any-order decoding matches the true likelihood. We'll update the PDF to make it clear. - *On the title*: For “train for the worst”, we never claimed optimality of training over all permutations, only the benefits of training in fixed order. For “plan for the best,” the reviewer is conflating “sampling adapters, the likes of which are ubiquitous for auto-regressive models” with what we do. For AR, the adapter has nothing to do with decoding *order*, which is left-to-right by default. The reason we call it planning is that MDMs can decide (plan) which token position to decode at each step. - *Entropy dropping*: While there is indeed an entropy decrease with adaptive MDM inference, this drop is negligible. To contextualize this, we measured entropy using the SlimPajama dataset: *average (5.10) and 0.45 quantile (4.85)*. These demonstrate minimal entropy reduction during adaptive inference, thus insignificantly impacting text generation performance. ## Questions 1) **Learnability of hash functions**: This nuanced issue is covered in “Cryptography in NC0” by Applebaum et al., demonstrating cryptographic primitives implemented via constant-depth circuits can be polynomially learnable. 
2) **$\pi$-learner**: Each $\pi$-learner learns sequences according to $\pi$ and is modeled via causal Transformers trained on permuted data. A higher likelihood indicates easier learning. Average-case hardness, applied to MDM, involves uniformly sampling permutations $\pi$, resulting in a lower likelihood compared to fixed left-to-right ordering (Fig 2, left, green line). Due to character limits, we kindly ask the reviewer to refer to the 'experimental setup' paragraph in Section 3.2 for more details. We are happy to clarify further on the discussion page. 3) **Performance on left-to-right sampling**: We observed catastrophic collapse during **left-to-right MDM sampling**, with entropy dropping (~0.28). This is because MDM wasn't explicitly trained for left-to-right order. This result underscores the importance of adaptive inference strategies, where the model selects unmasking positions based on logit-derived uncertainty rather than on a prefixed order. 4) **Per-token entropy strategy**: While a more natural measure would be per-token entropy, the only reason we went with top-k margin was its efficiency. In preliminary experiments, we also tried per-token entropy, but the performance difference was negligible. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed response. - **Soundness of theoretical claims**: Thank you for clarifying this misunderstanding. I now believe that the theoretical results indeed show that masked diffusion models face computationally hard sub-problems on some models of data. Perhaps this elaboration could be included in a future version of the paper in order to guide the reader and avoid any confusion. - **Comparison to AR baseline**: Indeed, the results shown in the paper are more nuanced than that adaptive MDMs generally outperforms ARMs, and I believe the phrasing should be adapted and clarified accordingly. For example, L45 Col. 
2 ("Training for the worst") comes across as if the theoretically quantifiable impact on training performance applies in general, including to language modeling, which is where MDMs are featured most prominently. Instead, it should be clarified that this is provably true on some special (toy) models of data. Similarly, the next paragraph ("Planning for the best") comes across as if "the benefits of inference flexibility for MDMs [are] enough to outweigh the drawbacks" _in general_. Instead, this is only true for some models of data, including the toy models but also Sudoku puzzle solving. Indeed, and as the authors admit, it does not close the gap to ARM on language modeling, which is a caveat worth highlighting prominently (including gen. PPL numbers as provided in the rebuttal). It is important to realize that being upfront with caveats and limitations does not diminish the contributions of the paper, but actually improves the clarity and scientific rigor of the writing. - **Sampling adapters**: The proposed adaptive inference is arguably still a sampling adapter. Instead of sampling $z\_{t-1} \sim p\_\theta(z\_{t-1} | z\_t)$, we sample from a modified distribution $\tilde{p}\_\theta(z\_{t-1} | z\_t)$, where $\tilde{p}\_\theta$ is a function of $p\_\theta$. Again, drawing this parallel does not diminish the contributions, but actually improves the paper by appropriately tying it into the existing literature. Given that my main concern regarding theoretical soundness has been addressed, I will increase my score from 3 (weak accept) to 4 (accept), while also urging the authors to improve phrasing and messaging as outlined above and in my initial review in order to avoid confusion and misconceptions. As I said in my original review, there is no need to conflate and obfuscate since the presented results are strong on their own. --- Nits: - Providing a reference on the 1RSB conjecture will help make this paper more accessible to the general machine learning community.
The same goes for polynomial learnability of cryptographic hash functions. - Entropy decrease on language generation is expected and reasonably small; it should be included in the paper along with gen. PPL numbers. - Catastrophic collapse on left-to-right sampling is interesting and important to highlight (esp. given the claim that "MDMs can decode in any order"). However, an entropy drop by 0.28 (as opposed to "to 0.28") does not indicate catastrophic collapse. Providing qualitative examples (in the appendix) can help give an idea of the nature and extent of the collapse. --- Reply to Comment 1.1.1: Comment: We appreciate that the reviewer found our rebuttal clarifying. For the further suggestions, we will make sure to include those in a new version.
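As an illustration of the entropy measurements discussed in these nits, a per-position Shannon entropy of a predictive distribution might be computed as below. This is our own toy sketch in Python; the authors' actual measurement pipeline, vocabulary size, and units are not specified here.

```python
import numpy as np

def token_entropy(probs):
    """Shannon entropy (in nats) of a single next-token distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                      # convention: 0 * log 0 = 0
    return float(-(p * np.log(p)).sum())

def mean_sequence_entropy(dists):
    """Average per-position entropy over a decoded sequence."""
    return float(np.mean([token_entropy(p) for p in dists]))

# A uniform distribution over V tokens attains the maximum entropy log V,
# so a small average drop (e.g. by ~0.28) leaves most uncertainty intact.
uniform = np.full(8, 1.0 / 8)
peaked = np.array([0.97, 0.01, 0.01, 0.01])
```

Under this convention, a drop *by* 0.28 from an average around 5.10 indeed leaves the distribution far from deterministic, which is the reviewer's point.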
Summary: The main contribution of the paper is the use of theoretical arguments and carefully designed experiments to show the following: 1. The complexity of training Masked Diffusion Models (MDMs) is higher than that of Auto-regressive Models (ARMs). 2. The flexibility of any-order decoding offered by MDMs helps them perform better than ARMs on specific kinds of data distributions, especially the ones where some (data/instance dependent) positions in the sequence contain harder sub-problems than other positions. The paper also introduces a new decoding strategy called the "top-k probability margin"-based strategy, which picks the next token to decode based on the margin between the top-2 vocabulary items at any specific position. Claims And Evidence: ## 1. The inference flexibility provided by the any-order decoding in MDMs outweighs the drawbacks introduced by training complexity. Nie et al. (2024) already demonstrated through scaling law curves that the complexity of training MDMs is higher than that of ARMs. The paper provides some theoretical insight into the phenomenon. Their empirical experiments on text (Figure 2) re-confirm the observations made in Nie et al. (2024). Zheng et al. (2024b) demonstrated that the logits produced by MDMs have useful information for selecting the positions to unmask during inference. In summary, Nie et al. and Zheng et al. together have already demonstrated the main claim of this paper. Therefore, in my view, the main contribution of this paper is the use of theoretical arguments and carefully designed synthetic experiments to drive the point home, which the paper does well. ## 2. The proposed top-k probability margin strategy for sampling performs better than the top-k strategy proposed in Zheng et al. (2024b). The proposed top-k prob. margin-based sampling strategy is only demonstrated to be better than top-k (from Zheng et al. (2024b)) on Sudoku puzzles. Both top-k and top-k prob. margin work similarly for Zebra puzzles (Table 3).
Moreover, it is not clear if there is any advantage to using the top-k prob. margin-based strategy on real data like the text data used in the paper (Figure 4 does not compare the two adaptive sampling strategies). Therefore, I find the use of the top-k prob. margin-based strategy not well justified and a possible area to improve in the paper. Methods And Evaluation Criteria: Yes. Theoretical Claims: The proof for Proposition 2.1 is correct. The claim in Proposition 3.3 looks reasonable; however, I was unable to check the complete proof. Experimental Designs Or Analyses: All the experimental settings look sound. Supplementary Material: I reviewed the Appendix sections C, D and E, which cover the details of the experimental settings and the proof for Proposition 2.1. Relation To Broader Scientific Literature: As mentioned above, Nie et al. (2024) and Zheng et al. (2024b) together have already demonstrated the main claim of this paper to a great extent. Nie et al. demonstrated empirically on text data that the complexity of training MDMs is higher than that of ARMs. That said, the scope of Nie et al. was quite broad and was focused more on the scaling aspect of MDMs. Zheng et al. introduced the top-k sampling strategy for MDMs and demonstrated that it works much better than random unmasking. However, Zheng et al. do not discuss the learning aspect of MDMs. This paper is much narrower in scope and tries to tease out the essence of adaptive decoding for MDMs through theoretical arguments and carefully selected experiments. Essential References Not Discussed: The paper includes exhaustive references; however, the related work section is in the Appendix. Since the paper re-states some of the claims in existing papers, it would be better to include at least one paragraph of related work in the main paper. Other Strengths And Weaknesses: The contribution of the paper is incremental. It combines claims from existing papers (Nie et al. (2024) and Zheng et al. (2024)).
That said, since the paper only focuses on one claim, it is easy to read and follow. Other Comments Or Suggestions: 1. Line 873: The expression for $p_i$ does not make sense. It should be $p_i = \sum_{j=1}^L \delta(x^j = i) / L$. 2. It might be good to show some decoding trajectories on the Sudoku or Zebra puzzles where vanilla unmasking makes mistakes whereas the adaptive strategy circumvents them. Questions For Authors: 1. Do you observe any difference between top-k and top-k margin-based sampling on text MDMs? Can you provide some examples of the generated text for various sampling strategies? Code Of Conduct: Affirmed. Overall Recommendation: 4
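The corrected expression in the first suggestion, $p_i = \sum_{j=1}^L \delta(x^j = i) / L$, is simply the empirical frequency of token $i$ in a length-$L$ sequence. A minimal sketch, using a made-up toy sequence for illustration only:

```python
def empirical_freq(x, i):
    """p_i = (1/L) * sum_j [x^j == i]: fraction of positions holding token i."""
    return sum(1 for t in x if t == i) / len(x)

seq = [3, 1, 3, 2]            # L = 4
# empirical_freq(seq, 3) -> 0.5 (token 3 occupies 2 of 4 positions)
```

By construction, the frequencies over all tokens that appear in the sequence sum to 1.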
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's overall positive evaluation and comments. We will make sure to include a paragraph of related work in the main body and fix the typo mentioned. Below, we respond to the reviewer’s main concerns: ## (1) Scope of our contributions The reviewer stated “The contribution of the paper is incremental. It combines claims from existing papers (Nie et al. (2024) and Zheng et al. (2024))” and “Nie et. al. (2024) and Zheng et. al. (2024b) together have already demonstrated the main claim of this paper to a great extent.” We respectfully disagree with the reviewer on this point. As stated in our introduction, our goal is to **understand** the benefits and drawbacks of training and inference of MDMs over ARMs. - Even though Nie et al. (2024) show that the autoregressive models outperform MDMs in scaling, they don’t explain the reason behind it. In this work, we give extensive empirical and theoretical evidence that this is due to the **heterogeneity of complexity across masking tasks at training time** (Section 3). **While empirically it has been observed (even well before the work of Nie et al.) that MDMs are more difficult to scale, our paper is the first to provide rigorous insight into why this is the case.** - While Zheng et al. (2024) propose a different ordering of the sampling, we view our most important contributions in this direction not to be about proposing new heuristics per se, but about explaining the reason/motivations behind the improvement achieved by these heuristics. Indeed, in our work we **provide principled justification for such adaptive inference schemes**, e.g. 
by showing that *any-order inference in a perfectly trained MDM results in the same true distribution* (line 296, right column), and **disentangle the extent to which different “confidence-based” decoding strategies are actually planning based on uncertainty.** - Additionally, both of these works fail to explain that the benefit of MDM (especially with adaptive inference) over ARMs is most dramatic on tasks where the left-to-right token ordering structure *doesn’t hold*. This also explains the drastic improvements in tasks like math or coding (e.g., see Table 1 in [1]) where left-to-right ordering doesn’t hold. [1] Large language diffusion models. Nie et al. 2025. ## (2) Top-k margin outperforms Top-k on challenging code and math tasks On text MDMs, for challenging math and coding tasks, we found that Top-k margin outperforms Top-k. For comparison, we adapted LLaDA [1], an 8B MDM. For the results, please refer to our response to the reviewer xjPE. Notably, in more challenging tasks, such as HumanEval-Multiline, HumanEval-Split Line, and Math, **Top-k Margin shows a clear advantage over Top-k**. This is because the Top-k margin offers a more reliable estimate of uncertainty when multiple tokens have similar probabilities (our claim in Section 4.1)—a common scenario in the challenging tasks in the coding and math domains. **These results also further highlight the potential of the Top-k Margin strategy for challenging infilling tasks.** To understand the difference between the Top-k and Top-k margin strategies, we consider the following problem. The problem prompt given to an MDM is: [If $\sqrt{5x}\cdot\sqrt{10x}\cdot\sqrt{18x}=30$, find $x$.] The model’s output using the Top-k strategy is: … *So the equation becomes: \[ \sqrt{900x^3} = 30 \]* *Square both sides to eliminate the square root: \[ (900x^3)^2 = 30^2 \]* …. The model went wrong by decoding the incorrect equation $(900x^3)^2 = 30^2$.
At the moment just before decoding ^ (following 900x^3), the model faces multiple plausible options: (1) adding ^, or (2) adding =. This ambiguity arises from the token "square", which confuses the model. The Top-k strategy selects ^, as it has the highest probability. **This exemplifies a situation where the model assigns comparable probabilities to multiple tokens at a single location**. In contrast, the probability margin between ^ and = was small, indicating high uncertainty, so Top-k margin shifted focus to a different position where it had greater confidence, leading to the correct statement, *900x^3 = 900*. ## (3) Examples of decoding trajectories for Sudoku: For the following partial Sudoku board, vanilla MDM inference decodes a cell at random—for example, the cell in the 7th row and 9th column. In contrast, adaptive MDM inference prioritizes cells in the 1st and 2nd rows, which are objectively easier to fill in earlier.

| | | | | | | | | |
|---|---|---|---|---|---|---|---|---|
| 9 | 8 | 3 | 7 | 5 | . | 4 | 1 | 2 |
| 2 | 4 | 5 | . | 9 | 1 | 3 | . | 6 |
| 7 | 1 | 6 | 2 | 3 | . | . | . | . |
| . | 2 | 1 | . | . | 8 | . | . | . |
| 3 | 7 | . | . | 1 | . | 2 | 6 | . |
| 6 | 9 | . | . | 2 | . | 8 | . | 1 |
| 8 | . | 2 | . | . | . | 1 | . | . |
| 1 | 5 | 7 | . | 8 | 3 | 6 | . | . |
| . | 6 | . | 1 | 7 | . | 5 | 3 | 8 |
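The two position-selection rules contrasted in the math and Sudoku examples above can be sketched as follows. This is a minimal illustration under our own assumptions (here `probs` maps each masked position to a vocabulary distribution); it is not the authors' implementation.

```python
import numpy as np

def select_topk(probs, masked, k=1):
    """Top-k rule: unmask the k positions whose single best token is most probable."""
    return sorted(masked, key=lambda p: -np.max(probs[p]))[:k]

def select_topk_margin(probs, masked, k=1):
    """Top-k margin rule: unmask the k positions with the largest gap between
    the top-2 token probabilities (a small gap signals ambiguity, as with ^ vs =)."""
    def margin(p):
        top2 = np.sort(probs[p])[-2:]
        return top2[1] - top2[0]
    return sorted(masked, key=lambda p: -margin(p))[:k]

# Position 0 mimics the '^' vs '=' situation: its best token is the most
# probable overall, but the runner-up is almost as likely.
probs = {0: np.array([0.48, 0.47, 0.05]),
         1: np.array([0.40, 0.30, 0.30])}
```

On this toy input, the plain Top-k rule decodes position 0 first (max probability 0.48), while the margin rule defers the ambiguous position 0 and decodes position 1 first.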
Summary: This work presents two contributions to an emerging class of discrete diffusion models called masked diffusion models. * The first contribution is a theoretical construction showing the hardness of prediction subtasks within masked diffusion, motivating an inference-time solution to sidestep these challenging subtasks. * The authors then propose a new criterion to adaptively choose the decoding order at inference time based on probability margins and show that it leads to significant performance boosts of masked diffusion models on planning problems. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. I have followed the presented theoretical results, including Proposition 2.1 and the tractability of subtasks in Example 3.2. Experimental Designs Or Analyses: Yes. The experiment design is thoughtful and closely tracks the main claims. Supplementary Material: No Relation To Broader Scientific Literature: This paper improves the understanding of masked diffusion models - presenting theoretical analysis of the hardness of the prediction problems, which motivates inference-time improvements that select "easier" decoding passes. Essential References Not Discussed: Relevant work in the literature is well cited. Other Strengths And Weaknesses: ## Strengths * The theoretical results on provable hardness of mask prediction problems in some orders are original and improve our understanding of the intrinsic difficulty of training such models. They also provide sufficient motivation for the inference-time strategy that follows later. * The results on planning tasks are pretty strong. The proposed probability margin strategy significantly improves performance on hard Sudoku tasks. Notably, it even outperforms an AR model that is informed of the optimal order to solve the problem. ## Weaknesses * The decoding-order selection, while novel, is still heuristic.
The experimental evidence is in a narrow domain (e.g., Sudoku puzzles) and the generality needs to be tested further, e.g., in text and image experiments. * It's surprising and unclear why the masked diffusion model that uniformly optimizes predictions over all orderings can outperform AR models informed of the optimal ordering. Could the authors elaborate on the possible reasons and implications? Other Comments Or Suggestions: Can you test your decoding strategy on common text or image tasks often used to evaluate diffusion models? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's positive evaluation and the insightful comments and questions. Below, we respond to the reviewer’s suggestions and questions. ## (1) Further experiments on text data: Top-k margin outperforms Top-k on challenging code and math tasks To examine the effect of different inference strategies on text evaluation tasks, we adapted LLaDA, the 8B MDM model from [1]. We compare three strategies: **Vanilla, Top-k, and Top-k prob. margin**. The results are presented below.

| **Sampler** | HumanEval-Single | HumanEval-Multi | HumanEval-Split | Math | MMLU-Pro | ROCStories |
|------------------|-------------------------|------------------------|------------------------|--------|----------|-------------|
| **Vanilla** | 31.8% | 16.5% | 14.2% | 28.5% | 33.2% | 21.23% |
| **Top-k** | 32.9% | 20.8% | 18.4% | 31.3% | **36.5%** | 21.10% |
| **Top-k Margin** | **33.5%** | **25.4%** | **22.3%** | **34.3%** | 35.4% | **21.41%** |

As shown in the table, both Top-k and Top-k Prob. Margin consistently outperform vanilla MDM inference, underscoring the importance of adaptively selecting the decoding order to avoid harder problem instances. Notably, in relatively challenging tasks, such as HumanEval-Multiline, HumanEval-Split Line, and Math, **Top-k Margin shows a clear advantage over Top-k**. This is because the Top-k prob. margin offers a more reliable estimate of uncertainty when multiple tokens have similar probabilities—a common scenario in these challenging tasks. In addition, particularly in coding and math problems, where a fixed answer often exists, selecting the correct intermediate token during inference is critical (hence, a wrong token selection can directly lead to an incorrect answer). These results reinforce our claim in Section 4.1: Top-k Margin serves as a better proxy for positional uncertainty than Top-k in such cases.
**These results also further highlight the potential of the Top-k Margin strategy for challenging infilling tasks.** ## (2) On the reason why MDMs outperform ARMs MDM is trained across all possible orderings and uses an adaptive inference strategy. This flexibility allows it to discover more efficient reasoning orders tailored to the task or dataset, which can generalize better to unseen data. In contrast, ARMs trained with a fixed order may fail to generalize to unseen (or harder) data (please refer to Table 4 in our paper). Additionally, the harder MDM training (i.e., training in more than one token generation order) might be more (sample) efficient than ARM training that focuses on learning in only one order. Moreover, the ordering used for ARM training—predetermined by humans—may be suboptimal. In contrast, MDM may discover more effective decoding orders by systematically leveraging information from the logits, often outperforming human-specified orderings. ## (3) Implications These hint at the strong potential of MDMs, which we also highlighted in Section 4.3: Since MDMs are trained on all possible masked subproblems, their adaptive inference allows them to discover *good reasoning paths*, potentially leading to better performance than fixed orderings predetermined by humans. [1] Large language diffusion models. Nie et al. 2025
Knowledge Retention in Continual Model-Based Reinforcement Learning
Accept (poster)
Summary: The authors propose a method for continual model-based reinforcement learning, where, ideally, an agent retains previously learned skills while learning new skills, mitigating the catastrophic forgetting problem. The main problem addressed is the bounded storage problem: the agent is not assumed to have infinite storage for previous transitions, an assumption most continual RL methods (replay-based approaches) make. Instead, this work consists of two key components: 1. Synthetic Experience Rehearsal, where a generative model (VAE) produces synthetic transitions from all previous tasks, which are used in tandem with the current task's data to train the new transition model. 2. Regaining Memories Through Exploration, which is an intrinsic reward that encourages exploration of states that were well-understood in previous tasks but are not currently well understood. This supposedly helps to bridge newly learned tasks and previously learned tasks. Hence, instead of storing previous transitions, the proposed method stores a generative model that learns the distribution of the previous transitions. Thus, the generative model should require less storage than keeping previous transitions. The experimental evidence is conducted through continual learning experiments on the MiniGrid environment and the DeepMind Control Suite. The method is compared to the following baselines: TDMPC from scratch (model-based/world-model RL), 'continual' TDMPC (initialized with the model of the previous task), EWC (regularization-based continual RL). The presented results significantly outperform the baselines in most experiments. In an ablation study, the authors also verify the complementary roles of the two following components: 1. The generative model 2.
The intrinsic reward. While I provided a "Reject" score, I think that this paper overall presents valuable research that could be accepted if the authors improve the clarity and, above all, justify the use of the generative model to store experience through experimental support (comparing with CRL baselines). Claims And Evidence: The claims are not clearly stated, but I extracted the following. - New tasks are learned without forgetting previous tasks. - Evidence: Supported by experiment on gridworld (nicely visualized in Figure 4) - The previously learned skills improve few-shot transfer to new tasks that build upon the previous skills. - Evidence: supported by experiments on cheetah/walker - Synthetic Experience Rehearsal learns the distribution of previous experiences - Evidence: partly shown in Figure 4. - The intrinsic reward incentivizes the agent to relearn previously learned tasks and bridges the gap - Evidence: only in ablation study that shows worse performance without it. - Mitigating the bounded storage problem - Evidence: No evidence on environments with a large number of tasks, where replay-based continual RL would be limited. - Or alternatively: comparison to replay-based methods and how much less storage is needed for similar performance Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand. However, the evaluation would benefit from another experiment demonstrating good performance where replay-based methods fail, to verify the main claim of solving the bounded storage problem. Specifically, as the authors are reusing the technique of learning a generative model instead of storing the raw replay buffer, they should demonstrate the extent to which this technique is more efficient than storing a replay buffer. Thus, the authors could use replay buffers with limited capacity (to match the storage requirements of their generative model).
Then, uniform sampling could constitute a first baseline, even if more advanced storage selection and sampling methods have been developed in CL papers. Theoretical Claims: Not applicable Experimental Designs Or Analyses: The paper lacks experiments designed to test the performance on an environment with a larger number of tasks, which is especially relevant if the main goal is to be an alternative to replay-based methods in settings where they become infeasible due to a large number of previous transitions (bounded storage assumption). Could you extend the number of rooms to, e.g., 16? On a similar note, the method is not compared to replay-based methods, which would probably perform better or similar on the conducted experiments (according to my CL expert fellows). Maybe a valuable experimental question would thus be: "how close can we match their performance with x less storage use?" TD-MPC baselines: Please clarify how exactly 'from scratch' and continual TDMPC differ. Does the continual TDMPC use the same task-embedding as the model from the previous task? Some baselines are missing. For example: pseudo-rehearsal (Atkinson et al.) and world-model pseudo-rehearsal (Ketz et al.) -> see below. Supplementary Material: F. Limitations is a great argument (mode collapse of the generative model) and should be discussed in the main paper. Relation To Broader Scientific Literature: This research work combines known methods from continual learning (pseudo-recursal) with methods from reinforcement learning (MBRL). Some essential references are not discussed: - Pseudo-recursal is not referenced, neither is pseudo-rehearsal. - which is exactly the idea of [1], just with DQN instead of MBRL There even exist methods applying pseudo-rehearsal to MBRL [2] (though not published). How does your method differ?
You should explain why these baselines are excluded, or if they are not relevant to support your claims (but then the reader would need explicit claims listed) [1] Atkinson, Craig, et al. "Pseudo-rehearsal: Achieving deep reinforcement learning without catastrophic forgetting." *Neurocomputing* 428 (2021): 291-307. [2] Ketz, N., Kolouri, S., & Pilly, P. (2019). Continual learning using world models for pseudo-rehearsal. arXiv preprint arXiv:1903.02647. Essential References Not Discussed: See above Other Strengths And Weaknesses: Strengths: - Figure 4 is really nice, although a short claim as the first sentence stating what this figure shows would be great. In general you could improve the captions of all figures following this principle: - The first sentence should highlight the main message of the Figure/Table, e.g., **DRAGO reduces catastrophic forgetting of previous tasks.** - The next sentences then explain what is depicted in the Table/Figure, e.g., illustrated by the prediction score of the learned world models across the entire gridworld after each task.... etc Weaknesses: - Missing clear description of all research questions answered / main contributions. - Figures: - architecture overview - do not always contain a direct description of what to take away (as explained above) - Figure 3 does not add a lot -> remove? - Figures 5,6 could use a grid - missing number of seeds/runs (also in Table 1) - Formalism: - TDMPC2 uses the MSE on the embedded state. Are the states embedded in DRAGO? - If not: This is not really TDMPC -> clarify in method - If yes: not clear from context, redefine s. - The transition model $T_i$ and its probability are never properly defined. What kind of model is it? - This makes it sound like there are two different models (also with eqs. 4 and 5 using one each) - Suggestion: Just use $p_T(s'|s,a;\psi)$ and $\mathbb{E}[p_T(s'|s,a;\psi)]$, or define T that way.
- Loss formulation is confusing, because you show the total loss using probabilities and then split it later into dyn and gen loss, whereas the dyn loss is actually an MSE that uses the deterministic(?) T. - Also, the total loss (eq. 4) does not specify where $\hat{s}'$ comes from, which is only later revealed to be from $T_\text{old}$. - Maybe equations 3 and 4 are unnecessary (and confusing) and can be omitted altogether? - Instead just show L_dyn and L_gen and then: L_total = L_dyn + L_gen - Baselines: - Continual TDMPC is not really continual, more like 'pre-initialized' - Missing comparison to a replay-based method to show sota results and how DRAGO compares. - Limitations missing in main paper. Super important, since the generated synthetic state-action pairs may not fully represent all previous tasks with a growing number of tasks, as discussed in Appendix F. - A (short) high-level algorithm in the main paper would also be beneficial. More specifically, to improve clarity, I strongly advise the authors to include in their paper: I. A list of contributions at the end of the introduction (often denoted with i., ii., ...) that stands out visually. II. A list of scientific questions at the beginning of the experimental evaluation section (often denoted Q1, Q2, ... etc), that are each answered in different paragraphs. For example Q1/ How does DRAGO perform in comparison to existing CRL baselines? Q2/ How much does the generative model help prevent catastrophic forgetting? Q3/ Is the generative model more storage-efficient than the classical use of a replay buffer? ... Other Comments Or Suggestions: Background formalism MDP tuple: Usually use <> instead of () Typos: - Chapter title 3 - Lines 4-7 under eq. 5 - Just before chapter 4: 2x same sentence. While I graded the paper with a reject, I think that the paper does provide valuable research work that, once improved, will be of great interest to many readers.
However, in its current form, it cannot be accepted to a major conference such as ICML. I hope that the authors can use the provided feedback to improve their work. Questions For Authors: - Q1: add research questions / key contributions - Q2: properly define transition model/likelihood, also focus on one notation (T or p) - Q3: reformulate loss definitions for clarity (see above) - Q4: add limitations + high-level algorithm - Q5: add replay-based baseline -> how does DRAGO improve upon that? (e.g., how much less storage is needed for similar performance) - Q6: add pseudo-rehearsal baselines (see references above) - Q7: benchmark number of tasks -> when does it fail? Ethical Review Concerns: Not applicable Code Of Conduct: Affirmed. Overall Recommendation: 3
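The reformulation requested in Q3 above (L_total = L_dyn + L_gen, with an MSE term on real transitions and a term matching the frozen old model on synthetic transitions, weighted by λ) can be sketched as follows. The linear "models", shapes, and data below are our own placeholders, not DRAGO's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_s, d_a = 3, 2                                          # toy state/action sizes

def make_model(W):
    """Placeholder deterministic transition model: T(s, a) = [s; a] @ W."""
    return lambda s, a: np.concatenate([s, a]) @ W

T_new = make_model(rng.normal(size=(d_s + d_a, d_s)))    # current model T_i
T_old = make_model(rng.normal(size=(d_s + d_a, d_s)))    # frozen old model

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def total_loss(real, synthetic, lam=1.0):
    """L_total = L_dyn + lam * L_gen: fit observed next states on real data,
    and match the frozen old model's predictions on synthetic rehearsal data."""
    l_dyn = np.mean([mse(T_new(s, a), s_next) for s, a, s_next in real])
    l_gen = np.mean([mse(T_new(s, a), T_old(s, a)) for s, a in synthetic])
    return float(l_dyn + lam * l_gen)

real = [(rng.normal(size=d_s), rng.normal(size=d_a), rng.normal(size=d_s))
        for _ in range(4)]
synthetic = [(rng.normal(size=d_s), rng.normal(size=d_a)) for _ in range(4)]
```

Writing the objective this way makes the two roles explicit: L_dyn anchors the model to the current task's data, while L_gen rehearses the old model's behavior on generated transitions.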
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive feedback. We are encouraged that you find our ideas valuable and appreciate your suggestions on clarity and comparisons. We address each concern below: Q: Replay-based Baseline A: Thank you for pointing this out – we are adding a **fixed-size replay buffer** baseline with the same storage budget as DRAGO’s generative model. Preliminary implementation is underway, and we will share results on this anonymous link https://drive.google.com/file/d/18TdI9nPCT7MMQCQMmL1ZYxJkblhUI91i/view?usp=sharing after we have the results. This baseline is an experience replay approach that stores a limited number of real past transitions (capped to a similar memory size as our generative model) and interleaves them during training on new tasks. Our hypothesis is that DRAGO will perform on par with or better than this bounded replay baseline, especially as tasks accumulate (since DRAGO can simulate diverse past experiences without strictly limited slots). Q: Pseudo-rehearsal Baseline A: Since the original paper does not provide source code, we are currently in the process of implementing a **world-model pseudo-rehearsal** baseline based on the paper. This is a generative replay baseline where an agent uses a pretrained world model (or VAE) to rehearse past tasks’ experiences, akin to DRAGO’s Synthetic Experience Rehearsal but without our intrinsic exploration component and continual learning of the generative model. We will share results on this anonymous link https://drive.google.com/file/d/18TdI9nPCT7MMQCQMmL1ZYxJkblhUI91i/view once the experiments are finished. Q: Terminology “Continual TDMPC”: A: Thank you for flagging this potential misunderstanding. In our paper, “Continual TDMPC” refers to the naïve baseline where we initialize the model for each new task with the previously learned TDMPC world model, without any forgetting mitigation. It is essentially the standard TDMPC algorithm simply fine-tuned sequentially. 
Q: Paper structure and readability: A: We appreciate these suggestions and will incorporate them in the final version. We will add a bulleted summary of contributions in the introduction. Specifically, - **Novel Continual MBRL Framework**. We introduce **DRAGO**, a new approach for continual model-based reinforcement learning that addresses catastrophic forgetting while incrementally learning a world model across sequential tasks without retaining any past data. - **Synthetic Experience Rehearsal**. We propose a generative replay mechanism that synthesizes “old” transitions using a learned generative model alongside a frozen copy of the previously trained world model. This synthetic data consistently reinforces earlier dynamics knowledge, mitigating forgetting in each new task. - **Regaining Memories Through Exploration**. We design an intrinsic reward signal that nudges the agent toward revisiting states that the old model explained well—effectively “reconnecting” current experiences with previously learned transitions. This mechanism complements the synthetic rehearsal by incorporating real environmental interactions to maintain a more complete world model. - **Extensive Empirical Validation**. Through experiments on **MiniGrid** and **DeepMind Control Suite** domains, we show that DRAGO: 1. Substantially improves knowledge retention compared to standard continual MBRL baselines. 2. Achieves higher forward-transfer performance, allowing faster adaptation to entirely new (but related) tasks. 3. Exhibits strong few-shot learning capabilities, substantially outperforming both learning-from-scratch and other continual methods under limited interaction budgets. Q: Formalism clarity (transition model, loss equations, notation) A: We apologize for the confusion. In our notation, $T$ is the parametric transition model (the learned dynamics predictor) and $p(\cdot)$ denotes a probability or distribution. 
For example, $p(s' \mid s,a;\psi)$ is the likelihood of observing $s'$ given $(s,a)$ under the transition model $T_{\psi}$. We will explicitly define the transition model in the main text upon first use and use a consistent notation (e.g. using $T$ for the model and $p$ for probabilities) throughout. We will also clarify the loss equations step-by-step. In particular, Equation (4) in the paper decomposes the loss into a current-task loss (on real data $D_i$) and a synthetic rehearsal loss (on generated data $\hat D$). Equation (5) then shows the combined training objective for the dynamics model: it includes a term $\|T_i(s,a)-s'\|^2$ for real transitions and a term $\|T_i(\hat{s},\hat{a})-T_{\text{old}}(\hat{s},\hat{a})\|^2$ for synthetic ones (weighted by $\lambda$). We will make sure to clearly explain each term and the roles of $T$ vs. $p$ in a revision. Q: Minor issues (typos, notation, algorithm pseudocode) A: We will fix all minor typos and notational errors. We also agree that a high-level pseudocode algorithm in the main paper would aid clarity. --- Rebuttal Comment 1.1: Comment: While I believe most of my concerns have been addressed, I still have two points that I would like the authors to clarify. Could you comment on exactly what makes DRAGO better than the 2 baselines in your paper? In particular, I find the second plot very weird: why does the replay-based baseline's performance completely drop? Also, I would recommend dropping *extensive* for this evaluation. While I believe that an extended evaluation with the two additional baselines would be sufficient to support the claims, I don't consider it *extensive* (but I might be wrong). --- Reply to Comment 1.1.1: Comment: Thank you for the follow-up questions.

- Why DRAGO Is Better Than the Replay-Based and Pseudo-Rehearsal Baselines. Bounded Replay vs. Generative Replay: The replay-based baseline can only store a small fraction of previous transitions in memory, so as the number of tasks grows, past data coverage shrinks. In contrast, DRAGO’s generative model synthesizes essentially unlimited “old” transitions, preserving a broader variety of past experiences. Reviewer Reward for Exploration: Our intrinsic “reviewer” reward actively drives the agent to revisit states that connect the new task with old tasks, resulting in a more unified, accurate world model. The bounded-replay baseline cannot exploit a similar “exploratory bridge,” so it remains siloed in the new-task data plus a few replay samples. The pseudo-rehearsal baseline pretrains a variational autoencoder on early, randomly collected rollouts, so it struggles to cover diverse transitions for both old and new tasks. It also lacks our reviewer reward, which helps the agent connect the transitions from the current task and the old tasks.

- Why the Replay Baseline’s Performance Drops in the Second Plot. Our hypothesis is that the world model for the replay-based agent is highly imprecise early on—so it occasionally “lucks into” hitting the goal (giving a temporary spike). As learning proceeds without a sufficiently diverse buffer of old data, it re-overfits to the most recent task or gets stuck, causing the subsequent performance drop. Note that, despite the transient fluctuations, its highest average reward remains low (around 10 in the plot), whereas the other two methods compared reach 60 and 100. Thus the “drop” is exaggerated on a small scale—this method does not truly master the tasks but rather hovers around a low-performance regime. We used the word *extensive* to describe all the experiments we included in the paper - that answer was meant to be the list of contributions we plan to include in the introduction section based on the reviewer's suggestion.
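To make the loss decomposition discussed in this thread concrete, here is a minimal numpy sketch of an Eq. (5)-style objective: an MSE term on real current-task transitions plus a $\lambda$-weighted distillation term on synthetic pairs, with the frozen old model providing the regression target. Function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dynamics_loss(T_new, T_old, real_batch, synth_batch, lam=1.0):
    # Real-transition term: ||T_i(s, a) - s'||^2 on data from the current task D_i.
    s, a, s_next = real_batch
    real_term = np.mean((T_new(s, a) - s_next) ** 2)
    # Synthetic rehearsal term: ||T_i(s_hat, a_hat) - T_old(s_hat, a_hat)||^2,
    # where the frozen previous model T_old labels the VAE-generated pairs.
    s_hat, a_hat = synth_batch
    synth_term = np.mean((T_new(s_hat, a_hat) - T_old(s_hat, a_hat)) ** 2)
    return real_term + lam * synth_term

# Toy usage with linear "dynamics" (purely illustrative):
T = lambda s, a: s + a
real = (np.zeros((4, 2)), np.ones((4, 2)), np.ones((4, 2)))  # T(0, 1) = 1 = s'
synth = (np.ones((4, 2)), np.zeros((4, 2)))
loss = dynamics_loss(T, T, real, synth)  # both terms vanish in this toy case
```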
Summary: This paper presents a new approach to continual model-based reinforcement learning (DRAGO) aimed at improving the incremental development of world models across a range of tasks. DRAGO consists of two key components: Synthetic Experience Rehearsal and Regaining Memories Through Exploration. Empirical evaluations show that DRAGO is capable of preserving knowledge across a wide range of tasks and achieves strong performance in a variety of continual learning scenarios. ## update after rebuttal I have read the authors' rebuttal and appreciate their hard work and detailed responses. While their clarifications have helped me better understand certain points that were previously unclear, they are not sufficient for me to increase my score further. Claims And Evidence: YES Methods And Evaluation Criteria: YES Theoretical Claims: I examined the theoretical justification of the two key components of the authors' approach: Synthetic Experience Rehearsal and Regaining Memories Through Exploration. The logic is basically correct, but there is still a lack of clarity that needs to be improved. Experimental Designs Or Analyses: The experimental design is generally sound but still insufficient; additional supplementary experiments need to be added. Supplementary Material: All of it was read. Relation To Broader Scientific Literature: The task addressed in this paper is a hot topic of recent years: models learn new tasks while catastrophically forgetting past tasks. This has been a great challenge in the fields of continual learning and reinforcement learning, and there have been many previous studies on continual reinforcement learning (lifelong reinforcement learning). The authors' main innovation comes from a research finding on dreaming in the 1990s, which has been applied to other modelling domains, but not to the field of continual reinforcement learning (lifelong reinforcement learning).
Essential References Not Discussed: No significant relevant papers were found that were not cited. Other Strengths And Weaknesses: Strengths:
1. The paper explains its innovations clearly; the structure of the article and the writing make it easy for the reader to understand the proposed contributions.
2. The presentation of previous relevant work is clear and may be a continuation of the team's previous work.
3. The algorithms and pseudocode are adequately described, and the innovations are well explained in the supplementary material.
4. The introduction of the problem through the robot example was very easy to understand.
Weaknesses:
1. In Section 3.1, the concept of “Synthetic Experience Rehearsal” is introduced as a method for generating synthetic experiences from past tasks. Could the authors clarify how the synthetic experiences generated by the generative model differ from actual past experiences in terms of their representational accuracy?
2. Too few algorithms are included for comparative testing. It is recommended that the authors add more SOTA methods on continual reinforcement learning (lifelong reinforcement learning) published in top journals/conferences in recent years.
3. In Eq. (1), where the synthetic state is generated using the frozen old world model $T_{\text{old}}$, could the authors provide more explanation about how the state-action pair is sampled from the generative model $p_G(s, a; \theta)$? Specifically, how is the action sampled in continuous action spaces, and how do the learned transition dynamics handle this?
4. It’s suggested to quantitatively demonstrate the degree of forgetting in the main text, e.g., by reporting how well each method recovers performance (or world model error) on earlier tasks after the training sequence has ended.
While Table 3 in the Appendix provides a comparison of the final performance of DRAGO and the baselines on each of the training tasks, it would be more convincing to state in the main paper that ‘DRAGO roughly maintains the performance of the old tasks without degrading the performance of the new tasks’.
5. On some of the test tasks, the ‘train from scratch’ approach came close to or even outperformed the continual learning baseline (for the ‘Cheetah backward’ related combinatorial task). It is suggested that the authors briefly analyse the reasons for this phenomenon.
6. There still seems to be a clerical error in the title and abbreviation.
Other Comments Or Suggestions: N/A. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and thoughtful questions. We address your questions below. Q: “how the synthetic experiences generated by the generative model differ from actual past experiences in terms of their representational accuracy?” A: Synthetic experiences in DRAGO are generated by a continually learned VAE-based generative model $G$ that **encodes and decodes both states and actions**, capturing the joint distribution of prior state-action pairs. In other words, the agent “dreams” trajectories from its learned world model. These synthetic experiences **approximate real past interactions**—if $G$ is well-trained, the sampled states and actions resemble those from earlier tasks, helping reinforce previously learned dynamics **without storing actual data**. We acknowledge that synthetic data may not be perfect (due to model approximation error; we also discuss this in Appendix F), but our design mitigates this: we **retrain $G$ after each task on a mix of new and generated old data** so it retains the ability to produce samples representative of all past tasks. Moreover, **DRAGO’s second component (Regaining Memories via Exploration)** complements generative rehearsal by actively revisiting important states in the real environment. This intrinsic reward-driven exploration addresses any gaps in the generative model’s coverage, ensuring that the world model doesn’t diverge from true environment dynamics. Q: Additional Continual Reinforcement Learning Baselines A: We agree that incorporating more recent state-of-the-art baselines will strengthen the evaluation. In the original submission, we compared against **naïve continual fine-tuning (Continual TDMPC)**, a **from-scratch retraining baseline**, and **EWC**.
To address the reviewer’s suggestion, we are **running two new baselines** and will include them on this link https://drive.google.com/file/d/18TdI9nPCT7MMQCQMmL1ZYxJkblhUI91i/view?usp=sharing once we get the results: **(1) Pseudo-rehearsal with world models**: a generative replay baseline where an agent uses a learned world model (or VAE) to rehearse past tasks’ experiences, akin to DRAGO’s Synthetic Experience Rehearsal but without our intrinsic exploration component and continual learning of the generative model. **(2) Bounded replay-buffer baseline**: an experience replay approach that stores a limited number of real past transitions (capped to a similar memory size as our generative model) and interleaves them during training on new tasks. This will provide a direct yardstick for DRAGO’s no-data approach, i.e., how well does compressing experiences into a model compare to simply saving raw data with an equal storage budget? Q: “How is the action sampled in continuous action space for synthetic data?” A: We sample actions jointly with states from the generative model, rather than picking random actions. This is crucial for continuous action domains: **random actions might not lead to meaningful or realistic transitions**, whereas our generative model produces plausible $(s, a)$ pairs grounded in past experience. Q: **Details on Sampling from $p_G(s,a;\theta)$ in Eq. (1)** A: In Equation (1) of the paper, $(\hat{s}, \hat{a}) \sim p_G(s,a;\theta)$ denotes drawing a state-action pair from the generative model’s distribution. As explained above, this is implemented by sampling from the VAE. We will clarify in the text that $p_G$ is the VAE’s generative distribution over state-action pairs. The sampled pair $(\hat{s}, \hat{a})$ is then fed into the **frozen previous dynamics model $T_{\text{old}}$** to predict the next state $\hat{s}' = T_{\text{old}}(\hat{s}, \hat{a})$.
This yields a synthetic transition $(\hat{s}, \hat{a}, \hat{s}')$ which is used (alongside real data from the new task) to train the current dynamics model. Thus, **the generative model provides realistic past states and actions, and $T_{\text{old}}$ ensures the next state is generated according to the learned physics**. Q: Train-from-scratch approach’s performance A: You are correct that in a few scenarios (notably the Cheetah run-backward task), the scratch baseline slightly overtakes the continual learning baseline. We observed this in our results: the continual learning method does **not fully eliminate plasticity loss**, and a sufficiently different new task can benefit from a fresh model. We hypothesize that in the backward-running tasks, the agent’s prior knowledge (largely acquired from forward-running dynamics) was less applicable and may even have somewhat **biased the exploration**. We thank the reviewer for pointing this out, and we will clarify this in the revision.
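As a concrete illustration of the sampling procedure described in this rebuttal (draw a latent from the prior, decode it jointly into a state-action pair, and let the frozen dynamics model label the next state), here is a minimal numpy sketch. The toy decoder and dynamics model are illustrative assumptions, not the authors' networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_synthetic_transitions(decode, T_old, n, latent_dim):
    # z ~ p(z) = N(0, I): sample latents from the VAE prior.
    z = rng.standard_normal((n, latent_dim))
    # Decode latents jointly into plausible (state, action) pairs.
    s_hat, a_hat = decode(z)
    # Label the next state with the frozen previous dynamics model T_old.
    s_next_hat = T_old(s_hat, a_hat)
    return s_hat, a_hat, s_next_hat

# Toy usage: a "decoder" that splits the latent into state and action halves,
# and linear stand-in dynamics.
decode = lambda z: (z[:, :2], z[:, 2:])
T_old = lambda s, a: s + 0.1 * a
s_hat, a_hat, s_next_hat = generate_synthetic_transitions(decode, T_old, n=8, latent_dim=4)
```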
Summary: The work aims to develop a new model-based reinforcement learning method that is trained on a sequence of tasks with shared dynamics and differing reward functions. The researchers assume that the environment's dynamics remain the same for all tasks. They use TD-MPC as the base approach and train a separate generative model (VAE) that simulates the generalized transition function using a dataset of the new task and synthetic data from the old version of the model. The current transition model is then trained with an error-minimization loss on the current dataset and a regularization term toward the old model's predictions on data generated by the generative model. Additionally, the authors propose using intrinsic rewards to encourage visits to states that are accurately predicted by both the old and current models. To evaluate the effectiveness of their approach, the authors compare it with traditional TD-MPC and an older EWC baseline on two domains: MiniGrid and MuJoCo. ## update after rebuttal Both before and after the rebuttal phase, I believe that the work has a certain novelty, and therefore I leave my current high assessment. Claims And Evidence: The main claims of the authors regarding the effectiveness of using the generative model for previous tasks are confirmed by experiments. Methods And Evaluation Criteria: The method itself is relatively new, and while it mainly utilizes the TD-MPC code base, it does provide an improvement in the continual learning process. I should note that TD-MPC also claims to have some form of multitasking ability and the capacity to work with multiple tasks at once. However, the authors do not elaborate on this in any detail. Instead, the authors use relatively simple vector-observation control environments to create a series of tasks that need to be solved. The method in question is limited to vector-observation environments only and is unlikely to be effective for observations presented in the form of images.
Theoretical Claims: The paper provides a derivation of the main loss function in Equation 5, which can be considered sufficient theoretical justification for the proposed method. Experimental Designs Or Analyses: It should be noted that the test and training tasks are not randomized; the authors perform experiments on fixed sequences of tasks. This raises questions about the generalizability of their results to other sequences and compositions of test tasks. Additionally, the authors use an old EWC continual learning baseline, which is a significant drawback and does not reflect the current state of the field. Supplementary Material: I have reviewed the source code of the proposed method. The README file contains instructions for reproducing the results, but there is no baseline code to ensure that the baselines are used correctly. Relation To Broader Scientific Literature: The authors did not adapt or demonstrate the effectiveness of other continual learning methods mentioned in their review in any way. Yes, the environment model is not used there, but perhaps this is not so necessary for such simple environments as MiniGrid and MuJoCo. Essential References Not Discussed: The authors have provided all the necessary references and methods. Other Strengths And Weaknesses: It would also be useful to compare the authors' approach with different curriculum learning scenarios based on the same sequence of tasks. That would also be a good baseline. Other Comments Or Suggestions: The authors did not define the notation $q_{\theta_i}$ in Equation 6. Questions For Authors: To what extent would curriculum learning scenarios also be effective for such task sequences? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your very positive evaluation and accept recommendation. We believe we can resolve your concerns. Q: TDMPC2’s Multitask Training A: We clarify that TDMPC2 is evaluated in a multitask regime: it is trained on all tasks jointly, with access to the full replay buffers from every task simultaneously and (implicitly) the task identity or reward function for each experience. This means TDMPC2 benefits from seeing all task data at once (no task ordering) and can leverage task-specific information during training. In contrast, **DRAGO** tackles tasks in a strict continual learning manner: tasks are presented sequentially, and our method does not utilize task IDs or any access to past task data/replay buffers once those tasks are finished. DRAGO must retain knowledge through its mechanisms (generative rehearsal and intrinsic reward) without being able to directly revisit past data. Q: Additional Continual Learning Baselines A: We agree that incorporating more recent state-of-the-art baselines will strengthen the evaluation. In the original submission, we compared against naïve continual fine-tuning (Continual TDMPC), a from-scratch retraining baseline, and EWC. To address the reviewer’s suggestion, we are **running two new baselines** and will include them on this link https://drive.google.com/file/d/18TdI9nPCT7MMQCQMmL1ZYxJkblhUI91i/view?usp=sharing once we get the results: **(1) Pseudo-rehearsal with world models**: a generative replay baseline where an agent uses a fixed pretrained world model (or VAE) to rehearse past experiences, akin to DRAGO’s Synthetic Experience Rehearsal but without our intrinsic exploration component. **(2) Bounded replay-buffer baseline**: an experience replay approach that stores a limited number of real past transitions (capped to a similar memory size as our generative model) and interleaves them during training on new tasks.
This will provide a direct yardstick for DRAGO’s no-data approach, i.e., how well does compressing experiences into a model compare to simply saving raw data with an equal storage budget? Q: Curriculum Learning A: Thank you for the interesting suggestion on curriculum learning. We agree that intelligently ordering tasks (e.g., from easier to harder or with gradually increasing complexity) could further improve continual learning performance. A curriculum might help the agent build up its world model in a more structured way, potentially reducing forgetting and improving transfer. However, designing an optimal curriculum is non-trivial and was beyond the scope of our current work, which focuses on general mechanisms applicable to any task sequence. We opted to evaluate DRAGO on diverse task sequences without assuming a favorable order. Nonetheless, exploring curriculum learning in conjunction with DRAGO is an exciting avenue for future work. We will note this in the discussion as a potential enhancement, as it could complement our approach by easing the learning progression through tasks. Q: Handling Image Observations A: Thank you for the suggestion. We would like to first point out that the domains we tested on (MiniGrid & DeepMind Control Suite) are two of the most popular RL benchmarks; they have been evaluated in a large number of prior papers and have been shown to be quite challenging environments. The tasks we designed for evaluating transfer performance are even more challenging on the DMC tasks, as they require the agent to learn to transition from one locomotion mode (jump, run, etc.) to another (run forward, run backward). While due to time constraints we cannot test the method in image-observation settings, we think this is an exciting and challenging problem for future work, as we mentioned at the end of Section 3.1, especially since we can replace the VAE with diffusion models that are capable of generating high-quality image data.
Q: Clarification of Notation $q_{\theta_i}$ in Eq. (6) A: We apologize for the confusion regarding the notation $q_{\theta_i}$ in Equation (6). In the context of our VAE-based generative model, $q_{\theta_i}$ denotes the encoder’s approximate posterior distribution for task $i$. In other words, $q_{\theta_i}(z \mid s,a)$ is the VAE encoder’s output: the probability distribution (in latent space) that approximates the true posterior of the latent variable $z$ given an observation-action pair $(s,a)$. The subscript $i$ indicates that this encoder (with parameters $\theta_i$) is the one learned up to task $i$ (since we train a new generative model $G_i$ for each task $i$ using both new and past data via rehearsal). We will explicitly clarify this in the revised paper. Essentially, Eq. (6) is the standard VAE loss: $L_{\text{gen}}(\theta_i) = \mathbb{E}_{(s,a) \sim D_{\text{gen}}}\Big[ -\mathbb{E}_{z \sim q_{\theta_i}(z \mid s,a)}[\log p_{\theta_i}(s,a \mid z)] + \mathrm{KL}\big(q_{\theta_i}(z \mid s,a) \,\|\, p(z)\big) \Big]$, where $q_{\theta_i}$ is the encoder’s distribution and $p_{\theta_i}$ is the decoder (generative distribution).
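For reference, a minimal numpy sketch of the standard VAE objective in Eq. (6), assuming a Gaussian decoder (so the reconstruction term reduces to squared error) and a diagonal-Gaussian encoder, for which the KL against the unit-Gaussian prior has a closed form. Names are illustrative, not the authors' implementation.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    # Reconstruction term: -log p(s, a | z); for a Gaussian decoder this is
    # (up to constants) the squared reconstruction error.
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ) per sample.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
    return float(np.mean(recon + kl))

# A perfectly reconstructed batch whose posterior already matches the prior
# incurs zero loss:
x = np.ones((5, 3))
mu = np.zeros((5, 2))
log_var = np.zeros((5, 2))
loss = vae_loss(x, x, mu, log_var)  # -> 0.0
```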
Summary: The authors introduce a new method (DRAGO) aimed at mitigating catastrophic forgetting in model-based RL in situations where previous experience cannot be stored. The authors propose to learn world model that compresses experience of previous tasks, and propose a novel intrinsic reward which encourages the policy to bridge the gap between different tasks, towards learning a more complete world model. The authors test their method empirically in various continual RL gridworld and continuous control domains, showing improvement over vanilla MBRL baselines and a previously successful continual-learning method (EWC). Claims And Evidence: The motivation lacks justification. For instance, are storage limitations really a bottleneck for policy learning in the present day? Can the authors qualify this with quantitative estimates? Similarly, do they have specific examples of privacy-preservation preventing the training of a generalist policy? The on-device deployment angle seems to most well justify the need for memory. However, there the justification suffers from another problem. Do the authors believe that online MBRL is likely to take place on-device, as opposed to in-context learning of some appropriately meta-trained foundation model (as is becoming standard for many AI applications)? The results in Figures 4 and 5 are convincing that this method improves over the chosen baselines. The authors test their method on both gridworlds and continuous control tasks, and in both settings they see improvement. The ablation study in Figure 6 is also convincing. It would be useful for the authors to include information on the number of seeds they ran for each method in the figure captions. As far as I can tell, the choice of continual RL tasks is not standard, and was rather determined by the authors. This raises the concern that these tasks could have been cherry-picked specifically to showcase the benefit of this method. 
Can the authors comment on why they did not use pre-defined standard benchmarks here, and whether they have any additional data or arguments that could allay a reader's concern that these tasks have been deliberately chosen to fit the specific setup in which the method is likely to succeed? Methods And Evaluation Criteria: The DRAGO algorithm is well-described, and I believe that sufficiently many details are provided that this work would be reproducible. The loss functions in equations (5) and (6) appear to be correct to me. The intrinsic reward in equation (7) is well-motivated, if a little ad hoc. It would be useful to have a system diagram summarizing the various components of the full system (VAE, dynamics model, intrinsic reward). The integration with TD-MPC is rather confusingly described to me. As I understand it, the philosophy of TD-MPC is to be encoder-only. However, here the authors specify that they need a decoder for state prediction. Can the authors comment on why they did not stick with the encoder-only philosophy of TD-MPC? It is also unclear to me how the "learner" and "reviewer" portions of the agent are combined when using MPC for planning. Can the authors comment on why they did not simply learn one value function on the sum of the intrinsic and extrinsic reward, and how the reviewer and learner combine to produce the policy? This sentence is also confusing to me: "For each new test task, we randomly initialize the reward, policy and value models and reuse only the world model (dynamics)". Why are the reward, policy and value models thrown away for test tasks if these are kept throughout training? Is this because of an empirical performance gain, or because it would be unfair to keep them around for testing, for some reason? More justification is needed here. Theoretical Claims: N/A Experimental Designs Or Analyses: The evaluation setup is insufficiently well described.
Is RL taking place during the "test" tasks, or is this purely in-context learning? I assume that RL is still taking place, but then the designation of these as "test" tasks seems rather odd (usually one thinks of a train-test split, where there is no in-weights learning taking place on the test split). More explanation is required to convince me that the evaluation procedure is reasonable, rigorous, and reflects a likely real-world deployment setup as motivated in the introduction. Supplementary Material: No. Relation To Broader Scientific Literature: Both the MBRL and continual RL literatures are well-reviewed. One line of work that it may be useful to additionally mention in this regard is world models learned from data (e.g. Genie https://arxiv.org/pdf/2402.15391, UniSim https://arxiv.org/abs/2310.06114). Much as large language models meta-trained on internet-scale data have led to foundation models capable of long in-context continual adaptation, the same may be true of world models. It would be interesting to hear the authors' thoughts on this complementary direction and how their work could fit into that narrative, were it to transpire to be successful at scale. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: N/A. Questions For Authors: Please see the questions in my responses above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and for acknowledging the strong empirical results of DRAGO, as well as the clarity of our writing. We address your concerns below: Q: Bounded memory and on-device MBRL A: We agree that **bounded-memory continual learning** is most critical in constrained or privacy-sensitive scenarios. In practice, real agents **cannot always store unlimited replay data**. For example, (i) *on-device learning*: robots or mobile devices have finite storage and often must learn incrementally without offloading data (for privacy or connectivity reasons); (ii) *privacy*: prior task data may contain sensitive information that cannot be archived or sent to a server. These real-world constraints motivate our setting, where the agent must learn sequentially **without full past data**. **On-device online MBRL** addresses scenarios where a pretrained universal simulator is unavailable or too large to deploy. DRAGO’s contribution is to some extent **complementary to foundation models**: for users who have access to a foundation model, our approach could be used to continually fine-tune that model on new tasks in a memory-efficient way. Conversely, in domains not covered by a foundation model (or where data cannot leave the device), DRAGO enables continual learning from scratch. We would also like to emphasize that even though large-scale world-model pretraining is very popular right now, it is neither the only viable path nor sufficient on its own for learning a complete world model; complementary techniques (especially continual learning) will remain necessary, and research on them should continue. Q: Evaluation Procedure A: The “test tasks” in our evaluation are **new tasks presented to the agent after the sequence of training tasks**, meant to assess how well the learned world model can be reused. We apologize for the confusion – these test tasks do involve further RL training of the policy (i.e.
the agent is still learning to maximize reward on the new task), and we only load the pretrained world model’s parameters during these tests. In other words, when the agent faces a test task, only the world model is loaded, to **test whether it is a good initialization** for the new task. Q: Integration with TD-MPC, Decoder, and Learner vs. Reviewer Agents A: *Decoder usage*: DRAGO is built on TD-MPC, but we introduce a **variational generative model** (encoder–decoder) for state-action pairs as part of our Synthetic Experience Rehearsal module (Section 3.1). TD-MPC by itself uses only an encoder and latent dynamics (planning entirely in latent space), so a decoder was not needed in the original TD-MPC. In our case, however, the **decoder is essential** – it allows us to **reconstruct synthetic state-action examples** from the latent generative model of past tasks. These reconstructed experiences are fed to the world model to rehearse past dynamics. In short, **without a decoder, the agent couldn’t simulate prior states/actions explicitly**, so we added one to enable **generative replay** of past experiences (we will clarify this design choice in the text). *Learner vs. Reviewer in planning*: In Section 3.2 and Algorithm 1, we introduce two parallel actor-critic pairs – a **“learner” agent** (the original policy optimizing extrinsic reward) and a **“reviewer” agent** (an auxiliary policy optimizing an intrinsic reward). Both **share the same world model** and run simultaneously during training, but they have separate policy networks and reward/value heads. The learner uses the environment’s reward $r^e$ to solve the task (just like standard TD-MPC), while the reviewer receives a designed intrinsic reward $r^i$ (Equation 7 in the paper) that encourages revisiting state transitions that the **previous task’s model** could predict confidently.
In practice, the two agents alternate or parallelize interactions with the environment in each training iteration (we will clarify this scheduling in the revision). During planning for action selection, each agent uses Model Predictive Control (CEM in our case) with its own reward model: the learner plans to maximize extrinsic return, while the reviewer plans to maximize intrinsic return. They do not directly interfere with each other’s action selection; they simply contribute different trajectories for training. *Why not a single combined reward/value?* We chose to keep intrinsic and extrinsic rewards separate (two policies) after initial experiments indicated that combining them can be counterproductive. A single policy optimizing a sum of extrinsic+intrinsic rewards tended to trade off one against the other, sometimes neglecting the task objective in favor of curiosity (or vice versa). By using a dedicated reviewer agent for the intrinsic objective, we ensure the extrinsic task performance remains the learner’s sole focus, while the reviewer safely explores for retention. The world model benefits from both data sources without the learner’s policy being distracted. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. You have addressed many of my concerns. However, my concern about the choice of tasks remains unaddressed. I am not certain that these tasks have not been cherry-picked to demonstrate the success of this method. Can the authors comment on the choice of continual RL tasks and whether they have appeared in previous literature? My concerns here are slightly mitigated by the additional baselines from other methods that the authors are preparing for Reviewer XZ8k, so I am minded to increase my score if the authors can provide some more justification and / or results on a wider range of task orderings. --- Reply to Comment 1.1.1: Comment: Thank you for the follow-up comment. We appreciate your continued engagement with our work. 
We would like to emphasize that each individual task, i.e., MiniGrid goal reaching, Cheetah/Walker run, jump, etc., is one of the most common standard RL tasks, used in many well-known papers such as TD-MPC, the MBRL algorithm that we built DRAGO on. Specifically, for MiniGrid: We chose distinct rooms in MiniGrid because each room highlights a different region of the state space, yet all rooms share underlying transition dynamics. Although the tasks are laid out in a way to ensure minimal overlap in reward-relevant regions, the “door-connecting” layout is standard in many MiniGrid experiments, and we believe it realistically captures cases where local behaviors (e.g., navigating a specific room) must be stitched together across tasks. DeepMind Control Suite (Walker, Cheetah): These tasks are standard continuous-control benchmarks. Our continual-learning versions simply vary reward functions (e.g., running vs. jumping vs. walking backward), which is a common way to induce different behavioral modes while preserving the same underlying physics. As we focus on knowledge retention in this paper, we design tasks like jump&run and jump2run to make sure we can test at the same time whether the agent forgets previous knowledge and whether the agent is learning an increasingly complete world model in the process of continual learning (so it can quickly solve the combination of previous tasks). Overall, we designed the tasks to (a) ensure that each new task reveals a different aspect of the dynamics (rather than reusing the same states or transitions repeatedly), and (b) require knowledge retention across tasks for better performance. Our main goal was not to artificially inflate our method’s advantages, but rather to demonstrate it on tasks that are not trivially overlapping.
As the reviewer also mentioned, we have included two new baselines and have updated the results in the anonymous link: https://drive.google.com/file/d/18TdI9nPCT7MMQCQMmL1ZYxJkblhUI91i/view?usp=sharing. We compared to one replay-based MBRL baseline and one pseudo-rehearsal MBRL baseline, and DRAGO still clearly outperforms both, indicating that the combination of synthetic experience rehearsal and targeted intrinsic exploration is crucial for robust knowledge retention.
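As a toy illustration of the synthetic experience rehearsal idea discussed in this thread (all names are hypothetical; DRAGO's actual decoder is a learned neural network attached to the variational generative model), generative replay amounts to sampling latents from the prior and decoding them into synthetic state-action pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained decoder: a fixed linear map from a
# 4-dim latent to a 4-dim state plus a 2-dim action.
W = rng.normal(size=(6, 4))

def rehearse(n):
    # Sample latents from the generative model's prior and decode synthetic
    # (state, action) pairs that stand in for past-task experience when
    # training the world model on the current task.
    z = rng.normal(size=(n, 4))
    sa = z @ W.T
    return sa[:, :4], sa[:, 4:]

states, actions = rehearse(32)
```

The decoded pairs are then mixed into the world-model training batches alongside real data from the current task, which is the rehearsal mechanism the rebuttal describes.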
Angle Domain Guidance: Latent Diffusion Requires Rotation Rather Than Extrapolation
Accept (poster)
Summary: This paper introduces Angle Domain Guidance (ADG), a simple and effective sampling algorithm designed to improve the performance of text-to-image latent diffusion models, particularly under high guidance weights. The authors focus on the shortcomings of Classifier-Free Guidance (CFG), specifically its tendency to cause norm amplification in the latent space, which leads to color distortions and oversaturation in generated images. The paper provides a comprehensive theoretical analysis showing that CFG’s linear extrapolation mechanism results in sample norm inflation and anomalous diffusion phenomena. Based on this insight, ADG is proposed as an alternative that focuses on angular alignment rather than magnitude extrapolation in the latent space, ensuring better text-image alignment without sacrificing image fidelity. Experimental results on the COCO dataset demonstrate that ADG achieves superior performance in terms of CLIP Score, ImageReward, and FID across a wide range of guidance weights, outperforming both CFG and CFG++. Claims And Evidence: The paper claims that (1) CFG leads to significant color distortions at high guidance weights due to norm amplification in latent space, (2) ADG effectively mitigates these issues by controlling magnitude variation while enhancing angular alignment, and (3) ADG offers better text-image alignment, improved color fidelity, and superior perceptual quality. The evidence is compelling: both theoretical analyses and extensive empirical experiments support these claims. Figures and quantitative metrics (CLIP, ImageReward, FID) consistently demonstrate ADG’s robustness and effectiveness at high guidance weights, where CFG and CFG++ degrade. The ablation studies further substantiate the role of angular constraints in preventing catastrophic failures, solidifying the evidence behind the paper’s key claims. 
Methods And Evaluation Criteria: The proposed ADG method is well-motivated, deriving from a sound analysis of the pitfalls of existing CFG. The authors introduce a geometrically inspired approach by emphasizing angular guidance in the latent space, consistent with the assumption that latent representations follow high-dimensional spherical Gaussian distributions. Evaluation criteria include standard and accepted metrics in text-to-image generation: CLIP Score (semantic alignment), ImageReward (human preference alignment), and FID (distributional similarity). The authors run experiments on the COCO dataset using Stable Diffusion v3.5 and validate compatibility with advanced samplers such as DPM-Solver. The evaluation is adequate, though it relies primarily on automated metrics without additional human evaluations, which are often necessary for perceptual alignment validation in generative modeling. Theoretical Claims: The paper provides a theoretical framework analyzing the shortcomings of CFG. Specifically, Theorem 3.2 introduces the norm amplification effect of CFG, and Theorem 3.3 shows anomalous diffusion, which offers insight into how CFG might induce undesirable latent space behaviors at high guidance scale. The authors extend beyond prior analyses by introducing the concept of surface classes and offering proofs in high-dimensional settings with multiple components. However, while the theoretical arguments are sound, the practical extension of these results to complex real-world latent spaces (beyond Gaussian mixtures) could be discussed in more depth. Experimental Designs Or Analyses: The experimental design is sound and systematic. The authors test ADG across varying guidance weights ($\omega=2$ to $\omega=10$), compare against CFG and CFG++, and include ablation studies to validate the contribution of angular constraints and normalization. 
Experiments on both Stable Diffusion v3.5 (COCO dataset) and Stable Diffusion v2.1 with DPM-Solver highlight ADG’s generality. Key metrics are reported clearly, with ADG consistently outperforming baselines in ImageReward (often used as a proxy for human preference) and CLIP Score. Ablation studies demonstrate the necessity of angular constraints to avoid instability. Supplementary Material: The supplementary material is comprehensive, containing extended theoretical proofs (Theorems 3.2 and 3.3), implementation details for ADG and its variants, and deeper discussions on comparisons with CFG++ and other recent methods. The supplementary section also details derivations for extensions of ADG to flow-matching models. While not all of these details are fully presented in the main paper, the supplement appears to include necessary mathematical rigor and algorithmic specifics to ensure reproducibility. The inclusion of ablation results and algorithmic variants in the appendix supports the thoroughness of the study. Relation To Broader Scientific Literature: The proposed paper provides a method to improve image generation, while the method could be further investigated to diffusion models on other data, e.g., protein, text, etc., as the conditional generation with diffusion model plays a crucial role in various machine learning tasks. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper’s primary strength lies in its novel conceptual shift from linear extrapolation to angular guidance in latent diffusion sampling. ADG is a theoretically motivated and practically effective approach that mitigates common issues with CFG, especially at high guidance weights. The theoretical analysis is comprehensive and extends prior literature in meaningful ways. Additionally, the method is shown to be compatible with various samplers and diffusion frameworks, highlighting its flexibility. 
On the weakness side, the empirical validation is narrowly focused on latent diffusion models (i.e., Stable Diffusion). While the authors acknowledge ADG’s heuristic nature and its dependence on the latent space structure of variational autoencoders, a deeper analysis of potential limitations when applied to non-latent diffusion models (e.g., pixel-space diffusion) would be helpful. Moreover, computational costs associated with ADG, particularly in high-dimensional spaces, are not discussed in detail. Other Comments Or Suggestions: N/A Questions For Authors: 1. Do similar trends of CFG also arise in pixel-space diffusion models? 2. The paper assumes that the latent space follows a spherical Gaussian distribution; could the authors elaborate or provide additional visualization to validate this claim? 3. While the original CFG uses linear interpolation, one natural way is to use spherical linear interpolation (slerp) to perform angular guidance. Is there any reason why slerp was not tried as the design choice for angular guidance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive comments. We provide our responses below. ### 1. **Clarification on Computational Costs of ADG** **Reviewer Concern:** The reviewer mentions that computational costs associated with ADG, particularly in high-dimensional spaces, are not discussed in detail. **Response:** We appreciate the reviewer’s observation. ADG introduces negligible computational overhead compared to standard sampling procedures. It does **not require additional neural network evaluations**; instead, its overhead consists solely of basic vector operations (e.g., normalization and angle clipping). To quantify this: - On **SD3.5-large** (latent dimensionality ≈ $2 \times 10^5$), the additional operations per inner loop cost around $1 \times 10^6$ FLOPs. - In contrast, one full sampling step requires approximately $9 \times 10^{13}$ FLOPs (measured using the `thop` package). - Empirical measurements over 100 generations on an A100 GPU show: - **CFG**: Avg. generation time = **6.74s** - **ADG**: Avg. generation time = **6.72s** This confirms that **ADG does not increase runtime**, and minor variation may be attributed to system noise. ### 2. Behavior of CFG in Pixel-Space Diffusion Models **Reviewer Concern:** The reviewer asks whether similar trends of norm amplification and image degradation occur in **pixel-space diffusion models**. **Response:** Yes, similar degradation patterns are observed in pixel-space diffusion models under high guidance weights. As in latent models, CFG-induced norm amplification pushes pixel values toward extreme ranges, leading to oversaturation and unnatural contrast. [visual example](https://files.catbox.moe/dqymkj.png) These observations reinforce our hypothesis that norm amplification is a core issue, not limited to latent spaces. ### 3. 
Justification for Assuming Spherical Gaussian Latent Space **Reviewer Concern:** The reviewer requests further elaboration or visualization to validate the assumption that the latent space follows a spherical Gaussian distribution. **Response:** During the pretraining of VAEs used in latent diffusion models, the latent space is regularized to approximate a standard multivariate Gaussian distribution through a Kullback-Leibler (KL) divergence loss term. We acknowledge that Gaussian priors in VAEs are often idealized, and real-world latent spaces may deviate from this assumption due to model imperfections or data complexity. However, this Gaussian prior remains a practical and effective approximation that helps explain why directional information carries more semantic meaning than magnitude information. Samples drawn from a high-dimensional Gaussian distribution tend to concentrate around a thin spherical shell, where the angular relationship between latent variables retains critical semantic information. Consequently, emphasizing angular alignment, as done in Angle-Domain Guidance (ADG), effectively preserves text-image consistency while mitigating the undesirable effects of norm amplification. This is also one of the reasons why we emphasize that, although ADG is theoretically inspired and has some theoretical guarantees, it remains a heuristic algorithm. ### 4. Choice of ADG over Spherical Linear Interpolation (SLERP) **Reviewer Concern:** The reviewer suggests that using **spherical linear interpolation (SLERP)** could be a natural alternative to ADG for performing angular guidance and inquires why it was not considered. **Response:** While **SLERP** is an elegant mathematical alternative, it is not suitable for high guidance weights in our setting. The reasons are as follows: - SLERP inherently performs interpolation, not extrapolation. 
In cases where the guidance weight $\omega > 1$, SLERP **extrapolates** beyond the two endpoints, leading to uncontrolled norm growth. - Our proposed ADG method, by contrast, **constrains the norm** of the generated sample under high guidance weights, as demonstrated by **Proposition 4.1**. - Consider a simple example where: $$ \hat x_{0, c} = [1, 0]^\top, \quad \hat x_{0, \emptyset} = [0.5, 0.01]^\top. $$ Using SLERP with $\omega = 5$, the result is: $$ \text{slerp}(\hat x_{0, \emptyset}, \hat x_{0, c}, 5) \approx [15.9, -1.2]^\top, $$ which exhibits extreme sensitivity when the vectors are nearly aligned. In contrast, ADG mitigates this instability by ensuring angular alignment while maintaining controlled magnitude growth. We chose ADG over SLERP due to its norm control under high guidance weights, which is crucial for maintaining image fidelity. Once again, thank you for your constructive feedback and for considering our paper for acceptance. We will revise our paper accordingly.
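The slerp instability described in this rebuttal can be reproduced with a short numeric check. The exact slerp convention is an assumption on our part (spherical interpolation of direction plus geometric interpolation of the norms, which matches the rebuttal's numbers):

```python
import numpy as np

def slerp(a, b, t):
    # Spherical interpolation of direction with geometric interpolation of
    # norms; for t > 1 this *extrapolates* past b, which is how a guidance
    # weight w would be applied. (Slerp conventions vary; this is one choice.)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    ua, ub = a / na, b / nb
    theta = np.arccos(np.clip(np.dot(ua, ub), -1.0, 1.0))
    direction = (np.sin((1 - t) * theta) * ua + np.sin(t * theta) * ub) / np.sin(theta)
    return (na ** (1 - t)) * (nb ** t) * direction

x0_c = np.array([1.0, 0.0])
x0_u = np.array([0.5, 0.01])
out = slerp(x0_u, x0_c, 5.0)  # ≈ [15.9, -1.28]: the norm explodes
```

Even though both inputs have norm at most 1, the extrapolated result has norm near 16, illustrating the extreme sensitivity when the two predictions are nearly aligned.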
Summary: This paper focuses on the problem of color distortions in the generated images when classifier-free guidance is set to a high value. This paper identifies that these distortions come from the amplification of sample norms in the latent space. To address this problem, this paper proposes the Angle Domain Guidance (ADG) algorithm. ADG constrains magnitude variations while optimizing angular alignment, thereby mitigating color distortions while preserving the enhanced text-image alignment achieved at higher guidance weights. Experimental results demonstrate the effectiveness of ADG. Claims And Evidence: The main claim of this paper is to address the color distortion problem when classifier-free guidance is set to a high value. The experiment results successfully validate the claim. Methods And Evaluation Criteria: The method and evaluation are pretty complete, but I still have some questions: - I am curious about the performance of a higher CFG, e.g., larger than 20. This would further show the effectiveness of ADG. - Since sdv3.5 is pretrained on text-to-image generation, it's better to show more results containing complex text prompts as guidance (instead of simple prompts that describe a single object). This would further show how text alignment changes as we increase the guidance scale. - More experiments on state-of-the-art models (e.g., Flux) would make the claim more solid. Theoretical Claims: The theoretical claims are correct. Experimental Designs Or Analyses: See "Methods And Evaluation Criteria". Supplementary Material: Yes, I have read most parts of the supplementary material. Relation To Broader Scientific Literature: The contribution of this paper can be further applied to broader generation models such as motion generation and video generation, which would have a broader impact on the research field. Essential References Not Discussed: The references are complete and comprehensive.
Other Strengths And Weaknesses: This paper addresses an important issue in this field and can lead to a broader impact on more general generation methods. The paper writing of this paper is clear and easy to follow. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive comments. We provide our responses below. ### **1. Performance of Higher CFG Values** **Reviewer Comment:** > I am curious about the performance of a higher CFG, e.g., larger than 20. This would further show the effectiveness of ADG. **Response:** We appreciate this insightful suggestion. We have conducted additional experiments with higher guidance weights, specifically at weight = 20. The results demonstrate that ADG maintains stable performance even under extreme guidance conditions, whereas CFG suffers from severe color distortions and semantic misalignment. These results highlight the robustness of ADG at high guidance levels. We will include qualitative visualizations at weight = 20 in the supplementary material. [View Images](https://files.catbox.moe/lnh69r.png) ### **2. Evaluation with Complex Text Prompts** **Reviewer Comment:** > Since sdv3.5 is pretrained on text-to-image generation, it's better to show more results containing complex text prompts as guidance (instead of simple prompts that describe a single object). This would further show how text alignment changes as we increase the guidance scale. **Response:** Thank you for this helpful suggestion. To evaluate ADG under more complex prompts, we curated 500 complex text prompts using GPT. Our experiments show that ADG significantly outperforms CFG under these conditions. For guidance weight = 8: - ADG achieves a CLIP score of 0.355 and an IR score of 1.566 - CFG scores lower with a CLIP score of 0.338 and an IR score of 0.766 These results confirm that ADG preserves semantic alignment and image quality even with complex instructions. We will include detailed metrics and visual comparisons in the supplementary material. [View Images](https://files.catbox.moe/p5ayb9.png) ### **3. Experiments on State-of-the-Art Models (e.g., Flux)** **Reviewer Comment:** > More experiments on state-of-the-art models (e.g., Flux) would make the claim more solid.
**Response:** Thank you for raising this important point. We initially considered incorporating results from state-of-the-art open-source models, including **Stable Diffusion 3.5 large** and **Flux.1 [dev]**. However, during our experiments, we discovered that Flux.1 [dev] applies guidance weight as a **model input** rather than through the conventional CFG mechanism. Upon further investigation of the Flux architecture and related blog posts, we learned that Flux.1 [dev] is derived via guidance distillation from Flux.1 [pro], which is unfortunately **not open-source**. Since access to Flux.1 [pro] is required to modify the guidance mechanism, it was impossible for us to conduct experiments on the Flux family. Once again, thank you for your constructive feedback and for considering our paper for acceptance. We will revise our paper accordingly. --- Rebuttal Comment 1.1: Comment: The rebuttal effectively addressed and resolved the concerns I had. After reviewing the response in detail, I feel that my initial reservations have been adequately clarified. As a result, I am satisfied with the explanation provided and will maintain my original rating of "accept." --- Reply to Comment 1.1.1: Comment: Thank you very much for your kind feedback and for taking the time to carefully review our rebuttal. We truly appreciate your thoughtful evaluation and are glad to know that our clarifications addressed your concerns.
Summary: The paper presents angle domain guidance (ADG), an alternative to classifier-free guidance (CFG) for conditional diffusion models. The key observation is that CFG leads to excessively large sample norms, causing oversaturated colors in the generated images. The paper claims that this is a result of CFG's linear extrapolation scheme in the latent space, presents an analysis on norm magnitude in a simplified setting where the target distribution is a Gaussian mixture model, and proposes to instead extrapolate in the angular domain to align the directions of latents, thereby effectively controlling norm magnitude. The experimental results demonstrate that ADG outperforms CFG given large guidance weights. Claims And Evidence: - The paper claims that CFG amplifies sample norm, which is indicative of poor sample quality. This claim is substantiated by empirical evidence (Figure 2) showing that norm magnitude is proportional to guidance weight, and color values are positively correlated with norm magnitude. The paper further presents a theoretical analysis (Section 3) which echoes the empirical findings. The analysis is performed in a simplified setting, where the target distribution is assumed to be a mixture of Gaussians. Nevertheless, this analysis sheds light on the challenge and informs the design of the new guidance algorithm. - The paper claims that ADG mitigates norm amplification, thereby enhancing sample quality. This is backed by theoretical (Proposition 4.1) and experimental results (Section 5). To further strengthen this claim, I encourage the authors to plot norm against guidance weight (similar to Figure 2a) to find out whether ADG can empirically control sample norm as suggested by the theoretical result. - The paper claims that ADG generalizes across samplers and can work with flow-based models. This is supported by experimental results with the DPM sampler (Table 3) and an extension of ADG that fits the flow-matching formulation (Appendix F). 
However, no experimental results are provided for flow-based models. Methods And Evaluation Criteria: - The proposed method aims to address a limitation of CFG, namely poor sample quality under large guidance weights. The method is motivated by a theoretical analysis and is both simple and effective. - The benchmarks cover multiple aspects of sample quality, namely image quality (FID), image-text alignment (CLIP score), and human preference (IR). The same set of metrics has been used by the community for the evaluation of text-to-image diffusion models. Additional qualitative results further highlight the strength of the proposed method. Theoretical Claims: The paper provides a theoretical analysis on the impact of CFG on sample norms. The analysis is performed in a simplified setting, where the target distribution is assumed to be a mixture of Gaussians. I did not carefully check the correctness of the proofs, yet the claims make intuitive sense and are partially justified by the experimental results. Experimental Designs Or Analyses: - The experiments compare ADG against two baselines, namely CFG and CFG++, under varying guidance weights. An ablation study is performed to understand what design choices impact the performance of ADG. I do not have major concerns about the experimental setting, although I encourage the authors to showcase qualitative results on more diverse text prompts. - One potential baseline could be CFG with normalization. That is, performing CFG at each sampling step, followed by normalizing and re-scaling the estimated x_0's, similar to Algorithm 6. This could be a simpler remedy to the norm amplification issue compared to ADG. Supplementary Material: I did not read the supplementary material in great details. Relation To Broader Scientific Literature: The proposed method fits in the literature of conditional diffusion models and provides an alternative to the widely used CFG for improving sample quality and condition following. 
Essential References Not Discussed: I am not familiar with recent literature on the analysis of diffusion model sampling. My impression is that the paper has adequately covered the most related works, given the quite extensive discussion in the supplementary material. Other Strengths And Weaknesses: - Overall, the paper is well motivated and clearly written. - I encourage the authors to expand on what they mean by "non-commutativity of the tilting process with the forward process" (L126-127) and briefly explain why Equation 10 holds. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive comments. We provide our responses below. ### **1. Plot norm against guidance weight for ADG** **Reviewer Comment:** > To further strengthen this claim, I encourage the authors to plot norm against guidance weight (similar to Figure 2a) to find out whether ADG can empirically control sample norm as suggested by the theoretical result. **Response:** We appreciate this suggestion. We have added a new plot to **Figure 2a**, showing the norm against guidance weight for ADG. [View Figure 2a](https://files.catbox.moe/mtxaye.png) The experimental results indicate that **ADG effectively controls sample norms**, preventing norm amplification even at high guidance weights. ### **2. Experimental results for flow-based models** **Reviewer Comment:** > However, no experimental results are provided for flow-based models. **Response:** Thank you for highlighting this point. We would like to clarify that our primary results were obtained on **SDv3.5**, and Stable Diffusion series starting from version 3 employs flow-based models[a]. To avoid any ambiguity, we will emphasize this point more explicitly in the camera-ready version. ### **3. CFG with normalization as a potential baseline** **Reviewer Comment:** > One potential baseline could be CFG with normalization. That is, performing CFG at each sampling step, followed by normalizing and re-scaling the estimated x_0's, similar to Algorithm 6. This could be a simpler remedy to the norm amplification issue compared to ADG. **Response:** This is a good suggestion. Since our motivation stems from the observation that **angular information in the x_0 domain better aligns with semantics**, while **norm amplification** under high CFG weights degrades sample quality, we designed ADG to enhance angular consistency. The method proposed by the reviewer is a simple way to control sample norms. We implemented and tested this potential baseline (CFG and normalization in the x_0 domain). 
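For concreteness, here is a minimal sketch of such a normalization baseline (our own reading of the reviewer's suggestion; the rescaling target — the conditional prediction's norm — is an assumption):

```python
import numpy as np

def cfg_with_x0_normalization(x0_c, x0_u, w):
    # Standard CFG linear extrapolation in the x0 domain...
    x0_cfg = x0_c + (w - 1.0) * (x0_c - x0_u)
    # ...followed by rescaling the guided prediction back to the conditional
    # prediction's norm, which suppresses norm amplification directly.
    return x0_cfg * (np.linalg.norm(x0_c) / np.linalg.norm(x0_cfg))

x0_c = np.array([1.0, 0.0])
x0_u = np.array([0.5, 0.5])
out = cfg_with_x0_normalization(x0_c, x0_u, 8.0)
```

Note that this keeps CFG's extrapolated *direction* and only fixes the magnitude, which is exactly the respect in which it differs from an angle-domain update.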
Experimental results show that although this method performs slightly worse than ADG, it significantly outperforms the original CFG. Our analysis suggests that the angle between the predicted x_0 and the unconditional prediction under this baseline is smaller than the angle between the conditional prediction and the unconditional prediction in ADG, limiting semantic alignment despite effective norm control. [View comparison](https://files.catbox.moe/e0scpg.png) ### **4. Explanation of non-commutativity and Equation 10** **Reviewer Comment:** > I encourage the authors to expand on what they mean by "non-commutativity of the tilting process with the forward process" (L126-127) and briefly explain why Equation 10 holds. **Response:** Due to space constraints, we could not elaborate on this point in the main text. However, as noted in [c], **non-commutativity** means that the weighted gradient used in CFG does not align with the true gradient of the tilted target distribution for t>0. To illustrate this, consider a simple example where: - The unconditional distribution is modeled as a Gaussian with zero mean and variance 10. - The conditional distribution is another Gaussian with zero mean and unit variance. When applying classifier-free guidance (CFG) with a guidance weight of 2, the tilted distribution becomes a Gaussian with: - Zero mean. - A variance of 10/19. After applying the forward process, which introduces Gaussian noise over time (considering a Variance Exploding SDE, i.e., VE-SDE), the resulting distribution is convolved with a Gaussian kernel whose variance increases with time. To simplify the computation, assume that the time-dependent Gaussian kernel at a particular time t* is modeled as a Gaussian with variance 1. In this setting: - The gradient of the CFG-modified distribution is a weighted sum of the gradients of the conditional and unconditional distributions, which results in a gradient proportional to **-10/11 \* x**. 
- However, this gradient does **not** match the gradient of the tilted distribution obtained after applying the forward process, which is proportional to **-19/29 \* x**, highlighting the **non-commutativity** between the tilting process and the forward process. Once again, thank you for your constructive feedback and for considering our paper for acceptance. We will revise our paper accordingly. ## **References** - [a] [Stable Diffusion 3](https://stability.ai/news/stable-diffusion-3) - [b] Exploring diffusion and flow matching under generator matching. arXiv - [c] What does guidance do? A fine-grained analysis in a simple setting. NeurIPS 2024.
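The 1-D Gaussian example in this rebuttal can be verified exactly with rational arithmetic (a sketch mirroring the rebuttal's numbers; the variable names are ours):

```python
from fractions import Fraction

w = 2
var_c, var_u = Fraction(1), Fraction(10)  # conditional / unconditional variances
noise = Fraction(1)                       # forward-process noise variance at t*

# Tilt first, then add noise: p_c^w * p_u^(1-w) is a zero-mean Gaussian with
# precision w/var_c + (1-w)/var_u; convolving with the kernel adds `noise`.
var_tilted = 1 / (Fraction(w) / var_c + Fraction(1 - w) / var_u)  # 10/19
true_score_coef = -1 / (var_tilted + noise)                       # -19/29

# CFG instead combines the scores of the *noised* conditional and
# unconditional distributions with the same weights.
cfg_score_coef = w * (-1 / (var_c + noise)) + (1 - w) * (-1 / (var_u + noise))  # -10/11
```

The two coefficients differ (-19/29 vs. -10/11), which is precisely the non-commutativity of the tilting and forward processes described above.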
Summary: This paper attempts to analyze the distributions of conditional generation vs. unconditional generation, and claims that in some cases the direction of classifier-free guidance may be "abnormal", i.e., leads to low-probability-density areas, and proposes "Angle-Domain Guidance Sampling" (ADG) as a remedy. UPDATE after author response: The author response makes me understand the paper and some statements better. In particular, I appreciate the authors provide extra comparative examples with APG. I'd like to raise the rating to weak accept. Claims And Evidence: 1. After two hours of reading, I feel very confused about the theoretical derivations, especially I don't understand why they lead to the "Angle-Domain Guidance Sampling" (ADG). In particular, 1) The theorems and lemmas in section 3.2 don't seem to lead to the claim that "CFG leads to excessively large norms of features". 2) More confusingly, when the authors present motivations of ADG, they said "The focus on magnitude differences is secondary, and in the case of high guidance weights, it can even be detrimental", so this means that large norms are not important, but angles are important? Then why in the abstract, "these distortions stem from the amplification of sample norms in the latent space"? Methods And Evaluation Criteria: 1. As listed in "Claims And Evidence", the derivations are confusing and I don't understand why ADG is a logical consequence of the theorems. 2. The empirical evaluation seems to be fine. Theoretical Claims: Same as those listed in "Claims And Evidence". Experimental Designs Or Analyses: The empirical evaluation seems to be fine. However, perhaps the most important baseline is adaptive projected guidance (APG) [a], which is not mentioned or compared with. [a] Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models. ICLR 2025. Supplementary Material: I quickly read sections D and E of the appendix.
Relation To Broader Scientific Literature: ADG seems to be highly similar to adaptive projected guidance (APG) [a], although ADG is obfuscated by some non-essential transformations in Algorithm 1 (arccos, discount then cos), whereas APG is not cited (it is not a concurrent work, since APG has been on arXiv since Oct 2024). [a] Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models. ICLR 2025. Essential References Not Discussed: [a] Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models. ICLR 2025. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: 1. Please center your derivations around your main claims/methods. 2. Preferably, illustrate math derivations with actual examples of generation. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful comments. While the overall evaluation was critical, your constructive feedback is highly valuable and will help us improve both the clarity and impact of our work. Below, we provide detailed responses to your key concerns. ### 1. Comparison with Adaptive Projected Guidance **Reviewer Concern:** The reviewer notes the absence of discussion on Adaptive Projected Guidance (APG) [ICLR 2025], suggests high similarity with our proposed ADG method, and questions whether ADG offers substantive novelty.  **Response:** We appreciate the reviewer for pointing out this important and timely work. APG appeared on arXiv in October 2024, shortly before our submission, and was therefore not included in our original analysis. We will include appropriate citations and a dedicated comparison section in the final version. To clarify the distinctions between ADG and APG, we summarize the differences below:  1. **Attribution of Image Degradation:** - APG attributes image oversaturation and degradation to the **parallel component** of the difference vector $\Delta \hat x_0$ relative to $\hat x_{0, c}$, denoted as $\Delta \hat x_0^\parallel$. - In contrast, we attribute oversaturation and distortion to the **large norm** of $\hat x_{0, CFG}$: $$ \hat x_{0, CFG} = \hat x_{0, c} + (\omega - 1) \Delta \hat x_0. $$ Removing the parallel component slows norm growth but **fails to address norm amplification**. Additional experiments show that normalizing $\hat x_{0, CFG}$ mitigates image degradation more effectively. [Visual example](https://files.catbox.moe/2msq2b.png)  2. **Algorithmic Difference:** - APG modifies $\Delta \hat x_0$ in CFG: $$ \Delta \hat x_{0, APG} = \eta \Delta \hat x_0^\parallel + \Delta \hat x_0^\perp, \quad \hat x_{0, APG} = \hat x_{0, c} + \omega \Delta \hat x_{0, APG}, $$ where $\eta < 1$. However, APG retains **linear enhancement**, leading to norm growth at high guidance weights. 
- ADG, in contrast, performs angular-domain updates, effectively constraining norm growth (Proposition 4.1). [ADG vs. APG Visual](https://files.catbox.moe/1nnve4.png)  3. **Empirical Performance and Stability:** Under high guidance weights, **ADG successfully mitigates oversaturation and artifacts**, whereas **APG-generated images exhibit significant oversaturation**, which reduces text-image alignment. [visual examples](https://files.catbox.moe/c76emo.png) ADG outperforms CFG and CFG++ in both alignment (CLIP) and human preference metrics (ImageReward), especially under high guidance weights. [quantitative results](https://files.catbox.moe/cjzl1o.png)  ### 2. Clarification on Theoretical Derivations and Algorithm Motivation **Reviewer Concern:** The reviewer seeks clarification on how the theoretical derivation leads to the conclusion that "CFG leads to excessively large norms of features." Additionally, they express confusion about the connection between the theoretical derivations and the proposed method. The reviewer also suggests illustrating the mathematical derivations with concrete generation examples where possible.  **Response:** Theorem 3.2 shows that **linear extrapolation in CFG** shifts samples toward the outer regions of the unconditional distribution, increasing the norms of the latent variables (features).  The paper's logical flow is: 1. **CFG Limitation:** Norm amplification and anomalous diffusion for surface-class samples (Theorem 3.2 and Theorem 3.3). 2. **Source of Norm Amplification:** Linear enhancement of $\hat{x}_0$ leads to larger norms. 3. **ADG Motivation:** Angular-domain updates mitigate norm amplification (Proposition 4.1). Fig. 3 illustrates this phenomenon with a Gaussian mixture model. The green, blue, and orange clusters represent surface classes, while the red cluster denotes non-surface classes. Higher guidance weights push surface-class samples away from the unconditional distribution, amplifying norms.  ---  ### 3. 
Clarification on “Magnitude Differences Are Secondary” Statement **Reviewer Concern:** The reviewer finds the statement *"The focus on magnitude differences is secondary, ..."* ambiguous.  **Response:** We appreciate the reviewer's feedback. The intended insight is:  - **Linear enhancement** methods (e.g., CFG) modify both the magnitude and direction of $\hat{x}_0$. - While **magnitude enhancement** may improve semantic alignment slightly, it becomes detrimental at high guidance weights due to excessive norm amplification, resulting in image degradation. #### Revised Statement: "Linear enhancement methods simultaneously modify the norm and direction of $\hat{x}_0$. While norm adjustments may marginally improve semantic alignment, excessive norm amplification at high guidance weights becomes detrimental, resulting in oversaturation and distortion." We appreciate your feedback and will revise the paper to clarify contributions and address concerns. --- Rebuttal Comment 1.1: Comment: Now I understand the paper and some statements better. In particular, I appreciate the authors provide extra comparative examples with APG. I'd like to raise the rating to weak accept. In the meantime, I encourage the authors to release the source code of ADG for reviewers to verify. If my own tests agree with the claims in the paper, I'd like to further raise the rating to accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful and constructive feedback, and for your willingness to raise the rating to “weak accept.” We greatly appreciate your recognition of our additional comparisons, including the evaluation with APG. Regarding your suggestion on code availability, we would like to clarify that, as mentioned in the **final paragraph of the Introduction in the manuscript**, the source code has been made publicly accessible via the anonymous repository: [https://anonymous.4open.science/r/ADGuidance/](https://anonymous.4open.science/r/ADGuidance/). 
The core implementation of the ADG algorithm used in our experiments is located in the file `method/ADG_SD3.py`, under the function `ADG_SD3`. In addition, the repository provides: - a `README.md` file with instructions on how to use the ADG method, and - a `vis.ipynb` notebook that reproduces most of the visualizations shown in the main paper. Please note that, upon acceptance, the anonymous repository will be replaced with a permanent public GitHub repository to ensure long-term accessibility and reproducibility. We sincerely appreciate your attention to the reproducibility of research and your encouragement for open-sourcing. If your own tests confirm our reported results, we would be truly grateful for your further consideration in raising the rating to “accept.”
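As a side note for readers following this thread, the two update rules quoted in the rebuttal above can be written out in a few lines of NumPy. This is purely an illustrative sketch: the toy vectors and the values of ω and η are assumptions, not taken from either paper, and the formulas follow the rebuttal's notation.

```python
import numpy as np

# Toy stand-ins for the conditional estimate and the guidance direction
# (illustrative values only, not from the paper).
x0_c = np.array([1.0, 2.0, 2.0])    # \hat{x}_{0,c}
delta = np.array([0.6, -0.3, 0.9])  # \Delta \hat{x}_0
omega, eta = 7.0, 0.5               # guidance weight and APG down-weighting

# CFG: linear extrapolation, \hat{x}_{0,CFG} = \hat{x}_{0,c} + (omega - 1) * delta.
x0_cfg = x0_c + (omega - 1.0) * delta

# APG: split delta into components parallel and perpendicular to \hat{x}_{0,c},
# down-weight the parallel part by eta < 1, then extrapolate as quoted above.
unit = x0_c / np.linalg.norm(x0_c)
delta_par = np.dot(delta, unit) * unit
delta_perp = delta - delta_par
x0_apg = x0_c + omega * (eta * delta_par + delta_perp)

# Both updates still grow the norm relative to the conditional estimate,
# which is the "norm amplification" the rebuttal argues APG does not fully fix.
print(np.linalg.norm(x0_c), np.linalg.norm(x0_cfg), np.linalg.norm(x0_apg))
```

With these toy numbers the three norms come out to roughly 3.0, 8.7 and 8.4: removing part of the parallel component slows the growth but does not stop it, which is the rebuttal's central point.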
Continuous machine learning on Euclidean graphs with unordered vertices
Reject
Summary: The authors introduce a new invariant for Euclidean graphs called Nested Centered Distribution that captures many-body unordered relative distances in a hierarchical way, and show that this graph invariant is complete (can distinguish non-isomorphic Euclidean graphs), robust (Lipschitz continuous under coordinate perturbation) and efficient to compute (polynomial time). Claims And Evidence: As claimed in the paper, a complete, Lipschitz continuous invariant is proposed for Euclidean graphs with formal proof, shown by Theorem 4.6. Methods And Evaluation Criteria: Please refer to **Experimental Designs Or Analyses**. Theoretical Claims: I did not check the correctness of the proof, but it reads as technically sound. Experimental Designs Or Analyses: While it is good to see that the experimental part validates the effectiveness of the proposed NCD in distinguishing non-isomorphic molecules, from a machine learning perspective it would be interesting to see if we can directly leverage NCD as graph features to perform graph-level tasks (like predicting molecular properties on QM9). Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The complete invariant of Euclidean graphs could also be of independent interest for other domains like graph theory. Essential References Not Discussed: None as far as I know. Other Strengths And Weaknesses: - Strength: I think the paper's results of a complete, robust and computationally efficient graph invariant are neat, interesting and fundamental. - Weaknesses: the experiment part can be made more solid if we could directly leverage NCD as graph features to perform some graph-level tasks on the molecule dataset. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 6FoT, Thank you for the highly supportive review. >a complete, Lipschitz continuous invariant is proposed for Euclidean graphs with formal proof, shown by Theorem 4.6. Thank you for correctly summarizing the main theoretical result. >The complete invariant of Euclidean graphs could also be of independent interest for other domains like graph theory. Thank you for mentioning the broader value of Theorem 4.6, which was proved for all embedded graphs in any dimension. >Strength: I think the paper's results of a complete, robust and computationally efficient graph invariant are neat, interesting and fundamental. Thank you for highlighting the fundamental strength. >the experiment part can be made more solid if we could directly leverage NCD as graph features to perform some graph-level tasks on the molecule dataset. The NCD invariant was used to distinguish all chemically different molecules in the two large databases of molecules with 3D positions of unordered atoms: QM9 of 130K+ and GD of 31+ million entries. >it would be interesting to see if we can directly leverage NCD as graph features to perform graph-level tasks (like predicting molecular properties on QM9). Yes, you are right that, after designing complete and Lipschitz continuous invariants of molecular graphs, property prediction is the next natural step, though outside the scope of the paper. Without guaranteed completeness, any prediction based on incomplete invariants fails to output different properties of non-equivalent molecules that are mapped to the same point in a latent space. If any concerns remain, we would be happy to clarify.
Summary: The paper proposes a new graph invariant descriptor Nested Centered Distribution (NCD), which satisfies completeness, Lipschitz continuity, invertibility, and computability for all Euclidean graphs embedded in $\mathbb{R}^n$. Claims And Evidence: The paper provides detailed mathematical constructions and proofs (Theorem 4.6) for establishing that the NCD is a complete invariant under rigid motion, that it is Lipschitz continuous, and that it is invertible. These proofs offer clear support for the theoretical claims. Methods And Evaluation Criteria: The methods and datasets (QM9) are well-suited to the problem. Theoretical Claims: In my reading, the definitions of several graph invariants and proofs of NCD properties are clear. Experimental Designs Or Analyses: 1. No outstanding benchmarking is presented in the paper. Even though the authors mention machine learning in the title and Section 2, they do not provide any model learning results and provide solely the descriptor construction and measurement. It is highly recommended that the authors provide the benchmarking over QM9 property prediction with and without NCD. 2. Only QM9 is used in the experiment section which does not suffice to justify the effectiveness of NCD. 3. The presentation of the experiment section needs improvement. The authors can put Table 3 and Table 4 in Appendix and try to highlight the contribution of NCD instead of other descriptors. Supplementary Material: Yes, I reviewed Appendix D and it appears to be sound. Relation To Broader Scientific Literature: The paper integrates ideas from geometric invariants and graph isomorphism to provide a complete and efficiently computable invariant that overcomes limitations of earlier methods. Essential References Not Discussed: Key references are well-cited.
Other Strengths And Weaknesses: **Strength**: The NCD seems significant both theoretically and for applications such as molecular machine learning because it achieves completeness, Lipschitz continuity, invertibility, and efficient computability. **Weakness**: The experiment part is inadequate to justify the effectiveness of NCD as no learning procedure is involved to show the performance of NCD. Other Comments Or Suggestions: I do not have other comments. Questions For Authors: I do not have other questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer biqw, thank you for the detailed review. >The paper provides detailed mathematical constructions and proofs (Theorem 4.6) for establishing that the NCD is a complete invariant under rigid motion, that it is Lipschitz continuous, and that it is invertible. These proofs offer clear support for the theoretical claims. Thank you for correctly summarizing the proved theoretical claims. >The experiment part is inadequate to justify the effectiveness of NCD as no learning procedure is involved to show the performance of NCD. Theorem 4.6 deserves its own recognition as a full solution to Problem 1.1. The learning procedure was demonstrated on simpler invariants that distinguished all chemical types of atoms in two large molecular datasets, QM9 and GD. >The methods and datasets (QM9) are well-suited to the problem. Thank you for supporting the choice of experimental data. >No outstanding benchmarking is presented in the paper. The paper did not promise any benchmarking. While ICML also accepts theoretical work, we have also included Table 7, showing that all past approaches to predict chemical elements have a maximum accuracy of 86%, while the 4-layer network using only 3 distances to the nearest neighbors achieved 98% accuracy for QM9. We additionally checked that 4 distances (rounded to 3 decimal places in Angstroms) achieve 100% separation of all chemical elements in QM9. >the definitions of several graph invariants and proofs of NCD properties are clear. Thank you for highlighting the clarity of definitions and proofs.
>Even though the authors mention machine learning in title and Section 2, they do not provide any model learning results The machine learning results are described in Table 4 and lines 354-359: "Though the data was skewed towards more popular elements H (hydrogen) and C (carbon), a default network in TensorFlow with 80/20 split for train/test achieved over 98% accuracy in predictions of the chemical element of a central atom by distances to only k = 3 nearest neighbors, see Table 4. Appendix A has all implementation details." >provide a complete and efficiently computable invariant that overcomes limitations of earlier methods. Thank you for confirming that the limitations of past methods were overcome. >Only QM9 is used in the experiment section which does not suffice The beginning of experimental section 5 described the much larger database GD (GEOM_drugs of 31+ million entries). Here is the main conclusion from experiments on this second much larger database before Table 5: "All chemical compositions in QM9 and GD were distinguished by the vector SRD of Euclidean distances (rounded to 3 decimal places in ˚A) from the molecular center of mass to 5 and 7 farthest atoms, respectively." Also, GD was mentioned in lines 408-413: "The comparisons of molecular graphs from QM9 and GD imply that all chemically different molecules are rigidly different, see the smallest distance NBM ≈ 0.07˚A on complete invariants in Table 5. So the map {molecules} → {graphs on atomic centers (without chemical elements)} is injective on rigid classes and can be inverted on its image". >Key references are well-cited. Thank you for supporting the thorough literature review. >put Table 3 and Table 4 in Appendix Yes, we can put Table 3 in Appendix. Table 4 importantly shows machine learning predictions of chemical elements with 98% accuracy. 
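As an illustration of the nearest-neighbour distance features discussed above (the per-atom inputs used to predict chemical elements in Table 4), here is a minimal sketch. The 3D coordinates below are made up for the example, not taken from QM9:

```python
import numpy as np

def knn_distance_features(points, k):
    """For each point, return the sorted distances to its k nearest neighbours.

    These per-atom feature vectors are invariant under rigid motion and under
    any permutation of the remaining atoms.
    """
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude the self-distance
    return np.sort(dists, axis=1)[:, :k]

# Made-up 3D positions of 4 atoms (illustrative, not real QM9 data).
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.5, 0.0],
                [0.0, 0.0, 2.0]])
feats = knn_distance_features(pts, k=3)

# The features are unchanged by a rigid motion (rotation about z plus a shift).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
moved = pts @ R.T + np.array([3.0, -1.0, 0.5])
assert np.allclose(feats, knn_distance_features(moved, k=3))
```

In the rebuttal's setting, such k-vectors (with k up to 3 or 4) are the inputs to a small classifier predicting the chemical element of the central atom.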
>highlight the contribution of NCD instead of other descriptors This contribution in Theorem 4.6 is well-described in your review above, quoted below: "establishing that the NCD is a complete invariant under rigid motion, that it is Lipschitz continuous, and that it is invertible." However, we will additionally highlight the importance of the complete NCD versus earlier incomplete invariants. >recommended that the authors provide the benchmarking over QM9 property prediction with and without NCD. Thank you for your helpful recommendation for future work. The chemical types of atoms are the most important properties (determining all other properties together with geometry), which are now 100% detected by very fast invariants without the complete NCD. However, the complete NCD invariants are still needed to distinguish all chemically different molecules in QM9, see Table 5 and Figure 3. In addition to the fully solved Problem 1.1, here is the most important practical conclusion from the experiments on the large molecular datasets QM9 and GD: precisely enough geometry fully determines chemistry, which was previously impossible to verify without complete and Lipschitz continuous invariants. If any concerns remain, we would be happy to clarify. --- Rebuttal Comment 1.1: Comment: I still believe that highlighting training and benchmarking on existing datasets is key to demonstrating your model's capability. Your module is designed primarily to capture more geometric information for subsequent machine-learning tasks. However, without an integrated learning procedure, I cannot guarantee that this enhancement will be beneficial. I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for the reply. >Your module is designed primarily to capture more geometric information for subsequent machine-learning tasks.
We agreed that the words "machine learning" are not necessary because the primary result is a complete invariant of all embedded graphs under rigid motion, computable in polynomial time in the number of unordered vertices. This computational reduction of exponential complexity deserves recognition because ICML accepts theory. In addition to the theorems solving Problem 1.1, Tables 4, 5, 7 demonstrated that the most important experimental task of reconstructing chemical elements from geometry can be done by using only up to 7 distances to atomic neighbours, checked on the world's largest dataset GD of 31+ million 3D molecular conformations. Hence using any more complicated descriptors or machine learning instead of the theoretically and practically justified hierarchy of invariants (from the fastest to complete) will only waste time and resources without guarantees beyond training datasets. Here is the final argument: triangles (cycles on 3 vertices) are uniquely determined by 3 interpoint distances, which can be written in increasing order a<=b<=c for a unique representation (complete invariant). Hence there is no need to run any machine learning for recognizing triangles or predicting their properties because everything is uniquely determined by the complete invariant a,b,c. It was an embarrassment that even 4 points in the plane had no better than a brute-force invariant using all 4!=24 permutations. Now the problem has been solved in polynomial time in a fixed dimension for all embedded graphs on any number of vertices.
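The triangle argument above can be made concrete in a short sketch (toy coordinates; the sorted triple a <= b <= c is the complete invariant described in the reply):

```python
import numpy as np
from itertools import combinations

def triangle_invariant(vertices):
    """Sorted pairwise distances a <= b <= c: a complete invariant of
    triangles under rigid motion and vertex permutation."""
    d = [np.linalg.norm(p - q) for p, q in combinations(vertices, 2)]
    return tuple(sorted(d))

# A 3-4-5 right triangle in the plane (toy example).
tri = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])

# Permuting vertices or applying a rigid motion leaves the invariant fixed.
perm = tri[[2, 0, 1]]
theta = 1.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
moved = tri @ R.T + np.array([5.0, -2.0])

assert np.allclose(triangle_invariant(tri), triangle_invariant(perm))
assert np.allclose(triangle_invariant(tri), triangle_invariant(moved))
```

Two triangles are congruent exactly when these triples coincide, which is why no learning step is needed for this toy case.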
Summary: This paper proposes a framework for graph isomorphism testing on Euclidean graphs by defining certain invariants. A sweep of invariants and corresponding metrics is introduced, with different time complexities. Experiments on central atom prediction on QM9 and Geom-Drugs have been conducted to verify the efficacy of the proposed approach. Claims And Evidence: I did not find any outstanding claims made in the paper that require particular evidence. Methods And Evaluation Criteria: The evaluation is a bit preliminary since the task of central atom prediction is quite synthetic. More convincing empirical results would come from leveraging the proposed method on tasks like geometric graph property regression, such as quantum property prediction on QM9, where a widely-adopted benchmark exists. See [1], [2] for example. [1] Satorras et al. E(n) equivariant graph neural networks. In ICML'21. [2] Fuchs et al. SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks. In NeurIPS'20. Theoretical Claims: I did not check the proofs. Experimental Designs Or Analyses: The experiment setup is quite preliminary. See comments above. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The target problem of isomorphism testing on geometric graphs is interesting, though how a method that excels at this problem would benefit practical tasks such as quantum chemical property prediction remains unclear in this paper. Essential References Not Discussed: Missing related works [1] [2] [3]. [1] Satorras et al. E(n) equivariant graph neural networks. In ICML'21. [2] Cen et al. Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks? In NeurIPS'24. [3] Dym et al. Equivariant Frames and the Impossibility of Continuous Canonicalization. In ICML'24.
Other Strengths And Weaknesses: The summary of different invariants and corresponding metrics with different complexity is highly valuable. Other Comments Or Suggestions: The paragraph at line 337 with the following sentence "The ICML guide for reviewing application-driven ML says that “novel ideas that are simple to apply may be especially valuable”." from my perspective is not suitable to be presented in paper writing. Questions For Authors: 1. Can the proposed approach benefit more realistic tasks like QM9 property prediction? 2. Can the proposed approach find applicability outside molecule data? 3. How is the method compared with the embeddings proposed in [1]? [1] Dym et al. Low-dimensional invariant embeddings for universal geometric learning. In Foundations of Computational Mathematics. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer Sxz9, thank you for the detailed review. >The summary of different invariants and corresponding metrics with different complexity is highly valuable. Thank you for highlighting this strength. >the task of central atom prediction is quite synthetic This task was studied in many past papers (shown in Table 7) because predicting the chemical identity of every atom determines the full chemistry of a molecule and hence all its subsequent properties. >Satorras et al. E(n) equivariant We will cite this paper, which studied equivariant networks. However, invariance is a stronger property than equivariance. Any linear combination of atomic coordinates, e.g. the center of mass, is equivariant but cannot distinguish rigid shapes of molecules under rigid motion, see the review in lines 144-155. Also, any discontinuous map can send near-duplicate molecules with almost equal properties to distant representations in a latent space, which unnecessarily complicates property predictions. >Fuchs et al. SE(3)-Transformers Cited in line 145. >practical tasks such as quantum chemical property prediction remains unclear in this paper The paper did not promise property predictions but focused on a more fundamental problem: to design a complete and continuous invariant of all embedded graphs. Theorem 4.6, fully solving this Problem 1.1, already advanced the field of graph classification. For property predictions, the paper made the first crucial step of reconstructing all chemical types by using even simpler invariants, which are distances to only 4 and 5 atomic neighbors for the QM9 and GD datasets, respectively. >Cen et al. Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks? We will cite this paper, which studies equivariants (not invariants) of geometric graphs with ordered vertices, because the input contains an adjacency matrix depending on a vertex order.
All constructions use vertex indices i,j,s without guaranteed invariance under permutations of vertices. >[3] Dym et al. Equivariant Frames and the Impossibility of Continuous Canonicalization. We will cite this paper studying point clouds, not graphs, which form an exponentially larger collection of classes under rigid motion than point clouds. See Fig. 1 and the end of the page 2: "For a fixed set of m vertices in general position, one can choose any of m(m − 1)/2 edges and produce 2^{m(m−1)/2} non-isometric graphs. Problem 1.1 for arbitrary graphs is computationally much harder than for point clouds due to exponentially many different graphs on the same vertex set." >Can the proposed approach benefit more realistic tasks like QM9 property prediction? Yes, the QM9 property prediction essentially needs complete invariants of molecular graphs because any incomplete invariant can map non-isometric graphs to the same point in a latent space without any chance to predict different properties of the underlying molecules. >Can the proposed approach find applicability outside molecule data? Yes, Theorem 4.6 was proved in appendices C and D for any dimension n. >How is the method compared with the embeddings proposed in [1]? Dym et al. Low-dimensional invariant embeddings This paper is cited in line 122. Here is the quote from the middle of their page 29: "the computational effort involved in computing the invariants in our constructions grows superpolynomially in n". Here n is the number of points. Our invariants have polynomial time in the number m of points by Theorem 4.6, e.g. cubic in the plane. Moreover, the invariants are Lipschitz continuous, while continuity is not even mentioned in [1]. >the following sentence "The ICML guide for reviewing application-driven ML says that “novel ideas that are simple to apply may be especially valuable”." from my perspective is not suitable We will remove this sentence, though this was the exact quote from the ICML guide. 
If any concerns remain, we would be happy to clarify. --- Rebuttal Comment 1.1: Comment: Thanks for the response. After reading the rebuttal, the following concerns still remain: 1. I do not agree that Cen et al. targets geometric graphs with ordered vertices. The adjacency matrix can indeed be permuted along with the permutation of vertices, and the vertex indices i,j will be permuted correspondingly. I do not agree that Cen et al., which clearly studies graphs that naturally do **not** have vertex order, assumes any order on the vertices as claimed by the authors in the rebuttal. 2. Though theoretical contributions are presented in this work, adding more experiments on leveraging the invariant to some important learning problems on geometric graphs will be of great significance. Indeed there are many cheap yet effective ways of integrating the proposed approach on benchmarks like QM9, but the exploration is never taken in this work. Therefore I will keep my score. --- Reply to Comment 1.1.1: Comment: > I do not agree that Cen et al., which clearly studies graphs that naturally do not have vertex order, assumes any order on the vertices as claimed by the authors in the rebuttal. If the outputs of Cen et al. do not depend on the vertex indices that were used in the constructions, could you please refer to specific claims that prove the invariance under permutations of vertices? This invariance can hold in some simple cases, e.g. one can take the sum of edge-lengths at a specific vertex (all expressed via vertex indices) and then the total sum over all vertices, which gives the double total length of the whole graph, independent of any vertex order. However, this and many other invariants are incomplete. If you think that any past work designed complete, Lipschitz continuous and polynomial-time invariants of all embedded graphs, please specify exact references with theorem numbers.
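The simple permutation-invariant example mentioned in this reply (summing edge lengths at every vertex gives twice the total edge length, independent of vertex order) can be sketched as follows; the toy coordinates and adjacency matrix are assumptions for illustration:

```python
import numpy as np

def double_total_length(points, adjacency):
    """Sum over all vertices of the edge lengths at that vertex.

    Each edge is counted at both endpoints, so this equals twice the total
    edge length of the embedded graph, and the value is independent of how
    the vertices are ordered."""
    n = len(points)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                total += np.linalg.norm(points[i] - points[j])
    return total

# A path graph on 3 vertices embedded in the plane (toy example).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])

# Relabel the vertices: permute the points and the adjacency consistently.
perm = [2, 0, 1]
pts_p = pts[perm]
adj_p = adj[np.ix_(perm, perm)]

assert np.isclose(double_total_length(pts, adj),
                  double_total_length(pts_p, adj_p))  # order-independent
```

As the reply notes, such a scalar is invariant under relabeling yet incomplete: many non-isometric graphs share the same total edge length, which is why completeness is the harder requirement.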
>adding more experiments on leveraging the invariant to some important learning problems on geometric graphs will be of great significance. The invariants have been experimentally demonstrated on the world's largest databases of 3D molecules with unordered atoms: QM9 (130K+ entries) and GEOM (31M+ entries). If you know more significant databases, please give references. >there are many cheap yet effective ways of integrating the proposed approach on benchmarks like QM9, but the exploration is never taken in this work. This work did not promise to consider any benchmarks on QM9, because we solved the more fundamental problem of designing a complete, Lipschitz continuous and polynomial-time invariant of all embedded graphs on unordered vertices in a fixed dimension. Any optimization for QM9 benchmarks will always be restricted to QM9 without practical guarantees beyond this finite dataset. Our contribution is similar to developing a space rocket going beyond all previously possible destinations achieved by simpler transportation (easier invariants like pairwise distances and PDD, which can distinguish many but not all embedded graphs). Hence there is little sense in asking to use a space rocket for benchmarking travel between all possible countries in the world. Since international travel has been earlier solved by simpler airplanes, there is no need to fire a space rocket from each country to any other to demonstrate the capability of rocket science. Rockets are needed when all simpler transportation tools cannot help.
FlexTok: Resampling Images into 1D Token Sequences of Flexible Length
Accept (poster)
Summary: This paper presents a novel method for tokenizing an image into a one-dimensional token sequence, which allows for flexible image representation and processing. Most existing VAE/VQVAE methods employ quantization on 2D grids, so the number of tokens is proportional to the image size. This paper proposes a novel VQ method that resamples images of varying sizes into a fixed-size sequence. It can be combined with an AR architecture for image generation. The paper presents thorough experiments analyzing different modules and hyperparameters for generation. The experiments show performance comparable to that of existing SOTA image generation methods. ## update after rebuttal I thank the authors for their detailed response. Most of my concerns have been resolved. However, as the reconstruction uncertainty and diffusion/AR sampling will affect the method's performance, the main paper should include a detailed discussion of them in the final version. Considering several revisions are required, I will keep my initial rating. Claims And Evidence: 1. The paper argues that the token counts of previous methods are proportional to image size, while the proposed method is independent of size and is instead affected by the complexity of the image. The claim is clear, but it lacks experiments to support it. The method quantizes the image into a 1-D token sequence, but all experiments are done at 256x256 resolution. The paper does not discuss: a) Does the image size affect the number of tokens? For larger images, are more tokens needed for representation? b) How is the complexity of an image defined? The paper lacks an experiment to analyze this. If an image's content is 'simple', does the method require only a fixed number of tokens to represent it, regardless of the image size? 2. FlexTok is an image tokenizer, which should losslessly compress and reconstruct an image. However, in Figure 3, when fewer than 16 tokens are used, the reconstructed image differs from the original.
It is normal that reconstruction worsens with limited tokens, but the content should remain similar to the original images. Given this result, can it still be called a tokenizer? It behaves more like a generation model. Methods And Evaluation Criteria: The problem is presented clearly and the method can solve it. Theoretical Claims: The theoretical claims are clear and correct. Experimental Designs Or Analyses: 1. The paper lacks the experiments described in the 'Claims And Evidence' part. 2. FlexTok is an image tokenizer. Ideally, it should losslessly compress an image into tokens and then reconstruct it. The experiments present only two figures analyzing reconstruction, and lack sufficient experiments and comparisons with other methods, especially 2-D grid tokenization methods. Furthermore, Figure 4 only presents rFID, which can only compare the GT and prediction distributions. PSNR would better quantify the compression loss. 3. The decoder of FlexTok is a diffusion-type model. Does the initial noise affect the reconstruction quality and appearance during inference? Is this uncertainty at inference optimal for reconstruction? Why not employ a VQGAN decoder with the corresponding supervision? The paper should provide more discussion. 4. For generation, an AR architecture is employed. The AR sampling strategy (such as top-k and top-p) and diffusion tricks both affect generation quality and diversity. Which one is more important? The paper lacks a discussion of this. Supplementary Material: I have reviewed the supplementary materials. Relation To Broader Scientific Literature: No further specific contributions to the broader scientific literature. Essential References Not Discussed: All important references are included in the paper. Other Strengths And Weaknesses: All weaknesses have been presented in 'Experimental Designs Or Analyses' and 'Claims And Evidence'. Other Comments Or Suggestions: No more comments or suggestions. Questions For Authors: No more questions.
Code Of Conduct: Affirmed. Overall Recommendation: 3
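For context on the reviewer's point about AR sampling, here is a minimal sketch of the two standard truncation rules (top-k and top-p) applied to a toy next-token distribution; the probabilities below are made up for illustration and are not from the paper:

```python
import numpy as np

def top_k_filter(probs, k):
    """Keep the k most probable tokens and renormalize; zero out the rest."""
    out = np.zeros_like(probs)
    idx = np.argsort(probs)[-k:]
    out[idx] = probs[idx]
    return out / out.sum()

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability >= p."""
    order = np.argsort(probs)[::-1]          # tokens sorted by probability
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1     # first prefix reaching mass p
    out = np.zeros_like(probs)
    keep = order[:cutoff]
    out[keep] = probs[keep]
    return out / out.sum()

probs = np.array([0.5, 0.25, 0.15, 0.07, 0.03])
tk = top_k_filter(probs, k=2)    # keeps the 2 most probable tokens
tp = top_p_filter(probs, p=0.8)  # keeps tokens 0, 1, 2 (cumulative 0.9 >= 0.8)
```

Both rules trade diversity for quality by truncating the tail of the distribution; the rebuttal's observation is that generation quality is relatively stable under these AR choices but sensitive to the diffusion-decoding guidance.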
Rebuttal 1: Rebuttal: We thank reviewer jx6L for the thoughtful and constructive feedback. Below we address the main points raised: **1. Relation between image size, complexity, and token count** This is a good point, and we haven't explored this explicitly yet. We performed all our experiments at 256x256 resolution specifically to enable direct comparison with standard methods (e.g., VQ-GAN, LlamaGen). While FlexTok could support variable-resolution inputs (e.g. using architectures like NaViT), that's a separate research direction we leave for future work. We also agree the interplay between image complexity, resolution, and token counts is an interesting open question. **2. “Lossless” compression** We appreciate this observation about reconstruction quality at low token counts. It's important to clarify that all image tokenizers are inherently lossy, just to different degrees. Some tokenizers (e.g., SEED (Ge et al. 2023)) focus purely on semantic features without pixel accuracy, while others (e.g., VQ-GAN) are more pixel-aligned. FlexTok sits in between, explicitly transitioning from semantic-level reconstructions at low token counts (<16 tokens) toward highly pixel-aligned reconstructions as token count increases (e.g. at 256 tokens). We explicitly measure pixel-level reconstruction fidelity using Mean Absolute Error (MAE), which steadily improves with more tokens (Fig. 4). Regarding the rFID metric, we agree that additional reconstruction metrics would strengthen our analysis; we'll include PSNR and SSIM results compared directly to baseline tokenizers like VQ-GAN (see the comparison in response to reviewer ekCv). **3. Why a diffusion-type decoder rather than VQGAN?** We chose a diffusion-type decoder (rectified flow) exactly because it models conditional uncertainty. 
When using few tokens, the degree of compression is high and reconstructions naturally have uncertainty; the diffusion decoder handles this gracefully, producing plausible, semantically coherent outputs rather than blurry averages. As the number of tokens increases, the uncertainty naturally reduces, making reconstructions progressively more accurate and deterministic (see image reconstructions in Appendix J.1). A VQGAN-style decoder wouldn't offer this flexible control over reconstruction uncertainty. **4. AR sampling vs. diffusion sampling impact** We found both AR sampling (top-k, top-p, temperature) and diffusion sampling parameters influence image quality and diversity. AR sampling tends to be stable across reasonable settings (see Appendix F), while diffusion decoding, particularly adaptive projected guidance (Sadat et al., 2024), had a significant impact on final image quality. In short: both matter, but diffusion guidance is particularly critical. We'll clarify this explicitly. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. Another question on the uncertainty. A VAE/VQVAE attempts to encode an image and reconstruct it. Although your method can use all tokens for reconstruction, your proposed diffusion decoder still involves uncertainty in reconstruction: different initial noise would result in different images, perhaps differing in some details. Have you compared reconstructions under varying initial noise? This part should be discussed in the paper. --- Reply to Comment 1.1.1: Comment: Thanks for the interesting follow-up question. To some extent, this reconstruction uncertainty is quantified through per-image reconstruction metrics such as MAE and DreamSim, measured between input images and their corresponding k-token reconstructions. As shown in Fig. 
4, these metrics demonstrate a roughly log-linear improvement with increasing token counts, indicating that reconstructions become progressively closer to the original input and therefore necessarily more deterministic. This behavior is also visually apparent in the reconstructions provided in Appendix J.1. We expect that providing even more tokens as conditioning would further reduce reconstruction variance. To explicitly quantify the effect of initial noise variation, we conducted an additional experiment where we decoded identical token sequences 10 times using different random seeds and measured the average pairwise DreamSim similarity across reconstructions. We observed that reconstruction variability rapidly decreases with an increased number of conditioning tokens, highlighting that stronger conditioning signals lead to more deterministic outputs. We will include this analysis and discussion in the camera-ready version.
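The seed-variation experiment described in this reply can be illustrated with a minimal sketch; `toy_decode` is a hypothetical stand-in for the FlexTok flow decoder (not the authors' code), and plain per-pixel MSE is used here in place of DreamSim:

```python
import itertools
import numpy as np

def avg_pairwise_distance(decode, tokens, seeds):
    """Decode the same token sequence under several random seeds and
    return the mean pairwise MSE between the resulting images."""
    images = [decode(tokens, seed) for seed in seeds]
    pairs = itertools.combinations(images, 2)
    return float(np.mean([np.mean((a - b) ** 2) for a, b in pairs]))

# Toy stand-in decoder: more conditioning tokens -> less seed-dependent noise.
def toy_decode(tokens, seed):
    rng = np.random.default_rng(seed)
    noise_scale = 1.0 / len(tokens)  # stronger conditioning, lower variance
    return np.full((8, 8), np.mean(tokens)) + noise_scale * rng.normal(size=(8, 8))

few_tokens = avg_pairwise_distance(toy_decode, [0.5], seeds=range(10))
many_tokens = avg_pairwise_distance(toy_decode, [0.5] * 64, seeds=range(10))
assert many_tokens < few_tokens  # variability shrinks as conditioning grows
```

The same loop over seeds, with DreamSim as the distance, matches the quantification the authors describe.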
Summary: This paper introduces FlexTok, a novel 1D tokenizer that can encode images with variable token lengths. It combines causal masking and nested dropout in training to force the tokenizer to learn to reconstruct an image with a varying number of tokens. This strategy further encourages the tokenizer to encode images in a coarse-to-fine order, where the initial tokens encapsulate semantic and geometric concepts, and the subsequent tokens progressively capture finer details. To ensure reconstruction quality at extreme compression rates (e.g., using only 1-2 tokens), FlexTok employs a rectified flow model as its decoder. The method achieves strong performance in both reconstruction and generative tasks on ImageNet and COCO. ## update after rebuttal I thank the authors for their detailed response. Considering that FlexTok shows diminishing improvements for long token sequences, which limits the method's upper bound, and considering the inefficiency issues raised by other reviewers, I will maintain my initial rating. Claims And Evidence: The claims in this paper are supported by experimental results or prior studies. Methods And Evaluation Criteria: The paper mainly uses rFID, MAE, and DreamSim for reconstruction quality evaluation, and leverages gFID, top-1 accuracy, and CLIP score for generation quality evaluation. These metrics are appropriate for the study of a visual tokenizer. Theoretical Claims: There is no proof or theoretical claim in this paper. Experimental Designs Or Analyses: The experimental designs are solid and fair. Supplementary Material: I have checked the supplementary material (part A, B). Relation To Broader Scientific Literature: In the recent literature, there is growing interest in 1D tokenizers, which improve computational efficiency (e.g., encoding an image with fewer tokens) while maintaining competitive quality. Prior works like TiTok are fixed in token length. FlexTok takes a step forward to enable variable length in tokenization. 
Besides, by incorporating advancements from tokenizers with rectified flow decoding, FlexTok further reduces the minimum token length from 32 to 1. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** - This paper studies a very important problem in image tokenization. The idea of 1D and variable-length tokenization in FlexTok is novel and interesting, which enables images to be modeled in a similar way to language sequences. - Compared to traditional 2D tokenizers and prior 1D tokenizers, FlexTok demonstrates improved reconstruction and generation performance while using fewer tokens. - The paper is well-written. The experiments are solid and comprehensive. **Weaknesses:** - It seems that variable length in the tokenization stage does not generalize to the generation stage. That is, although a single FlexTok tokenizer can handle a variable number of tokens, the generator is limited to a fixed token length. As a result, it basically needs multiple generators to support variable-length generation. This constrains the practical use of FlexTok in generation. (Maybe I have misunderstood here; the authors can correct me.) - FlexTok is relatively weak in long-sequence reconstruction and generation. In Figures 4, 6, and 7, it can be seen that increasing the number of tokens beyond 32 brings only minor improvements in rFID, and even negatively impacts gFID. This may be attributed to the trade-off in performance between low and high token counts, as shown in Appendix Table 4. FlexTok adopts the “Pow2” dropout strategy, which favors short token lengths but under-samples long sequences in training. Other Comments Or Suggestions: I think this is an overall good paper, but it could benefit from further enhancing the flexibility in tokenization and generation. For example, the default setting of FlexTok only supports reconstruction for a predefined set of token lengths, rather than a truly random number of tokens. 
Besides, instead of having a predefined token length in generation, it would be more exciting to see the generator evolve to decide the number of tokens to generate, similar to how language models generate text. Questions For Authors: 1. From my experience, 1D tokenizers like TiTok are more adept at 'semantic-level' reconstruction than 'pixel-level' reconstruction. Therefore, having a low rFID does not guarantee that the tokenizer faithfully reconstructs the images. I wonder whether FlexTok suffers from the same problem. Could the authors provide a comparison with other tokenizers on the PSNR and SSIM metrics? 2. As shown in Appendix Figure 10, REPA greatly accelerates the convergence of FlexTok training. I am curious about whether FlexTok could achieve similar performance without REPA but with longer training schedules. It would be better if the authors could provide some visualizations of the reconstruction results without REPA. Code Of Conduct: Affirmed. Overall Recommendation: 4
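The "Pow2" nested dropout schedule debated in this review could be sketched as follows; the function name, the uniform sampling over powers of two, and the maximum of 256 registers are illustrative assumptions based on the token counts reported in the reviews:

```python
import random

POW2_LENGTHS = [2 ** i for i in range(9)]  # 1, 2, 4, ..., 256

def nested_dropout(register_tokens, rng=random):
    """Sample a power-of-two prefix length and keep only that prefix,
    forcing earlier registers to carry the coarsest information."""
    k = rng.choice(POW2_LENGTHS)
    return register_tokens[:k]

tokens = list(range(256))
kept = nested_dropout(tokens)
assert len(kept) in POW2_LENGTHS
assert kept == tokens[:len(kept)]  # always a prefix, never a random subset
```

Sampling lengths uniformly over powers of two is exactly what makes short prefixes well-trained while long sequences are under-sampled, as the reviewer points out.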
Rebuttal 1: Rebuttal: We thank reviewer ekCv for the thoughtful and constructive feedback. Below we address the main points raised: **1. Variable-length generation limitation (fixed token length in generator)** To clarify: in our current setup, we train a single autoregressive (AR) model capable of generating a full 256-token sequence. During inference, shorter outputs are obtained simply by early stopping this sequence at the desired length. While effective, you're right that it could be even more flexible if the AR model itself determined when to halt generation, and this is an interesting future research direction. We expect that we can create such halting conditions by augmenting the training data with per-token-subsequence reconstruction metrics (e.g., MSE, DreamSim), and when training the stage 2 AR model, simply truncating the training sequences if the score reaches a certain pre-defined threshold. **2. Weakness at longer token sequences (limited improvement beyond ~32 tokens)** The reviewer raises an insightful point about long-sequence performance. Indeed, our "Pow2" nested dropout schedule intentionally emphasizes shorter sequences during training. This design optimizes FlexTok’s ability to reconstruct images effectively at extreme compression (few tokens), a core contribution of our work. However, it is correct that this strategy results in diminishing improvements for longer sequences (>32 tokens). Adjusting dropout sampling strategies to more evenly balance short and long sequences could potentially mitigate this trade-off, and we appreciate the suggestion here. **3. Semantic-level vs. pixel-level reconstruction (PSNR/SSIM comparisons)** We agree, rFID is not a good measure for pixel alignment. We measure rFID to demonstrate that the FlexTok decoder is capable of producing outputs that could "plausibly" come from the image distribution, no matter the number of tokens given. 
To measure pixel-level reconstruction alignment, we show MAE and DreamSim in Fig 4, and observe a roughly log-linear relationship between the scores number of tokens. We additionally show PSNR and SSIM in the table below for various number of tokens, and find that at 256 tokens used, FlexTok reaches comparable compression performance to common 2D-grid tokenizers that use 16x16 discrete tokens. FlexTok d18-d28 reconstruction metrics on IN1K validation set, resolution 256x256: | # Tokens | PSNR | SSIM | | --- | --- | --- | | 1 | 9.35 | 0.187 | | 2 | 10.25 | 0.222 | | 4 | 11.51 | 0.254 | | 8 | 11.90 | 0.269 | | 16 | 13.05 | 0.304 | | 32 | 13.96 | 0.330 | | 64 | 14.34 | 0.343 | | 128 | 15.90 | 0.407 | | 256 | 17.70 | 0.489 | Comparison with common discrete tokenizer baselines (numbers from Cosmos paper): | Model | # Tokens | PSNR | SSIM | | --- | --- | --- | --- | | Open-MAGVIT2 | 16x16 | 17.00 | 0.398 | | LlamaGen | 16x16 | 18.38 | 0.338 | | Cosmos-0.1 | 16x16 | 20.49 | 0.518 | **4. Impact of REPA vs. longer training schedules** This is a good question, however, the difference in convergence speed is so significant that we found it computationally too expensive to ablate this. We expect that the ~17x convergence speedup (in terms of FID) demonstrated in the original REPA paper may roughly translate to our setting too. Unfortunately we are unable to add qualitative examples to this text-only reply, but we find the non-REPA reconstructions (after training for the same number of steps) to be significantly worse in terms of fidelity, and overall less semantic. We will add a discussion of these points as well as qualitatives to the camera-ready.
Summary: In this paper, the authors introduce a tokenizer that maps 2D images into variable-length, ordered 1D token sequences. This tokenizer allows images to be represented with a flexible number of tokens based on their content. In addition, an autoregressive model leverages this approach to achieve high-quality generation results with fewer image tokens. The authors conduct extensive experiments to validate the effectiveness of the proposed method. ## update after rebuttal I appreciate the authors' thorough response and the effort they put into the rebuttal. However, I still have concerns regarding the limitations of using a fixed number of register tokens, as well as the lack of a comprehensive system-level comparison for text-conditional image generation. Therefore, I am keeping my score unchanged. Claims And Evidence: Strengths + The claims regarding the limitations of existing generative models are correct and widely recognized in the field of generative models. + The proposed method is reasonable and easy to understand. It encodes images with variable-length tokens based on image complexity. Weaknesses None Methods And Evaluation Criteria: Strengths + The proposed method is both intuitive and reasonable. Employing a rectified flow decoder is an effective technique to alleviate the blurry reconstructions caused by fewer register tokens. The nested dropout and causal attention masks also benefit the learning of visual vocabulary and AR generation. + The evaluation benchmark is reasonably appropriate for assessing the effectiveness of the proposed method. Weaknesses - Compared to the decoders used in existing tokenizers, the rectified flow decoder incurs higher computation costs due to its extensive denoising steps. - The pre-defined maximum number of register tokens may not be suitable for extremely complex images, and determining an optimal setting remains an open and difficult problem. 
Theoretical Claims: This paper does not include any theoretical claims. Experimental Designs Or Analyses: Strengths + The superior results on both class-conditional and text-conditional image generation demonstrate the effectiveness of the proposed method. This paper also provides exhaustive ablation studies to assess each key component. + The qualitative results across different numbers of tokens make the efficiency of FlexTok more apparent and intuitive. Weaknesses - It is better to show a system-level comparison on text-conditional image generation, not only the ablation studies in the main text. - For class-conditional generation, Inception score (IS), Precision, and Recall are the primary evaluation metrics, which are widely used in the generative field. Thus, it is essential to include them in this paper. Supplementary Material: I have reviewed the supplementary material. The authors mainly provide additional ablation studies and qualitative results. Relation To Broader Scientific Literature: To my knowledge, the proposed method in this paper is new. Essential References Not Discussed: To my knowledge, there are no other works that should be discussed. Other Strengths And Weaknesses: None Other Comments Or Suggestions: It would be beneficial to provide a summary of the appendix content at the beginning. Questions For Authors: I acknowledge the novelty of the proposed method, despite its limitations regarding the pre-defined number of register tokens and the computation cost of the decoder. Thus, I am inclined to rate this paper as accept. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer 6zZ5 for the thoughtful and constructive feedback. Below we address the main points raised: **1. Higher computational cost of rectified flow decoder and token-count limitations** Please see our response to reviewer MMdz. Our preliminary experiments suggested that higher token counts can significantly reduce reconstruction errors and enable decoding with fewer steps; however, for simplicity of training subsequent autoregressive models on the token sequences, we chose 256 tokens as the upper bound for this submission. **2. System-level comparison for text-to-image generation** We fully understand the motivation for requesting comparisons against external text-to-image baselines. However, unlike the somewhat standardized class-conditional ImageNet setting, proper comparison on text-to-image generation with external baselines is usually extremely difficult and nuanced due to differences in compute and dataset (size, caption quality, diversity, aesthetics, similarity to COCO, etc.) having significant impact on downstream evaluations (e.g. see "On the Scalability of Diffusion-based Text-to-Image Generation", Li et al. 2024). For that reason we decided to perform a controlled experiment in which we train a 2D grid tokenizer with the same data, compute, and rectified flow decoder objective, and perform autoregressive generation on its tokens. The results in Fig. 6 suggest that FlexTok performs comparably to classical 2D grid tokenizers at 256 tokens, but offers more flexibility overall. **3. Additional standard metrics for class-conditional generation** We appreciate the suggestion and provide the requested metrics (Inception Score, Precision, Recall, and gFID) in the table below. Our results are comparable to state-of-the-art methods such as VAR-d30 (323.1 IS, 0.82 precision, 0.59 recall, 1.92 gFID) and TiTok (gFID between 1.97 and 2.77 depending on tokenizer choice) across a broad range of token counts. 
We will incorporate these results into the camera-ready version of the paper. | #Tokens | IS | Precision | Recall | gFID | | --- | --- | --- | --- | --- | | 1 | 236.47 | 0.83 | 0.53 | 3.14 | | 2 | 238.07 | 0.82 | 0.57 | 2.51 | | 4 | 226.77 | 0.80 | 0.60 | 2.00 | | 8 | 266.48 | 0.82 | 0.61 | 1.82 | | 16 | 277.45 | 0.82 | 0.61 | 1.75 | | 32 | 284.99 | 0.82 | 0.61 | 1.71 | | 64 | 286.40 | 0.82 | 0.61 | 1.76 | | 128 | 275.63 | 0.82 | 0.61 | 1.89 | | 256 | 258.33 | 0.80 | 0.61 | 2.45 | **4. Additional suggestion: summary of appendix content** Thanks for the helpful suggestion. Adding a concise appendix summary at the start is indeed beneficial. We will add this to the camera-ready version.
Summary: The paper proposes FlexTok, a method for improving the tokenizer (VAE compression) used in image generation frameworks. Like previous approaches (TiTok, ALIT), FlexTok compresses 2D images into 1D tokens initialized as learnable registers. These tokens interact with encoded image patch tokens via attention mechanisms. Unlike TiTok and ALIT, FlexTok’s decoder is trained with a flow matching loss rather than a standard reconstruction loss, enabling multi-step denoising decoding. Additionally, FlexTok introduces a unique causal attention masking structure: encoded image patches attend only among themselves and not to registers, whereas register tokens attend to all patches but follow a causal pattern among themselves (the i-th register attends to the j-th register only if i ≥ j). The model further uses token-dropping techniques during training, similar to ElasticTok, allowing variable-length token representations with earlier tokens representing more general information and later tokens representing details. The second-stage generative model, based on an autoregressive approach, progressively generates finer-grained images as more tokens are used. ## update after rebuttal I am willing to update my score to weak accept since ALIT and ElasticTok are concurrent works. However, my concerns about reconstruction quality, the inference cost, and the motivation are not fully addressed. Claims And Evidence: The authors claim significant benefits from their method, particularly highlighting their capability of generating images using even a single token. However, several of these claims are problematic: - **Single-token generation claim:** The authors assert that their model can encode an image into just one token. However, this claim is unfair or misleading, as the decoding process itself is multi-step flow matching (a generative process rather than a faithful reconstruction). 
Thus, the single-token encoding is effectively used as a conditioning input similar to class or text tokens, rather than a compressed latent representation. - **Reduction of compute or acceleration claim:** Due to the iterative, generative nature of the decoder (25-step flow matching), this method requires significantly more computational resources. This contradicts one key motivation for employing latent-based generative models, which typically aim at computational efficiency. - **Token-level faithfulness:** Reconstruction quality of individual images (crucial for downstream tasks such as editing or conditional generation) appears limited. For instance, Figure 3 shows significant discrepancies between generated images and ground truth, even with 256 tokens (e.g., misaligned dog tails), indicating poor pixel-level or even patch-level alignment. Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable in the context of image generation research. However, the experimental evaluation has critical limitations: - Reconstruction quality at the single-image level is not sufficiently addressed or emphasized. - Claims of single-token generation and compression efficiency are misleading due to the generative, iterative decoder. Theoretical Claims: The paper does not present explicit theoretical claims or analyses. Experimental Designs Or Analyses: Key weaknesses identified in experimental design: - Lack of adequate evaluation of single-image reconstruction fidelity. Given the latent-based approach, individual reconstruction quality is crucial but is largely overlooked. - The experiments presented (particularly in Figure 3) clearly illustrate significant quality gaps even at relatively high token counts (256 tokens), undermining claims of efficient and faithful representation. 
Supplementary Material: Yes Relation To Broader Scientific Literature: FlexTok builds upon existing literature, notably TiTok, ALIT, and ElasticTok, which have previously introduced concepts of variable-length, learnable register tokens. The flow matching decoding strategy aligns closely with ideas presented earlier (e.g., OpenAI's DALL-E 3), where diffusion-based decoders were used (see Consistency Decoder). Thus, the technical novelty of FlexTok appears limited, with incremental advances primarily in the causal attention masking strategy. Essential References Not Discussed: No essential missing references identified. Other Strengths And Weaknesses: **Strengths:** - The causal attention mask is a novel modification. - Interesting concept of flexible token lengths with importance ordering. **Weaknesses:** - Misleading or unfair claims regarding single-token generation and compression effectiveness. - Decoder complexity and computational overhead contradict original VAE motivations (speed and efficiency). - Poor single-image reconstruction fidelity limiting applicability to editing or conditional generation tasks. - Limited overall technical novelty given strong reliance on prior works (TiTok, ALIT, DALL-E 3). Other Comments Or Suggestions: The authors should clarify their claims, explicitly distinguishing their method as employing an additional generative decoding process rather than true latent compression. More focus should be placed on improving reconstruction fidelity and clearly discussing limitations inherent to the proposed approach. Questions For Authors: 1. Could you clarify why you claim that encoding images to a single token is feasible or meaningful, given your decoder itself is a multi-step generative model rather than a direct reconstruction? 2. Given the computational overhead introduced by iterative flow matching decoding (25 steps), how do you justify the additional complexity against typical VAE motivations (speed, compression)? 3. 
Reconstruction fidelity appears severely limited even at relatively high token counts (e.g., 256 tokens). Could you provide deeper analysis or experimental insights into how your method might address pixel-level or patch-level misalignments, which are crucial for tasks like conditional generation or editing? 4. Apart from the novel causal attention masking, could you clearly summarize the distinctive technical contributions of FlexTok beyond existing works such as TiTok, ALIT, and the diffusion-based decoding approaches found in DALL-E 3? Code Of Conduct: Affirmed. Overall Recommendation: 3
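The attention pattern this review's summary describes (patches attend only among themselves; each register attends to all patches and causally to earlier registers) could be sketched as a boolean mask; the function name and the [patches | registers] token ordering are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def flextok_attention_mask(n_patches, n_registers):
    """Boolean mask (True = may attend) over the sequence [patches | registers]:
    patches attend only among themselves; register i attends to every patch
    and to registers j <= i (causal among registers)."""
    n = n_patches + n_registers
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_patches, :n_patches] = True                 # patches <-> patches
    mask[n_patches:, :n_patches] = True                 # registers -> all patches
    mask[n_patches:, n_patches:] = np.tril(             # causal among registers
        np.ones((n_registers, n_registers), dtype=bool))
    return mask

m = flextok_attention_mask(n_patches=4, n_registers=3)
assert not m[0, 4]          # a patch never attends to a register
assert m[5, 0] and m[5, 4]  # register 1 sees all patches and register 0
assert not m[4, 5]          # register 0 cannot see register 1
```

Masks like this are typically passed as the boolean `attn_mask` of a standard attention layer; the causal part among registers is what lets a prefix of registers be dropped or generated autoregressively.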
Rebuttal 1: Rebuttal: We thank reviewer MMdz for the thoughtful and constructive feedback. Below we address the main points raised: **1. Single-token generation claim** Image tokenization is commonly performed with lossy autoencoders that abstract away imperceptible information, meaning all tokenizer decoders (whether trained with diffusion/flow methods or GANs) are inherently generative to some extent. The degree to which they must be "generative" directly corresponds to the amount of compression. At the coarsest level (single FlexTok token = 2 bytes), our representation necessarily operates at a highly semantic level, conceptually similar to semantic tokenizers like SEED (Ge et al. 2023). At the finest level (256 tokens = 512 bytes), we achieve compression comparable to classical 2D grid tokenizers like VQ-GAN (also, see response to reviewer ekCv). FlexTok's unique strength is providing a single model that learns these different hierarchies, effectively offering an alternative way to describe images in a coarse-to-fine manner. The single-token scenario is indeed best interpreted as semantic conditioning rather than pixel-level compression. **2. Computational overhead (25-step flow matching decoder)** We acknowledge that FlexTok's rectified flow decoder adds computational complexity during inference (though encoding remains efficient). We explicitly chose this architecture because it consistently maintains high reconstruction fidelity across a wide range of token counts (Fig. 4). Importantly, decoding happens after the autoregressive (AR) generation step, which is already computationally intensive, so the flow decoding step adds a constant overhead rather than introducing an entirely new computational bottleneck. We also anticipate that common distillation methods (e.g., consistency decoders, Reflow) can substantially lower inference complexity. **3. 
Token-level faithfulness (single-image reconstruction quality)** Regarding reconstruction quality at 256 tokens (512 bytes), it's important to recognize that this still represents extremely high compression: a full-color image compressed to just 512 bytes will naturally show some loss of detail. This token count was explicitly chosen to match standard tokenizers (e.g., VQ-GAN, LlamaGen), allowing direct and fair comparisons. At this standard token count, FlexTok achieves reconstruction fidelity comparable to these established methods (see the comparison in response to reviewer ekCv). The visible imperfections at this compression rate are expected, and our results show a clear log-linear trend (Fig. 4), strongly suggesting that higher token counts (e.g., 1024 tokens or more) would yield increasingly faithful reconstructions. This trade-off between compression and fidelity is fundamental to all image tokenization approaches, and exploring higher-token-count scenarios is a natural next step. **4. Technical novelty compared to TiTok, ALIT, ElasticTok, and DALL-E 3** Regarding FlexTok's novelty beyond causal attention masking: while ElasticTok and ALIT indeed share similar concepts, these methods were developed concurrently and independently. FlexTok directly builds on TiTok's 1D tokenizer framework and diffusion/flow-matching ideas, but introduces several critical new technical components: specifically, causal attention masking combined explicitly with nested token dropout and rectified flow decoding. Crucially, these innovations collectively enable hierarchical tokenization, smoothly transitioning from coarse, semantic-level representations to detailed pixel-level reconstructions within one tokenizer. This hierarchical tokenization was not shown by TiTok or other prior methods. Additionally, we provide a detailed analysis of this hierarchical behavior in generative settings, clearly showing its practical benefits and trade-offs. 
--- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed response. First, I went back and checked the ALIT and ElasticTok papers, and indeed these were published within three months of the ICML deadline, making them concurrent works. Given this, I’m happy to withdraw my earlier concerns regarding novelty. However, I still have doubts regarding the practical utility and the intended application scenario of this work: - If the authors’ definition of "image tokenization" relaxes the strict reconstruction requirement, then it would make sense to compare your approach with semantic tokenizers such as CLIP or DINO, specifically evaluating downstream tasks like image understanding. - On the other hand, if the authors intend to compare against generative model VAEs, maintaining faithful reconstruction becomes critical—but as clearly shown, your current method struggles in this aspect. - Regarding the two-stage generation approach, while using a single token might initially seem to improve efficiency, the second-stage diffusion decoder actually shifts the computational cost from the first stage to the second. In my view, this resembles more of a two-stage cascaded generation rather than genuinely improved efficiency. Considering your clarifications and my concerns above, I can raise my rating to a weak accept, but the motivation behind this work remains somewhat unclear to me. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their thoughtful follow-up and the reconsideration of novelty given the concurrent publication timeline of ALIT and ElasticTok. Regarding the remaining points about practical utility and application scenarios: **1. Semantic vs. pixel-level reconstruction** Our method strives for reconstructions as faithful as possible given the inherent information bottleneck defined by token count. 
At low token counts (1-16 tokens), reconstructions necessarily capture high-level semantics due to extreme compression (as low as 2 bytes per image). At higher counts (up to 256 tokens/512 bytes), reconstructions naturally become more detailed. All discrete tokenizers operate under similar trade-offs, optimizing reconstruction quality within information constraints rather than guaranteeing pixel-perfect fidelity. Our metrics (e.g. MAE, DreamSim) transparently quantify this trade-off, showing clear improvement with increased token counts. While comparing with semantic vision encoders like CLIP or DINO for image understanding could be interesting, this direction is intentionally out of scope for our paper. Like most discrete tokenizer literature, our primary focus is on generative tasks where discrete representations particularly excel. As a future direction, exploring our hierarchical representations for image understanding could indeed be valuable (potentially without the discrete bottleneck if generation isn't also needed). **2. Faithful reconstruction & generative performance** Our strong performance on established benchmarks like ImageNet FID demonstrates that FlexTok's reconstructions effectively support practical generative tasks. The information bottleneck inherent in extreme compression (2-512 bytes per image) naturally affects pixel-level fidelity, but this is true of all tokenization methods. What distinguishes FlexTok is its ability to operate across this entire spectrum within a single model, allowing users to choose the appropriate trade-off between semantic-level and pixel-level representation based on their specific use case. **3. Computational efficiency & two-stage generation** We agree with the reviewer that our approach can be viewed as a two-stage cascaded generation, and the computational cost does shift between stages depending on token count. 
Our paper explores this trade-off, showing how fewer tokens (where the AR model does less work and the flow decoder does more) can be sufficient for simpler conditioning scenarios like class labels, while more complex conditioning like detailed captions benefits from additional tokens. We believe this flexibility offers practical utility in adapting to different generation tasks, though we agree the flow decoder adds computational overhead that could be optimized in future work. We hope these clarifications address the main concerns and better convey the motivations behind our work.
EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents
Accept (oral)
Summary: The paper proposes a powerful and comprehensive benchmark, EmbodiedBench, for both high-level and low-level actions in embodied intelligence. It consists of four distinct subdatasets, ranging from high-level semantic tasks to low-level metric tasks, with each subdataset having its own focus. To build the entire benchmark, the authors first collect results from other datasets and utilize a simulator for data generation. They then correct and refine the limitations of the original datasets. Moreover, the tasks are categorized into six types, covering common and challenging embodied intelligence tasks. Additionally, evaluations conducted on both proprietary and open-source models demonstrate that this dataset presents significant challenges. Claims And Evidence: None. Methods And Evaluation Criteria: 1. In EB-Manipulation, I am a bit confused about the necessity of providing the detection box to the MLLM. (1) In what manner is the box provided to the MLLM? Is it drawn directly on the image, and have other forms been considered? (2) Does the color of the box or the thickness of the box's outline impact the enhancement effect? (3) Why does adding the box lead to a decrease in performance for the navigation subtask? I think it would be helpful to analyze this further in relation to the task's inputs and outputs. I saw some discussion about the detection box in the supplementary materials, but it did not fully resolve my confusion. 2. Please explain the differences between high-level and low-level trajectories. Are there differences in difficulty or the way instructions are expressed? 3. What is the effectiveness of using ChatGPT directly to generate data? How can accuracy be verified? Is there a human review process involved? If I believe that ChatGPT may have difficulty understanding spatial concepts (as referenced in the final evaluation table), how can we ensure the accuracy of the data it generates? Theoretical Claims: None. 
Experimental Designs Or Analyses: 1. In Task Planner, how accurate is it for the agent to execute multiple steps at once? Is there an ablation study comparing this with single-step execution? 2. For multi-image inputs, I understand that due to the limitations of large multimodal models, directly adding multiple images might lead to suboptimal results. Is there a comparative experiment that converts the information from multiple images into text and provides it to the model in an in-context manner? Supplementary Material: None. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: Weakness1: The entire benchmark is centered around scenarios in a simulator, but does not address potential limitations that may be encountered in real-world scenarios. Other Comments Or Suggestions: I suggest including the evaluation of the qwen2.5-VL 7B and 72B models in the final version of the paper. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for reviewing our work and providing valuable feedback. We have carefully addressed your concerns below. Please let us know if you have any further questions. The anonymous link for figures is https://anonymous.4open.science/r/rebuttal-3568/rebuttal_file.pdf. We use "4o" to refer to GPT-4o and "Claude" to refer to Claude-3.5-Sonnet.

**Q1: Detection box in EB-Manipulation. (1) How are they provided to MLLMs? (2) Do color or thickness impact performance? (3) Why does performance drop in EB-Navigation?**

**A1:** (1) Detection boxes are drawn directly on images to guide the model’s focus on relevant regions. An example is on the right side of **Figure 1 in the anonymous link**.

(2) Our additional ablation studies on the EB-Manipulation base subset show:
- All detection boxes improve performance over no box.
- Box color has minimal impact; even a color similar to the desk (yellow) only causes a slight drop (4.2%).
- Increasing thickness (1px -> 2 or 3px) slightly reduces performance (<6.2%).

| | Default (red, 1px) | 2px | 3px | black | yellow | no Box |
| - | - | - | - | - | - | - |
| Claude | 37.5 | 31.3 | 33.3 | 37.5 | 33.3 | 29.2 |

(3) **Figure 2 in the anonymous link** shows that in EB-Navigation, multiple detection boxes can obscure distant objects, reducing visibility. In contrast, EB-Manipulation keeps objects at a fixed distance, ensuring clear visibility. As a result, detection boxes affect the two environments differently. In EB-Navigation, we tested using **a single box only on the target object**, which reduces obstruction and consistently improves accuracy. To better reflect real-world scenarios, EB-Navigation omits detection boxes by default, requiring the MLLM agent to detect and recognize objects.

| Model | No Box | One Box | Multi Box |
| - | - | - | - |
| 4o | 61.7 | 68.3 | 53.3 |
| Claude | 46.7 | 58.3 | 48.3 |

**Q2: Differences between high-level and low-level trajectories.
Do they vary in difficulty or instructions?**

**A2:** "High-level" and "low-level" refer to different action representations based on their executability in robotic systems (see Section 3, Paragraph 1). **The trajectory structure and instructions are consistent across all high-level and low-level tasks; the key difference lies in how actions are represented: either as high-level abstractions or low-level primitives.** In terms of difficulty, our results show that low-level tasks are more challenging for MLLM agents. This is because they require stronger perception and spatial awareness, which remain limitations for current MLLMs.

**Q3: How effective is ChatGPT in data generation? How is accuracy verified? Is there a human review process? How do we ensure the accuracy of the generated data?**

**A3:** **Our dataset preparation combines GPT-4o and human annotation to ensure high quality.** GPT-4o is used not for full data generation but to enhance language instruction diversity. For example, in EB-ALFRED, task descriptions (PDDL) and instructions for the base subset are sampled from ALFRED. For other subsets (e.g., "Common Sense"), we craft 10 examples to guide GPT-4o in generating augmented instructions. To ensure accuracy, we manually review all instructions for correctness and coherence with PDDL descriptions, revising or discarding invalid data. This human-in-the-loop approach ensures dataset reliability.

**Q4: How accurate is the multi-step planner? An ablation study comparing it with single-step execution.**

**A4:** The multi-step planner is crucial for improving performance while reducing API/inference costs. To assess its impact, we compared multi-step and single-step execution. Results show significant performance drops with single-step execution on the EB-ALFRED base subset. These results confirm the importance of multi-step planning.
| | Default | Single Step |
| - | - | - |
| 4o | 64 | 36 |
| Claude | 72 | 62 |

**Q5: Evaluation of converting multi-step images into text as in-context information.**

**A5:** We tested incorporating multi-step observation descriptions into the context. While this method did not improve GPT-4o’s performance, it led to a 4% gain for Claude-3.5 on EB-ALFRED (Base). We plan to offer this as an optional feature in our code release.

| | Default | w/ Image Descriptions |
| - | - | - |
| 4o | 64 | 64 |
| Claude | 72 | 76 |

**Q6: Limitations of simulation in real-world scenarios.**

**A6:** We acknowledge the limitations of simulations in capturing real-world challenges. Please refer to **Q1 of Reviewer yZa7** for a detailed discussion. We will add a discussion on the limitations in the revision.

**Q7: Evaluation of qwen2.5-VL models.**

**A7:** We evaluated Qwen2.5-VL models on EmbodiedBench and observed notable improvements over Qwen2-VL. Qwen2.5-VL-72B achieves an overall score of 34.7, surpassing the previous open-source SOTA, InternVL2.5-78B (33.9). We will include evaluations of more recently released MLLMs in our updated manuscript.

---

Rebuttal Comment 1.1: Comment: Thanks for the rebuttal; the replies address my primary concerns effectively, so I raise my rating to strong accept.
Summary: This paper introduces EmbodiedBench, a comprehensive benchmark for evaluating vision-driven embodied agents based on multi-modal large language models (MLLMs). The benchmark features 1,128 testing instances across four environments, covering both high-level semantic tasks and low-level atomic actions, with six meticulously curated subsets evaluating essential agent capabilities such as common sense reasoning, complex instruction following, spatial awareness, visual perception, and long-term planning. Through extensive experiments, the authors evaluate 13 leading proprietary and open-source MLLMs within EmbodiedBench, revealing that MLLMs excel at high-level tasks but struggle with low-level manipulation, with the best model, GPT-4o, scoring only 28.9% on average. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: Yes B. Details about EMBODIEDBENCH Tasks and Datasets Relation To Broader Scientific Literature: This paper significantly advances the evaluation of multimodal large language models. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. EmbodiedBench is the first benchmark to comprehensively evaluate MLLM-based embodied agents across multiple environments and task levels, providing a standardized platform for comparison. 2. The benchmark includes six capability-oriented subsets that allow for detailed analysis of different agent capabilities, offering valuable insights into model limitations. 3. The authors develop a unified agent framework that effectively integrates egocentric visual perception, few-shot in-context examples, interaction history, and environment feedback for decision-making. 4. The paper presents thorough experiments with 13 state-of-the-art MLLMs, providing valuable insights into their performance on various embodied tasks. Weaknesses: 1. The absence of evaluation in real-world physical environments.
The entire benchmark is implemented in a virtual environment, which raises the question of whether the results of the virtual benchmark measurements reflect the capabilities of the model in the real world. Embodied intelligence is largely meant to operate in the real world, and I would suggest that the authors add some real-world experiments, or some discussion. 2. The review includes a very large number of MLLMs, but the VLA (Vision-Language-Action) model is missing. I suggest adding some experiments. Other Comments Or Suggestions: See Weaknesses Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing our work and providing valuable feedback. We have carefully addressed your concerns below. Please let us know if you have any further questions.

**Q1: The absence of evaluation in real-world physical environments ... Embodied intelligence is largely meant to operate in the real world, and I would suggest that the authors add some real-world experiments, or some discussion.**

**A1:** We agree with the reviewer on the importance of real-world evaluation. However, there is an inherent trade-off between reproducibility, cost, safety, and real-world applicability. While real-world testing is crucial for practical deployment, simulated benchmarks provide a standardized and easily reproducible environment, reducing the time, financial burden, and safety risks associated with real-world evaluation [1,2]. EmbodiedBench is a step forward in enabling the evaluation of MLLM agents on diverse simulated embodied tasks. Future research could benefit from more realistic and complex embodied simulations [3] or standardized and cost-effective real-world test suites [4,5]. **In our revision, we will add a discussion on this limitation at the end of the main paper.**

[1] Evaluating Real-World Robot Manipulation Policies in Simulation. CoRL 2024.
[2] VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents. ICLR 2025.
[3] Behavior-1K: A Human-Centered, Embodied AI Benchmark with 1,000 Everyday Activities and Realistic Simulation. arXiv, 2024.
[4] Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware. RSS 2023.
[5] Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation. CoRL 2024.

**Q2: The review includes a very large number of MLLMs, but the VLA model is missing. I suggest adding some experiments.**

**A2:** We appreciate the reviewer’s suggestion. **We reviewed VLAs in paragraph 3, Appendix A**.
While VLA models such as Octo, OpenVLA, and RDT-1B have shown strong performance in manipulation tasks, there is currently no open-source VLA model that can handle both high-level household tasks and low-level navigation & manipulation simultaneously. Consequently, there is no directly comparable VLA model for our evaluation. However, to explore their capabilities, we conducted experiments on three pretrained VLA models (Octo, OpenVLA, and RDT-1B) using the EB-Manipulation (Base) subset. All models achieved a **0% success rate**. This outcome can be attributed to the distribution shift: existing robotic foundation models [6][7] are primarily trained on real-world datasets such as Open X-Embodiment. The large domain gap between these datasets and our simulator environment prevents these models from generalizing effectively without fine-tuning. This reinforces the need for a more general framework, as proposed in our paper, that enables adaptation to diverse tasks without fine-tuning. While a few VLA models [8][9] have been developed for the same simulator we use, they are not publicly available. Furthermore, other available models [10][11] have input formats that differ significantly from our environment (e.g., proprioception mismatch), making direct evaluation infeasible. The only available VLA model trained in the same simulator as ours is 6D-CLIPort [12], which processes multi-view observations and language instructions to generate 6-DoF actions. Below are the evaluation results on EB-Manipulation:

| EB-Manipulation | Base | Common | Complex | Spatial | Visual | Avg |
| - | - | - | - | - | - | - |
| 6D-CLIPort | 8.3 | 6.3 | 2.1 | 16.7 | 16.7 | 10.0 |

The model performs well on tasks that rely on visual understanding and spatial reasoning. However, its success rate drops significantly on tasks requiring common sense reasoning or understanding complex instructions, suggesting that action-grounded VLA models may struggle in these areas.
In contrast, our MLLM-based agent framework achieves higher performance, with GPT-4o reaching an average score of 28.9. This result highlights the effectiveness of our MLLM agent framework and the potential of general MLLMs as embodied agents.

[6] OpenVLA: An Open-Source Vision-Language-Action Model. CoRL 2024.
[7] RDT-1B: A Diffusion Foundation Model for Bimanual Manipulation. ICLR 2025.
[8] MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation. arXiv 2025.
[9] HAMSTER: Hierarchical Action Models For Open-World Robot Manipulation. ICLR 2025.
[10] RVT-2: Learning Precise Manipulation from Few Examples. RSS 2024.
[11] SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation. arXiv 2025.
[12] VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation. NeurIPS 2022.
Summary: This paper proposes EmbodiedBench, a benchmark for evaluating MLLMs' capability in a diverse set of embodied tasks. Specifically, the tasks range from high-level semantic tasks to low-level tasks with atomic actions. Furthermore, the tasks under different simulators are classified into different subsets, to evaluate agents' capability in common sense reasoning, complex instruction following, spatial awareness, visual perception and long-term planning. This paper benchmarks the performance of multiple open-source MLLMs and closed-source MLLMs for embodied tasks, providing interesting insights into how MLLMs perform at high-level tasks and low-level manipulation. Claims And Evidence: Strengths: 1. This paper proposes a benchmark for evaluating MLLMs' capability in a diverse set of embodied tasks, covering a wider range of tasks compared with previous research. 2. The design of including both high-level semantics and low-level tasks is essential, which is also supported by the large performance difference observed for MLLMs. 3. Further classifying the agents' capability into different categories can bring more insights during evaluation. Methods And Evaluation Criteria: Strengths: 1. Benchmarking both open-source MLLMs and closed-source MLLMs on the proposed EmbodiedBench, which serves as a good starting point for further work. Weakness: 1. MLLMs' performance might relate to the prompt being used, and one single prompt is used for all the models. Exploring how MLLMs work with a different set of prompts might bring more insights and a more robust conclusion about how well different MLLMs work for embodied tasks. Some ablations for the language prompt provided in Sec. 5.3 can partially mitigate this issue. Theoretical Claims: N/A Experimental Designs Or Analyses: Strengths: 1. Detailed ablations for both language-centric analysis and vision-centric analysis. 2. Interesting findings indicating that vision is crucial for embodied tasks with low-level actions.
Supplementary Material: Yes I've read all the supplementary material about additional related work and dataset details. Relation To Broader Scientific Literature: The proposed benchmark can be very crucial for future development of MLLMs for embodied AI tasks, especially in low-level manipulation category. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing our work and providing valuable feedback. We have carefully addressed your concerns below. Please let us know if you have any further questions.

**Q1: MLLMs performance might relate to the prompt being used, and one single prompt is used for all the models. Exploring how MLLMs work with a different set of prompts might bring more insights and robust conclusion about how well different MLLM work for embodied tasks. Some ablations for the language prompt provided in Sec.5.3 can partially mitigate this issue.**

**A1:** We agree that prompt design plays an important role in MLLM agent performance. In Section 5.3, we analyzed the effects of textual environmental feedback and the number of in-context examples. To further investigate prompt robustness, we conducted additional studies on the EB-ALFRED "Base" subset, evaluating the following variations:
1. **"No Guideline"** – Removing instructional guidelines intended to assist model generation (see Page 19).
2. **"Prompt v2"** – Rewriting the original prompts using GPT-4o to rephrase and restructure while preserving similar information.

The results show that removing guidelines has no effect on model performance, while "Prompt v2" causes a slight 4% performance drop. **This suggests that our MLLM agent is relatively robust to prompt modifications** compared to its sensitivity to environmental feedback (around 10% drop) and in-context examples (more than 20% drop).

| | GPT-4o | GPT-4o (no guideline) | GPT-4o (prompt v2) | Claude-3.5-Sonnet | Claude-3.5-Sonnet (no guideline) | Claude-3.5-Sonnet (prompt v2) |
| - | - | - | - | - | - | - |
| EB-ALFRED (Base) | 64 | 64 | 60 | 72 | 72 | 68 |

Furthermore, we also examined the impact of reasoning in in-context examples (i.e., ReAct prompting [1]).
Removing this reasoning step leads to an 8% performance drop for GPT-4o and a 4% drop for Claude-3.5-Sonnet, indicating that reasoning within in-context examples is more impactful than prompt rephrasing or guidelines, though still secondary to environmental feedback and in-context examples.

| | GPT-4o | GPT-4o (w/o ReAct) | Claude-3.5-Sonnet | Claude-3.5-Sonnet (w/o ReAct) |
| - | - | - | - | - |
| EB-ALFRED (Base) | 64 | 56 | 72 | 68 |

In summary, our MLLM agent framework is robust to prompt rephrasing and guideline removal. However, environmental feedback and in-context examples, as discussed in Section 5.3, have a much greater impact on performance.

[1] ReAct: Synergizing Reasoning and Acting in Language Models. ICLR 2023.
Summary: The authors present EmbodiedBench - a set of diverse tasks and environments to evaluate MLLMs for embodied agents. They present high-level benchmark environments - EB-Habitat, EB-ALFRED - and low-level benchmark environments - EB-Navigation and EB-Manipulation. The tasks are also divided into basic task solving, common sense, complex instructions, spatial awareness, visual perception and long-term planning. They evaluate 13 different MLLMs on this benchmark, showing that high-level tasks are easier than lower-level tasks, and that visual cues are more important for low-level tasks than higher-level tasks. They perform ablations showing image resolution should be reasonable, multi-step images harm the MLLMs, and visual in-context learning helps. They also perform error analysis showing the reasons for failures in specific aspects of robotics (planning, perception, reasoning). Claims And Evidence: 1. **EmbodiedBench is a useful tool for evaluating MLLMs in embodied settings**: The results are very useful and they help us understand the performance of different LLMs in different settings and types of tasks. The insights are interesting and would help further research in this domain. One specific issue that I find with this approach is that all data is in simulation, and there are no real-world episodes or evaluations. It is hard to say how well the results align with real-world performance. 2. **MLLMs excel at high-level, struggle at low-level tasks**: This claim is shown by the performance on EB-ALFRED and EB-Habitat vs EB-Navigation and EB-Manipulation. I believe this is an interesting insight. 3. **Visual cues are necessary for low-level tasks:** This makes sense, since precision is key in such tasks. It is interesting that the authors empirically find this for MLLMs using their benchmark. Methods And Evaluation Criteria: 1. They create a benchmark over different environments and tasks to evaluate MLLMs for embodied AI.
I think this is a sound approach and casts a wide net for MLLM evaluation. 2. They describe the data creation in detail, and it is well-grounded in prior works. The task subsets are well thought out. Theoretical Claims: N/A Experimental Designs Or Analyses: - Agent Design: The agent design uses single-step images for efficiency and provides valid skill sets for each task to the MLLM. For manipulation, they provide markers and boxes. I think this is a fair design, considering MLLMs struggle with multi-step input. - Task Planning: This design is particularly interesting to me. Instead of doing per-step planning, they plan multiple steps in a single go and let the MLLM decide the number of steps. How do the authors prevent failure of plans? Are there any qualitative examples of this? It would be nice to discuss this if possible. - Ablations: I think the ablations are also careful and evaluate several aspects of the approach. The results show the importance of feedback, in-context learning, camera resolution, detection boxes (for manipulation), multi-step input, and visual in-context learning. - Error analysis: They show interesting error analysis and show subtypes of error, along with discussion on why different LLMs might fail on different subtypes. Supplementary Material: I skimmed over the supplementary. Some results are repeated from the main paper and can be (optionally) removed from the appendix. The error analysis is pretty detailed in the supplementary. Relation To Broader Scientific Literature: The paper presents an interesting benchmark to evaluate MLLMs for Embodied AI agents and is one of the first ones to do so. The insights are pretty interesting and the evaluation/analysis is comprehensive. I think this will help understand MLLMs better for robotics, especially since it is an up-and-coming field. Essential References Not Discussed: Cannot think of anything that might be missing. They consider a good amount of related benchmarks and show a comparison table.
Other Strengths And Weaknesses: The paper is pretty interesting overall, has nice insights and analysis. It adds value to the community, and might be helpful in evaluating future MLLMs. It would be nice to have some real-world evaluations, or something that shows that the benchmark performance translates. Other Comments Or Suggestions:
- Line 215: Incorrect quotes around "navigate"
- Line 257, Col 2: "constained" -> contained or constrained?
- Line 306, Col 2: Incorrect quotes around "Lang"
- Line 386: Mentions "multi-view", which is in ablation. Have the authors tried using panoramic images from multi-view?
- Is it possible to train some e2e or modular policies on this benchmark and show how they perform on this benchmark? Might be interesting to compare the best MLLM with such policies. I agree that this is not a necessity since the purpose of the benchmark is to evaluate MLLMs.
- Section 6 title should be: "Conclusion"

Questions For Authors: 1. How is the complex instruction understanding task different from long-horizon? Is there an intersection? 2. How is the data generated for EB-Navigation? Which Python program was used? I did not find this in the appendix. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for reviewing our work and providing valuable feedback. We have carefully addressed your concerns below. Please let us know if you have any further questions. **Q1: No real-world evaluations. It is hard to say how aligned with the real-world performance** **A1:** We agree with the reviewer regarding the gap between simulated and real-world tasks. While real-world evaluation is crucial for assessing model performance in practical scenarios, there is a trade-off between reproducibility, cost, safety, and real-world applicability. Simulated benchmarks offer a standardized, easily reproducible environment, reducing the time, financial cost, and safety risks for researchers to replicate results [1,2]. **In our revision, we will include a discussion about this limitation in the main paper.** [1] Evaluating Real-World Robot Manipulation Policies in Simulation. CoRL 2024. [2] Visualagentbench: Towards large multimodal models as visual foundation agents. ICLR 2025. **Q2: How to prevent failure of plans? Any qualitative examples?** **A2:** In EmbodiedBench, we allow MLLM agents to generate multiple actions at once. If a failure occurs (e.g., invalid actions or unmet goals), they **replan** using the latest image and interaction history. **In Appendix F, we provided four qualitative examples of our MLLM agents (Figures 13–16)**, demonstrating their ability to replan effectively. **Q3: Some results are repeated and can be (optionally) removed from appendix** **A3:** In Appendix D, we opt to include both the results from the main paper and additional results to provide a clearer trend across different tasks. **Q4: Using panoramic images for multi-view** **A4:** We included a panoramic image in our multi-view ablation for EB-Navigation. It provides a top-down perspective, capturing the entire scene. However, we found that it can mislead the agent, negatively impacting overall performance. 
Examples of the multi-view setup are shown in Figure 1 of https://anonymous.4open.science/r/rebuttal-3568/rebuttal_file.pdf. **Q5: Train policies on this benchmark and show performance ... this is not a necessity since the purpose of the benchmark is to evaluate MLLMs.** **A5:** We agree that further fine-tuning is a promising direction, but a key challenge is the lack of training data. Currently, the only available embodied planning dataset in our setting is ALFRED, which lacks the structured perception and reasoning. To address this, we are collecting trajectories using our agent framework on the base subset while reserving other subsets for evaluation. This will benefit future fine-tuning MLLM research, and we aim to release the dataset soon. **Q6: Difference between the complex instruction understanding and long-horizon tasks. Any intersection?** **A6:** The two subsets target different challenges: - "Complex Instruction" **adds longer, relevant/irrelevant context**, making user instructions harder to interpret while keeping task complexity similar to the base subset. - "Long Horizon" **increases task difficulty** by requiring more steps to complete. In EB-ALFRED, GPT-4o takes an average of 13.4 steps (base), 14.2 steps (complex instruction), and 23.9 steps (long-horizon), showing their differences. **We ensure no subset overlap in our benchmark design**. **Q7: How is the data generated for EB-Navigation? Which Python program was used? I did not find this in appendix.** **A7:** We describe the data generation process in **Appendix B.3** but will clarify it further in our revision. The EB-Navigation dataset consists of: 1. scene and object information, 2. initial robot position and pose, 3. target object information, 4. language instruction. We create the dataset using a Python program that ensures the validity of the (1,2,3,4) combinations. 
The process follows these steps:
- Step 1: Scene and Object Initialization. We use 90 scenes from AI2-THOR-supported scenes. Each scene initializes a set of objects.
- Step 2: Target Object Selection. In each scene, we iterate over potential target objects, excluding those inside receptacles or with multiple instances to reduce ambiguity.
- Step 3: Agent Position and Pose Determination. For each target object, we use AI2-THOR’s `GetInteractablePoses` to randomly sample a valid agent position, ensuring a distance of at least 2.5 meters from the target. The agent's pose is set to either include the target object in view (e.g., base subset) or keep it out of view (long horizon).
- Step 4: Instruction Generation and Augmentation. Based on predefined templates (e.g., "Move towards the {target object} and stay near it"), we apply GPT-4o to augment linguistic diversity and preserve subset requirements.

After executing the above data generation, we select 60 tasks for each subset to form the EB-Navigation dataset.

**Q8: Typos**

**A8:** Thank you for pointing out these typos. We have carefully corrected them in our revised manuscript.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. It addresses all of my concerns. It is interesting to see that the authors are already working on fine-tuning on the benchmark; I am looking forward to those results in future work. Also, I like the panoramic multi-view top-down experiment. I am not sure if a top-down view is the best way; maybe an "ego-centric" panorama is a better choice? Regardless, this is just a suggestion and does not reduce the strength of this paper. I am raising the score to 5.
Auto-reconfiguration for Latency Minimization in CPU-based DNN Serving
Accept (poster)
Summary: This manuscript investigates methods for accelerating neural network model tasks on CPU-based servers to minimize latency. Specifically, the authors found that current frameworks such as TorchServe, although effective in reducing inference latency through intra-operator parallelism across multiple threads, exhibit diminishing returns. Therefore, the authors propose that instead of running a single instance of a model utilizing all available threads on a server, running multiple instances, each with smaller batch sizes and fewer threads for intra-operator parallelism, can provide lower inference latency. They also propose a corresponding algorithm to identify the optimal configuration for minimizing latency. Finally, extensive experimental validation is conducted, demonstrating the effectiveness of the proposed framework. The topic is of interest and the presented numerical results seem, indeed, promising. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The author did not conduct a relevant theoretical analysis. Experimental Designs Or Analyses: The author conducted experiments on a single CloudLab and validated the results using tasks involving ResNet-50, Inception-v3, GPT-2, and BERT models. Specifically, the study first compared throughput and latency acceleration with baseline methods and analyzed the underlying reasons. Additionally, the author provided the latency of configuration changes in the algorithm. Supplementary Material: The author did not provide any Supplementary Material. Relation To Broader Scientific Literature: Compared to the existing literature, this manuscript observes that existing methods, such as intra-operator parallelism across multiple threads, are effective in reducing inference latency but provide diminishing returns. 
Therefore, the core idea of this manuscript is to run multiple instances on a server, each with a smaller batch size and fewer threads for intra-operator parallelism, thereby providing lower inference latency. This approach is built as an extension to TorchServe and supports online reconfigurations to avoid serving downtime. Essential References Not Discussed: No

Other Strengths And Weaknesses:

**Strengths:**
- The authors provide a detailed modeling process.
- The authors conducted extensive experimental validation across a wide range of models.
- Based on the experimental results, it can be concluded that this research holds certain significance for serving Deep Neural Network (DNN) models on CPU-based servers.
- The algorithm is built as an extension to TorchServe and supports online reconfigurations to avoid serving downtime.

**Weaknesses:**
- There is a lack of sufficient theoretical proof.
- For other issues, please refer to **Questions For Authors**.

Other Comments Or Suggestions: Please refer to **Questions For Authors**.

Questions For Authors:
- When considering NLP tasks, the number of tokens varies across sentences, unlike image data where each sample has a fixed size, so the number of tokens in different batches may differ. In such cases, how is the optimal configuration determined?
- In a multi-task scenario, what is the workflow when, for example, 10 tasks arrive simultaneously? Could it happen that resources are fully allocated while searching for the optimal configuration?
- In a multi-task scenario, what is the workflow? Are all tasks executed in parallel, or is only one task executed at a time?
- It is recommended that the corresponding algorithm flowchart be included in Section 3.
- Future work should be included in the conclusion section.
- It is recommended that the authors provide the corresponding source code to facilitate a better understanding and utilization of the research findings by the readers.
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your reviews and valuable feedback.

- For NLP tasks, where token counts vary across sentences, the optimal configuration can be determined by profiling the effective batch size in terms of tokens rather than just the number of samples. However, if the variability in token counts makes prediction too unpredictable, dynamic batching can be relied on to normalize the effective batch size, a direction we leave for future work.
- **Multi-task scenario**: Packrat improves thread allocation in a localized manner without changing overall resource allocation. It optimizes how the available resources are used. In a multi-task scenario, the service provider handles thread allocation across tasks, and then within each task, Packrat performs its profiling and dynamic configuration to optimize intra-task performance. This means that even if multiple tasks (e.g., 10 tasks) arrive simultaneously, Packrat ensures each task’s thread allocation is optimized for latency.
- **Recommendations**:
  - Thank you for your valuable suggestions. We agree that including an algorithm flowchart in Section 3 would greatly enhance clarity, and we plan to incorporate it in the final version.
  - We will also expand the conclusion to outline future work clearly.
  - While we acknowledge the importance of providing source code to aid reproducibility and further research, the code was removed from this submission to maintain anonymity during the review process. We fully intend to release the corresponding source code in the final version of the paper.

---

Rebuttal Comment 1.1:

Comment: Thank you for your comments. I will keep my score as weak accept.
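The core trade-off under discussion (one fat instance using all threads versus several thin instances, each with a share of the threads and a slice of the batch) can be illustrated with a toy Amdahl-style cost model. The numbers below are illustrative assumptions, not Packrat's measured profiles; the point is only that diminishing returns from intra-op parallelism make splitting win.

```python
def latency(batch, threads, serial=1.0, work_per_item=1.0, parallel_frac=0.9):
    """Amdahl-style latency model: a fixed fraction of the per-batch work
    is serial, the rest speeds up with the thread count (diminishing returns)."""
    work = serial + batch * work_per_item
    return work * ((1 - parallel_frac) + parallel_frac / threads)

def packed_latency(batch, threads, instances):
    """Split the batch across concurrent instances, each with an equal
    share of the threads; latency is that of one (slowest) instance."""
    return latency(batch // instances, threads // instances)

fat = packed_latency(32, 16, 1)    # one fat instance: 16 threads, batch 32
thin = packed_latency(32, 16, 4)   # four instances: 4 threads, batch 8 each
print(fat, thin, fat / thin)       # under this model, splitting wins
```

Under these toy parameters the split configuration comes out noticeably faster, which is the qualitative behavior the paper reports for real models.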
Summary: This paper proposes an automated optimization framework for CPU-based serving of DNNs (Packrat) aimed at minimizing inference latency. It addresses a known limitation in intra-operator parallelism—diminishing returns as thread count increases—by introducing an approach to run multiple instances of models concurrently, each with fewer threads. Packrat automatically selects the optimal combination of model instances, threads per instance, and batch sizes via targeted profiling and solving a two-dimensional knapsack problem through dynamic programming. Packrat is implemented as an extension of TorchServe and supports dynamic reconfiguration with negligible downtime. The authors demonstrate significant latency improvements ranging from 1.43x to 1.83x over standard TorchServe configurations for popular DNN models including ResNet-50, Inception-v3, GPT-2, and BERT. Claims And Evidence: The claims made by the authors regarding latency improvements are well backed by extensive experimental evidence. Experiments clearly demonstrate the advantages of their proposed dynamic partitioning of threads across multiple model instances. They also adequately justify their design decisions and carefully explain performance trade-offs, such as reconfiguration overhead. A minor weakness is that the evaluation primarily focuses on mean latency without exploring tail latency effects. Methods And Evaluation Criteria: The methods and evaluation criteria proposed in this paper are sound. Using targeted profiling and a dynamic programming-based optimization approach effectively tackles the combinatorial problem of thread-batch allocation. The evaluation, which covers a range of models and batch sizes and examines both microbenchmarks and end-to-end system impacts, is comprehensive and suitable for validating Packrat’s claims. 
Theoretical Claims: The theoretical claims regarding the correctness and optimality of the dynamic programming algorithm for selecting configurations appear correct. The problem formulation as a two-dimensional knapsack problem is sound and standard, and the complexity characterization is accurate and justified. The theoretical explanations and justifications provided in the paper are clear and consistent. Experimental Designs Or Analyses: The experimental design is rigorous and valid. They include clear baselines, multiple representative DNN models, and carefully controlled evaluations. One minor concern is the lack of examination of memory overhead or tail latency, which might be relevant for practical scenarios. However, overall, the experiments are sound and demonstrate substantial improvements in practice. Supplementary Material: I could not find any supplementary material related to the manuscript. Relation To Broader Scientific Literature: Packrat fills a specific niche in the literature by addressing CPU-based model inference latency optimization. Its unique contribution is the fine-grained automatic reconfiguration of threads and model instances for a single DNN model on CPU, complementing existing ML serving frameworks rather than competing with them. Essential References Not Discussed: The paper could have included CPU inference libraries (e.g., OpenVINO, ONNX Runtime) to further contextualize their contributions. These omissions do not significantly undermine the work, but addressing them could clarify the scientific context further. Other Strengths And Weaknesses: The strength of this paper lies in its practicality, clear presentation, and its novel application of dynamic programming to solve a concrete, relevant optimization problem in ML serving. The authors demonstrate careful analysis of practical overheads (e.g., CPU frequency scaling under load), which adds credibility. 
Weaknesses include the limited consideration of resource overheads (memory footprint, tail latency) and the narrow focus on single-model CPU serving.

Other Comments Or Suggestions: Figure 1 could be more b/w friendly.

## update after rebuttal
The authors have addressed my questions regarding CPU-based DNN serving in general.

Questions For Authors:
- Can you clarify the specific considerations for the CPU-based DNN serving community? Who is the target audience or user group that would benefit most from Packrat, given GPU-based inference is often preferred?
- What are the memory implications of running multiple model instances concurrently? Did you investigate how memory overhead scales with the number of instances, especially for larger models?
- Can you provide insights into the tail latency (e.g., p99 latency)? Does Packrat affect tail latency positively or negatively compared to baseline approaches?
- Does Packrat prevent frequent reconfiguration in highly dynamic workload scenarios? How often do you realistically expect reconfigurations to occur?

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your reviews and valuable feedback. - **Target Audience for CPU-Based DNN Serving:** Packrat is aimed at users and organizations that rely on existing CPU infrastructure, such as large data centers or cloud providers with extensive CPU fleets, where GPUs might be too costly, underutilized, or unsuitable for certain workloads. This includes scenarios where cost, power consumption, or deployment constraints favor CPU-based inference over GPUs, despite the latter often being preferred for their throughput. In our experience, workloads with smaller inferences are the right candidates from both a latency and throughput point of view. - **Memory Implications of Running Multiple Instances:** Running multiple model instances concurrently does incur additional memory overhead due to repeated kernel loads and potentially duplicated data structures. However, our profiling phase captures these effects, and the optimizer selects configurations that balance these overheads against latency improvements. In our experiments, even for larger models, the memory overhead scales in a manageable way. It is important to note that the CPU-based latency for larger models is generally not well suited for real-time serving. We are happy to add a more detailed analysis of these memory implications in the final version of the paper. - **Tail Latency (p99) Insights:** Packrat’s design improves both the average latency and tail latency (e.g., p99 latency) by reducing the synchronization overhead associated with fat-instance execution. Our experimental results indicate that partitioning the workload across multiple smaller instances reduces the worst-case latencies compared to baseline approaches. However, as with any system-level optimization, occasional transient effects may occur during reconfigurations. 
- **Reconfiguration Frequency in Dynamic Workloads:** Packrat employs a batch size estimator with smoothing to track request arrival rates, triggering reconfigurations only when sustained workload changes are detected. This design minimizes frequent reconfiguration, ensuring stability. In practice, reconfigurations are expected to occur infrequently, typically on the order of hours rather than seconds, reflecting significant and lasting changes in the workload rather than transient spikes.

---

Rebuttal Comment 1.1:

Comment: Thank you for your comments; this is valuable work for the venue. I will keep my score as weak accept.
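The two-dimensional knapsack formulation discussed in this review can be sketched as a memoized min-max recursion over the remaining batch and thread budget. This is a simplified sketch: `toy_profile` stands in for Packrat's measured per-instance latency table, and the paper's exact recurrence may differ.

```python
from functools import lru_cache

def toy_profile(batch, threads):
    """Stand-in for Packrat's measured per-instance latency table
    (Amdahl-style: a fixed serial part plus work that scales with threads)."""
    return (1.0 + batch) * (0.1 + 0.9 / threads)

def best_config(total_batch, total_threads):
    """Minimize the slowest instance's latency over all ways of splitting
    the batch and thread budget across concurrent instances (a 2D knapsack
    solved by memoized recursion)."""
    @lru_cache(maxsize=None)
    def solve(b, t):
        if b == 0:
            return 0.0  # nothing left to serve; spare threads stay idle
        best = float("inf")
        for b1 in range(1, b + 1):      # batch given to this instance
            for t1 in range(1, t + 1):  # threads given to this instance
                rest = solve(b - b1, t - t1)
                best = min(best, max(toy_profile(b1, t1), rest))
        return best
    return solve(total_batch, total_threads)

print(best_config(8, 4))  # beats toy_profile(8, 4), i.e. one fat instance
```

With this toy profile, splitting the work across instances beats the single fat instance that uses all threads, mirroring the paper's observation; in the real system the inner table would be filled by targeted profiling.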
Summary: The main message the paper wants to convey seems to be "running multiple instances each with smaller batch sizes and fewer threads for intra-op parallelism can provide lower inference latency." Based on this insight, the paper introduces Packrat, which jointly optimizes the batch size, thread count, and instance count to get optimal performance. The paper claims that it leads to 1.43x-1.83x performance improvement compared to TorchServe. Claims And Evidence: I am not entirely sure whether the claim is valid. Methods And Evaluation Criteria: Evaluations seem reasonable. However, it would be great to see the impact on networks of more variegated size to understand the impact. Also, it would be great to understand the impact given different HW configurations. Also, it would be great to understand the assumptions behind the requests. Theoretical Claims: I am not entirely sure whether the claim is valid. Experimental Designs Or Analyses: Evaluations seem reasonable. However, it would be great to see the impact on networks of more variegated size to understand the impact. Also, it would be great to understand the impact given different HW configurations. Also, it would be great to understand the assumptions behind the requests. Supplementary Material: Yes. I read the appendix. There seems to be no other supplementary material provided. Relation To Broader Scientific Literature: The work aims to optimize the serving infrastructure for ML workloads. Essential References Not Discussed: Liu, Yizhi, et al. "Optimizing {CNN} model inference on {CPUs}." 2019 USENIX Annual Technical Conference (USENIX ATC 19). 2019. Other Strengths And Weaknesses: Efficient inference is very important, so infrastructure to optimize it is becoming more important. As such, the paper is working on a very important topic. There is some lack of clarity in the text that limits what readers can learn from it. Other Comments Or Suggestions: There is some lack of clarity in the text.
I would be happy to reevaluate after rebuttal.

Questions For Authors:
* I understand the main idea: rather than having a single large instance that uses all available threads to parallelize inference within a single batch, it instead divides large batches into smaller batches, each processed concurrently by one of several small instances that use a limited amount of intra-op parallelism. However, this may potentially incur more memory traffic due to multiple kernel reads per instance. Is this optimized by naively tuning the hyperparameters, or is there a principled way of dealing with this? I feel Section A in the appendix is trying to answer this, but it is not really clear to me whether this interference is okay. Figure 7 seems to provide some HW measurements, but not a good explanation of their impact on end-to-end performance.
* In a similar vein, I am not entirely sure whether the DP solution is a valid way to model this. On the other hand, if the paper is assuming that the table serves as an approximation, it would be great to report how exact it is compared to real HW measurements.
* In a similar vein, how does this perform for LLMs larger than GPT-2?
* When using CPUs, performance is very sensitive to the HW configuration. Can you share details such as clustering and memory modes? If possible, it would be great to observe the sensitivity of the approach to different HW parameters, because it is important to understand how robust the optimizer is.
* How does this compare to, or combine with, other works that tile each layer so that computation operates on smaller subsections of the activation to optimize for memory and compute? Liu, Yizhi, et al. "Optimizing {CNN} model inference on {CPUs}." 2019 USENIX Annual Technical Conference (USENIX ATC 19). 2019.
* Can you provide the rates at which requests arrive? Was there some modeling done to mimic real-life scenarios?

Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your reviews and valuable feedback. Below, we provide our responses to the questions in the same order as they were asked:

- Memory can become a bottleneck; however, our optimizer already accounts for this during the profiling phase. If a model is highly constrained by memory bandwidth, the increased latency will be captured in the profiling data, leading the optimizer to select configurations that avoid overloading the memory subsystem. Moreover, with CPU vendors moving towards high-bandwidth memory, such as Intel's latest generation CPUs with integrated HBM, the negative impact of additional memory traffic is further mitigated, reinforcing our approach's benefits.
- The DP solution is valid as it systematically considers the trade-offs between intra-op parallelism and multi-instance execution. The paper shows that while the expected speedups (derived from isolated profiling) are slightly higher than the actual speedups due to predictable interference (such as license-based downclocking and memory contention), the relative ordering remains unchanged. This confirms that the DP-based optimizer is a robust approximation method for selecting configurations. Moreover, we verified this on machines with three different configurations (two Intel server machines and one AMD machine), and the overheads have a constant offset due to memory and CPU slowdowns.
- Our evaluations focus on GPT-2 and BERT, but our model-agnostic framework extends to larger LLMs. It's important to note that CPU inference latency for large LLMs is significantly higher; if such latency can be tolerated, then our profiling and DP-based approach remains effective.
- We evaluated Packrat on three machines—two Intel-based and one AMD-based. On the Intel systems, we used configurations that favor local memory per socket, while the AMD machine exhibited NUMA subclustering. Despite these differences, Packrat consistently achieved significant gains.
As shown in Figures 4 and 5, our approach works effectively across batch sizes common in state-of-the-art work by identifying workload characteristics and updating to the optimal configuration. We are happy to include a detailed sensitivity analysis of these hardware parameters in the final version of the paper. - Packrat operates at a higher level than other works that tile each layer for memory and compute optimizations (e.g., Liu et al. in USENIX ATC 2019). While tiling focuses on optimizing the computation within each layer, Packrat optimizes across instances and threads for DNN serving. These techniques are mainly orthogonal and could be combined: tiling could be used to optimize the kernel-level execution, while Packrat's configuration selection improves end-to-end inference latency by balancing intra-op and inter-instance parallelism. - The paper introduces a batch size estimator that monitors queue depth to assess request arrival rates indirectly. Although exact rates aren't specified, experiments with different batch sizes indicate that Packrat achieves significant improvements, and reconfigurations are triggered only by sustained changes in arrival patterns, ensuring the optimizer's effectiveness in dynamic environments.
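The batch size estimator mentioned above is only described at a high level; one plausible realization (an assumption for illustration, not taken from the paper) is an exponentially smoothed queue-depth estimate with a relative-drift trigger, so one-off spikes are absorbed while sustained shifts eventually cause a reconfiguration.

```python
class BatchEstimator:
    """Smoothed batch-size estimate with hysteresis: reconfigure only when
    the smoothed estimate drifts far from the currently active target."""

    def __init__(self, alpha=0.2, rel_threshold=0.5):
        self.alpha = alpha                  # EMA smoothing factor
        self.rel_threshold = rel_threshold  # relative drift that triggers
        self.ema = None                     # smoothed queue depth
        self.active = None                  # batch size the current config targets

    def observe(self, queue_depth):
        """Record one queue-depth sample; return True if a reconfiguration
        should be triggered."""
        if self.ema is None:
            self.ema = float(queue_depth)
            self.active = self.ema
            return False
        self.ema = self.alpha * queue_depth + (1 - self.alpha) * self.ema
        if abs(self.ema - self.active) / max(self.active, 1.0) > self.rel_threshold:
            self.active = self.ema  # adopt the new operating point
            return True
        return False

est = BatchEstimator()
est.observe(8)          # establish the initial operating point
est.observe(16)         # a one-off fluctuation is smoothed away
```

A sustained shift to the higher load would push the smoothed estimate past the threshold after a few samples and fire a single reconfiguration, matching the "sustained changes only" behavior described in the rebuttal.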
Metadata Conditioning Accelerates Language Model Pre-training
Accept (poster)
Summary: The paper proposes a metadata-enhanced training strategy for LLMs across various model sizes, ranging from 600M to 8B parameters. Specifically, during the first 90% of training, metadata is prepended to the training documents, enabling comparable performance while reducing data usage by 33%. The authors conduct experiments on three training corpora to validate the effectiveness of their approach. Claims And Evidence: 1. One of the key claims in this work is that prepending metadata, i.e., URLs, to training documents achieves comparable performance with 33% less training data. As shown on the right-hand side of Figure 1, as the number of training tokens increases from 0 to 80B, 160B, and 240B, the proposed method, MeCo, consistently outperforms the baseline. However, this improvement may be influenced by randomness. In other words, even with the same metadata-enhanced training data, different random seeds could impact the average performance curve. Some seeds may lead to rapid performance gains, while others may result in slower improvements. Unfortunately, since this work conducts experiments using only a single seed, the extent to which randomness affects the shape of the average performance curve remains unclear. 2. Additionally, Table 13 only shows the performance of three runs at 160B tokens, so it remains unclear how performance changes across different training token counts. Methods And Evaluation Criteria: 1. The primary evaluation in this work is 5-shot, though only a few tasks are evaluated in a zero-shot manner (in Table 3). However, zero-shot evaluation better assesses a model’s ability to generalize, which could provide a stronger indication of the effectiveness of prepending metadata. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. Experiments are conducted based on a single run, which may weaken the strength of the main claim. 2. Zero-shot evaluation should be included to better assess the effectiveness of the proposed method.
Supplementary Material: Yes, e.g., Table 13 Relation To Broader Scientific Literature: Previous studies have explored metadata such as source domains and document IDs, whereas this work leverages URLs, which may be new to the community. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: the paper is well-structured and easy to follow. Weaknesses: the paper lacks a theoretical contribution and primarily presents an empirical study on the effect of prepending URLs in LLM training. Other Comments Or Suggestions: N/A Questions For Authors: 1. Could you provide results from runs with different random seeds when gradually increasing the number of training tokens? 2. Could you include zero-shot performance for your main experiment results? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! We address your concerns below:

**Q1: Randomness of the experiment results**

A1: Thank you for raising this point. We acknowledge that a limitation of our study is the lack of multiple runs with different random seeds for most experiments, primarily due to the high cost of pre-training (our main 160B run required 1,536 H100 GPU hours). That said, we would like to emphasize the following:

(1) Single-run experiments are standard in LM pre-training studies, given the resource constraints. Our setup follows established practice, consistent with prior work ([Xie et al., 2023](https://arxiv.org/pdf/2305.10429); [Li et al., 2024](https://arxiv.org/pdf/2406.11794); [Wettig et al., 2024](https://arxiv.org/pdf/2402.09739)). It is also generally accepted that pre-training exhibits less variance than fine-tuning.

(2) As you noted, Table 13 shows low variance of our pre-training experiments (3 different seeds and different subsets of data), particularly when averaging across the full evaluation suite. We believe that the result we present is significant and is not due to randomness.

(3) Figures 3–4 demonstrate consistent gains from MeCo across a range of model sizes and datasets, further supporting that the improvements are not artifacts of randomness but reflect meaningful trends.

**Q2: Lack of zero-shot results.**

A2: In this paper, we followed [OLMES](https://arxiv.org/pdf/2406.08446)’s setting, which adopts curated 5-shot examples to reduce evaluation variance. But we agree that adding zero-shot results offers a better picture of model performance. We add a zero-shot evaluation for our main 1.6B, 160B, DCLM experiment as follows:

| | MMLU | ARC-e | ARC-c | CSQA | HSwag | OBQA | PIQA | SIQA | WG | TruQA | Avg. |
|----------|------|-------|-------|------|-------|------|------|------|------|-------|------|
| Standard | 35.1 | 70.7 | 41.4 | 59.5 | 65.3 | 46.6 | 72.9 | 48.9 | 63.8 | 35.6 | 54.0 |
| MeCo | 35.5 | 71.0 | 45.4 | 60.6 | 66.2 | 52.4 | 73.0 | 47.3 | 64.9 | 35.8 | 55.2 |

As we can see, the MeCo model still achieves significantly better zero-shot performance.

**Q3: Lack of theoretical contributions**

A3: We acknowledge that this paper did not provide a theoretical justification for MeCo’s effectiveness or a rigorous analysis of how MeCo changes the training dynamics. However, the main contribution of this paper lies in proposing the method and uncovering this interesting phenomenon (metadata conditioning accelerates pre-training), which is both novel and possesses significant empirical impact. Theoretical understanding of such a method is challenging due to the nature of pre-training, and is also beyond the scope of this empirical study. That said, we included empirical ablations and hypotheses to shed light on the possible inner workings of MeCo. In Sec 5.2 and Table 5, we showed that using hashed URLs can achieve similar performance to natural URLs, suggesting that the semantic meaning of the URLs is not necessary for better pre-trained models, and that MeCo mostly provides signals to group similar documents together. We agree with reviewer TsUA that with the grouping information, the models can either learn to upweight certain “higher-quality” domains (such as Wikipedia) or learn an implicit curriculum that helps accelerate training, but it is still unclear to us how exactly MeCo changes the training, and it warrants further investigation. We also highlight a recent preprint, [Zhu et al., On the Power of Context-Enhanced Learning in LLMs](https://arxiv.org/pdf/2503.01821).
Though their setting is synthetic and differs from ours, their theoretical analysis shows that context-enhanced learning—such as providing metadata at the beginning of the sequence without actually optimizing cross entropy loss on these tokens—can improve sample efficiency. We found the result insightful and encourage the reviewers to check it out as well.
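For concreteness, the data-side recipe discussed in this thread (prepend the URL, or a hash of it, for the first 90% of training, then cool down without metadata, computing loss only on document tokens) can be sketched as follows. This is a minimal character-level sketch under those stated assumptions, not the authors' released implementation; real code would operate on token IDs after tokenization.

```python
import hashlib

def meco_example(text, url=None, hashed=False):
    """Build one training sample: optionally prepend metadata (the URL, or
    its hash, as in the hashed-URL ablation) and return a character-level
    loss mask that excludes the metadata prefix, since no loss is computed
    on metadata tokens."""
    if url is None:  # cooldown phase: plain document, no metadata
        prefix = ""
    else:
        meta = hashlib.sha256(url.encode()).hexdigest()[:16] if hashed else url
        prefix = meta + "\n\n"
    sample = prefix + text
    mask = [0] * len(prefix) + [1] * len(text)  # 1 = contributes to the loss
    return sample, mask

def in_cooldown(step, total_steps, cooldown_pct=10):
    """True during the last cooldown_pct% of training, where metadata is dropped."""
    return step * 100 >= total_steps * (100 - cooldown_pct)
```

At inference time the same kind of prefix (e.g., a trusted domain's URL) can be prepended to a prompt to steer generation, which is how the steering experiments condition on metadata.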
Summary: This paper proposes to include metadata (source links) in the pre-training of language models to boost learning efficiency. The proposed method, MeCo, pre-trains language models with text augmented with metadata in the first 90% of data, and the metadata are removed in the last 10% of data for “cooldown”. MeCo is benchmarked on various commonsense reasoning datasets to show better performance and is also more steerable to different generation styles conditioned on different metadata. Claims And Evidence: 1. MeCo is claimed to be a more efficient pre-training paradigm. However, the included benchmarks only involve commonsense reasoning, which fails to cover the full range of abilities in LMs. 2. MeCo is claimed to make LMs more steerable, which is well-validated by various ablation studies and harmfulness reduction experiments. Methods And Evaluation Criteria: The benchmarks are mostly commonsense reasoning, which is a bit narrow to support the claim that MeCo is universally more efficient. The biggest concern is that the pre-training corpus draws on various sources, as shown in Table 15, yet commonsense reasoning can only benchmark a few datasets (e.g., Wikipedia) in the corpus. Theoretical Claims: N/A; the paper makes an empirical claim. Experimental Designs Or Analyses: The experiment design is reasonable. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper concerns the sources used for pre-training language models; it incorporates metadata into pre-training to enable better generation steering. Essential References Not Discussed: The related literature is well discussed. Other Strengths And Weaknesses: Another weakness of MeCo is the requirement for metadata to be prepended to the raw text. The setup in this paper discusses only the case where metadata are available for all pre-training data. However, most curated raw texts do not contain their metadata.
Then, will the steerability observed from pre-training with 90% w/ metadata + 10% w/o metadata still appear in the case where most data are without metadata? I feel the conclusion in this paper might not be applicable to larger-scale pre-training, which requires further discussion. Other Comments Or Suggestions: This paper proposes an interesting idea, which is worth further discussion. However, it's not ready for publication because of its narrow benchmarking scope, its limited explanation of the pre-training efficiency gains, and its assumption that metadata are available for all data. Questions For Authors: It is intuitive that metadata enables more steerable generation, and it may also plausibly improve pre-training efficiency. But it's not intuitive that the resulting model will perform better, and the improvement is only shown on commonsense reasoning. Can you provide some intuition as to why metadata can improve an LM's performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! We address your concerns here: **W1: Evaluation only includes commonsense reasoning and only reflects a few sources such as Wikipedia.** A1: **We evaluate our models by using OLMES ([Gu et al., 2024](https://arxiv.org/abs/2406.08446v1)), the industry-standard evaluation behind AI2’s OLMo models ([Groeneveld et al., 2024](https://arxiv.org/abs/2402.00838v3))**. Similar tasks are also used in [Llama](https://arxiv.org/abs/2302.13971) and the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard). While our evaluation does not include some of the latest popular LLM benchmarks on math (GSM8K, MATH), coding (HumanEval), graduate-level QA (GPQA), or instruction following (AlpacaEval, IFEval), we emphasize that at our experiment scale (1B model, 160B tokens), models do not achieve any meaningful performance on math, coding, and graduate-level QA, making these benchmarks uninformative. Additionally, since our study focuses on pre-training instead of post-training, the models are not expected to perform well on instruction-following tasks. Other studies that similarly investigate pre-training (e.g., [Wettig et al., 2024](https://arxiv.org/pdf/2402.09739); [Yu et al., 2024](https://arxiv.org/pdf/2406.06046)) adopt comparable evaluation setups. **We also stress that OLMES covers more than just “commonsense”**. For example, MMLU benchmarks model knowledge on a diverse set of subjects, including math, medicine, and economics; OpenbookQA and TruthfulQA test models’ factual correctness. This suite of tasks has been used by numerous pre-training studies and industry LLMs, and is considered to cover a diverse range of capabilities and to paint a holistic picture of model performance.
**We respectfully disagree with the claim that the evaluation reflects only a narrow set of sources such as Wikipedia**—for example, websites like personal blog-posts often benefit models’ commonsense performance. **W2: MeCo only discussed the case where all metadata is available. Most curated raw texts do not contain metadata.** A2: We respectfully disagree with the reviewer on “most curated raw texts do not contain metadata”. Most pre-training sources, such as CommonCrawl, C4, FineWeb, RefinedWeb, and DCLM, provide at least the URL information. For companies that perform their own data crawling, retaining metadata like URLs is standard practice. **W3: Explanation of the pre-training efficiency** A3: Thanks for raising this point! We included empirical ablations and hypotheses to shed light on the possible inner-workings of MeCo. In Sec 5.2 and Table 5, we showed that using hashed URLs can achieve similar performance as natural URLs, suggesting that the semantic meaning of the URLs is not necessary for better pre-trained models, and MeCo mostly provides signals to group similar documents together. We agree with reviewer TsUA that with the grouping information, the models can either learn to upweight certain “higher-quality” domains (such as Wikipedia) or learn an implicit curriculum that helps accelerate training—but it is still unclear to us how exactly MeCo changes the training, and it warrants further investigation. We also highlight a recent preprint, [Zhu et al., On the Power of Context-Enhanced Learning in LLMs](https://arxiv.org/pdf/2503.01821). Though their setting is synthetic and differs from ours, their theoretical analysis shows that context-enhanced learning—such as providing metadata at the beginning of the sequence without actually optimizing cross entropy loss on these tokens—can improve sample efficiency. We found the result insightful and encourage the reviewers to check it out as well.
Summary: The paper presents a novel method named Metadata Conditioning then Cooldown (MeCo), which prepends metadata (primarily URLs) to pretraining documents and significantly accelerates pre-training. The authors also show how MeCo can be used for model steering by conditioning prompts on metadata, enhancing both downstream task performance and reducing harmful outputs. This approach likely works because the model can perform data grouping based on data sources. Claims And Evidence: The claims are well supported by experiments. The main claims include: 1. Accelerated pre-training. 2. Improved downstream performance. 3. Model steerability. 4. Metadata can have different types. 5. Minimal extra computational cost. Methods And Evaluation Criteria: The methods and evaluation are properly performed, including comparing PPL and downstream task scores. Theoretical Claims: There are not many theoretical claims. Most of the claims are empirical and based on experiment results, but I think they make sense intuitively. Experimental Designs Or Analyses: The experimental designs are straightforward and intuitive. They can support the claims of the paper. Supplementary Material: I've reviewed most of the supplementary material and it looks OK to me. Relation To Broader Scientific Literature: The method can greatly accelerate pre-training and enhance model steerability without introducing computational overhead or limited applicability. It can be very helpful to the pretraining domain. Essential References Not Discussed: I don't have comments on this section. Other Strengths And Weaknesses: Strengths: 1. Extensive evaluation across multiple tasks, datasets, and model sizes. 2. Insightful ablation studies. 3. Clear, practical and easy method. 4. The most interesting thing to me is that using hashed URLs can also do the trick. Weaknesses: 1. Lack of theoretical explanation: empirical evidence is provided, but I am not really sure how it changes the training dynamics. 2. 
The choice of Cooldown Strategy seems effective, but arbitrary. The paper does not thoroughly justify the chosen duration. 3. Although MeCo was tested across several standard benchmarks and datasets, the paper did not extensively evaluate its generalization to languages beyond English or explicitly measure robustness across more diverse downstream tasks. Other Comments Or Suggestions: No other comments. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your suggestions and questions! We appreciate that you recognize the paper’s contributions and strengths. To address your concerns: **W1: Lack of theoretical explanation—how does MeCo change the training dynamics?** A1: Thank you for raising this point! We acknowledge that this paper did not provide a theoretical justification for MeCo’s effectiveness or a rigorous analysis of how MeCo changes the training dynamics. However, the main contribution of this paper lies in proposing the method and uncovering this interesting phenomenon (metadata conditioning accelerates pre-training), which is novel and has significant empirical impact. Theoretical understanding of such a method is challenging due to the nature of pre-training, and is also beyond the scope of this empirical study. That said, we included empirical ablations and hypotheses to shed light on the possible inner workings of MeCo. In Sec 5.2 and Table 5, we showed that using hashed URLs can achieve similar performance as natural URLs, suggesting that the semantic meaning of the URLs is not necessary for better pre-trained models, and MeCo mostly provides signals to group similar documents together. We agree with reviewer TsUA that with the grouping information, the models can either learn to upweight certain “higher-quality” domains (such as Wikipedia) or learn an implicit curriculum that helps accelerate training—but it is still unclear to us how exactly MeCo changes the training, and it warrants further investigation. We also highlight a recent preprint, [Zhu et al., On the Power of Context-Enhanced Learning in LLMs](https://arxiv.org/pdf/2503.01821). Though their setting is synthetic and differs from ours, their theoretical analysis shows that context-enhanced learning—such as providing metadata at the beginning of the sequence without actually optimizing cross entropy loss on these tokens—can improve sample efficiency. 
We found the result insightful and encourage the reviewers to check it out as well. **W2: Arbitrary duration choice for cooldown** A2: We provide an ablation study on the duration of cooldown in Table 14 (Appendix B.3), which demonstrates that 10% cooldown leads to competitive performance. **W3: Though the authors conducted evaluation on some standard benchmarks, evaluation is lacking on (1) non-English tasks and (2) robustness across more diverse tasks.** A3: Thanks for raising this point! We evaluate our models by using OLMES ([Gu et al., 2024](https://arxiv.org/abs/2406.08446v1)), the industry-standard evaluation behind AI2’s OLMo models ([Groeneveld et al., 2024](https://arxiv.org/abs/2402.00838v3)). Similar tasks are also used in [Llama](https://arxiv.org/abs/2302.13971) and [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard). While our evaluation does not include some of the latest popular LLM benchmarks on math (GSM8K, MATH), coding (HumanEval), graduate-level QA (GPQA), or instruction following (AlpacaEval, IFEval), we emphasize that at our experiment scale (1B model, 160B tokens), models do not achieve any meaningful performance on math, coding, and graduate-level QA—making these benchmarks uninformative. Additionally, since our study focuses on pre-training instead of post-training, the models are not expected to perform well on instruction-following tasks. Other studies that similarly investigate pre-training (e.g., [Wettig et al., 2024](https://arxiv.org/pdf/2402.09739); [Yu et al., 2024](https://arxiv.org/pdf/2406.06046)) adopt comparable evaluation setups. That said, we acknowledge that the evaluation suite we use focuses on English tasks, which is a common limitation in language model pre-training studies.
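The MeCo recipe discussed above (prepend the URL domain during roughly the first 90% of pre-training, then "cool down" on plain text for the last 10%) can be sketched as a simple data-formatting step. This is a minimal illustrative sketch assuming the paper's verbal description; the function name and signature are not from the authors' code:

```python
def format_example(text, url_domain, step, total_steps, cooldown_frac=0.1):
    """Return the training string for one document at a given training step.

    Metadata conditioning phase: prepend the domain to the document.
    Cooldown phase (last `cooldown_frac` of steps): plain text only,
    which matches the metadata-free inference setting.
    """
    in_cooldown = step >= (1.0 - cooldown_frac) * total_steps
    if in_cooldown or url_domain is None:
        return text
    return f"{url_domain}\n{text}"
```

As the rebuttal notes (citing Zhu et al.), the cross-entropy loss need not be optimized on the metadata tokens themselves; that masking detail is omitted here.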
Summary: This paper proposes Metadata Conditioning then Cooldown (MeCo) to accelerate LM pre-training. MeCo starts with pre-training LMs with metadata (URL's absolute domain) prepended in front of the text in the first 90% of pre-training and uses text only (no metadata) for pre-training in the last 10% of pre-training. They conduct experiments with decoder-only LMs of four scales on three pre-training datasets and show that MeCo improves the performance on most downstream tasks and obtains an average performance gain of 1.0. They conduct extensive ablations to justify the design choices in MeCo and provide a partial explanation for its success. Claims And Evidence: The main claim is "MeCo accelerates LM pre-training". The paper provides sufficient and clear evidence for this, with LMs pre-trained on three pre-training datasets and at four model scales. The average performance improvement is convincing. Methods And Evaluation Criteria: The evaluation is good. Using OLMES, an evaluation suite from Ai2, makes the evaluation convincing and reproducible. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiment design is very sound Supplementary Material: No Relation To Broader Scientific Literature: This paper shows that conditioning on metadata improves pre-training efficiency. While prior works do use metadata or some tags to steer LM generations, using metadata to speed up pre-training has never been explored. This is a novel and important contribution to the community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths === - The paper is very well-written. I enjoy how the authors arrange the contents and cross-reference results in the latter part of the paper to support their claims in Section 1 and Section 2. - The method, MeCo, is simple, elegant, and effective. - The evaluation is reproducible. 
- The related works are properly discussed Weaknesses === None Other Comments Or Suggestions: I like this paper a lot Questions For Authors: - Q1. There is a paper [1], not strictly relevant but somewhat related, that shows that LLM's answer does not seem to be affected when changing the metadata of the retrieved documents in RAG. This somewhat contradicts the results shown in this paper, showing that adding proper metadata helps, and adversarial metadata deteriorates the results. It would be interesting to hear the author's comments on this and how MeCo may or may not be better at distinguishing more reliable sources in a retrieval-augmented generation. It will also be beneficial to include this discussion in the paper. However, this is not a requirement for the paper. - Q2. I am personally very interested in understanding why MeCo is more efficient, and I believe that the research community will also be interested in this. While the paper does provide some explanations, I think they are far from complete, as acknowledged by the authors in Section 5.2. I think studying pre-training dynamics may reveal some interesting observations about MeCo. For example, does MeCo prioritize learning the data from certain domains? Does MeCo's training automatically enable some curriculum for the LM? It would be beneficial if the authors could also release the intermediate checkpoints of these models, as pre-training is computationally infeasible for most research groups. - [1] [Do Metadata and Appearance of the Retrieved Webpages Affect LLM’s Reasoning in Retrieval-Augmented Generation?](https://aclanthology.org/2024.blackboxnlp-1.24/) (Chiang & Lee, BlackboxNLP 2024) Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for your positive review! We are glad that you found our method interesting and our experiment design sound. To answer your questions: **Q1: Chiang and Lee, 2024 show that providing different source information often does not affect RAG results. How to interpret this?** A1: Thanks for providing the reference and we’ll include a discussion of this work. We believe the influence of metadata depends on both the metadata type and the model’s training. As noted by Chiang and Lee, publication time significantly impacts RAG performance, while source does not—indicating higher sensitivity to temporal information. However, under MeCo training, models also become sensitive to source metadata as it’s explicitly conditioned on such information—this is evidenced by our conditional inference result. **Q2: Why does MeCo work? Can you release intermediate checkpoints?** A2: Thanks for raising this point! We included empirical ablations and hypotheses to shed light on the possible inner workings of MeCo. In Sec 5.2 and Table 5, we showed that using hashed URLs can achieve similar performance as natural URLs, suggesting that the semantic meaning of the URLs is not necessary for better pre-trained models, and MeCo mostly provides signals to group similar documents together. We agree with the reviewer that with the grouping information, the models can either learn to upweight certain “higher-quality” domains (such as Wikipedia) or learn an implicit curriculum that helps accelerate training—but it is still unclear to us how exactly MeCo changes the training, and it warrants further investigation. We will also make sure to release all intermediate checkpoints to facilitate future research. --- Rebuttal Comment 1.1: Comment: Thank you for your response and for agreeing to share the intermediate checkpoints. I keep my original evaluation. This is a good paper that should be accepted.
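The hashed-URL ablation discussed in A2 (Sec 5.2 / Table 5 of the paper) rests on the idea that hashing keeps the grouping signal while destroying the URL's human-readable semantics. A small illustrative sketch, with a hypothetical helper name not taken from the paper:

```python
import hashlib

def hashed_metadata(url_domain, n_hex=8):
    """Opaque group tag for a domain: identical domains map to identical
    tags, so documents from one source are still grouped together, but
    the tag itself carries no semantic meaning."""
    return hashlib.sha256(url_domain.encode("utf-8")).hexdigest()[:n_hex]
```

The hash is deterministic, so "en.wikipedia.org" always yields the same tag, while distinct domains almost surely yield distinct tags.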
Multilayer Matrix Factorization via Dimension-Reducing Diffusion Variational Inference
Accept (poster)
Summary: This work presents a diffusion variational inference algorithm for multilayer matrix factorization. The authors treat each layer as a diffusion step. Another difference is that the dimension of the latent variable reduces with the layer depth, termed dimension-reducing diffusion VI. This is the nature of latent representation learning in matrix factorization, and is different from diffusion VI with equal latent variable dimension. The dimension reduction is achieved by imposing an orthogonal transform matrix. Based on this, the authors derive the VI objective. Experiments are conducted on two tasks, abundance estimation of hyperspectral images and representation learning on several image datasets, in which the performance is further evaluated by clustering. ## Update after rebuttal Since the authors addressed most concerns, I will keep the score. Claims And Evidence: The claims are supported by experiments. Methods And Evaluation Criteria: Yes. Theoretical Claims: There is no theoretical claim. There are many derivations about the final objective. I did not carefully check them. Experimental Designs Or Analyses: The experimental design seems sound, but may be restricted to some simple datasets. Supplementary Material: I quickly went through all the supplementary material. Relation To Broader Scientific Literature: The work mainly contributes to deep matrix factorization. This is a classical tool and can be related to many classical methods in signal processing. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** It introduces new inference techniques for deep matrix factorization. The model can be extended in many ways, for example changing the priors. The use of diffusion VI is promising and interesting to me. Therefore, I am leaning toward acceptance at this stage. **Weakness** 1. From the perspective of inference techniques, the improvement may not be so significant. 
The main difference seems to be introducing the transform matrix $U$, in order to adjust the dimensions. 2. The experiments are conducted on simple datasets. I am not sure about the usefulness of the proposed models. Especially for the low dimensional representation learning, there are many simple yet powerful models. It is not sure whether the proposed model can be scaled to more complex and larger problems. 3. In the abundance estimation experiment, is the MSE evaluated on reconstruction error? Are there other metrics to evaluate the abundance estimation accuracy? Other Comments Or Suggestions: The overall organization is clear and easy to read. However, it contains numerous unnecessary and colloquial words. It could benefit from revising for conciseness. And there could be more space to present more results. Questions For Authors: 1. To facilitate DRD-VI, the authors impose an orthogonal transform $U$ in the diffusion process. I am wondering if learning this $U$ along with VI parameters could make the optimization harder. 2. I am wondering what is the reason for adding the orthogonal constraint for $U$. And in practice, the orthogonality is introduced by regularization, which means that $U$ is not strictly orthogonal. Would this affect the results? 3. Does it require many layers for the diffusion VI? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are much obliged to you for your careful review and constructive comments. We will carefully take into account your comments in our revision, and we would like to discuss various aspects as follows. **Regarding “Other Strengths and Weaknesses”** Point 1: Thank you for expressing your view in the beginning that “the use of diffusion VI is promising and interesting to me.” In Point 1, you mentioned that the improvement from the inference viewpoint may not be so significant. Indeed, once you understand the principle of the equal-dimension diffusion models very well, it is perhaps not difficult to anticipate that dimension-reducing diffusion models (more accurately, embedding dimensionality reduction directly into the diffusion model) could be constructed—at least intuitively. Still, the latter was not tried before. There are technical details to overcome, and some of them are not trivial. We would say that we start with a simple idea, but there are non-trivial details to work out, both in the development and in experiments. Point 2: As with some fundamental research, at this stage we focus on developing a concept (specifically, a new inference technique for MMF) and providing a proof of concept. As future work, the application of the concept to more complex datasets could be considered. Also, the hyperspectral unmixing problem we demonstrated in this paper is an important representative application in the context of hyperspectral remote sensing. Point 3: In addition to MSE, researchers in hyperspectral remote sensing also consider the abundance angle distance (AAD), which uses the angle of two vectors as the measure of similarity. We have provided AAD in the anonymous GitHub repository; please see Table [2](https://github.com/AnonymousPaper-Submission/AnonymousPaper-Submission-ICML2025-rebuttal) there. **Regarding “Other Comments Or Suggestions”** We agree and will try our best to better streamline the writing. 
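The abundance angle distance mentioned in Point 3 is, per the verbal description, the angle between two abundance vectors. A minimal sketch of that metric (an illustrative implementation, not the authors' code):

```python
import numpy as np

def abundance_angle_distance(a, b):
    """Angle in radians between abundance vectors a and b.

    Zero when the vectors are collinear; the clip guards against
    floating-point values slightly outside [-1, 1] before arccos.
    """
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Unlike MSE, this measure is invariant to positive rescaling of either vector, which is why it is a common complement to MSE in hyperspectral unmixing.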
**Regarding “Questions for Authors”** Point 1: Yes, optimization with semi-orthogonal matrix constraints is more difficult in principle. But that does not stop researchers from trying. In machine learning, for example, [1] Moustapha Cisse et al. "Parseval Networks: Improving Robustness to Adversarial Examples", ICML 2017\ [2] Nitin Bansal et al. "Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?", NeurIPS 2018, researchers have found success with their numerical results. The regularization method we use is essentially the same as that in the above references. Our empirical experience with the MMF application is that the regularization method works reasonably well. Point 2: To answer your question, we provide some numerical results in Table [3 and 4](https://github.com/AnonymousPaper-Submission/AnonymousPaper-Submission-ICML2025-rebuttal). We observe that the $\boldsymbol{U}_t$'s are quite close to semi-orthogonality. Concerning the question of why semi-orthogonal $\boldsymbol{U}_t$'s are used, one important reason is to simplify the variational process. It can be shown that if $\boldsymbol{U}_t$'s are not semi-orthogonal, the variational operations would be more complicated. In fact, if we do not control the rank of $\boldsymbol{U}_t$ (which we do so via semi-orthogonality), there will be a lot of problems. Point 3: In the context of MMF, our experience is that we do not need as many layers as in generative models, which can use thousands. We would say tens for MMF. The concept introduced in this work does not pose a constraint on the number of layers one can use, however, and as future work it would be interesting to see other applications that require more layers. --- Rebuttal Comment 1.1: Comment: Thanks for the authors’ response. I admit the contributions and think the paper is interesting. I am overall positive with the paper and will consider it in the reviewer discussion phase. 
--- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer cESg for the constructive suggestions and the feedback on our response.
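The soft semi-orthogonality regularization discussed in the rebuttal above (the same squared-Frobenius form used in Parseval networks and in Bansal et al.) can be sketched as follows; the function name is an illustrative assumption:

```python
import numpy as np

def semi_orthogonality_penalty(U):
    """||U^T U - I||_F^2: zero iff U has orthonormal columns.

    Added to the training loss with a weight, this softly pushes each
    transform matrix toward semi-orthogonality without a hard constraint,
    which is why the learned U_t's are close to, but not exactly,
    semi-orthogonal.
    """
    k = U.shape[1]
    G = U.T @ U
    return float(np.sum((G - np.eye(k)) ** 2))
```

This soft penalty sidesteps constrained optimization on the Stiefel manifold, at the cost of only approximate orthogonality, as the rebuttal's Tables 3 and 4 quantify.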
Summary: The paper introduces a novel diffusion-model based variational inference method for multilayer matrix factorization (MMF), using a dimension-reducing Markov chain as the noise. The method is evaluated on hyperspectral image unmixing, where it outperforms state-of-the-art MMF and deep learning methods in abundance estimation, and on low-dimensional representation learning, where it achieves competitive clustering performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked the formulation of the model and objective and they seem to be sound (apart from typos, see below). The details of the derivations are not thoroughly checked. Experimental Designs Or Analyses: Yes. The experimental designs and analyses are sound. Supplementary Material: Yes, I reviewed part B of the appendix. Relation To Broader Scientific Literature: As far as I know, this paper is the first paper to leverage diffusion models for MMF. The proposed method achieves competitive or better performance than the previous state of the art methods on abundance estimation and representation learning. Essential References Not Discussed: While the authors do not claim their paper is the first to propose dimension-reducing diffusion or diffusion for representation learning, it would be good to mention existing work on dimension-reducing diffusion such as Jing et al. 2022 and Zhang et al 2023. Jing, Bowen, et al. "Subspace diffusion generative models." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. Zhang, Han, et al. "Dimensionality-varying diffusion process." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. Other Strengths And Weaknesses: - The paper presents the experiments very clearly. The setup on the abundance estimation problem is clear and easy to follow. 
- The paper doesn't give a lot of insights and justifications for why the diffusion-based MMF is better or how it is different from hierarchical VAEs. Other Comments Or Suggestions: - In the introduction section "in particular, for MMF, variational autoencoders (VAEs) appear to be the only viable solution" needs more clarification. - The separation of the $\gamma$ and $\phi$ parameters is confusing. Since $\gamma$ is contained in $\phi$, maybe $q_\phi(x_T\mid x_{T-1})$ can be used instead of $q_\gamma(x_T\mid x_{T-1})$? - There seem to be typos in equations (15) and (16). It should be $q_\phi(x_{t-1}\mid x_t, x_0)$ and $q_\phi(x_{T-1}\mid x_{T}, x_0)$ instead of $q_\phi(x_{t-1}, x_t \mid x_0)$ and $q_\phi(x_{T-1}, x_{T}\mid x_0)$. - The main paper contains too much derivation that does not directly contribute to the narrative of the paper. Many of the technical details can be moved to the appendix to make room for more discussions on the motivations and comparisons with existing methods. For example, section 3.3.3 can be expanded. Questions For Authors: 1. In practice, what is the difference between the diffusion VI approach and the hierarchical VAE approach? What's the reason for preferring the diffusion approach over the VAE one? 2. What are some of the future directions of this work? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your time and effort in reviewing our work, and we are grateful for your generally positive feedback. We will take your advice to improve the paper. We also want to discuss some of the main points you raised. **Regarding “Essential References not Cited”** We agree, and thank you for providing us with the references. We will add a paragraph in the Introduction to describe existing works related to dimension-varying diffusion. We take this opportunity to give a reflection. The existing papers for dimension-varying diffusion are in the topic of generative models. They can be seen as concatenations of dimensionality-reduction processes and diffusion models. For the case of the references you provided, they concatenate multiple dimensionality-reduction processes and multiple diffusion models. On the other hand, we consider the topic of multilayer matrix factorization (MMF). We consider a single diffusion model and employ dimensionality reduction at each layer of the diffusion process, with the aim to develop a per-layer light-weight inference scheme. This makes the proposed method quite different from the dimension-varying diffusion methods we see in the prior generative modeling literature. However we agree that covering the prior literature would enhance the quality of this work, providing a broader coverage of related studies. **Regarding Comparison with Hierarchical VAEs** You mentioned that we didn’t give a lot of insights and justification on how diffusion-model based MMF differs from hierarchical VAEs (HVAEs). That is a good question, and we will try to improve the writing on this part in the revision. Initially, when we wrote this paper, we planned to cover the HVAEs in the main development (Section 3.3). However, we ended up abandoning it due to space limitation. It is in fact possible to apply HVAEs to the MMF model, associating each layer of the factorization model with one deep neural network for variational inference. 
We are not aware of any work on HVAEs for MMF, although it is possible. As described in [[1]](https://arxiv.org/pdf/2208.11970), the stochastic approximation in HVAEs may have larger variance when the number of layers is larger. This, together with the need to use one deep neural network for each layer of the factorization model, may not fit the purpose of per-layer light-weight operations well. One advantage of the diffusion model is that it uses a simple variational (diffusion) model to greatly simplify the variational process and sidestep the aforementioned high-variance issue [[1]](https://arxiv.org/pdf/2208.11970)—and this is our motivation for using diffusion models. It is also worthwhile to note the following: In principle, HVAEs can be directly applied to MMF. But the previous diffusion model cannot, because it assumes equal dimension with the latent variables. Our endeavor, simply speaking, is to make the MMF application possible. [1] Luo, Calvin. "Understanding diffusion models: A unified perspective." arXiv preprint arXiv:2208.11970 (2022). **Regarding “Other Comments or Suggestions”** We see your point and will try our best to streamline and make the writing more concise. The work consists of heavy details at some points, which can be better balanced, though we want to say that our writing style is to make the underlying assumptions and tricks clear to the reader; once again, we will try our best to balance. On “section 3.3.3 can be expanded,” you made a very good point and we agree. This, however, will be left as future work. We believe that from there we can develop more results. **Regarding “Questions to Authors”** Q1: We have run some simulations during this rebuttal period. Please see Table [1](https://github.com/AnonymousPaper-Submission/AnonymousPaper-Submission-ICML2025-rebuttal) in the anonymous GitHub repository. So far, our empirical finding is that, in general, the HVAE does not lead to performance improvement compared to the VAE. 
A possible reason is that the HVAE may require more careful training skills and more computations to achieve good performance. Q2: One future direction is to perform further analysis to expand or better understand the diffusion-based MMF interpretation in Section 3.3.3. We think that this part is quite unique to MMF in terms of providing interpretable explanations.
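As a concrete illustration of "embedding dimensionality reduction inside the diffusion process" discussed in these responses, one forward step of a dimension-reducing chain might look like the sketch below. The exact noising form, step signature, and noise scale are assumptions for illustration only, not the paper's equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def dimension_reducing_step(x, U, sigma=0.1):
    """One illustrative forward step: project x to a lower dimension with a
    semi-orthogonal U (orthonormal rows), then add Gaussian noise.

    Each layer of the factorization model is associated with one such
    diffusion layer, so the latent dimension shrinks layer by layer.
    """
    # orthonormal rows: U @ U.T should be the identity of the reduced size
    assert np.allclose(U @ U.T, np.eye(U.shape[0]), atol=1e-8)
    return U @ x + sigma * rng.standard_normal(U.shape[0])
```

The semi-orthogonality of `U` is what keeps the variational operations simple, per the rebuttal's Point 2 above.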
Summary: The paper presents a diffusion model (DM) based variational inference (VI) method for multilayer matrix factorization (MMF). They derive a variational process which is computationally efficient and lighter weight than other methods such as VAEs. Their method DRD-VI also reduces latent dimensionality at each step of the diffusion process, satisfying the requirement of matrix factorization to learn a lower dimensional representation of the data. The paper compares the performance of DRD-VI with multiple MF and MMF baselines on the problems of abundance estimation and low-dimensional representation learning with multiple datasets. They find DRD-VI outperforms the baselines on most datasets. Claims And Evidence: The claims are well supported and derivation of the proposed VI process is clear. Methods And Evaluation Criteria: The proposed evaluation criteria are appropriate for the problem of MMF. The paper evaluates DRD-VI on the blind inverse problem of abundance estimation of hyperspectral images as well as low dimensional representation learning of six different datasets. Theoretical Claims: I did not thoroughly check each proof in the appendix, but the main equations in the paper look correct. Experimental Designs Or Analyses: The experimental designs are valid as the authors evaluate on the established MMF blind inverse problem as well as the problem of low-dimensional representation learning using standard datasets such as CMU-PIE, Fashion-MNIST, etc. Supplementary Material: I reviewed the additional experimental results provided in Appendix B. Relation To Broader Scientific Literature: The key contribution of the paper extends existing variational inference methods for diffusion models to the problem setting of MMF. By designing the VI process for diffusion to reduce dimensionality at each step with light weight layers, this paper connects to the broader areas of multilayer matrix factorization and diffusion model variational inference. 
Essential References Not Discussed: How do latent diffusion models fit in the context of this paper? The paper notes that diffusion models assume the latent space has the same dimensionality as the data, however there is existing work on latent diffusion models in a lower dimensional latent space including Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." (Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022). Another point in the paper is that DM-based VI has not been considered for MMF which may be the case, however there has been work done on related dimensionality reduction problems including diffusion based recommendation models such as Wang, Wenjie, et al. "Diffusion recommender model." (Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2023.) This paper in particular proposes a method L-DiffRec which compresses the data to a lower latent dimensionality and proceeds with the diffusion process in this latent space. I believe both of these works, as well as the broader literature on lower-dimensional latent diffusion models and diffusion-based recommendation systems, should be contextualized as related work in this paper. Other Strengths And Weaknesses: Strengths: - The paper provides a thorough description and derivation of a VI method for diffusion model based MMF. - The experimental results show DRD-VI is promising for multiple datasets. - The method design choices are well justified. Weaknesses: - While the experimental results look good, the authors do not provide code which would help answer the questions regarding reproducibility and transparency. - The authors should contextualize latent diffusion models and diffusion based recommendation models in their related work. Other Comments Or Suggestions: - Spelling error for section 2.2 title: "2.2. Varational Inference for MMF" should be "2.2. Variational Inference for MMF". 
- In section 4.1, you misspell "MiSiCNet" as "MiSiNet" when introducing baselines (line 367). - Table 2 in the paper has VAE listed as a method; should this be changed to VASCA, as Figure 1 lists it as VASCA? Questions For Authors: - For the sake of reproducibility and future research in this area, can the authors release their code? - The MiSiCNet results look better than DRD-VI for some settings (abundance estimation with the APEX dataset). Is there a reason or hypothesis for why this is the case? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are very thankful for your thoughtful comments. We will do our best to revise and improve the paper. Here we would like to give a reflection on the main points you raised. **Regarding “Essential References not Cited”** Thank you for pointing out some references related to dimension-varying diffusion. We agree that we should cover that, and we plan to add a paragraph in the Introduction to describe the related works and how the current work differs. Let us take this opportunity to give an explanation. The related papers consider the context of generative models and generative recommendation models. They, from a high-level perspective, can be seen as concatenations of dimensionality-reduction processes and diffusion models. Dimensionality reduction is done outside of the diffusion process. Our proposed method considers the context of multilayer matrix factorization (MMF). We embed dimensionality reduction inside the diffusion process. Each layer of the factorization model (or each layer of the neural network) is associated with one layer of the diffusion process. These two features are not seen in the prior literature in generative models, to the best of our knowledge, and are the unique parts of our development. In the context of MMF, the two features give us the opportunity to develop a per-layer light-weight scheme for MMF inference. While there are differences between the prior works and our current work, both fundamentally and in the topic of interest, we look back and agree that we should cover the prior papers in dimension-varying diffusion. **Regarding “Questions to Authors”** Q1: Absolutely, we can provide the code. You can find the code at the [anonymous GitHub repository](https://github.com/AnonymousPaper-Submission/ICML25-submission-codes-share). Q2: It could be difficult to give a reason when different methods use different models. 
MiSiCNet considers spatial correlations in its model, and this may give an edge to MiSiCNet in some instances. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions and sharing your code. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer TTJj for the constructive review and the feedback on our response.
Provable In-Context Vector Arithmetic via Retrieving Task Concepts
Accept (poster)
Summary: This work analyzes the optimization dynamics of transformer networks trained on in-context learning tasks via gradient descent on an L2-regularized cross-entropy loss. The analysis relies on a simple specific data-generating process which provides a way to formalize the notion of a task concept vector. The results show that the internal representation of a Transformer can recover the task concept vector. Experiments validating the theory are performed. Claims And Evidence: From what I understood of the paper, I believe the claims are supported. Methods And Evaluation Criteria: N/A Theoretical Claims: I read the theoretical statements contained in the main paper. I list some of my confusions, which may be due to the fact that I am not very familiar with learning theory and bounds holding with high probability, so maybe this is commonly used jargon. In Theorem 3.2, at first I was confused since the statement does not explicitly mention which optimizer is used on which loss, but this is mentioned earlier in the text (not in an assumption environment nor anything like that). I think it should be more explicit in the theorem statement. I was very confused by the use of the $O$ and $\Omega$ notation in Theorem 3.2 and Prop 3.3. It's unclear to me what is varying/increasing here. Is it 1/epsilon? K? t? The representation dimensionality? This should be more explicit. Also in Theorem 3.2, I was confused by a statement of the form "there exists t = O(g(epsilon, L, M, ...))". Are we saying that there exists a sequence of t's that belongs to O(...)? And again, what's varying? These points are examples of the general lack of clarity in this work. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: I am not well versed in this literature. Essential References Not Discussed: The authors might be interested in https://arxiv.org/abs/2410.23501. 
Other Strengths And Weaknesses: Strengths: - I think the theoretical questions tackled here are interesting. - The analysis seems non-trivial. That being said, I do not think I am qualified to judge its novelty and relevance. Weakness: - Clarity is a big issue (see comments about theory). I provide some examples below: - Eq (1) is weird… $a^f_\theta$ is a deterministic function of T? Why is there a distribution p(a | T)? Same question for b… Also, are we saying a + b = f here? If so it should be written explicitly. - Line 74 (left): What does the acronym MLM stand for? - w^i_k is introduced without explaining what i is. Are there multiple low-level binary concepts for a given high-level concept? Is this what i is supposed to mean? - Line 161 (right): Notation is unnecessarily heavy, see for example $a_{k_{l_y}}$ which has too many nested indices. This makes everything difficult to parse… - Line 156 (right): I don’t understand: an expression is given for the expected label of the query but that expression contains a random element $y_{k_T,J+1}$. How is that possible? Also, I computed the expected value of y_j with j<=J and I don’t get the same expectation as for j = J+1. That’s a bit weird, no? I was expecting the distribution of y given x to be the same across all examples j in the task T. Looking at (12) in the appendix didn’t help me understand. - Line 189: The number $M$ appears for the first time, without explanation. - Line 201: This illustrative example is unclear. - Line 181 (right): Something’s off with that equation. You are multiplying a vector with a vector, and it yields a vector. I think the softmax should be indexed by $\ell$. - I don’t understand how you obtain Eq. (2) - Eq. 3, what’s K’? Where does this "7" come from? What is the matrix U? It seems many important details are left in the appendix. The contribution should be understandable from the main text. Another example is the algorithm shown only in Appendix C. 
- As mentioned above, many important details necessary to understand the contribution seem to be hidden in the Appendix. - I am not sure the data-generating process is realistic or interesting. I understand that this kind of analysis is challenging without simplifying assumptions, but overall I believe the data-generating process could be motivated further. That being said, it is possible I did not understand it properly due to clarity issues. Other Comments Or Suggestions: - Questions For Authors: See clarity issues above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for acknowledging our theoretical contribution and recognizing our analysis as non-trivial. **Q**:Confusion over Theorem 1 - We'd make it explicit in the theorem statement by referring to Algorithm 1. - As we claim for any ε>0, the varying item here is ε, with other parameters satisfying Condition 3.1 and the conditions in our statement. The existence of $t=O(g(ε,...))$ means that for every (varying) ε, there exists a corresponding t upper bounded by $C\log(1/ε)·(\eta^{-1}q_{V}^{-1}σ_{1}^{2}d^{2}KM)$ for some constant $C$. **Q**:Room for presentation We highly appreciate the feedback: - Eq.(1): We’d note $f = a^f + b$ as the label vector representation. Per [1], one task $f$ corresponds to one task vector $a^f$ in latent space, yielding $f(x_{\text{query}})$ when added to $b(x_{\text{query}})$, an encoded residual stream independent of demo-pairs in prompt $T$. Model $\theta$ infers $a_θ^f$ from $T$, with $p_θ(a|T)$ reflecting the model's confidence in recognizing $a$ as the task vector. - MLM: MLM refers to the Masked Language Model loss. - $w^i_k$: It should be replaced by $w_k$; the $i$ is a typo. - Line 189-201: M represents the question length excluding the query **x**. Line 201 explains our modeling intuition—QA includes irrelevant tokens, task vector, word, label—similar to [2] and modeled after [3]. - Line 156-161 & Eq.(2): Every **x_l** - **y_l** pair in the prompt $T$ shares a co-task $k_{T}\in[K]$ but each pair can have its own low-level real label $y_{k_T,l}$ regarding $k_T$. The label vector of the prompt *only* depends on $k_T$ and the $y_{k_T,J+1}$ in query **x_{J+1}**. An illustrative example showing this intuition is $T$=[Japan,Tokyo,China,Beijing,France], **y_3**=Paris. The expected **y_3** only depends on the co-task “capital” and the semantics of France (not Japan or China). Under our modeling, **x_{J+1}** ≈$0.1a_{k_{T}}+y_{k_T,l}b_{k_{T}}$, and **y_{J+1}** ≈$a_{k_{T}}+y_{k_T, l}b_{k_{T}}$, modeled after Figure 1[1]. 
We'd avoid nested indices by letting $l_x=l_y=l$, $l\in[J]$. - Line 181: We’d replace $W_KT$ by $W_KT_l$. - Eq.(3): K’ is the number of irrelevant tokens in U, U is the dictionary matrix for cross-entropy training akin to [4] described in our 'Training Setups' in Section 2.2, and 7 denotes the K sets of {a±b, 0.1a±b, ±b, a} in eq.(14). While key details, such as Algorithm 1's procedure in Section 2.2, are included in the main text, we understand the importance of making our contributions more accessible—we'll add a notation table in a new appendix to summarize key definitions and intuitions for easier reference. **Q**:arXiv2410.23501 Thanks for sharing the paper—it aligns with [5-6], backing up our data modeling, and we’ll cite it accordingly. **Q**:Merit of the problem & Real-world impact We'd like to emphasize that our models, grounded in empirical observations, offer significant theoretical merit: - Modeled after [1] per Figure 1 as well as the concept latent geometry [5-6], our models are indeed sparse coding approaches suitable for capturing language polysemy [7-8] (see our 1st and 3rd responses to Reviewer mnid)—we successfully show the **OOD edge** of transformers over word2vec—addressing *Question 5.1.4* in [9] in theory. - Indeed, our empirically-grounded theoretical approach is common in theory (see our 1st response to Reviewer R7sM)—akin to [10-11] analyzing feature-noise vision data, which is justified by the properties of ResNet’s latent space [10]. While [11] states the harmful overfitting over *vision* data is due to noise memorization, our harmful overfitting over ICL data arises from falsely memorizing the co-occurrence of low-level features—akin to [12]'s result that feature co-occurrence biases gradients toward memorization. 
- We offer the **first** optimization theory for a realistic *softmax-layernorm-residual* transformer on empirically-motivated QA data trained by cross-entropy loss for factual recall ICL per [1], going beyond prior theory with idealized assumptions such as residual-free models [13] or QA-combined attention [14] with unrealistic loss functions (square or hinge loss) and oversimplified data. The high nonlinearity of our problem induces complex gradients, for which we introduce *six continuous flows* in Appendix D.1 to capture their properties in different phases, contributing to the theory community a method for treating complex optimization dynamics beyond [15]. **Summary**: We thank the reviewer for finding the theoretical questions tackled here interesting and our analysis non-trivial. We highly value the feedback, and should the reviewer have any further advice or wish to discuss any point further, we would be more than delighted to continue our productive exchange. Once again, we deeply appreciate the reviewer’s valuable time and comments. *Reference (arXiv identifier)* [1]2305.16130 [2]2412.06538 [3]2309.14316 [4]2305.16380 [5]2406.01506 [6]2403.03867 [7]1601.03764 [8]2105.15134 [9]2405.01964 [10]2012.09816 [11]2202.05928 [12]2410.09605 [13]2402.15607 [14]2402.01258 [15]2310.01975
Summary: The authors study task vectors in the context of single-token factual-recall ICL tasks. In this context, they show that training on QA data enables learning a task vector which can effectively solve ICL problems for Word-Label and QA tasks. They additionally analyze this phenomenon theoretically. Claims And Evidence: The theoretical claims are well-supported in the paper, albeit in a limited context and not for real-world LLMs, and the empirical claims have some support. Methods And Evaluation Criteria: The methods, e.g. evaluating the test loss under different training settings, make sense but are difficult to understand concretely. It is unclear if the QA and Word-Label tasks in this context are natural language, or a simplification. It appears to be the latter, but this makes the example provided in the QA Sentence Distribution paragraph a bit confusing. The paper would benefit from showing some examples of the exact task sequences in each case. Theoretical Claims: I checked the proofs of Linear Growth and Decelerating Growth and did not notice any glaring errors. Experimental Designs Or Analyses: The experiments seem sound, but limited in scope. Supplementary Material: I reviewed the additional Figures in the appendices, as well as the related work, the section describing the algorithm and prompt distributions and, as necessary, the lemmas. Relation To Broader Scientific Literature: The paper relates to prior work identifying task vectors in the context of ICL, showing that demonstrations allow LLMs to internally construct a vector that expresses the task concept, as well as theoretical study of ICL. 
Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: * The authors provide a controlled and precise setting to study a model of task vectors in ICL * The theoretical conclusions are well-supported and there is some additional empirical evidence provided Weaknesses: * The study of geometric relationships seems to disappear when discussing the empirical results with the trained models * Broader impact may be limited, as it is unclear how well this transfers to the motivating settings of task vectors in pretrained LLMs Other Comments Or Suggestions: The authors discuss the geometric relationship between words and labels, and similar discuss the relationship between task vectors and task-related words in the context of pretrained LLMs. I would like to see some analysis of whether the same relationships exist in their simplified settings. When training on the QA-ICL dataset, do you notice similar relationships emerge? Questions For Authors: Is there a way to vary and study the complexity of QA and Word-Label tasks? In the natural language settings you draw from for motivation, we have a clearer sense of complexity for different QA pairs and, similarly, a clearer sense of concept hierarchy. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for acknowledging our theory as empirically-supported and sound. We appreciate your professional review and address your concerns below. **Q**:Are QA and Word-Label tasks natural language or simplifications? The example in QA Sentence Distribution is confusing. The paper would benefit from showing some examples of the exact task sequences in each case. **A**:In our theoretical context, the QA and Word-Label ICL tasks are empirically-motivated abstracted simplifications. The example in lines 201-202 backs our modeling intuition—QA comprises irrelevant tokens, a task vector, a word, and a label—akin to [1]’s model (in its Figure 2) and inspired by [2]. Per your advice, we’d add illustrative examples to further clarify our ICL data's intuition: e.g., $T_1$=[Japan,Sakura,France,Rooster,China] with co-task “National Symbol” yields **y_3**=Panda, while $T_2$=[Japan,Sakura,France,Iris,China] with co-task “National Flower” yields **y_3**=Peony. These highlight the multi-task nature of our multi-concept modeling (e.g., “**x,y**=Japan,Sakura” fits ≥2 tasks), reflecting this sparse coding-type model suited for capturing *language polysemy*[3-4], more realistic than prior ICL theories over unrealistic data [5-6]. **Q**:The study of geometric relationships of the trained models seems to disappear **A**:We would like to remark: - **LLM Geometry (Figure 1)**: Task vectors align differently with words and labels in trained LLM prerequisite layers, forming our data modeling for optimization theory, grounded in LLM concept geometry [7-8]. - **Trained Model Geometry (Theorem 3.2 and the discussions below, Figures 2-4)**: the first layer’s output $h_{\theta,0}$, formed by $W_V,W_Q,W_K$ before adding residual vector, varies by training data. QA training aligns $h_{\theta,0}$ with the true task vector $a_{k^{\star}}$, ensuring correct prediction when added to any task-specific **x** via residuals. 
However, when trained via ICL-type data, the $W_V,W_Q,W_K$ would *non-negligibly* memorize some low-level features—akin to [9]’s results. That is, $h_{\theta,0}$ aligns with both $a_{k^{\star}}$ and low-level $±b_{k^{\star}}$ to some non-negligible extent, leading to constant test error w.h.p. Detailed descriptions would be added to the figures' captions. **Q**:Broader impact may be limited, as it is unclear how well this transfers to the motivating settings of task vectors in pretrained LLMs **A**:We would like to remark our impact on elucidating observed LLM phenomena: - We elucidate why gradient methods yield task vector arithmetic in LLMs [10], why QA boosts retrieval [2], and why transformers outshine word2vec—grounded in an empirically-motivated theoretical modeling akin to [11-12]’s analyses on feature-noise vision data (justified by ResNet’s latent space [12]), common due to the intractability of analyzing multi-layer dynamics. - Beyond this, our theory supports recent LLM task vector-arithmetic work (e.g., applications of task vectors in editing, unlearning, merging [13-14]), which assume vector arithmetic between pretrained and modified models *without explaining its origins in language models*. Though focused on single-token recall, our optimization theory, grounded in concept geometry, provides a foundational step. Section 6 outlines future work on complex mechanisms to further enhance this domain’s theoretical merit. **Q**:Is there a way to vary and study the complexity of QA and Word-Label tasks? In the natural language settings you draw from for motivation, we have a clearer sense of complexity for different QA pairs and, similarly, a clearer sense of concept hierarchy. **A**:In our context regarding retrieval over a hierarchical concept graph, one way to measure a task’s complexity is the difficulty a model faces in achieving high confidence for its argmax-sampled prediction. 
A simple metric could be $C(T)=1/\max_{y}p_θ(y|T)$, where $\max_{y}p_θ(y|T)$ is model θ’s top answer confidence, and a higher $C(T)$ denotes greater complexity. QA tasks tend to be simpler: a keyword (e.g., “capital” in “What is the capital of Japan?”) guides θ to the task in collaboration with the query word, more likely keeping $C(T)$ low. A Word-Label task, in contrast, lacking this cue, requires θ to infer the task behind the pair—given prompt $T_3$=[Japan,Sakura,China], the model might have non-trivial confidence over both Panda and Peony (e.g. the answers of $T_1$ and $T_2$ defined before). This stems from the polysemy-induced challenge due to the hierarchical concept knowledge encoded in the tokens. Should the reviewer wish to discuss any point further, we would be more than delighted to continue our productive exchange! Once again, we deeply appreciate the reviewer’s time and valuable comments! *Reference (arXiv identifier)* [1]2412.06538 [2]2309.14316 [3]1601.03764 [4]2105.15134 [5]2402.15607 [6]2402.01258 [7]2406.01506 [8]2403.03867 [9]2410.09605 [10]2305.16130 [11]2012.09816 [12]2202.05928 [13]2212.04089 [14]forum?id=vRvVVb0NAz --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I have increased my score to a 4 --- Reply to Comment 1.1.1: Comment: Dear Reviewer mnid, Thank you for raising your score to 4. We're glad that your concerns have been addressed. We deeply value your thoughtful engagement and keen insight into our approach, particularly your endorsement of its empirical relevance and soundness. We’ll ensure your suggestions strengthen our broader impact in the camera-ready version. Thank you again for your valuable time and consideration. Best regards, Authors of Submission 15363
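The complexity metric $C(T)=1/\max_{y}p_θ(y|T)$ sketched in this rebuttal can be illustrated with a minimal sketch (a hypothetical toy implementation with made-up confidence values, not the authors' code):

```python
import numpy as np

def complexity(prob_over_answers):
    # Toy version of C(T) = 1 / max_y p(y | T): higher values mean the model
    # is less confident in its argmax-sampled answer, i.e. a more complex task.
    p = np.asarray(prob_over_answers, dtype=float)
    p = p / p.sum()  # normalize to a valid distribution
    return 1.0 / p.max()

# A QA prompt with a disambiguating keyword: confidence concentrates on one answer.
qa_task = complexity([0.9, 0.05, 0.05])
# A Word-Label prompt where polysemy splits confidence between two plausible answers.
word_label_task = complexity([0.5, 0.45, 0.05])
# The Word-Label prompt scores higher (more complex) than the QA prompt.
print(qa_task, word_label_task)
```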
Summary: To study retrieval of task vectors in ICL, the authors perform a careful gradient descent analysis on residual self-attention modules (with nonlinearities and normalization) under a synthetic (but empirically-motivated) data distribution. They find that when pre-training on QA distribution (and testing on word-pair ICL distribution) the model achieves near zero test error w.h.p.; however, the model fails to accurately retrieve the task vector when pre-trained on the pair ICL distribution or a hybrid distribution. The transformer learns to extract the task vector from the context, then adds it to the final token embedding at the residual step, thus completing the ICL task in word2vec fashion. Accurate task vector retrieval explains the model's ability to perform ICL on low-level concepts unseen during training. The results are strongly dependent on the distribution of the prompt embeddings, which is assumed to have some particular structure (motivated by empirical observations) but the theory does not explain the process by which the prompt embeddings obtain this structure. Claims And Evidence: The claims are clear and the theoretical evidence appears convincing (I did not carefully check the proofs, though). The experiments could definitely be more convincing (see Experimental Designs section). I have some concerns about chicken-and-egg reasoning. In particular, the theoretical setup here assumes that the latent features that are inputs to the transformer layer *already have* the prerequisite structure which makes word2vec-style ICL possible. This structure then enables the model to solve ICL by simply extracting the task vector. 
However, it seems plausible to me that, instead, the model first learns orthogonal task and concept directions in the weight matrices, potentially by some other mechanism; these task and concept singular vectors might then be carried to previous layer weights via backpropagation, which pushes the latent representations towards having the observed structure. In other words, it's possible that W_Q, W_K, and W_V obtain the structure depicted in Fig 2 through a *different* mechanism, which *later* causes the embeddings to have the hierarchical structure observed by Park et al. If this were the case, it seems that it would invalidate the gradient descent analysis performed in this work. This possibility is not currently ruled out by the proposed theory. (Most likely, what is actually happening is something in between -- the embeddings and the attention weights gain structure in tandem, each reinforcing the other, throughout training. But I understand that such an analysis would be very difficult to do.) Methods And Evaluation Criteria: N/A Theoretical Claims: The proofs are rather cumbersome and outside my wheelhouse, so I did not check them carefully. Sorry! Experimental Designs Or Analyses: Are the prompt embeddings synthetic, or are they taken from the learned embeddings from open-source models? If it is the former, then it may be useful to perform an experiment with real data, to show more convincingly that the prompt embedding assumptions are satisfied in practice. An even more convincing experiment would be to pretrain a small language model from scratch, on a small dataset, and show that the input embeddings converge to the desired structure quickly, and that the attention weights then learn the structure indicated by the theorems. Such an empirical result would also address the chicken-and-egg issue raised earlier. Supplementary Material: I looked for the proof of eq. 11, which I found in Lemma G.1, but I didn't really understand the main proof idea. 
This seems to be an important finding of the paper, so I believe the main idea of the proof (or at least some intuitive explanation) should be provided in the main text. I looked at appendix B, but it is missing a lot of details about how the experiments are run. What is the model architecture? Are any layers frozen or taken from off-the-shelf models? What are the optimizer hyperparameters? See Experimental Designs section. Relation To Broader Scientific Literature: I don't know this area well enough to comment on this. Essential References Not Discussed: I don't know this area well enough to comment on this. Other Strengths And Weaknesses: This paper makes strong assumptions on the prompt embedding distribution; in return, the analysis is able to handle the full architectural complexities of transformers, including realistic prompt structure, softmax nonlinearities, layernorm, and cross-entropy loss. This seems to me to be a major strength of the paper (that the authors obtain analytical results in this complicated and highly nonlinear regime). Probably the greatest weakness is that the proofs are very difficult to follow. I understand that it may not be possible to simplify them. However, if that's the case, I think it will be very beneficial to provide a less rigorous derivation of the main results (for example, eq. 11, ineq. 9) in a simplified setting. This will likely help readers build intuition and make the key components of the result more transparent. I think the intuition behind the proofs should be provided in the main text as well, especially explaining eq. 11 in more detail. Why do the weight matrices learn singular vectors aligned with b in ICL and QA-ICL pretraining, but not in QA pretraining? Other Comments Or Suggestions: In Matplotlib, it's straightforward to use LaTeX formatting in the titles/axes/labels. It would greatly enhance the readability of the plots. 
Minor point: some of the y-axes are labelled "projection length", but lengths must be non-negative. Maybe simply "projection" is appropriate? In eq. 14, I believe there should be an ellipsis between \nu_1 and \nu_{K'}. Also, the meaning of K' is inferable from eq. 14, but I can't find it stated in the main text. Questions For Authors: I'd rate the current submission as a very weak reject -- the proof technique appears impressive, especially in its ability to handle realistic transformer architectures, but there are a few key weaknesses in the approach, as far as I understand. I would be happy to increase my score, pending some clarification or addendums. My biggest concern is the chicken-and-egg dilemma raised in the Claims And Evidence section. A compelling explanation that rules out the alternative hypothesis would convince me to increase my score. Ideally, I'd love to see an experiment that addresses this, but I understand that it's probably a lot of extra work. I would also like to better understand the main ideas behind the major results. If it's possible to add an appendix where versions of ineq. 9 and eq. 11 are derived in a simplified setting (a nonrigorous derivation is fine, in my opinion), I would increase my score. I would also appreciate it if the authors provide further explanation/intuition of this part of the result in the main text. The last minor point would be to improve the readability of the plots and provide more description of the plots in the figure captions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review! **Q**:Chicken-and-Egg Dilemma & real-world impact We'd like to emphasize that theories often rely on abstract, empirically-motivated models to enable tractable analysis and explore a model’s potential—an approach we adopt. However, we appreciate the opportunity to discuss how our theory connects to real-world multi-layer dynamics. 1. **Concept Geometry & Vector Retrieval**: Next-token prediction (NTP) shapes concept geometry in prerequisite layers[1]. [2] show that task vectors emerge in earlier layers during ICL, while arithmetic retrieval occurs in later layers. 2. **Theoretical Simplification & Justification**: The analysis of multi-layer dynamics is typically intractable. Akin to [3-4], which model simple feature-noise vision data justified by *ResNet’s deep latent space* properties[3], we simplify by directly modeling the resulting concept geometry from prerequisite layers. This is further supported by *layer convergence bias*—shallower layers typically converge faster[5]. 3. **Alignment to Real World**: We offer the first optimization theory for a realistic transformer on QA data for vector-retrieval ICL and show its *OOD edge* over word2vec, potentially addressing Question 5.1.4 in [6]. Yet, like [3-4], our outcome does not fully match real-world multi-layer dynamics. Empirical evidence suggests that shallow and deep layers most likely *co-evolve*: even in a toy setting for approximating polynomial functions, [7] shows that mechanisms across layers must evolve simultaneously to reinforce each other—layer-by-layer training induces unwanted errors. By studying this non-trivial learning problem in a **comparatively realistic** setting, we take an important step forward. 
A key future direction is to extend beyond [7] by analyzing a 2-3 block transformer on random spherical features, integrating NTP and QA training to model the *self-reinforcing* co-evolution of concept geometry and arithmetic retrieval based on empirical observations. **Q**:Experimental details Our experiments are conducted on synthetic data defined in Definitions C.1-C.3; our model is defined in Section 2.2 (no layers frozen); our hyperparameters are provided in Sections 5 & B; our algorithm procedure is in Section C.1. **Q**:Room for presentation We highly appreciate the feedback and would add: - **Ineq.(9)**: Simplified flows bounding projection updates are listed in Appendix D.1 (Lemmas D.1-D.6), abstracting complex updates into sequences with simple constants (e.g., $a,b,c,d$). The idea is that since analyzing the original complex gradient update in eq.(27) is too hard due to the nonlinearities of our problem, we instead split training into phases identifying key components that dictate update growth rates, avoiding tackling the original dynamics directly. For example, in the first phase, $||W_VS_n\pi||=\Theta(\sigma_1d)$ dominates, while $a_kW_Va_k$ starts small and the growth rate is then $a_{t+1}=a_t+b$ (eq.(60)), bounded by Lemma D.1’s continuous flow, yielding ineq.(9). In the next phase, once $a_kW_Va_k$ controls $||W_VS_n\pi||$’s order, the latter, as a gradient denominator, slows $a_kW_Va_k$’s update (Lemma 4.3, simplified in Lemma D.3). Unlike [8], our problem’s pronounced nonlinearities demand more complex treatment. Per your advice, we’d add intuitive links between simplified flows and actual updates in both main text and appendices. - **Eq.(11)**: To explain our *harmful overfitting* in eq.(11), we contrast it with [6], where vision-inspired data (features + Gaussian noise) leads to noise memorization. There, high noise-to-feature ratios make the inner product of weights with noise significant in the gradient, causing harmful overfitting [6]. 
Our case involves memorizing low-level features: in ICL prompts, some demo pair's low-level features w.h.p. co-occur with the query word's ones due to imbalanced frequencies in **finite** training sets, which produces small but non-negligible products in the projection's updates, driving harmful memorization. Akin to [9]'s result—feature co-occurrence biases gradients toward memorization—in our case ICL pairs introduce unexpected co-occurrence. QA data, lacking low-level features before query words, does not memorize this co-occurrence and focuses on the task vector. [10]’s "relation" token mirrors the task vector, though their artificial model (their eq.(13)) and data lack realism. Empirical backing includes [11], showing QA aids (multi-token) recall. - **Plots & K’**: We’ll update y-axes to "projection", use LaTeX formatting, and enhance captions. We’d fix eq.(14)’s ellipsis and explain $K'$ (number of task-irrelevant tokens) around eq.(13) with intuition in the main text. Thanks again for your feedback! We welcome further discussion and appreciate your time! *Reference (arXiv identifier)* [1]2403.03867 [2]2305.16130 [3]2012.09816 [4]2202.05928 [5]iclr.cc/virtual/2023/poster/11533 [6]2405.01964 [7]2001.04413 [8]2310.01975 [9]2410.09605 [10]2412.06538 [11]2309.14316 --- Rebuttal Comment 1.1: Comment: I have increased my score to 4, contingent on the proposed changes being made to the camera-ready version. --- Reply to Comment 1.1.1: Comment: Dear Reviewer R7sM, Thank you for raising your score to a 4 — your recognition is truly encouraging. We greatly appreciate your thoughtful engagement throughout the review process. In particular, your clear understanding of our approach and your acknowledgment of our non-trivial theoretical contributions mean a great deal to us. Your insightful suggestions will help improve our broader impact. We deeply value your confidence in our work and will ensure the camera-ready version reflects the proposed changes. 
Thank you again for your time and trust! Best regards, Authors of Submission 15363
Accept (poster)
Summary: This paper highlights issues in discriminative CTR-based recommendation models, such as information redundancy and information collapse. To address these challenges, it proposes a feature generation framework that reformulates CTR prediction as a generative problem using a customized decoder network. The decoder network predicts all feature embeddings based on the input. Claims And Evidence: In Table 1, the authors demonstrate the consistent improvement of using the generative paradigm over the discriminative one, highlighting an increase in AUC and a reduction in log loss. Methods And Evaluation Criteria: The authors use two common evaluation metrics—Area Under the Curve (AUC) and log loss error reduction—to assess the model's quality. They also discuss the trade-off between quality and performance, noting the increase in computation time and memory usage. Theoretical Claims: Given that the primary objective of this work is to reduce information redundancy and dimensional collapse, the theoretical justification is not adequately provided. Although the authors claim that both issues are mitigated empirically in lines 198-201, it remains unclear why passing raw ID embeddings through an MLP would effectively reduce information redundancy from a theoretical standpoint. Experimental Designs Or Analyses: In Table 1, can the authors shed some light on the hyperparameter setting for the decoder and MLP? Supplementary Material: I have skimmed through it. Relation To Broader Scientific Literature: Although the authors focus on a specific problem within the recommendation models domain, the approach proposed in the paper can be applied to other domains to reduce information redundancy and dimensional collapse. Essential References Not Discussed: The key contribution is a paradigm shift from discriminative to generative models aimed at reducing information redundancy and mitigating dimensionality reduction. 
One important reference could be the inherently generative recommendation models with semantic IDs (Rajput et al., 2023); the results should be compared with this reference. Other Strengths And Weaknesses: 1. The paper is written with a clear flow. 2. The problem that the paper targets builds a bridge between the well-established discriminative recommendation models and the state-of-the-art generative models. 3. There is a lack of sufficient theoretical proof. 4. Some results seem slightly counterintuitive. Please refer to Q1. Other Comments Or Suggestions: no comments Questions For Authors: Q1. Can the authors elaborate on this observation? "On the other hand, increasing the complexity of the decoder with (b.3) significantly degrades recommendation performance, as the AUC decreases from 0.793512 to 0.792931. This may be caused by overfitting". Intuitively, a more complex decoder could capture more intricate relationships. What do the authors mean by overfitting? Q2. Regarding result 5, do the authors have any theoretical or intuitive proof, aside from empirical evidence, to justify it? Q3. Compared to inherently generative models in terms of AUC, log loss reduction is not included. If the generative recommendation model is indeed of higher quality, then this should be mentioned. Q4. Can this approach lead to a paradigm shift for specific sequential recommendation models like Taobao Alibaba? Does the performance overhead, in this case, justify deploying this approach? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your kind remarks. We hope the following responses can address your remaining concerns. ## Concerns about Theoretical Analysis > Response to "Theoretical Claims", and Point 3 of "Other Strengths And Weaknesses" Following DirectCLR[1], we have explored theoretical justification for our method's ability to mitigate collapse. Under gradient flow analysis (gradient descent with infinitesimal learning rate), the embedding update process follows a differential equation $\dot{v}_1 = -\nabla_{v_1} L$, where $v_1$ is a feature embedding and $L$ the loss. We aim to solve this equation and prove that its solution becomes rank-deficient during training. **However, even for an FM with two features, the solution involves a complex Lambert W function[2]. We are still working on analyzing its rank properties.** [1] Understanding Dimensional Collapse in Contrastive Self-supervised Learning. ICLR 2022 [2] On the Lambert W function. Advances in Computational Mathematics, 1996. ## Questions about the Hyperparameter Setting > Response to "Experimental Designs Or Analyses" Our decoder is a one-layer MLP with fixed input and output dimensions. **The only tunable hyper-parameter is the nonlinear activation.** We evaluated different non-linear activations and found they are crucial for collapse mitigation (see Fig. 7 in Appendix D). Based on our experiments, we recommend ReLU or SiLU for the decoder. ## Discussion of Generative Sequential Recommendation with Semantic IDs > Response to "Essential References Not Discussed", and Q3 of "Questions For Authors". Thank you for your question. The suggested empirical comparison with them is difficult if not impossible since **Tiger is mainly designed for the sequential recommendation scenario while we focus on the feature interaction scenario**. ## Concerns about the Ablation Study > Response to Point 4 of "Other Strengths And Weaknesses", and Q1 of "Questions For Authors". 
Thank you for your insightful question. Regarding the overfitting issue, we acknowledge that our initial observation was speculative. To further investigate, we conducted a detailed spectral analysis comparing 1-layer and 2-layer MLPs, as shown in https://anonymous.4open.science/r/ICML2025-1748/supp/2layer.png. The figure shows that both 1- and 2-layer MLPs effectively alleviate the singular value decay (dimensional collapse) observed in discriminative paradigms. **However, singular values of the 2-layer MLP decline at a higher rate than the 1-layer MLP, suggesting that the extra layer leads to a more imbalanced embedding space.** This aligns with the recommendation performance. ## Theoretical or Intuitive Proof for Result 5 > Response to Q2 of "Questions For Authors". Thanks for the question. We only have empirical evidence to justify result 5. We'll revise Result 5 as follows: "Result 5. The field-wise non-linear one-layer MLP is a simple yet effective decoder. **Common** modifications, such as simplifying the model with field-shared MLPs or removing non-linearities, or increasing complexity through stacking MLP layers or self-attention, lead to inferior recommendation performance." We apologize for our imprecise statement and thank you for pointing this out, which helps us make the statement more rigorous. ## Question on Paradigm Shift for Sequential Recommendation > Response to Q4 of "Questions For Authors". **Many existing sequential recommenders already employ a generative paradigm under the next-item prediction framework**. Specifically, pre-ranking models such as SASRec follow a self-supervised next-item generative paradigm, while ranking models such as DIN and DIEN follow a supervised next-item generative paradigm. The overhead of generative sequential models such as SASRec and DIN is low for short sequences, leading to their wide deployment in industrial systems. 
The computation cost becomes a challenge for long sequences, but there are many works (such as SIM, TWIN) to employ a two-stage Search & Modeling approach to resolve it.
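As a reading aid for the gradient-flow analysis mentioned in the rebuttal above, the two-feature FM setup can be written out explicitly. This is a hedged sketch under an assumed inner-product score and squared loss (the rebuttal does not state the loss); the Lambert-W solution itself is not reproduced here:

```latex
% Assumed two-feature FM score and squared loss
\hat{y} = \langle v_1, v_2 \rangle, \qquad
L = \tfrac{1}{2}\,(\hat{y} - y)^2

% Gradient flow (gradient descent with infinitesimal learning rate)
\dot{v}_1 = -\nabla_{v_1} L = -(\hat{y} - y)\, v_2, \qquad
\dot{v}_2 = -\nabla_{v_2} L = -(\hat{y} - y)\, v_1
```

The coupled dependence of each embedding's velocity on the other embedding is what makes the closed-form solution, and hence its rank analysis, nontrivial.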
Summary: The paper “From Feature Interaction to Feature Generation: A Generative Paradigm of CTR Prediction Models” proposes a novel Supervised Feature Generation framework for Click-Through Rate (CTR) prediction models. The main algorithmic idea is to shift from the discriminative “feature interaction” paradigm to a generative “feature generation” paradigm. Instead of relying on raw ID embedding interactions, the framework predicts each feature embedding based on the concatenation of all feature embeddings. The main findings indicate that the existing discriminative paradigm has limitations such as embedding dimensional collapse and information redundancy. The proposed generative paradigm mitigates these issues. Experimental results show that the framework can reformulate nearly every existing CTR model and brings significant performance improvements. Across different models, it achieves an average of 0.272% AUC lift and 0.435% Logloss reduction. It also reduces the embedding dimensional collapse and information redundancy, and has been successfully deployed in a large-scale advertising platform, leading to a 2.68% GMV lift in a primary scenario. Claims And Evidence: The claims are, in general, backed by various experiments, including the claims on embedding dimension collapse and redundancy reduction. Methods And Evaluation Criteria: The evaluation makes sense in general, where the mainstream recommendation algorithms are compared as baselines. Some details of the datasets are missing, and I hope the authors can add them. Theoretical Claims: The paper does not contain theoretical claims. Experimental Designs Or Analyses: The experiment design is sound, and I think the experiments back the main proposed claims well. Supplementary Material: I went through the appendix on the experiments' details and extra results. 
Relation To Broader Scientific Literature: The paper is broadly related to generative recommendation, which is the application of generative models for recommendation systems. Essential References Not Discussed: The paper does not have missed references as far as I know. Other Strengths And Weaknesses: Advantage: - The paper introduces an important research topic, especially for industry recommendations. The paper clearly states the disadvantage of discriminative models. - Experiments back up the claims that generative recommendations have an advantage over discriminative models. Moreover, the paper gives A/B test results. Though some details are missing (probably because of privacy or the double-blind policy), it makes the method much more convincing. Disadvantage: - The paper does not fully give every detail of methods and datasets, like the meaning of some notations in equation 2 and the dataset details in appendix B.1, including the dataset introduction, the number of users, items, etc. (If I missed the notations, please remind me). Particularly, without the explanation of important notations, it may get harder to understand the equation. Other Comments Or Suggestions: This paper could be valuable for industry recommendations. I think that many engineers and researchers in the industry may also be exploring this direction, and the results of this paper could be quite inspiring for industry recommendations. Questions For Authors: - In Table 3 in the appendix, what is the meaning of the numbers in the dataset? Does it mean the number of user-item interactions? How is the scale of users or items? - In equation 2, what is the meaning of $l$ and $L$? What is the meaning of $F(i), F(j)$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are truly thankful for your review efforts. We apologize for the missing information on datasets and notations, and would clarify them as follows. ## Dataset Details > Response to "Methods And Evaluation Criteria", "Other Strengths And Weaknesses", and Q1 of "Questions For Authors" Thank you for your question. The numbers in the table are the number of samples, i.e., user-item interactions. As for the user or item scales, the user and item features are not annotated in the original dataset. But we can make the following guess: - [Criteo](https://github.com/reczoo/Datasets/tree/main/Criteo/Criteo_x1) has 13 numerical feature fields and 26 categorical feature fields. All 26 categorical features have been anonymized, but we can guess based on this assumption: user or item ID features usually have the highest cardinalities. Feature fields with the top-5 cardinalities are (C3: 413,424), (C12: 409,749), (C21: 397,981), (C16: 365,811), (C4: 248,543), where the first denotes the feature name and the second denotes the feature cardinality. These features have a relatively high probability of being user or item ID features. **Therefore, the scale of users and items could be on the order of ~100K**. - [Avazu](https://github.com/reczoo/Datasets/tree/main/Avazu/Avazu_x4) has 24 categorical features after preprocessing, part of which have been anonymized. Feature fields with the top-5 cardinalities are (device_ip: 2,903,322), (device_id: 820,509), (device_model: 7,259), (app_id: 6,545), (site_domain: 5,461). We infer that 'device' features represent users' devices, which are highly correlated with the number of users. **Therefore, the scale of users could be on the order of ~1M**. The remaining named features are not likely to be item features, so the item feature may be one of the anonymized features: (C14: 2,556), (C17: 434), (C20: 173). **Therefore, the scale of items could be on the order of ~1K**. 
Besides, we provide more statistics about our industrial dataset: **Our industrial model is trained on billions of samples daily, with hundreds of millions of unique users and around 1 million items**. ## Notations in Eq. 2 > Response to "Other Strengths And Weaknesses", and Q2 of "Questions For Authors" Thank you for your question, and we apologize for missing explanations of Eq. 2. We clarify Eq. 2 as follows: - Eq. 2 is the formulation of DCNv2, which is a high-order feature interaction model. $L$ denotes the number of cross layers, $l$ denotes the layer index, $N$ denotes the total number of features, $i$ and $j$ denote the indices of features, $\mathbf{v}_i^{(0)}$ denotes the embedding of feature $i$ in the embedding layer, $\mathbf{v}_j^{(l)}$ denotes the embedding of the $j$-th term in the $l$-th layer. - $M_{F(i) \to F(j)}^{(l)}$ denotes the projection matrix between the $F(i)$ and $F(j)$ field pair in the $l$-th layer. - $F(i)$ and $F(j)$ denote the fields of features $i$ and $j$, respectively. In addition, we will further review the entire manuscript to revise any potentially unclear sections and enhance its overall clarity. --- Rebuttal Comment 1.1: Comment: Thanks for the explanations, which addressed most of my concerns. I would raise the score to 3. --- Reply to Comment 1.1.1: Comment: We extend our sincere gratitude to Reviewer z1Cm for conducting a thoughtful reassessment of our work and for elevating the evaluation score. Your valuable feedback has prompted us to provide supplementary details regarding crucial experimental aspects and clarify key manuscript notations. We are deeply encouraged by the successful resolution of the issues you raised.
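As a reading aid for the Eq. 2 notation clarified in the rebuttal above, one common field-wise form of the DCNv2 cross layer consistent with that notation is the following sketch (the paper's exact Eq. 2 may differ):

```latex
\mathbf{v}_i^{(l+1)} \;=\; \mathbf{v}_i^{(0)} \odot
\Big( \sum_{j=1}^{N} M_{F(i) \to F(j)}^{(l)}\, \mathbf{v}_j^{(l)} \Big)
\;+\; \mathbf{v}_i^{(l)}, \qquad l = 0, \dots, L-1
```

where $\odot$ denotes the Hadamard product, so each feature's embedding at layer $l+1$ mixes its raw embedding with field-pair-projected embeddings of all features at layer $l$.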
Summary: This paper introduces a feature generation framework that reformulates conventional CTR models through a generative paradigm, effectively addressing dimensional collapse and information redundancy issues in feature embeddings. The claims are substantiated by rigorous empirical evidence spanning widely adopted benchmark datasets. The proposed methodology is thoroughly evaluated under well-designed experimental settings, demonstrating consistent superiority over baseline approaches. The core implementation is publicly accessible in the supplementary materials to ensure reproducibility. While a related work exploring embedding collapse phenomena is cited, its methodological distinctions from the current approach warrant deeper analysis. Overall, this paper studies a foundational problem in CTR prediction by innovatively bridging generative paradigms with feature interaction models, with well-designed experiments and insightful conclusions to support the proposed method. I am inclined to recommend acceptance. Claims And Evidence: This paper claims to reformulate existing feature-interaction models into a novel feature generation paradigm. This claim is substantiated through comprehensive comparative experiments and ablation studies. It claims to mitigate the inherent drawbacks of conventional ID embeddings in traditional feature interaction models, i.e., dimensional collapse and information redundancy. This claim is validated via two inspiring and sound experiments. Methods And Evaluation Criteria: The proposed method establishes a feature generation paradigm designed to address feature embedding challenges. The framework's effectiveness is validated through empirical evaluation, employing well-established baseline models and standardized datasets consistent with common protocols. Theoretical Claims: This paper does not make theoretical claims. 
Experimental Designs Or Analyses: I have verified the experimental validity, including the main comparison, embedding analysis, and the ablation studies. Strengths: - The proposed paradigm delivers substantial performance improvements across diverse existing CTR models, with successful deployment in production-scale advertising systems. - The analysis experiments are well-designed, validating that the proposed method can address the claimed dimensional collapse and information redundancy issues. The correlation shift trend from weak to strong models is insightful. - The authors conduct a systematic investigation of paradigm components through well-structured ablation studies. Weaknesses: - The analysis experiments employ batch-wise processing. I'm concerned the results may be inconsistent on the full validation dataset. Supplementary Material: Anonymous code is provided at: https://anonymous.4open.science/r/ICML2025-1748/ Relation To Broader Scientific Literature: The dimensional collapse issue in feature embeddings has been investigated in recent literature[1], where this challenge was effectively addressed via a multi-embedding method. [1] On the Embedding Collapse when Scaling up Recommendation Models. International Conference on Machine Learning. PMLR, 2024. Essential References Not Discussed: Although mentioned, the multi-embedding[1] method is not fully discussed in the paper, which is also specially designed for embedding dimensional collapse mitigation. [1] On the Embedding Collapse when Scaling up Recommendation Models. International Conference on Machine Learning. PMLR, 2024. Other Strengths And Weaknesses: Overall, this is an interesting paper that studies a fundamental problem in a real-world application. Please see the "Experimental Designs Or Analyses" section. 
Other Comments Or Suggestions: Typos - "hadamard product" should be "Hadamard product" - Line 113, "has focus" -> "has focused" - An extra annotated "(1)" in Line 269 Questions For Authors: Will the analysis results be inconsistent when adopted on the entire validation dataset? What is the relationship between this work and the multi-embedding[1] method in embedding collapse mitigation? [1] On the Embedding Collapse when Scaling up Recommendation Models. International Conference on Machine Learning. PMLR, 2024. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable comments. We hope the following responses can address your concerns. ## Concerns about Batch-wise Analysis > Response to Weaknesses of "Experimental Designs Or Analyses" & Q1 of "Questions For Authors". Thank you for your suggestion. The suggested analysis of the entire validation dataset is time-consuming, so we have adopted this batch-wise setting. To ensure the experiment's robustness, we have **repeated the analysis experiments with 6 different random seeds** when sampling the batches, with results in https://anonymous.4open.science/r/ICML2025-1748/supp/seed.jpg. **The trend of embedding spectra is consistent in all batches**: On both Avazu and Criteo, the spectrum curves of discriminative paradigms exhibit an abrupt singular value decay from ~$1\times 10^{-5}$ to ~$1\times 10^{-15}$, a reduction of $10^{10}$ times. This indicates a severe dimensional collapse issue. But in our generative paradigm, the abrupt singular value decay has been greatly alleviated. This verifies that the generative paradigm substantially mitigates the embedding dimensional collapse issue, forming a more balanced embedding space. ## Relationship to Multi-Embedding > Response to "Essential References Not Discussed" & Q2 of "Questions For Authors". Thanks for your question. **The proposed paradigm is orthogonal to the multi-embedding method, and these two methods can be seamlessly combined.** We have conducted experiments based on one of the most representative models, DCNv2, on the Avazu dataset, with results presented as follows: - Recommendation performance (AUC) comparison. 
| Model\Embedding size | 16 | 16 $\times$ 2 | 16 $\times$ 4 | 16 $\times$ 8 | 16 $\times$ 10 | |----------------------|-------------|---------------|---------------|---------------|----------------| | DCNv2 - DIS | 0.79282 | 0.79402 | 0.79434 | 0.79539 | 0.79577 | | DCNv2 - GEN | **0.79342** | **0.79469** | **0.79534** | **0.79599** | **0.79617** | We can observe that, the generative DCNv2 with multi-embedding outperforms discriminative DCNv2 with multi-embedding. - Embedding spectrum analysis. We have illustrated the spectrum of DCNv2 (DIS), DCNv2 (GEN), and DCNv2(GEN + MultiEmbedding) in https://anonymous.4open.science/r/ICML2025-1748/supp/collapse.png. The results show that: - **Embedding collapse of DCNv2 (DIS)**: The spectrum curve of DCNv2 (DIS) exhibits a dramatic decay after the singular value index 250, which indicates a collapsed embedding space. - **Embedding robustness of DCNv2 (GEN)**: Different from DCNv2 (DIS), the spectrum curve of DCNv2 (GEN) does not exhibit the abrupt decay. Instead, they decline slowly, indicating a more balanced embedding space. - **Multi-embedding alleviates collapse**: DCNv2(GEN + MultiEmbedding) can lead to a slightly slower decline rate than DCNv2 (GEN), which indicates a more robust embedding space. We compute the Information Abundance[1] (IA) values of these 3 variants respectively: 9.0082, 13.5031, 13.6999. A higher IA value indicates a less-collapsed embedding space. The IA results are consistent with the embedding spectrum figure. **These spectra and IA results both demonstrate that our method and multi-embedding can be effectively combined to achieve both better performance and less collapse.** [1] Guo, Xingzhuo, et al. On the Embedding Collapse when Scaling up Recommendation Models. ICML, 2024. ## Typos > Response to "Other Comments Or Suggestions" Thank you for your corrections and we will modify the corresponding symbols accordingly. 
In addition, we will further review the entire manuscript to revise any potentially unclear sections and enhance the overall clarity of our work. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses, which have effectively resolved my initial concerns: - The experimental validation demonstrates consistent performance patterns across multiple data batches, reinforcing the methodological robustness. - By building upon established multi-embedding frameworks, the proposed approach achieves significant performance gains. The extra spectral analysis further suggests the potential of combining these two methods for mitigating the embedding dimensional collapse issue. I respect the authors for their rigorous experiments and recommend incorporating these findings into the final manuscript to strengthen its arguments. Having carefully reviewed the other comments, I maintain my positive assessment. This work makes a valuable contribution by shifting CTR models from discriminative to generative paradigms, effectively mitigating persistent challenges like embedding collapse and information redundancy. The comprehensive analysis experiments offer insights that could guide the design of future CTR models. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate Reviewer SGPG for re-evaluating our paper and raising the score. We have carefully incorporated your valuable suggestions, especially regarding the analysis experiment robustness and multi-embedding discussion, and have made thorough efforts to address all concerns through additional experiments and detailed explanations. Your insightful feedback has significantly enhanced our work, and we are grateful that we were able to address your concerns satisfactorily.
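As an illustration of the singular-spectrum and Information Abundance analyses discussed in the rebuttals above, here is a minimal NumPy sketch (not the authors' code; the IA definition follows Guo et al., 2024, as the ratio of the singular-value sum to the largest singular value):

```python
import numpy as np

def embedding_spectrum(E):
    """Singular values of an embedding table E, sorted in descending order."""
    return np.linalg.svd(E, compute_uv=False)

def information_abundance(E):
    """IA(E) = ||sigma||_1 / ||sigma||_inf; higher means a more balanced spectrum."""
    s = embedding_spectrum(E)
    return s.sum() / s.max()

rng = np.random.default_rng(0)
balanced = rng.normal(size=(1000, 16))                    # roughly isotropic embedding table
collapsed = balanced @ np.diag([1.0] * 4 + [1e-6] * 12)   # energy squeezed into 4 dimensions

# A collapsed table concentrates its spectrum into a few large singular values,
# which lowers its Information Abundance.
assert information_abundance(balanced) > information_abundance(collapsed)
```

A decaying spectrum curve (many singular values orders of magnitude below the top one) and a low IA value are two views of the same collapse phenomenon analyzed in the rebuttal.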
Summary: This paper proposes a Supervised Feature Generation (SFG) framework that reformulates the conventional discriminative CTR prediction paradigm into a generative paradigm. Rather than modeling direct interactions among raw ID embeddings, the proposed method generates each feature embedding based on the concatenation of all other feature embeddings. The goal is to address two issues in CTR modeling: embedding dimensional collapse and information redundancy. The framework is designed to be model-agnostic and can be applied to many standard CTR models such as FM, DeepFM, CrossNet, and DCN V2. Experiments on public benchmarks and an online A/B test on a large-scale advertising platform show small improvements in AUC and Logloss, as well as practical gains in industrial deployment scenarios. Claims And Evidence: The paper makes several claims, including: - That the proposed generative paradigm leads to more semantically meaningful feature embeddings. While the framework is novel in formulation, the motivation behind the paradigm shift is weakly supported. The core claim that “we shift from raw ID embedding interactions to semantically meaningful feature generation” is misleading. In practice, most modern CTR models do not rely solely on raw ID embedding interactions; they use cross networks or deep modules to overcome known limitations. Therefore, the paper constructs a false dichotomy by framing the entire feature interaction paradigm as flawed, when in fact the issue lies with simplistic interaction mechanisms. - That it provides a general-purpose enhancement applicable to a wide range of CTR models. Furthermore, although performance improvements are reported (e.g., ~0.272% AUC lift), they are relatively modest considering the additional model complexity. 
The method increases computation time by 3.14% and GPU memory by 1.45%, raising questions about the cost-effectiveness of the generative design, especially for production systems where latency and efficiency are critical. Methods And Evaluation Criteria: The paper does not make it entirely clear why the embedding reconstruction process should yield better representations in all settings. The notion of “generating semantically meaningful features” remains vague, and the benefit of reconstruction versus learned interaction is not theoretically or empirically justified beyond intuitive arguments. Theoretical Claims: There is no formal theoretical contribution. Experimental Designs Or Analyses: The experiments are fine. Supplementary Material: I checked the contents. Relation To Broader Scientific Literature: The paper relates to feature embedding learning, representation redundancy, and autoencoding principles. However, it does not clearly position itself against existing embedding refinement, denoising, or feature interaction methods based on field graphs (there are many), which may share similar goals with more principled frameworks. Essential References Not Discussed: Please check recent embedding refinement, denoising, or feature interaction methods based on field graphs (there are many) for recommendation systems. Other Strengths And Weaknesses: Pros: - Conceptually novel formulation with a modular implementation. - Broad compatibility with many CTR models. - Real-world deployment. Cons: - Why a generative paradigm is needed is still unclear. Overstated framing of the paradigm shift—feature interaction is mischaracterized. - Modest offline improvements raise concerns about cost-effectiveness. The computation cost is only briefly mentioned. - Vague definition of “semantic generation”; unclear benefit of reconstructing embeddings over learning them directly. Other Comments Or Suggestions: "Inherent drawbacks of raw ID embeddings. 
Embeddings of low-cardinality feature fields only span a low-dimensional embedding space, intrinsically leading to a bottleneck for representing abundant information. Moreover, according to the interaction collapse theory (Guo et al., 2024), direct interactions between raw ID embeddings can lead to severe dimensional collapse issue (Jing et al., 2021). Consequently, even embeddings of high-cardinality feature fields will be constrained to a low-dimensional subspace of the available embedding space, thereby limiting their information abundance." This paragraph needs to be explained or corrected. Questions For Authors: Please check my comments above. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We have addressed the comments in the rebuttal below. ## Clarification on the Motivation. > Response to Point 1 of "Claims And Evidence" & "Methods And Evaluation Criteria". We'll elaborate more on our claim. Our work is mainly inspired by the Interaction-Collapse Theory, that is, **direct interactions between ID embeddings in existing CTR models with cross networks lead to dimensional collapse**. We resolve this challenge by shifting from the discriminative paradigm that involves direct interactions between raw ID embeddings to a generative paradigm **that constructs new embeddings by a decoder network and interacts the constructed embeddings with the raw ID embeddings**. We don't aim to claim the entire feature interaction paradigm as flawed, and **we apologize if we made such an unintentional and misleading claim**. As researchers in this area, we appreciate the promising progress in the last decade. In particular, we'd like to recognize the **positive impact of cross network and DNN** as follows, and will add them in the revised version. The cross network, especially CrossNet in DCN V2, has been proven effective in CTR prediction. A recent work [1] validates that its cross function mitigates dimensional collapse to some extent via a field-pair-wise transformation matrix. However, we find that our generative variants can further improve the embeddings' dimensional robustness (Fig 3.c). Regarding deep networks, **DNNs can also mitigate dimensional collapse compared to direct cross networks**. In fact, **we have a submitted paper studying DNNs in feature interaction models from this perspective**, showing that **non-linear activations greatly improve dimensional robustness**. We will add this discussion to the revised version. [1] Towards Unifying Feature Interaction Models for Click-Through Rate Prediction. 2024. ## Concerns about the Cost-effectiveness. > Response to Point 2 of "Claims And Evidence". 
We thank the reviewer for pointing out the cost-effectiveness trade-off. In industry CTR prediction, even a 0.1% AUC lift is considered significant [1]. Our analysis shows **the ROI (GMV lift/computation cost) for several scenarios far exceeds our release threshold** (typically in the dozens; we omit the exact value since it's a commercial secret). Since February, **three generative models have passed performance/cost reviews and been fully deployed**. [1] FuxiCTR: An Open Benchmark for Click-Through Rate Prediction. 2020. ## Discussion on Essential References. > Response to "Relation To Broader Scientific Literature" & "Essential References Not Discussed". Thanks for your valuable suggestion. We will specifically discuss these methods in the Related Works section of the final paper. We list the main differences as follows due to space limits: Our paradigm differs from these works in the sense that **we aim to tackle the dimensional collapse issue due to the direct interaction of ID embeddings**. We argue that the above-mentioned related works can't achieve this by refinement, denoising or adopting a GNN architecture. We empirically compared our paradigm with several representative feature refinement models, with results as follows. We observed that some models outperform the discriminative DCN V2 models, but still underperform our generative model. | Model | | Criteo | Avazu | |:---:|:---:|:---:|:---:| | FiGNN[3] | - | 0.81352 | 0.79156 | | DCNv2 | DIS | 0.81387 | 0.79282 | | GFRL[1] | - | 0.81427 | 0.79296 | | FRNet[2] | - | 0.81431 | 0.79313 | | DCNv2 | GEN | 0.81472 | 0.79342 | We also studied the singular spectrum and found that **they can mitigate the dimensional collapse on the tail singular values** compared to the vanilla discriminative DCN V2. However, **our generative model leads to more robust values on all dimensions**. Refer to the spectrum analysis in https://anonymous.4open.science/r/ICML2025-1748/supp/refinement.png. 
[1] MCRF: Enhancing CTR Prediction Models via Multi-channel Feature Refinement Framework. [2] Enhancing CTR prediction with context-aware feature representation learning. [3] Fi-gnn: Modeling feature interactions via graph neural networks for ctr prediction. ## Clarification of the Discussion Paragraph. > Response to "Other Comments Or Suggestions:" We will elaborate on this paragraph and revise the manuscript accordingly with the following: **Dimensional Collapse of Raw ID Embedding Interaction**. The embeddings of some fields may only span a low-dimensional space due to various reasons, such as the low cardinality of this field. For example, the embeddings of the gender field with values of Male, Female, and Unknown can span at most a 3-dimensional space. According to the Interaction-Collapse-Theory, **the interactions with these low-dimensional field embeddings may lead to the dimensional collapse of the embeddings of the other fields**.
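Not the authors' code, but a toy NumPy illustration of the low-cardinality bottleneck described in the paragraph above: a field with only 3 values (e.g., gender) has an embedding table with only 3 rows, so its embeddings span at most a 3-dimensional subspace regardless of the embedding dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
gender_table = rng.normal(size=(3, 16))  # 3 feature values, 16-dim embeddings

# A matrix with 3 rows has rank at most 3, so the 16-dim embedding space
# is mostly unused by this field, which is the bottleneck the rebuttal describes.
assert np.linalg.matrix_rank(gender_table) <= 3
```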
RobustZero: Enhancing MuZero Reinforcement Learning Robustness to State Perturbations
Accept (poster)
Summary: The paper adapts the MuZero algorithm (RobustZero) with a state-robustness loss and two adaptive hyper-parameter adjustment methods. The method is designed to deal with both worst-case and random perturbations of the state, if those are available during training. The authors evaluate their approach against the S-MuZero baseline and the ATLA-PPO and PROTECTED algorithms on 4 reasonable benchmark tasks. While RobustZero does not perform in all tasks significantly better than the baselines, it shows significant improvement in the IEEE tasks, in particular under "random perturbation", and generally in the perturbed RaceTrack environment. The authors also perform a large number of ablations, which show that their method with adjustment between worst-case and random-case perturbation outperforms robustifying only one of these cases, and how large the influence of the proposed changes to S-MuZero is. Claims And Evidence: - "The presented method is the first that achieves robustness against state perturbation with MuZero." I am not an expert in robust RL, but I have not seen any relevant MuZero papers claiming to do that, so I am inclined to believe this claim. - "RobustZero performs well in both worst-case and random-case state perturbation." While not strictly true for all environments, RobustZero seems to outperform the baselines in both perturbed cases on Racetrack and in the randomly-perturbed case in the two IEEE environments. Somewhat surprisingly, even baselines that specifically optimize for that type of perturbation are outperformed here, which is pretty impressive. In summary, I believe the authors back their claims up sufficiently. Methods And Evaluation Criteria: The method seems to be mostly adapted from Liu et al. (2024), and its implementation is not terribly surprising. However, it is also not trivial and passes my personal threshold for sufficient novelty in an ICML paper. Which is of course subjective. 
I did not fully understand why RobustZero requires projector and predictor networks. I have seen similar arrangements when transition models are learned, and the encoding of $s_{t+1}$ can therefore be predicted from a projection of the encoding of $s_t$, but here this approach seems meaningless. It is also not clear why the projection does not simply learn a constant output (which would be guaranteed to be the same under perturbation). The authors claim that "the key functionality of the predictor is to transform projected features to stabilize optimization and reduce collapsing solutions", but I do not see why that is. Do the authors have a better explanation for this? Evaluation seems to be sufficient, with two suggestions for the authors: - Please do not make only the *best* entry in a column bold, this can easily deceive a casual reader. Make **all** entries bold that are not significantly worse (overlapping standard deviations) than the best entry. In your case the Pendulum task is practically useless, and some columns in the IEEE benchmarks should have multiple bold entries. - I am missing learning curves (performance in all three reward cases over environment steps). These are technically not necessary, but as a reinforcement learner I always want to see them to make sure the results are actually stable and the final results are not cherry-picked. Please add some to the appendix. Theoretical Claims: None. Experimental Designs Or Analyses: There is a decent number of ablations, demonstrating the usefulness of the proposed changes, which is welcome. Making **all** entries that are not significantly worse bold will also enhance this table. Supplementary Material: I only superficially looked over the supplementary material. Relation To Broader Scientific Literature: I am not an expert in robust RL, but the MuZero literature is sufficiently covered. Essential References Not Discussed: None. 
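The reviewer's bolding criterion could be applied mechanically; a minimal sketch (a hypothetical helper, not from the paper), assuming higher means are better and "not significantly worse" means the one-standard-deviation interval overlaps that of the best entry:

```python
def bold_mask(means, stds):
    """Return one bold/no-bold flag per entry in a results column.

    An entry is bold if its (mean - std, mean + std) interval reaches the
    lower edge of the best (highest-mean) entry's interval, i.e., it is not
    significantly worse under the overlapping-standard-deviations criterion.
    """
    best = max(range(len(means)), key=lambda i: means[i])
    best_low = means[best] - stds[best]
    return [means[i] + stds[i] >= best_low for i in range(len(means))]
```

Under this rule, columns where several methods lie within one standard deviation of the best (as the reviewer notes for Pendulum and the IEEE benchmarks) would receive multiple bold entries.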
Other Strengths And Weaknesses: The paper is mostly well written, but sometimes, in particular in the formal parts, it is a bit hard to understand. Other Comments Or Suggestions: - l.36R: Your method does not address "Even a small reality gap can compound errors in the learned models", as you only consider robust state encoding, which does not compound (only future predictions do this). Please clarify here which error you are addressing in the paper. - l.135L: the MDP is missing an initial-state distribution. You should also define $\mathcal P$ more precisely: is it stochastic or deterministic? Also, please mention that your state space must be continuous (to allow $\epsilon$-balls). - l.142L+157L+115R: Please denote $o_t$ *either* a state *or* as an observation, not both. I recommend "state" to differentiate it from the observations in POMDP. - l.117R: when you define the goal of MuZero, please clarify which action sequence has been used to compute, e.g., $p_t^k$. - l.144Lf: the goal is to maximize the *expected* return. - l.161L+163L: the reward $r_t^k$ is usually the output of $\mathcal F$, not $\mathcal G$ in MuZero. - l.141R: "environment model" -> "MDP" - l.154R: The sentence "$\tilde t$ and $\hat t$ denote time under worst-case and random-case state perturbations" makes no sense. Time is not perturbed. You mean to say that $\tilde t$ denotes data at time $t$ from a trajectory that is drawn under worst-case perturbation. - Please add the mathematical denotations of the networks in Figure 1. - Define $\mathcal D_{WC}$ and $\mathcal D_{RC}$ somewhere, e.g., in Figure 1. - In any case, it must be $\mathcal B \sim \mathcal D_{WC}$, not the other way around, and $\mathcal B$ is the batch and not the batch size (l.249L). - eq.10 misses a $||_2$ Questions For Authors: - Can the authors explain why "the predictor is to transform projected features" leads to "stabiliz[ation of] optimization and reduc[tion] collapsing solutions"? 
- Why do the authors not also use a third MuZero loss that optimizes for the unperturbed states? RobustZero already uses them and a third learning signal might improve performance further. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your positive and valuable comments. $\textbf{Response to Methods and Evaluation Criteria:}$ 1) Regarding the projector and predictor networks, please refer to the response to Q1; 2) Regarding the bold entries, we have revised all tables (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Tables/Tables%201-5.png); 3) We have added the learning curves (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Figures/Figure%20S-1.png). $\textbf{Response to Experimental Designs or Analyses:}$ We have revised all tables according to your suggestion. $\textbf{Response to Other Comments or Suggestions:}$ We would like to address your comments as follows: 1) MuZero-class methods learn abstract environment models, i.e., learned models, and employ MCTS for planning. At each time step, a real state is mapped to an abstract initial state by using the representation network. Then, the dynamics network and prediction network work in conjunction with MCTS to predict the future evolution of these abstract states. This framework inherently involves a prediction process. The compound errors refer to the accumulation of inaccuracies in these predicted abstract states. To clarify this point, we will revise the statement to ‘Even a small “reality gap” can compound errors in the predicted abstract states within MuZero-class methods, causing reduced rewards and potentially harmful decision-making’. 2) We will supplement the following missing information: i) $\rho_0$ is the initial state distribution; ii) $\mathcal{P}$ is a deterministic transition function; and iii) the state space is continuous. 3) We will use “state” uniformly. 4) We will add the following statement. The sequence of real actions from the sampled trajectory is used to compute $r_t^k$. 
5) We will revise the statement to “the goal is to maximize the expected return.” 6) We have carefully double-checked the MuZero algorithm, and confirmed that the reward $r_t^k$ is the output of the dynamics network $\mathcal{G}$. 7) We will replace “environment model” with “MDP”. 8) We will revise the statement to “$\widetilde{t}$ and $\hat{t}$ denote data at time t from a trajectory generated under worst-case and random perturbations, respectively.” 9) We have revised Fig. 1 by adding the mathematical denotations of the networks (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Figures/Figure%201.png). 10) We have defined $\mathcal{D}\_{\textit{WC}}$ and $\mathcal{D}\_{\textit{RC}}$ in Fig. 1 (see the link above). 11) We will revise the expressions: i) From $ \mathcal{D}\_{\textit{WC}}\sim\mathcal{B}$ to $ \mathcal{B}\sim\mathcal{D}\_{\textit{WC}}$; ii) From $ \mathcal{D}\_{\textit{RC}}\sim\mathcal{B}$ to $ \mathcal{B}\sim\mathcal{D}\_{\textit{RC}}$; and iii) From “B is the batch size” to “B is the batch”. 12) We will add the missing symbol in Eq. 10. $\textbf{Response to Q1:}$ Model collapse refers to a degenerate solution in which the encoder produces identical or nearly identical outputs for all inputs, resulting in trivial and non-informative representations. If we omit the predictor and directly enforce similarity between the outputs of the two branches (e.g., using mean squared error or cosine similarity), the model may exploit a shortcut: “I might as well just output a constant vector; since all inputs then yield the same output, the loss will be minimized.” This leads to model collapse. When collapse occurs, all states—regardless of whether they are perturbed or not—are mapped to the same or very similar initial hidden state, effectively losing the ability to distinguish between different inputs. 
To avoid model collapse, one effective strategy is to introduce an asymmetric network architecture, where a predictor is added to one branch, while the other branch remains without it and has its gradient flow stopped. This asymmetry prevents both branches from trivially converging to the same constant output. The branch with the predictor learns to align its output with the target branch, which acts as a stable reference point. Because the target branch does not receive gradients, it cannot adjust itself to match the predictor’s potentially trivial solution, thus breaking the symmetry that often leads to collapse. This design encourages the network to learn meaningful representations, resulting in stabilization of optimization and a reduction in collapsing solutions. $\textbf{Response to Q2:}$ The reasons are as follows. First, as shown in Eqs. 3-4, both the worst-case and random-case loss terms already incorporate information from perturbed and unperturbed states due to the use of a contrastive loss function. This contrastive loss encourages the learned policy to remain consistent before and after state perturbations. Therefore, an additional MuZero loss term specifically targeting unperturbed states is redundant. Second, an additional loss term will increase the computational cost.
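The asymmetric stop-gradient design described in this rebuttal follows the pattern popularized by SimSiam-style self-supervised learning. A minimal numpy sketch (illustrative only, not the authors' implementation; the variable names and the single-step setup are assumptions):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def asymmetric_loss(p_perturbed, z_clean):
    """Negative cosine similarity between the predictor output of the
    perturbed branch and the stop-gradient projection of the clean branch.
    Treating z_clean as a constant target (no gradient flows into it) is
    what breaks the symmetry that would let a constant-output encoder
    trivially minimize the loss."""
    z_target = z_clean.copy()  # stands in for detach() / stop_gradient
    return -cosine(p_perturbed, z_target)

rng = np.random.default_rng(0)
z = rng.normal(size=8)               # projection of the unperturbed state
p = z + 0.05 * rng.normal(size=8)    # predictor output for the perturbed state
loss = asymmetric_loss(p, z)         # close to -1 when representations agree
```

In an autograd framework, only the predictor branch would receive gradients, so the target branch cannot drift toward a trivial constant solution to match it, as the rebuttal explains.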
Summary: This work proposes a robust version of the MuZero framework called RobustZero to gain robustness when facing state perturbations. RobustZero features a self-supervised representation network to generate a consistent initial hidden state and a unique loss function to gain robustness. In the experimental setting, RobustZero shows superior performance under both worst and random state perturbations. ## update after rebuttal The rebuttal has adequately addressed most of my concerns, and I am now leaning toward accepting this paper. Claims And Evidence: I believe the claims are well supported. Methods And Evaluation Criteria: Yes, the proposed methods gain the best robustness under both worst-case and random perturbation under various environments. Theoretical Claims: Yes, I checked the formulation of the loss functions proposed by the authors. Experimental Designs Or Analyses: The evaluation of the proposed method is similar to previous related work by providing rewards under no attack and rewards under worst and random perturbations. However, I do have some concerns about the experiments. 1. The environments do not include MuJoCo, which is a commonly used environment in previous related work. 2. The worst attack is achieved by using ATLA-PPO; why not use the PA-AD [1] attack that is currently considered the strongest attack? 3. It would be better to include some classical baselines that are compared in most related work, such as SA-PPO [2], in the experiments. [1] Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. ICLR 2022 [2] Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations. NeurIPS 2020 Supplementary Material: I checked the supplementary material for extra experiment results. Relation To Broader Scientific Literature: This work is related to robust RL, trustworthy AI and AI safety in general. 
Essential References Not Discussed: I believe there are recent works in this field using diffusion models to gain robustness under state perturbation attacks that should be cited. [1] DMBP: Diffusion model based predictor for robust offline reinforcement learning against state observation perturbations. ICLR 2024 [2] Belief-Enriched Pessimistic Q-Learning against Adversarial State Perturbations. ICLR 2024 Other Strengths And Weaknesses: Strengths: 1. Comprehensive experiment results and ablation studies reported by the authors. 2. The first work to consider state perturbation in the MuZero algorithm. Weaknesses: 1. There are multiple hyperparameters introduced in RobustZero, and it is shown in the Appendix that the optimal parameters are different across different environments. Does RobustZero need to search for the best parameters for a new environment? Other Comments Or Suggestions: N/A Questions For Authors: I will summarize all my questions here. 1. The environments do not include MuJoCo, which is a commonly used environment in previous related work. 2. The worst attack is achieved by using ATLA-PPO; why not use the PA-AD [1] attack that is currently considered the strongest attack? 3. It would be better to include some classical baselines that are compared in most related work, such as SA-PPO [2], in the experiments. 4. There are multiple hyperparameters introduced in RobustZero, and it is shown in the Appendix that the optimal parameters are different across different environments. Does RobustZero need to search for the best parameters for a new environment? 5. Does RobustZero know the attack budget $\epsilon$ during training? If it knows, how is the performance with an uncertain attack budget $\epsilon$? 6. Diffusion model based methods also achieved strong robustness, as shown in [3]. Could the authors include it as a baseline? I believe the setting of [3] is similar to the environments used in the paper. 7. 
Could the authors provide training time and testing time comparisons? [1] Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. ICLR 2022 [2] Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations. NeurIPS 2020 [3] DMBP: Diffusion model based predictor for robust offline reinforcement learning against state observation perturbations. ICLR 2024 Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your valuable comments and recognition of our contributions. $\textbf{Response to Experimental Designs or Analyses:}$ Please refer to the responses to Q1-Q3. $\textbf{Response to Essential References Not Discussed:}$ We will add the two references as follows. The recent studies [1-2] introduce diffusion models to enhance robustness to state perturbations. Specifically, a diffusion model-based predictor is proposed [1] for offline RL to recover the actual states against state perturbations. A belief-enriched pessimistic Q-learning method is proposed [2] by using a diffusion model to purify observed states. $\textbf{Response to Q1:}$ Following your suggestion, we have studied RobustZero and all baselines on five MuJoCo environments, including Hopper, Walker2d, HalfCheetah, Ant and Humanoid. The results are provided in Table S-1 (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Tables/Table%20S-1.png). The results consistently show that RobustZero outperforms all baselines at defending against state perturbations. This provides evidence that RobustZero performs well in general and challenging MuJoCo environments. $\textbf{Response to Q2:}$ We fully agree with the reviewer that PA-AD (referred to as PA-ATLA-PPO in our paper) is currently considered the strongest attack method. However, as mentioned in Section 2.2, it is a white-box attack, requiring access to the internal parameters of the victim model. In contrast, our work focuses on black-box attack settings, where such access is not available. Therefore, we do not include PA-ATLA-PPO as a baseline in our experiments. $\textbf{Response to Q3:}$ Since SA-PPO is a white-box attack strategy, we do not use it as a baseline. $\textbf{Response to Q4 and W1:}$ Our method involves four parameters: $w1$, $w2$, $\lambda_1$, and $\lambda_2$. 
We clarify their design and selection: 1) One of our contributions is the development of an adaptive mechanism to adjust $w1$ and $w2$, eliminating the need for ad-hoc adjustments. Note that in Appendix C.3, we show the selection of $w2$ across different environments. This is used in ablation studies, where we intentionally fix $w2$ and find its best value to isolate its effect and demonstrate the advantage of the adaptive mechanism; 2) $\lambda_1$ adjusts the trade-off between robustness to worst-case perturbations and random perturbations. A larger $\lambda_1$ emphasizes worst-case robustness, while a smaller value favors random-case robustness. When both are considered equally important, setting $\lambda_1=1$ (as done in our experiments) is a reasonable and effective default. Thus, $\lambda_1$ does not require tuning unless specific emphasis is desired; and 3) Among the four parameters, $\lambda_2$ is the only one that requires manual tuning. To keep the selection process simple, we employ a standard grid search. Appendix C.7 analyzes the effect of different $\lambda_2$ values, which is separate from the selection process. In summary, we only need to search for the best setting of $\lambda_2$ for a new environment. Therefore, the hyperparameter selection is not complex. 
In contrast, RobustZero and all baseline methods evaluated in our work are online RL methods, where the agent learns by actively interacting with the environment. Generally, online RL methods are capable of achieving higher rewards than offline RL methods due to their adaptive exploration. Given the fundamental differences in training protocols and assumptions between offline and online RL, we do not include [3] as a baseline, as it would not constitute a fair comparison under the same experimental setup. $\textbf{Response to Q7:}$ In Appendix C.6, we provided a comparative analysis of the training time and sampling time. Since the sampling time closely approximates the testing time, we initially omitted testing time results in the submitted paper. Now, we have added the testing time per step (TeT) and updated Table 6 accordingly (refer to https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Tables/Table%206.png). As shown in the updated Table 6, the testing time is similar to the sampling time, so the previous analysis of sampling time remains applicable. For a comprehensive comparison, please refer to Appendix C.6.
Summary: The authors propose RobustZero, the first MuZero-class method designed to ensure robustness against state perturbations, including both worst-case and random-case scenarios. The proposed method introduces a training framework that includes a self-supervised representation network, which facilitates the generation of consistent policies both before and after state perturbations. The framework also incorporates a unique loss function that enhances the robustness of the training process. Furthermore, the authors present an adaptive adjustment mechanism that allows for model updates, ensuring high robustness to perturbations. Extensive experiments conducted across eight environments provide strong evidence that RobustZero outperforms existing state-of-the-art methods in defending against state perturbations. Claims And Evidence: The motivation of this work starts from the limitations of both model-free and model-based methods. Therefore, MuZero was proposed to integrate the strengths of both methods. However, it is not guaranteed that RobustZero will inherit the same efficiency benefits compared with robust model-free methods. In short, learning curves or other metrics should be included to show that the proposed method is more sample efficient. Methods And Evaluation Criteria: The benchmark is not sufficient. The authors only pick CartPole and Pendulum from MuJoCo, while there are more complicated environments in MuJoCo. Only evaluating on more complicated environments can convincingly show that pure model-based methods do not have access to prior knowledge of the environment’s dynamics. Theoretical Claims: No theory Experimental Designs Or Analyses: How do you train S-MuZero-worst under the worst case? Do you assume that you know the worst-case scenario for different policies, and is the worst case here fixed during training? This also connects to another question of mine: how can the worst-case reward be higher than the random-case reward on Racetrack for S-MuZero-worst in Table 1? 
I thought the worst case means that performance should always be the worst. Supplementary Material: Yes, I browsed the comprehensive experimental results. Relation To Broader Scientific Literature: Introduce the robustness concept to a blend of model-free and model-based methods Essential References Not Discussed: I suggest the authors include the following papers, which are also an important branch of robust RL, especially since it is still not clear what the definitions of worst case and random case are in this paper's setting. The paper should clearly discuss and identify the differences from [2]'s worst/random settings. [1] Lerrel Pinto et al. "Robust Adversarial Reinforcement Learning", ICML, 2017 [2] Juncheng Dung et al. "Variational Adversarial Training Towards Policies with Improved Robustness", AISTATS, 2025 Other Strengths And Weaknesses: Strengths: * This paper makes a valuable contribution to the field with promising results. * Comprehensive analysis and experiments Other Comments Or Suggestions: * In line 363, I thought it should be each column instead of row. Questions For Authors: * Can S-MuZero solve the tasks with continuous action spaces? If yes, an explicit statement should be added in Section 3.3. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your positive and valuable comments. $\textbf{Response to Claims and Evidence:}$ Following your comments, we have analyzed the relationship between the number of environment samples and the natural, worst-case, and random-case rewards for RobustZero and the two robust model-free baselines: ATLA-PPO and PROTECTED. The results are presented in Table S-2 (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Tables/Table%20S-2.png). Please note that the number of samples per episode in RobustZero differs from those used in the two baselines (refer to Table 6 in Appendix C.6). Therefore, while the numbers of samples reported in Table S-2 are similar across methods, they are not exactly the same. From Table S-2, by using similar samples, RobustZero achieves higher rewards compared to ATLA-PPO and PROTECTED, demonstrating its superior sample efficiency. $\textbf{Response to Method and Evaluation Criteria:}$ Following your suggestion, we have studied RobustZero and all baselines on five MuJoCo environments, including Hopper, Walker2d, HalfCheetah, Ant and Humanoid. The results are provided in Table S-1 (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Tables/Table%20S-1.png). The results consistently show that RobustZero outperforms all baselines at defending against state perturbations. This provides evidence that RobustZero also performs well in general and challenging MuJoCo environments. $\textbf{Response to Experimental Designs or Analyses:}$ To ensure a fair comparison, we follow prior works, i.e., ATLA-PPO and PROTECTED, by using ATLA-PPO to obtain a worst-case perturbation policy that is applied to all methods. Regarding robust RL methods, e.g., RobustZero, ATLA-PPO and PROTECTED, the agent is trained by using non-perturbed state and perturbed state information. This enables obtaining consistent policies before and after state perturbations. 
Regarding S-MuZero-worst, it is an extended baseline, representing S-MuZero trained under the worst-case perturbation policy. Importantly, S-MuZero-worst does not incorporate any robustness mechanism. It is trained solely on perturbed states, without any information about non-perturbed states. This causes S-MuZero-worst to be over-trained under the worst-case perturbation policy, making it highly specialized and adapted to that particular perturbation pattern. As a result, its performance under the worst-case perturbation policy unusually appears higher than its performance under random or no perturbations, as observed in Table 1. This explains why its worst-case reward may appear higher than its random-case reward. Note that S-MuZero-worst is deliberately designed to showcase the effect of over-training under worst-case perturbations, regardless of its natural and other performance. Except for S-MuZero-worst, the worst-case rewards are lower than the corresponding natural and random-case rewards. $\textbf{Response to Essential References Not Discussed}:$ We would like to address your comments as follows: 1) Following your suggestion, we will add the two references. Specifically, a robust adversarial reinforcement learning method is proposed [1] to jointly train an agent and an adversary, where the agent aims to accomplish the primary task objectives while learning to remain robust against disturbances introduced by the adversary. A recent study [2] proposes the use of variational optimization over worst-case adversary distributions, rather than a single adversary, and trains an agent to maximize the lower quantile of returns to mitigate over-optimism; and 2) In our paper, a worst-case state perturbation refers to an adversarial modification of the agent’s observed state that is carefully crafted to minimize its expected return. In contrast, a random-case state perturbation refers to a stochastic disturbance applied to the agent’s observed state. 
Formal definitions of both perturbation policies are provided in Definitions 4.2 and 4.3 of our paper (see pages 3-4). Additionally, we would like to note that for reference [2], only the abstract is currently available, and the full version has not yet been released online. As a result, we are unable to determine the specific settings used in that work regarding worst-case and random-case perturbations. $\textbf{Response to Other Comments or Suggestions:}$ We will change “row” to “column”. $\textbf{Response to Questions for Authors:}$ S-MuZero can solve the tasks with continuous action spaces. We will add this statement in Section 3.3.
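The two perturbation cases distinguished in this rebuttal can be sketched as follows (a hypothetical illustration under an $l_\infty$ budget; the single gradient-sign step merely stands in for the paper's learned worst-case perturbation policy, and `value_grad` is an assumed value-function gradient):

```python
import numpy as np

EPS = 0.1  # attack budget: perturbations stay within the l_inf eps-ball

def random_case(state, rng):
    # Random-case: stochastic disturbance drawn uniformly from the eps-ball.
    return state + rng.uniform(-EPS, EPS, size=state.shape)

def worst_case(state, value_grad):
    # Worst-case: adversarial modification crafted to reduce expected return;
    # here, one FGSM-style sign step on the value estimate as an illustration.
    return state - EPS * np.sign(value_grad(state))

rng = np.random.default_rng(1)
s = np.zeros(4)                     # observed state
grad = lambda x: np.ones_like(x)    # hypothetical value-function gradient
s_rand = random_case(s, rng)        # stays inside the eps-ball
s_worst = worst_case(s, grad)       # pushed to the ball's boundary
```

The random case merely adds noise, while the worst case deliberately exhausts the budget in the most damaging direction, which is why defenses tuned to one case need not transfer to the other.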
Summary: The paper introduces RobustZero, an enhanced MuZero framework designed to be robust against both random-case and worst-case adversarial perturbations. RobustZero dynamically balances data generation between these perturbations and incorporates them directly into online training. Claims And Evidence: The claims presented in the paper are supported by extensive empirical evidence. Methods And Evaluation Criteria: I suggest employing more general and challenging benchmarks (e.g., board games, MuJoCo) that are standard in the RL community. Pendulum and CartPole appear overly simplistic. Additionally, detailed explanations of the transportation tasks would help clarify their complexity. Theoretical Claims: N/A Experimental Designs Or Analyses: Consider including training curves (e.g., reward vs. training steps plots) to illustrate learning dynamics and convergence. Supplementary Material: I read the appendix in its entirety. Relation To Broader Scientific Literature: The paper proposes a robust framework addressing sensory noise and adversarial perturbations. Demonstrating effectiveness on more complex tasks could significantly strengthen the proposed method’s relevance. Additionally, the current design involves numerous hyperparameters and ad-hoc choices. Reducing these or providing theoretical justifications would enhance the rigor of the method. Essential References Not Discussed: Given that the paper integrates contrastive representation learning with MuZero, it should further discuss representation learning approaches within the Related Work section, notably EfficientZero (https://arxiv.org/pdf/2111.00210). Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: [Q1] How does RobustZero perform on more complex tasks, such as board games (e.g. Go, Chess, Shogi) and high-dimensional continuous control environments (e.g., MuJoCo Humanoid)? 
[Q2] The current method involves numerous hyperparameters and seemingly ad-hoc adjustments. Can you reduce the complexity or provide theoretical justification for these choices? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your comments and our responses are detailed below. $\textbf{Response to Method and Evaluation Criteria and Q1}$: We would like to address your comments as follows: 1) We have studied RobustZero and all baselines on five MuJoCo environments, including Hopper, Walker2d, HalfCheetah, Ant and Humanoid. The results are provided in Table S-1 (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Tables/Table%20S-1.png). RobustZero consistently outperforms all baselines at defending against state perturbations. This provides evidence that RobustZero also performs well in general and challenging MuJoCo environments. 2) Following ATLA-PPO and PROTECTED, we adopt an $l_{p}$-norm perturbation model to generate small, semantically invariant perturbations. However, in board games, the states are discrete and highly structured, and even small $l_{p}$-norm perturbations can result in invalid or illegal states that violate game rules. Therefore, this perturbation model is not suitable for board games, and we do not evaluate all methods in such environments. 3) We will add the following explanations for the energy and transportation tasks. The three transportation environments support the testing of autonomous driving tasks. Therein, an autonomous driving car interacts with other vehicles to navigate different scenarios: i) Highway$-$Drive fast, avoid collisions, and stay in the right-most lane; ii) Intersection$-$Cross safely, follow traffic rules, and keep a steady speed; and iii) Racetrack$-$Finish quickly while staying on track and driving smoothly. The action space of an autonomous driving car is two-dimensional. The three energy environments support the testing of voltage control tasks with the objective of minimizing the total cost of voltage violations, control errors, and power losses, while meeting both networked and device constraints. 
The action spaces of IEEE 34-bus, IEEE 123-bus and IEEE 8500-node are 10-dimensional (8 continuous and 2 discrete), 15-dimensional (11 continuous and 4 discrete), and 32-dimensional (22 continuous and 10 discrete), respectively. Thus, these energy environments are complex and high-dimensional. $\textbf{Response to Experimental Designs or Analyses}$: We have added these training curves (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Figures/Figure%20S-1.png). $\textbf{Response to Q2}$: Our method involves four parameters: $w1$, $w2$, $\lambda_1$, and $\lambda_2$. We clarify their design and selection: 1) One of our contributions is the development of an adaptive mechanism to adjust $w1$ and $w2$, eliminating the need for ad-hoc adjustments. Note that in Appendix C.3, we show the selection of $w2$ across different environments. This is used in ablation studies, where we intentionally fix $w2$ and find its best value to isolate its effect and demonstrate the advantage of the adaptive mechanism; 2) $\lambda_1$ adjusts the trade-off between robustness to worst-case perturbations and random perturbations. A larger $\lambda_1$ emphasizes worst-case robustness, while a smaller value favors random-case robustness. When both are considered equally important, setting $\lambda_1=1$ (as done in our experiments) is a reasonable and effective default. Thus, $\lambda_1$ does not require tuning unless specific emphasis is desired; and 3) Among the four parameters, $\lambda_2$ is the only one that requires manual tuning. To keep the selection process simple, we employ a standard grid search. Appendix C.7 analyzes the effect of different $\lambda_2$ values, which is separate from the selection process. In summary, our method does not rely on ad-hoc adjustments. The adaptive mechanism removes the need to tune $w1$ and $w2$. $\lambda_1$ can be fixed to a default value, e.g., 1. Only $\lambda_2$ requires simple tuning via grid search. 
Therefore, the hyperparameter selection is not complex. $\textbf{Response to Essential References Not Discussed}$: We will add the following statements. Contrastive representation learning has been used to improve sample efficiency [A, B] and enhance the representation ability of states [C], rewards [D], and value functions [E]. Among them, one notable work is EfficientZero [B], which significantly improves the sample efficiency of the MuZero method while maintaining superior performance. It achieves this by using contrastive representation learning to build a consistent environment model and using the learned model to correct off-policy value targets. Different from these studies, we aim to leverage contrastive representation learning to improve the robustness of MuZero-class methods to state perturbations. [A] Data-efficient reinforcement learning with self-predictive representations [B] Mastering Atari games with limited data [C] Planning with goal-conditioned policies [D] Beyond reward: Offline preference-guided policy optimization [E] Contrastive learning as goal-conditioned reinforcement learning --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response. My main concern is still the difficulty of the benchmarks and the performance of the proposed algorithm. On MuJoCo, the performance improvement does not seem to be significant, and in some environments RobustZero is worse than the best baseline, while the better ones are within one standard deviation. --- Reply to Comment 1.1.1: Comment: We sincerely apologize for the insufficient explanation, which may have led to misunderstandings due to space limitations. We would like to take this opportunity to clarify the effectiveness of the proposed RobustZero on the MuJoCo environments as follows.
1) $\textbf{Response to “in some environments RobustZero is worse than the best baseline”.}$ As shown in Table S-1 (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Tables/Table%20S-1.png), we report natural rewards, worst-case rewards, and random-case rewards across the five MuJoCo environments. First, RobustZero and S-MuZero (i.e., the version of RobustZero without any defense strategies against state perturbations) achieve higher natural rewards than other baselines. Notably, S-MuZero obtains slightly higher natural rewards than RobustZero. This is because S-MuZero is trained without state perturbations. It thus obtains the best natural rewards. However, its worst-case and random-case rewards decrease notably. In comparison, RobustZero can still obtain comparable natural rewards but much better worst-case and random-case rewards. We have also provided an explanation for why S-MuZero slightly outperforms RobustZero in natural rewards on the previously selected eight environments (see Column 1, Lines 381–382, and Column 2, Lines 348–352 on Page 7 of our paper). The exception is on CartPole, where four methods are able to obtain the optimal natural reward as noted in line 661 on page 13 of our paper. Second, RobustZero consistently achieves higher worst-case and random-case rewards than all baselines across all the five MuJoCo environments, further validating its robustness to state perturbations. In summary, RobustZero outperforms all baselines in terms of worst-case and random-case performance, while maintaining comparable natural rewards to S-MuZero across the five MuJoCo environments. These findings are consistent with the results on the previously selected eight environments. 2) $\textbf{Response to the “the better ones are within one standard deviation”.} $ We consider five baselines: ATLA-PPO, PROTECTED, S-MuZero, S-MuZero-worst, and S-MuZero-random. 
Among them, ATLA-PPO and PROTECTED are model-free DRL methods, while RobustZero, S-MuZero, S-MuZero-worst, and S-MuZero-random are MuZero-class methods. Our key contribution is to propose RobustZero, the first MuZero-class method that is robust to both worst-case and random-case state perturbations. As shown in Table S-1 (see https://anonymous.4open.science/r/RobustZero-SupportMaterials-8512/Tables/Table%20S-1.png), RobustZero achieves the best worst-case and random-case rewards. Importantly, there are no overlapping standard deviations between RobustZero and the other three MuZero-class baselines (S-MuZero, S-MuZero-worst, and S-MuZero-random) across all five MuJoCo environments. This demonstrates that the performance improvements brought by RobustZero in robustness are statistically significant within the MuZero family. In addition, although there are overlapping standard deviations between RobustZero and the model-free baselines (ATLA-PPO and PROTECTED) in some cases, RobustZero still achieves the highest average performance. 3) $\textbf{Response to “On MuJoCo, the performance improvement does not seem to be significant”. }$ As discussed above, RobustZero significantly improves the robustness of MuZero-class methods to both worst-case and random-case state perturbations, while maintaining high natural performance across the five MuJoCo environments. Thus, RobustZero performs well in general and challenging MuJoCo environments. We hope these clarifications address your concerns.
The Sharpness Disparity Principle in Transformers for Accelerating Language Model Pre-Training
Accept (poster)
Summary: This paper uncovers a sharpness disparity across different blocks in Transformers, which persists throughout the training process. The authors propose a novel Blockwise Learning Rate strategy to accelerate LLM (e.g., GPT and LLaMA) pre-training. Furthermore, the proposed method consistently achieves lower loss. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check the proof carefully, but the theoretical results seem faithful and supported by the empirical results. Experimental Designs Or Analyses: Yes, I think the experiments are solid enough to support the authors' claim. Supplementary Material: I reviewed the "Experimental Details" in the Appendix, which looks good. Relation To Broader Scientific Literature: NA. Essential References Not Discussed: NA. Other Strengths And Weaknesses: Strengths: 1. The paper is written clearly. 2. The paper designs a systematic study on the impact of sharpness in LLM pretraining, and the experiments support their claims well. 3. The experimental results are promising. Weaknesses: 1. The approximation is only performed for the diagonal Hessian matrix, and the gap between the estimated Hessian matrix and the true Hessian matrix has not been controlled. 2. For non-LLM tasks and other optimizers, the sharpness principle may not hold and would require substantial computation to verify. Other Comments Or Suggestions: NA. Questions For Authors: 1. Why do you not decrease the LRs along high-sharpness directions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper and your appreciation. We will try our best to address your questions. **Q1: Concerns about the gap between the estimated diagonal Hessian and the true Hessian.** "The approximation is only performed for the diagonal Hessian matrix, and the gap between the estimated Hessian matrix and the true Hessian matrix has not been controlled." **A1**: Thanks for this question. - First, we’d like to clarify that our sharpness measure (Eq. (4)) is based on the trace of blockwise Hessians, for which *only the diagonal Hessians are needed*. This follows from the identity ${\rm Tr}(H)={\rm Tr}({\rm diag}(H))$, which we will clarify in the revised version. - To approximate the diagonal Hessian, we adopt the diagonal Fisher matrix. The Fisher is widely regarded as a reasonable approximation of the Hessian in optimization and deep learning [LeCun et al., 1998; Amari, 1998; Martens, 2014]. **Q2: Suggestions for experiments on non-LLM tasks and other optimizers.** "For non-LLM tasks and other optimizers, the sharpness principle may not hold and would require substantial computation to verify." **A2**: Thanks for the constructive suggestions. To address them, we conducted two **new experiments**. - **Other optimizers.** We evaluated the sharpness disparity principle on LLaMA trained with Lion [Chen et al., 2023]. As shown in [Fig.R7](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-OPT-V2/file/R7_law_lion.pdf?v=c52ad373), it exhibits *almost the same* principle as with AdamW (Eq. (1) and Fig. 3(b)). Then we integrated Blockwise LR into Lion. Remarkably, as shown in [Fig.R6](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R6_lion_web.pdf?v=6932b09d), Lion with Blockwise LR achieves lower terminal loss and a 2x speedup over well-tuned Lion. 
- **Non-LLM tasks.** While our primary focus is on language models, as indicated by the title, we followed your suggestion and evaluated the sharpness principle on a ViT-B trained on ImageNet-1k. Surprisingly, as shown in [Fig. R8](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-OPT-V2/file/R8_ViT.pdf?v=e754a2fa), ViT exhibits a similar sharpness ordering as LLMs: S(QK)<S(FFN)<S(VO)<S(Norm). The only difference is that the embedding layer is no longer the flattest, likely due to structural differences between image and language inputs. These results suggest Blockwise LR can be extended to vision models using this revised principle. **Q3: Questions on the design of Blockwise LR.** "Why do you not decrease the LRs along high-sharpness directions?" **A3**: Thanks for this insightful question. - Our design follows the view in [Wen et al., 2024; Wang et al., 2024; Song et al., 2024]: low-sharpness directions primarily drive loss descent, while high-sharpness directions determine training stability. To maintain stability, we keep the learning rate in high-sharpness directions unchanged. - Reducing LR in high-sharpness directions may suppress oscillations in these directions but alters the stability condition, and its long-term impact remains unclear. [Wen et al., 2024] shows that using relatively large LR in high-sharpness ("hill") directions early in training can result in lower final loss. ### Reference Allen-Zhu et al. A Convergence Theory for Deep Learning via Over-Parameterization. 2018. Amari, S. Natural Gradient Works Efficiently in Learning. Neural Computation. 1998. Chen et al. Symbolic Discovery of Optimization Algorithms. 2023. D'Angelo et al. Why Do We Need Weight Decay in Modern Deep Learning? 2023. Du et al. Understanding Emergent Abilities of Language Models from the Loss Perspective. 2024. Du et al. Gradient Descent Finds Global Minima of Deep Neural Networks. 2018. Hoffmann et al. Training Compute-Optimal Large Language Models. 2022. 
Hu et al. MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies. 2024. LeCun et al. Efficient Backprop. 1998. Liu et al. Sophia: A scalable stochastic second-order optimizer for language model pre-training. 2024. Martens, J. New insights and perspectives on the natural gradient method. 2014. Song et al. Does SGD really happen in tiny subspaces? 2024. Wang et al. Improving Generalization and Convergence by Enhancing Implicit Regularization. 2024. Wen et al. Understanding Warmup-Stable-Decay Learning Rates: A River Valley Loss Landscape Perspective. 2024. --- Rebuttal Comment 1.1: Comment: I have improved my score from 3 to 4. --- Reply to Comment 1.1.1: Comment: Thank you sincerely for taking the time to revisit your evaluation and improve your score.
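As a sketch of the quantity defended in A1 of the rebuttal above (the trace of the blockwise Hessian, computed via the diagonal-Fisher approximation), the chain of identities can be written out. The normalization by block dimension is an assumption here, and the paper's exact Eq. (4) may differ in details:

```latex
% Blockwise sharpness of a parameter block \theta_B with Hessian H_B.
% The first equality uses Tr(H) = Tr(diag(H)); the final step is the
% diagonal-Fisher approximation, replacing diagonal Hessian entries by
% expected squared gradients of the loss \ell.
S(\theta_B)
  = \frac{1}{\dim(\theta_B)} \operatorname{Tr}(H_B)
  = \frac{1}{\dim(\theta_B)} \operatorname{Tr}\big(\operatorname{diag}(H_B)\big)
  \approx \frac{1}{\dim(\theta_B)} \sum_{i \in B}
      \mathbb{E}\!\left[\big(\partial_{\theta_i}\ell\big)^{2}\right]
```

This also makes explicit why only diagonal entries are needed: the off-diagonal Hessian entries never enter the trace.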
Summary: The authors demonstrate that there is a sharpness disparity between the different transformer blocks, which appears early in training and persists throughout. Based on their observation, the authors introduce a novel approach called Blockwise Learning Rate, which adjusts the learning rate of each transformer block based on its sharpness. Blockwise Learning Rate with AdamW achieves lower loss with the same number of gradient steps, and reaches the same loss with nearly half the steps on various sizes of GPT-2 and LLaMA across two datasets. Additionally, the authors demonstrate that Blockwise Learning Rate is compatible with Adam-mini, a memory-efficient variant of Adam. Claims And Evidence: The two claims are (1) there is a sharpness disparity between the different transformer blocks, and (2) adjusting the learning rate independently for each block based on its sharpness results in faster training speed with respect to the number of gradient steps. The theoretical proofs and experimental evaluation support their claims. Methods And Evaluation Criteria: The evaluation methodology is sound: the models, datasets, and training procedure are standard, and they consider various model sizes, optimizers, and hyperparameters. However, the authors only report the loss and do not evaluate the downstream performance, which would strengthen their evidence. Theoretical Claims: I have not reviewed the proofs in detail due to a lack of time. However, the demonstrations in the main paper *appear* to be sound and supported by empirical evidence. Experimental Designs Or Analyses: As mentioned above, the experimental design is sound. The authors consider two widely popular models, GPT-2 and LLAMA, with various sizes. The two datasets considered are well established too, although they are small in comparison to modern datasets such as RedPajama or RefinedWeb. The training procedure is modern, relying on AdamW with tuned β=(0.9,0.95) and a warmup phase followed by a cosine decay. 
In the literature, it is more common for the final learning rate to be 10% of the peak learning rate instead of 5%, but I do not believe that this invalidates their observations. The number of gradient steps is relatively small (30K to 100K) compared to modern LLMs, but I believe it is just enough to draw conclusions. The performance is only evaluated in terms of loss, which may not necessarily translate to better performance on downstream tasks, so I would like to see some downstream evaluation added. Supplementary Material: I briefly went over the appendix but did not check the correctness of the proofs. Relation To Broader Scientific Literature: This paper contributes to the field of sharpness analysis and efficient optimizers for transformers. As far as I am aware, this is the first study to consider the sharpness of blocks rather than layers and to suggest modifying the learning rate per-block across the entire model. Essential References Not Discussed: I am not aware of missing relevant works. Other Strengths And Weaknesses: The figures are clear, especially how the sharpness of each block is depicted in Figure 3. I appreciate that the authors validated their proposed Blockwise Learning Rate across two models, two datasets, two optimizers, and various sizes. Additionally, I appreciate that they tuned the learning rate to ensure optimal performance for AdamW. The main weaknesses are the lack of downstream evaluation, which may reveal a closer performance than the loss suggests, and the number of gradient steps, which may close the gap. Other Comments Or Suggestions: I suggest replacing "point-wise feed-forward network" with either "position-wise" or "token-wise" as they are more common. Questions For Authors: - Can you extend the training of one of the small models, preferably Llama 0.25B? - Can you evaluate a few models on downstream benchmarks? - You have shown that the Blockwise Learning Rate works across optimizers. 
Do you know if it also works across learning rate schedulers? Can you apply your Blockwise Learning Rate with warm-stable-decay (WSD) instead of cosine decay? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper and your appreciation. We will try our best to address your questions. **Q1: Concerns about the downstream performance.** "However, the authors only report the loss and do not evaluate the downstream performance, which would strengthen their evidence." "The performance is only evaluated in terms of loss, which may not necessarily translate to better performance on downstream tasks, so I would like to see some downstream evaluation to be added." **A1**: Thanks for raising this concern. - We clarify that in LLM pretraining, downstream performance is widely known to correlate strongly with final pretraining loss and less so with factors like architecture or optimizer choice. See [Du et al., 2024] for a detailed analysis. Thus, the primary focus in LLM pre-training is to reduce the final loss as much as possible, under specific compute and data budgets. - In response to your suggestion, we **evaluated downstream performance**. As shown in [Tab.R4](https://anonymous.4open.science/api/repo/SharpnessTable-Downstream/file/R4_evaluate.pdf?v=8dadc1ea), LLaMA trained with our algorithm outperforms the one trained with AdamW across all evaluated tasks. **Q2: Concerns about the final learning rate.** "In the literature, it is more common for the final learning rate to be 10% of the peak learning rate instead of 5%, but I do not believe that this invalidates their observations." **A2**: Thank you for the helpful comment. - In our experiments, we followed the setup in Sophia (Sec. 3.1 in [Liu et al., 2023]), where the final LR is set to 5% of the peak LR. - As noted by the reviewer, it is more common in the literature for the final LR to be 10% of the peak LR. In our **new experiments** conducted on the C4 dataset, we adopt this setting, and the corresponding results are shown in [Fig.R2](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R2_scaling_c4.pdf?v=f2e12859). 
**Q3: Concerns about the number of training steps.** "The number of gradient steps is relatively small (30K to 100K) compared to modern LLMs, but I believe it is just enough to draw conclusions." "Can you extend the training of one of the small models, preferably Llama 0.25B?" **A3**: Thanks for this question. - We clarify that our training steps are sufficiently large given current model and dataset sizes. For example, 100k steps on OpenWebText yields 480x1024x100k~**50 billion tokens**. According to Tab. 3 in [Hoffmann et al., 2022], most of our experiments *exceed the recommended token budget*. - Moreover, this training setup aligns with standard practice in the community, e.g., [Liu et al., 2023; D'Angelo et al., 2023]. - Following your suggestion, we conducted a **new experiment** by extending the training of 0.25B LLaMA from 50k/100k to 150k/300k steps. As shown in [Fig.R3](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R3_025B_long.pdf?v=16de8e07), Blockwise LR still achieves lower terminal loss and is 2x faster than AdamW even at longer durations. **Q4: Suggestions for "point-wise feed-forward network".** "I suggest replacing "point-wise feed-forward network" with either "position-wise" or "token-wise" as they are more common." **A4**: Thank you for your suggestion. We will revise the terminology in the revised version. **Q5: Suggestions for WSD experiments.** "Do you know if it also works across learning rate schedulers? Can you apply your Blockwise Learning Rate with warm-stable-decay (WSD) instead of cosine decay?" **A5**: Thanks for the constructive suggestion. Following your recommendation, we conducted a **new experiment** using the WSD scheduler. As shown in [Fig.R5](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R5_wsd_web.pdf?v=f591fc57), Blockwise LR still achieves a 2x speedup over AdamW under this setting. 
### Reference Due to space limit, see Reference in *Rebuttal to Reviewer w3eV*. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' clarifications and am pleased with the additional experiments. I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: We're glad that our responses addressed your concerns and appreciate your willingness to revise your score accordingly.
Summary: The paper proposes a blockwise learning rate method to accelerate training. The blockwise learning rate is designed based on blockwise sharpness estimation. The writing is clear. The principle is reasonable and makes sense to me. The experiments are mostly convincing. Overall, this is a good paper and might motivate more future algorithm designs. The paper is worth sharing with the community. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: no Essential References Not Discussed: no Other Strengths And Weaknesses: see below Other Comments Or Suggestions: see below Questions For Authors: Q1: In Figure 4 and 5, what do "(50k)" and "(100k)" mean? I cannot find the description anywhere around the figure or in Section 6. Q2: According to Figure 4, blockwise lr (50k) converges slower than AdamW (50k) in the early stage, and then blockwise lr (50k) converges faster in the final steps. Why would this happen? Any explanation or intuition? Q3: In Figure 4, the training seems far from convergence. What is the total number of tokens T? Does the advantage of blockwise lr maintain if we train more tokens? Q4: The current blockwise lr is tested on top of the cosine schedule. How does the proposed blockwise lr reconcile with other lr schedules like WSD? Does the acceleration maintain? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper and your appreciation. We will try our best to address your questions. **Q1: Questions of the meaning of "(50k)" and "(100k)".** "In Figure 4 and 5, what do "(50k)" and "(100k)" mean? I cannot find the description anywhere around the figure or in Section 6." **A1**: Thank you for pointing this out. 50k and 100k refer to the total training steps. We will provide more explanations in a future revision. **Q2: Questions about the faster convergence of Blockwise LR in the final steps.** "According to Figure 4, blockwise lr (50k) converges slower than AdamW (50k) in the early stage, and then blockwise lr (50k) converges faster in the final steps. Why would this happen? Any explanation or intuition?" **A2**: Very interesting question. This behavior resembles that of WSD schedulers, which often outperform cosine decay in later stages. A preliminary explanation draws on the **river-valley** loss landscape [Wen et al., 2024], which splits the loss into two components: a *river component* (the primary loss along the river at the bottom of the hills) and a *hill component* (additional loss from deviations in height from the river’s course). - *Early training*: Blockwise LR boosts the LR in river (low-sharpness) directions, enabling faster progress on the river component, but it inevitably also boosts some hill (high-sharpness) directions due to noise in the data or Hessian estimates, causing larger oscillations and higher loss. - *Late training*: As LR decays, the oscillations along the hill component diminish and iterates settle close to the river path [Wen et al., 2024]. Since Blockwise LR made more progress along the river early on, it achieves a lower terminal loss. We will add this discussion in the revision. **Q3: Questions about the total number of training tokens.** "In Figure 4, the training seems far from convergence. What is the total number of tokens T? 
Does the advantage of blockwise lr maintain if we train more tokens?" **A3**: Thanks for this question. - We clarify that our training steps are sufficiently large given current model and dataset sizes. For example, 100k steps on OpenWebText yields 480x1024x100k~*50 billion tokens*. According to Tab. 3 in [Hoffmann et al., 2022], most of our experiments *exceed the recommended token budget*. - Moreover, this training setup aligns with standard practice in the community, e.g., [Liu et al., 2023; D'Angelo et al., 2023]. - Following your suggestion, we conducted a **new experiment** by extending the training of 0.25B LLaMA from 50k/100k to 150k/300k steps. As shown in [Fig.R3](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R3_025B_long.pdf?v=16de8e07), Blockwise LR still achieves lower terminal loss and is 2x faster than AdamW even at longer durations. **Q4: Suggestions for WSD experiments.** "The current blockwise lr is tested on top of the cosine schedule. How does the proposed blockwise lr reconcile with other lr schedules like WSD? Does the acceleration maintain?" **A4**: Thanks for the constructive suggestion. Following your recommendation, we conducted a **new experiment** using the WSD scheduler. As shown in [Fig.R5](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R5_wsd_web.pdf?v=f591fc57), Blockwise LR still achieves a 2x speedup over AdamW under this setting. ### Reference Due to space limit, see Reference in *Rebuttal to Reviewer w3eV*. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the rebuttal. The new experiments in the rebuttal seem convincing. A kind suggestion for the authors to revise the paper: In the future version, I suggest the authors align all the figures in the paper (which use a larger model size, ~1B) with the standard of the new figures in the rebuttal (which use a smaller model size, ~0.25B). 
The figures in the rebuttal seem more professional and convincing, perhaps due to the longer training durations or due to certain smoothing. In comparison, the current figures in the script seem rather preliminary and noisy. It is not easy to make fair judgments based on the figures in the current script. --- Reply to Comment 1.1.1: Comment: Thank you for your kind feedback. The figures in the rebuttal are more smoothed due to larger training steps and multi-node evaluation. By employing multi-node evaluation, we enlarge the data available for each evaluation, resulting in smoother curves. However, for the figures in the main paper, we follow the experimental settings of nanoGPT (Karpathy, A., 2022) and Sophia (Liu et al., 2024) and use only single-node evaluation. **In our future revision, we will follow your suggestion to align the figures in the main paper with the standard of figures in the rebuttal**. We hope our response has adequately addressed your concerns, and we would greatly appreciate it if you could reconsider the final assessment in light of the clarifications and resolutions we have provided. ### Reference Karpathy, A. nanoGPT. https://github.com/karpathy/nanoGPT. GitHub repository, 2022.
Summary: This paper presents the Sharpness Disparity Principle, which identifies a systematic difference in sharpness across different transformer components. Specifically, the authors find that normalization layers exhibit the highest sharpness, while embedding layers have the lowest, with other blocks lying in between. This pattern emerges early in training and persists throughout. Leveraging this observation, they propose Blockwise LR, a novel learning rate scaling strategy that assigns different LRs to different block types based on their sharpness. By integrating this strategy into AdamW, they achieve nearly 2x training speed-ups while maintaining (or improving) final loss values across different models (from 0.12B to 1.1B). Claims And Evidence: Most claims are supported clearly. But I have some concerns: - The results focus on smaller-scale pretraining (1.1B max). It remains unclear whether this strategy generalizes to massive-scale models. - The cause of this disparity is not deeply analyzed. While the paper suggests that parameter norms influence sharpness, a more in-depth theoretical analysis would be helpful. - The number of training steps is not that large. It would be better to observe over a longer time scale. Methods And Evaluation Criteria: Yes. Blockwise LR is well-motivated by empirical findings and aligns with existing work on learning rate adaptation. Ablation studies clearly show that increasing LR for norm layers is harmful, supporting the rationale behind the method. Cons: - Limited discussion on transferability to larger models. Will Blockwise LR still provide speedups at large scale? - Lack of comparisons to alternative optimization strategies. It would be useful to compare against layerwise LR tuning methods. Theoretical Claims: Yes. The paper provides some theoretical justification by analyzing blockwise sharpness via the Fisher Information Matrix. 
Experimental Designs Or Analyses: Pros: - Results are consistent across different datasets and model sizes. - Ablation studies show that modifying LR for Norm layers degrades performance. Cons: - Most experiments stop at 100K steps. Does the speedup persist in billion-step training? - The study focuses on pretraining loss but does not analyze downstream performance on reasoning/QA tasks. Supplementary Material: No. I do not have enough time to review the code material. Relation To Broader Scientific Literature: The work is well-grounded. Essential References Not Discussed: NA Other Strengths And Weaknesses: Would benefit from larger-scale testing and comparisons to other adaptive optimizers. Other Comments Or Suggestions: NA Questions For Authors: - What is the precision setting? Is it trained based on mix-precision? - Can this be combined with second-order optimizers like Sophia or Shampoo? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper and your appreciation. We'll try our best to address your questions. **Q1: Suggestion for larger-scale experiments.** **A1**: Thanks for this question. - We clarify that our largest model (1.1B) is large enough for the current dataset scale (OpenWebText and MiniPile), consistent with standard practices in the community (e.g., [Liu et al., 2023] trains models up to 0.77B on OpenWebText). - Our method scales well from 0.12B to 1.1B models, suggesting its potential scalability (see [Fig.R1](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R1_scaling_web.pdf?v=5c61cfe0)). We are currently extending our experiments to **2B models**; the results are on the way. - To further support the effectiveness, we conducted **additional experiments** on a larger dataset, C4. In [Fig.R2](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R2_scaling_c4.pdf?v=f2e12859), our algorithm consistently achieves lower terminal loss than AdamW across model sizes from 0.2B to 1B, consistent with previous findings. **Q2: Concerns about the deep cause of the sharpness disparity principle.** **A2**: Thanks for this question. - We clarify that we have provided a first attempt to analyze this sharpness disparity principle from the parameter norms, supported by experiments and discussion following the theorems. Moreover, the discussion in Lines 298–306 offers an intuitive explanation from the multiplicative structure of Transformer blocks. - A more rigorous theoretical analysis is beyond this paper’s scope. Even for simpler two-layer neural networks, the analysis of the parameter norms remains challenging [Du et al., 2018; Allen-Zhu et al., 2018]. We leave this as important future work. **Q3: Suggestion for training steps.** **A3**: Thanks for this question. - We clarify that our training steps are sufficiently large given current model and dataset sizes. 
For example, 100k steps on OpenWebText yields 480x1024x100k~**50 billion tokens**. According to Tab. 3 in [Hoffmann et al., 2022], most of our experiments *exceed the recommended token budget*. - Moreover, this training setup aligns with standard practice in the community, e.g., [Liu et al., 2023; D'Angelo et al., 2023]. - Following your suggestion, we conducted a **new experiment** by extending the training of 0.25B LLaMA from 50k/100k to 150k/300k steps. In [Fig.R3](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R3_025B_long.pdf?v=16de8e07), Blockwise LR still achieves lower terminal loss and is 2x faster than AdamW even at longer durations. **Q4: Concerns about downstream performance.** **A4**: Thanks for raising this concern. - We clarify that in LLM pretraining, downstream performance is widely known to correlate strongly with final pretraining loss and less so with factors like architecture or optimizer choice. See [Du et al., 2024] for a detailed analysis. Thus, the primary focus in LLM pre-training is to reduce the final loss as much as possible, under specific compute and data budgets. - In response to your suggestion, we **evaluated downstream performance**. In [Tab.R4](https://anonymous.4open.science/api/repo/SharpnessTable-Downstream/file/R4_evaluate.pdf?v=8dadc1ea), LLaMA trained with our algorithm outperforms the one trained with AdamW across all evaluated tasks. **Q5: Concerns about comparison with other optimization strategies or optimizers.** **A5**: Thanks for the constructive question. - We clarify that our Blockwise LR is the first successful blockwise LR method for Transformers. As noted in Remark 1.1, traditional layerwise LR methods--originally developed for MLPs and CNNs--have not translated successfully to Transformers. - Notably, Blockwise LR is compatible with various optimizers, e.g., AdamW and Adam-mini. 
Thus, instead of comparing with other optimizers, we follow your suggestions (in Q7) to evaluate Blockwise LR in combination with other optimizers (details in A7).

- Following your suggestion, we also conducted a **new experiment** using another popular strategy, the WSD scheduler [Hu et al, 2024]. In [Fig.R5](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R5_wsd_web.pdf?v=f591fc57), Blockwise LR still achieves a 2x speedup over AdamW in this setting.

**Q6: Question about the precision settings.**

**A6**: All the models are trained with BFloat16. We will clarify this in the revised version.

**Q7: Question about the compatibility with other optimizers.**

**A7**: Thanks for the interesting question. We followed your suggestion and combined Blockwise LR with another popular optimizer, Lion [Chen et al, 2023]. In [Fig.R6](https://anonymous.4open.science/api/repo/ICML-2025-Sharpness-Optimization/file/R6_lion_web.pdf?v=6932b09d), this combination achieves lower terminal loss and a 2x speedup over well-tuned Lion.

### Reference

Due to the space limit, see References in *Rebuttal to Reviewer w3eV*.
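For concreteness, the core mechanism of a blockwise learning rate (one multiplier per parameter block) can be sketched as follows. This is a minimal NumPy illustration with hypothetical block names, multipliers, and a plain gradient step; it is not the paper's actual grouping, schedule, or optimizer.

```python
import numpy as np

def blockwise_step(params, grads, base_lr, block_lr_scale):
    """One plain gradient step where each named parameter block gets its own
    learning rate: base_lr scaled by a per-block multiplier (default 1.0)."""
    return {
        name: p - base_lr * block_lr_scale.get(name, 1.0) * grads[name]
        for name, p in params.items()
    }

# Hypothetical block names and multipliers, purely for illustration.
params = {"embedding": np.ones(4), "attention": np.ones(4), "mlp": np.ones(4)}
grads = {name: np.full(4, 0.5) for name in params}
scale = {"embedding": 0.5, "attention": 1.0, "mlp": 2.0}

new_params = blockwise_step(params, grads, base_lr=0.1, block_lr_scale=scale)
print(new_params["mlp"][0])  # 1 - 0.1 * 2.0 * 0.5 = 0.9
```

In practice the same idea is typically wired in through an optimizer's per-parameter-group learning rates rather than a hand-written step like this.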
Transfer Q-Learning with Composite MDP Structures
Accept (poster)
Summary: The paper addresses transfer reinforcement learning (RL) under a new framework called the composite MDP, where transition dynamics consist of a low-rank "shared" component plus a sparse "task-specific" component. This setup reflects how different tasks can share a core set of dynamics while still varying in a limited number of ways. The authors first study single-task learning with composite MDPs. They present a UCB-based algorithm (UCB-Q-Learning) and provide regret bounds that extend standard results in linear and low-rank MDPs to this more flexible composite structure. Then the authors consider transfer learning across tasks, where the low-rank parts are shared between the source and target tasks, and only the sparse components differ. They introduce a transfer Q-learning algorithm (UCB-TQL) to leverage a previously learned source-task model. They prove that when the difference between the target task's and source tasks' sparse components is very small, the regret bound for the target task becomes nearly dimension-free w.r.t. the ambient dimension.

Claims And Evidence: The term **"low-rank"** in this paper appears to be used inconsistently, leading to ambiguity. In summary, I find three different interpretations of "low-rank" within the paper:

1. **"Low-rank MDPs"** - In my understanding, the notion of "low-rank" in **low-rank MDPs** compares the embedding dimension $d$ with the original large state space. As long as $d \ll |\mathcal{S}|$, the MDP should still be considered low-rank.
2. **"Transition core** $M^*$ **is no longer a low-rank matrix"** (Line 53, Right Column) - Here, "not low-rank" likely refers to the high-dimensional setting, where $p, q \gg N$.
3. **"Where** $L^*$ **is a low-rank incoherent matrix"** (Line 163, Right Column) - I did not find any further explanation of the **low-rank structure** of $L^*$, so its exact role seems unclear. Also, I wonder whether this structure contributes to solving the high-dimensional problem.
I find these differing interpretations make the discussion difficult to follow. For instance, the paper states:

> *"Similarly, methods built for low-rank MDPs fail in our context due to the absence of low-rank assumptions in $M^*$."* (Line 57)

However, even if $M^*$ does not satisfy a low-rank assumption, the model may still be a low-rank MDP in the sense of an embedding-based formulation. So why do methods for low-rank MDPs fail if the overall structure is still low-rank? I think clarifying these distinctions would improve the paper's readability and consistency.

Methods And Evaluation Criteria: The algorithm is a standard UCB-type algorithm and the evaluation criterion is the standard regret.

Theoretical Claims: I went through the proof of Theorem 3.6, which seems correct.

Experimental Designs Or Analyses: There are no experiments.

Supplementary Material: There is no supplementary material.

Relation To Broader Scientific Literature: The proposed composite MDP model is different from previous literature, but more discussion of the relationship with previous literature is required. Please see below and "Claims And Evidence".

Essential References Not Discussed: Composite MDPs also seem to be a special case of linear factored MDPs [1,2]. I think there should be some discussion of how these models are related.

[1] Sample-Optimal Parametric Q-Learning Using Linearly Additive Features.
[2] Model-Based Reinforcement Learning with Value-Targeted Regression.

Other Strengths And Weaknesses: Weakness: 1. Computational Efficiency of $K_\Psi$ in Large State Spaces. When the state space $\mathcal{S}$ is infinite or extremely large (which should be considered given the assumed high-dimensional setting), how can $K_\Psi$ be computed efficiently, even if $\Psi$ is known? Previous works, such as [1], typically store only the covariance matrix of the sampled features, making their approach computationally efficient. How does your method compare in terms of efficiency?
Question: 1. Clarification on Theorem 3.6. Could you elaborate more on Theorem 3.6? In Remark 3.7, you mention that the result matches previous bounds in terms of $d$, which have polynomial dependence on $d$, but your bound does not explicitly involve any polynomial dependence on $d$. Do $r$ and $s$ scale as poly$(d)$? This is crucial because you assume $p, q \gg N$; if the upper bound includes $O(d)$, the regret will scale superlinearly with $N$. If the result does not involve polynomial dependence on $d$, could you provide a high-level explanation of why your method can handle the high-dimensional setting, whereas low-rank and linear MDP methods fail?

[1] Provably Efficient Reinforcement Learning with Linear Function Approximation.

Other Comments Or Suggestions: Please add the Impact Statement section.

Questions For Authors: Please see Other Strengths And Weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your insightful comments! **Low-Rank:** We agree that clarifying this terminology is important for improving readability and avoiding confusion. In classical low-rank MDPs literature, "low-rank" refers to embedding-based models where the effective feature dimension $d$ is small compared to $|\mathcal{S}|$. In contrast, in our setting, we work in a high-dimensional regime with $p, q \gg N$, so $M^*$ is not globally low-rank. Prior methods developed for low-rank MDPs—which assume a small feature dimension—do not apply directly, since they cannot consistently estimate $M^\ast$ within $N$ trajectories. To resolve this issue, we assume $M^* = L^* + S^*$, where $L^*$ is of low-rank $r$ and $S^*$ is sparse. This structure enables consistent estimation in high dimensions and is key to our theoretical analysis. Crucially, this assumption enables us to design an estimator that achieves *minimax-optimal error rates* for recovering $L^\ast$, $S^\ast$, and consequently $M^\ast$, *independently of the ambient dimension* $d = \max (p, q)$. We have revised the manuscript to clarify the distinction between embedding-based low-rank MDPs and our structured high-dimensional setting. **Prior Work [1,2]:** Our model can be written in the form of [1] as $P(s' \mid s, a) = \phi(s,a)^\top \alpha(s')$ with $\alpha(s') := (L^* + S^*)\psi(s')$. This reduces to [1] when $S^* = 0$. Similarly, our model matches the linear factored MDP in [2] via a Kronecker formulation: $P(s' \mid s,a) = (\phi(s,a) \otimes \psi(s'))^\top \operatorname{vec}(L^* + S^*)$. While structurally related, a key difference lies in estimation. Our setting allows $p, q \gg N$, and our estimator achieves minimax-optimal error rates that do **not** scale with $d = \max(p, q)$. In contrast, [1,2] assume low-dimensional or identifiable parameter spaces and are not suitable for high-dimensional regimes. 
Our structured assumption is especially crucial in **transfer RL**, enabling effective knowledge sharing via $L^*$ while capturing task-specific deviations via $S^*$.

**Comp. Efficiency of $K_\psi$:** We agree this is important. When $|\mathcal{S}|$ is large or infinite, we compute $K_\psi$ using a Monte Carlo approximation: $ \hat{K}\_\psi = \frac{1}{m} \sum\_{i=1}^m \psi(s\_i)\psi(s\_i)^\top, \quad s\_i \sim \text{Unif}(\mathcal{S}). $ This is a standard approach in randomized numerical linear algebra [3] and is computed only **once** before online learning. Our method is thus comparable in efficiency to [1], which stores empirical covariances, but we use $K_\psi$ to build confidence regions specific to our model.

**Clarification on Theorem 3.6:** Our method handles the high-dimensional setting by explicitly exploiting the low-rank-plus-sparse structure of the transition core matrix $M^* = L^* + S^*$, where $L^*$ has rank $r \ll d$ and $S^*$ has sparsity $s \ll d$. This structure enables consistent estimation in regimes where $p, q \gg N$, and is key to our theoretical guarantees. We have updated Remark 3.7 to clearly state this point. Crucially, our estimation procedure achieves *minimax-optimal error rates* for $L^*$, $S^*$, and $M^*$ that are **independent of the ambient dimension** $d = \max(p, q)$. We do **not** assume that the rank $r$ or sparsity level $s$ scales with $d$; under Assumption 3.1, both can remain small (e.g., constant or logarithmic in $d$), ensuring that the estimation error bound does not grow with $d$ even in high-dimensional settings. Moreover, our transfer learning result is **explicitly free of $d$**, demonstrating that the benefit of structural transfer carries over without dependence on the ambient dimension.
Improvements using variance reduction or stronger structural assumptions are possible future work. A major contribution of our paper is in **transfer RL**: our low-rank-plus-sparse structure enables sample-efficient transfer, capturing shared structure via $L^*$ and task-specific deviations via $S^*$. Prior work like FLAMBE [Agarwal et al., 2020] does not handle this generality. We have added clarifications throughout the revised paper. We thank the reviewer again for your support and helpful comments, and we hope the improvements further affirm the strength and clarity of our submission. **References:** [1] Sample-Optimal Parametric Q-Learning Using Linearly Additive Features [2] Model-Based Reinforcement Learning with Value-Targeted Regression [3] Drineas & Mahoney, RandNLA: randomized numerical linear algebra, CACM 2016
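To make the composite structure $M^* = L^* + S^*$ and the Monte Carlo approximation of $K_\psi$ discussed above concrete, here is a small NumPy sketch. All dimensions, the toy feature set, and the sample size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r, s = 20, 15, 2, 5  # ambient dims, rank of L*, sparsity of S* (toy)

# Low-rank component L* = U V^T (rank r) plus sparse component S* (s nonzeros).
U = rng.standard_normal((p, r))
V = rng.standard_normal((q, r))
L = U @ V.T
S = np.zeros((p, q))
S.flat[rng.choice(p * q, size=s, replace=False)] = rng.standard_normal(s)
M = L + S  # composite transition core

# Monte Carlo estimate \hat{K}_psi = (1/m) sum_i psi(s_i) psi(s_i)^T,
# with s_i drawn uniformly from a finite toy state set.
psi = rng.standard_normal((1000, q))  # row i holds psi(s') for one state

def mc_K(features, m, rng):
    draws = features[rng.integers(0, len(features), size=m)]
    return draws.T @ draws / m

K_hat = mc_K(psi, m=500, rng=rng)
```

The point of the sketch is only that the two structural pieces are cheap to build and that the $q \times q$ matrix $\hat{K}_\psi$ is formed once from samples, independently of $|\mathcal{S}|$.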
Summary: This paper introduces a composite MDP framework combining low-rank shared dynamics and sparse task-specific variations, along with the UCB-TQL algorithm for transfer Q-learning. Theoretically, it establishes a dimension-free regret bound for the target task by leveraging structural similarities between tasks. While the work provides rigorous theoretical guarantees, empirical validation and comparisons with existing methods remain open questions. Claims And Evidence: Yes, the claims are generally supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes. The methods are largely appropriate for addressing the challenges of transfer reinforcement learning. Theoretical Claims: Yes. The theoretical claims are sound. Experimental Designs Or Analyses: This paper primarily focuses on theoretical analysis and lacks experimental validation. Supplementary Material: Yes, I have generally reviewed the main content. The supplemental materials primarily aim to provide more detailed derivations of the formulas. Relation To Broader Scientific Literature: The paper introduces a composite MDP framework combining low-rank shared dynamics with sparse task-specific components, and proposes UCB-TQL for transfer RL with dimension-independent regret bounds. Theoretical analysis demonstrates that UCB-TQL effectively exploits structural similarities while adapting to task variations, achieving improved sample efficiency over single-task RL through rigorous confidence region construction for sparse differences. Essential References Not Discussed: No, the essential references are discussed in the paper. Other Strengths And Weaknesses: Strengths: - This paper introduces a low-rank + sparse composite MDP to model shared dynamics and task-specific variations, addressing limitations of prior work that assumes purely low-rank structures. - It provides a very detailed theoretical analysis and proof. 
Weaknesses: - While theoretically sound, the paper does not include experiments to validate UCB-TQL’s performance on benchmarks or compare it with existing transfer RL methods, leaving practical efficacy unverified. - Assumes tasks differ only along sparse dimensions, ignoring more complex inter-task relationships. Real-world tasks often exhibit more complex deviations, limiting the framework’s generality. Other Comments Or Suggestions: Page 3, Definition 2.1: "probability transition model $P: \mathcal{S} \times \mathcal{A} \rightarrow \bigtriangleup(\mathcal{A})$" -> Should clarify $ \bigtriangleup(\mathcal{S})$ instead of $\bigtriangleup(\mathcal{A})$. Questions For Authors: Please see Weaknesses and Suggestions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Response to Reviewer fH8W** We sincerely thank the reviewer for the thoughtful feedback and recognition of our theoretical contributions. We address the two concerns raised below. --- **Q1: Lack of empirical validation of UCB-TQL** We appreciate the reviewer’s point. While this submission does not include empirical evaluations, our goal is to provide a *theoretical foundation* for transfer reinforcement learning (RL) in high-dimensional settings with structured heterogeneity. Specifically, UCB-TQL is the **first algorithm with provable regret guarantees** under the composite MDP model that includes *both low-rank and sparse deviations*, and achieves $\widetilde{\mathcal{O}}(\sqrt{e} H^5 N)$ regret that is **independent of ambient dimension**. This setting is motivated by real-world applications (e.g., autonomous driving, robotics, and healthcare), where tasks often share underlying dynamics but differ in subtle, structured ways. In such domains, the transition dynamics may be high-dimensional, and assuming *pure low-rank* or *dense similarity* across tasks can be too simplistic. The low-rank-plus-sparse decomposition we propose reflects this nuanced task structure and provides an *interpretable and tractable* way to model variation. Although we do not present experiments here, we see our work as a **theoretical counterpart** to recent empirical transfer RL methods. Our regret guarantees offer insights into *when and how transfer should help*, which is often unclear in empirical work. We believe our analysis will inspire future algorithms that combine empirical effectiveness with theoretical robustness, and we are currently implementing UCB-TQL in benchmark settings. We hope the community will view our contribution as laying the groundwork for **provably efficient transfer in high-dimensional RL**, and we are **excited to follow up** with empirical validation in future work. 
--- **Q2: Limitation of sparsity-based task differences** We fully agree that real-world tasks can exhibit more complex deviations than those modeled by sparse differences. Our current work makes a *structured but generalizable starting assumption*: tasks differ sparsely in transition dynamics after aligning shared core structure. This assumption enables us to design statistically and computationally efficient estimators, while still covering rich task classes. We appreciate the suggestion and have added discussion in the revised manuscript exploring possible **extensions**: - **Task-specific low-rank components**: One natural direction is to allow each task to have its own low-rank variation on top of a shared component. This introduces additional complexity but could capture broader relationships. - **Decomposable low-rank spaces**: Another extension is to learn a union of low-rank subspaces across tasks, leveraging techniques from subspace clustering and matrix factorization. - **Feature selection on $\Phi$ and $\Psi$**: When feature sets are known but high-dimensional, incorporating feature selection or structured sparsity could adaptively identify task-relevant components. - **Group-sparse or structured differences**: Beyond elementwise sparsity, our model could be extended to handle block or group-structured variations, which would better capture correlated shifts. We view our work as a first step toward unifying high-dimensional structure with transfer learning, and we are enthusiastic about these extensions. --- **Closing Remarks** We again thank the reviewer for the constructive comments. We believe our theoretical framework makes a timely and valuable contribution to the transfer RL literature, particularly in advancing the understanding of *when and how transfer helps* in high-dimensional environments. By addressing the above concerns, we hope to improve the clarity, flexibility, and practical relevance of our work. 
We respectfully hope the reviewer may consider increasing their score.
Summary: The paper introduces a Upper Confidence Bound method for Transfer Q-Learning for transfer RL settings where it is assumed that transition dynamics are assumed to decompose to a low-rank shared matrix and a sparse matrix that captures task specific dynamics. A key feature is that the method allows for high-dimensional transition dynamics. The paper follows on to provide theoretical guarantees on the UCB Q-learning applied to this structure for single-task learning as well as transfer. Transfer occurs by fixing the low-rank component and adapting the sparse component. Claims And Evidence: I believe that the key claims of the paper are verified through the theorems up to their correctness. Methods And Evaluation Criteria: This is a theoretical paper; no experiments were reported. Theoretical Claims: The paper provides several theorems. While I have gone through the proofs, and believe that they are correct, I would not say that I have verified them rigorously. Experimental Designs Or Analyses: This is a theoretical paper; no experiments were reported. Supplementary Material: - I looked at Section A.5 - The remaining Appendices are theoretical; I did not verify their correctness thoroughly. Relation To Broader Scientific Literature: The paper builds on top of Composite MDPs and Q-learning with UCB. Essential References Not Discussed: None to my knowledge. Other Strengths And Weaknesses: ### Strengths - Good literature review - Other than the weakness mentioned below, it is written well. ### Weaknesses - The notation in some places can be better explained. - There are no examples of experiments run with the proposed method to show how well the theory can be utilized in practice. Other Comments Or Suggestions: ### Section 2 - Could the authors be more explicit about the form of $P$. Is it a function $\mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0, 1]$? If so, I am not sure I understand the notation $[PV_{h+1}^\pi](s, a)$. 
### Section 3

- Assumption 3.1(i) - Please specify what $\mu$ and $r$ are.
- Equation 3 - Please state that $|| \cdot ||_F$ refers to the Frobenius norm (which I assume is what it means).

Questions For Authors:

1. line 154: What does the notation $[H]$ mean? I think it means the range $[0, H]$. If so, why not say that, or specify what the notation means?
2. line 191: Is the summation in $\mathbf{K}_\psi = \sum_{s'\in \mathcal{S}} \dots$ a mistake? I don't think the simplification in step 2 of Equation (1) works with this sum.
3. line 210: What does the notation $|| \cdot ||_{2, \infty}$ mean? Specifically, does it mean the 2-norm and $\infty$-norm, or all norms from 2 to $\infty$, or something else?
4. Could you give examples of where the specific 'low rank core transition with sparse task specific structures' arise? Could you also give counterexamples?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We thank the reviewer for their positive assessment and constructive suggestions.** We are especially grateful for the recognition of our theoretical contribution, which we believe offers a timely and foundational advancement in transfer reinforcement learning. Our work provides the *first provable regret guarantees* for transfer RL in high-dimensional settings where the transition dynamics exhibit structured heterogeneity. By modeling transitions as $M^* = L^* + S^*$—with $L^*$ low-rank and $S^*$ sparse—we capture realistic task variations (e.g., shared core dynamics with sparse task-specific shifts). This structure allows us to derive *minimax-optimal error rates* for estimating $L^*$ and $S^*$, and achieve regret bounds that are *independent of the ambient dimension* $d = \max(p, q)$. These insights lay a theoretical foundation that complements ongoing empirical advances and help answer the fundamental question: *when and how does transfer learning help in high-dimensional RL?* We have revised the manuscript to improve clarity in notation and address specific concerns raised: - **Section 2 – Notation of $\mathcal{P}$ and $\mathcal{P}V^\pi_{h+1}(s,a)$:** $\mathcal{P}$ is a linear operator mapping a function $V : \mathcal{S} \to \mathbb{R}$ to a function over $(s,a)$ via $\mathcal{P}V^\pi\_{h+1}(s,a) = \sum\_{s'} \mathcal{P}(s'|s,a) V^\pi\_{h+1}(s')$. We have clarified this in the updated text. - **Line 154 – $[H]$ Notation:** Yes, you are right. $[H]$ denotes the index set $\{1,2,\dots,H\}$. We have clarified this in the updated text. - **Line 191 – Summation in $K_\psi$ and Equation (1):** The equality in Equation (1) is valid. $K_\psi$ is defined as the sum over all $s' \in \mathcal{S}$ and is independent from the outer summation over $s' \in \mathcal{S}$ in Equation (1). The simplification extracts common factors based on our composite MDP formulation. 
- **Line 210 – $\|\cdot\|_{2,\infty}$ Notation:** This denotes the 2-to-infinity operator norm, i.e., the maximum $\ell_2$ norm of the rows. We've added this to the notation section for clarity. - **Equation (3) – Frobenius Norm:** $\|\cdot\|_F$ refers to the Frobenius norm and was defined in the supplementary material, now clarified in the main text. - **Assumption 3.1 – Meaning of $\mu$ and $r$:** Thank you for your careful reading! $\mu$ is the incoherence parameter (a constant > 1), and $r$ is the rank of the low-rank component $L^*$, now clarified in the main text. Regarding examples of our model, consider **personalized healthcare**: the low-rank component captures typical patient responses to treatments, while the sparse component models anomalies or individual-specific deviations. Other real-world domains include **recommendation systems** and **robotics**, where core dynamics are shared but task-specific noise or variation exists. A **counterexample** would be settings where task variation arises from structured transformations or complex nonlinear mappings—scenarios that go beyond additive sparse deviations and are natural directions for future work. We thank the reviewer again for their support and helpful comments, and we hope the improvements further affirm the strength and clarity of our submission. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. Regarding \textbf{line 191}, my understanding of your rebuttal is that the index $s'$ in line 191 is different to the index $s'$ in Equation (1). Wouldn't using a different notation for this make sense, or do you think it is clear from context? I will leave my score as is since I think my concerns other than the above have been addressed.
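As an aside for readers, the operator notation $[\mathcal{P}V](s,a) = \sum_{s'} \mathcal{P}(s'\mid s,a) V(s')$ clarified in the rebuttal above reduces to a simple tensor contraction in the tabular case. A toy NumPy sketch with made-up numbers:

```python
import numpy as np

# Toy tabular MDP: P[s, a, s'] is the probability of landing in s'
# from state s under action a; V is a value function over states.
rng = np.random.default_rng(1)
n_states, n_actions = 3, 2
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=-1, keepdims=True)  # rows sum to 1: valid transition kernel
V = np.array([1.0, 0.0, 2.0])

# [P V](s, a) = sum_{s'} P(s' | s, a) V(s'): a function of (s, a).
PV = P @ V
print(PV.shape)  # (3, 2)
```

The contraction over the last axis is exactly the expectation of $V$ under the next-state distribution for each $(s,a)$ pair.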
BARK: A Fully Bayesian Tree Kernel for Black-box Optimization
Accept (poster)
Summary: For Bayesian Optimization, the paper proposes to use tree-based functions as the basis for the Gaussian Process prior. The paper outlines the mechanics of this, including:

* Kernel definition
* Sampling MCMC method
* Acquisition definition

The paper then shows that the regression capabilities are reasonable, and experiments with BO over standard synthetic and applied blackbox-optimization benchmarks which mostly consist of mixed search spaces (i.e. containing categorical parameters). The conclusion is that BARK is fairly decent, sometimes optimal compared to previous standard GP baselines (LeafGP, SMAC, Entmoot, etc.)

Claims And Evidence: Generally yes, the paper did in fact satisfy its stated contributions in the intro, i.e.

* Proposing a tree-kernel based Bayesian regression method
* A computationally efficient method for training this model (during the log-likelihood maximization phase) and sampling
* Showing that it performs reasonably on Bayesian optimization benchmarks.

The only contribution that's missing, I feel, is motivating "why propose trees in the first place at all"? Please see the "Weaknesses" section for more details. I suspect that BARK would not do as well on a wide variety of continuous-only space benchmarks, and that's fine - but the authors need to make it more explicit where BARK truly shines (mixed spaces).

Methods And Evaluation Criteria: Yes, the paper uses standard benchmarking functions for BO, well known in the literature: Synthetic (TreeFunction, Discrete-Ackley, Rosenbrocks) and Applied (PestControl, CCOBench, etc.). Its regression datasets (UCI) are also well-known.

Theoretical Claims: Did not check carefully Section F (theoretical regret bounds).

Experimental Designs Or Analyses: Yes, the paper follows standard Bayesian Optimization evaluation procedures (e.g. running baselines, plotting best observed value over multiple seeded runs) - no red flags here.
Table 1 compares two tree-based regression methods (BART and BARK) but doesn't compare against other standard regression baselines (e.g. Euclidean-GP) - why not? Adding these results would establish much better why tree-based BO works better than Euclidean distance-based kernels.

Supplementary Material: I needed to verify the search spaces of the objectives, so I viewed Tables 3 and 4. I noticed that only TreeFunction contains a purely continuous search space, while all others are mixed. Like I said, I suspect that the tree-based kernel does indeed perform well on mixed search spaces, but the authors need to make this motivation more explicit. This can be boosted better with a basic 1D regression plot, as well as benchmarking on continuous-only functions - where I suspect BARK wouldn't do as well.

Relation To Broader Scientific Literature: Mixed search spaces have in general been fairly difficult to model with standard Euclidean-based GP kernels. This is because of continuity issues: it is difficult to capture the shape of the objective very well with a smooth model designed originally for continuous spaces. So this paper does make a solid contribution to this area of Bayesian Optimization.

Essential References Not Discussed: Not that I know of.

Other Strengths And Weaknesses:

# Weaknesses

While the paper discusses in detail the mechanics of tree-based regression modeling and how it would work in a GP setup, I'm still having a hard time understanding the fundamental conceptual benefits. I.e. what exactly motivated the authors to consider that using trees would be good for Bayesian Optimization (BO)? Is it some combination of:

* Tree-based functions are better at modeling categorical spaces (which, from experience, classic GP-BO methods struggle with)?
* The structure of the tree-based function (i.e.
being very piece-wise and not smooth) helps with modeling non-smooth functions better than using smooth functions as the basis (as a regular Euclidean GP would)? If this is the case, I would strongly urge the authors to motivate these more, instead of jumping into the mechanics of the BARK method. Perhaps also provide figures on how this new BARK-GP regresses on a 1D categorical / discrete function better than a RBF-GP? EDIT: My weaknesses have been resolved. The authors have made their contributions over mixed spaces much more explicit. Other Comments Or Suggestions: For now, I will propose a weak reject - mainly because of the core weakness as I mentioned. But I am fully willing to upgrade my score once the authors respond to this issue. Questions For Authors: Please see my overall weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful review, and for recognizing our contribution to mixed-space Bayesian optimization. > what exactly motivated the authors to consider that using trees would be good for BO We agree with the reviewer that we should better motivate BARK to ensure practitioners use BARK appropriately. Our revised paper will include the requested figures and the following discussion. Our greatest motivation for using trees is for modeling - and specifically optimizing over - mixed feature spaces. Purely continuous spaces are well-modeled by standard Euclidean GP kernels. Purely categorical spaces can be addressed with multi-armed bandit approaches. However, many real-world problems have mixed domains. Trees have strong modeling performance in mixed spaces. Moreover, we can define a MIP to optimize the AF exactly and we can formulate this MIP so that it is effectively solved with off-the-shelf software. The resulting BO procedure does not require approximations during the optimization step. Modeling with trees offers several additional benefits. First, we are able to model non-stationary functions, where the lengthscale changes throughout the domain. We demonstrate this behavior in [Rebuttal Figure 1](https://anonymous.4open.science/r/bark-rebuttal-8830/Fig\_1D\_toy\_continuous.pdf). This is especially desirable in high dimensions, where a locally short lengthscale will not harm BARK's global uncertainty quantification. Second, we assume prior correlation between values in categorical features due to the nature of splitting rules in trees. The indicator kernel typically used for categorical features with GPs (Ru et al., 2020) assumes that each value is independent. BARK is able to capture correlations between values, as demonstrated in [Rebuttal Figure 3](https://anonymous.4open.science/r/bark-rebuttal-8830/Fig\_1D\_toy\_categorical.pdf). Trees therefore have a more expressive prior on functions over categorical values. 
Some black-box functions are indeed better modeled by trees, as demonstrated in the MAX benchmark where a grid-based AF optimization is used for all methods. However, other benchmarks are better modeled with the assumptions of smoothness provided by Euclidean GP kernels. > Table 1... doesn't compare against other standard regression baselines Section 7.1 aims to show that BART and BARK have similar modeling abilities, and that our kernel perspective on BART still leads to strong regression performance. Since our only point is that BART and BARK are similar with respect to regression, we did not compare to a wide array of regression models. However, we agree that including RBF-GP would provide more context and the revised paper will include an extended version of the table: [Rebuttal Table 2](https://anonymous.4open.science/r/bark-rebuttal-8830/Tab\_Regression\_with\_GP\_RBF.pdf). Additionally, as noted in our reply to Reviewer zyg8, we will signpost the purpose of Table 1 more clearly. > I suspect that the tree-based kernel does indeed perform well on mixed search spaces, but the authors need to make this motivation more explicit. This can be boosted better with... benchmarking on continuous-only functions Thanks for helping us make our motivation more explicit. Indeed, we do not expect BARK to outperform an RBF kernel in continuous-only BO. As explained by the reviewer, these continuous-only BO problems may be smooth, and therefore have a higher prior probability under an RBF kernel. The reviewer is correct that our original submission focuses on mixed spaces where we expect BARK to be more suitable. We also agree that adding continuous-only benchmarks offers a more complete view. We will provide two continuous-only benchmarks in the revised paper: Hartmann (6D) and Styblinski-Tang (10D), which we show in [Rebuttal Figure 4](https://anonymous.4open.science/r/bark-rebuttal-8830/Fig\_continuous\_benchmarks.pdf). 
Despite BARK's non-smooth assumption, we still observe reasonable performance from BARK in this setting. > how this new BARK-GP regresses on a 1D categorical / discrete function To motivate using trees, our revised paper will include an example of regression on both a 1D discrete function and a 1D categorical function: [Rebuttal Figure 2](https://anonymous.4open.science/r/bark-rebuttal-8830/Fig\_1D\_toy\_discrete.pdf) and [Rebuttal Figure 3](https://anonymous.4open.science/r/bark-rebuttal-8830/Fig\_1D\_toy\_categorical.pdf). We will explain in the revised paper that BARK may not necessarily provide better regression for discrete, ordinal features (which are simply continuous features sampled on a grid) compared to Euclidean GPs, but that BARK provides a method of optimizing over such features without approximations. --- Rebuttal Comment 1.1: Comment: Thank you for the response, and my weaknesses have been resolved. The authors have made their contributions over mixed spaces much more explicit. Thus I upgrade my score.
Summary: The paper introduces a new tree-based surrogate model for use in Bayesian optimization. It extends prior work in tree-based regression, particularly BART, to be more suitable for acquisition function optimization and Bayesian optimization. The tree model is fully Bayesian, including MCMC over tree structures. The acquisition function is an integer program. The method performs well on real mixed-variable problems. ## update after rebuttal No change Claims And Evidence: The claims of the paper were well supported. Methods And Evaluation Criteria: The method was well motivated and very clearly described. There was a strong set of ablations and results for understanding model performance. The choice of benchmark problems was reasonable. Theoretical Claims: No Experimental Designs Or Analyses: The empirical evaluation was typical for a BO paper like this. Supplementary Material: Yes, all but F. Relation To Broader Scientific Literature: The framing with the broader scientific literature was well written and appropriate. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I found the ideas in the paper to be interesting, novel, and likely useful. The one major weakness in the paper is the complete lack of discussion of wall-time for BO. I expect this to be slow due to the combination of MCMC and integer programming (both of which tend to be slow). But I do not know if this is unusably slow or not, as the only wall-times given are for sampling, just for the regression problems and compared only to BART (Table 5 in the supplement). I expect to see total wall time for the full BO loop, compared with wall time for the other BO methods; and for that total wall time to be broken down into MCMC time vs. Gurobi time. I cannot recommend the paper very enthusiastically without that result as, based on what I see now, the method may not be practical. It seems very promising though. Other Comments Or Suggestions: Fig.
5 shows error bars but doesn't say what they are. The paper states that software for the model will be released. It seems that this software will depend on Gurobi, which is very expensive for many people. If the authors would like to maximize the impact of their work, it would be worth the effort to introduce an interface for using a free/open source solver. Questions For Authors: How does total BO wall time compare to all of the other baseline methods, and how much of that wall time is spent doing MCMC and how much is spent in Gurobi? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We kindly thank the reviewer for their thoughtful review, and for recognizing the potential of our method. > lack of discussion of wall-time for BO As mentioned in Appendix G.1, we limit each MIP optimization to 100 seconds. However, we agree that a more complete comparison is highly relevant. Please see requested results in [Rebuttal Figure 5](https://anonymous.4open.science/r/bark-rebuttal-8830/Fig\_optimization\_time.pdf) for a selection of 3 benchmarks. We will also include this figure and the following discussion in the main paper: BARK is slower than competing methods, taking ~50s to fit the model, and 100s to optimize. This is due to the combination of the expensive MCMC procedure, and the large optimization formulation. This is comparable to the fitting time for BART, and the time taken to evaluate BART on a grid of 2^14 points. BARK is best applied in BO settings where the objective is expensive to evaluate (e.g. taking at least several minutes, or having a large associated financial cost). Note that the PestControl and MAX (material design) benchmarks reflect examples of such black-box functions. For settings where experiments are cheap and/or function evaluations are quick, we recommend alternate methods. > Fig. 5 shows error bars but doesn't say what they are. Thanks for noting our omission: the revised paper will state explicitly in the figure caption that the bars are the 25th and 75th percentile of regret achieved across the 20 runs. > this software will depend on Gurobi, which is very expensive for many people Gurobi does provide a free academic license, but we recognize that requiring a licensed software will limit the reach of this work. Gurobi is a very strong optimizer for MIP problems, and open source solvers would take longer to maximize the AF. However, we would still be interested in providing such an interface, especially if there is community interest. 
--- Rebuttal Comment 1.1: Comment: Thanks, 150s is pretty reasonable for a lot of tasks and makes this a useful method.
Summary: The paper proposes a combination of forest kernel GPs and Bayesian tree models (BART) specifically tailored for use in Bayesian Optimisation (BO). The main idea is to directly optimise the expected acquisition function values over the posterior distribution of the kernel parameters. This is done through posterior samples of those parameters obtained via a Metropolis Hastings Markov Chain Monte Carlo (MCMC) algorithm. The final optimisation of the mean acquisition function can apparently be done efficiently via a mixed integer programming (MIP) approach. Compared to a BART-BO approach where posterior sampling is performed to obtain individual trees, the proposed approach is said to lead to a much improved sample efficiency per MCMC sample. Claims And Evidence: The main claim is that the proposed method tends to outperform similar methods in their blackbox optimisation performance. The claim makes sense conceptually and experimental evidence is provided. Methods And Evaluation Criteria: The optimisation benchmark problems are standard in the field. The choice of UCI datasets, which are not used for optimisation, to investigate the model fit is questionable though, as the main objective of the paper is to explain differences in optimisation performance. Here the paper would benefit from a more stringent formulation of the objectives (hypotheses to be tested) of the experimental evaluation. Theoretical Claims: There is an interesting theoretical claim about the convexity of the expected acquisition function value at query points in terms of predictive mean and variance. This and other claims might be proven in the appendix, which I did not have time to check. Experimental Designs Or Analyses: Overall the experiments cover a reasonable range of test cases and the results look sensible. Specifically for the comparison to BART, though, I wonder whether the comparison needs to be refined.
This is because we principally allow for large computation times of BO methods. Hence, the key claim of an improved efficiency seems to require some quantification in terms of wall clock computation time. Lower sample efficiency alone is not necessarily a big disadvantage if one can afford many samples, since the complexity per sample can differ. Supplementary Material: I did not check any supplementary material. Relation To Broader Scientific Literature: The paper does an excellent job in surveying the key literature in an integrative and insightful way. Essential References Not Discussed: None to my knowledge. Other Strengths And Weaknesses: The paper makes a laudable effort to present compactly yet accurately the extensive amount of ideas, known and novel, that are required to understand the subject matter. Yet, to my perception, it is still hard to follow and to see comprehensively all the differences from alternative approaches. Could it be useful to have a hierarchical presentation of a general form of the model used in this and prior works, and from there develop their differences and commonalities? The MIP approach for acquisition function optimisation seems to be very interesting in its own right. Unfortunately, there seems to be no room to discuss this in the main paper. Other Comments Or Suggestions: See other strengths and weaknesses. It would really be useful to see more compactly the various parameters, especially the kernel parameters that are ultimately sampled. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review, and for identifying the strengths of our work. > The choice of UCI datasets Section 7.1 demonstrates that the BARK model performs similarly in regression to the BART model. BART is typically used in regression settings with tabular datasets, so the purpose of using these UCI benchmarks is to be fair to BART. This section demonstrates that the improved performance of BARK in BO is due to the acquisition function optimization, and not any superior modeling capability. We agree with the reviewer that we should state the purpose of Section 7.1 more clearly: the revised paper will clearly signpost our objective of comparing regression tasks where BART is already known to be strong. > the convexity of the expected acquisition function Thebelt et al. (NeurIPS, 2022) show the convexity of the nonlinear part of the acquisition function: see the development in their Section 4. The acquisition function (AF) itself is not convex in the input space, but only with respect to the mean and variance, which leads to improved performance with mixed-integer solvers. > for the comparison to BART I am wondering though whether the comparison does not have to be refined... the key claim of an improved efficiency seems to require some quantification Many BART samples are required to compute the predictive variance, e.g. experiments in Chipman et al. (2010) use at least 1000 function samples. We agree with the reviewer that, in regression, this lower sample efficiency may not be a disadvantage. However, the high number of samples means that it is infeasible to formulate an optimization problem over the BART AF, as the number of variables and constraints required to encode these tens of thousands of trees would be too great. The large number of samples therefore necessitates a grid-based approach to optimizing the AF. Our revised paper will clarify the relevance of the 'sample efficiency' claim in Section 5.2.
Furthermore, the required density of the grid on which BART is evaluated increases exponentially with the problem dimension, and so it is not feasible to allow for a sufficiently dense grid in high dimensional problems. The grid size used in our experiments was chosen to match the wall clock time of the BARK optimization (see [Rebuttal Figure 5](https://anonymous.4open.science/r/bark-rebuttal-8830/Fig\_optimization\_time.pdf), where evaluating BART at 2^14 gridpoints takes the same wall-clock time as the BARK optimization). > Could it be useful to have a hierarchical presentation of a general form of the model used in this and prior works Thank you for the suggestion - we will add to the paper a summary of the existing literature (see [Rebuttal Table 3](https://anonymous.4open.science/r/bark-rebuttal-8830/Tab\_literature\_comparison.pdf)), which highlights key similarities and differences between the various BO methods covered in the literature review. > no room to discuss [the MIP approach] in the main paper Thebelt et al. (NeurIPS, 2022) develop a detailed discussion of the MIP. We have extended the optimization model, and will clarify these improvements in Appendix G. The contributions we will clarify in the revised version include: extending the MIP to include multiple tree-structure samples and linking the kernel samples in the (approximate) integrated AF. > see more compactly the various parameters Thanks for the nice idea, which helps us improve clarity. We will add this summary of the BARK parameters (see [Rebuttal Table 1](https://anonymous.4open.science/r/bark-rebuttal-8830/Tab\_BARK\_parameters.pdf)) to the paper.
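As background for readers on the tree-agreement ("forest") kernel idea that BART/BARK-style models build on, here is a minimal illustrative sketch. It is not BARK itself: the trees below are sampled i.i.d. at random rather than via MCMC over tree structures, the MIP acquisition optimization is not shown, and all data and hyperparameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tree(dim, depth=3):
    """A random axis-aligned tree, encoded as (feature, threshold) splits."""
    return [(int(rng.integers(dim)), float(rng.uniform())) for _ in range(depth)]

def leaf_id(tree, x):
    """Encode the left/right decision at each split as an integer leaf index."""
    return sum(int(x[f] <= t) << i for i, (f, t) in enumerate(tree))

def forest_kernel(X1, X2, trees):
    """k(x, x') = fraction of trees in which x and x' fall in the same leaf."""
    K = np.zeros((len(X1), len(X2)))
    for tree in trees:
        l1 = np.array([leaf_id(tree, x) for x in X1])
        l2 = np.array([leaf_id(tree, x) for x in X2])
        K += l1[:, None] == l2[None, :]
    return K / len(trees)

def gp_posterior(X_tr, y_tr, X_te, trees, noise=1e-2):
    """Standard GP posterior mean/variance, with the forest kernel as Gram matrix."""
    K = forest_kernel(X_tr, X_tr, trees) + noise * np.eye(len(X_tr))
    Ks = forest_kernel(X_te, X_tr, trees)
    Kss = forest_kernel(X_te, X_te, trees)
    mean = Ks @ np.linalg.solve(K, y_tr)
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, var

# A non-smooth (step) target: a case where tree kernels can be a better prior
# than a smooth Euclidean kernel.
X_train = rng.uniform(size=(30, 2))
y_train = (X_train[:, 0] > 0.5).astype(float)
X_test = rng.uniform(size=(5, 2))
trees = [sample_tree(dim=2) for _ in range(200)]
mean, var = gp_posterior(X_train, y_train, X_test, trees)
```

By construction k(x, x) = 1, and the Gram matrix is positive semidefinite because it averages leaf co-membership indicator kernels; categorical features could be handled by replacing the threshold split with a subset split.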
Contract Design Under Approximate Best Responses
Accept (poster)
Summary: This paper studies a repeated contracting game between a principal and an agent. The principal offers contracts to the agent, who has the choice to accept the contract or not. The paper assumes a hidden action, on which the outcome (to which the principal's and the agent's utilities are tied) depends. Instead of the classic setup where the agent best responds to the contract, the agent here may deviate by a quantity $\delta$ from his best action. In that setup, algorithm 1 gives a way to compute the "optimal $\delta$-robust contract". Then, it is assumed that the agent's type changes over time in a repeated game. For that scenario, the principal discretizes the space of contracts as an $\epsilon$-grid of the dimension-$m$ hypercube. The UCB bandit algorithm is used, the set of actions being the grid of the hypercube. From my understanding, $\delta$ is known to the principal (otherwise algorithm 1 cannot run because of equations (6a), (6b), (6c)). Thus, since it is assumed that the agent picks the worst action for the principal within the set $A^\delta$, the term "robustness" seems an abuse of language. The principal knows the agent's deviation, which makes it way easier to tackle and not very far from analyzing a best response. Claims And Evidence: I am concerned about Remark 2, which states the following: "Our algorithm can be extended to deal with settings in which the agent can play any δ-best response within the set $A_{\theta_t, \delta}(p_t)$. In such a setting, the principal’s utility is not fully stochastic. However, our Algorithm 2 can be easily extended by instantiating an adversarial no-regret algorithm instead of UCB1." An important difficulty would come from dealing with agents who potentially play any action in the set $A^\delta$ (for $\delta$ being not close to $0$, I am not even sure that a solution exists). I would appreciate a clear algorithm and a proof for this remark, which seems overclaimed to me.
As I mentioned above, the term "robustness" for this setup is also overclaimed. Methods And Evaluation Criteria: There are no experiments in this paper, which is absolutely not a concern for me. The paper is theoretical and is more about proposing a framework to study contract design than about offering possibly practical implementations. Theoretical Claims: I checked the proofs for the theoretical claims and they seem correct to me. My concern is, on one side, about the lack of mathematical precision. Typically, UCB1 is called without any reference. Of course, I can understand that it is a reference to the Upper Confidence Bound algorithm, but I would appreciate seeing it written down, especially when it comes to analyzing it. I feel like the proofs and techniques are very close to Zhu et al. 2023, except that they are made easier. The fact that the agent chooses the worst action for the principal within the set $A^\delta$ does not change the way the analysis can be done. At the end of the day, it is equivalent in the analysis (at least for the learning part) to an agent choosing the best action. I am not convinced by the technical novelty of the theoretical claims. Experimental Designs Or Analyses: NA. Supplementary Material: I appreciate the fact that the proofs are given in the supplementary material. However, I would have appreciated it if the authors had taken advantage of the extra space to give clear and rigorous mathematical explanations. Relation To Broader Scientific Literature: I am concerned about the novelty of the paper as compared to Zhu et al. 2023. The claimed advantages are the following: - apparently the method by Zhu et al. 2023 needs a more complex discretization, which makes "the approach by Zhu et al. (2023) challenging to be employed in practice compared to ours". The current state of this line of work, with a UCB over an action set of size $T^{1/m+1}$, seems to me a sufficient burden for it to remain a very theoretical field right now.
If the authors believe that the practical implementation is a significant advantage of their method, then I would have appreciated experiments or at least straight applications clearly stated. - second, the method in the paper "does not require apriori knowledge of the principal’s reward, which is instead required by the algorithm of Zhu et al. (2023)." I agree that it can be of some interest but to me, it is a very marginal advantage. Essential References Not Discussed: Yes. The very classic reference in contract theory could be mentioned: - Laffont and Martimort, The theory of incentives: the principal-agent model. There exists a literature on robust contract design, such as - Yu and Kong, Robust Contract Designs: Linear Contracts and Moral Hazard - Miao and Rivera, Robust contracts in continuous time but the only reference given is: - Carroll, Robustness and linear contracts. Other Strengths And Weaknesses: My first concern is the assumption on the agent's behavior. The paper assumes that the agent picks the principal's worst action within the set $A^\delta$. This assumption does not make any sense from a rational agent's perspective, does not exist in the literature and seems to be there only to allow the proofs to work. The learning part of the paper seems very weak to me. It does not go beyond running UCB on a discretized hypercube to approximate a very specific notion of regret. Also, the last section does not provide anything more than reminding the hidden type setup of Zhu et al. 2023, then proposing a discretization of the hypercube and running an UCB on it (similar techniques were already proposed). Hence, despite the pros and cons one could find in the paper, I do not believe that it is a good fit for ICML, which is a machine learning conference. The capacity of the method to handle any $\delta$ best-response seems overclaimed. I appreciate the problem of designing robust contracts, which seems very interesting to me. 
However, I believe that the setup and the approach are poorly formulated and do not bring additional value as compared to the existing state-of-the-art. Other Comments Or Suggestions: The regret definition in part 5 should be discussed more, since it is not obvious. Questions For Authors: Could you compare the technical novelty of your approach to Zhu, B., Bates, S., Yang, Z., Wang, Y., Jiao, J., and Jordan, M. I., The sample complexity of online contract design? Could you explain how it makes a difference to assume that the agent picks the worst action for the principal in $A^\delta$ as long as $\delta$ is known? I am not sure it makes it a harder problem as compared to a best-responsive agent. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We believe there is misunderstanding on the main contributions of our paper and the challenges in obtaining efficient optimization algorithms. - **Computing robust equilibria is much harder than standard one**. The Reviewer claims: “The principal knows the agent's deviation, which makes it way easier to tackle and not very far from analyzing a best response.” We disagree with this statement, as it contradicts known results for similar bilevel problems, such as Stackelberg games and Bayesian persuasion, where computing an equilibrium in the best-response case is computationally easy, but hard for worst-case robust approximate best-response. - **Main contribution of our paper**. The Review seems mostly focused on the learning part, while it seems to ignore the more “surprising” computational results. As should be clear from our writing (see, e.g., the abstract), the learning part is not the main contribution but rather a valuable addition. While we agree with the Reviewer that the learning part alone is not enough for ICML, we hope the Reviewer would agree that it includes nice extensions and simplifications with respect to the approach by Zhu et al. - **On agent’s behavior**. The Reviewer claims: “The paper assumes that the agent picks the principal's worst action within the set $A^\delta(p)$. This assumption does not make any sense from a rational agent's perspective, does not exist in the literature and seems to be there only to allow the proofs to work.” **We respectfully, but strongly, disagree on this statement.** First, the assumption is not intended to be predictive of the agent's behavior: rather, it is a worst-case perspective based on the fact that **the agent's behavior is unpredictable within a small $\delta$ utility difference** (e.g., due to noise in the model parameters or the agent's perception of their own utility). In such a scenario, the principal adopts a worst-case approach by computing a robust contract. 
In this way, the utility the principal secures, OPT($\delta$), is a lower bound on the actual utility the principal will obtain. Indeed, **this robust equilibrium model is not our own invention**; it has been previously proposed and studied in both Stackelberg and Bayesian persuasion settings (Gan et al., 2023; Yang & Zhang 2024). **Re**: *"I am concerned about Remark 2.."* The Reviewer’s observation is right: if the agent is allowed to break ties in any way when indifferent among multiple $\delta$-best responses, then an optimal contract for the principal may *not* exist. However, Remark 2 is concerned with the learning part, where the goal is *not* to compute an optimal contract, but rather to attain no-regret against an optimal $\delta$-robust contract. Notice that such a contract always exists, given the correctness of Algorithm 1. Thus, non-existence is *not* an issue. Intuitively, the extension of Algorithm 2 informally described in Remark 2 has to deal with the fact that the feedback actually observed by the algorithm is *not* about the $\delta$-best response that minimizes principal’s utility, but about some other $\delta$-best response instead. This requires the adoption of an adversarial regret-minimization algorithm, but it intuitively does *not* worsen the regret against an optimal $\delta$-robust contract, since the observed $\delta$-best responses cannot be worse than the one played under a $\delta$-robust contract.
While we agree that our approach is still far from practical applications, it should be acknowledged that a simpler approach and analysis are always beneficial. **Re**: *"Could you explain how it makes a difference to assume that the agent picks the worst action for the principal in $A^\delta$ as long as $\delta$ is known? I am not sure it makes it a harder problem as compared to a best-responsive agent."* Computing a robust contract when $\delta$ is known still poses several challenges. For instance, in Stackelberg games, computing a robust commitment is computationally intractable even when $\delta$ is known. This is because, unlike the non-robust case, the principal’s utility function is piecewise linear over an exponential number of regions (see, for instance, the hard instances in the reduction of Gan et al. (2023)). This contrasts with classical contract design, where the utility is piecewise linear over $n$ regions (one per action). It is only thanks to our analysis, which exploits the particular structure of contract design (not available in other bilevel problems), that we obtain a polynomial-time algorithm. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed and interesting response. Concerning "Computing robust equilibria is much harder than standard one.", I still do not agree with the authors. In their setup, the agent's action is the $\delta$-worst case (see line 182), while the meaning of "robust" encompasses any action for which the loss of utility is between 0 and $\delta$, especially in Stackelberg Games or Bayesian Persuasion. Typically, the authors refer to Gan et al. 2023. In their paper, the agent's $\delta$-approximate best-response is the **set** $$\{\, j : u(x,j) > \max_i u(x,i) - \delta \,\}$$ for a principal's action $x$, and the agent is assumed to pick **any** action in this set. It makes a major difference between both setups and definitely facilitates the analysis in the paper here.
It is linked with the discussion "On agent’s behavior...". The authors answer "the agent's behavior is unpredictable within a small $\delta$ utility difference": although it is written in the paper (line 182), "the agent selects a $\delta$-best response that minimizes principal’s expected utility, namely an action $a^\delta(p) \in A^\delta(p)$ such that $a^\delta(p) \in \arg\min_{a \in A^\delta(p)} F_a \cdot (r − p)$." It thus means that the agent's behavior is determined and fixed within the set of $\delta$-deviations. To me, the proof of the statement given in Remark 2 would be interesting, and I still think that it is a pity that the authors did not state it as a theorem. The last response "For instance, in Stackelberg games, computing a robust commitment is computationally intractable even when $\delta$ is known." is unsatisfying in the sense that the agent's behavior in this paper is **not** what is generally assumed by robust in Bayesian Persuasion or Stackelberg games. I really appreciate the time that the authors took to answer my questions, but a lot of issues remain unsolved to me. Therefore, I maintain my grade. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for taking the time to read our responses. We think that the Reviewer misunderstood the robust equilibrium definition of Gan et al. (2023). Indeed, our robust equilibrium definition is **exactly the same** as that of Gan et al. (2023), and we are **absolutely certain** about this. In fact, while you pointed out Equation (2) in Gan et al. (2023) (https://arxiv.org/abs/2304.14990), just a few lines below this equation, they formally defined robust Stackelberg equilibrium, where Equation (3) clearly indicates that the follower selects a worst response for the leader from the $\delta$-best response set: $j^* \in \arg\min_{j \in \mathrm{BR}_\delta(x^*)} u_l(x^*, j)$. Indeed, this follows exactly the rationale that the follower may choose any action in the $\delta$-best response set.
Since the exact choice is unpredictable, the worst-case perspective is applied (as in Equation (3) of Gan et al. (2023)). Notice that this is exactly what we say at line 182 when we say that the agent selects $a^\delta(p) \in \arg\min_{a \in A^\delta(p)} F_a \cdot (r - p)$. We’d therefore like to request the Reviewer to kindly review the definition in Gan et al. (2023) and ours. In fact, we’d also like to know your thoughts on the following: if Gan et al. (2023) did not use the worst action in the $\delta$-best response set in the definition, then based on what is the leader’s payoff in a robust equilibrium defined? Notice that an exact payoff must be defined in order to obtain a formal optimization problem. We invite the Reviewer to discuss with the other Reviewers in order to have an additional opinion on the correctness of the definition.
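To make the definition under discussion concrete, here is a small sketch of the worst-case $\delta$-best response used in both the paper (line 182) and Equation (3) of Gan et al. (2023): among all actions within $\delta$ of the agent's best utility, the one minimizing the principal's utility is selected. All instance numbers (F, r, c, p) are hypothetical toy values of my own, not taken from the paper.

```python
import numpy as np

def worst_case_delta_best_response(F, r, c, p, delta):
    """Among the delta-best responses A^delta(p) (strict inequality, as in the
    paper), return the action minimizing the principal's expected utility."""
    u = F @ p - c                                  # agent's utility per action
    candidates = np.where(u > u.max() - delta)[0]  # the set A^delta(p)
    principal = F @ (r - p)                        # principal's utility per action
    return int(candidates[np.argmin(principal[candidates])])

# Toy instance: 3 actions, 2 outcomes (hypothetical numbers)
F = np.array([[0.8, 0.2], [0.4, 0.6], [0.1, 0.9]])  # outcome distributions
c = np.array([0.00, 0.12, 0.30])                    # action costs
r = np.array([0.0, 1.0])                            # principal's outcome rewards
p = np.array([0.0, 0.4])                            # contract (payments)

# The exact best response is action 1; with delta = 0.05, action 0 is also a
# delta-best response and is worse for the principal, so it is selected.
a_robust = worst_case_delta_best_response(F, r, c, p, delta=0.05)   # -> 0
a_exact = worst_case_delta_best_response(F, r, c, p, delta=1e-9)    # -> 1
```

The example shows why the worst-case selection matters: shrinking $\delta$ collapses the candidate set back to the exact best response, matching the rebuttal's point that the problem nearly reduces to the classical one as $\delta$ tends to zero.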
Summary: This paper studies optimal contract design under approximate best response agents in hidden-action principal-agent games. First, they propose an efficient algorithm to compute an optimal contract. They also show that the principal's utility is $\delta$-close to the optimal contract under best response agents. Finally, the paper relaxes the full knowledge assumption and devises a no-regret learning algorithm. To design an efficient algorithm, they first show that the parameter space of optimal $\delta$-robust contracts can be formulated by a collection of disjunctive constraints (2a) and use linear programming to solve it. The proof idea of $\delta$-closeness in Proposition 1 is similar to Zhu et al. 2023. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Mostly Experimental Designs Or Analyses: NA Supplementary Material: Proof of Proposition 1, and skimmed through Theorem 2. Relation To Broader Scientific Literature: The paper considers optimal contract design under approximate best response agents. If the agent is best responding, the optimal contract can be solved by a simple LP. Approximate best response introduces an additional layer of complexity. Essential References Not Discussed: Nothing I am aware of. Other Strengths And Weaknesses: ## Strengths - The results are interesting in the context of principal-agent games under approximate best response, e.g., compared to Gan et al. 2023, as the paper shows the hidden-action principal-agent game is *smooth* in the principal's objective and also admits an efficient algorithm - The writing is good. ## Minor issues The comparison to Zhu et al. is not very clear to me. - Can the simple discretization apply to the non-robust setting? - Can Algorithm 2 take $\delta=0$? - Why does the regret definition $\mathcal{R}_T(\mathcal{C})$ coincide with Zhu et al.'s? If $\delta\neq 0$, the second part is an approximate best response.
Other Comments Or Suggestions: NA Questions For Authors: Can you elaborate more on which part of the results in section 5 can be applied to the non-robust setting ($\delta = 0$)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Re**: *"Can the simple discretization apply to non-robust settings?"* Yes, the simple discretization can be employed in the non-robust version of the problem. Intuitively, it is sufficient to follow the same steps as in the proof of Theorem 2, with $\delta$ going to zero. **Re**: *"Can algorithm 2 take $\delta=0$?"* No, when $\delta=0$ the $\delta$-best responses are not well defined. (Please refer to the first equation in Section 2.2.) **Re**: *"Why does the regret definition $\mathcal{R}_T(\mathcal{C})$ coincide with Zhu et al's? …."* The Reviewer is right; we will fix this in the final version. Only the baseline (i.e., OPT) in that regret definition coincides with that considered by Zhu et al. (2023). However, when $\delta$ is approximately zero, the two regret definitions essentially coincide. Indeed, as $\delta$ tends to zero, the problem nearly reduces to the non-robust one (see Proposition 1). **Re**: *"Can you elaborate more on which part of the results in section 5 can be applied to the non-robust setting?"* As previously discussed, when $\delta$ is approximately zero, our problem essentially coincides with that of Zhu et al. (2023). Thus, our results can be extended to the non-robust problem. However, this requires addressing some technical details, since when $\delta = 0$, the agent's best response regions do not coincide with those in the non-robust case due to the strict inequality “>” in the definition of $A^\delta(p)$.
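To illustrate the "simple LP" point raised in this review for the non-robust case (a best-responding agent), here is a sketch of the classical per-action LP: fix the action to induce, minimize the expected payment subject to incentive compatibility, then enumerate actions. All instance numbers are hypothetical, and this is not the paper's Algorithm 1, which additionally handles the disjunctive $\delta$-robustness constraints.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_contract_for_action(F, r, c, a_star):
    """Classical (non-robust) LP for a fixed induced action a_star:
    maximize F[a_star] @ (r - p), i.e. minimize F[a_star] @ p, subject to
    IC: F[a_star] @ p - c[a_star] >= F[a] @ p - c[a] for all a, and p >= 0."""
    n, m = F.shape
    others = [a for a in range(n) if a != a_star]
    # IC rearranged: (F[a] - F[a_star]) @ p <= c[a] - c[a_star]
    A_ub = np.array([F[a] - F[a_star] for a in others])
    b_ub = np.array([c[a] - c[a_star] for a in others])
    res = linprog(F[a_star], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m)
    if not res.success:  # inducing a_star is infeasible
        return None, -np.inf
    return res.x, float(F[a_star] @ (r - res.x))

# Toy instance: 2 actions, 2 outcomes (hypothetical numbers)
F = np.array([[0.9, 0.1], [0.3, 0.7]])  # outcome distribution per action
c = np.array([0.0, 0.2])                # action costs
r = np.array([0.0, 1.0])                # principal's outcome rewards

contracts = [optimal_contract_for_action(F, r, c, a) for a in range(2)]
p_opt, value = max(contracts, key=lambda t: t[1])
```

Enumerating over actions and solving one LP each mirrors the "first fixing the optimal arm ... and solving the induced LP" structure summarized in the review; the $\delta$-robust version replaces the IC constraints with the paper's disjunctive constraints (2a).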
Summary: The paper studies hidden-action principal‐agent problems where a principal designs a contract (an outcome‐dependent payment scheme) to incentivize an agent to take actions that are in favor of the principal. Unlike traditional models assuming the agent always plays an exact best response, here the agent may choose any action that is within a $\delta\in(0, 1)$ suboptimal range. - The authors derive upper and lower bounds on the maximum utility the principal can achieve with robust contracts, characterizing the "price" the principal pays (in terms of utility loss) for robustness as a function of $\delta$. Notably, these bounds do not depend on the inducibility gap (a parameter common in Stackelberg games). - The paper presents a polynomial-time algorithm to compute an optimal $\delta$-robust contract by first fixing the optimal arm and a $\delta$-robust arm, and solving the induced LP. - The work extends to an online learning framework where the principal does not know the agent’s underlying parameters (costs, types) and shows a $T^{1-1/(2(m+1))}$ regret bound. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: NA Supplementary Material: NA Relation To Broader Scientific Literature: The paper addresses an interesting problem of how to learn a $\delta$-robust contract from both computational and online learning perspectives. Previous work Zhu et al. 2023 focuses on a more general Stackelberg game setting, where many of the results can be further refined for the problem of contract design due to the specialty of the (linear) problem structure. Zhu, Banghua, et al. "The sample complexity of online contract design." arXiv preprint arXiv:2211.05732 (2022). Essential References Not Discussed: No Other Strengths And Weaknesses: The analysis is novel and the paper is well written and easy to follow.
Other Comments Or Suggestions: No further suggestions Questions For Authors: The online learning setting assumes the follower's action to be invisible, rendering the follower's response model a "black box", and thus an exponential dependency is unavoidable. My question is: if the follower consistently takes the worst $\delta$-suboptimal action (from the leader's perspective), and the leader can observe the follower's taken action, can the leader leverage this adversarial behavior to gain more information about the follower's cost structure? If so, could this lead to an improved regret bound compared to the standard setting? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Re**: *"The online learning setting assumes the follower's action to be invisible, rendering the follower's response model a "black box" and thus an exponential dependency is unavoidable. My question is, If the follower consistently takes the worst $\delta$-suboptimal action (from the leader's perspective), and the leader can observe the follower's taken action. Can the leader leverage this adversarial behavior to gain more information about the follower's cost structure? If so, could this lead to an improved regret bound compared to the standard setting?"* We thank the Reviewer for the question. Learning an optimal contract with observable actions in a polynomial number of queries (with respect to the parameters of the problem instance) remains an open problem, even in the non-robust case. Exponential lower bounds exist in Stackelberg games, but it is unclear whether they extend to contract design due to the particular structure of the utilities. However, we do not believe that the adversarial behavior of the follower would benefit the principal, since when $\delta$ is close to zero, the problem nearly reduces to the classical one (see Proposition 1).
Summary: This paper explores hidden-action principal-agent problems where the agent follows approximate best responses rather than exact optimal strategies. The authors propose a polynomial-time algorithm for computing optimal contracts under these conditions and introduce a no-regret learning algorithm for scenarios where the principal lacks prior knowledge. Claims And Evidence: yes. Methods And Evaluation Criteria: yes. Theoretical Claims: In the first 8 pages Experimental Designs Or Analyses: N.A. Supplementary Material: N.A. Relation To Broader Scientific Literature: n.a. Essential References Not Discussed: n.a. Other Strengths And Weaknesses: Strengths: 1. The paper addresses a novel aspect of contract design in principal-agent problems, considering realistic scenarios where agents do not always act optimally. The introduction of approximation in an agent's action is new, though the idea is similar to Gan et al., 2023. 2. The authors provide a thorough analysis, rigorously establishing bounds on contract robustness, demonstrating the existence of optimal solutions, and introducing polynomial-time algorithms that improve upon prior intractability results in similar settings. Weaknesses: 1. The techniques presented in this paper do not appear particularly novel. The analyses of Algorithms 1 and 2 are relatively standard. The novelty of the technical results should be further clarified. Other Comments Or Suggestions: N.A. Questions For Authors: 1. Could the authors specify the unique and new challenges in their proposed model and clarify how their techniques effectively address these challenges? 2. Is there any new technique developed during this process? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Re**: *"Challenges in Our Setting and Technical Novelty."* We believe that the computational results presented in Section 4 are rather “surprising”. Indeed, such positive results are unexpected, as very similar problems are computationally intractable. In particular, computing an optimal $\delta$-robust commitment in Bayesian persuasion and Stackelberg game settings is known to be computationally intractable (see Gan et al. (2023), Yang & Zhang (2024)). Our case actually appears even more challenging because the strategy space $\mathbb{R}_+^m$ is much less amenable than $\Delta_m$ in Bayesian persuasion and Stackelberg games. Because of this, the quasi-polynomial-time approximation schemes (QPTAS) developed in the previous works do not apply to our setting. We initially conjectured that our setting was harder and attempted to prove that it was inapproximable. It was only after a series of attempts and a very careful analysis that we discovered that contract design scenarios are actually more structured and allow for efficient computation. Based on these unique structures, our method works entirely differently from previous approaches to computing robust equilibria. It revolves around fixing the $\delta$-robust best response and the actual best response, then computing an optimal robust contract for such a scenario. By iterating this process for each pair of actions and solving an LP at each step, we prove that it is possible to recover an optimal robust contract. Our algorithm is guaranteed to work in contract design due to the particular structure of the principal’s and agent’s utilities. This approach is entirely new compared to existing ones, which, for example, require solving an exponential number of LPs. Indeed, notice that a similar approach cannot be employed in the settings studied in previous works.
We agree with the Reviewer that some of the techniques we adopted are common in algorithmic game theory, such as solving a set of LPs. However, the main contribution and most challenging step of our work is determining which LPs need to be solved to recover an optimal robust contract. (Indeed, a naive approach would require solving exponentially many LPs!) Hence, while the algorithmic approach may appear standard, this is in fact not the case for the analysis, which, as highlighted above, is entirely disjoint from previous works on the topic and requires non-trivial arguments.
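To make the enumeration structure concrete, here is a schematic sketch on a toy instance (the numbers, the exact constraint set, and the objective are illustrative simplifications of our own; the paper's actual LP formulation and worst-case semantics are more involved). For each ordered pair of actions, it imposes that the first is the agent's exact best response and the second is $\delta$-optimal, then optimizes the principal's utility under the second action:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative hidden-action instance (hypothetical numbers, not from the paper):
# 3 agent actions, 2 outcomes.
F = np.array([[0.9, 0.1],          # outcome distribution per action
              [0.5, 0.5],
              [0.1, 0.9]])
c = np.array([0.0, 0.2, 0.5])      # agent's action costs
r = np.array([0.0, 2.0])           # principal's reward per outcome
delta = 0.1
n_actions, n_outcomes = F.shape

best_value, best_contract = -np.inf, None
# Enumerate pairs (a_star = agent's exact best response,
#                  a_dev  = the delta-best response being fixed):
for a_star in range(n_actions):
    for a_dev in range(n_actions):
        # Linear constraints A_ub @ p <= b_ub over contracts p >= 0:
        A_ub = [F[a] - F[a_star] for a in range(n_actions)]  # a_star is best
        b_ub = [c[a] - c[a_star] for a in range(n_actions)]
        A_ub.append(F[a_star] - F[a_dev])                    # a_dev is delta-optimal
        b_ub.append(c[a_star] - c[a_dev] + delta)
        # Maximize principal utility under a_dev, i.e. F[a_dev] @ (r - p),
        # by minimizing the expected payment F[a_dev] @ p.
        res = linprog(F[a_dev], A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * n_outcomes)
        if res.success:
            value = F[a_dev] @ r - res.fun
            if value > best_value:
                best_value, best_contract = value, res.x

print(best_value, best_contract)
```

The point is purely structural: only quadratically many small LPs are solved (one per ordered action pair), rather than exponentially many.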
Logits are All We Need to Adapt Closed Models
Accept (poster)
Summary: This paper studies the problem of adapting a black-box LLM to a downstream task, assuming access to the logits of output tokens. The authors propose a token-level probability reweighting algorithm that modifies token logits during inference. The core idea is to frame the adaptation problem as label noise correction in supervised classification and leverage an autoregressive probability reweighting model to estimate logits for the downstream task. Theoretical justifications are provided, and empirical studies on benchmark datasets show that the proposed approach outperforms several existing adaptation techniques. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: This paper considers the adaptation of black-box LLMs to downstream tasks, which is of broad relevance in the scientific literature. Essential References Not Discussed: yes Other Strengths And Weaknesses: Framing the problem as transition matrix estimation in the label noise correction framework and handling the challenge of an extremely large label space using an autoregressive reweighting model is new and insightful to me. The theoretical analysis effectively establishes key properties of the proposed algorithm. However, 1. the assumption of logit access significantly weakens the overall novelty and practical contribution of the work. The approach relies entirely on closed-source LLMs exposing logits, which is not currently supported by most commercial APIs, making the proposed learning setting somewhat artificial. 2. The method requires training a separate reweighting model, introducing higher computational costs compared to simpler adaptation techniques like prompt tuning or in-context learning. 3.
The baseline comparisons (ICL-1, ICL-3) are relatively weak, as increasing the number of in-context demonstrations could likely improve their performance, making the reported advantage of the proposed method less conclusive. Meanwhile, other baseline adaptation methods are not compared [1,2]. [1] Black-Box Tuning for Language-Model-as-a-Service. ICML 2022 [2] Black-box Prompt Learning for Pre-trained Language Models. TMLR Other Comments Or Suggestions: n/a Questions For Authors: see weakness 3 above. Is there any comparison with other SOTA methods [1,2] for black-box LLM adaptation? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, which has helped us strengthen our paper. **Regarding Logit access assumption** The central goal of this paper is to encourage closed-source LLM providers to offer logit-level access as a practical middle ground when releasing full model weights is not feasible due to IP or privacy concerns. Our theoretical and empirical results show that even with this limited access, effective domain and task adaptation is possible. Unlike prior work assuming either full white-box access or fully opaque models, we demonstrate that logit access enables fine-grained control without exposing proprietary internals. With this work, we aim to motivate commercial providers to adopt this feasible and impactful compromise, filling a key gap in current adaptation methods. **Computational Cost vs. Simpler Methods** While prompt-based methods like prompt tuning or ICL may seem simpler, they often require extensive trial-and-error and show high variance, as reflected in our results. In contrast, our reweighting model offers consistent, theoretically grounded gains with lower overhead than exhaustive prompt engineering. It is also analogous to established methods like LoRA or Adapters in white-box settings, requiring a manageable training overhead. Ultimately, the stability, reliability, and performance improvements of our approach outweigh the modest compute required, especially when compared to the variability and manual tuning burden inherent in prompt-based techniques. **Baseline Comparisons: ICL Variants and API Access-based Methods** Based on the reviewer’s suggestion, we extended ICL baselines to include 5, 8, and 10 examples, and implemented Diao et al. ([2], Black-box Prompt Learning, 2023) with the recommended 75 API calls as mentioned in the paper. We did not include Sun et al. ([1], Black-Box Tuning, 2022), as Diao et al. [2] already demonstrated superior performance. 
Importantly, if logit access is available, Plugin can be layered on top of any prompt-based method using the best-found prompt. For instance, our Zeroshot prompt (noted in line 364, left column) is reused across all methods. We also apply Plugin on top of the best ICL variants (ICL-8/10) and Diao et al. [2]. The table below presents results across all datasets using GPT2-XL as the base model.

|E2E NLG|BLEU|Rouge-1|Rouge-2|Rouge-L|METEOR|CIDEr|NIST|
|--|--|--|--|--|--|--|--|
|ICL-5|0.1226|0.4319|0.2194|0.3095|0.4172|0.3162|0.7281|
|ICL-8|0.1537|0.4432|0.2439|0.3180|0.4268|0.3559|0.8253|
|ICL-10|0.1582|0.4459|0.2502|0.3201|0.4528|0.4125|0.9015|
|Diao et al.|0.2287|0.5024|0.2846|0.3922|0.4628|0.4216|0.8625|
|**Plugin**|**0.2470**|**0.5536**|**0.3084**|**0.4213**|**0.5057**|**0.5455**|**1.2736**|
|**ICL (best) + Plugin**|**0.3941**|**0.6713**|**0.4027**|**0.5379**|**0.5923**|**0.6172**|**1.5472**|
|**Diao et al. + Plugin**|**0.4527**|**0.7126**|**0.5126**|**0.6027**|**0.6214**|**0.7002**|**2.0817**|

|WEB NLG|BLEU|Rouge-1|Rouge-2|Rouge-L|METEOR|CIDEr|NIST|
|--|--|--|--|--|--|--|--|
|ICL-5|0.0826|0.3625|0.1725|0.2517|0.3261|0.1826|0.2614|
|ICL-8|0.0943|0.3826|0.1926|0.2825|0.3425|0.2016|0.2611|
|ICL-10|0.0813|0.3528|0.1718|0.2542|0.3321|0.1906|0.2425|
|Diao et al.|0.1024|0.4016|0.2243|0.3017|0.3527|0.4321|0.2631|
|**Plugin**|**0.1673**|**0.4616**|**0.2527**|**0.3757**|**0.3895**|**0.8987**|**0.2646**|
|**ICL (best) + Plugin**|**0.1926**|**0.5026**|**0.2735**|**0.3927**|**0.3872**|**0.9123**|**0.4267**|
|**Diao et al. + Plugin**|**0.2137**|**0.6026**|**0.3021**|**0.5928**|**0.5766**|**1.0826**|**0.6142**|

|Adidas|BLEU|Rouge-1|Rouge-2|Rouge-L|METEOR|CIDEr|NIST|
|--|--|--|--|--|--|--|--|
|ICL-5|0.0345|0.2654|0.0393|0.1601|0.1863|0.0338|0.6856|
|ICL-8|0.0403|0.2527|0.0432|0.1628|0.1894|0.0615|0.6125|
|ICL-10|0.0382|0.2537|0.0325|0.1528|0.1725|0.0452|0.5926|
|Diao et al.|0.0417|0.2615|0.0671|0.1710|0.1826|0.0861|0.6034|
|**Plugin**|**0.0600**|**0.2710**|**0.0722**|**0.1725**|**0.1995**|**0.1195**|**0.6375**|
|**ICL (best) + Plugin**|**0.0591**|**0.2761**|**0.0754**|**0.1736**|**0.2047**|**0.1273**|**0.6415**|
|**Diao et al. + Plugin**|**0.0623**|**0.2792**|**0.0773**|**0.1759**|**0.2148**|**0.1325**|**0.7024**|

We observed similar results on Common Gen and will include them in the second response due to space constraints in the rebuttal. As shown, Plugin outperforms ICL even with 10 examples and surpasses Diao et al. (2023). While ICL’s performance plateaus with increasing examples—despite higher inference cost and variance—Plugin consistently offers greater accuracy and stability. Moreover, combining Plugin with the best ICL or Diao et al. setups yields further gains, highlighting the value of logit-level access in enhancing prompt-based methods.
Summary: The paper proposes logit reweighting to adapt closed-source LLMs for task-specific generation without accessing model weights. By learning an autoregressive transition matrix from task data, it adjusts token probabilities during inference to align outputs with target domains. Experiments show improved style/keyword compliance over zero-shot and instruction tuning, advocating logit accessibility as a practical adaptation pathway. The method bridges theoretical label shift correction with efficient LLM customization. Claims And Evidence: The claims are partially supported: empirical results on style/keyword alignment validate performance gains, but theoretical guarantees (e.g., distribution alignment) rely on idealized assumptions (e.g., perfect transition matrix estimation) not fully verified in real-world noisy settings. Scalability claims for large vocabularies lack rigorous analysis of trade-offs with token pruning. Methods And Evaluation Criteria: The methods (logit reweighting via transition matrices) align with the goal of adapting closed LLMs using limited access (logits only), and domain-specific metrics (style/keyword accuracy) suit tasks like product descriptions. However, human evaluation is notably absent for assessing output quality, and generalization tests are limited to narrow domains (e.g., one brand), leaving broader applicability under-explored. Theoretical Claims: The theoretical claim of distribution alignment (via logit reweighting under ideal transition matrices) is logically consistent given the assumptions, but the proof sketch (as described in the summary) assumes noiseless task-specific data and perfect estimation of the transition matrix—conditions unlikely in practice. No convergence rates or sensitivity analysis for estimation errors are provided, and empirical results do not explicitly validate the theoretical bound (e.g., measuring distribution divergence post-adaptation). 
Experimental Designs Or Analyses: The experimental design has validity in using domain-specific metrics (e.g., keyword accuracy for product descriptions), but human evaluation is missing, leaving output fluency/coherence unverified. Comparisons to baselines (zero-shot, instruction tuning) are reasonable, but scalability tests lack depth—token pruning’s impact on rare tokens is unstudied. Domain generalization is under-tested (e.g., single-brand data), raising concerns about broader applicability. Supplementary Material: Yes, I reviewed the algorithm descriptions in Appendix A, the assumptions in Appendix B, and the experimental details in Appendix C. Relation To Broader Scientific Literature: The paper connects to label noise correction literature (e.g., noise transition matrices in supervised learning) by reframing LLM adaptation as correcting “noisy” general-purpose token distributions. It extends these ideas to autoregressive generation, differing from prior LLM adaptation (e.g., prompt tuning, soft prompts) by relying solely on logits, aligning with resource-efficient methods like light-weight finetuning but avoiding weight access. Theoretically, it bridges domain adaptation (e.g., label shift theory) with LLM customization, advancing closed-model adaptation paradigms. Essential References Not Discussed: The paper does not cite controlled text generation methods (e.g., FUDGE Yang & Klein, 2021), which also modify logits using auxiliary models for task alignment. Additionally, Qiu et al. also investigated how to dynamically adjust logits by learning temperature parameters to adapt to different tasks. [1] Yang & Klein, FUDGE: Controlled Text Generation With Future Discriminators, In NAACL, 2021. [2] Qiu et al., To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO. In ICML, 2024. 
Other Strengths And Weaknesses: Strengths: The paper’s originality lies in creatively bridging label noise correction with autoregressive LLM adaptation, a novel conceptual link that unlocks practical utility for closed models. Its emphasis on logit accessibility addresses a critical industry need (adapting proprietary models without weight access), offering significant real-world relevance. The method’s lightweight design (task-specific matrix learning) is a pragmatic strength. Weaknesses: While innovative, the framing underplays overlaps with logit manipulation in controlled generation (e.g., FUDGE) and distillation. Broader claims about domain generalization lack empirical rigor (limited to narrow tasks/brands), and theoretical assumptions (perfect matrix estimation) are underexplored in practical noisy settings. Other Comments Or Suggestions: Suggestions: 1. Human evaluation: Include user studies to validate output quality beyond automated metrics. 2. Error analysis: Quantify how token pruning affects rare tokens or domain-specific terms. 3. Comparison to logit-based methods: Explicitly contrast with FUDGE or distillation to clarify novelty. Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s encouraging words and thoughtful feedback. **Regarding human evaluation and limited generalization tests** We already conducted a human evaluation (line 366, details in Appendix C.7) where three evaluators compared Plugin and ICL-3 on 100 Adidas samples, with Plugin preferred in 81% of cases—directly supporting output quality. While the Adidas dataset reflects a specialized domain shift, our study extends beyond a single brand. As shown in Section 7.1, we also evaluate on WEB NLG, E2E NLG, and CommonGen—each representing its own distribution shift relative to the black-box model’s pretraining data. In Section 7.3, we further explore adversarial shifts by testing Plugin on models with known biases (e.g., infrastructure in WEB NLG, male-related concepts in CommonGen), demonstrating Plugin’s broad applicability across diverse and challenging settings. For most datasets, we follow standard practices from PEFT literature (Hu et al., 2021; 2023a) that rely on automated metrics comparing outputs to well-formed references to assess overall quality. This combination of human judgments (for Adidas) and standard metrics (for broader benchmarks and Adidas) provides both qualitative and quantitative evidence of the Plugin’s generalization capabilities. **Token Pruning** We do not perform token pruning; instead, Plugin continuously reweights token probabilities at each decoding step without removing any tokens. This soft upweighting preserves vocabulary coverage and improves domain adaptation (see case study in line 408). For further clarification, we compare the total occurrences of the top-50 Adidas domain words in Plugin’s predictions versus the base model in the same case study. Across all samples, Plugin’s outputs include 25.6% occurrences of these words, while the baseline contains only 13.8%. We will add this in the paper. **Regarding FUDGE, Qiu et al.** We do cite FUDGE in line 249 (right column).
While FUDGE uses attribute-specific discriminators to control generation (e.g., formality), our method enables free-form domain adaptation via a single auxiliary model. We considered FUDGE as a baseline, but it requires one discriminator per predefined attribute—unsuitable for broad or evolving domain shifts, which are hard to define upfront. TempNet in Qiu et al. learns a single temperature per input and uniformly scales logits during generation. In contrast, Plugin reweights logits at each timestep, enabling finer, context-sensitive adjustments. Additionally, Qiu et al.'s use of DRO involves an inner maximization loop, making it more computationally intensive than our efficient empirical risk minimization (ERM). Nonetheless, we include a comparison on the E2E NLG dataset, adapting TempNet to use ERM (instead of DRO) with GPT2-XL. As it applies global scaling per prompt, it underperforms in tasks requiring localized adjustments. These results will be added to the final paper.

|Method|BLEU|Rouge-1|Rouge-2|Rouge-L|METEOR|CIDEr|NIST|
|--|--|--|--|--|--|--|--|
|TempNet|0.1325|0.4642|0.2516|0.3021|0.4126|0.3627|0.8027|
|**Plugin**|**0.2470**|**0.5536**|**0.3084**|**0.4213**|**0.5057**|**0.5455**|**1.2736**|

**Regarding Theoretical Claims and Convergence Rate** Theorem 5.1 holds under mild, standard assumptions (5.1, 5.2, B.1) commonly used in convergence analyses (Frostig et al., 2015; Chaudhuri et al., 2015; Mukherjee et al., 2022). The noisy estimation of the transition matrix $T\_t({\theta}\_{\star};x_i,x_j,\mathcal{F}^{t-1})$ can be understood in two ways: as direct estimation error in the matrix itself, or as error induced by the function $f\_{I_t}({\theta}\_{\star} ; x_i, x_j,\mathcal{F}^{t-1})$ on which it depends (see Assumption 5.1).
We adopt the latter view, estimating $f_{I_t}(\cdot)$ under a sequence of noisy autoregressive loss functions $\ell_1(\boldsymbol{\theta}), \ldots, \ell_t(\boldsymbol{\theta}) : \mathbb{R}^{|V|} \rightarrow \mathbb{R}$ (see Assumption 5.2). Under Assumption B.1—bounded gradients and Hessians of $f_{I_t}$—we show that the expected estimation error can be reduced and prove upper and lower bounds in terms of a problem-dependent quantity $\sigma_t^2$, establishing a convergence rate of $\Omega(\sigma_t^2 / t)$. We hope this clarifies. Our novelty lies in combining techniques from Frostig et al. (2015), Chaudhuri et al. (2015), Mukherjee et al. (2022), and Patrini et al. (2017), and presenting the **first finite-time convergence analysis** for transition matrix estimation in this autoregressive noisy loss setting. We acknowledge that real-world settings may add noise in mapping $f_{I_t}$ to the transition matrix. While this could generalize our framework, incorporating it into our finite-time guarantee is left for future work. Regarding sensitivity analysis, our convergence rate depends on the variance-like term $\sigma_t^2$; higher variance leads to slower convergence, naturally capturing sensitivity to estimation noise.
Summary: The paper tackles the issue of having to rely on prompt engineering while adapting closed-source LLMs. The proposed work formulates the problem as supervised learning, where a small task-specific dataset is used to train the model. The closed-source model is assumed to learn noisy labels of the specific application, and by accessing the logits of the tokens, the proposed loss function corrects the noisy labels to adapt to a task. Extensive experiments are conducted on models, datasets, and with multiple evaluation criteria. Claims And Evidence: No issues found Methods And Evaluation Criteria: No issues found Theoretical Claims: No issues found Experimental Designs Or Analyses: No issues found Supplementary Material: Yes. Experimental details. Relation To Broader Scientific Literature: It is related to the adaptation of black-box LLMs. Essential References Not Discussed: No issues found Other Strengths And Weaknesses: Strengths - The problem is well-motivated. - The approach to solving the adaptation problem is novel. With this method, it can be used as a plugin to adapt a closed-source model. - Experiments are extensive and confirm the effectiveness of the proposed method. - Theoretical analysis provides guarantees for the proposed method. Weakness Inclusion of some other closed-source models would make the experiments more comprehensive. Other Comments Or Suggestions: Please refer to previous sections. Questions For Authors: What are the implications of making the transition matrix diagonal? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for the reviewer’s positive remarks and valuable insights. **Regarding Inclusion of Closed Source Models** We acknowledge that including more closed-source models could further strengthen the generality of our findings. However, most proprietary models currently do not expose their output logits. Hence, we could not experiment with them. Our approach specifically hinges on logit-level access—without revealing internal weights or architecture—which we believe is a comparatively straightforward adjustment for closed-source providers to implement, especially when compared to the complexities of releasing the full model. By highlighting this limitation, we aim to encourage closed-source developers to consider offering logit-level access in the future, enabling more flexible and efficient adaptation methods for end-users. **Implications of making the transition matrix diagonal** **Benefits** 1. *Reduced Complexity:* Learning a diagonal matrix involves only $|V|$ parameters, compared to $|V|\times |V|$ parameters in a full matrix. This makes training computationally more tractable. 2. *Straightforward Integration:* Because the transition matrix is treated like a single vector, standard autoregressive models (e.g., GPT-2, LLaMA) can be used directly, scaling easily with the dataset size. 3. *Symmetric (Class-Independent) Noise:* A diagonal assumption naturally corresponds to label flips occurring with equal probability among all incorrect classes. This aligns neatly with widely studied symmetrical label-noise scenarios. 4. *Stability in Estimation:* Each diagonal entry is an autoregressive parameter independently learnt without depending on other classes (vocabulary), reducing the risk of overfitting and simplifying parameter estimation compared to a fully dense matrix. 
**Limitation** Although adopting a diagonal transition matrix reduces complexity and simplifies parameter estimation, it also prevents the model from capturing any off-diagonal confusions—that is, instances where certain tokens are more likely to be misclassified as particular other tokens. Real-world data may involve non-symmetric noise patterns, domain-specific synonyms, and context-dependent misclassifications. A purely diagonal matrix may therefore oversimplify these nuances, leading to diminished expressive power and potentially missing important relationships within the noise structure. Overall, the diagonal constraint provides a practical, stable, and computationally efficient solution for autoregressive reweighting learning—one that is well-aligned with symmetrical label-noise setups—while sacrificing some flexibility in capturing sophisticated non-symmetric noise or domain-specific synonym patterns. We plan to investigate non-diagonal structures to fully model these nuanced noise patterns and further improve adaptation in future work. --- Rebuttal Comment 1.1: Comment: Thank you for your comprehensive reply to my questions. I maintain my rating as accept.
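To illustrate the trade-off discussed in this thread, here is a small numpy sketch of our own (toy vocabulary size and random weights, not the paper's implementation): a diagonal reweighting acts as a per-token log-weight added to the black-box logits, while a full row-stochastic transition matrix can additionally move probability mass between specific token pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5                                   # toy vocabulary size
logits = rng.normal(size=V)             # black-box model's next-token logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax(logits)                     # base next-token distribution

# Diagonal reweighting: |V| parameters, one multiplicative weight per token,
# equivalent to adding log-weights to the logits before the softmax.
w = rng.uniform(0.5, 2.0, size=V)       # learned per-token weights (toy values)
p_diag = w * p / (w * p).sum()
assert np.allclose(p_diag, softmax(logits + np.log(w)))

# Full transition matrix: |V| x |V| parameters, able to model off-diagonal
# confusions (mass moved between specific token pairs), unlike the diagonal case.
T = rng.uniform(size=(V, V))
T /= T.sum(axis=1, keepdims=True)       # make rows stochastic
p_full = p @ T                          # reweighted distribution

print(p_diag.round(3), p_full.round(3))
```

With a realistic vocabulary of tens of thousands of tokens, the diagonal version stays linear in $|V|$, which is the tractability benefit listed in the rebuttal.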
Summary: The key idea of the paper is to treat next-token prediction as a label noise correction problem, where discrepancies between the LLM’s broad training distribution and task-specific data are modeled through a transition matrix that reweights token probabilities during inference. The proposed Plugin model consists of a small reweighting network trained on limited task-specific data, which, when combined with the black-box LLM’s logits, effectively steers text generation towards the desired distribution. The authors provide theoretical guarantees showing that this probability reweighting approach converges to the target distribution with sufficient task data. Extensive experiments on multiple text generation benchmarks (E2E NLG, WebNLG, CommonGen, Adidas product descriptions) demonstrate that the Plugin model outperforms in-context learning and other adaptation methods, achieving better alignment with domain-specific content while requiring minimal computational resources compared to full fine-tuning. The results suggest that access to token logits could enable more powerful model customization, advocating for broader API-level exposure of logits in commercial LLMs. Claims And Evidence: The paper makes several claims regarding the effectiveness of the Plugin model for adapting black-box LLMs, and most of these claims are supported by theoretical justifications, empirical experiments, and comparative evaluations. The core claim—that token-level probability reweighting using logits alone is sufficient for effective task adaptation—is backed by a formal label noise correction framework, where the authors derive theoretical guarantees showing that the Plugin model can align token distributions with task-specific data under mild assumptions. 
Additionally, extensive experimental results on four datasets (E2E NLG, WebNLG, CommonGen, Adidas product descriptions) demonstrate that the Plugin model outperforms baselines such as zero-shot inference, in-context learning (ICL), and naive probability combination methods across multiple evaluation metrics (BLEU, ROUGE, METEOR, CIDEr, NIST). The ablation studies further support the claims by showing that model quality, reweighting complexity, and domain adaptation capabilities contribute to improved performance. However, some claims could benefit from stronger empirical validation. For instance, while the paper asserts that the Plugin model is effective under distribution shifts, the experiments focus on relatively constrained dataset modifications (e.g., filtering training data by entity types), and it remains unclear how well the approach generalizes to more extreme domain shifts or adversarial settings. Additionally, while the authors argue that their approach is computationally efficient, they do not provide a direct comparison of training or inference costs against alternative adaptation methods like parameter-efficient fine-tuning (LoRA, adapters), leaving room for further evidence on the trade-offs between performance gains and computational overhead. Nonetheless, the overall evidence presented is strong and well-aligned with the claims, making the Plugin model a compelling approach for logit-based adaptation of closed-source LLMs. Methods And Evaluation Criteria: The proposed Plugin model and its evaluation criteria are well-aligned with the problem of adapting black-box LLMs without modifying model weights. The token-level probability reweighting framework is a reasonable method given the constraints of closed-source LLMs, and the formulation as a label noise correction problem provides a solid theoretical foundation. 
The authors evaluate the approach using four diverse text generation datasets—E2E NLG, WebNLG, CommonGen, and Adidas product descriptions—each representing different aspects of controlled text generation and domain adaptation. Theoretical Claims: I didn't check the correctness of any proofs for theoretical claims. Experimental Designs Or Analyses: Sound and well-structured, but there are a few areas where further validation or additional analysis could strengthen the claims. The authors conduct experiments across four datasets (E2E NLG, WebNLG, CommonGen, Adidas product descriptions) and compare their Plugin model to several strong baselines including zero-shot inference, in-context learning (ICL), and a weighted combination of model predictions. The use of seven standard NLG metrics (BLEU, ROUGE, METEOR, CIDEr, NIST) ensures a comprehensive evaluation of output quality. They also include a sound human evaluation. The Plugin model is benchmarked against ICL and naive probability reweighting, but not against LoRA, Adapters, or QLoRA, which might be feasible alternatives in cases where some access to model weights is possible. Supplementary Material: No, I didn't review the supplementary material. Relation To Broader Scientific Literature: The paper situates itself within the broader literature on adapting large language models (LLMs) without access to model weights, drawing from areas such as prompt engineering, in-context learning (ICL), parameter-efficient fine-tuning (PEFT), label noise correction, and black-box model adaptation. The key contribution—reweighting token probabilities using logits as an alternative to full fine-tuning—builds upon prior work in label noise correction (Patrini et al., 2017) and autoregressive modeling, adapting these ideas to the language model decoding process.
The formulation of next-token prediction as a noisy supervised classification problem is a novel connection that extends prior work on calibrating LLM outputs (Huang et al., 2024; Kapoor et al., 2024). Essential References Not Discussed: Important references are missing: On the Duality between Gradient Transformations and Adapters. Lucas Torroba-Hennigen, Hunter Lang, Han Guo, Yoon Kim. Tuning Language Models by Proxy. Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, Noah A. Smith. A Study on the Calibration of In-context Learning. Hanlin Zhang, Yi-Fan Zhang, Yaodong Yu, Dhruv Madeka, Dean Foster, Eric Xing, Hima Lakkaraju, Sham Kakade Other Strengths And Weaknesses: The paper title is a bit bold and the methodology presented in the paper is conceptually interesting but complex, as it involves multiple layers of statistical modeling and adaptation that may not be immediately intuitive. The core idea—reweighting token probabilities using a transition model learned from task-specific data—relies on a label noise correction framework, which is commonly used in supervised classification problems but less so in language model decoding. Other Comments Or Suggestions: The Plugin model is positioned as an alternative to fine-tuning, but it is not directly compared to parameter-efficient fine-tuning methods such as LoRA, Adapters, or QLoRA. Including a discussion on when logit reweighting is preferable to PEFT would improve clarity. Questions For Authors: How many flops would the method save compared to vanilla LoRA? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and insightful review. **Regarding Comparison of Plugin with Parameter Efficient Fine-tuning (PEFT) methods like LoRA, Adapters:** >"... The Plugin model is positioned as an alternative to fine-tuning, but it is not directly compared to parameter-efficient fine-tuning methods such as LoRA, Adapters, or QLoRA..." "... How many flops would the method save compared to vanilla LoRA?" We rely on only the output logits of the black-box model and do not have access to its internal weights or architecture. Consequently, any form of fine-tuning—including parameter-efficient approaches like LoRA or Adapters—cannot be applied, which we clarify in the Introduction (lines 21–33, right column) and the Related Work (lines 261–273, left column). If we did have access to the model weights, parameter-efficient fine-tuning methods would indeed be the natural choice, as they use more information than just logits and are expected to yield better performance. Thus, we emphasize that the Plugin model is not an alternative to fine-tuning, but rather an approach uniquely suited to adapting black-box LLMs that only provide logit access. Nevertheless, to address the reviewer's point, we conducted a comparison on the E2E NLG dataset by adding rank-$r=8$ LoRA matrices in the $Q$ and $V$ attention layers of GPT2-XL. The results, which will be included in our final version, show that LoRA only slightly outperforms our Plugin in terms of task metrics. LoRA adds 2.46M parameters (rank-8, Q/V only), while Plugin adds 30.72M (one full layer). During inference (up to 64 tokens), LoRA requires 188.8B FLOPs while Plugin needs 196.2B FLOPs - a negligible difference in computational cost. 
Notably, the parameter and efficiency gap between LoRA and Plugin narrows when increasing LoRA's rank (r) while reducing Plugin's hidden dimensions - demonstrating how both approaches can be adaptively tuned to meet specific resource constraints while maintaining competitive performance. We would like to highlight that the fundamental distinction remains - LoRA requires full model access to modify internal layers, while Plugin enables post-hoc deployment without retraining. |E2E NLG|BLEU|Rouge-1|Rouge-2|Rouge-L|METEOR|CIDEr|NIST| |--|--|--|--|--|--|--|--| |Zeroshot |0.0562|0.4013 |0.1636 |0.2862 |0.3697|0.0187|0.5338| |Plugin |0.2470|0.5536 |0.3084 |0.4213 |0.5057|0.5455|1.2736| |PEFT (LoRA r=8)|0.2517|0.5712 |0.3079 |0.4317 |0.5162|0.5225|1.2172| **Regarding Missing References** > On the Duality between Gradient Transformations and Adapters. Lucas Torroba-Hennigen et al. 2025 Tuning Language Models by Proxy. Liu et al. 2024 A Study on the Calibration of In-context Learning. Zhang et al. 2023 Thank you for pointing out these relevant works. We will add them to our final version. In particular, we note that Liu et al. (2024) describes the same method introduced in Liu et al. (2021), which we already cite in line 252. Below, we clarify how our approach differs from each reference: Torroba-Hennigen et al. (2025): They examine the equivalence between gradient transformations and adapters for efficient model adaptation, relying on full access to model weights and gradients. In contrast, our method requires no access to model internals; we adapt black-box LLMs solely by reweighting token-level logits. Liu et al. (2024): As noted, we already cite their earlier work (Liu et al. 2021), which presents the same core idea of combining logits. Our WeightedComb baseline (line 252, right column) is directly inspired by their approach. Zhang et al. (2023): We will add this reference to our discussion on calibration. 
Their study focuses on aligning model confidence with predictive accuracy by adjusting confidence scores as one increases shots in the few-shot learning setting. Unlike them, we explicitly modify the token predictions themselves, rather than just calibrating confidence. **Regarding Domain Shifts, Extreme Domain Shifts, and Adversarial Domain Shifts** We assume our black-box LLM already encodes extensive world knowledge. In this sense, any adaptation to a domain-specific dataset amounts to handling a distribution shift, as demonstrated in Section 7.1 with WEB NLG, E2E NLG, and CommonGen. Among these, the Adidas dataset represents a more extreme domain shift, given its specialized style of product descriptions. Additionally, the experiments in Section 7.3 can be seen as adversarial, since the Plugin is applied atop a model with known biases—for instance, the tendency to focus on infrastructure-related concepts in WEB NLG and male-related concepts in CommonGen. Although we did not explicitly label these settings “extreme” or “adversarial,” they do indeed meet those criteria to some extent, and we will clarify that in the final version. We would also welcome any further examples or clarifications on what the reviewer would consider to be more extreme or adversarial distribution shifts in this context.
A Closer Look at Backdoor Attacks on CLIP
Accept (poster)
Summary: The paper presents a detailed analysis of which types/locations of layers are affected by backdoor attacks in transformer-based VLMs with the help of representation decomposition. The findings indicate: global trigger-based attacks mostly affect MLPs, whereas localized trigger patches influence attention heads. Based on this analysis, a detection method termed Decomp-Rep is proposed, which uses representation decomposition. The method seems to improve the backdoor cleaning rate. Claims And Evidence: For the architecture (ViT-B/32) considered in the work, the evidence seems sufficient for this. Methods And Evaluation Criteria: The backdoor learning is done in the fine-tuning setup and different attacks are considered. These seem sufficient for the problem. Theoretical Claims: There are no theoretical claims, only notation used to motivate the representation decomposition, and that looks okay. Experimental Designs Or Analyses: The experiments are only considered in backdoor learning by fine-tuning. I am not sure if the same conclusions would hold if backdoors were injected while training from scratch. Supplementary Material: I read over all of it, specifically focusing on Text descriptions and results for non-ImageNet datasets. Relation To Broader Scientific Literature: In my opinion the initial experiments on finding which attacks affect which layers are novel. The proposed method is based on detection and uses text descriptions. There are other detection methods in the literature but this method seems different to those. Essential References Not Discussed: RoCLIP[1] also uses candidate texts from a retrieval pool to clean backdoored models; it is currently only mentioned in related work but not compared to. BDetCLIP[2], a recent detection method, works on a similar setting - it is only mentioned in related work, not compared to; this comparison would be the most relevant. Ta-cleaner[3]: not discussed; this is similar to CleanCLIP and, according to the paper, better than CleanCLIP. [1] Yang, W., et al. 
Robust contrastive language-image pretraining against data poisoning and backdoor attacks. In NeurIPS, 2023. [2] Niu, Yuwei, et al. "Bdetclip: Multimodal prompting contrastive test-time backdoor detection." arXiv preprint arXiv:2405.15269 (2024). [3] Xun, Yuan, et al. "Ta-cleaner: A fine-grained text alignment backdoor defense strategy for multimodal contrastive learning." arXiv preprint arXiv:2409.17601 (2024). Other Strengths And Weaknesses: Strengths: The initial experiments to find how the backdoors affect certain layers are very nice and commendable. The way contributions are highlighted and explained is also a plus point. Weaknesses: - The final experiments for 'Decomp-Rep' seem lacking. The tests are only on ViT-B/32 (what would happen for finer-patch models like B/16, or larger ones like L/14?), and the baselines are not sufficient. - Decomp-Rep seems to clean better than the competitors, but also loses CACC; for instance on BadNet for CleanCLIP, the ASR with Decomp-Rep goes down to 41.49% from 54.23%, but the decay in CACC is 6%. If the detection method is unable to clean to ASR levels around 10-20%, then this decay in CACC is too large. For the Base model case as well, the decay in clean performance is too large (up to 10% in some cases). Ideally one wants a completely clean model with a small degradation in clean performance, but Decomp-Rep seems not to achieve this from the results. From fig. * it seems BadNet- and LC-attacked models are not useful after detection with Decomp-Rep. - The only attack for which Decomp-Rep seems to work reasonably well is Blended. I think just improving a bit on some baselines is not enough to call Decomp-Rep effective. Other Comments Or Suggestions: Links broken on lines 251, 651, etc. Questions For Authors: 1. The tests are only on ViT-B/32; what would happen for finer-patch models like B/16, or larger ones like L/14? 2. Why no comparison to methods like BDetCLIP[1], which is also a detection method? 
Overall, even a positive response to these questions might not change my evaluation of this work, as I am not convinced by its effectiveness. The method is only slightly better than baselines, and completely cleaning backdoors seems not possible for the method without completely destroying the CACC. [1] Niu, Yuwei, et al. "Bdetclip: Multimodal prompting contrastive test-time backdoor detection." arXiv preprint arXiv:2405.15269 (2024). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful comments. We are encouraged by your recognition of the novelty and the solid experimental analysis of our work. Below are your concerns and our corresponding responses: **Q1: The experiments of backdoor learning from scratch.** **A:** Thank you for your valuable comment. To address your concern, we have conducted an additional experiment of training a backdoored CLIP in the pre-training stage. Specifically, following the attack setting in CleanCLIP, we randomly select 1,500 image-text pairs from CC3M as backdoor samples (the target class is ''banana'' and we use the triggers from BadNet and Blended). Then, we train CLIP from scratch on the backdoored CC3M dataset and analyze the backdoored CLIP. The experimental results are shown in the following tables. **Table: AH ablation (BadNet)** |1-4|5-8|9|10|11|12| |:--:|:--:|:--:|:--:|:--|:--:| |97.32|96.86|95.46|95.78|96.51|1.79| **Table: MLP ablation (Blended)** |1-4|5-8|9|10|11|12|13| |:--:|:--:|:--:|:--:|:--|:--:|:--:| |98.35|97.67|95.39|92.72|68.51|20.42|5.67| From the tables, we can see that BadNet (Blended) also mainly affects AHs (MLPs) in the pre-trained backdoored CLIP, which is consistent with our claim. **Q2: Missing TA-cleaner.** **A:** Thank you for pointing out the related paper. We will discuss it in the related work. **Q3: Comparing RoCLIP and BDetCLIP** **A:** Thank you for your valuable comment. We would like to explain that RoCLIP aims to mitigate backdoor implanting during the pre-training stage, while our proposed method focuses on eliminating the implanted backdoor in the pre-trained CLIP; the two methods work at different stages. Despite this, we really understand your concern and have conducted additional experiments by adapting RoCLIP (ViT) to the fine-tuning stage. 
**Table: Comparing RoCLIP (BadNet, ASR/CACC)** |No Defense|+RoCLIP|+Decomp-Rep|+Both| |:--:|:--:|:--:|:--:| |86.09/56.72|75.42/54.31|21.45/52.25|48.19/51.27| From the table, we can see that our method is more effective in reducing the ASR. In addition, we also have conducted additional experiments to compare BDetCLIP (following the original setting in the paper). **Table: Comparing BDetCLIP (AUROC)** |Method|BadNet|BadCLIP| |:--:|:--:|:--:| |BDetCLIP|0.970|0.910| |Decomp-Det|0.920|0.990| From the table, we can see that although the performance of our method against BadNet is slightly lower than BDetCLIP, the performance against BadCLIP (more advanced) is superior. Moreover, our method does not require abundant class description texts from large language models, which are often time-consuming to collect. **Q4: Experiments on other model architectures.** **A:** Thanks for your valuable suggestion. We have conducted additional experiments on ViT-L/14. **Table: AH ablation (ASR)** |1-8|9-16|16-20|21|22|23|24| |:--: |:--: |:--: |:--: |:--: |:--: |:--: | |96.44|95.72|94.38|92.78|91.20|91.33|2.53| From the table, we can see that only ablating AHs in the last layer significantly decreases the ASR, which indicates that the infected AHs are still centered in the last layer. This observation also supports our claim. **Q5: The effectiveness of Decomp-Rep.** **A:** We would like to justify the effectiveness of Decomp-Rep. Specifically, we argue that the CACC loss mainly stems from basic representation decomposition (Base-Decomp) in Eq. (4) rather than our proposed repairing method. For instance, in Table 1, the CACC of the Base-Decomp decreased by 3.01\%, 5.26\%, and 4.45\% for BadNet, LC, and BadCLIP on the base model, respectively, and our Decomp-Rep only slightly decreases the CACC by 1.46\%, 1.45\%, and 0.2\% compared with Base-Decomp. A similar observation can also be found for the CleanCLIP model. 
Hence, the CACC decrease is mainly caused by the basic representation decomposition framework and can be mitigated by different representation decomposition strategies (which can be seamlessly compatible with Decomp-Rep). To show this, we have conducted additional experiments that only decompose the representations in the last layer. **Table: Comparison (ASR/CACC)** |No Defense|+Base-Decomp-new|+Decomp-Rep-new| |:--:|:--:|:--:| |86.09/56.72|87.42/55.31|22.08/54.82| From the table, we can see that using a different basic representation decomposition strategy can effectively alleviate the CACC decrease in our method. Notably, we would like to emphasize that *our core contributions mainly lie in pioneering a deep exploration of backdoor attacks on CLIP, revealing significant key findings, and validating them by designing lightweight defense methods instead of pursuing strictly SOTA methods*. We think that the empirical performances of Decomp-Rep and Decomp-Det have sufficiently validated our key findings, which would motivate more researchers to design more powerful defense counterparts based on these findings in the future. **Q6: Broken Links.** **A:** Thanks for pointing out the issues. We will fix them and double-check the paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed replies. The BDetCLIP experiment shows the proposed method is not effective under all attacks, but overall I think the work has a valid enough contribution to the community. Hence, I update my score, and lean towards a marginal accept. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your insightful recognition of our work's contributions to the community. While Decomp-Det is not strictly superior to BDetCLIP under all attacks, it remains competitive and offers valuable insights into detecting backdoor samples by analyzing specific model components. 
As we previously stated, our goal is not to achieve SOTA performance across all settings but rather to encourage researchers to explore more interpretable and transparent actions on specific model components to defend against backdoor attacks. Once again, we truly appreciate your affirmation of our valid enough contribution to the community.
Summary: This paper presents a comprehensive empirical study to analyze the effects of backdoor attacks on CLIP. They found three empirical findings about how different types of backdoor attacks have various effects on CLIP. The authors conducted extensive experiments and showed visualized results, which validates their claims. Claims And Evidence: Yes, the claims of three findings are supported by extensive empirical experiments and visualized results. Methods And Evaluation Criteria: Yes, the visualized results of attention heads are reasonable to characterize the backdoor effects on CLIP. Using the text descriptions to show the property role of components in CLIP is reasonable and interesting. Theoretical Claims: N/A. The paper does not involve theoretical analysis. Experimental Designs Or Analyses: Yes, I have checked the soundness of experimental designs and analyses. The paper compares six backdoor attacks on CLIP, which are divided into two lines: local patch-based and global perturbation-based backdoor attacks, and visualizes the attention heads of local patch-based backdoor attacks. Supplementary Material: Yes, I have reviewed the code in the supplementary material. Relation To Broader Scientific Literature: The key findings in the paper are related to the attention mechanism and representation learning in vision transformers. Specifically, local patch-based triggers are implanted into the attention heads in the last model layers, which are strongly related to the research of interpreting image representations in vision transformers. Essential References Not Discussed: The related works cover backdoor attacks and defenses. Other Strengths And Weaknesses: Strengths: 1. Originality: The paper proposes three key findings that are novel in the context of backdoor attacks on CLIP, especially for using text descriptions to interpret the backdoor effects on CLIP. 2. Clarity: The writing of the paper is good. The three key findings are clearly explained. 3. 
Significance: The paper has a positive contribution to the research of backdoor attacks and defenses. 4. Experimental validation: The experimental results are sufficient, including two different defense methods. Weaknesses: 1. More experimental results should be further analyzed. 2. More different model architectures should be exploited. Other Comments Or Suggestions: See the above weakness Questions For Authors: 1. Could you further explain the results in Figure 4? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable comments. We are encouraged by your recognition of the novelty and significance of our work. Below are your concerns and our corresponding responses: **Q1: More experimental results should be further analyzed.** **A:** Thank you for your valuable comment. Following your suggestion, we will further provide a more thorough analysis on the experimental results in the ablation study and the comparison of different repairing strategies. As shown in Table 3 in the appendix, we compared three ablation experiments, including directly replacing all AHs, replacing the representations with zero values and random prototypes. From the experimental results, we can see that these three strategies are all inferior compared with our method. Specifically, directly replacing all AHs or MLPs reduces both the ASR and the CACC greatly, which is inapplicable in real-world applications. This observation validates the necessity and effectiveness of selectively repairing AHs or MLPs. Next, replacing the representations with zero values (rather than prototypes) reduces the ASR to a certain extent but still maintains a high level of ASR and a normal level of CACC. In contrast, replacing the representations with random values reduces both the ASR and the CACC greatly. This observation validates the significance of prototypes. Overall, these experiments validate the effectiveness of our method. In addition, as shown in Table 4 in the appendix, we compared repairing different combinations of AHs, including fixed and random AHs. From the experimental results, we can see that replacing fixed or random three AHs both has little effect on reducing the ASR. This observation indicates that the infected components are diverse and have no fixed preference, which reveals the challenge of repairing selection. **Q2: More different model architectures should be exploited.** **A:** Thank you for your insightful comment. 
We have conducted additional experiments on the CLIP with the ViT-L/14 architecture (14x14 patch size and 24 layers). The experimental results are shown in the following table. **Table: AH ablation (ASR)** |1-8|9-16|16-20|21|22|23|24| |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |96.44|95.72|94.38|92.78|91.20|91.33|2.53| From the table, we can see that only ablating AHs in the last layer significantly decreases the ASR, which indicates that the infected AHs are still centered in the last layer. This observation also supports our claim. **Q3: Could you further explain the results in Figure 4?** **A:** Sure, we would like to explain that Figure 4 shows the descriptive texts on MLPs. Specifically, these descriptive texts are derived via the TextSpan algorithm, which indicates the semantics of MLPs in CLIP's text spaces. We select top-5 results for better visualization. In the figure, we can see that the descriptive texts on MLPs may not show certain specific property roles (e.g., color and location) like those on AHs due to their inherent architecture difference. To better quantitatively characterize the descriptive texts, we calculate the average text similarity of the descriptive texts between clean (green color) and backdoored (red color) MLPs, which reflects how backdoor attacks affect the semantics of MLPs in CLIP's text spaces. Based on the statistics, we can see that the last several MLPs have lower similarity values compared with the first ones, which further validates the observation in Figure 2 (infected MLPs are dispersed on the several late layers) from a different perspective. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses in the rebuttal. The authors' responses have effectively addressed my concerns, and I will maintain my acceptance rating. --- Reply to Comment 1.1.1: Comment: Thank you for letting us know that your concerns were addressed by our rebuttal. 
We sincerely appreciate that you keep the "acceptance" recommendation for our paper!
Summary: This paper investigates how backdoor attacks infect different components of a ViT-based CLIP model (notably attention heads vs. MLP layers) and proposes a “repair” mechanism that selectively ablates or replaces infected representations in the last few layers. The authors conduct detailed experiments to show that different attack strategies target either local patch-based features (primarily encoded in attention heads) or global perturbations (often captured by MLPs). Based on these observations, they design an approach to detect and repair “infected” attention heads or MLPs during inference. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical proof. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper is related to the safety and security of multi-modal foundation models. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** 1. Detailed Empirical Analysis: Overall I like the paper. The paper provides a thorough layer-by-layer decomposition of ViT representations, highlighting how certain attacks affect different parts of the model. 2. Comprehensive Experiments: The authors evaluate multiple backdoor attacks (e.g., BadNet, Blended, ISSBA) and show quantitative improvements in attack success rate (ASR) reduction. 3. Novelty in Layer-wise “Repair”: Proposing a targeted approach—rather than blindly dropping entire layers—could, in principle, preserve clean accuracy. **Weaknesses** 1. Necessity of “Partial Repair” vs. Simple Replacement: The paper emphasizes a “repair” mechanism for only a subset of attention heads. However, it is unclear why one cannot simply replace all attention heads (or MLP modules) in the final layers with known clean versions. 
If we have access to a small clean set and the original architecture, fully swapping suspect components might be more straightforward and yield stronger guarantees than partial repair. The paper does not sufficiently justify why partial repair is needed or superior. 2. Adaptive Attack Analysis: The proposed method might be vulnerable if attackers specifically design triggers to blend across multiple heads and layers. The paper does not show robust experiments on truly adaptive adversaries who might anticipate "repair" or partial ablation. 3. Comparisons with Simpler Baselines: While the authors compare with other detection strategies, they do not fully demonstrate that partial repair is unequivocally better than simpler alternatives (e.g., just removing final-layer attention heads altogether, fine-tuning them from scratch, or replacing all heads with clean versions). 4. Missing related work: The paper does not discuss the recent backdoor attack[1] on CLIP. It would be better to include a discussion of whether the current observations still hold for it. [1] Distribution preserving backdoor attack in self-supervised learning. Tao et al. IEEE S&P 2024. Other Comments Or Suggestions: No. Questions For Authors: Please respond to each point of Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable comments. We appreciate your recognition of the novelty of our work and the empirical analysis in the paper. Below are your concerns and our corresponding responses: **Q1: Necessity of “Partial Repair” vs. Simple Replacement** **A:** Thanks for your insightful comment. Actually, we have conducted this experiment in the ablation study. Specifically, we directly replace all AHs as a simple replacement. The experimental results are shown in Table 3 (in the Appendix due to the space limit). **Table: Comparison (ASR/CACC)** |Attacks|BadNet|LC|BadCLIP| |:--:|:--:|:--:|:--:| |Repair all AHs|1.21/2.10|3.01/1.91|0.01/2.45| |Ours|21.45/52.25|17.50/51.42|25.08/53.72| From the table, we can see that repairing all AHs reduces both the ASR and CACC nearly to zero. This observation indicates that although we can directly remedy all suspect components to nearly totally eliminate the backdoor, the caused side effect is unaffordable, i.e., the CACC is too low. Therefore, we have to selectively repair certain AHs to achieve a better tradeoff between ASR and CACC, which is exactly what our proposed method does. **Q2: Adaptive Attack Analysis.** **A:** Thanks for your insightful comment. We would like to explain that the adversary generally *cannot* control the trigger to specifically blend across multiple heads and layers under the black-box setting. We really understand your concern about the robust experiments against this specific adaptive attack. Hence, we have conducted additional experiments for this purpose. Specifically, we suppose that the adversary can implant the trigger into specific AHs (i.e., the 1st, 3rd, and 5th) in certain layers (i.e., the last layer and the second-to-last layer) by replacing the clean representations of these components with the infected ones. In this way, we can use our proposed method to defend against this adaptive attack. The experimental results are shown in the following table. 
**Table: Adaptive attack and defense (BadNet)** |Method|ASR|CACC| |:--:|:--:|:--:| |Attack|82.43|54.78| |Defense|48.16|52.35| From the table, we can see that our proposed method can effectively defend against the adaptive attack. This observation indicates our method is robust to specifically designed backdoor attacks on specific AHs. **Q3: Comparisons with Simpler Baselines** **A:** Thanks for your valuable comment. Actually, we have conducted experiments with different AH repairing strategies. Specifically, we use the fixed AH strategy on three types of AHs and the random strategy. The experimental results are from Table 4 in the appendix. **Table: Different strategies (ASR/CACC)** |Strategy|BadNet|LC|BadCLIP| |:--:|:--:|:--:|:--:| |Fixed[1,2,3]|86.53/49.72|87.71/49.42|99.18/51.78| |Fixed[7,8,9]|88.68/47.86|88.74/47.51|58.12/50.18| |Fixed[10,11,12]|88.82/46.72|88.29/46.72|99.57/49.78| |Random|72.82/48.30|77.73/46.16|82.34/48.86| |Ours|21.45/52.25|17.50/51.42|0.94/56.08| From the table, we can see that the fixed and random repairing strategies cannot effectively reduce the ASRs and lose more CACCs. This is because they cannot find infected components effectively. In contrast, our method can specifically target infected components and repair their representations effectively. **Q4: Missing related work.** **A:** Thank you for pointing out the related paper. We will discuss the paper in the related work. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I kept my score as "weak accept". --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your positive assessment of our paper. Your valuable suggestions will significantly help us improve the quality and clarity of our work.
Text-to-CAD Generation Through Infusing Visual Feedback in Large Language Models
Accept (poster)
Summary: The authors use DPO/Reinforcement Learning to fine-tune an LLM to produce CAD instructions from a prompt, so that the rendered objects are ranked by a VLM. They fine-tune Llama and use prewarming to make sure it can generate CAD sequences from a prompt before alternating between DPO and direct sequential learning steps. I believe this is "On-Policy DPO". The result is a model that scores higher than GPT-4o and one other method, according to automatic and user-study-based metrics on the quality of CAD files generated. Claims And Evidence: The paper lists 4 contributions - but they don't seem tightly linked to the evaluation. Working backwards from their evaluation, they are/should be claiming: 1. Improved ability to handle simple prompts 2. Improved ability to handle (more) complex shapes -- mainly shown in qualitative eval. 3. Higher accuracy / quality according to automatic benchmark and human scores 4. Human feedback in training improves quality 5. DPO (on-policy) is effective for text-2-CAD to incorporate the appearance of the object. These things are all evaluated at different levels, each with acceptable evaluation methods. Methods And Evaluation Criteria: The methods and evaluation criteria make sense. The authors use pre-existing benchmarks for the most part, introducing the LVM score as a new metric. This is a reasonable score as far as I can tell, and it is paired with more established ones. Importantly, they include a user study. The study had 6 people (I don't know who) rank 50 samples comparing 2 versions of CADFusion to the competing approaches (GPT-4o, Text2CAD). One could complain that this is not enough, or that we do not know their qualifications (do they know anything about CAD?) or if they have potential bias -- was it a blind study where they don't know which is which? A rubric or instructions to the participants would be nice. However I am pleased that they did a user study as I think it is important for this work. 
Theoretical Claims: This is not a theory paper -- there are no proofs or theoretical claims. They present a framework and use empirical evaluation with a small user study. I would like to see discussion of static vs. on-policy iterative DPO. Experimental Designs Or Analyses: As mentioned - I would like to know more about the user study. I am pleased with the amount of qualitative results shown, and the metrics used. There do not seem to be too many baselines to compare against in the academic literature (3D not 2D CAD, from text), and the authors were not able to implement one method, so they effectively compared only to Khan's work. I could not find a more appropriate reference either, although there is "zoo.dev" (commercial) and a text_to_cad github repo (no publication). I think that what they have done is likely sufficient. Supplementary Material: Yes - the description of the user study, the qualitative examples, the failure cases. Relation To Broader Scientific Literature: I think there is a need for more editable structured representations from generative models and this is a step in that direction. The visual feedback/DPO approach seems novel. Essential References Not Discussed: I think they should mention CAD-LLM and Vitruvion. Other Strengths And Weaknesses: Strength: - Interesting approach, nice way to handle non-differentiable rendering - Good results qualitatively and quantitatively. Weakness: - I immediately found some refs that _seem_ relevant and I am concerned that they weren't mentioned. - Only comparing to one real other method (GPT-4o makes two, but it is not aimed at CAD) - User study was small / underexplained Neutral/Unsure: - Relying on LVM vs. human feedback to evaluate Other Comments Or Suggestions: L153 bad reference L217 col2 - misplaced eqn number not sure footnote 5 is needed. Questions For Authors: How does your work relate to the CAD-LLM or Vitruvion work? Can you explain your user study a bit more? Was it blind?
What were the users' qualifications? What were their instructions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, Thank you for the valuable comments. We are pleased to know that you are happy with our evaluation details including the qualitative results and user study. We hope the following clarifications address your concerns. ## More References Thanks for pointing out these relevant works. Below is our discussion of them, and we will add them to our updated manuscript. 1. CADLLM [1]: It is a work on *unconditional* CAD generation. We referenced similar works such as DeepCAD [3] and SkexGen [4]. Our work differs from them by focusing on a *conditional* task that deals with text-guided CAD generation. 2. Vitruvion [2]: It focuses on generating 2D sketches from hand drawings. Our work, in contrast, does not focus on *CAD reconstructions* that take visual inputs, but on more abstract text inputs instead. We cited similar works to Vitruvion but on 3D reconstructions [5] [6]. ## Baseline Methods As all reviewers acknowledge, the text-to-CAD task is relatively new - we have highlighted this in the introduction. As a result, there are only a few open-source baseline methods that are directly comparable to our approach. We sincerely thank you for your understanding in this matter. You are welcome to have a look at our response to reviewer `3nTM`, in which we reported a comparison between our method and an adaptation of SkexGen for text-based generation. We hope this confirms that our framework demonstrates superior capability among all existing works. ## Details of User Study Our user evaluation was conducted as a *blind* study. Participants were asked to **rank** the rendered CAD objects (without knowing their origin) based on 1. their **alignment** to the given textual descriptions, and 2. the **quality** of the CAD models. The users had college-level (or higher) education but we did not restrict their majors. Since our evaluation focuses solely on the visual appearance of the generated models, we believe this is appropriate.
We have included a description of our human evaluation setup in Appendix C.2. However, we realize that this section may not be entirely clear and could be difficult to follow. We will update and clarify this section in the camera-ready version to provide more detailed instructions and improve overall readability. ## DPO: Static vs. On-policy Yes, our DPO implementation is not entirely "static". By splitting DPO into smaller iterations and making it alternate with the sequential learning, the model being optimized is much closer to the policy model generating the sample pairs. It is a step towards an "on-policy" DPO. [1] Wu et al., CAD-LLM: Large Language Model for CAD Generation, NeurIPS 2023 Workshop. [2] Seff et al., Vitruvion: A Generative Model of Parametric CAD Sketches, ICLR 2022. [3] Wu et al., DeepCAD: A Deep Generative Network for Computer-Aided Design Models, ICCV 2021. [4] Xu et al., SkexGen: Autoregressive Generation of CAD Construction Sequences with Disentangled Codebooks, ICML 2022. [5] Ma et al., Draw Step by Step: Reconstructing CAD Construction Sequences from Point Clouds via Multimodal Diffusion. CVPR 2024. [6] Khan et al., CAD-SIGNet: CAD Language Inference from Point Clouds using Layer-wise Sketch Instance Guided Attention. CVPR 2024.
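As context for the static vs. on-policy DPO discussion above: the standard DPO objective (Equation 2 of the reviewed paper, per the reviews) can be sketched in a few lines of pure Python. This is a generic illustration, not the authors' implementation; `logp_w`/`logp_l` are illustrative names for the summed token log-probabilities of the preferred and dispreferred completions under the policy being trained, and `ref_logp_*` the same quantities under the frozen reference model.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one (preferred w, dispreferred l) pair of completions.

    Inputs are summed token log-probabilities under the trained policy
    (logp_*) and under the frozen reference model (ref_logp_*).
    """
    # Implicit reward margin: how much more the policy prefers w over l,
    # relative to the reference model.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)); shrinks as the policy's preference for w grows.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The "on-policy" flavor discussed above changes only where the pairs come from (freshly sampled from the current policy each iteration), not this loss itself.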
Summary: This paper introduces CADFusion, a framework for Text-to-CAD generation that leverages LLMs and incorporates visual feedback to improve the quality and accuracy of generated CAD models. The core contribution is a two-stage training procedure that alternates between Sequential Learning (SL) and Visual Feedback (VF) stages. In the SL stage, an LLM (LLAMA-3-8b-Instruct) is fine-tuned on a dataset of paired textual descriptions and CAD parametric sequences (represented as text tokens), using a standard cross-entropy loss. This stage aims to teach the model to generate syntactically correct and logically coherent CAD commands. The VF stage then refines the model by incorporating visual information. Multiple CAD sequences are generated and rendered into visual representations (images). A VLM is employed to automatically score these rendered images based on multi-aspect criteria (shape quality, quantity, and distribution), creating preference data. DPO is then used to train the LLM to favor sequences that produce higher-scoring (visually preferable) CAD models, thus circumventing the non-differentiability of the rendering process. The authors propose an alternating training strategy, switching between SL and VF stages to preserve the benefits of both. The paper presents both quantitative and qualitative results comparing CADFusion to two baselines: GPT-4o (with few-shot learning) and Text2CAD (Khan et al., 2024b). The authors utilize a variety of metrics, including F1 scores for sequence accuracy, Chamfer Distance, Coverage, MMD, JSD, and Invalidity Ratio for visual quality, and a VLM-based score to assess visual-textual correspondence. Human evaluations are also conducted. The main finding is that CADFusion outperforms both baselines on most metrics, particularly those related to visual quality. Ablation studies demonstrate the importance of the visual feedback stage and the alternating training strategy. 
The authors also contribute two datasets: one for sequential learning (text-sequence pairs) and another for visual feedback (preference pairs). The paper acknowledges limitations regarding the generation of very complex shapes. In essence, the paper's main contributions include the combination of sequential and visual feedback training for Text-to-CAD, the use of DPO and LVM scoring to integrate visual information, and the demonstration of improved performance compared to existing methods. ## Update after Rebuttal I thank the authors for their rebuttal. They have convincingly addressed several key concerns. They acknowledged the need for more precise claims and better highlighting of limitations regarding complex models, promising revisions. Their explanation of the VLM scoring process, including steps taken to ensure stability, was mostly clear. They also clarified the Text2CAD comparison, confirming they have results using identical prompts and will include these in the main paper. The provided clarifications and planned updates sufficiently address my main concerns, leading me to raise my score. Claims And Evidence: The paper puts forward several claims regarding the effectiveness of CADFusion, and for the most part, these are backed up by the experiments. For instance, the results in Table 1, along with the visual examples in Figure 5, make a good case that CADFusion generally outperforms the baselines, GPT-4o and Text2CAD, across a range of metrics. The ablation study in Table 2 also adds weight to this. Similarly, the benefits of the alternating training strategy are reasonably demonstrated by comparing different training variations within the ablation study. However, there are a few places where the claims could be a bit more precise. The paper often uses phrases like "significantly improves performance," but without statistical significance tests, it's hard to know for sure if the improvements are truly significant or due to chance. 
Also, while the paper shows improvements, it's implicitly suggesting that CADFusion works well for all types of CAD models. This isn't entirely supported by the evidence. The examples are mostly simpler shapes, and the authors admit in the limitation section that the method struggles with more complex designs. This limitation should be brought forward more prominently, rather than implying broad applicability. Another point concerns the VLM Score. Could different VLMs give different scores? Is the score sensitive to how the evaluation prompt is phrased? Finally, the comparison with Text2CAD is not that solid, because they are using different prompts, as shown in Table 3. In short, the core claims about the advantages of visual feedback and alternating training seem solid. But the claims about the extent of improvement and the general applicability need some toning down and more careful qualification, given the limited evaluation of complex models. Methods And Evaluation Criteria: Overall, the proposed methods and evaluation criteria seem suited to the problem of Text-to-CAD generation. The core idea of combining sequential learning with visual feedback makes sense. CAD models have this dual nature – they need to be syntactically correct (valid sequences of commands) and visually accurate (matching the intended design). The two-stage approach, using an LLM for sequence generation and then refining it with visual feedback, directly addresses this. The use of DPO to handle the non-differentiable rendering process is a clever and appropriate technical solution. It's a good way to get around the problem of backpropagating through the rendering step. Similarly, using a VLM to automatically generate preference data seems a practical approach. Using F1-scores, Chamfer Distance, Coverage, MMD, JSD, and Invalidity Ratio for evaluation covers different aspects of visual quality and validity.
However, as mentioned before, an analysis of the VLM score's properties would strengthen the evaluation. Also, current metrics don't directly address aspects like manufacturability or design practicality. Furthermore, a more complete evaluation of the complex CAD models is needed. In short, the methods and evaluation criteria make sense for the problem. The main areas for potential improvement are a more thorough analysis of the VLM score, and ideally, the inclusion of some evaluation related to design practicality and, most importantly, complex CAD models. Theoretical Claims: This paper is primarily empirical and does not present theoretical claims or proofs. Experimental Designs Or Analyses: They're mostly solid. The comparisons with GPT-4o and Text2CAD, using a mix of sequential and visual metrics, seem reasonable. The ablation studies in Table 2 also give us a good sense of how much each part (the visual feedback and the alternating training) actually contributes. The process of constructing datasets is clear. Supplementary Material: The supplementary material provided is helpful for understanding the details of the work. It covers data preprocessing, dataset construction details, additional training details, experimental results (including more quantitative results, qualitative examples, and failure cases), and prompts used. No significant issues or concerns were identified with the supplementary material itself. Relation To Broader Scientific Literature: The paper does a reasonable job of positioning itself within the existing literature. The connections to broader work on LLMs, RLHF, and DPO are appropriately established. Essential References Not Discussed: References seem adequate. Other Strengths And Weaknesses: The strengths and weaknesses of the paper have been covered in detail in the previous sections. 
Please refer to the "Claims and Evidence," "Methods and Evaluation Criteria," and "Experimental Designs and Analyses" sections for a discussion of these points. Other Comments Or Suggestions: There are a few minor typos and grammatical issues, e.g., Line 066: "texutal" -> "textual". While the paper acknowledges limitations, a slightly more expansive discussion of future work would be valuable. Questions For Authors: Please refer to the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, Thank you for your thoughtful comments and for recognizing our contributions, particularly in the design approach and experimental setup. We appreciate the opportunity to clarify the points you raised. ## Claims Could Be a Bit More Precise We recognize this problem and apologize for it. We will tone down relevant parts such as "significant performance improvements" in the updated version. The limitation regarding complex models (demonstrated in Figure 11) will also be brought into the main body. ## Could Different VLMs Give Different Scores? First, VLMs' scores depend on their base capabilities. We will add relevant discussion in the appendix. The following are some discoveries we have made: 1. LLaVA (non One-Vision) cannot give correct scores. It tends to give very high scores (e.g., 9 or 10) or very low scores (e.g., 2 or 3) randomly. 2. How LLaVA-One-Vision (LLaVA-OV) scores rendered CAD objects is quite similar to how GPT-4o does. Second, the scores are partially sensitive to the phrasing of the evaluation prompts. Specifically, models randomly over-focus on aspects such as texture and colors of the rendered CAD objects, and score some rendered CAD objects lower than others based on those factors, which leads to inconsistency. Consequently, we instruct the VLMs not to focus on these aspects (details can be found in Lines 752 and 800). After this modification, the scoring appears stable across prompts. ## Comparisons with Text2CAD We have the results where Text2CAD uses the same prompts as ours (i.e., the **solid** one). It is the comparison between rows 3 and 4 in Table 3. The result we selected for presentation in Table 1 is the setup that **favors Text2CAD the most**, which does not affect the fairness of our comparison. We will add the results with the same prompts to the main sections to make the comparison more comprehensive. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their thorough rebuttal.
After reviewing the rebuttal and considering the additional comments from other reviewers, I find the clarifications and updates compelling. As a result, I have raised my score accordingly.
Summary: This paper proposes a text-to-CAD model that leverages LLMs to generate CAD command sequences as sequential signals. The authors use a pre-trained LLM as the backbone and perform SFT on CAD parametric sequences. They further use DPO to perform RL-based fine-tuning and introduce an LVM-based scoring pipeline to construct preference data. Claims And Evidence: 1. The authors claim that they use DPO as the RL process to optimize the training. 2. An LVM is used to generate and rank the preference dataset. The ablation study supports the claims. Methods And Evaluation Criteria: The authors evaluated the method using F1 score, CD, COV, MMD, JSD, IR, LVM score and average rank. The former ones are common in CAD generation tasks. LVM score and average rank are also reasonable as the authors use large vision-language models to generate the preference dataset. Theoretical Claims: Equation 1 is a commonly used cross-entropy loss, and Equation 2 is the loss from Direct Preference Optimization. No other theoretical claims are made in this paper. Experimental Designs Or Analyses: The ablation studies support the claims of the modules. A suggestion is that partial CAD completion could be a good application and experiment, as in HNC-CAD. Supplementary Material: Yes, I examined how the dataset was constructed and additional experimental results. Relation To Broader Scientific Literature: This work incorporates powerful pre-trained LLMs into CAD generation. Could be a good start for the community. Essential References Not Discussed: HNC-CAD is able to perform partial CAD completion, which is similar to auto-regressive generation in this work and could be discussed. [1] Xu, Xiang, et al. "Hierarchical neural coding for controllable cad model generation." arXiv preprint arXiv:2307.00149 (2023). Other Strengths And Weaknesses: Weaknesses 1. I still have concerns about whether the VLM-generated captions (even with human revision) can describe the CAD models well.
Even if the training results are good, there may be some kind of overfitting to the dataset. It may still be hard for human users to generate such prompts that fit the VLM caption styles, and also difficult for human users to correctly describe the CAD models purely in text prompts. 2. The captions are scale-invariant, so how do the users adjust the numerical parameters to scale different parts of the CAD models? Is it integrated into the LLMs or do the users have to manually adjust them as a postprocessing step? Other Comments Or Suggestions: None Questions For Authors: 1. Are the users able to modify the previously generated results with additional prompts? 2. What about the generation diversity with respect to the same text prompt? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, We sincerely appreciate your thoughtful comments and the opportunity to clarify the concerns raised. ## How Well Captions Describe CAD Models and How the Model Accommodates Real Users During development, we considered similar concerns and would like to share our findings: * VLM-generated captions are effective for moderately detailed descriptions, such as structure, topology, and key component relationships. However, they struggle with precise numerical specifications, such as exact dimensions (e.g., "13.5 × 4.3 × 2"). Since designing is an iterative process, moderately detailed descriptions still provide substantial guidance to CAD modeling. * We provided guidelines to human users and empirically found that they quickly learned to interact with our model by following them. We provide more detail of the guideline and an example at the end of this response. ## Handling Numerical Parameters Currently, users must manually adjust numerical parameters as a post-processing step. In a practical design scenario, they normally create rough designs first and refine the numerical details iteratively. We believe our model provides a sufficient solution for automating the former step, and we optimistically anticipate future work that supports the latter, aiming to fully automate the design process. ## More Discussion of HNC-CAD We have discussed HNC-CAD in the introduction and related works as an unconditional CAD generation method, and cited it in our paper. Following your suggestion, we will add more discussion about HNC-CAD. Specifically, 1. HNC-CAD is also an autoregressive model. However, it trains a transformer **VQ-VAE** on **codebook** representations of **sequential** signals, while ours leverages a pretrained **LLM** (decoder only) on **stringified** representations of both **sequential and visual** signals. 2. The auto-completion task in HNC-CAD generates a random completion. It can be used jointly with the text-based CAD generation method.
By providing a CAD sequence prefix and text guidance to the framework, we achieve text-guided completion. ## User's Ability to Modify Generations with Additional Prompts We would frame this as a promising future direction called text-based editing. It relies on different model capabilities than text-based generation. For instance, the model may need to learn how to localize the place for editing based on additional prompts and make changes to that specific location. This requires not only text understanding but also CAD understanding. ## Generation Diversity Regarding the Same Prompt We did include exploration of this part! Please refer to Figure 10 on Page 18. It corresponds to Appendix C.7. [1] Khan et al., Text2cad: Generating sequential cad models from beginner-to-expert level text prompts, NeurIPS 2024. [2] Xu et al., SkexGen: Autoregressive Generation of CAD Construction Sequences with Disentangled Codebooks, ICML 2022. ## (Appendix) User Guidelines for Prompting A good prompt follows a structured description: (1) *shape overview*, (2) *shape details*, and (3) *shape applications*. Given the varying shape complexity, we encourage but do not enforce describing items (2) and (3). Below is an example caption retrieved from Figure 8, Row 2, Item 3, demonstrating this approach: ``` [Shape Overview] The 3D shape consists of a large, flat rectangular slab with two evenly spaced, identical cylindrical protrusions extending vertically from its surface. [Shape Details (Optional)] The slab provides a stable base with significant length and width compared to its thin height, while the cylinders are relatively short and have small diameters. [Shape Applications (Optional)] The overall design is symmetrical and balanced, potentially serving as a mounting base or connector. ``` We believe that such structured descriptions are clearly formatted and easy for users to interpret.
If the reviewer finds it necessary, we are happy to include detailed instructions in the camera-ready version of the paper or the code repository upon release.
Summary: Authors build upon an existing CAD data representation and introduce novel visual feedback into text2cad. A DPO algorithm is used together with LVM-based scoring to improve the text2cad pipeline. Two new datasets are also proposed (a text-CAD pair dataset and a preference dataset). Claims And Evidence: From Table 2, the evaluation scores between SL (no visual feedback) and SL-VF(pro) are pretty close; I thought with the multi-aspect preference scores the valid ratio would greatly improve. But the results do not seem to suggest so. Methods And Evaluation Criteria: Evaluations for whether or not the generation truly follows the text description is not obvious. The LVM is not tuned to assess CAD model quality and the score might not be very reliable. The human ranking score judges which output is the best out of four different models, but even the best model could still fail to adhere to some text constraints, and this score will not reflect that. Theoretical Claims: N/A Experimental Designs Or Analyses: Same as above, there is a lack of a metric for benchmarking how well the text description controls the CAD generation. Supplementary Material: Yes, I reviewed all. Relation To Broader Scientific Literature: Text2CAD is generally an interesting topic in computer-aided design. This has been less explored than other fields (image, video) due to lack of high-quality data. The paper is moving in the right direction and demonstrates some early promising results towards automating the design process. Essential References Not Discussed: N/A Other Strengths And Weaknesses: While I like the text2cad task, the technical contribution in this paper is limited. The CAD data representation is from previous work, as is the base dataset (DeepCAD) and the use of DPO. New contributions are limited to combining DPO with visual score feedback and using a VLM to annotate the DeepCAD models. But there is a lot of CAD structure and topology information not available from just a rendered image.
So I am not sure if a VLM can capture all the information from the image modality. The refined human annotation in Figure 7 looks better, but going through the text annotations in Figure 5, I find most of them are biased towards the coarse shape of the CAD model (cylinder, rectangular, holes)…. This limits the text-CAD pairs to very simple shapes (at least judging from the provided figures). Other Comments Or Suggestions: N/A Questions For Authors: 1) 20k samples is quite small for training a generative model. I wonder why the authors did not annotate the full DeepCAD/SkexGen dataset. If human annotation is not scalable (as written in the supplementary), does that mean the VLM approach proposed in the paper cannot reliably generate high-quality annotations? 2) Why are there no COV/MMD/JSD scores for the other baseline models? Also, why did the authors not compare to DeepCAD/SkexGen/HNC-CAD? It seems like the data representation is the only difference here and it is not hard to retrain those models on the provided text annotations (all are open source). 3) I might have missed this but how exactly does the VLM avoid collisions (Figure 4)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear reviewer, Thank you for your feedback. We appreciate the opportunity to clarify your concerns. ## SL(no VF) & SL-VF(pro) We are unsure what SL-VF(pro) refers to. We presume you are referring to SL-VF(rpo). If so, we would like to point out that this is **not our main method**, but **an ablation** using existing techniques to regularize and stabilize DPO (Line 410). We included it to highlight the effectiveness of our *iterative visual-sequential training* (Lines 413-426). In Table 2, our main method is **SL-VFSL(5) in the last row**, where a considerable improvement on the VLM score can be observed. ## Evaluation Criteria We regret to point this out, but there is a **factual misunderstanding** in the statement *"Evaluations for whether or not the generation truly follows the text description is not obvious."* 1. In our human evaluation, the textual descriptions are shown alongside the generation results, and annotators were asked to evaluate the alignment between the CAD model and text (Line 808). Thus, the human rankings directly reflect instruction following. 2. Our VLM score also supports this evaluation by design (Line 317). Reviewer `X4me` calls this "a reasonable score", which supports the soundness of our metrics. We would appreciate it if you could reconsider this claim after reviewing the referenced lines. ## Technical Contribution We would like to highlight that the motivation behind introducing DPO is itself a contribution. Observing **limitations of sequence learning**, we proposed using visual signals to adjust model preference: an approach that has never been previously explored. Moreover, as the rendering process is not differentiable, directly encoding visual signals and backpropagating them is infeasible. Consequently, we propose DPO as a solution. Furthermore, integrating DPO into our setup is **not a trivial application** of existing techniques.
Table 2 (Rows 3, 4) shows that directly applying DPO or standard stabilizing algorithms (RPO) does not yield strong results. We hypothesize that this is due to the **cross-modality** nature of our task. A significant portion of our work focused on overcoming challenges through SL-VFSL training. Additionally, we would like to highlight how other reviewers acknowledge our contributions. Reviewer `jpmw` calls our handling of the non-differentiable rendering process "clever and appropriate," and Reviewer `X4me` refers to our approach as "interesting" and a "nice way" of tackling the problem. ## Simple Shapes in Figures Figures 8 and 9 include CAD generations with more complex shapes and textual descriptions specifying CAD structure and topology details. We hope these results satisfy you. The reason Figure 5 contains 'simple' shapes is that we had to choose samples from the intersection of valid generations across models. We are happy to move some instances from Figures 7 and 8 to the main manuscript if you wish. ## Data Size and Captioning 1. While we acknowledge that 20k human-annotated samples could be further expanded, we chose this setting for several reasons: - First, we annotated data and trained models iteratively. At 20k samples, our method already outperformed baselines (Table 1, main experiment). - Second, the cost of annotating 20k samples is affordable for research groups of any scale. By keeping the dataset size reasonable, we aim to ensure the reproducibility of our results and provide meaningful insights to the broader community. 2. The VLM itself is reliable. The CADFusion variant trained on only VLM data with sequential signals (Table 4, row 2) outperforms baselines (Table 1, rows 1, 2), demonstrating improvement. ## COV/MMD/JSD Scores The Text2CAD paper does not report these metrics. We attempted to compute them using our evaluation code but encountered challenges when applying it to Text2CAD's code and results.
However, we have reported other metrics that effectively reflect our method's efficacy. ## Comparing to DeepCAD/SkexGen/HNC-CAD We did not compare against them as none take text as input or claim text-based capabilities. Adapting their frameworks for textual input requires modifications that constitute a new research project rather than a baseline. Although we are doubtful such a comparison should be required, we implemented a text-guided SkexGen variant by integrating a text encoder with SkexGen's decoder for Text-to-CAD generation. Training on the same setup, our results are: | Method | COV | MMD | JSD | IR | |-----------|-------|------|-------|-------| | SkexGen | 72.39 | 3.60 | 11.56 | 20.90 | | CADFusion | 90.40 | 3.49 | 17.11 | 6.20 | CADFusion outperforms SkexGen in multiple metrics, reinforcing its effectiveness. ## Collisions The term "collisions" refers to part intersections within a CAD model. The VLM does not *explicitly* prevent collisions but evaluates their presence. This scoring mechanism benefits the subsequent visual-feedback stage, where we empirically observed an improvement in collision avoidance.
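For context on the geometric metrics traded back and forth in this exchange: the Chamfer Distance (CD) used alongside COV/MMD/JSD measures the discrepancy between two point clouds sampled from the generated and reference CAD models. Below is a brute-force illustrative sketch, not the paper's evaluation code; conventions vary (squared vs. Euclidean distances, sum vs. mean), and practical evaluations use k-d trees or GPU batching.

```python
def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between two point sets a and b,
    each a list of (x, y, z) tuples. Uses squared Euclidean distances
    and averages the nearest-neighbor distance in each direction."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    def one_way(src, dst):
        # Average distance from each source point to its nearest target point.
        return sum(min(sq_dist(p, q) for q in dst) for p in src) / len(src)

    return one_way(a, b) + one_way(b, a)
```

A lower CD means the generated shape's surface samples lie closer to the reference shape's, which is why it complements distribution-level scores like MMD and JSD.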
Robust Multi-bit Text Watermark with LLM-based Paraphrasers
Accept (poster)
Summary: The paper introduces methodologies for embedding imperceptible multi-bit text watermarks. The proposed algorithm aims to fine-tune a pair of LLM paraphrasers that are designed to behave differently, so that the differences in their paraphrases, reflected in the text semantics, can be identified by a trained decoder. The effectiveness of the proposed method is demonstrated through extensive experiments. Claims And Evidence: The claims are supported by clear evidence. Methods And Evaluation Criteria: Methods: The co-training framework (encoder-decoder with PPO-based RL) is novel and appropriate for aligning watermark injection and detection. Sentence-level segmentation simplifies multi-bit encoding. Evaluation: Metrics (bit accuracy, AUC, similarity scores) are well-chosen. Baselines (RemarkLLM, KGW, KTH, Waterfall) cover representative works. Theoretical Claims: No explicit theoretical proofs are provided. Experimental Designs Or Analyses: Missing ablation study. For example, does the similarity reward $r_s$ (Equation 4) remain critical when using larger models (e.g., Llama-2-7B)? Supplementary Material: Appendices include prompts (Figures 3–6), OOD results (Table 4), examples (Table 5), and Llama-2-7B experiments (Table 6). These strengthen reproducibility and validate generalization. Relation To Broader Scientific Literature: The work builds on paraphrasing-based watermarks. It uses PPO-based RL techniques to fine-tune the encoder so that the injected watermark can be better decoded. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: The presentation is clear and easy to follow. The authors explain the proposed algorithm clearly. The performance on several datasets shows the effectiveness of the proposed algorithm. Weakness: Lack of ablation experiments to show the effectiveness of the similarity reward $r_s$ in Equation 4. Some experimental details are not clearly described.
In the experimental metrics, how do you determine that a text is watermarked (must the extracted multi-bit watermark be fully consistent with the injected one, or is it enough that most of the extracted bits are correct)? Other Comments Or Suggestions: The first occurrence of an abbreviation should be given in full (e.g., LLM in the Abstract). Questions For Authors: If a larger model is used, such as Llama-2-7B, is the similarity reward $r_s$ in Equation 4 still needed? In the experiment, during the calculation of TPR@FPR=1%, how do you determine whether a text is watermarked based on the extracted multi-bit watermark information? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Ablation study of similarity reward** (Lack of ablation experiments to show the effectiveness of the similarity reward rs in Equation 4. ... If a larger model is used, such as Llama-2-7b, is the similarity reward rs in Equation 4 still needed?) **Response**: We show the effect of the similarity reward $r_s$ with an ablation study on its coefficient $\lambda_s$, as well as another ablation study on $\lambda_k$, which controls the weight of the KL divergence term. The results are shown in the tables below. We can observe that $\lambda_s$ and $\lambda_k$ indeed control the trade-off between detectability and fidelity - when we increase the coefficient, fidelity improves but detectability decreases. This shows that the similarity reward $r_s$ is important in training a high-performance paraphraser. Regarding the second question of whether $r_s$ is still needed for larger models, we argue that it is still essential in order to preserve good paraphrasing performance. This is because if we only train the model with the watermark loss, its paraphrasing ability degrades during training. Therefore, we still need the similarity reward to maintain its performance.
| Task | bitAcc | AUC | Similarity |
| :---- | ----: | ----: | ----: |
| $\lambda_s$=5.0 | 0.7606 | 0.9028 | **0.9728** |
| $\lambda_s$=2.0 | 0.9525 | 0.9967 | 0.8961 |
| $\lambda_s$=1.0 (original) | 0.9563 | 0.9981 | 0.8739 |
| $\lambda_s$=0.5 | 0.9678 | **0.9988** | 0.8515 |
| $\lambda_s$=0.2 | **0.9722** | 0.9987 | 0.8283 |

| Task | bitAcc | AUC | Similarity |
| :---- | ----: | ----: | ----: |
| $\lambda_k$=0.1 | 0.9036 | 0.9739 | **0.8878** |
| $\lambda_k$=0.05 | 0.9284 | 0.9849 | 0.8840 |
| $\lambda_k$=0.02 (original) | 0.9563 | 0.9981 | 0.8739 |
| $\lambda_k$=0.01 | 0.9799 | **0.9991** | 0.8529 |
| $\lambda_k$=0.005 | **0.9828** | **0.9991** | 0.8489 |

**Clarification on metrics** (In the experimental metrics, how do you determine a text is watermarked (the extracted multi-bit watermark is consistent with the injected multi-bit watermark? Or most of bits in the extracted multi-bit watermark are correct).) **Response**: We apologize for the confusion. We determine whether a text is watermarked by checking the proportion of the extracted watermark bits that match the injected bits. For example, if 90% of the bits match, the text is classified as watermarked. Note that the threshold (90% in this example) is not a fixed value but is swept in order to calculate AUC and TPR@FPR=x. We will clarify the metric computation in the revision of our paper. **Writing** (The first occurrence of an abbreviation should be given in full (e.g., LLM in Abstract).) **Response**: We thank the reviewer for pointing out the writing issues. We will fix them in the revision of the paper. --- Rebuttal Comment 1.1: Comment: I have read the responses by the authors and have no further questions.
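The bit-match detection rule described in the rebuttal above (compute the fraction of matching bits, then sweep the decision threshold to obtain AUC and TPR@FPR) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; function names and example scores are assumptions.

```python
def bit_match_rate(extracted, injected):
    """Fraction of extracted watermark bits that agree with the injected bits."""
    assert len(extracted) == len(injected)
    return sum(a == b for a, b in zip(extracted, injected)) / len(injected)

def roc_auc(pos_scores, neg_scores):
    """Rank-based AUC: P(score of watermarked text > score of clean text), ties count 0.5."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

def tpr_at_fpr(pos_scores, neg_scores, max_fpr=0.01):
    """Highest TPR achievable while keeping the FPR on clean texts <= max_fpr."""
    best = 0.0
    for thr in sorted(set(pos_scores) | set(neg_scores)):
        fpr = sum(n >= thr for n in neg_scores) / len(neg_scores)
        tpr = sum(p >= thr for p in pos_scores) / len(pos_scores)
        if fpr <= max_fpr:
            best = max(best, tpr)
    return best
```

Here the match rates of watermarked texts play the role of `pos_scores` and those of clean texts `neg_scores`, so no single 90% threshold is baked in; the sweep is what produces the AUC and TPR@FPR=1% numbers reported in the tables.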
Summary: The paper proposes a method for injecting watermarks to text. The key idea proposed is to use two LLM paraphrasers (one for the '0' bit and one for the '1' bit). The decoder (classifier) and the paraphrasers are trained together in a co-training setup. The results achieved are impressive, achieving 99.9% AUC and 95% bit accuracy while using relatively small (1.1B) models. The authors also conduct robustness checks by perturbing the text and show that their method outperforms other baselines in the presence of word substitution and sentence paraphrasing attacks. Claims And Evidence: The key claims on the performance of their framework are well-supported by the experimental evidence provided. Specifically, the authors claim their framework achieves high watermark detection performance (table 1), is robust to word substitution and paraphrasing attacks (tables 2 and 3) and generalizes to out of distribution data (Appendix B). Methods And Evaluation Criteria: The evaluation criteria of using AUC and robustness checks make sense for this problem. Theoretical Claims: The paper does not have any theoretical claims. Experimental Designs Or Analyses: Overall, the experimental design and analyses are sound. The authors evaluate on multiple datasets, compare against appropriate baselines, and test different perturbation attacks. The stealthiness claims are a bit weak, as there is no subjective evaluation performed large scale by human participants. Instead the authors use a GPT based model to check. Supplementary Material: I reviewed the supplementary material, mainly out-of-distribution experiments in appendix B and examples in table 5. Relation To Broader Scientific Literature: The existing literature could be classified to pre-LLM based watermarks and LLM based watermarks. The contributions of the paper fall under LLM based text watermarking. 
Within this category, there are three key works: Remark LLM, which uses LLM paraphrasers, and KGW and KTH, which add watermarks during token generation, making them unsuitable for non-LLM-generated text. Although LLM-based watermarking was explored before, this paper shows that two LLM paraphrasers trained in a co-training setup can outperform the other methods in both detection performance and robustness. Essential References Not Discussed: I was not able to find papers that were not cited. Other Strengths And Weaknesses: Strengths: 1. The authors propose a novel approach of having an LLM paraphraser for each bit and training them together in a co-training framework. 2. They show strong empirical results on detection accuracy and robustness on multiple datasets. They also compare their work against relevant baselines to show improvement. 3. The paper is well-written and structured. Weakness: 1. There are no ablation studies on the choice of sentence segmentation. I understand that it was an assumption, but what is the impact of using a different segmentation method? 2. There are no studies on the length of the text, i.e., how does the method perform on short texts vs. large documents? 3. There is no analysis of the computational overhead compared to other baselines, considering the method uses two LLMs. Other Comments Or Suggestions: The paper is well written; the claims would be stronger with ablation studies on text length, choice of text segmentation, and a computational analysis. Questions For Authors: I put my questions in the weakness section above. Specifically, my questions concern how the performance and robustness of this method vary with text length, and how the choice of text segmentation affects performance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Different Segmentation** (what is the impact of using a different segmentation method?) **Response**: To compare the performance of different segmentation strategies, we conduct an extra experiment in which we design a "segment-by-token" strategy, where we segment the text every 20 tokens, and show the results in the tables below. We can observe that the segment-by-token strategy also works well under the normal scenario and substitution attacks. However, the performance drops significantly under translation attacks. This is because the token order changes after the text is translated back and forth, so that the segmentation of the perturbed watermarked text ($\mathcal{S}(pert(x^w))$) will differ from that of the watermarked text ($\mathcal{S}(x^w)$). This also explains why we choose our current segmentation strategy - it is generally robust under both word substitution and paraphrasing. As discussed in Section 6, we view the investigation of other segmentation strategies as important future work.

| Task | bitacc | AUC | TPR@1% | Similarity |
| :---- | ----: | ----: | ----: | ----: |
| Ours, segment-every-20-tokens | 0.9507 | **0.9987** | **98.6%** | 0.8667 |
| Ours, segment-by-sentence (original) | **0.9563** | 0.9981 | 98.0% | **0.8739** |

| Task | bitacc | AUC | TPR@1% |
| :---- | ----: | ----: | ----: |
| Ours, segment-every-20-tokens, under substitution (10%) | **0.9242** | **0.9917** | **90.1%** |
| Ours, segment-by-sentence (original), under substitution (10%) | 0.9193 | 0.9871 | 86.4% |
| Ours, segment-every-20-tokens, under translation | 0.6015 | 0.6835 | 1.8% |
| Ours, segment-by-sentence (original), under translation | **0.8206** | **0.9310** | **67.4%** |

**Different length of text** (There are no studies on the length of the text, i.e how is the performance when dealing with small texts vs large documents.) **Response**: We thank the reviewer for pointing out the need for ablation studies on input text length.
We perform the experiment of varying input text length and show the results in the table below, where "len" refers to the number of tokens in the input text. We can observe that the bit-wise accuracy remains similar, while the text-wise detection AUC grows with the input length. This is expected, as longer text (and thus a longer watermark code) provides better error tolerance for detection. Interestingly, paraphrasing similarity also grows with longer text. This is probably because shorter sentences offer fewer opportunities for change, so the paraphrasers must make more edits in order to inject watermarks.

| Task | bitAcc | AUC | Similarity |
| :---- | ----: | ----: | ----: |
| len=16 | 0.9417 | 0.8133 | 0.7639 |
| len=32 | 0.9463 | 0.8908 | 0.8064 |
| len=64 | 0.9521 | 0.9698 | 0.8396 |
| len=128 (original) | 0.9563 | 0.9981 | 0.8739 |

**Computation Overhead** (There are no analysis on the computational overload compared to other baselines, considering we use two llms.) **Response**: Our model, similar to other text watermark methods like RemarkLLM and Waterfall, uses an LLM-based paraphraser to inject watermarks into text. The main computation is the time required to run the LLM-based paraphraser, where we use two small models (1.1B). Since we need to do a forward pass for two models, the runtime is approximately that of running a 2.2B model. By comparison, the Waterfall approach runs a 13B model. The earlier RemarkLLM work uses the T5 model, which is a smaller 220M model. We hypothesize that such a small model may lead to relatively lower paraphrasing performance. As shown in Table 1 in our paper, their similarity score is around 0.8 while ours can achieve >0.87. In addition, we show the average time to run one watermark injection process in the table below. Surprisingly, the runtime of our method and RemarkLLM (for which we use their open-source implementation) is roughly the same.
We attribute this to the fact that current LLM packages, e.g., Huggingface Transformers, are better optimized for more recent models like Llama.

| Method | # Parameters | Average Runtime (sec) |
| :---- | ----: | ----: |
| RemarkLLM | 220M (T5) | 1.13 |
| Waterfall | 13B (Llama) | 7.92 |
| Ours | 1.1B*2 (TinyLlama) | 1.23 |

--- Rebuttal Comment 1.1: Comment: Thanks for the explanation. I am happy to upgrade my decision to an accept, as this is a good solid methodology paper demonstrating strong performance improvements. --- Reply to Comment 1.1.1: Comment: We are glad that the reviewer appreciates our methodology and performance. Thank you for your thoughtful review and positive feedback!
Summary: The paper presents a robust multi-bit text watermarking method that leverages LLM-based paraphrasers to embed imperceptible watermark signals into text while maintaining semantic fidelity. The approach involves fine-tuning a pair of paraphrasers designed to generate text variations that encode a predefined binary watermark at the sentence level. A trained LLM-based text classifier is then used as a decoder to retrieve the watermark from the modified text. The method employs a co-training framework using Proximal Policy Optimization (PPO), where the encoder (paraphraser) and decoder (classifier) are trained iteratively to optimize watermark embedding and extraction. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The paper proposes a Robust Multi-bit Text Watermark. Essential References Not Discussed: Yes Other Strengths And Weaknesses: Strength - Unlike traditional watermarking approaches that rely on lexical substitutions, this method leverages paraphrasing, providing a larger action space for embedding watermarks while maintaining naturalness in generated text. - The proposed watermarking method achieves over 99.99% detection AUC, outperforming existing techniques. Weakness - The method's performance is highly reliant on model initialization and carefully chosen hyperparameters (e.g., λw, λs, λk). To what extent do these hyperparameters influence the effectiveness of the approach? This dependency raises concerns regarding the method's reproducibility and its robustness when applied to different datasets or scaled-up models. - The experiments primarily use relatively small models, such as TinyLlama-1.1B. Could you discuss why larger models, like Llama-2-7B, show only marginal performance improvements?
- If the watermarking technique becomes publicly available, an attacker could develop an adaptive strategy specifically designed for the watermarking process. How can the proposed watermarking method effectively defend against such adaptive attacks? Other Comments Or Suggestions: See the Weaknesses. Questions For Authors: See the Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Ablation Study** (The method's performance is highly reliant on model initialization and carefully chosen hyperparameters (e.g., λw, λs, λk). To what extent do these hyperparameters influence the effectiveness of the approach?) **Response**: We agree with the reviewer that the hyperparameter choices are important to the performance of our pipeline. To evaluate their impact, we conduct ablation studies that fix $\lambda_w$ and change the other two coefficients. The results are shown in the tables below. We can observe that $\lambda_s$ and $\lambda_k$ indeed control the trade-off between detectability and fidelity - the larger these coefficients, the better the paraphrasing performance but the worse the watermark detectability. Nevertheless, we argue that in most cases, performance is good for both detectability and fidelity.

| Task | bitAcc | AUC | Similarity |
| :---- | ----: | ----: | ----: |
| $\lambda_s$=5.0 | 0.7606 | 0.9028 | **0.9728** |
| $\lambda_s$=2.0 | 0.9525 | 0.9967 | 0.8961 |
| $\lambda_s$=1.0 (original) | 0.9563 | 0.9981 | 0.8739 |
| $\lambda_s$=0.5 | 0.9678 | **0.9988** | 0.8515 |
| $\lambda_s$=0.2 | **0.9722** | 0.9987 | 0.8283 |

| Task | bitAcc | AUC | Similarity |
| :---- | ----: | ----: | ----: |
| $\lambda_k$=0.1 | 0.9036 | 0.9739 | **0.8878** |
| $\lambda_k$=0.05 | 0.9284 | 0.9849 | 0.8840 |
| $\lambda_k$=0.02 (original) | 0.9563 | 0.9981 | 0.8739 |
| $\lambda_k$=0.01 | 0.9799 | **0.9991** | 0.8529 |
| $\lambda_k$=0.005 | **0.9828** | **0.9991** | 0.8489 |

**Larger Model** (Could you discuss why larger models, like Llama-2-7B, show only marginal performance improvements?) **Response**: We attribute this to the paraphrasing task being relatively easy, so that small models can be fine-tuned to achieve good performance.
For example, although the PEGASUS paraphrasing model was proposed in 2020 and has fewer than 600M parameters, it has good paraphrasing performance and is still widely used in current paraphrasing tasks. With the similarity reward included during our RL process, our 1.1B models are fine-tuned to be good paraphrasers. Therefore, using larger models may only provide marginal improvements. **Adaptive Attacks** (How can the proposed watermarking method effectively defend against such adaptive attacks?) **Response**: We thank the reviewer for pointing out the possibility of adaptive attacks. We can think of two possible adaptive attacks. First, the adversary can attack the detection model with adversarial ML techniques, which aim to slightly change the text so that the detection model's output changes greatly. For this attack, we argue that the detection model parameters will be kept private by the watermark provider and will not be available to the adversary. Therefore, this is a black-box adversarial attack on LLM-based detectors, for which, as far as we can tell, no well-recognized attack method works well. We welcome suggestions on potential attacks against our watermark detector under the black-box setting. Another possible adaptive attack is to exploit the text segmentation process. For example, knowing that our watermark is segmented based on sentences, the adversary may try to insert or delete sentences so that the watermark code cannot be recognized. That is to say, suppose the text is assigned watermark code 1010110; the adversary can delete the second sentence to make it 110110, so that the position-wise matching rate between the ground-truth and decoded codes will be low (only 2 bits match). To mitigate this exact problem, we may use the longest common subsequence algorithm to calculate the match rate (in this case, all 6 remaining bits match, since deleting a sentence leaves a subsequence of the original code).
In general, we argue that the segmentation method can also be varied and kept private, thus reducing the risk of being hacked. For example, in the responses to Reviewers ZSdw and yNYD, we show that we may also segment the text every 20 tokens and still achieve good watermark performance.
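The deletion-tolerant matching proposed at the end of this rebuttal can be sketched as a standard longest-common-subsequence dynamic program. This is an illustrative reconstruction, not the authors' implementation; the function names are assumptions.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two bit strings."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            # Extend the diagonal on a match, otherwise carry the best so far.
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_match_rate(injected, decoded):
    """Deletion-tolerant match rate: matched bits over the injected code length."""
    return lcs_length(injected, decoded) / len(injected)
```

For the rebuttal's example pair (1010110 vs. 110110), position-wise comparison credits only 2 bits, while subsequence matching credits every bit that survives the sentence deletion.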
Summary: The authors proposed a multi-bit text watermark that paraphrases a piece of text to inject watermark signals. The watermark consists of an encoder-decoder pair. The encoder is fine-tuned to generate text that is classified by the decoder. The decoder is trained with a classification loss to better distinguish between bit-0 texts and bit-1 texts. Claims And Evidence: Yes. Methods And Evaluation Criteria: Are there any other evaluation metrics to evaluate the performance of the watermarked texts? Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, almost. Relation To Broader Scientific Literature: Other researchers working on LLMs may be interested in the topic of this paper. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths of the paper: 1. The paper is well-written and easy to follow. 2. The problem is of great value to investigate. 3. An overview of the proposed model is provided in the figure. Weaknesses of the paper: 1. The text segmentor simply considers each sentence in the text as a segment. Are there better segmentation strategies? 2. The authors consider similarity etc. as evaluation metrics to evaluate the watermarked texts. Are there other evaluation metrics that could be used for evaluation purposes? 3. How well does the proposed watermarking avoid additional computational burden? 4. The authors are encouraged to make the source code of the proposed model publicly available so that the experimental results are convincing to other researchers. 5. Does the proposed watermark method change the parameters of the original LLM? If yes, how does the change in parameters affect the performance of the original LLM? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Different Segmentor** (The text segmentor simply considers each sentence in the text as a segment. Are there better segmentation strategies?) **Response**: We agree with the reviewer that we use a simple segmentation strategy which splits text by sentences. Nevertheless, we argue that our current strategy is an effective choice, since the segmentation remains robust under word substitution and paraphrasing. To compare the performance of different segmentation strategies, we conduct an extra ablation experiment. We segment the text every 20 tokens and show the results in the tables below. We can observe that the segment-by-token strategy also works well under the normal scenario and substitution attacks. However, the performance drops significantly under translation attacks. This is because the token order changes after the text is translated back and forth, so that the segmentation of the perturbed watermarked text ($\mathcal{S}(pert(x^w))$) will differ from that of the watermarked text ($\mathcal{S}(x^w)$). As discussed in Section 6, we view the investigation of other segmentation strategies as an important future direction.

| Task | bitacc | AUC | TPR@1% | Similarity |
| :---- | ----: | ----: | ----: | ----: |
| Ours, segment-every-20-tokens | 0.9507 | **0.9987** | **98.6%** | 0.8667 |
| Ours, segment-by-sentence (original) | **0.9563** | 0.9981 | 98.0% | **0.8739** |

| Task | bitacc | AUC | TPR@1% |
| :---- | ----: | ----: | ----: |
| Ours, segment-every-20-tokens, under substitution (10%) | **0.9242** | **0.9917** | **90.1%** |
| Ours, segment-by-sentence (original), under substitution (10%) | 0.9193 | 0.9871 | 86.4% |
| Ours, segment-every-20-tokens, under translation | 0.6015 | 0.6835 | 1.8% |
| Ours, segment-by-sentence (original), under translation | **0.8206** | **0.9310** | **67.4%** |

**Other evaluation metrics** (The authors consider similarity etc. as evaluation metrics to evaluate the watermarked texts.
Are there other evaluation metrics that can be taken for evaluation purpose?) **Response**: We thank the reviewer for bringing up the question of metric design. We use three types of metrics to evaluate the watermarked texts: bit-wise accuracy, text-wise accuracy, and fidelity. These are the metrics most commonly used in related works (e.g., RemarkLLM, Waterfall). We welcome suggestions on other metrics that could be helpful for evaluating watermark performance. **Computation** (How good is the proposed watermarking in terms of avoiding additional computational burden?) **Response**: Our model, similar to other text watermark methods like RemarkLLM and Waterfall, uses an LLM-based paraphraser to inject watermarks into text. The main runtime overhead is the time required to run the LLM-based paraphraser. To reduce computational burden, we use two small models (1.1B) and show that we can achieve good paraphrasing performance. By comparison, the Waterfall approach runs a 13B model. Nevertheless, we notice that the earlier RemarkLLM work uses the T5 model, which is a smaller 220M model. We hypothesize that such a small model may lead to relatively lower paraphrasing performance. As shown in Table 1 in our paper, their similarity score is around 0.8 while ours can achieve >0.87. In addition, we show the average time to run one watermark injection process in the table below. Surprisingly, the runtime of our method and RemarkLLM (for which we use their open-source implementation) is roughly the same. We attribute this to the fact that current LLM packages, e.g., Huggingface Transformers, are better optimized for more recent models like Llama.
| Method | # Parameters | Average Runtime (sec) |
| :---- | ----: | ----: |
| RemarkLLM | 220M (T5) | 1.13 |
| Waterfall | 13B (Llama) | 7.92 |
| Ours | 1.1B*2 (TinyLlama) | 1.23 |

**Open-sourcing** (The authors are encouraged to make the source code of the proposed model publicly available such that the experimental results are convincing to other researchers.) **Response**: We thank the reviewer for pointing out the importance of open-sourcing. We do have an open-source plan and promise to release the code of our work if the paper is accepted. **Original Model Parameters** (Does the proposed watermark method change the parameters of the original LLM?) **Response**: As a clarification, we have two different classes of LLMs in our pipeline. First, for watermark injection, we have "paraphrasing LLMs" that paraphrase the text to inject the watermark. The parameters of these paraphrasing LLMs are fine-tuned so that they can be used to inject watermark signals. Second, as a text watermark is often applied to watermark LLM-generated texts, we apply our watermark method to texts that are generated by a "source LLM". We hypothesize that the reviewer is referring to this source LLM, whose parameters are not changed by our watermark algorithm.
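The two segmentation strategies compared across these rebuttals (segment-by-sentence vs. fixed 20-token windows) might be sketched as follows. The regex sentence splitter and whitespace "tokens" are simplifying assumptions for illustration, not the authors' actual segmentor or tokenizer.

```python
import re

def segment_by_sentence(text):
    """Split on sentence-ending punctuation; each segment carries one watermark bit."""
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

def segment_by_tokens(text, window=20):
    """Fixed-size token windows (whitespace tokens stand in for real tokenizer output)."""
    tokens = text.split()
    return [' '.join(tokens[i:i + window]) for i in range(0, len(tokens), window)]
```

Sentence boundaries tend to survive paraphrasing and back-translation better than fixed token offsets, which is consistent with the translation-attack results reported in the rebuttal tables above.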
Diffusion Instruction Tuning
Accept (poster)
Summary: This paper introduces Lavender, the first framework designed to directly align the attention layers of vision-language models (VLMs) with Stable Diffusion. Notably, Lavender is model-agnostic, and the authors evaluate it across multiple pretrained VLMs, demonstrating its strong generalization on both in-distribution and out-of-distribution (OOD) data. Furthermore, Lavender fine-tunes efficiently, requiring only 0.13M processed pairs for training. ## update after rebuttal I have read the authors' responses. The authors have provided useful additional experiments, but not strong enough to justify a 5. Since my original score was 4, I will maintain it. Claims And Evidence: Yes Methods And Evaluation Criteria: The paper fails to discuss whether the choice of diffusion architecture matters. - The paper leverages SDv1.4 which uses UNet architecture. Does the analysis hold for DiT-based models? Theoretical Claims: Some of the assumptions in the Bayesian justification appear to be too strong. - The assumption that a single ideal attention mechanism exists could be too strong - In Appendix G, the assumption that the cross-entropy terms are approximately equal seems too strong. The transition from Equation (23) to (25) is unclear—how does $y_q$ get replaced by $y$? Experimental Designs Or Analyses: Yes, the experimental designs in sections 5, 6, 7 is reasonable and valid. Supplementary Material: Yes, all Appendix that has been referred in the main paper Relation To Broader Scientific Literature: This work represents an effort to leverage diffusion models (image generation), to enhance image understanding tasks. The proposed method demonstrates robust performance gains, suggesting a promising direction for harnessing the capabilities of generative models to improve understanding tasks. Essential References Not Discussed: No Other Strengths And Weaknesses: Weakness: - The mathematical symbols used throughout the paper are often not clearly defined. 
- The symbols ($x$, $y_l$, $y_q$, etc.) in sec 2.1 lack explicit definitions. - The meaning of $h, l, t, p$ in L216 $w^{hl}_{(t,p)}$ is not specified. - In Line 1433, it is stated that $A^{(l)} \in R^{N_{text} \times N_{patch}}$; however, the aggregation function used over the multiple heads is not explicitly described. - The implementation details for "root word match" and "exact word match" are unclear. Given that text is tokenized into subwords rather than words, how are these matches computed? - In Figure 9, it appears Lavender shows negative gain on Hateful Memes; the paper should provide analysis and discussion to explain this result. - In Figure 10, Lavender fails to outperform AR full-FT on OCRBench and DocVQA, both of which involve text recognition. This raises several questions: - Does Lavender not work well in improving the model’s ability to recognize text? - Could this be related to the text generation capabilities of diffusion models? Or is it due to the current attention aggregation method? - Would switching to a diffusion model with stronger text generation abilities improve performance? - Additionally, the paper could provide qualitative examples for OCR-related tasks to visualize the attention map in such cases. Other Comments Or Suggestions: Figure 1 could benefit from more distinguishable colors for the baseline to improve readability. Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Methods And Evaluation Criteria**: > "… the choice of diffusion architecture matters … Does the analysis hold for DiT-based models?" > Thank you for highlighting this important point, previously discussed with Reviewers Xyv6 and xQnL. Briefly, Lavender's effectiveness indeed depends on the chosen diffusion model. We conducted additional [visual](https://anonymous.4open.science/r/2134/1_different_attns/1.jpg) and [quantitative experiments](https://anonymous.4open.science/r/2134/README.md) using Stable Diffusion v1.4, v2, and Flux (with cross-attention and ConceptAttention [1]); the detailed setup is provided in the table caption. These experiments showed generally improved attention quality and performance with advanced diffusion models, despite persistent OCR-related challenges. [1] Helbling et al., 2025. ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features. --- ### Theoretical Clarifications and Concerns: > "Some of the assumptions … too strong. > > - …single ideal attention ... > - In Appendix G ... > - The transition from … We thank the reviewer for this detailed and constructive feedback and respond as follows: - We agree that assuming a single ideal attention mechanism is too strong, as transformer attentions are high-dimensional and task-specific across dimensions. We will clarify that our reference is to ideal attention in vision-centric tasks—Lavender’s focus—and thus motivate our design choice to aggregate high-dimensional attention into single-channel per-word maps to model word-to-region correlations (lines 216–219).
Additionally, our Aligner network helps prevent overwriting attentions useful for other tasks, addressing potential interference, as positively noted by Reviewer Xyv6: *"Importantly, they introduce measures to handle catastrophic forgetting: e.g., an Aligner network (small learnable layers) to project VLM attention into the diffusion attention space and strategies like LoRA fine-tuning to preserve original model capabilities."* - Regarding the assumption of approximately equal cross-entropy terms, we clarify that it specifically applies to vision-centric word-to-region attention correlation rather than the entire VLM attention. - Concerning the transition from Equation (23) to (25), we clarify: - The VLM processes an image $x$, question $y_q$, and answer label $y_l$, modeling $p(y_l | x, y_q; \theta)$. - The Diffusion Model (DM), however, is conditioned on a unified text $y$, modeling $p(x | y; \theta_D)$. - In our Preprocessing Stage 1 (Algorithm 2), the DM processes image-question pairs, hence replacing $y_q$ with $y$. We will clarify this explicitly in the revision. --- ### Weaknesses (clarity and definitions): > "The mathematical ... > > - The symbols ... > - The meaning ... > - In Line 1433 ... > - The implementation details … We thank the reviewer for identifying these points and acknowledge that clarity was compromised by space constraints. We will revise the manuscript to explicitly define: - Symbols $x, y_q, y_l$, as image, question, and label answer, respectively. - Symbols $h, l, t, p$ in $w_{(t,p)}^{hl}$, indicating a single attention weight where $h \in H, l \in L, t \in T, p \in P$, with $H, L, T, P$ being heads, layers, tokens, and patches, respectively. - Clarify the mean or max aggregation function is applied over the multi-head attention. - Clarify "root word match" and "exact word match" are post-processing steps on fully generated and decoded answers prior to loss computation and backpropagation. > "In Figure 9 .." 
> The negative gain observed in the hateful memes dataset arises primarily because it uniquely employs a ranking classification task, unlike the captioning tasks used for training (Laion50k, Flickr30k) and the other six benchmarks, as specified in lines 1721-1725. We will explicitly add this analysis in the revised Figure 9 discussion. > "In Figure 10 ..." > Thank you for highlighting this point, previously discussed with Reviewers Xyv6 and 2Jc6. We observed degraded performance when mixing OCRVQA datasets, mainly due to the diffusion model's weaker text attention compared to object recognition. Visual examples illustrating these limitations were provided in earlier responses ([anonymized link](https://anonymous.4open.science/r/2134/2_challenge/1.jpg)). Since Lavender is model-agnostic, alternative diffusion or OCR-specialized models could enhance performance. Specifically, we suggest: - Leveraging attention maps from specialized OCR models. - Using more advanced diffusion models (e.g., Flux with ConceptAttention [1]). - Increasing inversion steps, as shown in preliminary experiments ([anonymized link](https://anonymous.4open.science/r/2134/3_scale_inversion/1.jpg)). --- ### Other Comments or Suggestions: > "Figure 1 could benefit from more distinguishable colors for the baseline to improve readability." > We will fix this in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the response and additional experiments! I will maintain the score.
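To make the per-word aggregation clarified in the rebuttal above concrete (collapsing the heads $h$ and layers $l$ of $w_{(t,p)}^{hl}$ into one single-channel map per token via mean or max), here is a minimal sketch. The `[L, H, T, P]` tensor layout, function name, and renormalisation step are our assumptions, not the paper's code:

```python
import torch

# Minimal sketch of per-word attention aggregation, assuming an attention
# tensor of shape [L, H, T, P] (layers, heads, text tokens, image patches).
def per_word_maps(attn: torch.Tensor, mode: str = "mean") -> torch.Tensor:
    """Collapse the layer and head dimensions into a single-channel
    attention map per text token."""
    if mode == "mean":
        maps = attn.mean(dim=(0, 1))   # -> [T, P]
    elif mode == "max":
        maps = attn.amax(dim=(0, 1))   # -> [T, P]
    else:
        raise ValueError(f"unknown mode: {mode}")
    # Renormalise so each token's map is a distribution over patches.
    return maps / maps.sum(dim=-1, keepdim=True).clamp_min(1e-8)
```

Either aggregate yields one map per word that can then be compared against the diffusion model's corresponding per-word map.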
Summary: The paper introduces Lavender, a supervised fine-tuning (SFT) method for enhancing vision-language models (VLMs). It aligns the core transformer attention in VLMs with the attention maps of Stable Diffusion during SFT. This approach enriches the model's visual understanding, improves text generation quality, and is highly data-efficient, requiring only 0.13 million training examples. Experiments on multiple VLMs, such as Llama-3.2-11B and MiniCPM-Llama3v2.5, show significant performance gains of up to 30% on various benchmarks and a 68% boost on out-of-distribution tasks like WorldMedQA. The authors also conduct ablation studies to analyze the key components of Lavender. Claims And Evidence: Claims: The paper claims that Lavender can effectively improve the performance of VLMs, enhance word-to-region alignment, and be more data-efficient than traditional methods. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper provides a Bayesian framework to justify the inclusion of the attention alignment loss. The assumptions made in the framework, such as the DM's attention being closer to the optimal posterior attention distribution, are reasonable and supported by empirical evidence (e.g., lower entropy of DM attention). The derivations are clear and contribute to the theoretical foundation of the Lavender method. Experimental Designs Or Analyses: Experimental Designs: The experimental designs are generally sound. The authors test Lavender on different VLMs with cross-attention and self-attention mechanisms. They vary the training datasets, fine-tuning strategies (e.g., LoRA and full fine-tuning), and attention aggregation methods. The use of a smaller OpenFlamingo model for initial verification and then scaling up to larger models is a logical approach. Supplementary Material: No Supplementary Material in this paper. 
Relation To Broader Scientific Literature: The paper clearly situates its key contributions in the context of the broader scientific literature. It reviews the development of VLMs, the challenges in training them, and existing approaches to address these challenges. It also discusses how Lavender differs from previous methods, highlighting its novelty in directly aligning VLM transformer attention layers with Stable Diffusion. Essential References Not Discussed: No essential references seem to be missing. The paper covers a wide range of related works, from the development of VLMs and DMs to existing fine-tuning and alignment methods. Other Strengths And Weaknesses: Strengths: Novelty: The idea of aligning VLM attention with that of Stable Diffusion is innovative and shows great potential for improving VLMs. Detailed formalization: The authors establish a Bayesian framework to formalize the objective of aligning the attention mechanism of Vision-Language Models (VLMs) with that of Diffusion Models (DMs). Data-efficiency: Requiring only 2.5% of typical large-scale SFT datasets makes Lavender a practical and resource-friendly solution. Generalizability: The strong performance on out-of-distribution tasks like WorldMedQA demonstrates its potential for real-world applications. Weaknesses: Only experiment on Stable Diffusion v1.4: Using an older version of Stable Diffusion may limit the accuracy of attention maps. Upgrading to higher-resolution models could improve performance but also bring resource challenges. I am curious whether experiments with SD XL or Flux or even video diffusion models will bring new phenomena. Other Comments Or Suggestions: It would be interesting to see the performance of Lavender on more diverse and larger datasets to better understand its scalability. Exploring the use of different diffusion models or incorporating additional visual information sources could further enhance the method. Questions For Authors: No. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Weaknesses and Limitations: > "Only experiment on Stable Diffusion v1.4 …" > Thank you for highlighting this important point. We acknowledge that Lavender’s performance indeed depends on the chosen diffusion model. While our current results with Stable Diffusion v1.4 demonstrate strong attention quality compared to standard VLMs on general images, we recognize its limitations, particularly on tasks like OCR, as previously discussed with Reviewer Xyv6. To address your concerns and demonstrate Lavender’s generalizability, we've prepared visual examples comparing diffusion models (**Stable Diffusion v1.4**, **Stable Diffusion v2**, and **Flux** with cross-attention and ConceptAttention [1]) across various image types, accessible via [an anonymized link](https://anonymous.4open.science/r/2134/1_different_attns/1.jpg). These examples generally show improved attention quality with more advanced models, though challenges remain, especially in OCR tasks, as illustrated [here](https://anonymous.4open.science/r/2134/2_challenge/1.jpg). Additionally, we quantitatively evaluated Lavender by extracting attention from Flux (the latest DiT-based model) using cross-attention and ConceptAttention. Due to computational constraints, we limited the evaluation to approximately 2,000 OCRVQA image-text pairs, fine-tuned Lavender-Llama-3.2 with LoRA, and tested across eight benchmarks. Preliminary results (shown below) indicate that better attention from advanced models further improves Lavender’s effectiveness, supporting its model-agnostic capability. 
---

| Attention Model | DocVQA_VAL | InfoVQA_VAL | MME | MMMU (val) | OCRBench | POPE | RealWorldQA | HallusionBench (overall) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SD14 Cross Attn | 73.26 | 47.71 | 1721.16 | 39.67 | 707 | **88.49** | 54.12 | **29.09** |
| Flux Cross Attn | 74.05 | 48.01 | 1706.30 | 39.22 | 716 | 88.43 | 54.51 | 26.97 |
| Flux Concept Attn | **78.47** | **51.88** | **1787.91** | **39.78** | **750** | 88.20 | **57.26** | 27.67 |

[1] Helbling et al., 2025. ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features. --- ### Other Comments or Suggestions: > "It would be interesting … understand its scalability." > We thank the reviewer for highlighting this important point. Indeed, as explicitly noted in the Limitations and Future Works (lines 405-409) and Appendix D (lines 1140-1141): *"Lavender was evaluated on datasets of up to 0.13M samples, constrained by available compute resources."* This constraint primarily reflects our limited compute resources rather than any inherent scalability issue with Lavender. Nevertheless, we did examine Lavender’s scaling within our limits (Section 6.4 and Figure 13), showing continued improvement with increasing data size. Appendix D (lines 1142-1143) specifically states this: *"Figure 17 demonstrates non-convergent scaling behaviour, suggesting that further scaling of both dataset size and tuning length could lead to additional improvements in overall performance with Lavender."* We hope these findings encourage resource-rich groups to further explore this "*avenue for cross-model knowledge transfer,*" as Reviewer Xyv6 described, and this promising "*direction for harnessing the capabilities of generative models,*" as noted by Reviewer qVhe. Finally, as emphasized in Section 10 (Impact Statement, lines 463-476), Lavender uniquely benefits smaller research groups, enabling efficient knowledge transfer from large pre-trained models without extensive resources: *"Data Scarcity. 
Both the language and vision communities face current or impending data shortages... End-to-end training from scratch is resource-intensive and often infeasible. Large-scale LLM-finetuned VLMs and DMs have been trained on multi-billion-level datasets, making it inefficient if their knowledge remains isolated. Lavender offers a new approach to bridge these large model families using limited resources—requiring as little as a few thousand data points and one day of training on 8 Nvidia A10G GPUs (24GB memory each)—while enabling small models (<13B) to achieve performance on par with large models (>50B) across multiple benchmarks."* --- > "Exploring the use of different diffusion models or incorporating additional visual information ..." > We appreciate this insightful recommendation. Indeed, Lavender is fundamentally model-agnostic and is designed to leverage diverse and more specialized attention sources. Beyond the examples discussed earlier, in future work we envision several directions to explore this further: - Leveraging attention maps from specialized OCR models. - Using more advanced diffusion models (e.g., Flux). - Increasing inversion steps, as shown in preliminary experiments ([anonymized link](https://anonymous.4open.science/r/2134/3_scale_inversion/1.jpg)). We will incorporate these insights into the manuscript to clarify future research directions.
Summary: The paper introduces Lavender, a novel framework that enhances image-to-text generation in vision-language models by aligning their attention mechanisms with text-to-image diffusion models, specifically Stable Diffusion. The key motivation is that diffusion models, which reconstruct images at the pixel level, capture more precise attention maps with finer spatial granularity than standard VLMs optimized solely for textual output. Claims And Evidence: The claims made in the submission are largely supported by empirical evidence, including benchmark evaluations, ablation studies, and theoretical justification through Bayesian reasoning. Methods And Evaluation Criteria: The paper introduces Lavender, which aligns VLM attention with that of Stable Diffusion to enhance image-to-text tasks. The benchmarks used, including question answering, captioning, and out-of-distribution tests, appropriately measure the model’s ability to generalize and handle real-world scenarios. The inclusion of WorldMedQA for multilingual medical questions is particularly effective in demonstrating Lavender’s robustness to domain shifts, making the evaluation framework suitable for this application. Theoretical Claims: The proofs are logically structured but rely on empirical justification rather than formal theoretical guarantees. Experimental Designs Or Analyses: The experimental design is well-structured, testing Lavender on large-scale benchmarks and baseline models. The study controls for data overlap to prevent benchmark leakage, ensuring fair evaluation. The paper also examines scaling behavior, showing that Lavender improves generalization without overfitting. Supplementary Material: I reviewed Appendices E, G, L, and N, focusing on theoretical justification, implementation details, and qualitative results. Relation To Broader Scientific Literature: The paper builds on vision-language model fine-tuning and diffusion model attention mechanisms. 
Prior works aligned image-text representations at the encoder level, but Lavender is the first to align transformer attention maps directly. It extends ideas from Stable Diffusion cross-attention and VLM tuning (LLaVA, OpenFlamingo), improving word-to-region alignment. Compared to autoregressive fine-tuning, Lavender shows better generalization with minimal data. Its focus on OOD robustness connects to multimodal domain adaptation, expanding prior research in efficient multimodal learning while reducing data reliance. Essential References Not Discussed: No, I'm not familiar with this area. But I think the authors have great references. Other Strengths And Weaknesses: The experimental evaluation dataset in the paper has a maximum of only 0.13M samples, which is much smaller than the 5M to 50M level datasets used by existing state-of-the-art models. This limits the ability to fully evaluate Lavender's scalability with larger data sizes. Mixing OCRVQA datasets with other datasets can sometimes degrade performance, implying a risk of overfitting on specific data. Other Comments Or Suggestions: Minor Typos – "Toekn" → "Token" (multiple occurrences) "alginment" → "alignment" Questions For Authors: The paper shows strong quantitative results, but it does not discuss where Lavender fails or underperforms. Could the authors provide a deeper failure case analysis? Are there scenarios where Lavender struggles, such as highly complex or ambiguous image-text relationships? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### Weaknesses: > "The experimental evaluation dataset ... scalability with larger data sizes." > We thank the reviewer for highlighting this important point. Indeed, as explicitly noted in the Limitations and Future Works (lines 405-409) and Appendix D (lines 1140-1141): *"Lavender was evaluated on datasets of up to 0.13M samples, constrained by available compute resources."* This constraint primarily reflects our limited compute resources rather than any inherent scalability issue with Lavender. Nevertheless, we did examine Lavender’s scaling within our limits (Section 6.4 and Figure 13), showing continued improvement with increasing data size. Appendix D (lines 1142-1143) specifically states this: *"Figure 17 demonstrates non-convergent scaling behaviour, suggesting that further scaling of both dataset size and tuning length could lead to additional improvements in overall performance with Lavender."* We hope these findings encourage resource-rich groups to further explore this "*avenue for cross-model knowledge transfer,*" as Reviewer Xyv6 described, and this promising "*direction for harnessing the capabilities of generative models,*" as noted by Reviewer qVhe. Finally, as emphasized in Section 10 (Impact Statement, lines 463-476), Lavender uniquely benefits smaller research groups, enabling efficient knowledge transfer from large pre-trained models without extensive resources: *"Data Scarcity ... End-to-end training from scratch is resource-intensive and often infeasible. Large-scale LLM-finetuned VLMs and DMs have been trained on multi-billion-level datasets, making it inefficient if their knowledge remains isolated. 
Lavender offers a new approach to bridge these large model families using limited resources—requiring as little as a few thousand data points and one day of training on 8 Nvidia A10G GPUs (24GB memory each)—while enabling small models (<13B) to achieve performance on par with large models (>50B) across multiple benchmarks."* --- > "Mixing OCRVQA datasets ..." > Indeed, we observed performance degradation when mixing OCRVQA datasets with others, primarily due to the limited OCR capabilities of the employed diffusion model. Diffusion models, optimized mainly for image generation, tend to exhibit weaker attention on text compared to general object recognition. We previously discussed this limitation extensively with Reviewer Xyv6, providing visual evidence through [an anonymized link](https://anonymous.4open.science/r/2134/2_challenge/1.jpg) to illustrate attention map failures, specifically for OCR tasks. Lavender is model-agnostic and can leverage more appropriate teacher models with better-aligned attention maps, as noted in Section 10, Impact Statement, page 9, line 494: *"Currently, Lavender’s alignment objectives are derived from Stable Diffusion model’s attention maps. However, the same approach could be applied to other vision foundation models with well-aligned per-word attention maps."* Therefore, to address this limitation, we propose the following potential solutions: - Leveraging attention maps from specialized OCR models. - Using more advanced diffusion models (e.g., Flux with ConceptAttention [1]), with [visual](https://anonymous.4open.science/r/2134/1_different_attns/1.jpg) and [quantitative results](https://anonymous.4open.science/r/2134/README.md). - Increasing inversion steps, as shown in preliminary experiments ([anonymized link](https://anonymous.4open.science/r/2134/3_scale_inversion/1.jpg)). [1] Helbling et al., 2025. ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features. 
--- ### Questions: > "The paper shows strong quantitative results, but it does not discuss where Lavender fails or underperforms. ..." > We thank the reviewer for this insightful question. We summarize previously discussed and newly identified failure cases below: 1. We explicitly discussed unsuccessful training strategies for Lavender in the Failure Strategies section (page 8, line 429) and Appendix C. 2. Additionally, as discussed earlier, one key scenario where Lavender underperforms is when the "teacher" diffusion model does not provide sufficiently high-quality attention, such as in OCR tasks. Examples can be found [here](https://anonymous.4open.science/r/2134/2_challenge/1.jpg). These results underscore the diffusion model’s specific limitations in accurately attending to highly complex and ambiguous textual information, negatively impacting Lavender’s OCR-VQA performance. We will further expand our manuscript to explicitly discuss Lavender’s performance limitations in highly complex or ambiguous scenarios beyond OCR, including instances of subtle semantic distinctions or particularly abstract image-text relationships. --- ### Minor Typos: > "Toekn" → "Token" (multiple occurrences), "alginment" → "alignment" > We thank the reviewer for highlighting these and will correct them in our revised version.
Summary: Diffusion Instruction Tuning introduces Lavender, a fine-tuning framework that aligns a vision-language model’s (VLM) image-to-text attention with a text-to-image diffusion model’s attention maps​. The key idea is to leverage the precise cross-attention of a pretrained Stable Diffusion model as a training signal for the VLM: during supervised fine-tuning on image-text pairs, Lavender adds an auxiliary loss that pushes the VLM’s token-level attention to mimic the diffusion model’s attention, alongside the usual next-token prediction loss​. This is the first approach to directly align transformer attention layers of a VLM with a diffusion model’s attention (prior works only aligned at image encoder or adapter levels) Claims And Evidence: - Diffusion models have more precise attention maps than VLMs. The authors claim that a text-to-image diffusion model (Stable Diffusion) learns fine-grained word-to-region alignment that VLMs lack. This is backed by qualitative and quantitative evidence: Figure 2 compares attention maps, showing diffusion’s per-word attention is more tightly focused on relevant image regions than a VLM’s​. They also report significantly lower entropy in diffusion attention distributions, indicating they are more peaked and informative​. This empirical evidence supports the premise that diffusion attention is a good target for alignment. - Aligning VLM attention to diffusion attention improves performance. This is the core hypothesis, and it is strongly validated by extensive experiments. Lavender fine-tuning consistently outperforms standard instruction tuning (next-token loss only) across diverse tasks. For instance, on an OpenFlamingo model, Lavender yields up to +70% relative improvement across several benchmarks​. On a larger Llama-3.2 11B model, Lavender improves accuracy by up to 30% on 19/20 benchmarks compared to autoregressive fine-tuning, and even surpasses comparable open-source models by ~50%​. 
Even a self-attention-only model (MiniCPM) sees gains (up to 4%)​. These results provide convincing evidence that the alignment loss delivers tangible benefits. Methods And Evaluation Criteria: The proposed method is well-described and appropriate for the problem. Lavender introduces a two-stage fine-tuning procedure: first, precompute the diffusion model’s cross-attention maps for each training image-text pair; second, fine-tune the VLM with a combined loss (standard language modeling loss plus an attention alignment loss)​. This approach directly addresses the stated goal of improving visual-text alignment, by explicitly training the model to align its internal attention with a more grounded reference. The method is implemented in a model-agnostic way – the loss can be applied to any VLM, with either explicit cross-attention layers or even unified self-attention (the authors devise an attention aggregation for the latter case). They provide clear pseudocode (Algorithm 1) and discuss how to aggregate attention across heads/layers in both diffusion models and VLMs​. Importantly, they introduce measures to handle catastrophic forgetting: e.g. an Aligner network (small learnable layers) to project VLM attention into the diffusion attention space​ and strategies like LoRA fine-tuning to preserve original model capabilities​. These design choices seem appropriate – aligning at the transformer attention is a sensible place (it’s the “core” of vision-language interaction), and the use of a mean-squared error loss on attention distributions is a natural choice for guiding one distribution toward another​. The method is novel yet grounded in known techniques (it can be seen as a form of knowledge distillation on attention maps), and it’s evaluated thoroughly. Theoretical Claims: No obvious correctness issues are found in the theoretical development – it’s a straightforward application of Bayesian thinking to justify an auxiliary loss. 
The mathematical steps (detailed in Appendix G/H) fill in the gaps for interested readers. One could argue that the assumption that Stable Diffusion’s attention is nearly optimal might not hold in all cases (it’s possible the diffusion model attends to certain aspects that are useful for generation but not for understanding, or misses some semantics). The authors acknowledge this is an approximation, supported heuristically by the entropy observations rather than a formal proof. However, given the difficulty of defining a “ground-truth” attention distribution, the argument is plausible and consistent with the results. In summary, the theoretical claims are well-aligned with the empirical approach. The use of Bayesian terminology lends a principled perspective, and while it doesn’t rigorously prove that this is the optimal training strategy, it provides a solid rationale for why aligning to diffusion’s attention should help (i.e., it injects an informative prior for the VLM’s attention). There were no apparent mathematical mistakes in the derivations provided. The paper could improve by discussing any conditions where the assumption might break (e.g., if the diffusion model’s training data is very different from the VLM’s domain), but overall the theoretical component is a correct and helpful explanation of the method’s foundation. Experimental Designs Or Analyses: The experimental design is a strong point of the paper, with thorough evaluations and thoughtful analyses. The authors trained and tested on a wide variety of datasets, which reduces the chance of overfitting the method to a particular benchmark. They compiled ~130k image-text pairs for fine-tuning, drawn from multiple sources (referred to as RV83k, Flk30k, OV30k in the paper) – this mix includes general and possibly task-specific data, ensuring the model is exposed to diverse scenarios​. 
While this is a relatively small corpus, this was intentional to demonstrate data-efficiency; it also means the base models are not pushed to the limit of their capacity, highlighting the effect of the alignment rather than brute-force data. On the evaluation side, the use of 20 benchmarks covering four broad categories (charts/docs, perception, real-world, and no-hallucination tasks) is appropriate to claim broad generalization. The results are reported with clear metrics (often using zero-shot evaluation on each benchmark), and they even provide relative improvements which make it easy to see the benefit of Lavender over baselines. Supplementary Material: This paper did not include supplementary material. Relation To Broader Scientific Literature: This work sits at the intersection of visual instruction tuning and diffusion models for vision, offering a novel cross-over between the two areas. In the context of vision-language models (VLMs), recent progress like Flamingo and OpenFlamingo introduced architectures with cross-attention to connect image and text features, and methods like LLaVA (Visual Instruction Tuning by Liu et al., 2023) showed that one can take a pretrained language model and fine-tune it with a relatively small set of image-text examples (on the order of 150k) to endow it with multimodal capabilities. Lavender builds on this line of work by addressing a specific weakness of such instruction-tuned models: their visual grounding is often coarse or suboptimal because the training primarily optimizes text outputs. Prior approaches to mitigate this include using adapter modules or refined image encoders. For example, LLaVA and similar methods attach an MLP or a projection module to feed visual features into the LLM and fine-tune that on QA pairs. 
Other works have tried to improve visual grounding by enhancing the image encoder or using multiple vision experts – e.g., CoCa and BLIP-2 align encoders with the language model, and some 2024 works merge features from several specialist models via learned projections​. However, all these approaches still operate at the feature level (aligning outputs of encoders or adding new layers) rather than aligning the internal attention behavior of the model. Essential References Not Discussed: The related work coverage is extensive. Other Strengths And Weaknesses: Strengths: - The idea of using a diffusion model’s attention maps as supervision for a VLM is highly original. This is the first work to my knowledge that connects a generative vision model to a descriptive vision-language model in this way​. It opens up a novel avenue for cross-model knowledge transfer. Weaknesses: - Dependence on Diffusion Model Quality: Lavender’s success hinges on the teacher model (Stable Diffusion) having good attention maps. If the diffusion model’s attention is poor for certain types of images or concepts, the benefit to the VLM could be limited or even harmful. We saw an example with OCR – Stable Diffusion likely doesn’t attend well to fine text, and indeed adding an OCR task didn’t help​. So a weakness is that the approach inherits the biases/limitations of the diffusion model. If an image is very different from what Stable Diffusion was trained on (e.g., specialized medical imagery or abstract diagrams), its attention might not be “optimal,” potentially limiting Lavender’s performance there. The paper highlights the positive side (it still helped on medical QA), but it’s conceivable there are scenarios where this attention transfer provides little gain or requires a different teacher. Other Comments Or Suggestions: The result on MiniCPM (which lacks a dedicated cross-attention module) showed smaller gains (~4%). 
It would be useful to add a bit more discussion on why the improvement was limited there – is it because the model's architecture makes it harder to inject the alignment (since image and text tokens attend to each other in the same layers)? The authors hinted that cross-attention models improved more strongly than self-attention models. This is a valuable insight: it suggests that having an explicit cross-attention makes alignment easier to enforce. Perhaps a brief mention in the paper (if space permits) to explain this difference would be enlightening to readers considering Lavender for different model types. Questions For Authors: - Diffusion Attention Extraction: Could the authors elaborate on how you obtain the diffusion model's attention maps for a given image x and text y? Specifically, since Stable Diffusion is a text-to-image model, do you perform some form of image reconstruction or noise conditioning with the real image to get its attention? (For example, do you encode the image into the latent space and run the diffusion model's denoising steps while conditioning on y to collect cross-attention?) Clarifying this process would help readers understand how $p_{DM}(a|x,y)$ is computed in practice. - Handling of OCR/Text in Images: The authors observed that including an OCR-VQA dataset led to worse performance, presumably because the diffusion model doesn't handle text in images well. How might one address this? Do you think using a different teacher model for text regions (such as an OCR model's attention or a captioning model trained for text) could be combined with Lavender? Or would you suggest simply excluding or separately treating tasks that involve reading text from images? It would be insightful to hear your thoughts on extending the method to handle scenarios where the diffusion model's attention might not be reliable. Code Of Conduct: Affirmed. Overall Recommendation: 3
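The combined objective this review describes (standard next-token cross-entropy plus an MSE term on attention distributions) can be sketched as follows. This is a minimal illustration under assumed tensor shapes and names (`lam` for the mixing weight), not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def lavender_step(logits, targets, vlm_attn, dm_attn, lam=1.0):
    """Combined fine-tuning loss as the review describes it: next-token
    cross-entropy plus an MSE term pulling the VLM's per-word attention
    maps toward precomputed diffusion maps. Assumed shapes:
    logits [B, T, V], targets [B, T], attention maps [B, T, P]."""
    lm_loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
    align_loss = F.mse_loss(vlm_attn, dm_attn)
    return lm_loss + lam * align_loss
```

Setting `lam=0` recovers plain autoregressive fine-tuning, which is the baseline the reported gains are measured against.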
Rebuttal 1: Rebuttal: ### Weaknesses: > "Dependence on Diffusion Model Quality ..." > Thank you for highlighting this important point. We acknowledge that Lavender’s effectiveness indeed depends on the quality and training domain of the chosen diffusion model. While our qualitative and quantitative results confirm that Stable Diffusion v1.4 provides superior attention compared to VLMs on general images, its performance on tasks such as OCR can be limited. We wish to emphasize that Lavender is fundamentally model-agnostic and is not constrained to a particular diffusion or VLM model, as noted in our submission (Section 10, Impact Statement, p.9, line 494): *"... the same approach could be applied to other vision foundation models with well-aligned per-word attention maps."* To address your concerns and demonstrate Lavender’s generalizability, we've prepared visual examples comparing diffusion models (**Stable Diffusion v1.4**, **Stable Diffusion v2**, and **Flux** with cross-attention and ConceptAttention [1]) across various image types, accessible via [an anonymized link](https://anonymous.4open.science/r/2134/1_different_attns/1.jpg). These examples generally show improved attention quality with more advanced models, though challenges remain, especially in OCR tasks, as illustrated [here](https://anonymous.4open.science/r/2134/2_challenge/1.jpg). Additionally, we quantitatively evaluated Lavender by extracting attention from Flux (the recent DiT-based model) using cross-attention and ConceptAttention. Due to computational constraints, we limited the evaluation to approximately 2,000 OCRVQA image-text pairs, fine-tuned Lavender-Llama-3.2 with LoRA, and tested across eight benchmarks. Preliminary results (shown below) indicate that better attention from advanced models further improves Lavender’s effectiveness, supporting its model-agnostic capability. 
--- | Attention Model | DocVQA_VAL | InfoVQA_VAL | MME | MMMU (val) | OCRBench | POPE | RealWorldQA | HallusionBench (overall) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | SD14 Cross Attn | 73.26 | 47.71 | 1721.16 | 39.67 | 707 | **88.49** | 54.12 | **29.09** | | Flux Cross Attn | 74.05 | 48.01 | 1706.30 | 39.22 | 716 | 88.43 | 54.51 | 26.97 | | Flux Concept Attn | **78.47** | **51.88** | **1787.91** | **39.78** | **750** | 88.20 | **57.26** | 27.67 | [1] Helbling et al., 2025. ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features. --- ### Other Comments or Suggestions: > "The result on MiniCPM ..." > Thank you for pointing this out. Indeed, our experiments suggest that explicit cross-attention modules align more effectively than pure self-attention models like MiniCPM. Dedicated cross-attention modules have parameters specifically for vision-text alignment, whereas self-attention models share parameters across multiple tasks, complicating simultaneous optimization. This distinction resembles how specialized brain regions handle visual and linguistic integration. We will incorporate a concise version of this explanation into the manuscript. --- ### Questions for Authors: > "Diffusion Attention Extraction ..." > Thank you for highlighting this for clarification. We initially summarized this briefly in the appendix (Appendix L, lines 1589-1594): *" ... We apply a shortened image inversion process (Mokady et al., 2022; Jin et al., 2023) to approximate the text prompt embeddings for image reconstruction, collecting attention maps at each step as in Section 3.1 ..."* To elaborate further, given an image $x$ and corresponding text $y$, we reconstruct image $x$ by conditioning on the text embedding of $y$, derived using Stable Diffusion’s text encoder. Null-text inversion (Mokady et al., 2022) further refines this by making the text embedding of $y$ learnable, typically requiring extensive inversion steps (e.g., 1000).
However, we observed that Stable Diffusion v1.4 already effectively recognizes common concepts from general datasets, enabling us to obtain sufficiently accurate attention maps with just 10 inversion steps. --- > "Handling of OCR/Text in Images ..." > This is a valuable point, and we propose several potential solutions: - Leveraging attention maps from specialized OCR models. - Using more advanced diffusion models (e.g., Flux). - Increasing inversion steps, as shown in preliminary experiments ([anonymized link](https://anonymous.4open.science/r/2134/3_scale_inversion/1.jpg)). - While separate or integrated architectures may perform similarly for specific OCR tasks, integrated architectures offer the benefit of multitask training, potentially improving general LLM capabilities. - For OOD scenarios, scaling inversion steps (similar to test-time compute scaling) is viable if data is limited but computational resources are sufficient. Alternatively, with sufficient data, fine-tuning the diffusion model directly using techniques like LoRA can significantly boost attention quality.
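To make the attention-collection step concrete, here is a minimal numpy sketch of the per-word cross-attention weights discussed in this rebuttal. The token features and projection matrices below are random toy stand-ins (assumptions for illustration), not Stable Diffusion's actual learned layers; only the softmax(QKᵀ/√d) structure is taken from the standard cross-attention formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_map(image_tokens, text_tokens, d_k, seed=0):
    """A = softmax(Q K^T / sqrt(d_k)): image queries attend over text keys.
    Column A[:, w] is the spatial attention map for prompt word w -- the
    quantity one would collect at each denoising step. W_q and W_k are
    random toy stand-ins for the learned projections."""
    rng = np.random.default_rng(seed)
    W_q = rng.standard_normal((image_tokens.shape[1], d_k))
    W_k = rng.standard_normal((text_tokens.shape[1], d_k))
    Q = image_tokens @ W_q                   # (n_latent_pixels, d_k)
    K = text_tokens @ W_k                    # (n_prompt_words, d_k)
    return softmax(Q @ K.T / np.sqrt(d_k))   # (n_latent_pixels, n_prompt_words)

rng = np.random.default_rng(1)
A = cross_attention_map(rng.standard_normal((64, 8)),  # 8x8 latent grid
                        rng.standard_normal((5, 8)),   # 5 prompt tokens
                        d_k=16)
```

Each of the 5 columns of `A` can be reshaped to the latent grid to visualize one word's attention map; in the actual pipeline these maps would be averaged over denoising steps and attention heads.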
Monte-Carlo Tree Search with Uncertainty Propagation via Optimal Transport
Accept (spotlight poster)
Summary: The paper introduces Wasserstein-MCTS, an uncertainty-aware version of MCTS based on Wasserstein barycenter updates for the optimal value estimates and their variance. Due to the power-mean-like updates, the method is particularly suitable for highly stochastic environments. A theoretical analysis is included, which shows that the method has a polynomial convergence rate. Superior performance compared to standard and Bayesian MCTS variants is demonstrated on common planning benchmarks. Claims And Evidence: * The paper claims an asymptotic convergence of the proposed method, which is supported by the theoretical analysis in Section 7, in particular Theorem 1. * The paper claims a connection to POWER-UCT, which is given in Section 5.2. * The paper claims that their method can handle partial observability and high variance, which is supported by the Experiments in Section 8. Methods And Evaluation Criteria: yes, see below. Theoretical Claims: Unfortunately, I could not find the time to check the proofs in the supplementary material. Experimental Designs Or Analyses: Yes, I reviewed both experiments in Section 8: * It makes sense to me to evaluate the method on the benchmarks used as they are a fairly standard choice. Since a claim in the paper is that the method is particularly suitable for stochastic environments, it also makes sense to evaluate it in highly stochastic environments. The method is compared to UCT, which is probably the standard MCTS baseline, as well as a Bayesian MCTS variant which is similar in spirit to the proposed method due to also tracking uncertainty. The ablation with POWER-UCT is also interesting: POWER-UCT does not seem to be much better than UCT, providing evidence that the additional propagation of the uncertainty does indeed benefit the search. The experimental evaluation could be strengthened further by evaluating against newer MCTS variants, e.g., one of those with an exponential convergence rate that are cited in Section 5.
* I am not very familiar with the partially observable setting, therefore I do not want to comment on the validity of this experiment. Supplementary Material: No. Relation To Broader Scientific Literature: **Relation to other methods:** * The idea is related to Bayesian tree search algorithms in the sense that distributions instead of only point estimates are propagated through the tree. The acquisition functions (Thompson sampling and UCB) are standard choices in Bandits/Bayesian Optimization/Bayesian Tree Search. * The method is also related to power-mean back-ups in MCTS (Dam et al. 2019), but the proposed method additionally propagates uncertainty estimates in the form of standard deviations through the tree. * The method also shares similarities with Wasserstein Q-Learning (Metelli et al. 2019), but relies on the MCTS framework instead of Q-Learning and replaces the $L^2$ Wasserstein distance with the $L^1$ Wasserstein distance in order to be more robust in stochastic environments. **Relation to other theoretical findings:** * The method shares a polynomial convergence rate with POWER-UCT (Dam et al. 2019). * There are MCTS variants with exponential convergence rate in the Maximum Entropy or Boltzmann framework (Xiao et al. (2019), Dam et al. (2021), Painter et al. (2024)), but they suffer from biases (e.g. due to their additional regularization term). Essential References Not Discussed: There is more work regarding Bayesian MCTS and uncertainty propagation (in the form of Gaussian distributions over the optimal values) than cited in the paper: * _Coherent inference on optimal play in game trees_, Hennig et al. 2010 * _Probabilistic DAG search_, Grosse et al. 2021 These methods are not designed for stochastic or partially observable MDPs, but since Tesauro et al. 2012 is also cited, I would suggest adding them to the list of cited methods in the introduction.
I am not sure if one wants to keep the related work section restricted to MCTS, but there is more work on uncertainty quantification in the context of the general RL setting, e.g., [1–3]. But maybe this is a judgement call. [1] _Efficient Exploration via Epistemic-Risk-Seeking Policy Optimization_, O’Donoghue 2023. [2] _Making sense of reinforcement learning and probabilistic inference_, O’Donoghue et al. 2020. [3] _Probabilistic Inference in Reinforcement Learning Done Right_, Tarbouriech et al. 2023. Other Strengths And Weaknesses: I appreciated the proof sketch for Theorem 1, as well as the intuition for the choice to replace the $L^2$ Wasserstein distance with the $L^1$ Wasserstein distance + $\alpha$-divergence. Other Comments Or Suggestions: Last paragraph on page 3: There is a reference missing ("??"). Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
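For readers unfamiliar with the power-mean back-ups discussed in this review, here is a minimal numpy sketch in the spirit of POWER-UCT (Dam et al. 2019). Using visit counts as the weights is an illustrative assumption; see the cited paper for the exact operator and its treatment of exploration terms.

```python
import numpy as np

def power_mean_backup(q_values, visit_counts, p):
    """Power-mean backup: V = (sum_i w_i * q_i**p)**(1/p) with
    visit-count weights w_i. p = 1 recovers the visit-weighted
    average (UCT-style backup); p -> infinity approaches the max.
    Assumes non-negative q_values."""
    w = np.asarray(visit_counts, dtype=float)
    w = w / w.sum()
    q = np.asarray(q_values, dtype=float)
    return float((w @ q**p) ** (1.0 / p))

q, n = [1.0, 2.0, 3.0], [10, 5, 5]
v_avg = power_mean_backup(q, n, p=1)   # 0.5*1 + 0.25*2 + 0.25*3 = 1.75
v_max = power_mean_backup(q, n, p=50)  # close to max(q) = 3
```

The single parameter `p` thus interpolates between average-like and max-like backups, which is the knob that the reviewed paper generalizes by additionally propagating uncertainty estimates.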
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for their thoughtful and constructive feedback and comments, and appreciate the reviewer's acknowledgment of the strengths of our paper. We have provided detailed responses to address each of their concerns. ## On reference suggestions. We thank the Reviewer for these excellent suggestions. We will expand our related work section to include the works of Hennig et al. (2010) and Grosse et al. (2021) and add a brief discussion of their relationship to our approach, noting that while they share the concept of Gaussian distributions over optimal values, our method differs in its focus on stochastic/partially observable MDPs and its use of L1-Wasserstein barycenters with α-divergences. Additionally, we will add a paragraph discussing the broader context of uncertainty quantification in reinforcement learning, referencing the suggested papers from O'Donoghue 2023, O'Donoghue et al. 2020 and Tarbouriech et al. 2023. ## Last paragraph on page 3: There is a reference missing ("??"). We thank the Reviewer for catching this error. We will fix the missing reference, which should have pointed to Dam et al. (2019) when discussing the connection to power-mean updates.
Summary: This paper introduces Wasserstein Monte-Carlo Tree Search (W-MCTS), a new MCTS variant designed for highly stochastic and partially observable environments. The key innovation lies in propagating uncertainty through the search tree using L1-Wasserstein barycenters combined with alpha-divergences, enabling robust distributional backups (prior work used L2-Wasserstein barycenters in the context of temporal-difference learning). The main idea is to aggregate distributions from child nodes using L1-Wasserstein barycenters, paired with alpha-divergences to interpolate between average-like and max-like backups. This balances exploration-exploitation and mitigates overestimation. The work bridges distributional RL and MCTS, offering a theoretically grounded framework for decision-making under uncertainty. The approach can have applications in robotics, autonomous systems, and other domains requiring adaptive planning in noisy environments. Claims And Evidence: The paper emphasizes the advantages of using L1-Wasserstein and alpha-divergences vs. L2-Wasserstein in terms of robustness and connection with power-mean updates. Theoretical analysis is done in terms of a convergence bound for the value with a polynomial convergence rate of $\mathcal{O}(n^{-1/2})$. Methods And Evaluation Criteria: NA Theoretical Claims: Theoretical analysis is done in terms of a convergence bound for the value with a polynomial convergence rate of $\mathcal{O}(n^{-1/2})$. Experimental Designs Or Analyses: Empirical comparisons are done to show that the proposed method outperforms baselines (UCT, Power-UCT, Bayesian MCTS) in stochastic MDPs (e.g., RiverSwim, Taxi) and partially observable tasks (e.g., Pocman, Rocksample). Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for the thorough review of our paper on Wasserstein Monte-Carlo Tree Search (W-MCTS). ## On Key Innovations We are pleased that the Reviewer recognizes our main contribution in propagating uncertainty through the search tree using L1-Wasserstein barycenters combined with alpha-divergences. As correctly highlighted, this approach enables robust distributional backups that effectively balance exploration-exploitation while mitigating the overestimation problem that often occurs in reinforcement learning. ## On Theoretical Analysis We thank the Reviewer for acknowledging our theoretical analysis regarding the convergence rate of $\mathcal{O}(n^{-1/2})$. This theoretical foundation provides important guarantees about the behavior of our algorithm in the asymptotic case. ## On Empirical Results We appreciate the recognition of our comprehensive empirical evaluation across both stochastic MDPs and partially observable tasks. These experiments have been designed to demonstrate the practical advantages of our approach in environments where traditional MCTS methods struggle with high variance or limited observability. ## On Broader Impact The Reviewer's comment about the potential applications in robotics, autonomous systems, and other domains requiring adaptive planning aligns with our motivation for this work. We believe that the ability to handle uncertainty in a principled way is crucial for deploying decision-making systems in real-world scenarios.
Summary: This paper takes a distributional approach to Monte Carlo tree search. The authors propose a framework for planning in environments with uncertainty and/or partial observability. The proposed framework models state and state-action values as distributions. They also introduce a backup operator that propagates uncertainty through nodes in the tree by estimating backup values as Wasserstein barycenters. Additionally, they propose two sampling techniques for use with the tree policy taking the uncertainty into account. Claims And Evidence: The authors make several claims: 1. Their approach is robust to stochastic variations. - The experimental results mostly back up this claim. 2. No need for symmetry in backups. - I do not think this is an advantage more than it is a reason why they are able to make their approach work. 3. Their approach has convergence rate of $\mathcal{O}(n^{-1/2})$ - See further. Methods And Evaluation Criteria: The methodology is founded on ideas that have proven effective in prior work. Updating values based on Wasserstein barycenters is a reasonable method to model uncertainty over the values of action outcomes. The set of evaluation domains is appropriate for demonstrating the performance of their proposed method. Theoretical Claims: The paper has the theoretical claim that the proposed search method has a finite-time convergence rate of $\mathcal{O}(n^{-1/2})$. While I do not have theoretical background to verify the correctness of the mathematics, the math looks sound. Experimental Designs Or Analyses: The authors perform experiments on two types of domains: - Long-horizon, highly stochastic domains - Partially observable, stochastic domains The results support the performance claims made by the authors. One exception is that the results for RiverSwim do not line up with the analysis provided. In this setting, DNG clearly converges faster than the other algorithms. Supplementary Material: I did not go through the supplementary materials.
Relation To Broader Scientific Literature: I believe this work would be of interest to the planning community on the whole. Essential References Not Discussed: I do not think so. Other Strengths And Weaknesses: **Novelty** The work is a novel technique for computing update values. **Clarity** The paper was well written and moderately easy to understand. Other Comments Or Suggestions: - Pg. 3, Col. 2, Line 156: There seems to be a LaTeX compilation issue. Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for their positive review and for recommending acceptance of our paper. We appreciate their recognition of our work's novelty and clarity. ## On Our Claims and Evidence We thank the Reviewer for acknowledging that our experimental results support our claims about robustness to stochastic variations. Regarding the comment on "no need for symmetry in backups," we agree that this is more accurately described as an enabling factor rather than an advantage in itself. We will revise this framing in our final version to more precisely articulate the relationship between the α-divergence's properties and our method's effectiveness. ## On Theoretical Claims We appreciate the Reviewer's assessment that our mathematical analysis of the convergence rate appears sound, even while acknowledging the specialized nature of this theoretical work. ## On LaTeX Compilation Issue We will fix the LaTeX compilation issue on Pg. 3, Col. 2, Line 156 as noted. We thank the Reviewer for bringing this to our attention.
Summary: This paper introduces Wasserstein Monte-Carlo Tree Search (W-MCTS), a novel approach that represents value nodes as Gaussian distributions (mean and variance), allowing explicit uncertainty propagation throughout the search tree. The method employs a backup operator based on the Wasserstein barycenter and α-divergence to aggregate value estimates from child nodes, enabling a more flexible backup strategy that interpolates between averaging and maximization. W-MCTS incorporates both an optimistic UCT-like action selection strategy and a Thompson Sampling-based approach, achieving an asymptotic convergence rate of $\mathcal{O}(n^{-1/2})$ under the latter. Extensive experiments on highly stochastic and partially observable domains demonstrate superior performance over SOTA methods. Claims And Evidence: The empirical results support the claim that W-MCTS outperforms existing methods in highly stochastic and partially observable domains. However, as acknowledged in the paper, previous methods such as Tesauro (2012) and Bai (2013) also model uncertainty in value estimates. It is unclear what specifically enables W-MCTS to achieve superior performance. Is it the improved approximation of uncertainty, a more effective backup operation, or another factor? A clearer discussion could help isolate the key contributing components. The paper states that uncertainty is propagated "throughout" the tree. Does this mean it extends beyond the parent-child relationships to include sibling and cousin nodes, or is it strictly along the path from leaf to root? Clarifying this distinction would enhance the reader’s understanding of the method's impact on tree-wide exploration and value estimation. Methods And Evaluation Criteria: The selection of benchmark problems is comprehensive and relevant. However, the paper does not explicitly mention the number of simulations (or iterations) for each method in the results section.
The x-axis in Figure 2 appears to represent the number of simulations, but this should be clearly stated in the figure description. Additionally, a comparison based on a fixed wall-clock timeout (e.g., 1 second per decision) or by reporting the time taken per search trial would provide a clearer picture of the expected performance. Theoretical Claims: The theoretical analysis provides sound justification for the claimed convergence properties. Proposition 1 establishes the mean and variance update rules under the Wasserstein barycenter framework. However, a minor presentation issue remains: the term $\delta$ is not explicitly defined in Proposition 1. While this does not impact correctness, explicitly stating its definition would improve clarity. Experimental Designs Or Analyses: N.A. Supplementary Material: I reviewed the experimental details in the supplementary section. No major issues were found. Relation To Broader Scientific Literature: The paper's contributions are significant for the planning and search community, particularly in AI applications requiring robust decision-making under uncertainty. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: - Line 155 (right side) - Missing equation reference. Questions For Authors: - Key Performance Factors: What is the primary driver of W-MCTS’s superior performance compared to previous uncertainty-aware MCTS methods? Is it the backup operation, improved uncertainty estimation, or another factor? - Scope of Uncertainty Propagation: When the paper states that uncertainty is propagated "throughout" the tree, does this extend beyond the standard parent-child relationships? - Experimental Reporting: Does Figure 2’s x-axis represent the number of simulations? If so, could this be explicitly mentioned? Additionally, would a fixed-time performance comparison provide further insights? 
Overall, I am leaning toward acceptance, pending clarification of the above concerns. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for their thoughtful review and insightful questions that help us improve our paper. Below are our responses to their specific concerns: ## Key Performance Factors Thank you for this insightful question. We would like to point out that the superior performance of our method stems from two complementary components: - Explicit variance propagation: Unlike previous methods that only propagate point estimates or use fixed variance models, our approach dynamically updates both means and variances at each node. This explains the superior performance gain of our method over Bayesian MCTS methods in highly stochastic and partially observable environments. For example, our algorithms demonstrated consistent improvements over POMCP across all environments, with particularly notable gains of 55.31% in LaserTag, 65.90% in RS(15,15), and with improvements of up to 21.38% over AB-DESPOT in LaserTag. In FrozenLake, we obtain an 80% improvement over DNG. - Flexibility in balancing exploration-exploitation: Our approach's ability to interpolate between average-like and max-like backups (through parameter α) allows it to adapt to varying levels of stochasticity. In highly stochastic environments, we found moderate α values (leading to more average-like updates) performed best, while in more deterministic regions of the state space, larger α values (more max-like) were optimal. We will add these insights to Section 8.4 in the revised manuscript. ## Scope of Uncertainty Propagation Thank you for highlighting this ambiguity. In our method, uncertainty propagation is indeed more extensive than just parent-child relationships, though it follows the tree structure. Our phrase “uncertainty is propagated throughout the tree” means that each node not only stores a (mean, variance) pair (or a Gaussian distribution) but also uses these distributions in the backup operator at its parent.
That is, any child’s updated distribution is reflected at the parent level. To clarify: W-MCTS propagates uncertainty bi-directionally in the tree: - "Bottom-up" through Q-nodes to V-nodes during backups. - "Top-down" through action selection, where uncertainty influences exploration. - Sibling nodes indirectly influence each other's visitation rates through their parent's uncertainty estimation, creating a form of lateral uncertainty influence. We do not directly share distributions among sibling or cousin nodes. However, each node’s distribution indirectly influences its parent’s distribution—and transitively influences siblings as the parent’s updated distribution affects how siblings are compared and selected in subsequent iterations. Unlike previous methods where uncertainty is often reset or approximated at each level, our approach maintains consistent distributional representations throughout the entire search process. Thus, “throughout” means that over multiple rollouts, the uncertainty from leaf nodes consistently flows upward, eventually impacting the root node’s estimates, while updated root estimates modulate exploration down into deeper levels. We will add a new paragraph in Section 5 to make this distinction clearer, showing how uncertainty flows throughout multiple levels of the tree simultaneously. ## Experimental Reporting You are correct that the x-axis in Figure 2 represents the number of simulations, and we apologize for not stating this explicitly. We will update the figure caption to clearly indicate this. Regarding fixed-time performance, we agree that a time-based comparison could be informative in practice—particularly for large or real-time systems. We have run standard, iteration-based comparisons, as is common in research prototypes. For the final version, we can add a note clarifying the computational budget used and include a direct time-limited comparison in the revised version.
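The bi-directional flow described in this rebuttal can be illustrated with a toy numpy sketch under the assumption of 1-D Gaussian node distributions. Note the closed form below is the standard L2-Wasserstein barycenter of Gaussians; the paper's actual L1-Wasserstein + α-divergence operator differs in detail, so this is an illustration of the mechanism rather than the authors' exact update.

```python
import numpy as np

def gaussian_backup(means, stds, visit_counts):
    """Bottom-up flow: visit-weighted barycenter of the children's
    Gaussian value estimates. For the L2-Wasserstein metric, the
    barycenter of N(mu_i, sigma_i^2) with weights w_i has the closed
    form mu = sum_i w_i * mu_i, sigma = sum_i w_i * sigma_i."""
    w = np.asarray(visit_counts, dtype=float)
    w = w / w.sum()
    return float(w @ np.asarray(means)), float(w @ np.asarray(stds))

def thompson_select(means, stds, rng):
    """Top-down flow: uncertainty shapes action choice by drawing one
    sample per child distribution and picking the argmax."""
    return int(np.argmax(rng.normal(means, stds)))

# Parent node updated from two children with visit counts 3 and 1.
mu, sigma = gaussian_backup([1.0, 3.0], [0.5, 1.5], visit_counts=[3, 1])
# With a large mean gap, Thompson sampling almost surely picks child 1.
action = thompson_select([0.0, 10.0], [1.0, 1.0], np.random.default_rng(0))
```

Here the children's standard deviations survive into the parent's estimate instead of being discarded, which is the "uncertainty flows upward" behavior the rebuttal describes; the sampled action selection is how that uncertainty then modulates exploration downward.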
## Other Corrections We will fix the missing equation reference on Line 155 and explicitly define all terms in Proposition 1 to improve clarity. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. I have increased my score. Good luck! --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to consider our rebuttal and for increasing your score. We greatly appreciate your constructive feedback throughout the review process, which will help us improve our paper.
An Empirical Study on Configuring In-Context Learning Demonstrations for Unleashing MLLMs' Sentimental Perception Capability
Accept (poster)
Summary: The paper explores how to enhance the performance of Multimodal Large Language Models (MLLMs) in Multimodal Sentiment Analysis (MSA) by optimizing the configuration of In-Context Learning (ICL) demonstrations. The main findings include: 1. Enhancing MSA Performance: The authors show that by carefully configuring ICL demonstrations, MLLMs can significantly improve their sentiment analysis capabilities. They identify three key factors: similarity measurement, modality presentation, and sentiment distribution. 2. Optimizing Similarity Measurement: The study introduces various strategies for measuring the similarity between multimodal data, focusing on image-text pairs. They refine traditional similarity measures to better capture the nuances of MSA. 3. Modality Presentation: The authors investigate different combinations of modalities (image and text) and find that careful presentation of multimodal information can enhance sentiment prediction. 4. Sentiment Distribution: They explore different protocols for distributing sentiment labels in demonstrations to mitigate biases and improve fairness in predictions. 5. Experimental Results: The proposed strategies lead to average accuracy improvements of 15.9% over the zero-shot paradigm and 11.2% over a random ICL baseline on six MSA datasets. The methods are shown to be effective and generalizable across different MLLMs and datasets. Claims And Evidence: Claims Supported by Evidence: The paper provides empirical evidence that optimizing In-Context Learning (ICL) demonstrations can significantly improve MLLMs' performance in Multimodal Sentiment Analysis (MSA). The improvements in accuracy over the zero-shot and random ICL baselines suggest that the strategies are effective. Problematic Claims: 1. The claim that the method "fully unleashes the potential" of MLLMs in MSA may be overstated.
While the improvements are significant, it is unclear if the method reaches the absolute maximum potential, especially without direct comparisons to state-of-the-art approaches. Similarity Measurement: The reliance on traditional cosine similarity for multimodal data may be too simplistic, especially given the acknowledged semantic gap between text and images. The use of CLIP for measuring text-image associations could have provided a more nuanced approach. 2. The method of generating images from text to enhance multimodal presentation raises concerns about the controllability of the generated images' emotional expressions. This could lead to misinterpretation of the original sentiment, particularly with neutral or ambiguous samples. 3. The experimental setup lacks clarity, particularly regarding the metrics used and how CLIP is applied. This makes it difficult to assess the effectiveness of the method comprehensively. Methods And Evaluation Criteria: While the paper proposes interesting directions for optimizing ICL in MSA, several aspects of the methodology and evaluation raise questions about their alignment with the problem’s demands: Multimodal Similarity Measurement: The paper relies on cosine similarity for cross-modal retrieval (text+image), despite acknowledging the semantic gap between modalities. CLIP, which explicitly models text-image alignment, is mentioned but not leveraged for similarity scoring. This choice risks conflating modality-specific features (e.g., syntax in text vs. objects in images) rather than capturing joint semantics. Why not use CLIP’s joint embeddings for similarity? Image Generation for Modality Presentation: Generating images from text (e.g., via diffusion models) to “augment” demonstrations is creative but risks introducing uncontrolled biases. For example, a neutral text paired with a generated image that skews positive/negative (as seen in Figure 3b) could mislead the model.
Without validating that generated images preserve the original sentiment, this approach risks confounding results. How do you ensure generated images align with the text’s intended sentiment? Evaluation Metrics & Transparency: Table 2’s experimental setup lacks clarity on how CLIP is used (e.g., which layers, pooling strategies) and which metrics are computed (e.g., accuracy, F1). Similarly, the “Task Learning” experiment (Figure 5) abruptly switches to animal labels—this is clever but under-explained. How does this abstraction relate to sentiment prediction? Negative Sentiment Bias: The paper notes a bias toward positive/neutral predictions but stops short of diagnosing why (e.g., data imbalance in pretraining, model architecture). Without exploring this, claims about “mitigating bias” feel superficial. For example, does the “Category Balanced” protocol simply mask underlying issues in the model’s learned representations? Theoretical Claims: The paper focuses primarily on empirical strategies for configuring ICL demonstrations in MSA, with no explicit theoretical claims or proofs (e.g., convergence guarantees, generalization bounds). As such, there are no formal proofs to verify. However, the paper’s methodological assumptions and experimental design choices raise implicit theoretical questions: Assumption of Modality Independence: The use of cosine similarity (or CLIP embeddings) to combine text-image pairs assumes that modalities can be treated as independent features. This ignores the cross-modal alignment problem, where text and image embeddings may reside in disjoint semantic spaces. A more theoretically grounded approach might require proving that the proposed similarity metrics align with human perception of sentiment.
Bias Mitigation Without Guarantees: The paper introduces heuristics (e.g., sentiment-balanced demonstration distributions) to counter MLLMs’ negative bias but provides no theoretical analysis of why these protocols work or whether they generalize beyond the tested datasets. Experimental Designs Or Analyses: See above Supplementary Material: The supplementary material provides additional experiments (Tables 7–10) and details on prompts, datasets, and methods. Relation To Broader Scientific Literature: The paper’s contributions are overshadowed by methodological gaps and lack of rigor: Method Weaknesses: Uses simplistic cosine similarity for text-image retrieval despite acknowledging the semantic gap. Generates images from text without validating sentiment alignment (see Figure 3b’s misleading neutral-positive mismatch). Experimental Opacity: Fails to specify CLIP’s role in metrics (e.g., layers, pooling) or report statistical significance. Table 2’s "ICL Random" baseline lacks clarity on metrics (accuracy? F1?) and experimental controls. Bias Analysis Superficial: Attributes negative sentiment bias to MLLMs without probing pretraining data or architectural flaws. Bottom Line: The paper prioritizes engineering novelty over scientific rigor. Addressing these gaps is critical for meaningful contribution to MSA/ICL. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The focus on optimizing In-Context Learning (ICL) for Multimodal Sentiment Analysis (MSA) addresses a critical challenge in deploying MLLMs in real-world scenarios where labeled data is scarce. 2. The experiments show consistent gains over zero-shot and random ICL baselines, demonstrating practical utility. Weaknesses: 1. The paper prioritizes engineering hacks (e.g., cosine similarity, text-to-image generation) over scientific rigor. Ignoring CLIP’s joint embeddings for cross-modal retrieval is a missed opportunity to bridge the semantic gap. 2.
Figure 3b’s mismatched neutral text/positive image highlights uncontrolled sentiment leakage in generated visuals, a core flaw in the "transfer emotional details" claim. 3. Phrases like "fully unleashing MLLMs’ potential" are hyperbolic. The method tweaks ICL prompts but avoids tackling deeper limitations (e.g., model architectures, dataset biases). Experimental Opacity: 4. Table 2’s "ICL Random" baseline lacks metric definitions (accuracy? F1?), and CLIP’s role in evaluation is unclear. How is CLIP used beyond vague "similarity measurement"? No ablations on key design choices (e.g., sentiment-balanced sampling, WITA weights). 5. While negative sentiment bias is noted, the analysis stops at surface-level protocols (e.g., "category balancing"). No exploration of systemic issues like pretraining data skew or model inductive biases. Other Comments Or Suggestions: No Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and in-depth comments! Below, we present detailed responses to the weaknesses (**W**) and other concerns (**O**). >**W4(1). Metric definitions in Table 2** In the caption of Table 2, we specify that "R strategy" represents random retrieval, and we report its average accuracy across 4-, 8-, and 16-shot ICL. >**W4(2)&W1(2). CLIP’s role in similarity measurement** In the manuscript, we compute unimodal scores via cosine similarity in CLIP’s embedding space, and multimodal scores are aggregated from unimodal ones. We employ CLIP’s visual encoder and the following projection layer to obtain visual embeddings, and CLIP’s textual encoder and the following projection layer to obtain textual embeddings. This is essentially intra-modal retrieval. CLIP does not provide joint embeddings that encode the joint semantics of an image and a text. Therefore, an alternative cross-modal retrieval approach is computing similarity scores between images from one sample and texts from another. We adopt intra-modal retrieval for two reasons: 1. CLIP aligns image and text embeddings. Both cross-modal and intra-modal retrieval operate within a unified embedding space. Their underlying mechanisms are similar. 2. In [1,2], the similarity between image-text inputs is computed within each modality and later combined, which has been empirically proven effective. Following the setup in Table 2, we compare the two retrieval approaches. T2I denotes text-to-image retrieval, and I2T denotes image-to-text retrieval. From the results, cross-modal retrieval offers no additional benefits. |IDEFICS|MVSA-S|Twitter-15| |-|:-:|:-:| |R|49.2|57.4| |I|56.5|59.1| |T|56.0|58.7| |T2I|55.7|58.9| |I2T|55.0|57.4| >**W2. 
Uncontrolled image generation: a flaw of the "transfer emotional details" claim** While your point is insightful, we are afraid you may have misunderstood our motivation in investigating modality presentation, which is that modality conversion can furnish supportive information (Lines 68, 196-204). We have not made the "transfer emotional details" claim suggested in your comment. This misunderstanding may stem from the phrase "which are conducive to evoking emotions" in Line 204, where our original intent is to highlight a potential benefit. To clarify, we will rephrase this part in the revised manuscript. Following our motivation, we investigate both substituting and augmenting original modalities with auxiliary ones. Our analysis reveals that modality conversion introduces extra noise (Lines 298-300), which aligns with your comment. This issue leads to our conclusion: modality conversion underperforms the use of original modalities. Facilitating emotional alignment in image generation is an interesting direction. However, this field remains in its infancy. Pioneering work [3] has not released pretrained checkpoints, and it is difficult to train a model and validate its effectiveness within the rebuttal period. Instead, we will explore it in future work. >**W3(1). Hyperbolic phrases** We will carefully check potential hyperbolic phrases in the manuscript and revise them to appropriate expressions. >**W4(3). Ablations on design choices** In our investigation, variations are introduced to the pertinent settings only when probing specific factors. Therefore, Figure 6 and Lines 326-411 cover ablations of the sentiment-balanced sampling, and Figure 4(b) and Lines 307-323 cover ablations of the WITA weight. >**O1. Explanation for Figure 5 experiment** Please refer to the response to **Q1** of Reviewer pNwb. >**W5&W3(2). 
Explorations for pretraining data skew or model inductive biases** This manuscript focuses on configuring ICL demonstrations to unleash MLLMs' sentiment perception capabilities. Therefore, we mitigate the sentimental predictive bias with ICL instead of altering MLLMs themselves. We have not explored or tackled deeper limitations, as they lie beyond the primary scope of our research. However, systematic studies of these limitations can indeed substantially contribute to both MLLMs and MSA. As a further discussion, we attribute these limitations to pretraining data rather than model architecture, as validated in [4]: by constructing emotion-related data, [4] enhances MLLMs' zero-shot performance on visual emotion recognition. This success has the potential to be replicated in solving the sentimental predictive bias. >**W1(1). Scientific rigor** We hope our responses can convince you of our methodological design and experimental rigor, a strength also recognized by Reviewers nCbm and pNwb. >**References** [1] Yang et al. Exploring Diverse In-Context Configurations for Image Captioning. NeurIPS, 2023.\ [2] Li et al. How to Configure Good In-Context Sequence for VQA. CVPR, 2024.\ [3] Yang et al. EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models. CVPR, 2024.\ [4] Xie et al. EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning. CVPR, 2024.
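The weighted intra-modal retrieval described in the response to W4(2) above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy 3-d vectors stand in for CLIP visual/textual embeddings, and the weight `alpha` and all function names are illustrative.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_support_set(query_img, query_txt, support, alpha=0.5):
    """Score each support sample by a weighted sum of intra-modal
    similarities (image-to-image and text-to-text), then rank."""
    scores = []
    for idx, (img_emb, txt_emb) in enumerate(support):
        s = (alpha * cosine_sim(query_img, img_emb)
             + (1 - alpha) * cosine_sim(query_txt, txt_emb))
        scores.append((idx, s))
    # Highest-scoring support samples become the ICL demonstrations.
    return sorted(scores, key=lambda t: t[1], reverse=True)

# Toy 3-d "embeddings" standing in for CLIP visual/textual features.
query_img = np.array([1.0, 0.0, 0.0])
query_txt = np.array([0.0, 1.0, 0.0])
support = [
    (np.array([1.0, 0.1, 0.0]), np.array([0.0, 0.9, 0.1])),  # close in both modalities
    (np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])),  # distant in both
]
ranking = rank_support_set(query_img, query_txt, support)
```

A cross-modal variant (T2I/I2T) would instead pair the query's text embedding with the support samples' image embeddings, or vice versa.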
Summary: This paper conducts an empirical study to unleash the power of MLLMs using in-context learning for multimodal sentiment analysis. The authors study three key factors influencing in-context learning performance: similarity measurement, modality presentation, and sentiment distribution. Experiments are performed on six datasets, and the proposed optimized strategies result in a significant improvement, outperforming both the zero-shot method and in-context learning with a random selection strategy. The paper also highlights and addresses an inherent sentiment prediction bias in MLLMs. Claims And Evidence: This paper makes several key claims: (1) The zero-shot performance of MLLMs for multimodal sentiment analysis is undesirable, but MLLMs can effectively perform multimodal sentiment analysis when provided with well-configured in-context learning demonstrations. (2) The choice of similarity measurement, modality presentation, and sentiment distribution significantly affects in-context learning performance. (3) In-context learning introduces biases that can be mitigated by strategic demonstration selection. These claims are generally supported by experiments across six multimodal sentiment analysis datasets. The authors present comparative evaluations against zero-shot baselines, random in-context learning, and prior in-context learning techniques, demonstrating clear improvements. Methods And Evaluation Criteria: The proposed various in-context learning strategies with the evaluation of six multimodal sentiment analysis benchmarks make sense for exploring in-context learning to enhance the sentiment perception ability of MLLMs. This paper evaluates weighted multimodal similarity for similarity measurement. The influence of different combinations of text, image, and generated modalities on in-context learning performance is also verified. Various sentiment distribution configurations are also verified, and the authors discover predictive bias in MLLMs. 
The accuracies of the experiments on six multimodal sentiment analysis datasets validate the effectiveness. Theoretical Claims: The paper does not present formal theoretical results but provides a strong empirical justification for its findings. The discussion of in-context learning biases and mitigation strategies is insightful but could benefit from a deeper theoretical analysis of in-context learning's role in multimodal sentiment analysis. Experimental Designs Or Analyses: The study is thorough, covering multiple datasets and models. Ablation studies help isolate the contributions of each factor. Comparison with zero-shot, random in-context learning, and other multimodal sentiment analysis methods is well-executed. The paper focuses on two MLLMs (IDEFICS-9B, Open-Flamingo2-9B) and fixed prompts; broader validation on additional models and prompt engineering would strengthen the results. Supplementary Material: I have reviewed the supplementary materials, including additional experimental details and ablation studies. The provided information is useful but lacks a discussion on additional MLLMs and various prompts. Relation To Broader Scientific Literature: The related works about the key contribution are sufficiently provided. Exploring in-context learning for multimodal sentiment analysis is a novel topic. The study builds on prior research in in-context learning (Brown et al., 2020). The efficacy of in-context learning relies on retrieval (Zhang et al., 2022), presentation (Li et al., 2024), and distribution (Lyu et al., 2023). The basic similarity calculation is based on (Liu et al., 2022; Yang et al., 2022). The biases in in-context learning are also observed by (Yang et al., 2023c; Li et al., 2024; Baldassini et al., 2024). Essential References Not Discussed: The references that are essential to in-context learning for MLLMs and multimodal sentiment analysis are comprehensive. 
Other Strengths And Weaknesses: (1) Exploring in-context learning for multimodal sentiment analysis is an underexplored yet impactful direction in the multimodal area. This paper provides valuable practical experience on how to effectively utilize MLLMs for multimodal sentiment analysis with low resource consumption. (2) This paper is empirically rigorous. The experimental design is meticulously crafted, and the findings are clearly articulated. The study addresses biases in in-context learning, adding depth to the contribution. (3) This paper is well-written and well-organized. There are some unclear descriptions: (1) Some figures do not correspond to the text expression and are difficult to understand. For example, in Figure 3 (a), it is unclear how the various similarities are calculated. If these values are just schematic, using variable representation to show only the calculation process is better. (2) For the sentiment distribution, different datasets adopt different strategies (Table 4). The reason why only Twitter-15 and Twitter-17 adopt Category Balanced should be provided. (3) Different datasets adopt different final policies, and how to determine which policies to adopt in practical applications should be discussed. (4) The difference between strategies for post-level and aspect-level tasks in sentiment distribution should be clarified. Other Comments Or Suggestions: (1) It is recommended to provide clearer figures. For example, Figure 6 is too informative and should be simplified to show the experimental results more clearly. (2) A discussion about the influence of prompt design is suggested. Questions For Authors: (1) Can the authors explain the meaning of the experiments conducted in Figure 5? How does this experiment explain the performance degradation brought by modality conversion? (2) For the sentiment distribution, different datasets adopt different strategies (Table 4). 
It is necessary to analyze why only Twitter-15 and Twitter-17 adopt Category Balanced. Are there any considerations or assumptions for this design? (3) Different datasets adopt different final policies, and how to determine which policies to adopt in practical applications should be discussed. (4) What is the difference between policies for post-level and aspect-level tasks in sentiment distribution? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive feedback and valuable advice! Below, we present detailed responses to the weaknesses (**W**), comments (**C**), questions (**Q**), and other concerns (**O**). >**C1&W1. Unclear figures** In the revised manuscript, we will replace redundant components in Figure 6 with more intuitive indicators and reformulate the calculation process in Figure 3(a) using variable representations. >**Q1. Explanation of Figure 5 experiment** Table 3 indicates that neither substituting nor augmenting original modalities with auxiliary ones improves ICL performance. Our explanation for the former is that modality conversion introduces information loss, outweighing potential benefits. The experiments in Figure 5 are designed to interpret why the latter degrades ICL performance. Specifically, [1] decomposes ICL’s role into Task Recognition (TR) and Task Learning (TL). TR prompts the task format for MLLMs to apply their prior knowledge, and TL aids MLLMs in building a mapping between inputs and outputs. We hypothesize that the augmenting process weakens the TL effect. To validate this, we reformulate MSA tasks to map image-text pairs to specific animals, where MLLMs have no prior knowledge of the mapping. Figure 5(b) reveals a continuous performance decline with increased modalities, validating our hypothesis. Thereby, we interpret that the augmenting process complicates input-output mappings and impairs the TL effect. >**C2. Influence of prompt design** Please refer to the response to **Q1** of Reviewer gx7F. >**W2&Q2. Adoption of distribution protocols on Twitter-15 and Twitter-17** In the manuscript, we determine the adopted protocol based on the proportion of negative samples in the dataset (Lines 408-411). 
Given prior knowledge of datasets’ distribution, we adopt the Category Balanced protocol for Twitter-15 (12.1% negative samples) and Twitter-17 (13.6% negative samples), and the Unlimited protocol for the other datasets with higher proportions. Here is an intuitive explanation. Influenced by the short-cut effect, negative demonstrations in the ICL sequence can stimulate MLLMs to produce negative predictions, which intensifies as the ratio of negative demonstrations increases. By selecting protocols, we ensure an adequate ratio of negative demonstrations to mitigate MLLMs’ sentimental predictive bias. For datasets with few negative samples, the Category Balanced protocol guarantees one-third negative demonstrations, which other protocols fail to achieve. For datasets with more negative samples, reliable similarity measurement naturally retrieves sufficient negative demonstrations for negative test samples. The Unlimited protocol thereby outperforms the Category Balanced protocol by configuring more sentiment-aligned demonstrations for non-negative test samples. >**W3&Q3. Selection of final policy in practical applications** Our final policy integrates the optimal strategies for the three factors. In practical applications, we decide the optimal similarity measurement strategy based on the type of task (WIT for post-level MSA and WITA for aspect-level MSA). The optimal modality presentation strategy is fixed, where we compose demonstrations with image and text. In cases where the proportion of negative samples is known, the optimal sentiment distribution strategies can be easily determined. Otherwise, protocol selection should depend on specific prioritizations. For applications prioritizing recall of negative samples (e.g., mental health monitoring), the Category Balanced protocol should be adopted. 
For applications prioritizing precision in non-negative sample identification or overall accuracy (e.g., public opinion monitoring), the Unlimited protocol is recommended. >**W4&Q4. Sentiment distribution strategies for post-level and aspect-level MSA** The optimal distribution protocol for a dataset is determined based on its proportion of negative samples, which is independent of the task type. >**O1. Theoretical analysis of ICL's role in MSA** We further analyze ICL's role in MSA through its two effects: Task Recognition (TR) and Task Learning (TL). The response to Q1 shows TL's non-negligible role in MSA. It is also evidenced by the positive correlation between ICL performance and the number of shots (Tables 7-9). Concurrently, TR also exerts a remarkable influence, as validated by ICL's robustness to textual prompts (response to **C2**). Textual prompts and TR share a similar function: both inform the MLLM about the task format. The contrast between the unstable zero-shot and stable ICL performance reveals the superiority of TR effects. Therefore, ICL's TR and TL roles are equally critical in MSA. This contrasts with prior findings in VQA, where TR dominates over TL. This highlights the unique characteristics of MSA and the necessity of task-specific investigations. >**O2. Broader validation on additional models** Please refer to the response to **C1** of Reviewer gx7F.
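The protocol-selection rule described in this rebuttal (Category Balanced when negative samples are scarce, Unlimited otherwise) amounts to a one-line decision function. A minimal sketch, assuming a cut-off of one third (the share of negative demonstrations that the Category Balanced protocol guarantees); the exact threshold is an assumption for illustration and is not stated in the rebuttal.

```python
def choose_distribution_protocol(negative_ratio, threshold=1 / 3):
    """Pick a sentiment-distribution protocol for ICL demonstrations.

    negative_ratio: fraction of negative samples in the dataset.
    threshold: assumed cut-off; the Category Balanced protocol forces
    one-third negative demonstrations, which similarity-based retrieval
    cannot reach on its own when negatives are scarce.
    """
    return "Category Balanced" if negative_ratio < threshold else "Unlimited"

# Twitter-15 (12.1% negative) and Twitter-17 (13.6% negative) fall below
# the assumed cut-off; datasets with higher negative proportions do not.
print(choose_distribution_protocol(0.121))  # -> Category Balanced
print(choose_distribution_protocol(0.40))   # -> Unlimited
```

For applications with unknown label distributions, the rebuttal's prioritization advice (recall of negatives vs. overall accuracy) would replace the ratio test.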
Summary: This paper explores using In-Context Learning (ICL) to enhance MLLMs for Multimodal Sentiment Analysis (MSA). The authors identify that MLLMs under the zero-shot paradigm exhibit weak performance on MSA tasks. They propose a systematic study of three key factors in ICL demonstration configuration: similarity measurement, modality presentation, and sentiment distribution. By optimizing these factors, they achieve average accuracy improvements of 15.9% over zero-shot performance and 11.2% over random ICL on six MSA datasets. They also identify and mitigate a sentimental predictive bias in MLLMs, leading to fairer sentiment classification. Claims And Evidence: Yes. The authors claim that MLLMs can achieve competitive sentiment perception through ICL. Three key factors (similarity measurement, modality presentation, and sentiment distribution) significantly impact ICL performance. Sentimental predictive bias exists in MLLMs but can be mitigated via distribution balancing. Empirical results on six MSA datasets validate these claims. The proposed retrieval, presentation, and distribution strategies are tested against zero-shot, random ICL, and previous ICL strategies. Ablation studies strengthen the validity of the proposed method. Methods And Evaluation Criteria: Yes. This paper systematically studies three key factors in ICL demonstration configuration: similarity measurement, modality presentation, and sentiment distribution. For similarity measurement, the authors have evaluated various retrieval strategies, including aspect-based similarity and weighted multimodal similarity. For modality presentation, the authors explore how different combinations of texts, images, and generated modalities affect ICL performance. For sentiment distribution, the impact of sentiment biases in demonstrations is analyzed, leading to the discovery of predictive bias in MLLMs. 
The evaluation on six standard MSA datasets with accuracy as the primary metric also makes sense for this study. Theoretical Claims: This paper primarily focuses on empirical findings rather than theoretical derivations. Experimental Designs Or Analyses: Experiments on six datasets ensure the generalization of the proposed ICL strategy. Ablation studies validate individual contributions of retrieval, modality, and distribution factors. Baseline comparisons are thorough, including random ICL, prior ICL strategies, and supervised models. However, the experiments lack an analysis of the computational cost of configuring optimal ICL demonstrations. Supplementary Material: The supplementary material was reviewed and contains additional experiments, dataset details, and ablation results. Relation To Broader Scientific Literature: Although ICL for MLLMs has been studied in other multimodal tasks, it is underexplored for MSA, which is a pivotal challenge in the quest for general artificial intelligence. The paper builds on prior work in ICL (Brown et al., 2020), multimodal learning (Yin et al., 2023), and MSA (Zadeh et al., 2017). The discussion of MLLMs' zero-shot limitations aligns with recent findings in multimodal intelligence (Lian et al., 2024; Yang et al., 2023). The work extends ICL research to MSA, aiming to unleash the sentiment perception ability of MLLMs, which is a valuable contribution. Essential References Not Discussed: No. The related works of this paper are sufficient, including fully-supervised, few-shot MSA methods and in-context learning methods. Other Strengths And Weaknesses: Strengths: 1. The paper presents a novel and systematic exploration of ICL for MSA, an area that has received limited attention. It combines existing ICL strategies with sentiment bias mitigation, offering new perspectives on optimizing MLLM performance. 2. Addressing sentiment bias in MLLMs is an important contribution, as it improves both model fairness and reliability. 
3. The paper is well-written and structured. The proposed strategies are easy to follow. 4. The experimental design is clearly explained, and sufficient ablation studies strengthen the empirical validation. Weaknesses: 1. While ICL offers benefits, the computational feasibility of optimal retrieval strategies should be discussed. 2. Fine-tuning on the support set could serve as an additional baseline to contextualize ICL’s strengths and trade-offs. Other Comments Or Suggestions: 1. It would be better to provide an analysis of whether the ICL strategies can generalize to different MLLM architectures beyond IDEFICS and Open-Flamingo. 2. The randomness of the 1% support set might influence the performance. A discussion is needed. Questions For Authors: 1. Did the authors experiment with different prompt engineering techniques for ICL in MLLMs? Understanding the sensitivity of results to prompt variations would be valuable. 2. Have the authors tried to select the support set several times randomly? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive feedback and constructive suggestions! Below, we present detailed responses to the weaknesses (**W**), comments (**C**) and questions (**Q**). >**W1. Computational feasibility of optimal strategies** Please refer to the response to **Q2** of Reviewer nCbm. >**W2. Fine-tuning MLLMs on support set** On MVSA-S and Twitter-15, we sample three support sets (each comprising 1% of the training data) using three distinct random seeds. On these, we perform LoRA fine-tuning on the Q and V matrices within IDEFICS's gated xattn-dense layers, with a batch size of 1, a learning rate of 1e-4, and train for 3,000 steps. The training data is constructed in a zero-shot format, where the MLLM processes one image-text pair and its sentiment label at a time. The results (Accuracy) are reported below. |IDEFICS|Support Set|MVSA-S|Twitter-15| |-|:-:|:-:|:-:| |Zero-Shot Paradigm|-|38.6|60.7| |Zero-Shot Paradigm after Lora Fine-Tuning|#1|45.8|63.3| |ICL Ours 16-shot|#1|66.5|67.0| |Zero-Shot Paradigm after Lora Fine-Tuning|#2|48.9|62.3| |ICL Ours 16-shot|#2|67.2|66.6| |Zero-Shot Paradigm after Lora Fine-Tuning|#3|46.1|63.4| |ICL Ours 16-shot|#3|66.7|67.2| LoRA fine-tuning significantly improves the MLLM's sentiment perception capability under the zero-shot paradigm, yet it still lags behind the optimized ICL configuration. Across different support sets, ICL demonstrates more stable performance compared to LoRA fine-tuning. These results will be incorporated into the revised manuscript to enrich baselines and enhance the reliability of the findings. >**C1. Generalization to different MLLM architectures** We evaluate our optimized strategies on two other MLLMs: MiniCPM-o-2.6-8B [1] and GPT4o [2], and report the results (Accuracy) below. Our ICL strategies still demonstrate consistent performance advantages over other strategies, confirming their generalizability. 
|MiniCPM-o-2.6|Support Set|MVSA-S|Twitter-15|
|-|:-:|:-:|:-:|
|Zero-Shot Paradigm|-|56.0|52.8|
|ICL Random 16-shot|1% Training|60.6|59.9|
|ICL RICES 16-shot|1% Training|62.5|61.5|
|ICL Ours 16-shot|1% Training|67.4|68.2|
|ICL Random 16-shot|100% Training|60.4|59.9|
|ICL RICES 16-shot|100% Training|63.6|62.3|
|ICL Ours 16-shot|100% Training|67.9|70.3|

|GPT4o|Support Set|MVSA-S|Twitter-15|
|-|:-:|:-:|:-:|
|Zero-Shot Paradigm|-|60.8|59.4|
|ICL Random 16-shot|1% Training|63.6|61.0|
|ICL RICES 16-shot|1% Training|66.2|61.8|
|ICL Ours 16-shot|1% Training|72.5|68.7|
|ICL Random 16-shot|100% Training|63.7|61.1|
|ICL RICES 16-shot|100% Training|67.4|62.3|
|ICL Ours 16-shot|100% Training|74.1|69.7|

>**C2&Q2. Randomness in the selection of the support set** Please refer to the response to **W2**. >**Q1. Sensitivity of ICL to prompt variations** In the investigation, we experiment with various textual prompts and find that they significantly impact zero-shot performance. However, their impact on ICL is minimal. Since this manuscript primarily focuses on how ICL configurations influence MLLMs' sentiment perception capabilities, we select a set of appropriate textual prompts (#1 Prompt below) and keep them fixed throughout the investigation. The performance (Accuracy) of IDEFICS under different prompts is reported below. The support set contains 1% data from the training set. For post-level MSA: \ #1 Prompt: A post contains an image and a text. Classify the sentiment of the post into [Positive, Neutral, Negative]. \ #2 Prompt: Please classify the sentiment of the image-text post into [Positive, Neutral, Negative]. \ #3 Prompt: Here is a post containing an image and a text. The optional categories are [Positive, Neutral, Negative]. What is the overall sentiment of the post? For aspect-level MSA: \ #1 Prompt: A post contains an image, a text and an aspect. Identify the sentiment of the aspect in the post. The optional categories are [Positive, Neutral, Negative]. 
\ #2 Prompt: Please classify the sentiment of the aspect in image-text post into [Positive, Neutral, Negative]. \ #3 Prompt: Here is a post containing an image, a text and an aspect. The optional categories are [Positive, Neutral, Negative]. What is the sentiment of the aspect in the post? |IDEFICS|Prompt|MVSA-S|Twitter-15| |-|:-:|:-:|:-:| |Zero-Shot Paradigm|#1|38.6|60.7| |ICL Ours 16-shot|#1|66.5|67.0| |Zero-Shot Paradigm|#2|28.2|51.9| |ICL Ours 16-shot|#2|66.3|66.9| |Zero-Shot Paradigm|#3|50.6|19.1| |ICL Ours 16-shot|#3|66.4|66.7| >**References** [1] Yao et al. MiniCPM-V: A GPT-4V Level MLLM on Your Phone. arXiv, 2024. \ [2] OpenAI. GPT-4 Technical Report. arXiv, 2023. --- Rebuttal Comment 1.1: Comment: The authors have discussed the computational feasibility of optimal retrieval strategies, fine-tuned MLLMs on 3 different support sets, evaluated the optimized strategies on two extra MLLMs (MiniCPM-o-2.6-8B and GPT4o), and presented the sensitivity of ICL to prompt variations, which addresses all my concerns. Moreover, among the other 3 reviewers, 2 are positive and 1 is negative. As a result, I think making this work public is beneficial for related researchers, and I increase my rating to Accept.
Summary: The paper addresses Multimodal Sentiment Analysis using MLLMs by enhancing In-Context Learning through optimized demonstration retrieval, presentation, and distribution. It achieves significant accuracy gains over zero-shot and random ICL baselines and mitigates inherent sentiment bias. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There are no theoretical proofs provided in the paper. Experimental Designs Or Analyses: The experimental design is overall rigorous, validating the proposed improvements against zero-shot and random ICL baselines. However, concerns remain regarding dataset diversity, potential overfitting in demonstration selection, and computational overhead. Supplementary Material: I reviewed the entire supplementary material. Relation To Broader Scientific Literature: The paper builds on recent advances in multimodal learning by extending ICL strategies for sentiment analysis. It contributes to the literature by refining demonstration selection and prompt engineering while addressing sentiment bias in MLLMs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths + The paper tackles Multimodal Sentiment Analysis (MSA), which is a pivotal yet challenging task in the realm of MLLMs. It demonstrates that with proper In-Context Learning (ICL) configuration, MLLMs can achieve significantly better performance in sentiment analysis. + The discovery and mitigation of a sentimental predictive bias in MLLMs not only improves accuracy but also contributes to fairness in sentiment prediction, which is a valuable consideration in AI applications. Weaknesses - The requirement to fine-tune multiple aspects of demonstration configuration (retrieval, presentation, and distribution) may introduce additional complexity in implementation. This could make it challenging to deploy the approach in real-world scenarios without careful calibration. 
Other Comments Or Suggestions: N/A Questions For Authors: 1. Could you elaborate on how your ICL demonstration configuration strategies generalize to other multimodal tasks or larger, more diverse datasets beyond the six MSA datasets tested? 2. What are the computational and time overheads associated with retrieving, presenting, and distributing demonstrations compared to the zero-shot paradigm? 3. How exactly is the sentimental predictive bias identified and mitigated? Are there any specific metrics or case studies that demonstrate the effectiveness of this bias reduction? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your positive feedback and helpful comments! Below, we present detailed responses to the questions (**Q**) and weaknesses (**W**).

>**Q1. Generalization to other tasks or datasets**

Our strategies cover three factors: similarity measurement, modality presentation, and sentiment distribution. These components can be readily transferred to other multimodal tasks, such as multimodal sarcasm detection and multimodal crisis event detection. Both tasks accept image-text inputs, allowing direct adoption of the similarity measurement strategy (WIT) and the modality presentation strategy (Image, Text). Regarding classification targets, multimodal sarcasm detection focuses on whether a sarcastic expression is present, while multimodal crisis event detection aims to identify disaster types. By generalizing the concept of sentiment distribution to the distribution of different categories, we can replicate the Category Balanced protocol on these tasks. Experiments on HFM [1] (multimodal sarcasm detection) and CrisisMMD [2] (multimodal crisis event detection) demonstrate the generalizability of our strategies. We report accuracy and adopt 1% of the training set as the support set.

|IDEFICS|HFM|CrisisMMD|
|-|:-:|:-:|
|Zero-Shot Paradigm|71.8|69.4|
|ICL Random 16-shot|78.9|74.1|
|ICL RICES 16-shot|79.2|81.6|
|ICL Ours 16-shot|84.7|86.3|

Regarding the scale and diversity of datasets, they do not affect the application of the strategies themselves. When applying them to larger-scale datasets, the size of the support set can influence both the effect of ICL and the computational overhead, necessitating a trade-off based on specific priorities.

>**W1. Requirement of calibration in real-world scenarios**

While the optimization process includes additional computational overhead, our optimized strategies exhibit strong generalizability across six MSA datasets (as in the manuscript), other multimodal tasks (according to the response to **Q1**), and diverse MLLM frameworks (according to the response to **C1** of Reviewer gx7F). Therefore, in practical applications, our strategies can achieve promising results without further calibration.

>**Q2. Computational overhead compared to the zero-shot paradigm**

In the optimized configuration, presenting and distributing demonstrations do not introduce additional computational overhead. The extra costs originate from demonstration retrieval and the expanded input sequence for MLLMs. The former scales with the size of the support set, as each test sample needs to be compared against all support set samples, while the latter is inherent to ICL. We report the average time overhead (ms) of processing an image-text sample under two support set scales.

|IDEFICS|# of Samples in Support Set|Retrieval|Inference|Total|
|-|:-:|:-:|:-:|:-:|
|Zero-Shot Paradigm|-|-|78.1|78.1|
|ICL Random 4-shot|-|-|134.5|134.5|
|ICL Ours 4-shot|136|36.4|134.5|170.9|
|ICL Ours 4-shot|1562|64.2|134.5|198.7|
|ICL Random 16-shot|-|-|346.1|346.1|
|ICL Ours 16-shot|136|36.4|346.1|382.5|
|ICL Ours 16-shot|1562|64.2|346.1|410.3|

Compared to the zero-shot paradigm, most of the additional time overhead is introduced by ICL itself. The cost introduced by our strategies accounts for a minimal proportion, demonstrating their efficiency.

>**Q3. Identification and mitigation of the sentimental predictive bias**

In the manuscript, the sentimental predictive bias is identified based on the precision and recall metrics of the MLLM across different sentiment samples. As shown in Figure 6 (a-4) (b-4), the recall of negative samples (0-30/0-30) is significantly lower than the precision (50-90/54-88) on both datasets, implying the MLLM's tendency to favor positive and neutral predictions over negative predictions. On Twitter-17, the Category Balanced protocol (indicated by the diamond within the red clusters in Figure 6 (b-4)) configures more negative demonstrations (higher SLR-Negative) compared to the other protocols. Owing to the short-cut effect, this configuration amplifies the MLLM's tendency to make negative predictions during inference, thereby mitigating this bias. A more intuitive set of metrics is the proportion of negative samples classified by the MLLM as positive ($P_{pos}$), neutral ($P_{neu}$), and negative ($P_{neg}$). The table below demonstrates that nearly all negative samples under the Unlimited protocol are misclassified as positive or neutral, revealing a significant predictive bias, whereas this bias is mitigated by the Category Balanced protocol. In the revised manuscript, we will integrate these analyses and include case studies to clarify our findings.

|IDEFICS|$P_{pos}$|$P_{neu}$|$P_{neg}$|
|-|:-:|:-:|:-:|
|Unlimited|21.0%|73.0%|6.0%|
|Category Balanced|11.3%|60.7%|28.0%|

>**References**

[1] Cai et al. Multimodal Sarcasm Detection in Twitter with Hierarchical Fusion Model. ACL, 2019. \
[2] Alam et al. CrisisMMD: Multimodal Twitter Datasets from Natural Disasters. AAAI, 2018.
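The retrieval step whose cost is profiled in the response to **Q2** (each test sample compared against every support-set sample) can be sketched as a simple similarity ranking over precomputed embeddings. This is only an illustrative sketch of RICES-style demonstration retrieval, not the authors' implementation; the function name, embedding shapes, and normalization choices are assumptions.

```python
import numpy as np

def retrieve_demonstrations(query_emb, support_embs, k=4):
    """Rank support-set samples by cosine similarity to the query and
    return the indices of the top-k demonstrations.  The query is
    compared against every support sample, so retrieval cost grows
    with the support-set size, while inference cost depends only on
    the number of shots actually inserted into the prompt."""
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = s @ q                  # one dot product per support sample
    return np.argsort(-sims)[:k]  # indices of the k nearest demonstrations
```

This makes the table's pattern concrete: enlarging the support set (136 vs. 1562 samples) only enlarges the `sims` computation, leaving the MLLM inference time untouched.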
Hypo3D: Exploring Hypothetical Reasoning in 3D
Accept (poster)
Summary: This paper presents the construction of a new dataset, Hypo3D, for evaluating hypothetical reasoning performance on 3D scenes. The authors then evaluate several benchmark methods on this dataset and show large gaps between human and model performance.

Claims And Evidence: The authors claimed that existing models struggle to reason effectively in hypothetically changed scenes. Extensive experimental evaluation indeed validates this. But this claim does not involve any novel method. The only contribution is related to the dataset.

Methods And Evaluation Criteria: 1) There is no proposed method. 2) The evaluation criteria may be problematic. Most of the evaluated methods are based on large language models. To my understanding, in order to reason well, the perception module needs to be powerful enough to perceive as much useful information as possible before performing any meaningful reasoning. The authors put too much emphasis on large language models while overlooking the capability of visual perception modules. As a result, the results are very poor compared to human performance.

Theoretical Claims: There is no theoretical claim.

Experimental Designs Or Analyses: 1) There is no proposed method, and the authors mainly evaluated existing approaches. 2) As discussed above, the authors put too much emphasis on LLMs and overlook the importance of visual perception modules, yielding model performance much lower than human performance.

Supplementary Material: I briefly went through the SM, which mainly contains more details on the dataset and experimental results.

Relation To Broader Scientific Literature: The key contribution of this paper is the dataset. This dataset has limited value to the community.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: 1) The novelty of the dataset is limited.
Conceptually, the dataset can be constructed by modifying existing 3D datasets through posed context changes and revising the solutions of existing datasets in response to those changes. (As we have full control over the objects in the scene, this is convenient and easy.) As such, the value of constructing the Hypo3D dataset is greatly reduced.

2) The significance of this research is limited. No new method is proposed in this paper.

3) The so-called hypothetical reasoning is questionable. Before fully understanding the scene visually, hypothetical reasoning does not make much sense. In addition, this problem is in theory similar to a prediction-and-reasoning problem, i.e., given an existing scene and the tendency of change, we would like to reason over the future scene. Also, the context change is provided in the form of a text description. This is difficult to achieve in real-world applications, where we only have a change in the scene without any text description of the context change.

Other Comments Or Suggestions: No.

Questions For Authors: In summary, I would like to see more justifications on:
1) The novelty of this paper.
2) The difference and significance of the proposed dataset in comparison to existing datasets.
3) More justification of hypothetical reasoning over existing problem formulations.
4) More experimental results that utilize visual models to extract visual features for the scene, integrate them with linguistic features, and leverage LLMs to better answer the questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal: Interesting.

**Q1: About Novelty.**

Our Hypo3D benchmark is a novel, methodological contribution in itself, as noted by other reviewers. It focuses on structured evaluation, and it is the first 3D reasoning benchmark with explicit question-type annotations, enabling fine-grained evaluation. Many influential multimodal reasoning benchmarks [1–3] focus on evaluation rather than proposing new models, yet remain highly impactful.

Current vision models primarily handle basic spatial and semantic relationships but lack robust, holistic scene understanding. At the same time, reasoning in dynamic 3D environments remains a long-standing real-world challenge. We argue this challenge should not be deferred until all vision problems are solved. Instead, hypothetical reasoning offers a parallel research path, enabling progress on complex 3D reasoning alongside ongoing advances in foundational vision tasks.

Furthermore, hypothetical reasoning is central to human cognition. As humans, we often reason with incomplete perceptual information, updating our mental models as new data emerges. This iterative process—forming, testing, and refining hypotheses—is fundamental to intelligent behavior. Integrating this capability into VLMs is a step toward more human-like reasoning and a key milestone on the path to AGI.

Our problem fits into the prediction-and-reasoning domain only when robust, real-time 3D scene updates are possible. However, current limitations in 3D reconstruction and editing, as noted in the paper, make accurate future scene prediction challenging. As a result, the task naturally shifts to an imagination (inference)-and-reasoning problem.

Even widely used benchmarks like SQA3D could conceptually be constructed by posing situational descriptions and revising answers. Both our work and SQA3D deliberately avoided this. SQA3D emphasizes situational diversity, whereas Hypo3D focuses on diverse context changes.
Our pipeline first collects a wide range of valid context changes and then generates corresponding questions and answers. Starting with fixed questions would have greatly limited the feasible changes, especially given the strict constraints Hypo3D enforces on both changes and question design.

Text descriptions may not always be available, but they remain a cost-effective and widely accessible alternative to reconstructing the full 3D scene after each change. Our future work will explore multimodal representations of changes (e.g., images, depth) for more comprehensive hypothetical reasoning. Notably, in other reasoning tasks, early works like SQA3D relied on text to define the observer's situation, while later studies such as MSQA expanded to multimodal representations. This progression highlights the importance of text as a foundational step in developing new reasoning tasks.

**Q2: Evaluation Criteria.**

Most methods we evaluate are vision-language models (VLMs), not standalone LLMs. While VLMs include an LLM backbone, it is instruction-tuned with large-scale visual data (e.g., image captioning, VQA), endowing it with visual perceptual abilities beyond standard language understanding.

We conducted additional experiments to examine the impact of visual encoders on VLMs' hypothetical reasoning. Specifically, we compared Cambrian-1 [4] (using SigLIP, CLIP, DINOv2, and ConvNeXt) and LLaVA-Next (using only CLIP) [5], both sharing the same LLM backbone. Partial match results on semantic top-view maps below show that while leveraging more visual features can improve reasoning performance, a significant gap remains compared to human performance.
| Model | LLM Backbone | PM |
|-|-|-|
| Cambrian-1 13B | Vicuna1.5-13B | 41.80 |
| LLaVA Next 13B | Vicuna1.5-13B | 38.87 |
| Cambrian-1 34B | Hermes2-Yi-34B | 44.42 |
| LLaVA Next 34B | Hermes2-Yi-34B | 41.47 |
| Human | - | 92.50 |

**Q3: Difference to Existing Problem Formulation.**

Unlike traditional VQA tasks (2D or 3D), where answers can be directly inferred from the visual input, our hypothetical reasoning task introduces a key distinction: the visual input provides only prior context and is insufficient on its own. Hypo3D shifts the focus from “see and answer” to the more complex cognitive process of “see, imagine, and answer”.

[1] TopViewRS: Vision-Language Models as Top-View Spatial Reasoners. EMNLP, 2024.
[2] SQA3D: Situated Question Answering in 3D Scenes. ICLR, 2023.
[3] HourVideo: 1-Hour Video-Language Understanding. NeurIPS, 2024.
[4] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs. NeurIPS, 2024.
[5] LLaVA-NeXT: Improved Reasoning, OCR, and World Knowledge. arXiv, 2024.
Summary: The paper introduces Hypo3D, a benchmark task evaluating foundation models' ability to use hypothetical reasoning to "imagine" missing perceptual information in dynamic 3D scenes. It provides a dataset with various context changes and questions, showing that current models significantly underperform humans, frequently hallucinate irrelevant details, and struggle especially with movement changes and directional reasoning. The findings highlight current MLLMs' limitations in imagination-based reasoning, crucial for achieving human-like cognitive abilities.

## update after rebuttal

The rebuttal has addressed most of my concerns. The remaining concern is that the key factors behind the bottlenecks of 2D VLMs and LLMs in the hypothetical reasoning task need further discussion. Overall, the motivation for benchmarking MLLMs' hypothetical reasoning ability is interesting. I will keep my rating.

Claims And Evidence: The experiments on Hypo3D show the deficiency of current VLMs, supporting part of the motivation for proposing such a benchmark. One concern is whether the input data types are appropriate for verifying the hypothetical reasoning ability of VLMs.
- For 2D VLMs, the input data is a top-view image, which compresses the 3D scene into a plane, so there is information loss. Besides, a movement is a 3D transformation, which cannot be imagined in the top view. The failure is therefore more related to the 3D understanding ability of 2D VLMs, so this type of data may not be appropriate for testing their hypothetical reasoning ability.
- For LLMs, the ability is related to the level of detail provided in the caption.

Methods And Evaluation Criteria: The main concern is the input data type problem mentioned above, which may not support the motivation of testing hypothetical reasoning ability but rather 3D imagination ability.

Theoretical Claims: There is no proof for theoretical claims.
Experimental Designs Or Analyses: The experiments show current VLMs' ability of hypothetical reasoning on Hypo3D.

Supplementary Material: The supplementary material includes limitation discussions, benchmark details, and more experiments.

Relation To Broader Scientific Literature: This paper proposes a benchmark to measure current VLMs' hypothetical reasoning ability in indoor scenes, which includes top-view images, scene captions, point clouds, and RGB-D video based on the ScanNet dataset. The benchmark demonstrates the deficiency of imagination.

Essential References Not Discussed: It is recommended that the authors add a comparison between Hypo3D and current 3D grounding and captioning datasets, e.g., ScanRefer, MMScan, and a series of related datasets.

Other Strengths And Weaknesses: The paper is well-written and organized.

Other Comments Or Suggestions: Nil.

Questions For Authors: In fact, current 2D VLMs themselves may not be proficient at handling scale and direction questions. The bad performance on Hypo3D comes from both hypothetical reasoning and fundamental ability. A good comparison experiment is testing the VLMs without adding changes to the scene, which can better help to find the key reason.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We have added further explanations and additional experimental results to address your concern regarding the input data type issue.

**Q1: 2D VLM Input**

We adopted top-view images for 2D VLMs to maintain consistency with established 3D reasoning benchmarks like SQA3D. While top-view images cannot fully capture 3D geometry, they provide the most comprehensive spatial information in a single image. Besides, modern 2D VLMs have been able to extract depth cues from single images and reason about 3D positioning. This has been validated by our results, where 2D VLMs perform adequately on questions involving vertical reasoning (e.g., height). Human evaluators have also reported that top-view images were generally sufficient for them to answer questions in our dataset.

Additionally, we evaluated 2D VLMs (semantic maps) using multi-view inputs (top, front, back, left, and right) compared to using the top view only. The results on 50 randomly sampled scenes below show that performance remains comparable to using only the top view. This suggests that while multi-view inputs offer richer visual information, integrating visual features from different views presents another challenge for the models.

| Model | View | EM | PM |
|-|-|-|-|
| LLaVA-OV 7B | Top | 34.81 | 38.60 |
| | Multi | 34.24 | 38.19 |
| LLaVA-OV 72B | Top | 43.01 | 46.83 |
| | Multi | 42.52 | 47.06 |
| Qwen2-VL 7B | Top | 34.40 | 38.91 |
| | Multi | 35.99 | 41.19 |
| Qwen2-VL 72B | Top | 44.25 | 48.25 |
| | Multi | 43.04 | 47.50 |

**Q2: LLM Input**

To assess how caption detail affects LLM performance, we tested LLaMA-3.2 with varying numbers of sampled captions. As shown in the table below, more detailed inputs do not consistently improve performance—possibly due to the increased challenge of long-text reasoning. Following the SQA3D protocol, we use 30 randomly sampled object captions for the final scene description.
| #Captions | EM | PM |
|-|-|-|
| 30 | 23.95 | 28.62 |
| 50 | 23.88 | 28.34 |
| 100 | 24.34 | 28.91 |
| 200 | 22.91 | 28.01 |

**Q3: Dataset Comparison**

The table below compares Hypo3D and existing 3D visual grounding (VG) and captioning datasets. Hypo3D is the first to annotate all question types and world frames, requiring models to understand hypothetical scenes.

| Dataset | Task | Question Type Annotation? | Hypothetical? | World Frame? | #Scans | #Language | Annotation |
|-|-|-|-|-|-|-|-|
| ScanRefer | VG | N/A | ✗ | ✗ | 0.7k | 11k | Human |
| Sr3D | VG | N/A | ✗ | ✗ | 0.7k | 115k | Template |
| ScanQA | QA | ✗ | ✗ | ✗ | 0.8k | 41k | Template |
| SQA3D | QA | ✗ | ✗ | ✗ | 0.65k | 33.4k | Human |
| ScanScribe | Captioning | N/A | ✗ | ✗ | 1.2k | 278k | GPT |
| MMScan | VG + Captioning + QA | ✗ | ✗ | ✗ | 5.2k | 6.9M | GPT + Temp. + Human |
| Hypo3D (Ours) | QA | ✓ | ✓ | ✓ | 0.7k | 15k | GPT + Human |

**Q4: Impact of Recognition on Reasoning**

To mitigate the impact of object recognition on hypothetical reasoning, most experiments in the paper utilized semantic top-view maps with explicit text labels. We also evaluated VLMs on unchanged scenes in Table 3, where the performance was significantly higher. This confirms that the main challenge primarily lies in reasoning about hypothetical changes. All results above will be included in the final version of the paper.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' exquisite rebuttal. The rebuttal addressed part of my concerns. There are still some concerns about the proposed benchmark.

1. There are ambiguities in "Replacement Change" and "Addition Change". The context change does not provide the size of the newly added objects.
It is not easy even for humans to answer the scale- and direction-related questions by imagination when the new objects come with huge size changes.

2. Existing 3D scene caption datasets do give descriptions of objects, relationships, and coarse locations. However, it is hard to recover the accurate layout of the whole scene from these captions, since captions suffer substantial information loss, e.g., the accurate positions of objects, the metric measurements of objects, and the relationships between objects. The LLM faces challenges in fully understanding the whole scene from these captions, so it probably fails to finish the hypothetical reasoning tasks. This can be validated from Tab. 3 and 4 (LLaMA-3.2). The results in the rebuttal also indicate that no matter how many captions are provided, the LLM fails to conduct hypothetical reasoning. For 2D VLM input, the results indicate a similar conclusion as for the LLM: the 2D VLMs themselves fall short of understanding the whole scene from the top-view or multi-view images to conduct hypothetical reasoning.

3. The results in Tab. 4 and Tab. 3:
- Tab. 4 indicates that an irrelevant context change leads to a performance decline, and a similar decline happens after adding the relevant context changes for the LLM and 2D VLMs. Interestingly, for the 3D VLM in Tab. 4, LLaVA-3D can adapt to the irrelevant context changes.
- This phenomenon may indicate that 2D VLMs and LLMs fail to understand the context change phrase, since they face challenges in understanding the whole scene. LLaVA-3D can understand the context change phrase, since it accepts 3D scene inputs and fully understands the whole scene. Therefore, the comparison of LLaVA-3D's results between Tab. 3 and 4 is a key observation, which can explain that LLaVA-3D is not good at hypothetical reasoning, whereas the LLM and 2D VLMs struggle with understanding the whole scene and the context change phrase, not with the hypothetical reasoning.

4. The insight of testing the hypothetical reasoning ability of existing models is interesting.
It is important to find the real bottleneck of these models. It would be better to disentangle the key factors behind the bottlenecks of the 2D VLMs and LLMs.

---

Reply to Comment 1.1.1: Comment: Dear reviewer,

Thanks for your comment. Since the discussion period will be closed in half an hour, I can only address your concern via further explanation and cannot provide more experimental results.

**1** For changes involving object replacements and additions, we observed ambiguity in the size of the newly added objects. As a result, our scale-related questions focus primarily on proximity (e.g., which object is closer to another) rather than comparing object sizes. Since the locations of added objects are precisely defined based on neighbor objects, the model can reliably answer proximity-based and direction-based questions. To further reduce ambiguity, we only ask about pairs of objects with clearly distinguishable locations. The only size-related questions we include for addition changes are of the form: "What object is the largest below the added object?" These questions do not require knowing the exact size of the added object. Instead, they focus on comparing the sizes of nearby objects relative to the added one, without involving the added object's own size.

**2** We acknowledge that 2D VLMs and LLMs do not have full access to the 3D scene, which limits their scene understanding capabilities. However, it is important to highlight that 2D VLMs currently achieve the best performance on hypothetical reasoning tasks. In this work, we closely follow the evaluation protocol from prior studies, selecting LLMs, 2D VLMs, and 3D VLMs to assess reasoning abilities. For LLMs, we provide ground-truth scene captions as input to maximize the accuracy of scene information conveyed through text. For 2D VLMs, which can only process image inputs, it is challenging to represent the full 3D scene using a single or even a multi-view image.
To address this, we use a semantic top-view map as input to reduce errors from object recognition and better capture the scene layout. While previous reasoning datasets often use non-semantic maps, we argue that our dataset optimizes the input representation for 2D VLMs to the greatest extent possible.

**3** We appreciate the reviewer's thoughtful analysis of Tables 3 and 4. However, we must respectfully clarify that the data does not support the conclusion that 2D VLMs primarily struggle with scene understanding compared to 3D VLMs. In fact, our results show that 2D VLMs consistently outperform LLaVA-3D across both tables. For example, Qwen2-VL 72B achieves 31.50% EM with changes in Table 3, significantly higher than LLaVA-3D's 20.50%. The key observation is that all models—regardless of architecture—show performance degradation when required to reason about hypothetical changes, though to varying degrees. This suggests a fundamental limitation in hypothetical reasoning across current models rather than primarily a scene-understanding issue.

**4** We fully agree that determining the real limitations of these models is crucial. However, as the reviewer is certainly aware, LLMs and VLMs are highly complex systems with numerous interconnected components, making it challenging to disentangle individual elements and examine them with perfect interpretability at a fine-grained level. Our current work represents an initial step toward understanding the hypothetical reasoning capabilities of existing models. In this first exploration, we prioritized evaluating these models as complete systems through our carefully constructed benchmark, which has already revealed significant performance gaps and interesting patterns across different model types and question categories. Moving forward, we plan to conduct a more detailed factor analysis that isolates specific bottlenecks for each model type.
This analysis will broadly address two fundamental questions: (1) Can current models accurately perceive the scene and comprehend the context change? and (2) Assuming models can successfully perceive and understand the scene and context change, can they properly perform hypothetical reasoning? To achieve this goal, we will implement a more methodical, step-by-step evaluation approach that isolates components from the models, allowing us to better pinpoint the specific limitations in their hypothetical reasoning capabilities. This deeper analysis will provide more targeted insights for future model development.
Summary: The paper introduces a 3D reasoning benchmark called Hypothetical 3D Reasoning (Hypo3D). Specifically, the components of the benchmark can be summarized as follows. Consider a 3D scene representation (S) and a world frame from the scene (F) that contains an anchor object for specifying the direction to the model, e.g., "the table is to the left" will signal to the model which direction is left. A set of context changes (C) is designed to capture possible modifications to the scene S that yield modified scenes S* (note that S* is not actually constructed). And finally, a set of questions (Q) and answers (A) on the modified scene S*. The idea is to assess the "imagination" of existing foundation models in answering questions (Q) given the scene (S) and the context changes (C) by imagining what S* would look like. The authors consider three broad categories of questions: (1) scale-based, (2) direction-based, (3) semantic, and include five types of context changes: (1) movement, (2) removal, (3) attribute, (4) addition, and (5) replacement. Experimental results for a variety of LLMs, 2D VLMs, and 3D VLMs are shown on this benchmark, highlighting that the existing models fail at hypothetical reasoning.

## Update after rebuttal

I thank the authors for addressing all of my concerns and questions. I will keep my weak accept rating.

Claims And Evidence: Most of the claims are clear. The major claim in the paper is that, given the benchmark, existing LLMs and VLMs show unsatisfactory performance, which is supported by the evaluation and experiments in the paper. However, I have a bunch of questions on the evaluation that are listed in the following sections.

Methods And Evaluation Criteria: The paper proposes a benchmark by itself and uses that to study LLMs and VLMs. The design of the benchmark explores an interesting problem of "hypothetical reasoning", is a good contribution to the community, and makes sense for assessing LLMs and VLMs.

Theoretical Claims: Not applicable.
Experimental Designs Or Analyses:

**Benchmark design:**
* More insights into the ground-truth answers of the questions: in Line 240 (right column), it is mentioned that an answer contains 1.28 words on average. What do the answers look like? Are they always names of objects, directions, numbers, etc.?
* How diverse are the directions defined using the world frame? What I mean by this is: from a given top-view image of a 3D scene that a person is looking at, let's consider the person's left as the "person-left" direction for the scene. Depending on the "left" that the world frame defines, that may or may not match with "person-left" -- if it matches, how often does it? Or how often does it not match?
* Are there examples in which the anchor object is the object under consideration for the context change instruction? E.g., the same object that is used as the anchor object is modified in the context change instruction.

**Evaluation:**
* While the use of exact match (EM) as a metric makes sense, I am not satisfied with the use of partial match (PM) as a metric that computes the percentage of overlapping words. The reason is that the answers are 1.28 words long on average, and given such short answers, PM doesn't make sense. Also, is the overlap computed over words or tokens?
* Missing baselines: as the answers mostly contain names of objects, attributes of objects, and directions, it is very likely for the LLM/VLM to spit out random words by simply recognizing objects in the scene. So, one important baseline would be to ask the LLM/VLM to recognize the objects, their attributes, etc. in the scene (this will result in a set of words for a scene), make a random prediction out of that set, and then report the accuracy obtained.
* In-context learning: Was in-context learning tried? It would be great to see the performance of these models when in-context examples are provided.
* Chain-of-thought: Did the authors try CoT on Hypo3D? It would also be great to see the performance of these models using chain-of-thought prompting on this benchmark.
* Line 359: "... most models exhibit severe hallucinations ..." -- any analysis on that? What do the model hallucinations look like? Are there any correlations with the question or context change type?
* How often does the model predict the anchor object as the answer?
* In "Insight 5": does the evaluation consider questions in which there is no overlap between the object the question asks about and the object considered for the context change? E.g., "what is the color of the coffee table" is the question and "move the couch to the left of the TV" is the context change.
* In Table 2, for the "w. Camera" results, the drop in accuracy w.r.t. "w/o frame" is not much. Any further analysis on this?
* Any insights on why, in Table 1, GPT-4o (with non-semantic top-view map) performs worse than GPT-4o (text only)?
* Lines 317-318 (left column): "... though it is not 100% due to the open-ended nature of Hypo3D questions ..." -- what does this mean exactly? Any examples?

Supplementary Material: Yes, I have reviewed all the parts of the supplementary material.

Relation To Broader Scientific Literature: I think the major contribution of the paper is the benchmark that can be broadly used to assess LLMs and VLMs to make them spatially aware and enable spatial reasoning. However, I think the evaluation process and metrics used in the paper need to be made more rigorous (as discussed above).

Essential References Not Discussed: I think the references are sufficient.

Other Strengths And Weaknesses: Most of my concerns are listed above. Although the benchmark is a great contribution, I feel the assessment of the LLMs and VLMs in the paper could have been more rigorous, e.g., evaluation using in-context learning, chain-of-thought, etc.

Other Comments Or Suggestions: N/A

Questions For Authors: I have included all of my concerns in the previous sections.
I would like responses focussed on answering those concerns in the rebuttal.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
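For concreteness, the word-level partial match (PM) metric questioned in this review could be computed roughly as follows. This is only a minimal sketch under the assumption of whitespace tokenization and case-insensitive comparison; the function names and exact normalization are illustrative, not taken from the paper.

```python
def exact_match(pred: str, gold: str) -> bool:
    """Exact match after lowercasing and stripping whitespace."""
    return pred.strip().lower() == gold.strip().lower()

def partial_match(pred: str, gold: str) -> float:
    """Fraction of ground-truth words that also appear in the
    prediction (one plausible reading of word-level PM overlap)."""
    gold_words = gold.lower().split()
    pred_words = set(pred.lower().split())
    if not gold_words:
        return 0.0
    return sum(w in pred_words for w in gold_words) / len(gold_words)
```

Under this reading, a prediction of "front" against the ground truth "front left" fails EM but scores 0.5 on PM, which is exactly the short-answer sensitivity the review raises.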
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful feedback. We're pleased to hear that you found our dataset to be a great contribution. All 2D VLM results reported below use the semantic map by default.

**Q1: Answer Annotation**

Similar to the answer types in Fig. 12 of SQA3D, our annotations include object names, attributes (e.g., shape, color, size, functionality, state, etc.), directions, numbers, and so on. The complete answer distribution will be included in the final version.

**Q2: World Frame**

We randomly rotated the top-view images before inputting them to VLMs, ensuring the world frame definition aligns with the person's orientation in 25% of scenes.

**Q3: Anchor**

4.41% of context changes involve changes to the anchor object. E.g.,

*Orientation: The shower is on the front side of the scene.*
*Change: The shower has been moved to the left of the toilet.*

**Q4: PM**

Following TopviewRS, we used PM to measure word-level overlap, particularly for answers like "front" and "front left". We also employed SBERT scores to reduce evaluation bias in phrasing (Fig. 14). We further computed a GPT-based score following MSQA, with results detailed in our response to **kxrB**. Both metrics show a consistent ranking with PM.

**Q5: Baseline**

We conducted a baseline experiment with GPT-4o, which showed significantly lower EM accuracy than the models in Table 1. This confirms that LLMs/VLMs are not merely recognizing objects or guessing, but possess limited capacity for hypothetical reasoning. Also, using semantic maps improved performance over non-semantic inputs, as VLMs benefit from explicit hover-text labels for more accurate recognition.

|Model|Baseline|Reasoning|
|-|-|-|
|GPT-4o (non-sem.)|14.86|33.58|
|GPT-4o (sem.)|17.15|45.50|

**Q6: In-context learning (ICL)**

We applied three-shot ICL to 2D VLMs and LLMs.
ICL generally reduced EM performance, potentially because the limited example set cannot adequately represent the diversity in our context changes and questions. Moreover, we observed models directly copying answers from examples, indicating their inability to process long context effectively.

|Model|w/o ICL|w/ ICL|
|-|-|-|
|LLaMA-3.2|29.30|23.88|
|LLaVA-OV 72B|40.26|33.53|
|Qwen2-VL 72B|41.94|36.52|

**Q7: Chain-of-Thought (CoT)**

We have utilized CoT to explicitly decompose the task into: (1) imagine how the change affects the scene, and (2) answer the question based on the changed scene. To further investigate CoT, we tested models with a simplified prompt:

*Scene orientation: {}*
*Context Change: {}*
*Question: {}*
*Answer:*

Removing CoT prompting reduces performance in most models except Qwen2-VL 72B, suggesting that step-by-step reasoning aids hypothetical reasoning to some extent. Still, results lag behind human levels, and we plan to explore more advanced CoT methods in the future.

|Model|w/o CoT|w/ CoT|
|-|-|-|
|LLaMA-3.2|23.91|26.08|
|LLaVA-OV 72B|42.78|43.01|
|Qwen2-VL 72B|44.90|44.25|
|LLaVA-3D|29.30|31.56|

**Q8: Hallucination**

The hallucination in line 359 refers to the model incorrectly adjusting its answer in response to an irrelevant change, as noted in Insight 5. More examples of such hallucinations are provided in the response to Q10.

**Q9: Anchor Object as Answer**

The table below displays the rate at which models predict anchor objects as answers. LLaVA-3D exhibits a much higher rate compared to both the other models and the ground-truth frequency, suggesting it tends to copy anchor objects as answers rather than engaging in reasoning.

||GT|GPT4o (Text)|GPT4o (sem.)|LLaVA-3D|
|-|-|-|-|-|
|Rate (%)|3.4|3.2|2.8|5.3|

**Q10: No Overlap**

We considered questions where the queried and changed objects do not overlap.
E.g.,

*C: The lamp has been put onto the bath cabinet.*
*Q: Where is the toilet paper relative to the trash can?*

**Q11: World Frame Definition**

The main difference between the w. camera and w/o frame settings is the inclusion of an additional camera-view image. While models may not accurately interpret scene orientation from it, the image still provides extra visual context, so a significant accuracy drop is not expected.

**Q12: Non-semantic vs. Text-only**

GPT-4o (non-sem.) underperforms due to difficulty recognizing objects in noisy scenes. In contrast, GPT-4o (text-only) directly receives explicit object names and attributes from the caption.

**Q13: Human Performance**

Human performance falls short of 100% due to typos, formatting mismatches, and occasional misinterpretation of noisy scenes. Notably, less-than-perfect human performance is common in previous open-ended VQA benchmarks such as ScanQA and SQA3D. Below are examples that humans failed; more can be found in our response to **kxrB**.

|Question|Human|GT|
|-|-|-|
|How does the added Nescafé espresso machine's position compare to the paper towel?|Above the paper towel|Higher|
|Which is closer to the added rubber gloves: the trash can or the paper towel roll?|Paper tower roll|Paper towel roll|

---

Rebuttal Comment 1.1:

Comment: I thank the authors for addressing all of my concerns and questions. I will keep my weak accept rating.

---

Reply to Comment 1.1.1:

Comment: We are thrilled that our rebuttal and additional experiments have addressed all of your concerns. All results will be included in the final version of the paper.
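As background for the PM metric discussed in Q4 above, here is a minimal sketch of a word-level partial-match score. This is an assumed formulation (fraction of ground-truth tokens recovered by the prediction), not necessarily the benchmark's exact implementation:

```python
def partial_match(prediction: str, ground_truth: str) -> float:
    """Word-level overlap between a predicted answer and the ground truth.

    Assumed formulation: the fraction of ground-truth tokens that appear
    in the prediction. A prediction of "front" against the answer
    "front left" then scores 0.5, where exact match would give 0.
    """
    pred_tokens = prediction.lower().split()
    gt_tokens = ground_truth.lower().split()
    if not gt_tokens:
        return 0.0
    matched = sum(1 for tok in gt_tokens if tok in pred_tokens)
    return matched / len(gt_tokens)
```

For instance, `partial_match("front", "front left")` returns 0.5, which is why PM is more forgiving than EM for directional answers like those in the benchmark.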
Summary: This paper introduces a novel 3D-VQA benchmark called Hypo3D. Given a 3D scene representation (e.g., point clouds, BEV images) and a description of how the scene has changed, the model must infer the updated scene and answer a question based on it. The authors benchmark a range of open-source and closed-source 2D and 3D VLMs, identifying their limitations.

Claims And Evidence: Most of the claims are supported.

Methods And Evaluation Criteria:

Strengths
1. This paper introduces a novel and compelling benchmark, tackling a crucial problem in 3D scene understanding, namely how to handle changing scenes.
2. The authors provide a clear and detailed data collection process that involves human annotators, helping to ensure quality.

Weaknesses
1. According to Figure 2, the scene orientation descriptions rely on a top-view perspective (using terms such as "at the back of the scene" or "left side of the scene"). However, when assessing 3D-based VLMs, the inputs are point clouds and ego-view images rather than top-view maps, which may lead to ambiguity in interpreting orientation.
2. The benchmark primarily uses Exact Match (EM) and Partial Match (PM) for evaluation. While somewhat reasonable for single-word or short-phrase answers, these metrics may not reliably assess correctness if models produce longer responses. A potential improvement would be to employ GPT-4 to reformat the model outputs (e.g., summarizing them into a single word or short phrase) or to have GPT-4 judge correctness based on both the model's response and the ground-truth answers. This approach could mitigate formatting-related biases and focus more accurately on the model's scene-understanding capabilities.
3. Lines 315-318 reveal that human annotators do not achieve 100% accuracy on the benchmark, implying the presence of ambiguous samples. The authors may wish to analyze and remove such ambiguous cases before releasing the dataset, as this would be pivotal for the broader research community.
Theoretical Claims: NA

Experimental Designs Or Analyses:

Strengths
1. This work offers valuable insights into the performance of various models, which is much appreciated.

Weaknesses
1. For open-source 2D VLMs, the InternVL series is also widely recognized. It would be beneficial to include these models in the benchmark results.
2. Current 3D VLMs underperform, likely due to insufficient instruction tuning for such tasks. To help the field better understand their limitations, it would be useful if the authors fine-tuned these models with a small amount of relevant data before conducting the benchmark.

Supplementary Material: /

Relation To Broader Scientific Literature: Overall, I think the proposed benchmark is novel and interesting.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: NA

Other Comments Or Suggestions: NA

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We appreciate your insightful feedback. We're glad to hear that you found our work to be novel and interesting.

**Q1: Scene Orientation**

For 3D VLMs using point clouds (e.g., LEO), inputs have been explicitly aligned to a top-view perspective, with the floor on the XY-plane and vertical structures along the Z-axis. We acknowledge that 3D VLMs taking ego-view images (e.g., LLaVA-3D) are expected to align 3D scenes internally for orientation understanding. Yet, they are provided with comprehensive information (i.e., multi-view RGB, depth, camera poses, and axis-alignment matrices) for implicit scene alignment. While orientation may be more challenging to interpret for 3D VLMs than for 2D VLMs, all models are provided with sufficient information to support effective orientation comprehension.

**Q2: GPT-4 as Judge**

Beyond EM and PM, we used SBERT (Fig. 14) for text similarity scoring to reduce evaluation bias in phrasing. Here, we explored GPT-based evaluation for open-ended responses following MSQA [1]. Each GPT score $C$ is formulated as:

$$C = \frac{1}{N} \sum_{i=1}^{N} \frac{s_i - 1}{4} \times 100\%$$

where $N$ is the number of questions and $s_i \in [1, 5]$ (higher is better) is the discrete score assigned by GPT-4o-mini, which takes the question, ground truth, and model response as input. The scores for all models are shown below:

| Model | GPT Score |
|-|-|
| LLaMA-3.2 | 28.13 |
| GPT-4o (text-only) | 37.89 |
| Qwen2-VL 7B (non-sem.) | 32.01 |
| Qwen2-VL 72B (non-sem.) | 35.58 |
| LLaVA-OV 7B (non-sem.) | 32.29 |
| LLaVA-OV 72B (non-sem.) | 38.20 |
| Claude 3.5 Sonnet (non-sem.) | 25.27 |
| GPT-4o (non-sem.) | 35.49 |
| Qwen2-VL 7B (sem.) | 36.74 |
| Qwen2-VL 72B (sem.) | 45.90 |
| LLaVA-OV 7B (sem.) | 36.91 |
| LLaVA-OV 72B (sem.) | 45.11 |
| Claude 3.5 Sonnet (sem.) | 42.76 |
| GPT-4o (sem.) | 46.55 |
| LEO | 17.47 |
| LLaVA-3D | 33.80 |

Rankings by GPT score closely align with the PM rankings in Table 1, validating the reliability of PM.

**Q3: Human Performance**

Human performance rarely reaches 100% in open-ended VQA datasets. For example, the best EM@1 on ScanQA is 51.6%, and SQA3D reports 85-95% accuracy. Our dataset achieves even higher human scores, suggesting fewer ambiguities. Most errors are due to typos, vague phrasing, formatting mismatches, and inherent noise in 3D scenes. E.g.,

| Question | Human Answer | GT |
|-|-|-|
| Where in the room would you stand to be the furthest from the liquid spill? | Door | Next to the door |
| How does the placement position of the added Nescafé espresso machine compare with the paper towel? | Above the paper towel | Higher |
| Which is closer to the newly added rubber gloves, the trash can or the paper towel roll? | Paper tower roll | Paper towel roll |

**Q4: InternVL-2.5**

In response, we have evaluated InternVL-2.5 [2] (8B and 38B) on the Hypo3D benchmark using both semantic and non-semantic top-view maps. EM results show that the 8B version of InternVL-2.5 consistently underperforms LLaVA-OV 7B and Qwen2-VL 7B across all settings on the Hypo3D task.

| Model | Movement | Removal | Attribute | Replacement | Addition | Overall |
|-|-|-|-|-|-|-|
| InternVL-2.5 8B (non-sem.) | 20.07 | 19.97 | 19.22 | 16.12 | 20.57 | 19.38 |
| InternVL-2.5 38B (non-sem.) | 26.56 | 27.67 | 27.02 | 24.47 | 29.47 | 27.01 |
| InternVL-2.5 8B (sem.) | 21.96 | 24.82 | 27.41 | 16.92 | 27.74 | 23.83 |
| InternVL-2.5 38B (sem.) | 30.66 | 36.77 | 37.42 | 35.36 | 38.37 | 35.18 |

**Q5: Instruction Tuning**

We agree that exploring instruction-tuned 2D and 3D VLMs on our task would be valuable. Given the limited availability of computing resources, we intend to carry out a systematic study as part of our future work.
We currently focus on zero-shot evaluation to ensure a fair comparison across all models. Notably, LLaVA-3D (7B), the SoTA 3D VLM, performs comparably to similar-sized 2D VLMs, suggesting that 3D VLMs' relatively weaker performance may stem from smaller model sizes.

[1] MSQA: Multi-modal Situated Reasoning in 3D Scenes, NeurIPS, 2024.
[2] Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling, arXiv, 2024.

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' rebuttal and think the contents in the rebuttal should be included in the final version of the paper.

---

Reply to Comment 1.1.1:

Comment: We sincerely appreciate your review of our rebuttal. We will certainly add all the results presented in the rebuttal to the final version of the paper. If you have any further questions regarding our work, please feel free to contact us.
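As a supplement to the GPT-based evaluation described in Q2 of the rebuttal above, here is a minimal sketch of the score normalization $C = \frac{1}{N} \sum_{i=1}^{N} \frac{s_i - 1}{4} \times 100\%$. The GPT-4o-mini judging call itself is omitted; `judge_scores` is a hypothetical list of the discrete 1-5 ratings it would return:

```python
def gpt_score(judge_scores: list[int]) -> float:
    """Normalize discrete 1-5 judge ratings into a 0-100% quality score.

    Implements C = (1/N) * sum((s_i - 1) / 4) * 100 from Q2:
    a rating of 1 maps to 0% and a rating of 5 maps to 100%.
    """
    if not judge_scores:
        raise ValueError("need at least one judge rating")
    n = len(judge_scores)
    return sum((s - 1) / 4 for s in judge_scores) / n * 100.0
```

For example, `gpt_score([1, 3, 5])` yields 50.0, since the three ratings normalize to 0%, 50%, and 100% before averaging.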