Title: Guided Structural Inference: Leveraging Priors with Soft Gating Mechanisms
Decision: Accept (poster)
Summary: The paper introduces a more controlled method for structural inference that allows us to impose a series of constraints on the predicted structure. Specifically, it enables conditioning on a set of edges that must exist in the structure, enforcing the absence of certain edges, and controlling sparsity in terms of both the total number of edges and the degree of individual nodes. At a technical level, the method is a variation of NRI with the following modifications: for restricting the edges, the method introduces a learnable gate per edge, which is "cloned-and-clamped" for edges that belong to one of the two special sets; for sparsity in terms of the number of edges or the degree of the nodes, additional regularisation terms are added to the loss function. Claims And Evidence: Yes. Methods And Evaluation Criteria: - In all the experiments, the paper reports only AUROC for the predicted structure. Is there a specific reason why AUROC was chosen to assess the quality of the prediction? How would the models compare using more discrete metrics, such as structure accuracy? - The main focus of the method is to predict a constrained latent structure. However, without a downstream task, the practical applicability of this approach remains unclear. As in previous works such as NRI and ACD, it is important to quantify how well a decoder utilizing the predicted latent structure performs on a downstream task. While I understand that achieving worse performance than the baselines on downstream tasks may be expected (given that the model has the additional advantage of enforcing hard constraints that could be crucial), it is still essential to demonstrate that the predicted structure remains useful to some extent. These evaluations would provide a more comprehensive understanding of the model. Theoretical Claims: The theoretical claims are correct.
Experimental Designs Or Analyses: - While less explicit, the NRI paper also introduces a sparsity regularisation in the form of a modified prior distribution, which assigns a higher likelihood to non-edges compared to edges. Do the baseline experiments presented in the paper control that sparsity in any way? Supplementary Material: I read the implementation details in the supplementary material. Relation To Broader Scientific Literature: The paper does a good job of summarizing the existing literature on relational inference. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - In my opinion, the technical contribution of the paper is limited. The method differs from previous work in three main aspects: the clone-and-clamp technique and two regularisation losses. The regularisation losses are trivial and require strong domain knowledge. Without such prior knowledge, the sparsity and degree become just more hyperparameters. Moreover, it is unclear how imposing them would affect a potential downstream task. Regarding the clone-and-clamp technique, it is still unclear to me why the quality and stability of the gradients are superior in this case compared to simply masking the edges, as done in the baselines. I acknowledge the advantages brought by eliminating those terms from the KL divergence loss, but I still have concerns about how the technique improves gradient quality. Please provide more details or additional experiments to clarify this point. - The gating mechanism relies on a learnable set of parameters, one for each edge. This imposes a strong limitation on the model, since it implies that the structure behaves similarly across different examples in the dataset. In addition, it makes it impossible to generalise to examples with a different number of nodes. Other Comments Or Suggestions: N/A Questions For Authors: Please see the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your detailed review and constructive feedback. Below, we address your concerns (detailed tables can be found at https://anonymous.4open.science/r/SGSI-Rebuttal-1614/Tables.pdf ): **1. Choice of Evaluation Metrics and Downstream Tasks:** We selected AUROC as our primary metric because it is standard in the structural inference literature, used by ACD, iSIDG, ALaSI, and RCSI, and effectively measures true positive rates over various thresholds, as demonstrated in Pratapa et al. (2020). In our context, where many methods output probabilities rather than hard binary edges, AUROC provides a comprehensive assessment of latent graph recovery. In addition, although our primary goal is to infer the underlying interacting structure of dynamical systems, SGSI is also applicable to time-series forecasting. For instance, we report Mean Squared Error (MSE) for predicting 10 future steps on several datasets. Table 6 in the attached link compares NRI, SGSI without prior knowledge (SGSI raw), and SGSI with 20% known-present edges. These results show that while SGSI raw matches NRI, the integration of prior knowledge significantly improves forecasting accuracy. This dual capability demonstrates that our method is not only effective for structural inference but also beneficial for downstream prediction tasks. **2. NRI Variants** We actually use both variants of NRI (with and without sparsity regularization), but found that the one with sparsity regularization outperformed the other variant by 1%-3% AUROC. So we refer to the one with sparsity regularization. We have revised our paper to state this more clearly. **3. Technical Contribution** Modern autograd frameworks track tensor “versions” to compute gradients accurately. In-place masking, which forces gating values to 0 or 1 within the same tensor, disrupts this mechanism, causing version mismatches and unstable gradients.
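One way to avoid this, cloning the gating vector and clamping only the copy, can be sketched in PyTorch; all names, shapes, and the stand-in loss below are illustrative, not the authors' implementation:

```python
import torch

# Learnable gating logits, one per edge, and soft gates in (0, 1).
theta = torch.randn(6, requires_grad=True)
gates = torch.sigmoid(theta)

known_present = [0]  # edges constrained to exist
known_absent = [3]   # edges constrained to be absent

# Clone first, then overwrite only the copy: the original `gates`
# tensor's autograd history stays intact, so gradients still flow
# back to `theta` for the unconstrained edges.
clamped = gates.clone()
clamped[known_present] = 1.0
clamped[known_absent] = 0.0

# Stand-in for the downstream message-passing / reconstruction loss.
loss = (clamped ** 2).sum()
loss.backward()
# theta receives zero gradient on the pinned edges and a nonzero
# gradient elsewhere, so the constraints inject no contradictory updates.
```

In this sketch the pinned entries of the clone are detached from the logits, which mirrors the claim that constraints do not produce conflicting gradient signals.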
Our clone-and-clamp strategy first duplicates the gating vector, then clamps the copy, thereby preserving the original tensor’s autograd history. This ensures that (1) the original logits $\theta_e$ remain intact for backpropagation, enabling stable gradient flow, and (2) only the cloned, clamped tensor is used in message passing. Our ablation experiments (Section 5.3) clearly show that omitting cloning leads to sharp gradient spikes and oscillatory training, while not skipping the KL for pinned edges results in contradictory updates that inflate the loss and harm AUROC. These findings confirm that simply masking in-place degrades gradient quality, whereas our clone-and-clamp technique maintains stable and consistent gradient propagation. **4. Limitations of the Learnable Gate** Our soft-gating mechanism enables the use of roughly estimated prior knowledge, such as overall sparsity or node degree constraints, that is often stable and transferable within our domain. In many application areas (e.g., transportation networks, gene regulatory networks), the underlying graph structures exhibit similar patterns. Thus, the partial knowledge (e.g., a typical density or expected node degree) is largely sharable across datasets. Moreover, our design allows the gating penalties to be adjusted; for instance, one can fine-tune the model through a pretraining and fine-tuning process if needed. As detailed in our response to Reviewer vBqs, SGSI is robust against moderate errors or deviations in prior knowledge. Although the penalties (e.g., the sum of gating values for sparsity or node-level degree sums) might appear straightforward, their true strength lies in their seamless integration with our variational framework: 1. They are **soft**: the model can override them when data strongly suggests an alternative structure. 2. They are **partial**: penalties can be applied selectively, only to nodes or edges where reliable prior knowledge exists. 3. 
They **co-exist** with our skip-KL mechanism for pinned edges, ensuring that enforcing known edges does not conflict with the overall latent representation. This flexibility not only facilitates the transfer of prior knowledge across related domains but also makes SGSI adaptable to different datasets within the same field. We note that approaches like META-NRI have similarly emphasized leveraging common structural priors across domains, further supporting our design choices. In summary, our soft gating penalties act as additional, adjustable levers that exploit stable, transferable domain knowledge, ensuring that SGSI remains effective whether prior knowledge is abundant or only approximately known. We believe this design is both practical and powerful, and we are confident that these clarifications strengthen the final manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for the response and additional results. Regarding the evaluation on downstream tasks, it is nice to see a small improvement when prior knowledge is used (even if just marginal), and I encourage the authors to add these results in the final version. Regarding the technical contribution, while I agree with the authors' discussion of autograd frameworks, I still believe that the clone-and-clamp is just an implementation detail that allows us to correctly track gradients in current frameworks, and thus not a contribution in itself. Overall, I still believe the technical contribution is somewhat limited but, with the additional experiments showing benefits of prior knowledge on the downstream tasks, I am more positive towards the paper. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your thoughtful consideration and for acknowledging our additional experiments demonstrating improvements in downstream tasks. We will indeed integrate these results into our revised manuscript to further clarify the practical significance of incorporating partial prior knowledge.
Regarding the clone-and-clamp mechanism, we understand your perspective that this aspect primarily addresses gradient-tracking issues inherent to current autograd frameworks. While it might appear as an implementation detail, we believe its thoughtful integration is essential for practically and robustly incorporating domain constraints, differentiating our approach from existing methods. Nonetheless, we will clearly position it as a practical enhancement in the final version to prevent overstating this aspect. Given your positive recognition of our experimental contributions and the clearer perspective provided through the rebuttal, we respectfully ask if you would consider increasing your recommendation score, as your recognition would significantly support our paper’s acceptance. Thank you again for your constructive feedback, which has greatly helped improve our work.
Summary: This paper proposes Soft-Gated Structural Inference (SGSI), a framework for inferring latent relational structures that can incorporate additional prior knowledge. Theoretical analysis and experiments verify the effectiveness of the proposed method. Claims And Evidence: Yes Methods And Evaluation Criteria: The proposed methods make sense. The evaluated datasets include simulated/synthetic datasets and benchmark datasets. However, there seems to be no clear scenario where the prior knowledge is crucial, which makes the contributions incremental. Theoretical Claims: There are no formal theoretical claims or proofs. Experimental Designs Or Analyses: The experimental designs and analyses are reasonable. Supplementary Material: Yes, I read all parts. Relation To Broader Scientific Literature: It may influence fields like social network analysis or medical diagnostics where some prior knowledge is provided. Essential References Not Discussed: No Other Strengths And Weaknesses: - There are multiple hyperparameters in the method, as shown in Sec. 4.3. It is unclear how these are selected in the experiments, and they seem sensitive to concrete scenarios. - The theoretical analysis is designed to show "how deterministic and stochastic edges optimize the compression-prediction trade-off". However, the connection to VIB does not present any meaningful conclusion and seems far-fetched. Other Comments Or Suggestions: - "rediscover" - "bits" - Eq. 15: "." should be "," Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your detailed review and the constructive feedback regarding SGSI. We address your concerns below (tables can be found at https://anonymous.4open.science/r/SGSI-Rebuttal-1614/Tables.pdf ): **1. Importance of Prior Knowledge** Our experiments across multiple benchmarks, including NetSim, Spring Simulations, and StructInfer, demonstrate that even a small fraction of reliable prior knowledge (e.g., 20% known-present or known-absent edges) can improve latent structure recovery by up to 9% AUROC on LL and several VN\_NS datasets. We acknowledge that when available domain knowledge is sparse or less reliable, the improvements are naturally smaller. However, in many real-world scenarios (e.g., transportation networks, medical diagnostics), partial prior knowledge is both available and critical for ensuring interpretability and reliability. Importantly, SGSI is designed to gracefully revert to a standard VAE-based method when no prior is available, so its performance scales with the quality of the available knowledge. **2. Contribution of SGSI** We respectfully disagree that our contributions are merely incremental. Our method introduces a novel soft gating mechanism that: - **Clones and clamps** gating parameters to integrate known edges without causing in-place gradient conflicts. - **Skips KL costs** for fully known edges, effectively reallocating “bits” to uncertain connections. - Integrates domain-specific constraints (global sparsity and node degrees) via soft penalties. These design choices overcome critical challenges in merging data-driven latent inference with external knowledge, a capability absent in prior work. For instance, in gene regulatory network (GRN) inference, extensive experimental efforts identify a subset of true regulatory interactions.
We evaluated SGSI on two GRN datasets, ventral spinal cord (VSC) development and gonadal sex determination (GSD) (Pratapa et al., 2020), using varying proportions of known-present edges. Table 5 in the attached link summarizes the results. The results show that incorporating prior knowledge not only improves AUROC but also helps the model produce more true positive edges, capabilities that are missing in prior approaches. We further foresee SGSI’s potential in fields such as physics, chemistry, and finance, although resource limitations preclude experiments on those domains at present. **3. Hyperparameter Sensitivity** SGSI introduces additional hyperparameters (e.g., $\beta, \lambda_{\mathrm{sparsity}}$, and $\lambda_{\mathrm{deg}}$). In experiments, we selected these values using preliminary grid searches and cross-validation on a validation set. Our recommended ranges ($\beta=1.0$ and $\lambda_{\mathrm{sparsity}}, \lambda_{\mathrm{deg}} \in [10^{-4},10^{-2}]$) are consistent with common practices in VAE-based models. We plan to include additional guidance in the Appendix: - For very sparse graphs (e.g., local interactions in physical systems), higher $\lambda_{\mathrm{sparsity}}$ values are preferable. - When node-degree constraints are precise (e.g., “exactly 2 neighbors”), we set a higher $\lambda_{\mathrm{deg}}$. - If the prior knowledge is approximate, lower penalty weights prevent overshooting data evidence. These ranges avoid abrupt changes in adjacency, ensuring that SGSI functions as a softly guided VAE rather than a rigidly constrained model. **4. Theoretical Analysis and VIB Connection** Our VIB-inspired design, where known edges are excluded from the KL term, frees the model to devote its “bits” to uncertain edges, leading to notably better performance. Our ablation studies show that not skipping the KL for pinned edges degrades performance, confirming that when the model wastes capacity on fully known edges, it achieves lower accuracy.
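To make these pieces concrete, here is a hedged sketch of how the soft penalties and the skip-KL masking could enter such an objective; the notation ($\beta$, $\lambda$ weights) follows the rebuttal, but all variable names, values, and the uniform edge prior are our own illustrative assumptions, not the released implementation:

```python
import torch

# Inferred edge probabilities for a small graph; `known_mask` marks pinned
# edges whose KL cost is skipped. All names and values are illustrative.
N = 5
p = torch.rand(N, N, requires_grad=True)  # edge probabilities in [0, 1)
known_mask = torch.zeros(N, N)            # 1 where the edge is fully known
known_mask[0, 1] = 1.0

beta, lam_sparsity, lam_deg = 1.0, 1e-3, 1e-3
target_degree = 2.0
prior = 0.5   # uninformative Bernoulli edge prior (our assumption)
eps = 1e-8

# Bernoulli KL to the prior, summed only over *uncertain* edges: pinned
# edges are masked out, so no "bits" are spent on known connections.
kl = p * torch.log(p / prior + eps) + (1 - p) * torch.log((1 - p) / (1 - prior) + eps)
kl_term = beta * (kl * (1 - known_mask)).sum()

# Global sparsity: penalize the total gate mass.
sparsity_term = lam_sparsity * p.sum()

# Node-degree constraint: penalize deviation of each node's (out-)degree
# from the expected degree.
deg_term = lam_deg * ((p.sum(dim=1) - target_degree) ** 2).sum()

# Full objective would also include the reconstruction loss.
loss = kl_term + sparsity_term + deg_term
loss.backward()
```

Because each term is a soft penalty rather than a hard projection, raising or lowering the $\lambda$ weights trades off how strongly the data can override the prior, matching the guidance above.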
This aligns with the broader rationale behind VAE-based structural inference (e.g., ACD, iSIDG, RCSI), in which the Variational Information Bottleneck concept helps explain why prioritizing uncertain edges yields more effective adjacency discovery. While we do not claim a formal VIB theorem, we include this perspective because it clarifies the practical benefit of our skip-KL approach and highlights its theoretical consistency with known VAE methodologies. **5. Typos** We appreciate your careful reading. All typographical errors have been corrected and will appear in the camera-ready revision. **6. Datasets** In addition to the synthetic and benchmark datasets discussed in the main text, we evaluated SGSI on the PEMS (California Caltrans Performance Measurement System) dataset (see Appendix E.2). The PEMS evaluation, conducted on real-world sensor data, further demonstrates SGSI’s practical applicability and robustness in realistic settings. We believe that SGSI’s novel soft gating mechanism significantly enhances structural inference by effectively leveraging prior knowledge while maintaining computational efficiency and scalability. Thank you for your valuable feedback. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks a lot for the detailed response. My concerns on hyperparameter selection and theoretical analysis of VIB are addressed. I suggest revising the paper accordingly. Regarding the importance of prior knowledge, I still have a question. I fully agree that reliable prior knowledge should be helpful for our predictions, and the key problem here is whether we can obtain reliable prior knowledge in practical scenarios. Even though it is mentioned that > However, in many real-world scenarios (e.g., transportation networks, medical diagnostics), partial prior knowledge is both available and critical for ensuring interpretability and reliability, The main experimental results focus on simulated and synthetic datasets. 
I checked the additional experiments in Sec. E.2, and the prior knowledge of the additional experiments also comes from handcrafted present edges and absent edges, which is also not real prior knowledge. I wonder whether it is possible to provide experiments on the mentioned "real-world scenarios" where partial prior knowledge is both available and critical. I look forward to the further response. --- Reply to Comment 1.1.1: Comment: We thank you for raising the crucial point regarding the availability and importance of prior knowledge in real-world scenarios. To address this, we’ve conducted additional experiments on real-world single-cell RNA-seq datasets: (1) hESC (human embryonic stem cells; Chu et al., Genome Biol. 2016) and (2) mDC (mouse dendritic cells; Shalek et al., Nature 2014). These biological systems represent scenarios where reliable prior knowledge, such as known interactions or absent regulatory connections, often exists due to extensive experimental validations in literature. Below, we summarize the AUROC (%) of SGSI without prior knowledge (SGSI (Raw)), along with SGSI leveraging partial prior knowledge (10% or 15% known-present (K.P.) and known-absent (K.A.) edges): | | hESC(AUROC %) | mDC(AUROC %) | | --------------- | ------------- | ------------ | | SGSI (Raw) | 50.18 | 52.63 | | SGSI + 10% K.P. | 51.35 | 53.91 | | SGSI + 15% K.P. | 51.70 | 54.14 | | SGSI + 10% K.A. | 51.41 | 53.96 | | SGSI + 15% K.A. | 52.26 | 54.65 | From these results, even modest incorporation (10-15%) of experimentally validated knowledge consistently improves structural inference, demonstrating that SGSI can practically leverage partial prior knowledge in real biological settings. While absolute improvements may appear modest, such incremental gains are significant in real-world domains like biology, where even slight improvements in inferred networks can lead to more meaningful biological interpretations and robust downstream analyses. 
Additionally, we recognize the immense potential of integrating SGSI with ongoing biological experiments. Collaborations with wet labs to iteratively validate inferred edges could significantly enhance network inference accuracy and biological insight, although such a process would extend beyond the timeframe of this rebuttal. We envision pursuing this integrative approach in future research. Thank you again for your valuable feedback. We will incorporate these experiments into our revised manuscript to highlight the practical applicability and future potential of SGSI in real-world scenarios.
Summary: The paper "Guided Structural Inference: Leveraging Priors with Soft Gating Mechanisms" introduces Soft-Gated Structural Inference (SGSI), a variational autoencoder (VAE)-based method for inferring latent relational structures while integrating domain constraints. Claims And Evidence: Yes Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the problem setting. The SGSI model is rigorously compared to multiple baselines, including NRI, MPM, ACD, iSIDG, RCSI, ALaSI, and SICSM. The evaluation uses standard metrics such as AUROC to measure structural inference quality. The paper also investigates different amounts of prior knowledge and studies the effect of global sparsity and node-degree constraints. The benchmarks selected, including NetSim, Spring Simulations, and StructInfer datasets, are relevant to relational inference tasks. Theoretical Claims: The paper presents a theoretical analysis of SGSI’s soft gating approach and its connection to the information bottleneck principle. The KL term is adapted to exclude known edges, and the latent adjacency matrix is factorized into deterministic and uncertain components. The derivations appear correct and align well with established VAE principles. The simplification of KL divergence, ensuring no contradictory prior signals, is a valid and well-motivated approach. Experimental Designs Or Analyses: The experimental design is sound, with multiple datasets and extensive ablation studies. The choice of datasets captures a broad range of real-world applications. The comparison with state-of-the-art methods is thorough, including controlled experiments that vary the proportion of known-present and known-absent edges. The loss curve analysis in ablation studies further supports the necessity of key SGSI components. The paper also accounts for hyperparameter tuning and provides practical considerations for sparsity and node-degree constraints. 
Supplementary Material: No Relation To Broader Scientific Literature: I am not familiar with this topic. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: While SGSI performs well, some datasets show only marginal improvements over existing baselines. The paper could further explore the applicability of SGSI to large-scale real-world graphs. Discussion of computational efficiency is limited; reporting training time across datasets would be useful. Other Comments Or Suggestions: See weaknesses. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your constructive feedback regarding our paper SGSI. Below, we address your concerns point by point: **1. Marginal Gains on Certain Datasets** While SGSI shows up to 9% AUROC improvement on some datasets, other cases yield more modest gains (1–2%), partly because reliable partial knowledge is inherently difficult to obtain or of limited availability in those tasks; moreover, when no prior knowledge is available, SGSI effectively reverts to a standard VAE-based approach (similar to NRI), which explains the smaller gains in those scenarios. More specifically, we believe this variation reflects two factors: 1. **Domain Impact:** In domains where partial prior knowledge is available, such as transportation networks or biomedical systems, a substantial gain (up to 9% AUROC) can lead to significantly improved decision-making and reliability. For example, in critical applications like traffic management or clinical diagnostics, a 9% improvement in edge prediction accuracy can meaningfully enhance system interpretability and safety. 2. **Baseline Ceiling Effects and Additional Benefits:** In some tasks like Spring Simulations, the baselines already achieve high AUROC scores, leaving little room for large numerical gains. Even when improvements are modest (1–2%), these small gains are significant because they come with additional benefits: - **Enhanced Interpretability:** The clone-and-clamp mechanism produces a more interpretable latent graph by clearly separating known from uncertain edges.
- **Stable Gradient Flow:** Our design avoids in-place modifications and contradictory KL signals, leading to more stable and robust training. - **Adaptive Behavior:** SGSI gracefully reverts to a standard VAE-based approach (e.g., NRI) when no prior knowledge is available, ensuring that even minimal domain guidance is exploited without harming performance. Thus, even modest gains are valuable in high-performing scenarios and are complemented by improved interpretability and training stability, factors that are crucial for practical applications. We believe these advantages underscore the practical significance of SGSI beyond mere numerical improvements. **2. Scalability to Larger Real-World Graphs** We agree that demonstrating scalability is important. However, obtaining a reliable dynamical system dataset with thousands of nodes and a trusted underlying structure remains challenging in our field. To address this, we conducted simulation-based experiments on a synthetic dataset with 5k nodes using subgraph sampling (akin to GraphSAGE). In this experiment, SGSI was trained with mini-batching, computing gating parameters only on sampled local neighborhoods. Our preliminary results, summarized in the table below, show that SGSI remains effective at scale, converging in under 60 hours while achieving meaningful AUROC improvements ($\Delta$AUROC, in %): | $\Delta$AUROC (%) | K.P. 10% | K.P. 20% | K.P. 30% | K.A. 10% | K.A. 20% | K.A. 30% | w. Spar. | w. In-deg. | w. Out-deg. | w. Both-deg. | | ---- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | ---------- | ----------- | ------------ | | SGSI | 0.56 | 2.40 | 4.93 | 0.77 | 3.08 | 5.73 | 7.46 | 6.90 | 7.13 | 7.68 | These findings suggest that the core mechanisms of SGSI (soft gating, cloning, and KL skipping) remain robust even for large graphs. **3.
Computational Efficiency and Training Time** In response to your request for explicit runtime data, we provide the following training-time comparison (in hours) across our main datasets (Springs, NetSims, LI, LL, and the 100-node VN datasets): | Training time (h) | Springs | NetSims | LI | LL | VN\_SP\_100 | VN\_NS\_100 | | ----- | ------- | ------- | ---- | ---- | ----------- | ----------- | | NRI | 20.1 | 16.0 | 14.3 | 18.2 | 49.0 | 47.2 | | iSIDG | 42.2 | 36.9 | 48.1 | 50.6 | 100.6 | 97.8 | | SGSI | 20.4 | 15.6 | 14.7 | 18.1 | 49.2 | 47.1 | SGSI consistently matches NRI’s training time (e.g., 20.4h vs. 20.1h on Spring Simulations and 47.1h vs. 47.2h on VN_NS_100), while methods such as iSIDG require ~1.5–2× longer. Recall also that SGSI achieves 79.7% AUROC on Springs, which is higher than NRI. Overall, these results confirm that SGSI’s partial-knowledge gating imposes minimal additional cost while providing meaningful accuracy gains in structural inference. Thank you again for your helpful comments, and we hope these newly added experiments bolster confidence in SGSI’s robustness, scalability, and efficiency.
Summary: The paper introduces SGSI, a framework for latent graph structure learning that integrates partial prior knowledge into a VAE. It employs a soft gating mechanism with learnable parameters to smoothly control edge activation and uses a cloning and clamping strategy to fix known-present and known-absent edges without disrupting gradient flow. By enforcing adaptive regularization for global sparsity and node-degree constraints, SGSI effectively separates known from uncertain edges, optimizing the trade-off between compression and prediction via an information bottleneck perspective. Empirical results on diverse datasets—including physical simulations, biological networks, and multi-agent systems—demonstrate up to a 9% AUROC improvement over existing methods. Claims And Evidence: The paper’s main claims are generally well supported by both theoretical insights and empirical results. Methods And Evaluation Criteria: I think they make sense. Theoretical Claims: There are theorem statements in the paper. Experimental Designs Or Analyses: The experimental design is generally sound. The authors evaluate SGSI on several benchmark datasets—including Spring Simulations, NetSim, synthetic biological networks, and vascular networks—and compare its performance against multiple baselines using AUROC as a metric. They vary the fraction of known-present and known-absent edges, which effectively demonstrates how partial prior knowledge improves inference accuracy. In addition, the paper includes detailed ablation studies (e.g., omitting the KL regularization on known edges or skipping the cloning step) to isolate the impact of each component on training stability and performance. Supplementary Material: I went through the appendix. 
Relation To Broader Scientific Literature: It builds on VAE-based structural inference methods (e.g., NRI by Kipf et al., 2018; Alet et al., 2019; Chen et al., 2021) that aim to learn latent graph structures, but goes further by incorporating partial prior knowledge—something earlier approaches tend to overlook. The use of soft gating to control edge activation relates to established techniques in neural network regularization and gating mechanisms, ensuring smooth gradient flow compared to naive overwriting methods used in some prior works. Essential References Not Discussed: They include many references. It seems quite comprehensive in general. Other Strengths And Weaknesses: Strengths include its originality in integrating soft gating, cloning/clamping, and adaptive regularization to merge domain knowledge with latent structure learning. The paper is also theoretically grounded through an Information Bottleneck perspective and validated by strong empirical results. Weaknesses involve limited analysis on scalability to large or heterogeneous graphs, modest hyperparameter sensitivity studies, and reliance on the availability and accuracy of prior knowledge. Other Comments Or Suggestions: N/A Questions For Authors: 1. How robust is SGSI when the prior knowledge is noisy or partially inaccurate? How this will affect the approach proposed? 2. Can you provide additional experiments or analysis on the scalability of SGSI to larger and more heterogeneous graphs? A positive response with empirical evidence or theoretical insights would strengthen the scalability claim and improve my evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your positive assessment and valuable comments. In response, we clarify our approach as follows (tables can be found at https://anonymous.4open.science/r/SGSI-Rebuttal-1614/Tables.pdf ): **1. Hyperparameters** SGSI introduces hyperparameters, most notably the KL weight $\beta$, the sparsity penalty $\lambda_{\mathrm{sparsity}}$, and the node-degree penalty $\lambda_{\mathrm{deg}}$. These parameters are essential for balancing the trade-off between inference and the incorporation of prior knowledge. We conducted Bayesian optimization with systematic grid searches on a validation subset to identify a “reasonable zone” that yields both high AUROC and stable convergence. Our initial search considered: - $\beta \in \\{0.1, 0.5, 1.0, 1.3, 1.4, 2.0, \dots\\}$, - $\lambda_{\mathrm{sparsity}} \in \\{10^{-4}, 10^{-3}, 10^{-2}\\}$, - $\lambda_{\mathrm{deg}} \in \\{10^{-4}, 10^{-3}, 10^{-2}\\}$. Our domain heuristics further guide these choices: - When the underlying graph is known to be very sparse (e.g., physical systems with local interactions), a larger $\lambda_{\mathrm{sparsity}}$ is preferable. - When node-degree constraints are well-established (e.g., each node has exactly 2 neighbors), a higher $\lambda_{\mathrm{deg}}$ ensures the model respects these constraints. - Conversely, if the prior knowledge is only approximate, lower penalty weights prevent the model from being over-constrained. We showcase the results of a sensitivity study on the VN_SP_30 dataset in Table 1 in the attached link. The results demonstrate that the optimal performance is achieved with $\beta=1.3, \lambda_{\mathrm{sparsity}}=0.004$, and $\lambda_{\mathrm{deg}}=0.01$, yielding an AUROC of 92.76%. Variations from these values result in a modest decrease in performance, which validates that while SGSI is indeed sensitive to these hyperparameters, it operates robustly within a reasonable range. **2.
Robustness to Inaccurate Prior Knowledge**

SGSI leverages soft penalties and cloned gating to integrate partial knowledge while preserving flexibility. Because these constraints are applied softly, via mild penalties and by clamping only a cloned copy of the gating vector, the model can deviate from incorrect priors when the data strongly contradicts them. In SGSI, the soft gating mask is applied after the Node-to-Edge operation (but not after Edge-to-Node), which helps preserve residual connectivity and prevents over-reliance on potentially flawed edges. To further validate this, we conducted “noisy prior” experiments on VN_SP_100 and VN_NS_100, where we randomly flipped 20% or 50% of the known-present/absent edges (with the overall prior knowledge set to 30%) and introduced 10% or 20% errors in global sparsity and degree constraints. Table 2 in the attached link summarizes our preliminary results. These results indicate that even with moderate noise, SGSI remains significantly more accurate than a no-knowledge baseline, by a margin of at least 1-2%. Although performance gains naturally diminish as noise increases, the model robustly leverages available prior knowledge without catastrophic failure.

**3. Scalability to Larger, Heterogeneous Graphs**

We appreciate your interest in extending SGSI to larger or heterogeneous graphs. SGSI is designed to scale via mini-batching and subgraph sampling, approaches similar to GraphSAGE, which allow SGSI’s gating parameters to be computed or stored per subgraph rather than across the full $N \times N$ edge space. For extremely large graphs, we can either (i) restrict gating to local neighborhoods, or (ii) compute gating logits on the fly from node embeddings, thereby avoiding an $\mathcal{O}(N^2)$ parameter explosion. In this paper, we demonstrate scalability on the PEMS datasets (approximately 300 nodes) by leveraging multi-GPU mini-batching (in Appendix E.2).
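To make the mechanisms above concrete, here is a minimal numpy sketch of clone-and-clamp gating with logits computed on the fly from node embeddings over a sampled edge list; the linear scorer `w`, the flat edge layout, and all names are illustrative assumptions on our part, not the actual SGSI implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_logits(node_emb, src, dst, w):
    # Compute gating logits on the fly for a sampled edge list, so no
    # O(N^2) table of gating parameters is stored: logit_ij = w . [h_i ; h_j]
    pair = np.concatenate([node_emb[src], node_emb[dst]], axis=1)  # (E, 2d)
    return pair @ w                                                # (E,)

def clone_and_clamp(logits, known_present, known_absent):
    # Clone-and-clamp: the soft gates are cloned, and only the clone is
    # clamped for edges covered by prior knowledge; the unclamped entries
    # keep their soft values.
    gates = sigmoid(logits)
    clamped = gates.copy()
    clamped[known_present] = 1.0   # edges known to exist
    clamped[known_absent] = 0.0    # edges known to be absent
    return clamped

# toy subgraph: 5 nodes with 8-dim embeddings, 3 sampled candidate edges
h = rng.standard_normal((5, 8))
src = np.array([0, 1, 2])
dst = np.array([1, 2, 3])
w = rng.standard_normal(16)

logits = gate_logits(h, src, dst, w)
g = clone_and_clamp(logits,
                    known_present=np.array([True, False, False]),
                    known_absent=np.array([False, False, True]))
```

In an autodiff framework, clamping a cloned copy rather than overwriting the logits in place keeps the unclamped entries differentiable, which matches the stated motivation for the clone-and-clamp design.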
To further validate SGSI on larger graphs, we generated a toy dataset with 5,000 nodes using Springs Simulations. Table 3 in the attached link shows the $\Delta$AUROC values under various levels of prior knowledge. These results indicate that SGSI, when using mini-batching, remains effective even at the 5k-node scale with prior knowledge, and with meaningful improvements in $\Delta$AUROC from 0.56 to 7.68. SGSI’s flexible architecture inherently supports heterogeneous graphs. By adopting per-relation gating parameters $\theta_{e,r}$ and applying the clone-and-clamp strategy to each edge type, SGSI can differentiate among various relations and node types while maintaining stable gradient flow and effective KL skipping. Although our current domain primarily involves homogeneous datasets, we recognize SGSI’s potential in applications such as multi-modal social networks, biomedical systems, and multi-layer transportation networks, and plan to explore these in future work. Thank you again for your positive feedback. We look forward to refining our manuscript with these additional experiments. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their insightful rebuttal. Although I am not very familiar with the topic, I can see that the authors have made a sincere effort to address all of my concerns, and their responses are reasonable. Given that my initial score was already a 4, which is quite high, I would prefer to keep my score unchanged. Thanks for your response! --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your thoughtful evaluation and positive feedback on our rebuttal. Your comments were extremely helpful in improving our manuscript. We completely understand and respect your decision to keep your score unchanged given your initial strong support. Thanks again for your valuable insights and the encouraging review!
Cavia: Camera-controllable Multi-view Video Diffusion with View-Integrated Attention
Accept (poster)
Summary: The authors propose a novel framework for image-to-multi-view video generation with controllable cameras. To achieve this, they design a flexible multi-frame/multi-view attention module, allowing for joint training on static scene video, monocular video, and multi-view dynamic scene video. Experiments on monocular and multi-view video generation validate the 3D consistency and temporal consistency of the proposed method. Claims And Evidence: Not all. 1. The authors claim they focus on multi-view video generation; however, most of the multi-view experiments are conducted on 2 views with small camera angle change, which is confusing. 4-view experiments are provided in supplementary material but only qualitatively, where camera angle change is still not evident. 2. The joint training strategy for static video is confusing. The authors expand them with F-1 frames, but their architecture design allows for only one-frame training. I don't know whether the authors mean the V cameras move separately during the F-1 frames, or whether they simply repeat the first frame F-1 times. 3. Lack of comparison with state-of-the-art methods for camera-controllable video generation and multi-view video generation. Methods And Evaluation Criteria: Not all, please see the experiment part. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. monocular video generation: For 3D consistency, it is suggested to compare with ViewCrafter, following its evaluation pipeline. 2. multi-view video generation: 2-view video generation with limited camera change is not convincing. Does the training data also contain such small camera change? Besides, the authors are encouraged to increase the number of views and conduct more experiments. Moreover, the compared methods, MotionCtrl and CameraCtrl, are not designed for multi-view video generation, so the comparison is unfair. It is suggested to compare with SyncCamMaster. 3. 
It is suggested to move the comparison with CVD to the main text and conduct quantitative experiments under its evaluation pipeline. Supplementary Material: Yes, I read the appendix and supplementary html. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: I'm willing to increase my score if the authors address my concerns in response. Questions For Authors: The training strategy of static scene videos could be explained more Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer T9u5 for their detailed comments and constructive suggestions. However, we respectfully disagree with the reviewer's concerns regarding our contribution. In response, we have added additional comparisons against concurrent works ViewCrafter and CVD, which are available on our anonymous webpage: https://cavia2025.github.io/. These new results clearly demonstrate that our proposed method, Cavia, significantly outperforms both ViewCrafter (Fig. A) and CVD (Fig. B). We address the reviewer's questions in detail below. **Q1. Multi-view experiments are mainly conducted on 2 views with small camera angle change. 4-view experiments’ camera angle change is still not evident.** A1. We emphasize that camera control, the factor the reviewer prioritizes, is precisely where Cavia excels. Cavia is the first to enable multi-video generation with precise camera motion control while simultaneously preserving object motion. Concurrent works cited by the reviewer fail to achieve comparable quality. Specifically, ViewCrafter only generates static scenes, SynCamMaster is restricted to fixed-viewpoint videos, and CVD suffers from poor pixel quality with severe morphing artifacts. **Q2. 2-view video generation with limited camera change is not convincing. Does the training data also contain such small camera change? MotionCtrl and CameraCtrl are not designed for multi-view video generation, so it is unfair. It is suggested to compare with SynCamMaster.** A2. As we illustrate above, SynCamMaster only supports video generation with fixed viewpoints, allowing no camera movement. In contrast, Cavia enables precise camera control while preserving object motion, a challenging and underexplored capability. While MotionCtrl and CameraCtrl are monocular methods, they are the closest relevant baselines as they also target precise camera control. 
Additionally, Cavia builds upon SVD, which supports video generation up to 14 frames, leading to naturally limited camera ranges. Extending this to larger ranges with a more powerful long-context base video generation model is an exciting direction for future work. We provide a detailed comparison with SynCamMaster in our response to reviewer vi5f’s Q2 (Tab. A). However, we note that SynCamMaster's official GitHub repository contains only boilerplate code, and the absence of essential components such as the model checkpoint and architecture makes it impossible to compare with SynCamMaster fairly. **Q3. The joint training strategy for static video is confusing. I don't know whether the authors mean the V cameras move separately during the F-1 frames, or whether they simply repeat the first frame F-1 times.** A3. We do not perform single-frame training. Our "static" videos refer to sequences of static scenes without dynamic objects. During training, our Cavia framework consistently accepts $F$ frames, each with a distinct camera matrix. **Q4. For 3D consistency of monocular video generation, it is suggested to compare with ViewCrafter.** A4. We have included comparisons against ViewCrafter in Fig. A on https://cavia2025.github.io/#FigA . ViewCrafter primarily generates videos of static 3D scenes, likely due to its reliance on 3D point clouds. We evaluate ViewCrafter using test images and trajectories from the RealEstate10K dataset and observe that it often produces frames with noticeable color and lighting artifacts, as well as geometric distortions. In contrast, Cavia achieves visually pleasing results with accurate geometry. Furthermore, ViewCrafter requires approximately 4 minutes to generate a video sequence, whereas Cavia produces a set of 2-view videos in just 14 seconds. **Q5. It is suggested to move the comparison with CVD to the main text and conduct quantitative experiments under its evaluation pipeline.** A5. Thank you for the suggestion. 
We initially placed CVD’s results in the appendix due to space constraints but have now moved them to the main text. We have also provided additional comparisons using CVD’s recently released official codebase. The results can be found in Fig. B on https://cavia2025.github.io/#FigB . Our comparisons show that CVD suffers from severe morphing artifacts and unnatural object motion. It also fails to follow text prompt instructions and overlooks important details. In contrast, Cavia produces outputs with greater geometric consistency and more natural object motion. For quantitative evaluation, we not only adopt SuperGlue, following CameraCtrl and CVD, but also incorporate COLMAP, a widely used tool in the 3D reconstruction community for estimating camera poses. COLMAP shares the same purpose as SuperGlue by measuring camera pose accuracy but is more effective in identifying morphing artifacts, as it focuses on global geometry rather than just local feature matching. Our extensive quantitative results in Tab. 1 and 2 demonstrate that Cavia significantly outperforms existing camera-control video generation methods. --- Rebuttal Comment 1.1: Comment: Thanks for the effort of the authors. My main concern still lies in experiments. Could the authors provide quantitative comparisons with ViewCrafter/SyncCamMaster/CVD under their evaluation settings? Now only qualitative comparisons are provided. --- Reply to Comment 1.1.1: Comment: Thank you for your comments. We’re glad that our rebuttal has resolved your other concerns. As suggested, we conducted additional quantitative comparisons with CVD using their evaluation protocol. We reached out to the authors of CVD, who kindly shared the implementation details of their evaluation setup. The table below presents results on 100 samples, using the same metrics as in the CVD paper. We use SuperGlue to assess the pose accuracy of each generated frame relative to the first frame. 
For all metrics, higher values indicate better performance. As shown, Cavia outperforms CVD by a significant margin. These results are consistent with our qualitative comparisons (Fig. 9 and [Fig. B](https://cavia2025.github.io/#FigB) ) and 3D reconstruction analysis (Fig. 10).

| Method | AUC-Rot@5 $\uparrow$ | AUC-Rot@10 $\uparrow$ | AUC-Rot@20 $\uparrow$ | AUC-Trans@5 $\uparrow$ | AUC-Trans@10 $\uparrow$ | AUC-Trans@20 $\uparrow$ | Prec $\uparrow$ | MScore $\uparrow$ |
|--------|----------------------|------------------------|------------------------|-------------------------|--------------------------|--------------------------|------------------|-------------------|
| **Cavia** | 6.36 | **17.11** | **37.56** | **4.13** | **8.42** | **18.18** | **10.19** | **6.70** |
| CVD | **7.17** | 16.76 | 33.31 | 2.59 | 4.93 | 9.86 | 5.68 | 3.29 |
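For readers unfamiliar with these metrics, the AUC@t values above are typically computed as the area under the recall-vs-error curve up to threshold t, normalized by t. Below is a sketch in the style of the SuperGlue evaluation code; the exact protocol used in the rebuttal is an assumption on our part:

```python
import numpy as np

def pose_auc(errors, thresholds):
    """AUC of the recall-vs-error curve, normalized per threshold,
    as popularized by the SuperGlue evaluation code."""
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.concatenate(([0.0], errors))
    recall = np.concatenate(([0.0], recall))
    aucs = []
    for t in thresholds:
        last = np.searchsorted(errors, t)
        e = np.concatenate((errors[:last], [t]))
        r = np.concatenate((recall[:last], [recall[last - 1]]))
        # trapezoidal area under the (error, recall) curve, normalized by t
        area = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(e))
        aucs.append(area / t)
    return aucs

# toy example: per-frame rotation errors (degrees) w.r.t. the first frame
errs = [1.0, 2.0, 3.0, 4.0]
auc5, auc10 = pose_auc(errs, thresholds=[5, 10])  # -> 0.6, 0.8
```

The normalization by the threshold keeps every AUC@t in [0, 1], so values at different thresholds (5, 10, 20 degrees) are directly comparable.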
Summary: This paper introduced a multi-view video diffusion model enhanced by view-integrated attention, called Cavia. Specifically, Cavia used cross-view attention and cross-frame attention to ensure multi-view and temporal consistency, respectively. This model design also enabled Cavia to train jointly with diverse datasets. This paper also included detailed data curation workflows and ablation studies to support re-implementation and demonstrate effectiveness. Claims And Evidence: Cavia claims that it could generate multi-view videos with precise camera control and object motion, but all results from the paper (especially for multi-view videos) suffer from small camera motion changes. It is still questionable whether Cavia can generate multi-view videos with large viewpoint changes like SynCamMaster. Methods And Evaluation Criteria: Most experiments make sense. Some questions remain for the camera metric. The authors claimed that they normalize the camera pose scales in this paper, which is unlike previous work (CameraCtrl). However, the widely used camera metrics (absolute pose error (APE) and relative pose error (RPE)) should already contain the Umeyama alignment to confirm the normalization of the camera pose. Why not include these metrics? Theoretical Claims: No theoretical claims are in this paper. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. The ablation and other extensive experiment parts. Relation To Broader Scientific Literature: The key contributions are joint training and cross-view/cross-frame attention. However, the novelty of these components is limited, as they have already been proposed by SynCamMaster [ICLR 2025]. Essential References Not Discussed: This paper did not discuss a closely related work, SynCamMaster, published at ICLR 2025. Considering that the publication time of SynCamMaster is very close to the ICML deadline, the authors are under no obligation to compare to it. 
But this paper should also discuss and clarify its differences from SynCamMaster. Other Strengths And Weaknesses: In my opinion, this paper should discuss and clarify its contributions relative to SynCamMaster, which also includes similar view attention, 3D attention, and joint training on multi-view data, multi-view videos, and general videos. Another concern is the limited ability of Cavia to address multi-view video generation with large viewpoint changes. Other Comments Or Suggestions: Some words are repeated in the abstract, for example, "to our best knowledge" and "to the best of our knowledge". Questions For Authors: The questions are mainly about the camera pose metrics, a discussion of SynCamMaster, and the capacity of Cavia to address large viewpoint changes, as mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer vi5f for their valuable effort and for recognizing the strength of our experimental results. However, we respectfully disagree with the concerns regarding "large viewpoint changes like SynCamMaster." We would like to clarify that **the concurrent work SynCamMaster is limited to generating fixed-viewpoint videos**, whereas our proposed method, Cavia, enables precise camera control for each individual frame under a multi-view video generation setting. **Q1. Why not include absolute pose error (APE) and relative pose error (RPE) metrics?** A1. APE and RPE combine translation and rotation errors into a single value. However, following common practices in camera-controllable video generation (e.g., CameraCtrl, Collaborative Video Diffusion), we evaluate translation and rotation errors separately. Our chosen metrics align with the spirit of RPE, as they also measure relative differences between camera matrices but allow for a more detailed assessment. **Q2. Should discuss and clarify the difference between syncammaster [ICLR2025]. It is still questionable whether Cavia can generate multi-view videos with large viewpoint changes like syncammaster.** A2. We appreciate the reviewer's suggestion and will cite and discuss SynCamMaster accordingly. However, SynCamMaster is designed for generating videos with **fixed camera viewpoints** and does not support dynamic viewpoint changes. In contrast, Cavia provides **precise per-frame camera control** in multi-video generation scenarios. The table below (Tab. 
A) provides a detailed comparison:

| Aspect | SynCamMaster | Cavia |
|--------|--------------|-------|
| **Task** | Text-to-video generation | Image-to-video generation |
| **Control** | Generates multi-view videos with fixed cameras; each video is viewpoint-frozen | Supports precise camera control for each frame across multiple videos |
| **Base Model** | Internal text-to-video model (KLing’s team, unpublished) | Uses SVD; achieves comparable object motion to SVD (see Fig. 6) |
| **Method** | Focuses on static camera videos; trains only cross-view synchronization modules | Supports frame-wise camera control; trains both cross-view and cross-frame attention modules to ensure spatial-temporal consistency with dynamic camera control |
| **Data** | Both SynCamMaster and Cavia use data from 4D synthetic assets, 3D static scenes, and monocular videos; SynCamMaster collects static camera videos for monocular training | Employs a curated pipeline to collect high-quality monocular videos with accurate camera pose annotations, facilitating precise per-frame camera control |
| **Joint Training Strategy** | Requires copying monocular video $v$ times and setting the same camera parameters across views | More compute-efficient; avoids data copying; flexible cross-view attention enables training/inferencing on arbitrary numbers of views |
| **Viewpoint Changes** | Fixed viewpoint; cannot handle viewpoint changes | Allows precise viewpoint control for each frame in multi-video generation |

**Q3. Limitation of Cavia to address multi-view video generation with large viewpoint changes.** A3. The current performance of Cavia is constrained by the limited context length of SVD (14 frames). However, Cavia's framework is general and scalable, enabling the generation of longer videos with larger viewpoint variations when paired with a stronger foundation model. It is worth emphasizing that SynCamMaster entirely lacks the ability to handle viewpoint changes. 
In contrast, Cavia is explicitly designed to address this limitation, directly responding to the reviewer's primary concern. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I appreciate that some of my concerns have been addressed. However, I still have reservations regarding Cavia's capacity to handle significant viewpoint changes. Notably, the camera pose movements in all qualitative results presented in the paper appear quite subtle. While the authors attribute this limitation to the constraints of SVD, the generalization and scalability of the proposed methods have not been rigorously validated within this study. Therefore, the per-frame camera control character of Cavia, as mentioned in Table A, may potentially restrict its ability to generalize to larger viewpoint changes. --- Reply to Comment 1.1.1: Comment: Thank you for your comments. We’re thrilled that our rebuttal has resolved your concerns. We’d like to kindly remind you that though the tested camera pose movements may not appear significant yet, they are **complex and compositional**. More importantly, **no other work achieves comparable performance**. In particular, the concurrent work you highlighted, SynCamMaster, is not able to produce any viewpoint change. In contrast, we have systematically evaluated the generalization capabilities of our method both quantitatively (Tab. 1, 2, 3) and qualitatively (Fig. 2, 3) on testing images and camera trajectories that are unseen during training. Regarding your thoughts on the importance of camera control, we completely agree. Precise camera pose control is exactly the focus of our work. Due to the limited sequence length of SVD (14 frames), we prioritized precision over range, resulting in the most accurate camera control framework currently available. We show in Tab. 1, 2, and 3 that our framework consistently outperforms all existing approaches aimed at precise camera control. 
As you noted, “the per-frame camera control character of Cavia may potentially restrict its ability to generalize to larger viewpoint changes.” While this is true, SynCamMaster, even with in-house video models, is restricted to fixed viewpoints with no ability to handle any viewpoint changes. Our proposed modules in Cavia are general and would benefit greatly from stronger foundation models. If we had access to powerful video models like Kling, the performance could be further improved. Please note that when we conducted the Cavia project in **June 2024**, SVD was the only publicly available image-to-video generation model. In summary, Cavia represents the state of the art for the challenging task of per-frame precise camera control. As agreed by other reviewers, no existing method matches its ability to generate multiple videos of the same scene with accurate control over camera motion, while consistently preserving object motion.
Summary: This paper proposed a novel framework for camera-controlled multi-view video generation. Based on SVD, it proposes to use 3D attention in both frame attention and view attention to ensure spatio-temporal consistency. In addition, it curated a mixed dataset from many real and synthetic datasets for model training. The experiments demonstrate the superior performance of the proposed methods. Claims And Evidence: The claims are supported by convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. Theoretical Claims: No theoretical claim. Experimental Designs Or Analyses: I checked the experiments and ablative experiments. I found some baseline details are missing, e.g., 1. When comparing to SVD in Table 1 and 2, how did the authors achieve camera pose controllability in SVD? 2. When comparing to MotionCtrl and CameraCtrl in Table 2, how did the authors implement 2-view video generation? Just run each of them twice separately? Supplementary Material: I reviewed the website provided in the supplementary material. Relation To Broader Scientific Literature: This work contributes a lot to multi-view video generation with camera control by proposing 3D attention for both spatial and temporal attention and mixed datasets for training. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The idea of using 3D attention in both spatial and temporal attention is interesting 2. The curated mixed datasets are beneficial for follow-up works Weakness: 1. As discussed in **Experimental Designs Or Analyses**, it would be better if more baseline details were provided. 2. Using 3D attention in the spatial and temporal module could increase the runtime and memory requirements compared to baselines. A detailed speed analysis and memory requirement report are needed. 3. In the provided results, I found the performance still limited, partially due to the highly challenging task. 
Some visual issues are: a. The motion magnitude in the results is still very small. b. Camera trajectories are simple. Not sure if the model can support 360-degree views. ## update after rebuttal Thanks for the rebuttal and the additional experiments. Overall, the primary concern remains the method’s ability to handle large viewpoint changes in dynamic scenes. While direct comparisons with ViewCrafter and SyncCamMaster could be informative, I understand that these works are concurrent or not officially published, so such comparisons are not required. Taking into account both the strengths and limitations of the paper, I will maintain my original score. Other Comments Or Suggestions: Some concurrent works can be discussed: Sun, W., Chen, S., Liu, F., Chen, Z., Duan, Y., Zhang, J., & Wang, Y. (2024). Dimensionx: Create any 3d and 4d scenes from a single image with controllable video diffusion. *arXiv preprint arXiv:2411.04928*. Wu, R., Gao, R., Poole, B., Trevithick, A., Zheng, C., Barron, J. T., & Holynski, A. (2024). Cat4d: Create anything in 4d with multi-view video diffusion models. *arXiv preprint arXiv:2411.18613*. Bai, J., Xia, M., Wang, X., Yuan, Z., Fu, X., Liu, Z., ... & Zhang, D. (2024). SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints. *arXiv preprint arXiv:2412.07760*. Zhao, Y., Lin, C. C., Lin, K., Yan, Z., Li, L., Yang, Z., ... & Wang, L. (2024). Genxd: Generating any 3d and 4d scenes. arXiv preprint arXiv:2411.02319. Questions For Authors: Will the authors release the model and the curated datasets? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer KtKL for the detailed comments and for recognizing our strong performance compared to existing works. Below are our detailed responses to the questions raised. **Q1. Baseline details are missing: 1) How did SVD achieve camera pose? 2) How are MotionCtrl and CameraCtrl implemented for 2-view video generation?** A1. Thank you for this insightful question. We clarify that the SVD results in our experiments are obtained from the vanilla SVD model without any modification; thus, SVD does not offer camera pose controllability. We include SVD's results solely to showcase the base model's motion generation capability, with additional visualizations provided in Fig. 6. For 2-view comparisons involving MotionCtrl and CameraCtrl, we independently run each view with the corresponding camera poses, as these methods are originally designed for monocular video generation. **Q2. Speed and memory report on the 3D attention is needed.** A2. Indeed, the proposed 3D attention introduces additional computational overhead due to the extended sequence length. However, thanks to the 8$\times$ compression of SVD's latent space, our attention operates on latent features sized $14\times 32\times 32$ for a $256 \times 256$ video. As shown in the table below, the increased cost in speed and memory remains acceptable.

| Cross-View Attention | Cross-Frame Attention | Max Sequence Length in Attention | Inference Time per Step | Training Speed | Training Memory |
|------------|-------------|--------------------|-----------|-----------|------------|
| No | No | 32x32 | 0.16 s | 1.68 it/s | 32.36 GB |
| Yes | No | 2x32x32 | 0.18 s | 1.66 it/s | 32.37 GB |
| No | Yes | 14x32x32 | 0.26 s | 1.62 it/s | 33.85 GB |
| Yes | Yes | 14x32x32 | 0.28 s | 1.52 it/s | 34.14 GB |

**Q3. Performance is limited for this highly challenging task. Some visual issues are: a. The motion magnitude in the results is still very small. b. Camera trajectories are simple. 
Not sure if the model can support 360-degree views.** A3. We wish to emphasize that the factors you prioritize most, such as object motion strength and camera trajectory complexity, are precisely the areas in which Cavia significantly outperforms prior works. - Object motion strength: SVD's results serve as a reference for motion generation capabilities, with additional visualizations in Fig. 6. More importantly, existing camera-controllable video generation methods, such as MotionCtrl and CameraCtrl, are limited to generating videos of static scenes. In contrast, Cavia retains the ability to generate object motion comparable to the base SVD model. - Camera trajectory complexity: Extensive experiments in Fig. 3 and Tab. 1 demonstrate Cavia's superior camera control compared to state-of-the-art methods. However, the current version does not support 360-degree view generation due to the limited context length of SVD (14 frames). We plan to explore longer video generation with more capable base models in future work. **Q4. Some concurrent works can be discussed: DimensionX (arXiv:2411.04928), CAT4D (arXiv:2411.18613), SynCamMaster (arXiv:2412.07760), GenXD (arXiv:2411.02319).** A4. We will include and discuss these concurrent works in our revision. - DimensionX: Utilizes separate S-director and T-director modules to generate time-frozen and viewpoint-frozen videos independently. Cavia, instead, generates multi-view videos simultaneously, ensuring better spatial-temporal consistency. - CAT4D: Converts monocular videos into 4D scenes. In contrast, Cavia generates multi-view videos from a single image input. - SynCamMaster: Generates multi-view videos but only supports fixed cameras, resulting in viewpoint-frozen videos. Cavia enables precise camera control for each frame of multiple generated videos. - GenXD: Employs a masked video diffusion framework for camera-controllable monocular video generation. 
Cavia, however, targets multi-view video generation, which is inherently more challenging. While these works are commendable, none address camera-controllable multi-view video generation. Cavia is the first framework to generate **multiple videos of the same scene** with **precise per-frame camera control**, while maintaining object motion quality comparable to SVD. **Q5. Will the authors release the model and the curated datasets?** A5. We are currently awaiting legal approval for the public release of the model and curated datasets. Meanwhile, we have provided sufficient details to facilitate the reproducibility of our model and dataset. --- Rebuttal Comment 1.1: Comment: I thank the author's rebuttal and additional results. I have read the reviews from other reviewers. While this paper may lack some quantitative comparisons to methods such as ViewCrafter, SyncCamMaster, and CVD, I agree with Reviewer vi5f that ViewCrafter (arXiv) and SyncCamMaster (ICLR 2025) can be considered concurrent works, and thus the authors are not obligated to include direct comparisons. However, to further strengthen the validation of the proposed method, the authors could consider adding quantitative comparisons with CVD (NeurIPS 2024, code released). --- Reply to Comment 1.1.1: Comment: Thank you for supporting our submission and acknowledging that ViewCrafter (arXiv) and SyncCamMaster (ICLR 2025) are concurrent works. We’d like to clarify that ViewCrafter only generates static scenes, while SyncCamMaster produces videos from fixed viewpoints. Neither focuses on precise camera control for general image-to-video generation, which is the core of Cavia. In this sense, they are less directly related to Cavia than other works we’ve already compared, such as MotionCtrl, CameraCtrl, and CVD. As suggested, we conducted additional quantitative comparisons between Cavia and CVD using CVD’s evaluation metrics. 
The results show that Cavia significantly outperforms CVD in camera pose accuracy, consistent with our earlier qualitative comparisons (Fig. 9 and [Fig. B](https://cavia2025.github.io/#FigB) ) and 3D reconstruction comparisons (Fig. 10).

| Method | AUC-Rot@5 $\uparrow$ | AUC-Rot@10 $\uparrow$ | AUC-Rot@20 $\uparrow$ | AUC-Trans@5 $\uparrow$ | AUC-Trans@10 $\uparrow$ | AUC-Trans@20 $\uparrow$ | Prec $\uparrow$ | MScore $\uparrow$ |
|--------|----------------------|------------------------|------------------------|-------------------------|--------------------------|--------------------------|------------------|-------------------|
| **Cavia** | 6.36 | **17.11** | **37.56** | **4.13** | **8.42** | **18.18** | **10.19** | **6.70** |
| CVD | **7.17** | 16.76 | 33.31 | 2.59 | 4.93 | 9.86 | 5.68 | 3.29 |
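As a side note on the cross-view vs. cross-frame 3D attention discussed in A2 above (max sequence lengths 2x32x32 and 14x32x32), the two variants differ only in which axes of the (V, F, H, W) latent are flattened into the attention sequence. A minimal numpy sketch with toy sizes and a plain single-head kernel; all names and the exact layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def self_attn(x):
    # plain single-head scaled dot-product self-attention over axis -2 (tokens)
    d = x.shape[-1]
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ x

V, F, H, W, C = 2, 14, 4, 4, 8   # toy sizes (the paper's latents are 32x32)
x = np.random.default_rng(0).standard_normal((V, F, H, W, C))

# cross-view attention: tokens from all views of one frame attend jointly,
# sequence length V*H*W (2x32x32 in the table in A2)
xv = x.transpose(1, 0, 2, 3, 4).reshape(F, V * H * W, C)
yv = self_attn(xv)

# cross-frame attention: tokens from all frames of one view attend jointly,
# sequence length F*H*W (14x32x32 in the table in A2)
xf = x.reshape(V, F * H * W, C)
yf = self_attn(xf)
```

This also explains the reported cost asymmetry: cross-frame attention is the more expensive of the two because its sequence length grows with F=14 rather than V=2.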
Summary: This paper introduces a novel framework named Cavia for generating multi-view videos with camera controllability. The primary contributions consist of two key components: 1) Cross-view and cross-frame 3D attention mechanisms designed to enhance consistency across different viewpoints and temporal frames. 2) A joint training strategy that effectively utilizes a carefully curated combination of static, monocular dynamic, and multi-view dynamic videos to ensure geometric consistency, realistic object motion, and background preservation. The experimental results demonstrate superior geometric accuracy and perceptual quality compared to existing baseline methods. Claims And Evidence: The claims presented in the paper are intuitively sound. However, the experimental results are insufficient to comprehensively validate the individual contributions of each component of the proposed method, including the cross-view 3D attention mechanism, the cross-frame 3D attention mechanism, and the joint training strategy. Methods And Evaluation Criteria: Yes. The effectiveness of the proposed method is primarily validated using the evaluation metrics and benchmark datasets presented in Table 1 and Table 2. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: Yes. There are no obvious issues in the experimental design and analysis. Supplementary Material: Yes. The Supplementary Material includes videos that qualitatively compare the baseline methods with the proposed method, as well as ablation studies for each component of the method. The provided video demonstrations illustrate smoother video consistency compared to other methods. Relation To Broader Scientific Literature: The key contributions are related to multi-view image generation works such as V3D[1], IM-3D[2], SV3D[3], and Vivid-1-to-3[4]. 
The primary distinction is that these methods focus on generating static 3D objects or scenes, whereas this work introduces vivid object motion into multi-view dynamic video generation within complex scenes. [1] V3D: Video diffusion models are effective 3D generators, arXiv 2024 [2] IM-3D: Iterative multiview diffusion and reconstruction for high-quality 3D generation, ICML 2024 [3] SV3D: Novel multi-view synthesis and 3D generation from a single image using latent video diffusion, arXiv 2024 [4] Vivid-1-to-3: Novel view synthesis with video diffusion models, CVPR 2024 Essential References Not Discussed: There are no essential references not discussed. Other Strengths And Weaknesses: Strengths: 1. This paper proposes two novel cross-attention mechanisms, namely cross-view and cross-frame 3D attention, to enhance the multi-view consistency of generated videos. The evaluation results presented in the paper appear promising. 2. The supplementary materials are sufficient to demonstrate the effectiveness of the proposed method. Weaknesses: 1. The presentation of the method requires improvement. Specifically, there is a lack of formal mathematical equations to clarify how the two types of cross-attention mechanisms are calculated. 2. Additional quantitative ablation studies should be included in the main text to better illustrate the role and contribution of each component in the method’s design. Other Comments Or Suggestions: A List of Issues: - Abstract (Lines L033 and L036): There is a repetition of the phrase "To our best knowledge, ..." in both lines. This redundancy should be addressed for conciseness and clarity. - Figure 1(c): As I understand it, the attention mechanism represents four dimensions: V (View), F (Frame), H (Height), and W (Width). However, the figure lacks textual annotations to indicate each dimension, making it difficult to interpret. 
Additionally, the use of different colors is confusing, and there is no legend to explain their meanings. This information should be clearly labeled to improve readability. - Figure 1(a): The figure is unclear and lacks sufficient detail. Specifically: 1. The inputs to the cross-attention mechanism are not clearly indicated. 2. The roles of the key, query, and value are not labeled or explained. 3. There are no formal equations provided to illustrate how the two types of cross-attention (cross-view and cross-frame) are calculated. ---post-rebuttal comments--- I have no doubt about the contribution and technical merit of this work. The proposed Cavia framework addresses an interesting problem: multi-view image-to-video generation under precise camera control. Regarding the discussion on viewpoint changes, I find the authors’ clarification reasonable. They provided evidence that the demonstrated camera motions are substantial and aligned with community standards. Questions For Authors: Please see the weakness and suggestion part. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer RKXa for their positive evaluation of the technical novelty and superior performance of our method. As suggested, we will enhance the presentation of our method and figures in the revised draft. Formal mathematical equations will be added to clarify the computation of the proposed attention modules, and the typos in the abstract will be corrected as recommended. Below, we provide detailed responses to the additional comments: **Q1. Additional quantitative ablation studies should be included in the main text.** A1. Due to space limitations, we initially placed the ablation studies in the supplementary material. However, we are happy to incorporate these studies into the main manuscript as suggested. **Q2. Figure 1(c) lacks textual annotations.** A2. Thank you for pointing this out. Your understanding is correct: V (View), F (Frame), H (Height), and W (Width) denote the respective dimensions. We used purple and orange to indicate corresponding blocks in Fig. 1(a). We apologize for any confusion and will update Fig. 1 with clear textual annotations and a legend to improve readability. **Q3. Figure 1(a) lacks sufficient detail. Specifically:(1) The inputs to the cross-attention mechanism are not clearly indicated. (2) The roles of the key, query, and value are not labeled or explained. (3) There are no formal equations provided to illustrate how the two types of cross-attention (cross-view and cross-frame) are calculated.** A3. Thank you for raising these important points. Our cross-view and cross-frame attention modules are both self-attention mechanisms, where the key, query, and value are derived from the same input features. We will clarify this in the revised draft and will also include formal equations to illustrate the computation of both cross-view and cross-frame attention modules. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. 
In the revision, I recommend improving the presentation and moving the additional comparison results from the supplementary material to the main text to enhance clarity and impact. Since there are no essential issues with the paper’s core contributions, I maintain my original rating. --- Reply to Comment 1.1.1: Comment: Thank you again for your positive assessment, for highlighting our contributions, and for your valuable suggestions. We've made changes in our revised draft as suggested by the reviewer.
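For concreteness, the reshaping behind the two self-attention modules described in A3 above can be sketched as follows. This is a minimal NumPy illustration of our understanding, with dimension names V, F, H, W as in Fig. 1(c); the learned Q/K/V projections and multi-head structure of the actual model are omitted, so this is a sketch rather than the authors' implementation:

```python
import numpy as np

def self_attention(x):
    # x: (batch, tokens, channels); single-head scaled dot-product self-attention
    # with identity Q/K/V projections, for illustration only.
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(x.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ x

V, F, H, W, C = 2, 4, 3, 3, 8  # views, frames, height, width, channels
feat = np.random.randn(V, F, H, W, C)

# Cross-view attention: at each frame, tokens attend across views and spatial positions.
x = feat.transpose(1, 0, 2, 3, 4).reshape(F, V * H * W, C)
cross_view = self_attention(x).reshape(F, V, H, W, C).transpose(1, 0, 2, 3, 4)

# Cross-frame attention: within each view, tokens attend across frames and spatial positions.
y = feat.reshape(V, F * H * W, C)
cross_frame = self_attention(y).reshape(V, F, H, W, C)

assert cross_view.shape == feat.shape and cross_frame.shape == feat.shape
```

The key distinction is only which axes are folded into the token dimension before a standard self-attention call: views with space (per frame) versus frames with space (per view).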
Stability and Generalization Analysis of Decentralized SGD: Sharper Bounds Beyond Lipschitzness and Smoothness
Accept (poster)
Summary: This work establishes sharper stability and generalization bounds for decentralized SGD (D-SGD) under weaker assumptions. The analysis primarily builds on the on-average model stability, with a key innovation lying in the novel decomposition of neighboring consensus errors in decentralized settings. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I have checked the theoretical results in the main text. Experimental Designs Or Analyses: This paper does not contain experiments. Supplementary Material: Yes, I have briefly reviewed the appendix proofs. Relation To Broader Scientific Literature: Prior work was constrained by strict assumptions (Richards et al., 2020; Sun et al., 2021; Zhu et al., 2022; Le Bars et al., 2024); this work relaxes the assumptions and yields improved generalization results. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: Strengths * For smooth convex problems, this paper analyzes the convergence and generalization of D-SGD without relying on the Lipschitzness assumption. * The paper investigates the generalization performance of D-SGD in non-smooth problems (satisfying the Hölder continuous gradients condition). * The analysis utilizes the co-coercivity property of the gradient for a novel decomposition of the neighboring consensus error. Weaknesses * The theoretical analysis in this paper is limited to the convex problem. Other Comments Or Suggestions: See the questions. Questions For Authors: 1. ~~Can the analysis in this paper be extended to non-convex problems, e.g., training deep neural networks, and does the neighboring consensus error behave differently in this case?~~ 2. ~~In line 22, the authors say that this paper develops optimal generalization bounds, how to show that the results in the paper are optimal?~~ Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank you for taking the time to review our paper and greatly appreciate your valuable feedback. **Q1**: The theoretical analysis in this paper is limited to the convex problem. Can the analysis in this paper be extended to non-convex problems, e.g., training deep neural networks, and does the neighboring consensus error behave differently in this case? **A**: Thanks for the suggestion. We agree that generalization analysis for nonconvex problems is interesting for understanding the practical performance of D-SGD. We will explore this direction in our future studies. For example, it is interesting to study the stability and generalization analysis of decentralized algorithms for training overparameterized neural networks, where we can exploit some weak-convexity [2, 3] and self-bounding weak-convexity [4, 5] to develop meaningful stability bounds. **Q2**: In line 22, the authors say that this paper develops optimal generalization bounds, how to show that the results in the paper are optimal? **A**: The minimax statistical error for learning with a convex and smooth function is $O(1/\sqrt{n})$ (e.g., Theorem 7 in [1]), where $n$ is the sample size. As we have $mn$ training examples in total, the minimax optimal error bound for the decentralized setting is $O(1/\sqrt{mn})$. Since we achieve excess risk bounds of order $O(1/\sqrt{mn})$ (Remark 4.9), we derive the minimax optimal risk bounds. We will clarify this in the revision. [1] Stability and convergence trade-off of iterative optimization algorithms. arXiv preprint, 2018. [2] Stability & generalisation of gradient descent for shallow neural networks without the neural tangent kernel. NeurIPS, 2021. [3] Generalization guarantees of gradient descent for shallow neural networks. Neural Computation, 2024. [4] Sharper guarantees for learning neural network classifiers with gradient methods. arXiv preprint, 2024. [5] On the optimization and generalization of multi-head attention. TMLR, 2024.
Summary: This work studies decentralized stochastic gradient descent where a network of agents collaborate to minimize an aggregate of local cost functions privately available to each agent. It focuses on the generalization analysis, which is different from the convergence rate analysis. It improves the generalization analysis compared to previous works under more general settings, such as removing the Lipschitzness assumption and also considering nonsmooth settings. It also provides optimal generalization bounds. Claims And Evidence: Yes. All proofs are provided. Methods And Evaluation Criteria: No simulations are provided. Theoretical Claims: I did not check the proofs. The results seem reasonable. However, the writing and presentation of the results need improvement. Experimental Designs Or Analyses: There are no numerical experiments. Supplementary Material: No. Relation To Broader Scientific Literature: The contributions are novel as the theoretical results give tighter bounds with less restrictive assumptions. Essential References Not Discussed: No. The work discusses all relevant works. Other Strengths And Weaknesses: The paper has sufficient novelty but lacks clarity in presenting the problem and results. Notation is quite confusing, and clearer discussion should be provided regarding the main results. Another weakness is that the analysis applies only to the convex setting. Other Comments Or Suggestions: I suggest clearly discussing the novelty in this work and how you were able to provide tighter bounds with less restrictive assumptions. A clear explanation of the theoretical novelty would be useful. It would be useful to provide experimental results backing the theoretical findings. Questions For Authors: Can you please discuss Table 1 in detail and clearly explain how your result is tighter? The bounds involve the stepsize, and it is not clear to me how your result is tighter than Taheri and Thrampoulidis (2023). 
Are your results comparable when considering deterministic gradient? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for taking the time to review our paper and greatly appreciate your valuable feedback. **Q1**: Lack of clarity in presenting the problem and results. Notation is quite confusing. **A**: Thanks for your valuable feedback. We will reorganize the problem statement more clearly, systematically sort out the notations and present our results in a more readable way. **Q2**: The analysis applies to the convex setting. **A**: Thanks for the suggestion. We agree that generalization analysis for nonconvex problems is interesting for understanding the practical performance of decentralized SGD. We will explore this direction in our future work. For example, it is interesting to study the stability and generalization analysis of decentralized algorithms for training overparameterized neural networks, where we can exploit some weak-convexity [2, 3] and self-bounding weak convexity [4, 5] to develop meaningful stability bounds. **Q3**: Explain the novelty and how you provided tighter bounds with weaker assumptions. **A**: We highlight our novelty here. First, we remove the Lipschitzness assumptions used in [1, 6] and replace the uniform gradient bound by the function value via the self-bounding property, i.e., $\lVert \nabla \ell(\theta, z) \rVert_2^2 \leq 2 L \ell(\theta, z)$. In this way, we build stability bounds involving training errors, which shows the benefit of optimization in stability and implies fast rates under low noise conditions. Second, instead of decomposing the neighboring consensus error $\mathbb{E}[\lVert \bar{\theta}^{(t)}-\theta_k^{(t)}-\bar{\theta}^{(t,ij)}+\theta_k^{(t,ij)}\rVert_2^2]$ into two consensus errors as in [6, 7], we show that it can be offset using the co-coercivity of functions, which improves the stability analysis. **Q4**: Discuss Table 1 in detail and explain how your result is tighter. 
**A**: Our results improve the discussions in [1, 6] by removing the Lipschitzness assumptions and involving training errors, which implies faster rates under low noise conditions. Our results improve the results in [6] and [8] by removing the term $G\eta T/(1-\lambda)$ and $\lambda$ respectively, which do not involve $m$ and $n$. Our results improve [9] by removing both the bounded variance assumption and the term $C_W$. Furthermore, our stability analysis implies fast rates under a low noise condition. As a comparison, the stability bound in [9] involves $(\sigma\eta T+GT\eta C_W)/(mn)$, which does not imply fast rates in the low noise case. The stability bound in [7] involves a dominant factor $\frac{\eta^2\sqrt{T}}{1-\lambda}$ (if $\eta\gtrsim \frac{1-\lambda}{mn}$ and we ignore $L$ for brevity), which is replaced by $\frac{\eta^{\frac{3}{2}}}{\sqrt{m n}(1-\lambda)}+\frac{\eta}{m\sqrt{n}}$ in our bound. If $\eta \gtrsim \frac{1}{mnT}$ and $\eta\gtrsim \frac{1-\lambda}{\sqrt{Tn}m}$, then we have $\frac{\eta^2\sqrt{T}}{1-\lambda}\gtrsim \frac{\eta^{\frac{3}{2}}}{\sqrt{m n}(1-\lambda)}+\frac{\eta}{m\sqrt{n}}$ and our stability bound is better. Our analysis suggests $\eta\asymp 1/\sqrt{mn}$ in Remark 4.9, and in this case we have $\frac{\eta^2\sqrt{T}}{1-\lambda}\gg \frac{\eta^{\frac{3}{2}}}{\sqrt{m n}(1-\lambda)}+\frac{\eta}{m\sqrt{n}}$. **Q5**: How your result is tighter than [7]? Is it comparable when considering deterministic gradient? **A**: Yes, our technique can still imply better stability bounds when applied to decentralized gradient descent. Indeed, the discussions in [7] control the neighboring consensus errors by two consensus errors (Remark 4.3), which fail to use the property that neighboring consensus errors consider the difference of models produced on neighboring datasets. We introduce new techniques to control the neighboring consensus errors with an error decomposition and the coercivity of smooth functions. 
We can apply this technique to decentralized GD and improve the existing stability analysis. **Q6**: Providing experimental results is useful. **A**: Thanks for your suggestions. We agree that empirical analysis is helpful to validate our theoretical findings. We will leave it as future work. [1] Graph-dependent implicit regularisation for distributed stochastic subgradient descent. JMLR, 2020. [2] Stability & generalisation of gradient descent for shallow neural networks without the neural tangent kernel. NeurIPS, 2021. [3] Generalization guarantees of gradient descent for shallow neural networks. NeurIPS, 2024. [4] Sharper guarantees for learning neural network classifiers with gradient methods. arXiv preprint, 2024. [5] On the optimization and generalization of multi-head attention. TMLR, 2024. [6] Stability and generalization of decentralized stochastic gradient descent. AAAI, 2021. [7] On generalization of decentralized learning with separable data. AISTATS, 2023. [8] Topology-aware generalization of decentralized sgd. ICML, 2022. [9] Improved stability and generalization guarantees of the decentralized sgd algorithm. ICML, 2024.
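The comparison in A4 above can be sanity-checked numerically at the step size $\eta \asymp 1/\sqrt{mn}$ and horizon $T \asymp mn$ suggested in Remark 4.9. The specific values of $m$, $n$, and $\lambda$ below are our own illustrative choices, with $L$ and absolute constants ignored:

```python
import math

# Illustrative values: m agents, n samples per agent, spectral parameter lambda.
m, n, lam = 10, 1000, 0.9
eta = 1.0 / math.sqrt(m * n)  # suggested step size (Remark 4.9)
T = m * n                     # suggested number of iterations

# Dominant stability term from the prior analysis (constants and L dropped).
prior = eta**2 * math.sqrt(T) / (1 - lam)
# Corresponding terms in the improved bound from the rebuttal.
ours = eta**1.5 / (math.sqrt(m * n) * (1 - lam)) + eta / (m * math.sqrt(n))

# At this step size the improved terms are orders of magnitude smaller.
assert ours < prior
```

Under these choices `prior` is $0.1$ while `ours` is roughly $1.3\times 10^{-4}$, consistent with the claim that $\frac{\eta^2\sqrt{T}}{1-\lambda}\gg \frac{\eta^{3/2}}{\sqrt{mn}(1-\lambda)}+\frac{\eta}{m\sqrt{n}}$ when $\eta \asymp 1/\sqrt{mn}$.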
Summary: This paper studies the stability of D-SGD, and presents new and sharper stability bounds under the assumption that the functions are convex and L-smooth. The removal of the function Lipschitzness and the bounded variance assumptions highlights the novelty of the work. Theoretical analysis shows an improved stability bound compared to Zhu et al. under similar conditions. In addition, the theoretical results this paper offers also interpolate many known tight bounds under certain conditions. Claims And Evidence: The major contribution of this paper is theoretical, please see Theoretical Claims. Methods And Evaluation Criteria: There are no numerical evaluations in this paper. The author provided a good comparison of the stability bounds of related literature in Table 1. Theoretical Claims: The theoretical claims are mostly to be expected and seem to be either an extension of previous D-SGD analysis or from the stability/generalization of SGD in the centralized setting. The authors were able to highlight the novelty of this work in Remark 4.3. However, I have some questions on the inequality shown on the left-hand side of line 226: the consensus error is upper bounded by a norm of the difference between $\nabla l$ and $l$. This seems to be a typo since $l$ denotes a scalar loss function. Experimental Designs Or Analyses: There are no experiments provided in this paper. The analysis seems straightforward and the authors were able to highlight the difference in analysis compared to prior works. Supplementary Material: I did not read the supplementary materials, apart from checking some proof details for questions I have from reading the main paper. Relation To Broader Scientific Literature: The author provided a comparison of the stability bounds of related literature in Table 1. Although the comparison is far from complete, it was able to highlight the comparison of the current paper with some closely related papers. 
Essential References Not Discussed: I am not very familiar with the related literature. There are no more related works that should be referenced to the best of my knowledge. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: misspelling of "related" on line 60. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for taking the time to review our paper and greatly appreciate your valuable feedback. **Q1**: The theoretical claims are mostly to be expected and seem to be either an extension of previous D-SGD analysis or from the stability/generalization of SGD in the centralized setting. **A**: We highlight our novelty as follows. As compared to SGD in the centralized setting, the analysis of decentralized SGD is more challenging as it introduces the neighboring consensus error $\mathbb{E}[\\|\bar{\theta}^{(t)}-\theta_k^{(t)}-\bar{\theta}^{(t,ij)}+\theta_k^{(t,ij)}\\|^2]$. The existing analysis of the neighboring consensus error is a bit crude since it directly decomposes it into two consensus errors $\mathbb{E}[\\|\bar{\theta}^{(t)}-\theta_k^{(t)}\\|^2]$ and $\mathbb{E}[\\|\bar{\theta}^{(t,ij)}-\theta_k^{(t,ij)}\\|^2]$ [1, 2], which ignores the important property that $\theta_k^{(t)}$ and $\theta_k^{(t,ij)}$ are produced based on two neighboring datasets and should be close. A novelty of our analysis is to show that the neighboring consensus error can be offset by using the co-coercivity of a gradient map, which is achieved via a new error decomposition. In this way, we improve the stability analysis of decentralized algorithms [2, 3]. Please see Remark 4.3 for the novelty of our analysis. Our stability analysis also improves the existing analysis by removing the Lipschitzness condition [1, 3], removing the Gaussian weight difference assumption [4], and removing the bounded variance assumption [5]. We also remove the terms $G\eta T/(1-\lambda)$ and $\lambda$ in [3] and [4], respectively. Finally, our analysis shows the benefit of optimization in improving the stability, and gives the first fast rates of order $1/(mn)$ for decentralized SGD under a low-noise condition. 
**Q2**: However, I have some questions on the inequality shown on the left hand side of line 226, the consensus error is upper bounded by a norm of difference between $\nabla l$ and $l$. This seems to be a typo since $l$ denotes a scalar loss function. **A**: Thanks for pointing this out. This should be the difference of two gradients, i.e., both terms should carry a $\nabla$. We will fix this typo in the revision. **Q3**: Misspelling of "related" on line 60. **A**: Thanks for pointing this out. We will correct it in the revision. [1] Graph-dependent implicit regularisation for distributed stochastic subgradient descent. JMLR, 2020. [2] On generalization of decentralized learning with separable data. AISTATS, 2023. [3] Stability and generalization of decentralized stochastic gradient descent. AAAI, 2021. [4] Topology-aware generalization of decentralized sgd. ICML, 2022. [5] Improved stability and generalization guarantees of the decentralized sgd algorithm. ICML, 2024. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' clarification on the novelty within the analysis. The finer-grained analysis of the consensus error contributed to the tighter bounds as well as the stability analysis. However, I am not entirely sure about the impact of this submission. I have modified my review accordingly.
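The co-coercivity property of an $L$-smooth convex gradient map invoked in A1 above — $\langle \nabla f(x)-\nabla f(y),\, x-y\rangle \ge \tfrac{1}{L}\|\nabla f(x)-\nabla f(y)\|^2$ — can be verified numerically on a toy quadratic. This is our own illustration of the property, unrelated to the paper's proofs:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + np.eye(5)          # symmetric positive definite Hessian
L = np.linalg.eigvalsh(A).max()  # smoothness constant of f(x) = 0.5 x^T A x

def grad(x):
    return A @ x                 # gradient of the quadratic f

for _ in range(100):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    g = grad(x) - grad(y)
    # co-coercivity: <grad f(x) - grad f(y), x - y> >= (1/L) ||grad f(x) - grad f(y)||^2
    assert g @ (x - y) >= g @ g / L - 1e-9
```

For the quadratic this reduces to $z^\top A z \ge \|Az\|^2/\lambda_{\max}(A)$ with $z = x - y$, which holds since $z^\top A^2 z \le \lambda_{\max}(A)\, z^\top A z$.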
Summary: This paper presents the generalization and excess risk analysis for decentralized stochastic gradient descent (D-SGD) on smooth (including Hölder continuous, which generalizes smoothness) and convex problems. The key contribution is the removal of the standard Lipschitzness assumption in the analysis. The authors derive generalization bounds under the relaxed condition and compare them with existing results. The paper is well-written, with clear notations and explicit order analysis of the derived bounds. ## update after rebuttal No remaining concerns. Claims And Evidence: The main claims of the paper are well-supported by theoretical derivations. The generalization bounds for D-SGD under non-Lipschitz assumptions are derived and compared with prior work. Methods And Evaluation Criteria: The comparison with existing results is reasonable. No experiments provided. Theoretical Claims: The reviewer checked the theoretical claims but did not check the proof carefully. Experimental Designs Or Analyses: No experiments provided. Supplementary Material: The reviewer briefly examined the appendix and found it to be rigorous, though the details were not fully checked. Relation To Broader Scientific Literature: The discussion of related work and the comparison with prior literature are thorough. Essential References Not Discussed: No essential references not discussed. Other Strengths And Weaknesses: **Weaknesses**: The study focuses on **convex problems**, which provides valuable theoretical insights. However, exploring nonconvex settings could further enhance the practical relevance of the results. Additionally, the paper only analyzes the generalization of the average iterate but does not study local iterates. Other Comments Or Suggestions: Theorem 4.1 imposes a G-Lipschitz condition and establishes the bound $\frac{G^2 \eta^2}{m n} \left(\frac{T}{m} + \frac{T^2}{m n}\right)$. 
The authors state that this matches the serial case $\frac{G^2 \eta^2}{n} \left(T + \frac{T^2}{n}\right)$. Notably, this bound implies that increasing the number of agents/nodes ($m$) enhances generalization, as the first term in the parentheses decreases with larger $m$. The authors could provide a more detailed explanation and analysis of this effect. The authors could consider mentioning "non-Lipschitz" in the title to better highlight the key contribution of the paper. In Theorem 4.1 (Stability bound), there is an extra space after "Stability bound." Questions For Authors: In Theorem 4.1, the assumption $\left(\frac{2(1+\lambda) L \eta_t}{(1-\lambda)^2} + 2\right) \eta_t - \frac{1}{L} \leq 0$ is not carefully analyzed. The authors state that it holds when $\eta_t \lesssim \frac{(1-\lambda)}{L}$. However, in a ring topology with a large number of nodes (where $\lambda$ is close to 1), this assumption seems difficult to satisfy. This contradicts empirical results, such as Figure 1 (right) in [1], where experiments suggest that larger node sizes in a ring topology actually allow for a larger learning rate. [1] Beyond spectral gap: The role of the topology in decentralized learning. NeurIPS, 2022. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for taking the time to review our paper and greatly appreciate your valuable feedback. **Q1**: Exploring nonconvex settings could further enhance the practical relevance of the results. **A**: Thanks for the suggestion. We agree that generalization analysis for nonconvex problems is interesting for understanding the practical performance of decentralized SGD. We will explore this direction in our future studies. For example, it is interesting to study the stability and generalization analysis of decentralized algorithms for training overparameterized neural networks, where we can exploit some weak-convexity [2, 3] and self-bounding weak-convexity [4, 5] to develop meaningful stability bounds. **Q2**: The paper only analyzes the generalization of the average iterate but does not study local iterates. **A**: We consider the average of iterates since most of the existing convergence analyses focus on the average of iterates [6, 7, 8]. Then, we can combine our generalization analysis and existing convergence analyses to derive excess risk bounds. We note that the recent work [9] gave interesting discussions on the stability analysis for local iterates. Their studies consider the $\ell_1$-version of on-average model stability. It is interesting to apply our techniques to study the $\ell_2$-version of on-average model stability for local iterates. We will consider this in our future studies. **Q3**: Theorem 4.1 imposes a G-Lipschitz condition and establishes the bound $\frac{G^2 \eta^2}{m n}\left(\frac{T}{m}+\frac{T^2}{m n}\right)$. The authors state that this matches the serial case $\frac{G^2 \eta^2}{n}\left(T+\frac{T^2}{n}\right)$. Notably, this bound implies that increasing the number of agents/nodes $(m)$ enhances generalization, as the first term in the parentheses decreases with larger $m$. The authors could explain more about this effect. 
**A**: Note that each agent has $n$ examples in our setting, and therefore we have $mn$ examples in total. Then, the effect of perturbing a single example diminishes as $m$ increases, which implies improved stability. We will add discussions to explain this more clearly. **Q4**: The authors could consider mentioning "non-Lipschitz" in the title to better highlight the key contribution of the paper. In Theorem 4.1 (Stability bound), there is an extra space after "Stability bound." **A**: Thanks for your valuable suggestions. We will add "non-Lipschitz" in the title and fix this formatting issue in the revision. **Q5**: In Theorem 4.1, the assumption $\left(\frac{2(1+\lambda) L \eta_t}{(1-\lambda)^2}+2\right) \eta_t-\frac{1}{L} \leq 0$ is not carefully analyzed. The authors state that it holds when $\eta_t \lesssim \frac{(1-\lambda)}{L}$. However, in a ring topology with a large number of nodes (where $\lambda$ is close to 1), this assumption seems difficult to satisfy. This contradicts empirical results, such as Figure 1 (right) in [1], where experiments suggest that larger node sizes in a ring topology actually allow for a larger learning rate. **A**: Thanks for your intuitive comment on the learning rate. We note that the assumption $\eta_t\lesssim (1-\lambda)/L$ is also used in both the existing convergence analysis [6] and stability analysis [8] of decentralized algorithms. Indeed, the estimation of the consensus error often leads to a term $1/(1-\lambda)$, which becomes infinite if $\lambda$ is close to $1$. Furthermore, in Remark 4.9 we set $\eta_t\asymp 1/\sqrt{T}$ and $T\asymp mn$ to get risk bounds of order $1/\sqrt{mn}$. Then, the assumption $\eta_t\lesssim (1-\lambda)/L$ roughly becomes $1/\sqrt{mn}\lesssim (1-\lambda)/L$, which is a mild assumption if $n$ is large. While $1-\lambda$ becomes smaller as $m$ increases, our suggested step size $\eta_t\asymp 1/\sqrt{mn}$ also decreases with $m$. 
It is very interesting to further relax the requirement $\eta_t\lesssim (1-\lambda)/L$. We will leave this as an interesting question for further investigation. [1] Beyond spectral gap: The role of the topology in decentralized learning. NeurIPS, 2022. [2] Stability & generalisation of gradient descent for shallow neural networks without the neural tangent kernel. NeurIPS, 2021. [3] Generalization guarantees of gradient descent for shallow neural networks. Neural Computation, 2024. [4] Sharper guarantees for learning neural network classifiers with gradient methods. arXiv preprint, 2024. [5] On the optimization and generalization of multi-head attention. TMLR, 2024. [6] Graph-dependent implicit regularisation for distributed stochastic subgradient descent. JMLR, 2020. [7] Stability and generalization of decentralized stochastic gradient descent. AAAI, 2021. [8] On generalization of decentralized learning with separable data. AISTATS, 2023. [9] Improved stability and generalization guarantees of the decentralized sgd algorithm. ICML, 2024. --- Rebuttal Comment 1.1: Comment: The reviewer thanks the authors for their detailed response. In Theorem 4.1, the term $\frac{G^2 \eta^2}{mn} ( \frac{T}{m} + \frac{T^2}{mn} )$ can also be written as $\frac{G^2 \eta^2}{N} ( \frac{T}{m} + \frac{T^2}{N} )$ where $N = mn$. This suggests that increasing $m$ actually helps stability and generalization. This differs from previous results, where improving generalization by increasing $m$ was typically attributed to a larger $N$, since $N = mn$. However, Theorem 4.1 indicates that stability can be improved by increasing $m$ even if $N$ is set fixed. This is a surprising result. Are there any related works that have discussed a similar theoretical observation? Could the authors elaborate more on this point? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the further clarifications of the comment. 
We think the underlying reason is that we consider an average of the iterates and the $\ell_2$ version of the on-average model stability. Please note that the $\ell_2$ model stability is a second moment of a random variable, which can be decomposed into a bias term and a variance term. The average operator considered in this submission reduces the variance by a factor of $m$. This explains why there is a factor of $1/m$ in our stability bound (Thm 4.1 gives a bound on the square of the $\ell_2$ on-average model stability). As a comparison, the previous discussions either consider local iterates or the $\ell_1$ version of stability, which may not show the effect of variance reduction of decentralized SGD with $m$ local machines. A recent stability analysis of minibatch/local SGD also shows a similar phenomenon [1]: it was shown there that the variance decreases by a factor of the batch size or the number of local machines, and therefore the square of the $\ell_2$ model stability improves by the same factor. We will add more discussion on this in the revision. Thanks again and please let us know if you have further comments. [1] Stability and Generalization for Minibatch SGD and Local SGD. arXiv 2023
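The variance-reduction effect can be made concrete in a simplified setting (the i.i.d. assumption below is purely for intuition; the actual proof handles the dependent decentralized iterates): if the $m$ local deviations $X_1, \dots, X_m$ were i.i.d. with mean $\mu$, the second moment of their average would decompose as

$$ \mathbb{E}\Big\|\frac{1}{m}\sum_{k=1}^{m} X_k\Big\|^2 = \|\mu\|^2 + \frac{1}{m}\,\mathbb{E}\big\|X_1 - \mu\big\|^2, $$

i.e., a bias term plus a variance term shrinking at rate $1/m$, mirroring the factor $1/m$ in the squared $\ell_2$ on-average stability bound of Theorem 4.1.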
QoS-Efficient Serving of Multiple Mixture-of-Expert LLMs Using Partial Runtime Reconfiguration
Accept (poster)
Summary: This paper addresses the challenge of efficiently serving multiple fine-tuned MoE LLMs on a single GPU. The authors propose a novel serving system with two key components: Similarity-based expert consolidation and Runtime partial reconfiguration. The authors evaluate their approach using Mixtral-8x7B models on a server with a single NVIDIA A100 GPU. They demonstrate that their method achieves output quality comparable to individual models while maintaining throughput similar to serving a single model, with only a small increase in TTFT. Compared to NVIDIA's MIG approach, their system shows an 85% average reduction in turnaround time. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: NA Experimental Designs Or Analyses: Yes Supplementary Material: NA Relation To Broader Scientific Literature: It helps improve efficiency. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - The paper addresses a practical and significant problem in deploying multiple fine-tuned MoE LLMs in resource-constrained environments. - The proposed system achieves a good balance between quality and performance, outperforming conventional virtualization approaches. Weaknesses: - The approach is demonstrated only for two models from the same model family (Mixtral base and instruct variants). It's unclear how well it would scale to more diverse models with potentially greater differences. - While the authors mention the approach is applicable to MoEs with different sizes and parameters, they don't provide experimental evidence for this claim. - The latency analysis in Table 3 could benefit from more explanation about why certain patterns occur and their implications. Other Comments Or Suggestions: - It would be interesting to see how this approach performs with more than two models and with models that have been fine-tuned for more diverse tasks. Questions For Authors: I have the following concerns: 1.
The author mentioned that "Although this paper focuses specifically on fine-tuned LLMs with identical architectures and text generation tasks, the proposed technique is applicable to MoEs with different sizes and number of parameters." The approach is demonstrated only for two models from the same model family (Mixtral base and instruct variants). It's unclear how well it would scale to more diverse models with potentially greater differences. 2. Would your approach be applicable to other MoE architectures beyond the Mixtral family, particularly those with different gating mechanisms or expert structures? 3. The experiment does not have a detailed analysis of the overhead of each part of this method. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your invaluable comments. The experiments and additional explanations provided here have been incorporated into the revised manuscript. Weaknesses: W1: The approach is demonstrated only for two models from the same model family (Mixtral base and instruct variants). It's unclear how well it would scale to more diverse models with potentially greater differences. Response 1: *For a complete, detailed response with numerical results, please refer to “Response 1” given to Reviewer 2 (xadW). Due to the character limit, the response given below is a summarized version: To demonstrate the applicability of our approach to other model families with a higher number of fine-tuned variants, we evaluated our approach against the Averaging baseline (Avg: Izmailov et al., 2018) using Google’s Switch Transformer version 8 MoE-based model (Hugging Face (HF) ID: google/switch-base-8). This model has a total of 96 experts, which are distributed equally across 12 layers (8 experts per layer). Moreover, to assess the scalability of our approach, we used the base model provided by Google along with three other community-provided fine-tuned variants. As shown, by increasing the number of merged models, the averaging approach suffers considerably from quality reduction (0.49 -> 0.42 -> 0.33 -> 0.25). However, our approach demonstrates greater resilience and is able to preserve the quality of generated output even for a higher number of variants (0.49 -> 0.49 -> 0.46 -> 0.46). W2: While the authors mention the approach is applicable to MoEs with different sizes and parameters, they don't provide experimental evidence for this claim. Response 2: As discussed in "Response 1," we expanded our evaluation to assess the applicability of our approach to Google's Switch Transformer version 8, an MoE-based model. Compared to Mixtral, the switch-base-8 model has a smaller memory footprint.
Each expert in the Mixtral model consists of three layers, resulting in a total of 176,162,304 parameters. In contrast, each expert in switch-base-8 has two layers, with a total of 4,718,592 parameters. The expert sparsity degree for both models is the same, as they each select the top two experts per layer for computation. W3: The latency analysis in Table 3 could benefit from more explanation about why certain patterns occur and their implications. Response 3: As mentioned in Austin et al., "How to Scale Your Model" [1], during the generation phase of a transformer-based LLM, the attention kernel is memory-bound. This is because the results of prior computations, stored in the KV cache, must be copied from the GPU’s GMEM to SMEM. Consequently, each GPU instance in NVIDIA MIG has sufficient compute resources, and we do not observe a significant increase in the latency of the attention layer (0.72 -> 0.78). However, the expert block is compute-bound [1]. Therefore, when both required experts are available in GMEM, the latency increase is more noticeable (1.2 -> 1.7). On the other hand, when the hit rate decreases for the expert block, an overhead is introduced due to copying experts from the CPU’s DRAM to the GPU’s GMEM via the PCIe link. In our approach, when serving a single model, the full bandwidth of the PCIe link is available to the process. However, using NVIDIA MIG with two GPU instances splits the available PCIe bandwidth between them, nearly doubling the imposed overhead. Single/Proposed: 1.2 -> 29.2 -> 56.8 (27.8 ms for each expert not being available in GMEM) NVIDIA MIG with two instances: 1.7 -> 54.1 -> 104.3 (51.3 ms for each expert not being available in GMEM) [1] Austin et al., "How to Scale Your Model", Google DeepMind, online, 2025. Questions: Q1: The author mentioned that "Although this paper focuses specifically on fine-tuned LLMs with ... 
Response 4: Please check our response to the first weakness (“Response 1”) Q2: Would your approach be applicable to other MoE architectures beyond the Mixtral family, particularly those with different gating mechanisms or expert structures? Response 5: Please check our response to the second weakness (“Response 2”) Q3: The experiment does not have a detailed analysis of the overhead of each part of this method. Response 6: In our approach, a central system must be established to generate a consolidated map by analyzing each model variant. Although calculating expert-to-expert distances can be time-consuming, it is a one-time process for a given list of models and can be performed offline. The time required for this process depends on the structure of the experts within each model family. A higher number of parameters per expert or a greater number of model variants will increase the processing time. Regarding the overhead of the inference process (Algorithm 2) compared to baseline approaches, please refer to the response provided for Weakness 3 ("Response 3").
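As a sanity check on the per-expert copy cost cited in Response 3, the arithmetic can be sketched as follows. The effective PCIe bandwidth value here is our assumption, back-solved for illustration from the reported 27.8 ms; it is not a number from the paper:

```python
# Back-of-envelope check of the per-expert copy cost from Response 3:
# a Mixtral expert (176,162,304 fp16 parameters, ~352 MB) copied over
# the CPU-GPU PCIe link. EFFECTIVE_PCIE_GB_S is an assumed value
# chosen for illustration, not a measured figure.
EXPERT_PARAMS = 176_162_304        # parameters per Mixtral expert
BYTES_PER_PARAM = 2                # fp16/bf16 weights
EFFECTIVE_PCIE_GB_S = 12.7         # assumed effective PCIe bandwidth (GB/s)

expert_bytes = EXPERT_PARAMS * BYTES_PER_PARAM
copy_ms = expert_bytes / (EFFECTIVE_PCIE_GB_S * 1e9) * 1e3
print(f"{copy_ms:.1f} ms per expert missing from GMEM")

# Splitting the link between two MIG instances halves the bandwidth
# and roughly doubles the per-expert cost (compare 27.8 ms vs. 51.3 ms).
print(f"{2 * copy_ms:.1f} ms with the PCIe bandwidth split in half")
```

Under this assumed bandwidth, the sketch reproduces the reported magnitudes: roughly 28 ms per missing expert for a single process, and roughly double that when two MIG instances share the link.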
Summary: This paper presents a novel serving system for multiple fine-tuned Mixture-of-Expert (MoE) Large Language Models (LLMs) on a single GPU. The approach uses similarity-based expert consolidation to share similar experts across models, coupled with runtime partial reconfiguration to dynamically replace non-expert layers when processing requests from different models. Experiments with Mixtral-8x7B models demonstrate a significant reduction in turnaround time compared to MIG. Claims And Evidence: The authors maintain that their technique: 1. Reduces memory overhead through expert consolidation (supported by implementation data) 2. Achieves competitive output quality (backed by comprehensive benchmarks) 3. Delivers throughput comparable to single-model serving with minimal increase in TTFT (validated in experiments) 4. Reduces turnaround time by 85% compared to NVIDIA MIG (well-documented) The evidence presented generally validates these assertions, particularly the performance improvements over NVIDIA MIG. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper does not provide sufficient theoretical analysis. While the similarity-based consolidation is intuitively sensible, the authors offer minimal theoretical justification for why this approach preserves model performance. Experimental Designs Or Analyses: Experimental design on realistic hardware (A100 GPU) is thorough: 1) multiple arrival rates examined using a Poisson distribution; 2) comprehensive benchmarking across diverse tasks; 3) appropriate baselines (single model, NVIDIA MIG, weight-averaged model) Supplementary Material: No supplementary material is mentioned, which is a limitation. Code availability is not addressed, hampering reproducibility. Relation To Broader Scientific Literature: The paper satisfactorily situates the work within MoE serving literature and model merging research. However, connections to multi-tenant serving systems outside the MoE context could be enhanced.
Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. Practical solution to an important real-world problem 2. Clear exposition of the approach 3. Strong empirical results Weaknesses: 1. Limited analysis of scaling behavior with >2 models 2. No discussion of privacy implications when sharing experts 3. Lack of ablation studies isolating the impact of different components 4. Energy efficiency analysis is absent Other Comments Or Suggestions: The paper would benefit from: 1. Ablation studies separating expert consolidation and runtime reconfiguration effects 2. Analysis of how performance varies with different expert similarity thresholds Questions For Authors: How does the approach scale to >2 models? Does performance degrade? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your invaluable comments. The experiments and additional explanations provided here have been incorporated into the revised manuscript. Weaknesses: W1: Limited analysis of scaling behavior with >2 models: Response 1: To demonstrate the scalability and applicability of our approach to other model architectures with a higher number of fine-tuned variants, we evaluated our approach against the Averaging baseline (Avg: Izmailov et al., 2018) using Google’s Switch Transformer version 8 MoE-based model (Hugging Face (HF) ID: google/switch-base-8). This model has a total of 96 experts, which are distributed equally across 12 layers (8 experts per layer). For this experiment, we used the base model provided by Google along with three other community-provided fine-tuned variants:

- Model A: emre/switch-base-8-finetuned-samsum
- Model B: google/switch-base-8
- Model C: glamprou/switch-base-8-sst2
- Model D: glamprou/switch-base-8-mnli

To compare the reduction in quality of output based on the number of merged models, we consider three different serving scenarios:

- Serving models A & B
- Serving models A & B & C
- Serving models A & B & C & D

In the table below, we report the ROUGE scores (an N-gram based summarization evaluation metric) for a summarization task on the SAMSum dataset (HF id: samsung/samsum), which Model A is specifically finetuned for:

Table 1 (R1: ROUGE-1, R2: ROUGE-2, L: ROUGE-L; for all three metrics, higher is better):

| Configuration | R1 | R2 | L |
| --- | --- | --- | --- |
| Model A | 0.4939 | 0.2504 | 0.4096 |
| Model B | 0.1487 | 0.0268 | 0.1287 |
| Model C | 0.0523 | 0.0048 | 0.0460 |
| Model D | 0.0404 | 0.0093 | 0.0361 |
| Avg(A, B) | 0.4217 | 0.2018 | 0.3445 |
| Proposed(A, B) | 0.4911 | 0.2453 | 0.4070 |
| Avg(A, B, C) | 0.3335 | 0.1325 | 0.2776 |
| Proposed(A, B, C) | 0.4600 | 0.2292 | 0.3774 |
| Avg(A, B, C, D) | 0.2561 | 0.0799 | 0.2091 |
| Proposed(A, B, C, D) | 0.4633 | 0.2290 | 0.3772 |

As shown, by increasing the number of
merged models, the averaging approach suffers considerably from quality reduction (0.49 -> 0.42 -> 0.33 -> 0.25). However, our approach demonstrates greater resilience and is able to preserve the quality of generated output even for a higher number of variants (0.49 -> 0.49 -> 0.46 -> 0.46). W2: No discussion of privacy implications when sharing experts: Response 2: We agree with the reviewer that a central merging system should be established to implement our approach and to generate a consolidated model, which may raise privacy concerns. However, in our approach, only model weights are shared with the central system (similar to Federated Learning), while training data and validation samples are not shared with other users. W3: Lack of ablation studies isolating the impact of different components: Response 3: To address this comment, we added further experiments to the evaluation section that demonstrate the output quality without the non-expert reconfiguration component. The following results will be included in Table 1 of the revised manuscript. Proposed-No-Reconfiguration: WikiText: 4.05, C4: 7.55, PTB: 13.12, MT-Bench(1st): 8.06, MT-Bench(2nd): 7.35, MT-Bench(avg): 7.70, MMLU: 71.80%, HellaSwag: 81.2%, TruthfulQA: 69.7% As shown, without the non-expert reconfiguration component, the quality of the generated output decreases. Although this reduction in quality is not as significant as that of the original opposing models, it causes our proposed approach to perform worse than the Averaging baseline (Avg: Izmailov et al., 2018). W4: Energy efficiency analysis is absent: Response 4: While it is not feasible to assess power consumption within the given time frame, we will include an in-depth analysis of power consumption and its behavior for our approach and the compared baselines in the future extension of this work.
Summary: Updated score 2->3 after rebuttal. ----------------------- The paper proposes a novel serving system to address the problem of efficiently serving multiple finetuned mixture-of-experts (MoE) large language models (LLMs). The idea is to run a similarity-based expert consolidation to share similar experts across different models, which can reduce the memory overhead on a single GPU. The authors also propose a runtime partial reconfiguration scheme to dynamically unload non-expert layers to process requests from different models. The authors run experiments on a single NVIDIA A100 GPU using Mixtral-8x7B models. The authors compare their approach with NVIDIA's multi-instance GPU (MIG), and show that the proposed method can achieve an 85% reduction in turnaround time. Claims And Evidence: The authors mainly compare the proposed approach against the NVIDIA's multi-instance GPU (MIG) and the single model approach (where one single Mixtral model is served with twice the number of requests on average). While I agree that from the experimental results, the proposed approach is better than the MIG, I am not sure what's the gain of the proposed approach over the single model in terms of the output quality (Table 1), latency (Table 3) and throughput (Figure 4). The difference does not look significant to me. The authors also claim the proposed approach "incurring only a negligible increase in time-to-first-token", which might be an understatement. Table 2 shows an increase in TTFT from 0.89s to 1.41s, which is a 58% increase. It would be helpful if the authors could clarify this. The experiments only take one set of settings (prompt tokens, arrival rates, GPU, etc.) and one base model (Mixtral-8x7B). It's not clear how the proposed approach would perform in more general settings, or if the TTFT gap would be larger in more demanding settings or with larger models.
Methods And Evaluation Criteria: The datasets and evaluation metrics (latency, throughput, quality) make sense and are relevant for the claims made in the paper. But my same concerns about the experimental settings above apply here as well: the authors only run experiments on one set of settings and one base model, which makes it hard to generalize the results. It would be helpful if the authors could include more experiments on different settings and models to support their claims. Theoretical Claims: N/A. This is an experimental paper and does not contain theoretical claims or proofs. Experimental Designs Or Analyses: Yes. See above. Supplementary Material: N/A. There is no supplementary material. Relation To Broader Scientific Literature: I am not an expert in this area and I cannot comment on the literature discussion. Essential References Not Discussed: I am not an expert in this area and I am not familiar with the literature. Other Strengths And Weaknesses: My biggest concern about the proposed approach is its generalizability: - I mentioned above about using only one setting and one base model. - Additionally, the use case of the proposed approach feels a bit limited. If my understanding about the similarity calculation is correct, the proposed approach can only be used when the MoE models have exactly the same network architecture. This is a strong limitation, as it means that the proposed approach cannot be used to serve different models with different architectures (unless the authors can provide a consistent way to define block and layer similarity between different architectures), which arguably would be way more useful in practice than serving two versions of the same model. Could the authors elaborate on this? - The experiments use the Base version and the Instructed version of the Mixtral-8x7B model, and naturally the similarity between the two models is high. If the two models are not similar enough, will the proposed approach still work?
Or the cost of swapping layers will be too high? Beyond that, I feel the writing of the paper in Section 3 could be improved as some definitions are not clear enough: - Line 162 left: "The on-device expert weights constitute the majority of the parameters loaded into GPU memory." It would help if the authors could include some numbers about the usage of the three categories. - Figure 2: I have difficulty understanding how L2 distances are calculated between the experts. On Line 134 right, the authors mention that the weights of an expert block are flattened into a vector, and the L2 distances are calculated between the vectors. However, the authors do not mention how the weights are flattened. Additionally what does layer mean in Figure 2? Are the authors running the L2 distance between the weights of the same layer in different experts? If so, it would be helpful to clarify this or even include a diagram to illustrate the flattening and calculation process. - Line 140 right: I can infer it but (iL, iE) is not defined. - Same place: How is this range 150 to 250 related to the range of 3.6 to 4.9 mentioned above? What do the authors mean by "same expert positions"? - Line 149 right: It would be helpful if the authors could clarify by giving a math definition for the higher number of models case. - Algorithm 1: what is $m$ on Line 183? Seems not defined. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your invaluable comments. The experiments and additional explanations provided here have been incorporated into the revised manuscript. Weaknesses: W1: I mentioned above about using only ... Response 1: *For a detailed response with numerical results, please refer to “Response 1” given to Reviewer 2 (xadW). The response given below is a summarized version: To demonstrate the applicability of our approach to other model families with a higher number of fine-tuned variants, we evaluated our approach against the Averaging baseline (Avg: Izmailov et al., 2018) using Google’s Switch Transformer version 8 MoE-based model (Hugging Face (HF) ID: google/switch-base-8). This model has a total of 96 experts, which are distributed equally across 12 layers (8 experts per layer). Moreover, to assess the scalability of our approach, we used the base model provided by Google along with three other community-provided fine-tuned variants. As shown, by increasing the number of merged models, the averaging approach suffers considerably from quality reduction (0.49 -> 0.42 -> 0.33 -> 0.25). However, our approach demonstrates greater resilience and can preserve the quality of generated output even for a higher number of variants (0.49 -> 0.49 -> 0.46 -> 0.46). W2: Additionally, the use case of the proposed ... Response 2: We agree with the reviewer. Our approach is specifically designed for MoE models with identical network architectures, which is a key assumption in our problem statement. Despite this limitation, the network architectures discussed in the paper are among the leading performers in language tasks, demonstrating significant capabilities when fine-tuned for various functionalities and datasets. That said, we believe the model-merging problem addressed in this paper remains relevant and important to the research community. W3: The experiments use the Base version and the Instructed version ...
Response 3: In the new experiments included in the revised manuscript, a new family of models with 4 different variants has been added for evaluation, and these models exhibit more differences. For more details, please refer to Response 1. The cost of swapping models is directly proportional to the number of non-expert parameters, which results in a higher volume of data transferred through the CPU-GPU link (PCIe). Therefore, since the PCIe bandwidth is fixed, a higher number of non-expert parameters equals higher swap time (cost). This is the overhead that is added to TTFT, which increases it by almost 58%. Although this increase may seem significant for TTFT, it allows us to decrease the total turnaround time by 85% and to preserve the quality of generated output. W4: Line 162 left: "The on-device expert weights ... Response 4: The Mixtral model has a total of 256 experts (32 layers and 8 experts per layer). Each expert has 176.16 million parameters. All non-expert parameters from 32 layers total 1,605 million parameters. The A100 GPU can fit only 217 experts out of 256. The memory footprint of each part of the model is summarized below: On-GPU non-expert params: 1605 million * 2B (3.5%) On-GPU expert params: 217 * 176.16 million * 2B (81.8%) Off-GPU expert params: 39 * 176.16 million * 2B (14.7%) W5: Figure 2: I have difficulty understanding how L2 distances ... Response 5: Assuming identical network structures, the experts from different variants have the same structure. By flattening the weights, we mean that all the weights for a given expert are stored in a 1D list while preserving the order. In a hypothetical case, each expert has parameters in the shape of 3*6 (a 2D tensor). This 2D tensor can be converted into a 1D tensor of size 18. Therefore, assuming we have two model variants, the expert-to-expert distance is the L2 distance between the two converted 1D tensors. Transformer-based LLMs typically have multiple layers. The Mixtral model has 32 layers.
Figure 2 shows the expert-to-expert distance from Mixtral-base to Mixtral-Instruct. Here, expert-to-expert refers to experts in the same positions. For example, it represents the distance from expert 3 of layer 21 in Mixtral-base to expert 3 of layer 21 in Mixtral-Instruct. W6: Line 140 right: I can infer it but (iL, iE) is ... Response 6: These are variables that show the expert position in the model structure. Will be clarified in the revised manuscript. iL: layer number iE: expert number W7: Same place: How is this range 150 to 250 ... Response 7: By "the same expert positions," we mean two different experts from different model variants with identical iL and iE. W8: Line 149 right: It would be helpful if the authors ... Response 8: A formal mathematical definition for a higher number of model variants is provided in the revised manuscript. W9: Algorithm 1: what is m on Line 183? Seems not defined. Response 9: This is a mistake from our end. The correct variable inside the bracket should be “idx” instead of “m”.
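The flattening and expert-to-expert distance computation described in Response 5 can be sketched as follows (a minimal illustration with toy tensor shapes; the function name and shapes are ours, not the paper's code):

```python
import numpy as np

def expert_l2_distance(expert_a, expert_b):
    """L2 distance between two experts with identical structure.

    Each expert is a list of weight tensors (one per linear layer).
    Tensors are flattened in a fixed order and concatenated into a
    single 1-D vector before taking the Euclidean distance.
    """
    vec_a = np.concatenate([w.ravel() for w in expert_a])
    vec_b = np.concatenate([w.ravel() for w in expert_b])
    return float(np.linalg.norm(vec_a - vec_b))

# Toy experts with two layers each and matching shapes (Mixtral experts
# have three linear layers; switch-base-8 experts have two).
rng = np.random.default_rng(0)
base = [rng.standard_normal((3, 6)), rng.standard_normal((6, 3))]
finetuned = [w + 0.01 * rng.standard_normal(w.shape) for w in base]

print(expert_l2_distance(base, finetuned))  # small distance for a light perturbation
```

Computing this distance for every pair of same-position experts $(i_L, i_E)$ across two variants yields the per-layer distance matrix visualized in Figure 2.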
Multivariate Conformal Selection
Accept (poster)
Summary: This paper introduces Multivariate Conformal Selection (mCS), a new approach for selecting candidates in settings with multivariate responses, such as drug discovery and large language model alignment. Unlike traditional Conformal Selection, which is limited to single-variable outputs, mCS extends the framework by leveraging multivariate nonconformity scores and ensuring False Discovery Rate (FDR) control. The authors propose two variants: mCS-dist, which uses distance-based scores, and mCS-learn, which optimizes selection criteria through differentiable learning. Claims And Evidence: Through both simulated and real-world experiments, the study demonstrates that mCS enhances selection accuracy while rigorously controlling FDR. These numerical results support the theoretical claim of the paper. I have some concerns on the proof of one of the main results of the paper (please refer to the section Theoretical Claims). Methods And Evaluation Criteria: Proposed methods and/or evaluation criteria make sense for the problem at hand. Theoretical Claims: I would be grateful if the authors could clarify the following aspects of the proof of Theorem 3.5: - On page 12, just before "The last equality is again by", $p^*_j \in \mathcal S^*_j$ should be $j \in \mathcal S^*_j$ (same later in the sentence and in the next equation). With these corrections, it seems to me that the equality before "By definition, {p_l }_{l \neq j} is invariant after..." is not true anymore. Indeed, the event "j \in S^*_{j\to0}" is equal to " 0 \leq q|S^*_{j\to0}/m" since the p-value corresponding to j in S^*_{j\to0} is 0. - In the proof of Theorem 3.5, could the authors comment on this sentence: "Above two cases happen with probability 1 since there are no ties almost surely." ? It was not clear to me why there are no ties almost surely since V.
It might be related to the fact that the proof is derived considering deterministic p-values (and thus the terms in Eq.(3) for the tie-breaking of the nonconformity scores are not there), but I was not able to understand it. Experimental Designs Or Analyses: The authors provide analysis of their method considering different dimensions of the output space and different properties for the target region, and discuss results on both simulated and real data. The empirical evaluation is solid. I would have been interested in a more concrete/detailed description of the biological meaning of the output space for the application on real data. Supplementary Material: I read the sections. In Section A.1, it would be good to provide a reference when it is stated: "By a standard result from conformal inference ..." Relation To Broader Scientific Literature: The paper extends the work from Jin & Candès "Selection by prediction with conformal p-values" to the multivariate response setting. Some key concepts originally introduced by Jin & Candès are adapted to the multivariate setting. Essential References Not Discussed: Relevant related works are cited as far as I am aware. There is another line of work proposing conformal methods to control the FDR on the set of true edges in a graph when considering a link prediction problem (such as "Conformal link prediction for false discovery rate control", Ariane Marandon). These works are not cited by the authors but - while being connected - they address a different task than the one considered in the paper and thus are not essential. Other Strengths And Weaknesses: - Other Comments Or Suggestions: Here are some typos: - "but replaces the the denominator" (sec. 2) - In algo 1, at line 1, r_j should be r_{n+j}. - on the left side of page 5, F (V (x, y), U ) ~ U(0,1) and not (-1,1) Questions For Authors: I would be grateful if the authors could clarify my question on the proof of Theorem 3.5.
Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > On page 12, just before "The last equality is again by", $p\_{j}^\* \in \mathcal S\_{j}^\*$ should be replaced with $j \in \mathcal S\_{j}^*$... since the p-value corresponding to $j$ in $S^{\*}\_{j\to0}$ is 0. **A1**: Thank you very much for pointing out the typo and the error in our proof. We have corrected the typo accordingly. Let us clarify the corrected reasoning clearly. In the original manuscript, we aimed to show: $$ \text{FDR} \leq \sum\_{j=1}^m \mathbb{E}\bigg[ \frac{ \boldsymbol{1} \\{p\_j^{\*} \leq q|\mathcal{S}\_{j \rightarrow 0}^{\*}| / m \\}}{\max(1, |\mathcal{S}\_{j \rightarrow 0}^{\*}|)} \bigg] $$ which is correct, yet we used a wrong path to prove this inequality. We claimed: $$ \text{FDR} \leq \sum\_{j=1}^{m} \sum\_{k=1}^{m} \frac{1}{k} \mathbb{E}\big[ \boldsymbol{1}\\{|\mathcal{S}\_{j \rightarrow 0}^{\*}| = k\\} \boldsymbol{1}\\{j \in \mathcal{S}\_{j \to 0}^{\*} \\} \big] = \sum\_{j=1}^m \mathbb{E}\bigg[ \frac{ \boldsymbol{1} \\{p\_j^{\*} \leq q|\mathcal{S}\_{j \rightarrow 0}^{\*}| / m \\}}{\max(1, |\mathcal{S}\_{j \rightarrow 0}^{\*}|)} \bigg]. $$ which is incorrect, since as pointed out, $\mathcal{S}\_{j \rightarrow 0}^{\*} := \mathcal{S}(p\_{1}^{(j)}, \dots, p\_{j-1}^{(j)}, 0, p\_{j+1}^{(j)}, \dots, p\_m^{(j)})$'s $j$-th argument is 0, not $p\_{j}^{\*}$. However, consider the following chain of inequalities: $$ \text{FDR} \leq \sum\_{j=1}^{m} \sum\_{k=1}^{m} \frac{1}{k} \mathbb{E}\big[ \boldsymbol{1}\\{|\mathcal{S}\_{j \rightarrow 0}^{\*}| = k\\} \boldsymbol{1}\\{j \in \mathcal{S}\_{j}^{\*} \\} \big] = \sum\_{j=1}^m \mathbb{E}\bigg[ \frac{ \boldsymbol{1} \\{p\_j^{\*} \leq q|\mathcal{S}\_{j}^{\*}| / m \\} }{\max(1, |\mathcal{S}\_{j \rightarrow 0}^{\*}|)} \bigg] \leq \sum\_{j=1}^m \mathbb{E}\bigg[ \frac{ \boldsymbol{1} \\{p\_j^{\*} \leq q|\mathcal{S}\_{j \rightarrow 0}^{\*}| / m \\}}{\max(1, |\mathcal{S}\_{j \rightarrow 0}^{\*}|)} \bigg]. 
$$ This corrected argument allows us to properly continue the proof as originally intended. We have updated the manuscript to clearly reflect this correction. > In the proof of Theorem 3.5, could the authors comment on this sentence... but I was not able to understand it. **A2**: Thank you for pointing this out. In fact, the assumption that $V\_1, \dots, V\_n, \widehat{V}\_{n+1}, \dots, \widehat{V}\_{n+m}$ have no ties is unnecessary. We can simply handle the case where $\widehat{V}\_{n+l} = \widehat{V}\_{n+j}$ by combining it directly with the scenario originally labeled (i), where $\widehat{V}\_{n+l} > \widehat{V}\_{n+j}$. After this adjustment, the proof proceeds without difficulty. We have clarified this point and updated the manuscript accordingly. > In Section A.1, it would be good to provide a reference when it is stated: "By a standard result from conformal inference..." **A3**: We added a reference (Vovk et al., 2005) to corroborate the claim. > Other Comments Or Suggestions **A4**: We have corrected all the typos and addressed all the minor suggestions. We highly appreciate your careful review, which has helped us improve the clarity and quality of our manuscript.
Summary: This paper addresses the important problem of multivariate conformal selection. While multivariate conformal prediction is relatively well studied, this appears to be the first work on multivariate selection tasks. ## update after rebuttal: I did not change my score as the recommendation is already to accept. Claims And Evidence: The claims are supported by proofs and simulation studies. Methods And Evaluation Criteria: As this is the first paper on multivariate conformal selection, there do not seem to be any benchmark data sets available. It would be good to see an evaluation on classification / binary outcomes as well. Theoretical Claims: The proofs look plausible. Experimental Designs Or Analyses: The experimental setup is well explained in the supplementary material. Supplementary Material: I looked over it, but mainly for information. Relation To Broader Scientific Literature: The following paper seems also related: Klein, Michal, et al. "Multivariate Conformal Prediction using Optimal Transport." arXiv preprint arXiv:2502.03609 (2025). Essential References Not Discussed: None came to mind. Other Strengths And Weaknesses: Figure 1 is too small to read properly. Perhaps move Task 2 to the supplementary? Other Comments Or Suggestions: See above Questions For Authors: Could you expand more about tie breaking? What if one used a worst-case rule for ties instead of breaking them at random, so that one has a deterministic test procedure? Footnote 2: The choice of $r_{n+j}$ needs to be exchangeable in which sense? Regarding its coordinates, or ensuring exchangeability with the calibration data? How would one define regional monotonicity for a classification problem? In Theorem 4.1 why do you need to assume i.i.d.; would exchangeability suffice? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > It would be good to see an evaluation on classification outcomes as well. **A1**: In our paper, we chose to focus primarily on regression tasks because they represent a more challenging setting for conformal selection. In fact, the selection problem for classification (univariate or multivariate) can be directly reduced to the univariate conformal selection framework introduced by Jin and Candes (2023). This reduction explains why we did not include separate evaluations specifically for classification scenarios, as such evaluations would essentially revisit methods already established in the literature. To clarify this point, let us briefly demonstrate the reduction explicitly: * For the univariate classification setting, suppose the response space is composed of classes $\mathcal{Y} = \cup_{k=1}^K C_k$ with target region $R = \cup_{k=1}^{s} C_k$ (with $s < K$). Then, by defining a binary response: $\tilde{y}\_{i} = \boldsymbol{1}\\{y\_{i} \in R \\}$, the original selection problem directly translates into a univariate conformal selection problem, where we select samples with $\tilde{y}\_{i} = 1$. * For multivariate classification case, e.g. suppose responses $\boldsymbol{y}\_{i}$ are drawn from a joint class space $\mathcal{Y} = (\mathcal{Y}^{(1)},\mathcal{Y}^{(2)}) = \cup_{k,\ell} (C^{(1)}\_k, C^{(2)}\_\ell)$, and the target region $R = \cup_{(k, \ell) \in \mathcal{I}}(C^{(1)}\_k, C^{(2)}\_\ell)$. 
Again, we define a binary indicator $\tilde{y}\_{i} = \boldsymbol{1}\\{\boldsymbol{y}\_{i} \in R \\},$ converting the original multivariate selection problem into a standard univariate selection task: $$H_{0j}: \tilde{y}\_{n+j} < 0.5 \text{\quad versus \quad } H\_{1j}: \tilde{y}\_{n+j} \geq 0.5.$$ Since $P(\tilde{y}\_{i} = 1)=P(\boldsymbol{y}\_{i} \in R)$, there is a direct correspondence between the multivariate and univariate nonconformity scores: $ V(\boldsymbol{x},\boldsymbol{y}) = M\cdot\boldsymbol{1}\\{\boldsymbol{y}\in R\\} - \widehat{P}(\boldsymbol{y}\in R|\boldsymbol{x}) $ and $ V(\boldsymbol{x},\tilde{y}) = M\cdot\boldsymbol{1}\\{\tilde{y}\geq 0.5\\} - \tilde{\mu}(\boldsymbol{x}) $ where $\tilde{\mu}(\boldsymbol{x})\equiv \widehat{P}(\tilde{y}=1|\boldsymbol{x})$. Moreover, regional monotonicity is simply the usual monotone condition of univariate conformal selection: $V(\boldsymbol{x},\tilde{y}=0)\leq V(\boldsymbol{x},\tilde{y}=1).$ Thus, every classification-based selection task can be naturally and effectively solved using existing univariate conformal selection methods. We will clarify this point explicitly in our final manuscript. > The following paper seems also related "Klein et al. (2025)". **A2**: We have now cited this paper. > Figure 1 is too small to read properly. **A3**: We have now increased the font size and improved the clarity of Figure 1. > Could you expand more about tie breaking? ... **A4**: In our work, we follow the standard practice in conformal methods and break ties randomly. If, instead, one used a deterministic worst-case tie-breaking rule, always ranking the test score behind tied calibration scores, the conformal p-value becomes: $ p_j^{dtm} = \frac{1}{n+1}(1+\sum_{i=1}^n \boldsymbol{1}\\{V_i \leq \widehat{V}_{n+j}\\}). $ This rule ensures that $p_j^{dtm} \geq p_j$ for every test sample, making the test deterministic and conservative (though its p-values are no longer uniformly distributed).
Consequently, applying the BH procedure to these deterministic p-values yields a fully reproducible selection rule, but with a slight reduction in statistical power compared to random tie-breaking. We will clarify this trade-off explicitly in the final manuscript. > Footnote 2: The choice of $r_{n+j}$ needs to be exchangeable in which sense? .... **A5**: In the original manuscript, footnote 2 mistakenly suggested that the choice of $r_{n+j}$ required some form of exchangeability. After careful reconsideration, we see clearly now that no such condition is needed. The point $r_{n+j}$ can be chosen arbitrarily from the region $R$. From the viewpoint of our proof, the critical step is the regional monotonicity condition. This condition alone ensures the relationship $p_{j}^\* \leq p_j$ under the null hypothesis. The oracle p-values $p_{j}^{*}$ are uniform precisely because of the exchangeability of the calibration data. The choice of $r_{n+j}$, therefore, does not influence the oracle p-value and imposes no extra exchangeability constraints. We have revised the manuscript accordingly to clearly state this simpler and accurate assumption. > How would one define regional monotonicity for a classification problem? **A6**: Please see Q1. > In Theorem 4.1 why do you need to assume i.i.d.?... **A7**: The proof of Theorem 4.1 indeed relies crucially on the strong law of large numbers. This result requires independence (or at least certain mixing conditions). Exchangeability alone is not sufficient here. Please see more detail in Appendix B. --- Rebuttal Comment 1.1: Comment: Thank you for the explanation. For the strong law of large numbers under exchangeability this paper seems to do the trick: Etemadi, N., and M. Kaminski. "Strong law of large numbers for 2-exchangeable random variables." Statistics & probability letters 28.3 (1996): 245-250. --- Reply to Comment 1.1.1: Comment: Thank you very much for your insightful comments and for providing this useful reference. 
In the final manuscript, we will update the relevant discussion in the Appendix to reflect the strong law of large numbers under the 2-exchangeability condition, as suggested.
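The tie-breaking trade-off discussed in A4 above can be sketched numerically. The following is an illustrative NumPy implementation contrasting the deterministic worst-case p-value with the usual randomized one (variable names are ours, not the authors'):

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score, u=None):
    """Conformal p-value of one test score against n calibration scores.
    u=None: deterministic worst-case tie rule from A4,
            p = (1 + #{V_i <= V_hat}) / (n + 1).
    u in [0, 1): randomized tie-breaking,
            p = (#{V_i < V_hat} + u * (1 + #{V_i = V_hat})) / (n + 1)."""
    cal_scores = np.asarray(cal_scores)
    n = len(cal_scores)
    if u is None:
        return (1 + np.sum(cal_scores <= test_score)) / (n + 1)
    lt = np.sum(cal_scores < test_score)
    ties = np.sum(cal_scores == test_score)
    return (lt + u * (1 + ties)) / (n + 1)

# The deterministic p-value always dominates the randomized one,
# which is what makes the deterministic test conservative.
cal = [0.1, 0.3, 0.3, 0.7]
p_det = conformal_pvalue(cal, 0.3)         # (1 + 3) / 5 = 0.8
p_rnd = conformal_pvalue(cal, 0.3, u=0.4)  # (1 + 0.4 * 3) / 5 = 0.44
```

Since $u < 1$ implies $u(1 + \text{ties}) < 1 + \text{ties}$, the domination $p_j^{dtm} \geq p_j$ holds pointwise, matching the rebuttal's claim of reproducibility at a slight cost in power.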
Summary: This paper proposes multivariate conformal selection (mCS), which extends conformal selection to multi-response settings by introducing the concept of regional monotonicity, generalizing univariate monotonicity, and defining multivariate non-conformity scores. mCS guarantees finite-sample false discovery rate (FDR) control. Two types of non-conformity scores are proposed. The first, mCS-dist, is based on predefined distances. The second, mCS-learn, involves a term that is learned via gradient-based optimization where the sorting has been replaced with soft (differentiable) sorting. Experiments on simulated and real-world datasets show that mCS enhances selection power while maintaining FDR control. ## update after rebuttal The new experiments regarding hyperparameters, numerical stability, and calib/test data sizes are convincing. I recommend the paper's acceptance. Claims And Evidence: - The extension of conformal selection to the multi-response setting is a novel and significant contribution. - The framework guarantees finite-sample FDR control. The theoretical arguments seem correct. - The proposed non-conformity scores are well-motivated. - There were some ambiguities in the definition and implementation of the `mCS-learn` algorithm that prevented me from assessing it fully (please see comments in the "Methods And Evaluation Criteria" section). - Overall, the clarity of the presentation could be significantly improved. I suggest moving up some of the experimental details (such as the definition of settings, tasks) to the main text so that the figures and tables can be interpreted from information given in the main text. - Both `mCS-dist` and `mCS-learn` perform well against baselines in the simulated setting. All of the figures and tables in the main text are on the simulated setting, however, and not the real data application.
- If the authors are planning to make the aggregated and imputed ADMET dataset public, that would be a significant contribution worth mentioning up front. Methods And Evaluation Criteria: I had multiple questions and concerns about the form of the `mCS-learn` method as well as its presentation: - The method requires an extra split of the calibration set into training, validation, and proper calibration sets. I would like to see more discussion on the signal lost from a reduced calibration set size, which I'd presume would be more significant in higher dimensions (many targets). - Why is a further split of the training set necessary in line 3 of Algorithm 2? - What was the exact algorithm used for soft ranking? The paper cites both Blondel et al. 2020 and Cuturi et al. 2019, but does not indicate the exact algorithm nor the hyperparameters it would introduce. - Related to the above, soft sorting algorithms are typically very sensitive to the regularization parameter and small numerical errors may impede training. From Algorithm 2, it seems like only $\theta$ was chosen via a held-out validation split and the sorting regularization wasn't. If it was predetermined, please explain the procedure. - The second loss function (Eq 16) introduces another hyperparameter $\gamma$. How was this determined? - In Algorithm 2, line 8, what is $k$? Why does repeated application of mCS on $\mathcal{D}_{f\textrm{-val}}$ lead to different power values? - Section 5.2: what was $p$? - It is very difficult to interpret the tables of results. The "settings" are only defined in the Appendix -- it would also help to refer to them by their respective characteristics in the main text (for example, setting 1 = "linear, Gaussian noise"). Please also indicate the nominal level in the table caption. Theoretical Claims: The proof of Theorem 3.5 seems correct. 
Experimental Designs Or Analyses: - For the simulated study, as there is full control over the data generation, the authors can use a bigger test set than 100 for cleaner evaluation - Please also report the standard errors of the metrics across runs in the tables. - Please share the anonymized code so the details of the implementation can be checked Supplementary Material: Appendix A.2 (proof of Theorem 3.5), C, D Relation To Broader Scientific Literature: - Extending CS to multi-response settings is a significant contribution. The paper introduces the concept of regional monotonicity, which generalizes univariate monotonicity, required to guarantee FDR control. - Even in the univariate setting, regional monotonicity generalizes the threshold-based framework of Jin and Candes 2023 to arbitrary sets of intervals. - The authors provide finite‐sample theoretical guarantees (Theorem 3.5) using similar arguments as Jin and Candes 2023. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - Algorithm 1, line 1: $r_j$ is confusing as it reuses $j$ index --> $r_{n+j} \in R$ - Please explain the role of $M$ up front when it's introduced, right after Eq 8 and also Eq 11 - In Algorithm 2, please include a line that assigns the value to $\theta_t$ (currently introduced without definition) - In Algorithm 2, $t^*$ is overloaded with the previous discussion of Theorem 4.1 - Fig 1: Please make the figure axis labels and legend font larger - Fig 1: Next to task number, please include a few-word description of what the task is designed for Questions For Authors: - The choice of $r_{n+j}$ (choosing it on the boundary of $R$ that would be most informative for the given $x$) also seem to offer room to improve selection power, as long as the choice doesn't violate exchangeability. Is this correct? If so, did the authors consider simply optimizing for this within the `mCS-dist` variant? 
- Why is it necessary to further split $\mathcal{D}_{f\textrm{-train}}$ into two? - How does the method have to adapt when the underlying predictor outputs a conditional distribution $P(Y|X)$ rather than a point estimate? Code Of Conduct: Affirmed. Overall Recommendation: 3
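On the soft-sorting sensitivity raised in this review: the smoothing idea can be illustrated with a much simpler differentiable surrogate than the isotonic-projection method of Blondel et al. (2020) that the paper uses. The pairwise-sigmoid ranks below are purely illustrative (not the paper's algorithm) and show how a temperature parameter trades smoothness against fidelity to hard ranks:

```python
import numpy as np

def soft_rank(x, tau=0.1):
    """Pairwise-sigmoid soft ranks: differentiable in x, converging to
    the hard ranks 1..n as tau -> 0. This is NOT the algorithm of
    Blondel et al. (2020); just a toy surrogate illustrating why the
    regularization/temperature parameter matters."""
    diff = (x[:, None] - x[None, :]) / tau
    # entry i counts (softly) how many x_j lie at or below x_i
    return 0.5 + np.sum(1.0 / (1.0 + np.exp(-diff)), axis=1)
```

For well-separated inputs and small `tau`, `soft_rank` recovers the hard ranks almost exactly; as `tau` grows, the ranks are pulled toward the average rank, which is the smoothness/accuracy trade-off behind the sensitivity question above.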
Rebuttal 1: Rebuttal: > The method requires an extra split... in higher dim. **A1**: From our experience, the (multivariate) conformal selection procedure is insensitive to the size of calibration data - the calibration scores only affect the resolution of the p-values, which is typically sufficient when $|D\_{cal}| \geq 100$. To support this point, we ran an additional experiment using $\texttt{mCS-learn}$ with real-data Task 2 at $q=0.5$. We set $|D\_{f-train}|=6400, |D\_{f-val}|=800$ and $|D\_{test}|=200$, respectively, and we consider different $|D'\_{cal}|$ shown below: |$n\_{cal}$|FDR|power| |-:|-:|-:| |100|0.486|0.594| |500|0.501|0.603| |3000|0.499|0.598| As shown, even with a small calibration set, the method maintains FDR control and strong power. > Why is a further split of the training set necessary... **A2**: In short, this is because the computation of smoothed p-values requires both calibration and test data; therefore, to train $f_\theta$ based on the performance of a certain score, we would need to split the training set into two, which serve as calibration and test data, respectively. > What was the exact Algo used for soft ranking...typically very sensitive...explain **A3**: We adopted the implementation in Blondel et al. (2020), with $\ell_2$ regularization and strength set to 0.1. Preliminary tests showed that within a reasonable range, different regularization strengths produce similar overall behavior in mCS. While cross-validation could optimize this parameter further, its exact value is not central to our primary contributions. To maintain clarity and simplicity, we chose not to include additional tuning steps for this hyperparameter. > The 2nd loss introduces another hyperparameter $\gamma$... **A4**: This value was chosen based on a series of preliminary experiments. Similar to the previous question, while we could cross-validate on $\gamma$, for the simplicity of our presentation, we fix $\gamma$ to 0.5 in our main procedure.
> In Algo 2, line 8, what is $k$... **A5**: To accurately estimate the power of the selection rule given by $V^{\theta}$ (defined as an expectation in Eq. 2), we average empirical selection power over multiple random partitions of $D\_{f-val}$. Each random partition produces different power values, even with the same $V^{\theta}(x,y)$. We use $k=100$ partitions, which provides a stable and accurate approximation. We clarified this point in the updated version of our manuscript. > For the simulated study... bigger test set than 100. **A6**: We appreciate the suggestion. While a test size of 100 may seem small for general machine learning tasks' evaluation, conformal selection performance is generally insensitive to test set size (as noted in Sec 3.1 of Jin and Candes, 2023). Additionally, we repeated our experiments over 100 independently generated datasets, ensuring the results are stable. This stability is also supported by our simulation results provided in response to subsequent comments. > Please report the std. errors... **A7**: For both simulation and real-data experiments, each iteration involves sampling a new dataset, introducing variation in data and model training. Reporting std errors under these conditions would reflect primarily data variability rather than the inherent stability of the selection methods. However, we agree numerical stability is important. Thus, in Appendix D, we provide an additional real-data experiment with a fixed dataset across iterations. Below are the observed std dev. of FDR and power (over 100 iterations) from this experiment: FDR/Power: |Task|$q$|CS\\_int|CS\\_ib|CS\\_is|bi|mCS-d|mCS-l| |-:|-:|-:|-:|-:|-:|-:|-:| |1|0.3|0.000/0.000|0.000/0.000|0.248/0.022|0.000/0.000|0.000/0.000|0.000/0.000| |2|0.3|-|-|-|0.000/0.000|0.000/0.000|0.002/0.001| |3|0.3|-|-|-|0.000/0.000|0.021/0.005|0.004/0.002| These results confirm strong numerical stability for our proposed methods. 
Note that for $\texttt{mCS-learn}$, the trained model $f_\theta$ was fixed, so variability from model training is not considered here. > The choice of $r_{n+j}$...? **A8**: We have added a detailed discussion about the choice of $r\_{n+j}$, please refer to our response under A5 for Reviewer 2 (PjuQ). For both the generalized signed score and the clipped score, choosing $r_{n+j}$ on the boundary is already optimal. In this case, the first term $D_1$ is minimized (equal to 0), so no further optimization is needed to enhance selection power. > How ... to adapt when ... outputs a conditional distribution $\hat{P}(Y|X)$? **A9**: When the model $\hat\mu$ outputs an estimated conditional distribution $\widehat{P}(y|x)$, the second term $D_2$ can be replaced by the predicted probability of being in the target region: $y \in R$, i.e. $\int \boldsymbol{1}\\{ y\in R\\} d\widehat{P}(y|x)$. This serves the same purpose: points with high predicted probability of satisfying the selection criterion will receive lower scores and are more likely to be selected. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions regarding the hyperparameters, numerical stability, and calib/test data sizes. The new demonstrations are convincing. l'll raise my score to an accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful review and constructive feedback. We're pleased that our additional demonstrations addressed your questions. Your insights have greatly helped us clarify and improve our work.
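The power-estimation step described in A5 (averaging empirical selection power over $k$ random partitions of $D_{f\text{-val}}$) can be sketched generically. In this sketch, `select_fn` and the variable names are our own placeholders, not the authors' API:

```python
import numpy as np

def estimate_power(scores, is_qualified, select_fn, k=100, cal_frac=0.5, seed=0):
    """Monte Carlo power estimate: repeatedly split the held-out data into
    calibration/test roles, run the selection rule, and average the fraction
    of truly qualified test points that get selected (A5 uses k = 100)."""
    rng = np.random.default_rng(seed)
    n = len(scores)
    n_cal = int(cal_frac * n)
    powers = []
    for _ in range(k):
        perm = rng.permutation(n)
        cal, test = perm[:n_cal], perm[n_cal:]
        sel = select_fn(scores[cal], scores[test])
        n_true = is_qualified[test].sum()
        hit = is_qualified[test][sel].sum() if len(sel) else 0
        powers.append(hit / max(1, n_true))
    return float(np.mean(powers))

# Toy check: a rule selecting test points below the calibration mean
# should recover all points whose true score is negative.
scores = np.concatenate([np.full(50, -1.0), np.full(50, 1.0)])
labels = scores < 0
rule = lambda cal, test: np.flatnonzero(test < cal.mean())
power = estimate_power(scores, labels, rule, k=20)
```

Each partition yields a different empirical power even for a fixed score function, which is why A5 averages over $k = 100$ partitions to stabilize the estimate.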
Enhancing Graph Contrastive Learning for Protein Graphs from Perspective of Invariance
Accept (poster)
Summary: This paper proposes a novel framework for protein representation learning by introducing two biologically informed graph-augmentation strategies for contrastive learning. Specifically, it combines: 1. Functional Community Invariance (FCI), which preserves crucial residue clusters (communities) involved in protein functionality when augmenting the 2D graph connectivity. 2. 3D Protein Structure Invariance (3-PSI), which perturbs three-dimensional protein conformations in a biologically plausible manner by manipulating backbone dihedral angles or rotating secondary structures (α-helices and β-sheets) without destroying essential structural motifs. Claims And Evidence: 1. Claim: Biology-Aware Augmentations Improve Representation Quality • The authors propose Functional Community Invariance (FCI) to preserve biologically relevant residue clusters. They measure how often pockets (functional communities) remain intact after augmentation and compare results with standard or random graph augmentations. This directly supports the claim that their method better retains key functional components. • They introduce 3D Protein Structure Invariance (3-PSI) to avoid unrealistic disruptions to three-dimensional protein structures (e.g., by preserving secondary structure integrity or peptide planes). 2. Claim: The Proposed Method Achieves Superior Performance Across Multiple Tasks • The authors conduct evaluations on four tasks: Protein Fold Classification, Enzyme Reaction Classification, Gene Ontology (GO) Prediction, and Enzyme Commission (EC) Number Prediction. Within fold classification and GO, they further break down sub-tasks (e.g., family, superfamily, etc.), generating a broad evidence base. • Results Tables show consistent gains over both: • 2D-only GCL baselines (e.g., GraphCL, CI-GCL), and • 3D-augmentation baselines (e.g., random coordinate perturbations, homology modeling). 
• Ablation studies (e.g., using FCI alone, 3-PSI alone, or their combination) confirm that combining the two invariance strategies yields the best improvements. 3. Claim: The Proposed Approach is Robust to Noisy Structural Perturbations • In the robustness experiment, the authors randomly rotate backbone segments in test proteins, simulating environmental or measurement-induced structural variations. They then plot model accuracy/Fmax at proportions from 10% to 50% of residues disturbed. • Results show that their method’s performance degrades more slowly than traditional 3D augmentations and certain baselines, indicating robustness to structural noise. 4. Claim: FCI and 3-PSI are Synergistic • Ablation: FCI or 3-PSI alone improve performance, but together they outperform either approach individually. The results tables consistently show the “FCI + 3-PSI” variant is highest-performing. The paper’s primary claims are largely well-supported by clear empirical results, multiple tasks, ablation studies, and robust analyses. Minor open questions around comparisons to cutting-edge pretrained models or large-scale data do not undermine the validity of the demonstrated improvements within the scope of graph-based protein learning. Methods And Evaluation Criteria: Method: The paper’s primary contributions—Functional Community Invariance (FCI) and 3D Protein Structure Invariance (3-PSI)—directly address known gaps in protein contrastive learning. Most existing approaches use 2D topology manipulations that may disrupt functionally critical residues or rely on overly simplistic 3D perturbations that distort key structural motifs. By incorporating community preservation (via spectral constraints and side-chain similarity) and 3D backbone constraints (via dihedral angles or secondary structure rotations), the paper’s techniques remain biologically plausible. This stronger biological grounding is exactly what protein graph methods need to capture higher fidelity embeddings. 
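Since the Method paragraph leans on dihedral-angle and secondary-structure rotations, it is worth noting the single geometric primitive underneath both: rotating a subset of atom coordinates about a bond axis. A minimal NumPy sketch using Rodrigues' rotation formula (our illustration of the primitive, not the authors' implementation):

```python
import numpy as np

def rotate_about_axis(coords, p0, p1, angle):
    """Rotate points about the axis through p0 -> p1 by `angle` radians
    (Rodrigues' formula). A dihedral perturbation applies this to all atoms
    downstream of a backbone bond, so bond lengths and angles are preserved."""
    axis = (p1 - p0) / np.linalg.norm(p1 - p0)
    v = coords - p0
    c, s = np.cos(angle), np.sin(angle)
    rotated = (v * c
               + np.cross(axis, v) * s
               + axis * (v @ axis)[:, None] * (1 - c))
    return rotated + p0

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives (0, 1, 0);
# pairwise distances are unchanged, i.e., the perturbation is rigid.
pt = rotate_about_axis(np.array([[1.0, 0.0, 0.0]]),
                       np.zeros(3), np.array([0.0, 0.0, 1.0]),
                       np.pi / 2)
```

Because the operation is rigid on the rotated fragment, it perturbs the conformation without distorting local geometry, which is the biological-plausibility property the 3-PSI augmentations rely on.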
Benchmark Datasets: The authors run experiments on four major tasks: fold classification, enzyme reaction classification, gene ontology (GO) prediction, and enzyme commission (EC) number prediction. These are widely used benchmarks in structure-based protein modeling research. For fold classification, they even subdivide it (family, superfamily, fold), which is aligned with SCOP classification and demonstrates how well the method distinguishes proteins at varying levels of structural/sequence similarity. GO and EC tasks capture functional annotation challenges, which are biologically relevant real-world scenarios. Potential Improvements: One area that might strengthen the paper is a scalability discussion—e.g., how the computational overhead of generating more sophisticated 3D augmentations compares with simpler random approaches when moving to large datasets. Another is comparing or combining these graph-based methods with large-scale protein language models. However, the existing baselines and metrics remain representative of graph-based contrastive learning and typical protein structure tasks. Theoretical Claims: The paper’s main theoretical arguments center on characterizing how individual edge perturbations affect the graph spectrum. Specifically: Theorem 4.1 (Bounds of Spectral Changes) Claim: When a single edge is flipped (added or removed), the change in the eigenvalues of the normalized Laplacian can be upper-bounded and lower-bounded by expressions involving the spectral embeddings (the eigenvectors) of the unperturbed graph. Assessment: This result aligns with known techniques in spectral graph theory, where the Davis–Kahan or Weyl inequalities often provide bounds on eigenvalue/eigenvector perturbations. 
The text’s statement that spectral changes are related to the $\ell_2$ distance between the two nodes’ eigenvector embeddings also matches typical graph-spectral intuitions: if two nodes lie “far apart” in the spectral embedding, flipping edges between them tends to yield a larger spectral impact. No immediate inconsistencies stand out; the proof outline (in references to standard matrix perturbation results) appears coherent and consistent with existing spectral bounding approaches. Lemma 4.2 (Weighted Graphs) Claim: The absolute spectral change when dropping an edge of weight $w_{ij}$ can be upper-bounded by the product of $|w_{ij}|$ and terms involving eigenvectors/eigenvalues. Assessment: This follows a similar logic to Theorem 4.1 but accounts for weighted adjacency. The bound’s proportionality to $|w_{ij}|$ is intuitively correct—heavier edges should create more significant perturbations when flipped. This is reminiscent of standard matrix perturbation arguments. The statement seems plausible, and no obvious red flags arise in the bounding steps as described. The full formal proofs are outlined briefly in the text (and presumably with additional detail in an appendix). From the information provided and familiarity with spectral graph theory, these claims appear mathematically consistent and do not contradict well-known eigenvalue/eigenvector perturbation theories. The derivations rely on expansions of the difference between Laplacian eigenvalues and standard bounding approaches (e.g., norm-based inequalities). While the paper does not reproduce the entire step-by-step derivation in the main body (likely for brevity), the reasoning is logically sound on inspection. Experimental Designs Or Analyses: 1. Benchmark Tasks and Datasets: The authors evaluate on four widely used tasks in protein representation learning (fold classification, enzyme reaction classification, GO term prediction, and EC number prediction), each with recognized standard datasets.
2. Comparisons with Baselines: Baseline Variety: The authors compare with multiple 2D topology-based GCL approaches (GraphCL, GCS, etc.) and 3D structure-based augmentations (random coordinate perturbation, homology modeling tools). Hyperparameters: The paper mentions the range of augmentation strengths (e.g., the proportion of edges changed in 2D, or number of dihedral/secondary structure rotations in 3D). They present ablation curves that show how these hyperparameters affect performance. This is helpful to confirm that gains are robust to different augmentation intensities. 3. Ablation Studies: FCI vs. 3-PSI vs. Combined: The experiments systematically compare using Functional Community Invariance (FCI) alone, 3D Protein Structure Invariance (3-PSI) alone, or combining them. Observing that the combined approach generally yields the best performance supports the synergy claim. Sensitivity to Augmentation Strength: Varying the fraction of edges dropped/added (2D) or the number of secondary-structure rotations/dihedral perturbations (3D) is a direct test of how robust the method is to over- or under-augmentation. This analysis appears well-motivated and thorough. 4. Robustness Checks: Structural Perturbation at Test Time: The authors apply random rotations to subsets of residues at test time, mimicking protein conformation changes or partial misalignments. They track performance as the proportion of rotated residues increases. This design is a realistic measure of robustness, given that experimental structural data can be noisy. 5. Qualitative Analyses: Functional Community Preservation: They measure the fraction of intact protein pockets under each augmentation method. This is a direct reflection of how well the augmentation strategy avoids disrupting crucial functional sites. 6. Potential Limitations or Omissions 1. 
1. Comparison to Protein Language Models: The authors do not evaluate the proposed method in conjunction with or against large-scale pretrained language models (like ESM or ProtT5). While not necessarily invalidating the experiment design, it's a potential extension.
2. Scalability: They do not provide time or memory complexity analyses for the 3D augmentations, though these would be helpful for large-scale applications.
Supplementary Material: The supplementary material contains the code to reproduce the results.
Relation To Broader Scientific Literature:
1. Protein Representation Learning
• Over the past few years, protein representation learning has emerged as a critical area at the intersection of machine learning, structural bioinformatics, and computational biology. Classic approaches include:
• Sequence-based methods (e.g., language-model-type architectures such as ESM, ProtT5, and ProtBERT), focusing on large-scale protein sequence corpora. These methods often capture powerful evolutionary context but may overlook critical 3D structural cues.
• Structure-based methods (e.g., GNNs like GVP, GearNet, and IEConv), which treat proteins as graphs of residues to incorporate geometric constraints. These approaches have proven effective in tasks such as fold classification, function prediction, and protein–protein interaction studies.
2. Graph Contrastive Learning (GCL)
• In the broader machine learning literature, contrastive learning has become a standard technique for self-supervised representation learning on images, text, and graphs. Recent advances on graphs (e.g., GraphCL, GCS, Auto-GCL, and CI-GCL) demonstrate how data augmentations can push GNNs toward more generalizable embeddings. However, these methods often rely on simplistic graph transformations (like random edge dropping or adding) and rarely leverage specific biological or domain constraints.
3. Biologically Grounded Augmentations
• In protein research, random manipulations of graph structures can cause the loss of essential structural or functional information (e.g., removing edges around catalytic sites). The idea of building biology-aware augmentations partly echoes earlier efforts in structure-based drug design and protein–ligand modeling, where physically plausible perturbations are used to sample conformational states.
• This paper's Functional Community Invariance (FCI) connects with the notion of residue interaction networks and community detection (common in network analyses of protein structures), ensuring that biologically relevant clusters (pockets, communities, etc.) remain intact. Prior literature on "residue co-evolution" and "community detection in protein structures" has highlighted the importance of pockets and functional domains for accurate protein functional interpretation.
Essential References Not Discussed:
1. Rives A, Meier J, Sercu T, Goyal S, Lin Z, Liu J, Guo D, Ott M, Zitnick CL, Ma J, Fergus R. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proceedings of the National Academy of Sciences. 2021 Apr 13;118(15):e2016239118.
2. Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, Tunyasuvunakool K, Bates R, Žídek A, Potapenko A, Bridgland A. Highly accurate protein structure prediction with AlphaFold. Nature. 2021 Aug;596(7873):583-9.
3. Batzner S, Musaelian A, Sun L, Geiger M, Mailoa JP, Kornbluth M, Molinari N, Smidt TE, Kozinsky B. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature Communications. 2022 May 4;13(1):2453.
Other Strengths And Weaknesses:
Additional Strengths
1. Originality through Domain-Driven Adaptations
• The paper demonstrates a creative fusion of well-known spectral graph theory methods with domain-specific protein structural constraints. While graph contrastive learning itself is not new, the authors' emphasis on preserving functional communities and realistic 3D transformations elevates the approach beyond standard random augmentations.
• By explicitly integrating knowledge about side-chain similarity, functional pockets, dihedral angles, and secondary structure motifs, they bring a strong biological grounding to the contrastive learning framework, which is less common in generic GCL.
2. Clarity in Presentation
• The paper largely maintains clear structuring: definitions, methodology (with sub-sections for 2D vs. 3D invariance), experiment descriptions, and ablation studies.
• Many figures and tables (e.g., pocket preservation, UMAP visualizations, sensitivity analyses) effectively illustrate key points. This visual clarity helps readers grasp the impact of each augmentation technique.
3. Robust Methodological Design
• The authors run multiple ablations (e.g., FCI alone, 3-PSI alone, combined) and vary hyperparameters to show where the model sees performance gains. Their robustness experiment specifically showcases how the model handles noisy protein structures, mirroring real-world conditions where data can be imperfect or proteins adopt alternative conformations.
Additional Weaknesses
1. Limited Discussion of Scalability
• Performing complex 3D transformations (e.g., systematically rotating dihedral angles or secondary structures) might increase computational overhead compared to simpler node/edge manipulations. The paper does not deeply analyze how this scales to massive protein databases, which can be a concern for large-scale or high-throughput applications.
2. Comparisons with Non-Graph Methods
• The paper positions itself within the ecosystem of graph-based approaches and does a thorough job comparing with various GCL baselines. However, given the explosive growth of large language models for proteins (e.g., ESM), a more explicit recognition or mention of these non-graph methods would help situate the work in the broader landscape of protein representation learning. Without such discussion, some readers might overlook how these approaches could complement or compete with purely sequence-based techniques.
3. Dependency on High-Quality 3D Structures
• While the approach benefits from realistic structural perturbations, it inherently assumes the existence of reasonably accurate 3D protein models (or experimental structures). In cases where only sequences are available (or structural data is uncertain), the method may not be directly applicable. The paper does not discuss how to handle partial or noisy structural data beyond artificially simulating noise.
4. Hyperparameter Tuning Complexity
• The method includes several hyperparameters for controlling augmentation strength (fraction of edges dropped, number of secondary structures to rotate, etc.), which might be non-trivial to tune optimally. Although the authors present ablation curves, there is still a risk that real-world users would need extensive trial and error to find the "sweet spot" for a specific application.
Other Comments Or Suggestions:
1. Highlighting Real-World Applications
• The paper demonstrates strong performance on widely used benchmarks, but a short paragraph or example on potential practical use cases (e.g., ligand-binding site analysis, enzyme engineering) could clarify why preserving functional communities and realistic 3D geometry might be game-changing in a practical setting.
2. Discussion on Parameter Sensitivity
• There is a helpful analysis of varying augmentation strengths. It might be useful to explicitly mention some rough guidelines for how a user might select these parameters in practice (e.g., typical values of ϵ for edge dropping or typical rotation angles for dihedral modifications).
3. Further Exploration of Community Detection
• The FCI portion uses spectral theory to preserve functional communities. Some readers may appreciate a reference or brief mention of other community detection algorithms (like Louvain or Infomap) or the well-known "normalized cuts" approach to show possible variations or confirm that spectral clustering is a robust choice.
4. Comparisons to Equivariant Models
• Although the authors mention 3D geometry-based GNN approaches, adding one or two lines about E(3)-equivariance or SE(3)-equivariance could clarify that 3-PSI and equivariant GNNs are complementary. They solve different problems (augmentations vs. architecture design) but both preserve geometry in different ways.
Questions For Authors:
1. Question: When only partial PDB data is available or some residues are missing coordinates (e.g., unresolved loop regions in cryo-EM structures), how do you apply 3-PSI augmentations? Is there a fallback procedure, or do you skip those proteins?
2. Computational Overhead of 3D Augmentations: How does the runtime for dihedral/secondary structure rotations compare with simpler graph augmentations in large-scale experiments? Did you measure any significant slowdown?
3. Criteria for Side-Chain Similarity: In Functional Community Invariance (FCI), side-chain similarity is computed via torsion angles. How robust is this measure for chemically diverse or modified amino acids (e.g., PTMs), and do you incorporate any external chemical knowledge or force-field parameters (e.g., from Amber or CHARMM)?
4. Data Splitting Protocols for Fold Classification: Which SCOP (or SCOPe) version did you use, and what was your sequence identity threshold to ensure minimal overlap between training and test sets?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank reviewer UCwy for the detailed reading and meaningful feedback.**

***

> **_Q1_** *Not evaluated against large-scale pretrained language models.*

**A1** In Appendix F.2, we have provided comparisons with protein LMs such as ESM-1b. For extended experiments, such as a comparison to ESM-2, please refer to Reviewer s9oD, A1.

***

> **_Q2_** *Scalability and complexity analysis.*

**A2** The time complexity of 3D augmentation is $O(n)$ for 3-PSI and $O(n^2)$ for FCI, as detailed in A3 for Reviewer s9oD. For 3D augmentation, each residue is transformed by a 3×3 rotation matrix. The memory complexity scales as $O(Bn)$, where $B$ is the batch size and $n$ is the number of residues per sample. For large datasets, we can focus augmentation on specific regions of interest (e.g., binding sites) rather than the entire protein to reduce the cost. Here we provide running times: 3-PSI costs 22.38s and 18.91s per epoch for 3PSI-Diag and 3PSI-Alpha on the Fold task, while random augmentation (Trad. 3D Aug. in the paper) costs 3.56s. Overall, Diag and Alpha constitute about 18.0% and 15.3% of the total training time (124s per epoch). Considering the performance improvements, the additional training time is a justifiable trade-off.

***

> **_Q3_** *Only sequences are available... & Only partial PDB data is available...*

**A3** We provide possible solutions. When no experimental structure is available or PDB data is missing, our model can use a predicted structure generated by homology modeling tools such as SWISS-MODEL [1]; for uncertain structures with missing PDB data, we can select candidates from homology modeling tools with higher confidence scores, perform augmentation on each, and then fuse the resulting features. As for incomplete PDB data, 3-PSI also works when a moderate number of residues remains; please refer to the experimental results in Reviewer s9oD, A2 (incomplete input PDB). If the primary PDB data is unavailable, we need to employ homology modeling tools to reconstruct the structure.

***

> **_Q4_** *Several hyperparameters might be non-trivial to tune optimally & guidelines might be needed.*

**A4** For hyperparameter selection, we follow a common paradigm: values that are too small may not provide sufficient augmentation of the protein conformation, while larger values may distort it. For example, when selecting ε for edge dropping, we recommend starting with a small value (e.g., 0.1) and gradually adjusting it up to 0.4 to avoid damaging the graph structure. For dense graphs, a slightly higher ε can be used, and vice versa. ε=0.2 and a rotation number of 2 can serve as useful starting points for tuning on new datasets.

***

> **_Q5_** *Giving an example of potential practical use... why preserving functional communities...*

**A5** We have provided an experiment on the ligand binding affinity task; please refer to Reviewer ZGC5, A6. Our method achieves competitive performance because it preserves functional communities and the protein's 3D structure during augmentation, enabling the model to learn meaningful representations such as structural information.

***

> **_Q6_** *Some readers may appreciate a reference to other community detection algorithms...*

**A6** We will discuss these methods in the main text. We chose spectral methods because they effectively capture edge-level interactions and provide a strong theoretical foundation for maintaining invariance. We plan to extend FCI to support other related methods.

***

> **_Q7_** *Adding lines about E(3)-equivariance... 3-PSI and equivariant GNNs are complementary.*

**A7** 3-PSI focuses on augmenting proteins while preserving key structural properties to generate more biologically reasonable augmentations. In contrast, SE(3)-equivariance emphasizes the invariance of network inputs to global rotations and translations, ensuring the model's robustness. They are complementary, and we will discuss them in the text.

***

> **_Q8_** *Is torsion-angle-based side-chain similarity robust? Is external chemical knowledge used...?*

**A8** Previous work [2] shows that side-chain torsion angles are effective residue features that also reflect the chemical properties of residues; we therefore compute side-chain similarity from torsion angles. While our implementation does not incorporate external chemical knowledge or force-field parameters, we are exploring them to further enhance robustness, particularly for chemically diverse and modified amino acids.

***

> **_Q9_** *Data splitting protocols.*

**A9** Following previous work [3], we used SCOP 1.75, filtered with a pairwise identity of less than 95%, and employed a three-level homology-reduction dataset for training.

***

**Reference**

[1] Generative models for graph-based protein design

[2] Learning Hierarchical Protein Representations via Complete 3D Graph Networks

[3] DeepSF: deep convolutional neural network for mapping protein sequences to folds

***

**If your concerns have been addressed, could you kindly consider raising your score? We greatly appreciate your comments and support.**
Summary: This paper improves on Graph Contrastive Learning (GCL) by introducing two graph augmentation techniques: Functional Community Invariance (FCI) and 3D Protein Structure Invariance (3-PSI). These augmentation techniques are designed to preserve the functional and structural integrity of proteins. The authors used end-to-end training on 4 datasets for classification tasks, integrating the contrastive learning loss with classification errors. Experimental results show improvements in classification accuracy and F-1 score, and ablation studies showcase effective preservation of protein structures.
Claims And Evidence: The authors claim that existing augmentation techniques could lead to incorrect protein structure or disrupt protein functionalities. This claim is valid and is supported by the design of the FCI and 3-PSI augmentation techniques. FCI preserves functional communities by controlling spectral changes and incorporating chemical similarity, and 3-PSI preserves the secondary and tertiary structures through controlled rotations of dihedral angles and secondary structures.
Methods And Evaluation Criteria: The design of the augmentation techniques makes sense, as they are rooted in biological principles that ensure the augmentations are meaningful and realistic. However, the authors did not describe the GNN encoder in detail, simply pointing to the work of Fan et al. (2023). The model was also trained in an end-to-end fashion for each classification task, which means the learnt representation would change for each downstream task. I have three concerns/suggestions:
1) To fully demonstrate the effectiveness of the proposed augmentation techniques, the authors could use the benchmark models (e.g., ProNet, CDConv, GraphCL, GearNet, etc.) and compare performance before and after the augmentation methods are used.
2) The model is trained with equal weight on the classification loss and the CL loss (λ=1). There is no ablation study on how different λ values impact model performance.
3) To demonstrate robustness of the learnt representation, a possible approach is to train the GCL model using the proposed augmentation methods and use the learnt representation on downstream tasks.
Regarding evaluation, the proposed metrics (accuracy and F-1 score) are suitable for classification tasks.
Theoretical Claims: Yes, the equations and theorems in the main text are correct. One question: L_FCL is defined, but L_(3-PSI) is not clearly defined.
Experimental Designs Or Analyses: The authors used 4 datasets for classification tasks only. The model performance and ablation studies on these datasets are valid. However, since the protein structure and functionality are preserved, it would be interesting to investigate the model performance on datasets such as ligand binding affinity (e.g., PDBbind).
Supplementary Material: Yes, all supplementary materials are reviewed.
Relation To Broader Scientific Literature: The manuscript points out one important limitation in the literature: that graph augmentation is often based solely on topological features, ignoring the intrinsic biological properties of proteins. As proteins are among the most important large molecules to study, how GNN models can be tailored to effectively learn protein representations is an important problem. The proposed augmentation techniques are carefully designed following biological principles and should have broader impact on future work.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses:
Strength:
* The manuscript provides a novel approach to integrating domain-specific knowledge in a contrastive learning framework.
* The paper is clearly written and well-structured.
Weakness:
* The spectral decomposition might involve high computational cost. The scalability of the proposed approach is not provided.
* The sensitivity of model performance to some parameters (e.g., λ) is not provided.
Other Comments Or Suggestions: None
Questions For Authors:
1. Regarding the GNN encoder structure: in the main text, the authors mention that they follow the work of Fan et al. (2023), which is CDConv. However, in Appendix C4, they mention using EdgeGCN with the pooling mechanism from Fan's work. The GNN architecture, and why it was chosen, should be described in the main text.
2. The augmentation techniques are innovative and biologically aware. However, it is possible that they work well only with the chosen GNN architecture. The authors should eliminate this possibility by applying them to other GNN models and showcasing performance improvements regardless of the choice of backbone model.
3. Why is the end-to-end training technique chosen for each classification task? Such a training loss could lead to representation bias for each task, rather than learning a robust protein representation for multiple downstream tasks.
4. How does the choice of λ in the loss function impact performance?
5. What is the definition of L_(3-PSI)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We are grateful to reviewer ZGC5 for the insightful reviews.**

***

> **__Q1__** *Did not describe the GNN encoder. In the main text, the authors... in Appendix C4... why the GNN is chosen.*

**A1** We sincerely apologize for the lack of clarity. We employ GNNs because they can flexibly integrate a protein's topological and geometric information. Additionally, nodes and edges in the graph can incorporate physicochemical information such as residue distances; we therefore implement EdgeGCN, which fuses edge features into the message-passing framework. From Fan et al.'s work, we employ their hierarchical pooling strategy rather than the encoder. When encoding protein graphs, hierarchical pooling aggregates nodes layer by layer without losing critical structural information, thereby enabling the GNN to summarize protein structure at a higher level. Thus, we fuse these components as our encoder. We will provide details in the main text.

***

> **__Q2__** *The model... end-to-end fashion... & Such training leads to bias... & a possible way is to train...*

**A2** We'd like to clarify that our goal is to learn effective protein representations. To this end, we adopt supervised contrastive learning to incorporate label information, which further facilitates representation learning on top of the contrastive framework. Following your suggestion, we conducted additional experiments under a **pure self-supervised** setting. The results show competitive performance, but incorporating supervision further improves results across datasets.

||EC|GO-BP|GO-MF|GO-CC|FOLD-Fold|FOLD-Super.|FOLD-Family|Reaction|
|-|-|-|-|-|-|-|-|-|
|pure self-supervised|0.863|0.443|0.655|0.467|56.9|78.0|99.4|87.2|
|+ supervised|0.885|0.461|0.662|0.484|59.8|81.3|99.7|89.0|

***

> **__Q3__** *To demonstrate the effectiveness... use the benchmark models and compare.*

**A3** We additionally experimented with two models, CDConv and ProNet, with the following results, where +/- shows the performance improvement/reduction:

||EC|GO-BP|GO-MF|GO-CC|FOLD-fold|FOLD-Super.|FOLD-Family|Reaction|
|-|-|-|-|-|-|-|-|-|
|CDConv|87.9 (+0.9)|0.445 (-0.005)|0.664 (+0.012)|0.479 (+0.004)|58.1 (+1.2)|81.6 (+4.9)|99.8 (+0.3)|87.8 (-0.8)|
|ProNet|-|-|-|-|51.8 (-0.9)|72.5 (+2.3)|99.5 (+0.2)|87.1 (+0.7)|

After applying our augmentation strategy, performance improved across the majority of tasks, showing its effectiveness.

***

> **__Q4__** *How different λ values impact model performance.*

**A4** We provide an ablation study as follows:

|λ|EC|GO-BP|GO-MF|GO-CC|FOLD-Fold|FOLD-Super.|FOLD-Family|Reaction|
|-|-|-|-|-|-|-|-|-|
|0.2|0.882|0.451|0.656|0.467|56.5|80.2|**99.8**|86.2|
|0.6|0.879|0.443|0.658|0.479|58.2|80.7|99.6|87.5|
|1.0|**0.885**|**0.461**|**0.662**|0.484|58.9|**81.3**|99.7|**89.0**|
|1.4|0.880|0.457|0.649|**0.489**|**59.1**|80.6|99.7|87.8|
|1.8|0.877|0.448|0.653|0.475|58.5|80.2|99.8|88.3|

The results show that performance fluctuates only slightly across different λ values, with λ=1 giving better performance on the majority of metrics. This indicates that the model is not highly sensitive to λ, suggesting ease of optimization.

***

> **__Q5__** *$L_{\mathrm{FCL}}$ is defined but $L_{\mathrm{3-PSI}}$ is not clearly defined.*

**A5** 3-PSI is used solely for data augmentation based on protein geometry and does not introduce learnable parameters, hence no standalone loss is required. Moreover, the loss defined in Eq. (11), $L_{\text{GCL}}^{(3\text{-PSI})}$, specifically denotes the GCL loss calculated using the 3-PSI augmentation.
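To make the λ weighting discussed in A4 concrete, the total objective has the form L = L_cls + λ·L_GCL. Below is a minimal NumPy sketch of such a combined loss; the NT-Xent form, the temperature value, and the function names are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss between two augmented views.
    z1, z2: (batch, dim) embeddings; row i of z1 and row i of z2
    form the positive pair, all other rows act as negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

def total_loss(cls_loss, z1, z2, lam=1.0):
    """L = L_cls + lambda * L_GCL, the weighting ablated in A4 (lambda = 1 default)."""
    return cls_loss + lam * nt_xent(z1, z2)
```

With λ=0 this reduces to plain supervised training, and larger λ pushes the encoder toward view-invariant embeddings, which is the trade-off the λ ablation table probes.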
***

> **__Q6__** *It would be interesting to... ligand binding affinity.*

**A6** Thanks for your insightful suggestion. We briefly present the results with our best setting on the ligand binding affinity task on the PDBbind dataset, selecting a representative baseline [1] for comparison. The first three metrics are evaluated at **30%** sequence identity, and the last three at **60%**.

||RMSE↓ (30%)|Pearson↑ (30%)|Spearman↑ (30%)|RMSE↓ (60%)|Pearson↑ (60%)|Spearman↑ (60%)|
|-|-|-|-|-|-|-|
|Holoprot-Superpixel|1.491|0.491|0.482|1.416|0.724|0.715|
|FCI+3PSI-alpha|1.462|0.515|0.510|1.383|0.753|0.749|

The results show that our model achieves competitive performance.

***

> **__Q7__** *The spectral decomposition... high computational cost & The scalability... not provided.*

**A7** The original spectral decomposition has a time complexity of $O(n^3)$, which is reduced to $O(n^2K)$ by the Lanczos algorithm [2], where $n$ is the average number of residues per graph and $K$ is the number of selected eigenvalues. The probability matrix for FCI only needs to be precomputed once for each task, and the majority of the training time is attributed to the GNN encoder. Thus, our method is scalable. The running times are detailed in A2 for Reviewer UCwy.

***

**Reference**

[1] Multi-Scale Representation Learning on Proteins

[2] The Lanczos Algorithm with Selective Orthogonalization

***

__If your concerns have been addressed, could you kindly raise the score? We greatly appreciate your comments and support.__
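The $O(n^2K)$ figure in A7 comes from running on the order of $K$ Lanczos steps, each dominated by one $O(n^2)$ matrix–vector product. A generic, self-contained sketch of the idea follows (an illustration, not the authors' implementation; in practice a library routine such as `scipy.sparse.linalg.eigsh` would typically be used):

```python
import numpy as np

def lanczos_topk(A, k, iters=None, seed=0):
    """Lanczos iteration with full reorthogonalization, approximating the
    k largest eigenvalues of the symmetric matrix A. Each step costs one
    O(n^2) matrix-vector product, giving O(n^2 * iters) overall."""
    n = A.shape[0]
    m = iters or min(n, 2 * k + 10)
    rng = np.random.default_rng(seed)
    q = rng.normal(size=n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m)
    q_prev = np.zeros(n)
    for j in range(m):
        Q[:, j] = q
        w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = w @ q
        w -= alpha[j] * q
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # reorthogonalize for stability
        b = np.linalg.norm(w)
        if b < 1e-12:                              # Krylov space exhausted
            m = j + 1
            break
        beta[j] = b
        q_prev, q = q, w / b
    # Ritz values of the tridiagonal matrix approximate A's extreme eigenvalues.
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.sort(np.linalg.eigvalsh(T))[-k:]
```

Only the small tridiagonal matrix is ever diagonalized, which is why truncating to $K$ eigenvalues avoids the full $O(n^3)$ decomposition mentioned in A7.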
Summary: This paper introduces novel biology-aware graph augmentation strategies for protein representation learning within a Graph Contrastive Learning (GCL) framework. The authors identify limitations in existing GCL approaches that either focus exclusively on 2D topology (neglecting intrinsic biological properties) or lack effective 3D structure-based augmentation methods. To address these shortcomings, they develop two complementary strategies: (1) Functional Community Invariance, which preserves topology-driven community structures while incorporating residue-level chemical similarity, and (2) 3D Protein Structure Invariance, which employs dihedral angle perturbations and secondary structure rotations to maintain critical 3D structural information. Experiments across four protein-related tasks demonstrate consistent improvements over existing GCL methods and protein-specific models. The paper offers a valuable solution for GCL in the context of protein structure learning. If the authors could further enhance their main experiments by incorporating comparisons with relevant models, I would be inclined to increase my overall rating accordingly.
Claims And Evidence: The claims are generally well-supported by evidence. The authors provide:
1. Theoretical motivation for both augmentation strategies.
2. Quantitative results showing performance improvements across multiple tasks.
3. Ablation studies demonstrating the contribution of each component.
4. Qualitative analyses showing preservation of functional communities and visualization of learned representations.
The performance improvements are modest but consistent.
Methods And Evaluation Criteria: The proposed methods are appropriate for the problem. The authors evaluate on standard protein-related benchmarks with comprehensive comparison across various baselines, including protein-specific methods, 2D topology-based GCL, and 3D structure-based GCL methods. However:
1. Since protein structures are assumed to be known in advance before model training, their annotations are easily obtainable. Alternatively, structure retrieval approaches like Foldseek [r1] could be used to find similar structures with known function annotations. This should be considered as a baseline in the comparison.
2. The evaluation would benefit from comparison with state-of-the-art structure learning models [r2,r3,r4,r5] and equivariant graph neural networks [r6] as encoders for protein-specific baselines.
3. The paper lacks a complexity analysis, making it unclear whether 3-PSI-Alpha + FCI could be easily adopted into mainstream transformer-based foundation models.
[r1] Kempen et al. Fast and accurate protein structure search with Foldseek. Nature Biotechnology 2024.
[r2] Zhou et al. Uni-mol: A universal 3d molecular representation learning framework. ICLR 2023.
[r3] Huang et al. Protein 3D Graph Structure Learning for Robust Structure-Based Protein Property Prediction. AAAI 2024.
[r4] Lin et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science 2023.
[r5] Wang et al. S-PLM: Structure-Aware Protein Language Model via Contrastive Learning Between Sequence and Structure. Advanced Science 2025.
[r6] Liao et al. EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations. ICLR 2024.
Theoretical Claims: The derivation of the Functional Community Invariance approach is sound, establishing clear connections between spectral constraints and community preservation. The authors effectively develop theoretical links between graph spectra and community structures, providing solid motivation for their approach.
Experimental Designs Or Analyses: The experimental setup is comprehensive. The authors:
1. Compare against multiple baseline approaches.
2. Perform ablation studies to validate each key component.
3. Analyze augmentation strength effects and evaluate robustness against structural perturbations.
4. Provide qualitative analyses through visualization and functional community preservation.
The performance improvements are modest but consistent across tasks and evaluation settings. The robustness analysis is particularly valuable, showing that their approach maintains better performance under structural perturbations.
Supplementary Material: The supplementary material provides useful context, especially the comparison with protein foundation models in Appendix F, but could be enhanced by including more recent approaches as additional baselines.
Relation To Broader Scientific Literature: This paper builds upon both graph contrastive learning and protein-specific approaches. However, the comparison with equivariant graph neural networks and protein foundation models could be more extensive. Recent works like ESM-2 [r1], EquiformerV2 [r2], and other models have shown strong performance on protein-related tasks but are not thoroughly compared against.
[r1] Lin et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science 2023.
[r2] Liao et al. EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations. ICLR 2024.
Essential References Not Discussed:
[r1] Kempen et al. Fast and accurate protein structure search with Foldseek. Nature Biotechnology 2024.
[r2] Zhou et al. Uni-mol: A universal 3d molecular representation learning framework. ICLR 2023.
[r3] Huang et al. Protein 3D Graph Structure Learning for Robust Structure-Based Protein Property Prediction. AAAI 2024.
[r4] Lin et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science 2023.
[r5] Wang et al. S-PLM: Structure-Aware Protein Language Model via Contrastive Learning Between Sequence and Structure. Advanced Science 2025.
[r6] Liao et al. EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations. ICLR 2024.
Other Strengths And Weaknesses: The paper would benefit from deeper comparison with more recent models.
Other Comments Or Suggestions: Figure 3, Page 5: 'SSI view' -> '3-PSI view'.
Questions For Authors:
1. How does the approach perform when using predicted protein structures (e.g., from AlphaFold) rather than experimental structures? How sensitive is the 3-PSI method to the initial quality of protein structures? Would lower-resolution structures significantly impact performance? Answering this would significantly expand the applicability to proteins without solved structures.
2. Considering the complexity of edge add/drop, can the proposed GCL approaches easily be adopted to more complex encoders?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We thank reviewer s9oD for the constructive feedback and valuable suggestions.**

***

> **__Q1__** *Incorporating comparisons with relevant models.*

**A1** We have conducted extensive experiments with the other suggested baselines, shown as follows:

|Model|EC|GO-BP|GO-MF|GO-CC|FOLD-fold|FOLD-Super.|FOLD-Family|Reaction|
|-|-|-|-|-|-|-|-|-|
|Foldseek|-|**0.582**|0.570|0.472|-|-|-|**90.60**|
|Uni-mol|0.721|0.347|0.441|0.397|31.35|60.97|90.51|74.20|
|S-PLM|**0.888**|0.495|**0.685**|**0.484**|37.74|77.95|98.82|86.71|
|P3G [r3]|0.784|0.379|0.548|0.448|-|-|-|-|
|ESM-2|0.861|0.460|0.663|0.427|38.50|**81.50**|99.20|-|
|EquiformerV2|0.751|0.351|0.480|0.375|29.88|65.18|88.02|76.42|
|**Ours**|0.885|0.461|0.662|**0.484**|**59.80**|81.30|**99.70**|89.00|

We report the results of S-PLM and P3G from their original papers, and ESM-2 from [1]. Uni-mol and EquiformerV2 are reproduced. For the GO task, we evaluated Foldseek on a subset of 150 samples due to time constraints and the manual extraction of ground-truth labels. Despite this, our method achieves comparable performance, even though Foldseek relies on a **larger external protein database** (e.g., RCSB PDB) for retrieval-based prediction. While our GO-BP result is lower, we outperform Foldseek on GO-MF and GO-CC. Trained from scratch on a smaller dataset, our model shows strong generalization and effective representation learning. Uni-mol and EquiformerV2 underperform because their models are tailored to molecules rather than proteins. Compared to S-PLM, our method achieves competitive results on the EC and GO datasets, while consistently outperforming S-PLM on FOLD and Reaction.

***

> **__Q2__** *How does the approach perform... using predicted protein structures? How sensitive is 3-PSI to the initial quality...? Would lower-resolution structures impact performance?*

**A2** We appreciate your valuable questions. Our approach is designed to be agnostic to the data source and is currently evaluated on experimental structures, following previous works. Regarding the sensitivity of 3-PSI to the initial quality of protein structures, we designed two experiments: (1) low resolution and (2) partial PDB data available.

(1) Low resolution: We follow the resolution classification proposed by [2] and divide the test set into low-resolution (≥3 Å) and high-resolution (<3 Å) structures for three tasks (we cannot retrieve the resolution of the data in the Fold task). The results show that 3-PSI maintains strong performance across both resolutions, suggesting that 3-PSI is robust to input resolution.

(2) Partial PDB data available: We simulate an incomplete protein structure by removing 20% of the residues and training the model. The result is shown in the same table, where we can see that our model is robust and not sensitive to the initial quality of the protein structure.

Overall, our method shows robustness to diverse input qualities, which indicates that it has the potential to generalize to predicted protein structures. We will conduct experiments on predicted structures in future work.

||EC|GO-BP|GO-MF|GO-CC|Reaction|
|-|-|-|-|-|-|
|**Resolution test**||||||
|Low|0.903|0.485|0.648|0.504|0.921|
|High|0.888|0.465|0.687|0.463|0.894|
|**Incomplete PDB test**||||||
|Incomplete|0.871|0.453|0.649|0.457|87.6|
|Complete|0.885|0.461|0.662|0.484|89.0|

***

> **__Q3__** *Complexity analysis and potential of being adopted by more complex encoders.*

**A3** Our method consists of two stages: preprocessing and training. Each protein structure undergoes preprocessing once. For 3-PSI, we iterate over the residues to determine the current dihedral angles and secondary structures in $O(n)$ time, where $n$ is the number of residues. For FCI, we compute the probability matrix for edge perturbation using the Lanczos algorithm [3] for spectral decomposition, with a time complexity of $O(n^2K)$, where $K$ is the number of selected eigenvalues. During training, each 3-PSI augmentation perturbs all residues with $O(n)$ time complexity, while FCI samples edges based on the probability matrix with $O(n^2)$ complexity; both operations benefit significantly from NumPy's SIMD vectorization, resulting in fast practical runtimes. The running times are detailed in A2 for Reviewer UCwy.

Our method essentially provides an augmentation strategy and is not tightly coupled to a specific encoder architecture, so it can be extended to more complex encoders. Here we adopt a Graph Transformer encoder and report the performance:

|Model|EC|GO-BP|GO-MF|GO-CC|FOLD-fold|FOLD-Super.|FOLD-Family|Reaction|
|-|-|-|-|-|-|-|-|-|
|Graph Transformer|0.874|0.470|0.652|0.485|58.8|80.0|99.7|87.8|

***

**Reference**

[1] Endowing Protein Language Models with Structural Knowledge

[2] Biomolecular Crystallography: Principles, Practice, and Application to Structural Biology

[3] The Lanczos Algorithm with Selective Orthogonalization

***

**Given these clarifications, could you kindly consider raising your score? We greatly appreciate your support.**

---

Rebuttal Comment 1.1: Comment: Thank you for your response. The results of FoldSeek are interesting. As most of my concerns have been addressed, I intend to increase the overall evaluation. The authors should incorporate these valuable discussions and results into the main text of their revised manuscript.

---

Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our rebuttal. We sincerely appreciate your insightful feedback and will adjust our manuscript accordingly. Your support is invaluable, and we are confident that our work will make meaningful contributions to the community.
Summary: This paper investigates methods to improve Graph Contrastive Learning (GCL) for protein representation learning by incorporating biologically-aware graph augmentation strategies. The authors propose two novel augmentation strategies, Functional Community Invariance (FCI) and 3D Protein Structure Invariance (3-PSI), which are integrated into a unified GCL framework for protein representation learning. Extensive experiments on four protein-related tasks show that their approach consistently improves classification accuracy and robustness over existing 2D topology-based and traditional 3D augmentation methods.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence, except for the bounds of spectral changes in Theorem 4.1 (line 201 and line 202). The derivation of both the upper and lower bounds appears unclear and requires further clarification.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem, except for the selection of the random rotation angle θ in Section 4.2 (line 246). There appears to be no theoretical justification for the choice of the random rotation angle, which requires further clarification.

Theoretical Claims: I have verified the accuracy of the claim stated in lines 90–96: "To generate augmented graphs while preserving the 3D-related information of proteins, 3-PSI employs two distinct coordinate perturbation strategies: (1) rotations of backbone dihedral angles and (2) rotations of secondary structures (α-helices and β-sheets), ensuring that peptide planes and secondary structures remain intact during graph augmentation." It is well established that dihedral angles, α-helices, and β-sheets significantly influence a protein's structure and, consequently, its function. However, it is worth noting that other factors may also have a substantial impact on protein structure and function, which this paper has not considered.
Experimental Designs Or Analyses: I have carefully examined the experimental designs and analyses. Overall, the experiments in this paper are highly convincing. However, when comparing 2D topology-based methods, the selected baseline methods are all graph contrastive learning approaches that do not specifically account for the topological structure of protein graphs. Intuitively, these methods may significantly impact the original protein structure. If there are 2D topology-based methods designed specifically for protein graph topology, it would be beneficial to use them as baselines for comparison.

Supplementary Material: I have reviewed Section E (Proofs) in the supplementary material. This paper provides detailed definitions and proofs in this section.

Relation To Broader Scientific Literature:
1. This paper extends spectral graph contrastive learning by incorporating functional community constraints, building on prior work like GraphCL, GCS, and CI-GCL.
2. This paper bridges the gap between 2D and 3D protein graph learning, incorporating structural constraints often overlooked in self-supervised learning.
3. This paper improves generative augmentation techniques, refining methods used in homology modeling tools (e.g., SWISS-MODEL, MODELLER) by introducing rotation-aware 3D augmentation.

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

--Strengths: This paper introduces novel biologically-aware augmentations that explicitly consider both functional and structural integrity, and it improves robustness against structural perturbations, which is crucial for real-world protein modeling. It presents a novel approach to protein modeling that could significantly inspire protein research and contribute to advancements in drug discovery.

--Weaknesses: Some heuristic choices in the augmentation design: the degree of perturbation (ϵ) and the number of rotations (θ) remain task-specific hyperparameters, which might require tuning for new datasets.
Other Comments Or Suggestions: There is an issue in Section 4.1 (Lemma 4.2, line 209): the weighted upper bound may need to be revised.

Questions For Authors:
1. There appears to be no theoretical justification for the choice of the random rotation angle θ in Section 4.2 (line 246), which requires further clarification.
2. The derivation of both the upper and lower bounds in Theorem 4.1 (line 201 and line 202) appears unclear and requires further clarification.
3. For the 3D-related information of proteins, besides dihedral angles, α-helices, and β-sheets, are there any other factors that may also have a substantial impact on protein structure and function?
4. Are there 2D topology-based methods designed specifically for protein graph topology? I am convinced that it would be beneficial to use them as baselines in the 2D topology-based methods comparison.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: **We sincerely appreciate Reviewer exBg's thoughtful and valuable comments.**

***

> **_Q1_** *The derivation of both the upper and lower bounds ... requires clarification. & The weighted upper bound may need to be revised.*

**A1** We appreciate your valuable feedback on the proof. The proof may be unclear due to the use of different symbols for the eigenvalues ($k$ in the main text, $y$ in the appendix). We will unify them to enhance clarity. We clarify the derivation of the upper and lower bounds as follows. When sampling an edge for augmentation, we first determine the change in the $y$-th eigenvalue of the adjacency matrix by applying eigenvalue perturbation theory (Lemma E.2, Line 988). Next, we derive the upper and lower bounds by applying the triangle inequality when dropping or adding an edge (see the derivations in Theorem E.3, Line 1034 and Line 1056, respectively). Regarding the weighted upper bound, after careful verification, we confirm that no revision is necessary. The change in the eigenvalue for weighted graphs is scaled by the edge weight $W_{ij}$ (Line 1079). By following the same derivation steps as in the unweighted case, we then obtain the upper bound for weighted graphs (Line 1113). To improve clarity, we will revise the main text to better point readers to the corresponding derivations and proofs of the upper and lower bounds in the appendix.

***

> **_Q2_** *The random rotation angle θ in Section 4.2 (line 246) requires clarification.*

**A2** We appreciate your attention to the selection of θ. Our goal is to enhance the diversity of protein structures through augmentation while preserving their functionality. Therefore, we prefer to start with relatively small angles. The choice of θ is further guided by our experimental results. We conducted experiments on two tasks with varying θ values. The results show that θ = 10° yields the best performance.
A θ that is too small may yield insufficient conformational diversity, while a larger θ risks distorting the original protein structure, supporting our hypothesis.

| θ | FOLD-fold | FOLD-Super. | FOLD-Family | Reaction |
|-------|-----------|-------------|-------------|----------|
| 5 | 59.5 | 80.5 | 99.7 | 88.8 |
| **10**| **59.8** | **81.3** | **99.7** | **89.0** |
| 15 | 59.1 | 80.4 | 99.7 | 88.1 |
| 20 | 58.6 | 79.9 | 99.4 | 87.4 |
| 30 | 57.2 | 78.2 | 99.3 | 85.6 |
| 50 | 56.4 | 77.6 | 98.9 | 83.8 |

***

> **_Q3_** *2D topology-based methods for protein graph topology as baselines for comparison.*

**A3** Thank you for your valuable suggestion. To the best of our knowledge, few GCL methods have been specifically designed for proteins. Therefore, we have additionally included two relevant GCL approaches designed for molecules in the following table.

| Model | EC | GO-BP | GO-MF | GO-CC | FOLD-fold | FOLD-Super. | FOLD-Family | Reaction |
|--------|-------|-------|-------|-------|-----------|------------|------------|----------|
|MolCLR[1]|0.869|0.443|0.623|0.455|54.0|77.9|99.5|87.7|
|T-MGCL[2]|0.843|0.422|0.628|0.468|55.0|77.6|99.5|85.3|
|Ours|**0.885**|**0.461**|**0.662**|**0.484**|**59.8**|**81.3**|**99.7**|**89.0**|

***

> **_Q4_** *Some heuristic choices ... require tuning for new datasets.*

**A4** Thank you for your attention. Experiments in Section 5.3 indicate that variations in these hyperparameters do not lead to significant changes in performance, suggesting that our proposed augmentation strategy is relatively robust to their selection. As described in our experimental settings (Appendix C.2), a shared hyperparameter configuration, such as ϵ = 0.2 for GO, EC, and Reaction, and θ = 2 for EC, GO-BP, and FOLD, works well across different datasets and tasks. These choices can serve as useful starting points for tuning on new datasets.
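To make the θ augmentation concrete: rotating a group of atom coordinates by a small angle θ about an axis can be implemented with Rodrigues' rotation formula. The axis and points below are illustrative assumptions, not the authors' exact augmentation scheme.

```python
import numpy as np

def rotate_about_axis(coords, origin, axis, theta_deg):
    """Rotate points about an axis through `origin` by theta_deg degrees,
    using Rodrigues' rotation formula."""
    k = axis / np.linalg.norm(axis)
    t = np.radians(theta_deg)
    p = coords - origin
    rotated = (p * np.cos(t)
               + np.cross(k, p) * np.sin(t)
               + k * (p @ k)[:, None] * (1 - np.cos(t)))
    return rotated + origin

# Rotating (0, 1, 0) about the z-axis by 90 degrees gives (-1, 0, 0).
pts = np.array([[0.0, 1.0, 0.0]])
out = rotate_about_axis(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]), 90.0)
print(np.round(out, 6))  # approximately (-1, 0, 0)
```

In an augmentation setting, `theta_deg` would be a small value such as the θ = 10° found best above, applied to the atoms of one secondary-structure element at a time.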
***

> **_Q5_** *Other factors may also have a substantial impact on protein structure and function...*

**A5** We sincerely appreciate the reviewer's insightful comments. We acknowledge that protein structure and function can be influenced by various factors. We primarily use rotations of dihedral angles, α-helices, and β-sheets to maintain the essential protein structure and improve protein representation learning. Our experiments demonstrate the importance of using these factors and of preserving invariance during graph augmentation. In future work, we will explore additional factors to further strengthen invariance and improve protein representations.

***

**_Reference_**

[1] Molecular Contrastive Learning of Representations via Graph Neural Networks

[2] T-MGCL: Molecule Graph Contrastive Learning Based on Transformer for Molecular Property Prediction

***

__If your concerns have been addressed, could you kindly consider raising your score? We greatly appreciate your comments and support.__
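As a numerical companion to the eigenvalue-perturbation step discussed in A1: to first order, deleting edge (i, j) from a symmetric adjacency matrix shifts the y-th eigenvalue by $u_y^\top \Delta A\, u_y = -2\,u_y(i)\,u_y(j)$. The following generic NumPy sanity check on a random graph is ours, not the paper's code:

```python
import numpy as np

# First-order eigenvalue perturbation for edge removal: deleting edge (i, j)
# from a symmetric adjacency matrix A shifts the y-th eigenvalue by roughly
# u_y^T (Delta A) u_y = -2 * u_y[i] * u_y[j].
rng = np.random.default_rng(0)
n = 30
upper = np.triu(rng.random((n, n)) < 0.3, k=1).astype(float)
A = upper + upper.T

vals, vecs = np.linalg.eigh(A)     # eigenvalues in ascending order
y = n - 1                          # track the largest eigenvalue
i, j = np.argwhere(upper > 0)[0]   # pick any existing edge

predicted = -2.0 * vecs[i, y] * vecs[j, y]

A_drop = A.copy()
A_drop[i, j] = A_drop[j, i] = 0.0
actual = np.linalg.eigh(A_drop)[0][y] - vals[y]

print(f"predicted {predicted:.4f}  actual {actual:.4f}")
```

For a dense random graph like this, the first-order estimate tracks the exact shift closely; the triangle-inequality bounds in Theorem E.3 then bound this quantity over all candidate edges.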
WildChat-50M: A Deep Dive Into the Role of Synthetic Data in Post-Training
Accept (poster)
Summary: This paper constructs a larger and higher-quality post-training chat dataset (called WildChat-50M) by getting responses to prompts from more than just one "data-generating model" (DGM). The authors get responses to prompts from the WildChat-1M dataset from 50 open-weight models. The dataset contains over 1M multi-turn conversations (2-3 turns on average). The authors curate a supervised fine-tuning dataset from WildChat-50M using a human-in-the-loop strategy and show that when Llama-3.1 8B is SFT-ed on it, it improves on 2 out of 9 benchmarks used for evaluation (length-controlled AlpacaEval and IFEval) over the baselines (Tulu 3, Magpie Align, UltraChat). For the other 7/9 benchmarks, the model achieves performance similar to the baselines, despite using 40% of the data used for Tulu 3. The authors show some additional empirical results, such as a strong effect of which DGM is used to construct the fine-tuning dataset on downstream performance.

## Update after rebuttal

Most of my points have been addressed (except the claim in the paper about the teaching performance of models with better benchmark performance), but I maintain my score, as I still believe there are remaining questions about the broader usefulness/quality of Re-Wild as a post-training mixture.

Claims And Evidence: Not all claims made in this submission are supported by clear and convincing evidence. The authors do a lot of experiments, which is good, but the paper would have been stronger if there were fewer experiments and each experiment were given more attention. As it stands, for many claims there are confounders that could explain the result. I will detail issues with each claim below.

**Main claim: Re-Wild is a strong SFT mixture**. The claim that Re-Wild is a strong SFT mixture is based on fine-tuning one model on it and showing it improves over 3 baselines on 2 benchmarks, and stays the same for the others or gets worse (on MATH and MUSR).
While this is still an improvement on average, it would be great to be more upfront about this in the text. The result is still strong in my opinion, given that you outperform on two benchmarks with 40% of the data. This experiment is essentially the only one that gives some insight into the quality of WildChat-50M over other SFT datasets, which leaves many questions open: how does Re-Wild do for other base models compared to existing SFT datasets, and how does it interact with scale? Additionally, what is the performance improvement over simply using WildChat-1M?

**Blending DGMs doesn't benefit models**. Although you can say that blending did not improve performance for the two blending runs you did, you can't claim from that that it won't work in general. For this claim in the main paper, *"This finding indicates that SDQ depends primarily on prompt diversity, and it is most effective to optimize, rather than generalize, responses"*, you would require more experiments. All you can say now is that it does not always help to blend different DGMs, but it might for other blends.

**Models with strong benchmark performance are not better teachers for that benchmark**. Again, there are confounders here; maybe Llama is a better teacher for another model from the same family, and not because of its benchmark performance (as you mention yourself further down the paper).

**LLMs that do not share pre- and post-training data still have very similar outputs**. How do you know these randomly selected models do not share pre- and post-training data?

Methods And Evaluation Criteria: Most methods and evaluation criteria make sense for the problem at hand, except one: in order to investigate the effect of the context window, you truncate responses from a model with a larger context window. This does not make a lot of sense to me, as the truncated responses might not make sense anymore.
Additionally, the average performance over benchmarks is not a good metric to look at in this experiment, and the authors do not report benchmark scores except the aggregate. Perhaps long context is important for some benchmarks but not others, and truncating therefore deteriorates performance on some but not others? I would also like to see the full results of that experiment in the main paper, as opposed to just the average over benchmarks.

Theoretical Claims: N.A.

Experimental Designs Or Analyses: As mentioned in the claims section above, the experimental design has flaws, where sometimes claims are made that are not properly supported by the experiments due to confounders being present.

Supplementary Material: The supplementary material seems unfinished. The authors refer a few times to Appendix D, which is supposed to contain full responses and model outputs, but there is nothing there.

Relation To Broader Scientific Literature: The paper positions itself well within the literature, but it is sometimes difficult to understand to what extent the contribution is an improvement over very related existing work (like WildChat-1M), because they are not compared against. Some things could be described more clearly w.r.t. prior research; e.g., it would be good to mention more clearly for the first experiment in Section 3.1 that the base model trained is always Llama 3.1 8B, also for the other baselines like Tulu 3, etc. (in the caption of Figure 1, for example).

Essential References Not Discussed: N.A.

Other Strengths And Weaknesses:

**Strengths**
- The paper contributes an important dataset, which is a strong contribution given that there is a lot of opacity surrounding post-training datasets.
- The paper does a lot of experiments aimed at understanding more about post-training on their dataset.

**Weaknesses**
All mentioned above in response to the questions.
Other Comments Or Suggestions:
- Your citation for Cohere Command R+ is a policy primer that has little to do with the model itself, and the authors you cite are not from Cohere. You should probably just refer to the model announcement on their website.
- Nit: lines 117-118l should remove the word "may" or change the phrasing.
- In Appendix D, it would be useful to refer to where those things can be found.
- Figures 3 and 4 are hard to interpret and do not add much, and it is completely unclear what the words meant in the original judgement by the models because the context is gone (as you mention yourself in lines 366-375l).
- Table 6: it would be useful if you could highlight the best score for each group of base LMs that are fine-tuned.

Questions For Authors:
- I'm confused about the pretrained versus post-trained model variants distinction; which models do you consider pretrained? You say you use 19 pretrained models (and I wonder how they would respond to instructions out of the box unless they've been trained on some instruction data in pretraining), and then in Section 3.2 you call a bunch of post-trained models pretrained.
- Why release most of the complete configuration files and not all? (line 124r)
- Why is DGM Q72 outperforming others on Math (Table 2) while in Figure 1 the model trained on Re-Wild is not? Or is the model in Table 2 trained only on WildChat-50M data and the one in Figure 1 on the full Re-Wild split?
- Styling experiment: you refer to rows 1:4, but it's unclear where.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the thoughtful response. To the best of our ability, given the 5000-character limit, we address your comments and questions below.

We agree that our claim about SDQ depending primarily on prompt diversity could be worded more carefully. In the camera-ready draft, we will re-scope the claim to be more in line with our findings, e.g., by inserting phrases such as "in our experimental setting".

When we wrote that LLMs "do not share pre- and post-training data", we meant that they do not entirely share it (that at least some data differs between the models trained); we will add this qualifier to the camera-ready.

To your point about being upfront about the performance of our mix in the text, the caption of Figure 1 reads: "RE-WILD outperforms strong baselines, on average, across nine benchmarks. In particular, it exhibits strong performance on generalist chat and instruction following benchmarks." We are glad that your assessment that this is a strong result agrees with ours; hence, we refer to it as a strong mix.

In Sec. 2.2, we state: "Sometimes we will not specify the SFT target model name; in that case, it will always be Llama-3.1-8B-Base := L8B." If you would like us to repeat this information in another section, we can.

In Appendix D, we write "we have attached several relevant artifacts to this submission which would not have incorporated well into the body of the paper." The artifacts in question are in the Supplementary Material ZIP attached to our submission.

**How does Re-Wild do for other base models compared to existing SFT datasets?** This would be an interesting ablation; however, to conduct it, we would have had to retrain not only our own model but also every baseline, because the prior work does not report results for other base models. How we trained our baselines could in turn raise its own set of methodological questions. With limited resources, we feel consistency with prior work is sufficient.

**How does it interact with scale?**
We provide scaling experiments at 100k, 250k, and 500k in our paper (Fig. 2), and we have more in our repository, which we will share as part of the camera-ready.

**What is the performance improvement over simply using WildChat-1M?** The improvement is significant; the results are available in our artifacts. We did not include this in the main paper out of respect for the WildChat-1M authors, who felt it would not be a fair comparison unless we improved their work by sampling every response from GPT-4, which was prohibitively expensive. However, we are prepared to reverse that choice and include it in the appendix if you feel it is vital.

We cannot release all configs and logs because some of them were corrupted on our cluster. We will release all uncorrupted ones.

In the camera-ready draft, we will be happy to remove the word "may" from lines 117-118l, change the citation for Cohere Command R+ to the model announcement on the Cohere website, and highlight the best score for each group of base LMs in Table 6.

Regarding Figures 3 and 4, we will make the detailed responses from the LLM judges available in the repository associated with the camera-ready release; it is, therefore, a straightforward matter to recover the context around the words, should anyone wish to do so. We note, however, that many of these words are easily interpretable even out of context.

To clarify the pretrained versus post-trained model variants distinction: all of our DGMs were post-trained. However, many of the post-trained variants started from the same pretrained checkpoint. Therefore, we have 19 unique pre-trained models (each of which is post-trained) and 35 post-trained model variants (with non-unique pre-trained models). We will add this clarification to our camera-ready.
**Why is DGM Q72 outperforming others on Math (Table 2) but in Figure 1 the model trained on Re-Wild is not?** This is a good question; the reason is that the baselines in Figure 1 are trained on different data than the models compared in Table 2. In particular, the Tulu 3 SFT mix, which outperforms the Re-Wild SFT mix on MATH, contains significantly more math data. We will update the camera-ready draft to link to Table 3, rows 1:4; thanks for this correction.

Thanks again for your time. In the event that you now feel more positively about our paper, we would appreciate it if you updated your score.
Summary:
- The paper introduces WildChat-50M, the largest public chat dataset to date, featuring responses from 50+ different open-weight models (0.5B-104B parameters) participating in over 1M multi-turn conversations each.
- The authors created Re-Wild, a new supervised fine-tuning (SFT) data mix that outperforms Allen AI's Tulu-3 mixture while using only 40% as many samples.
- Key findings show that the choice of data generating model (DGM) significantly impacts downstream performance, sometimes more than model size or parameter count.
- The research demonstrates that SFT models inherit stylistic elements and response patterns from their DGMs, with LLM judge preferences transferring from DGM to fine-tuned model.
- Analysis reveals scaling laws for synthetic data, showing consistent performance improvements with larger dataset sizes.
- Technical investigations found high similarity between responses from different models, suggesting LLMs produce more predictable outputs than humans.
- The authors observed that models learn more effectively from DGMs in the same model family (approximate on-policy sampling).
- Blending responses from multiple DGMs showed no performance benefits; individual DGM quality was more important than diversity.

Claims And Evidence:

**Well-supported claims:**
- The RE-WILD data mix outperforming Tulu-3 with fewer samples is convincingly demonstrated in Figure 1 and through detailed benchmark comparisons across 9 different benchmarks.
- The claim that DGM choice significantly impacts downstream performance is well-supported by Table 2, showing substantial variance across different models.
- The scaling effects shown in Figure 2 provide clear evidence that performance improves with increased dataset size.

**Adequately supported claims:**
- LLM judge preferences transferring from DGMs to fine-tuned models is supported by Table 4 and word frequency analysis, though the selection of which judgments to analyze could introduce bias.
**Claims needing stronger evidence:**
- The assertion that WildChat-50M is the "largest public chat dataset" lacks direct size comparisons with other datasets.
- The derivation of the chat transcripts isn't fully explained.
- The claim about models learning better from same-family DGMs would benefit from more extensive cross-family experiments.

Methods And Evaluation Criteria: The methods and evaluation criteria are generally appropriate for studying synthetic data in post-training, with several strengths and some limitations:

**Strengths:**
- The benchmark selection balances LLM-judge metrics (MTBench, AlpacaEval2, MixEval) with ground-truth benchmarks (MMLU, BBH, GPQA), which mitigates biases inherent to either approach.
- The diverse set of 50+ models spanning different sizes and architectures supports generalizable conclusions about DGM quality.
- The multi-faceted analysis (scaling laws, DGM comparison, style inheritance) effectively addresses different aspects of synthetic data quality.

**Limitations:**
- The evaluation focuses heavily on Llama-3.1-8B Base as the SFT target, which may not generalize to other model families or sizes.
- The response similarity analysis relies on automated metrics (ROUGE, METEOR) without additional evaluation to validate perceived similarity.
- The set of 9 benchmarks, while diverse, still primarily measures general capabilities rather than fully capturing the range of potential LLM applications.
- Limited analysis of how specific prompt types or domains benefit differently from various DGMs.
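For reference on the automated-similarity point above: ROUGE-L is an F-score over the longest common subsequence (LCS) of tokens. Below is a minimal pure-Python sketch; whitespace tokenization is a simplifying assumption here, and this is not the paper's exact setup.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(reference, candidate):
    """ROUGE-L F1 between two whitespace-tokenized strings."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)
    return 2 * p * r / (p + r)

a = "the model produces a detailed step by step answer"
b = "the model gives a step by step answer"
print(round(rouge_l_f1(a, b), 3))  # -> 0.824
```

Such surface metrics reward token overlap, which is why, as the limitation notes, they may not validate *perceived* similarity without a human study.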
Experimental Designs Or Analyses:

**Re-Wild Data Mix Evaluation:**
- Sound: Compared against established baselines (Tulu-3, Magpie, Ultrachat) on multiple metrics
- Issue: The composition of the Re-Wild mix (Table 1) was "chosen heuristically" rather than through systematic optimization

**DGM Comparison Analysis (Table 2):**
- Sound: Controlled experiments across 6 models from 4 families (7B-104B parameters)
- Sound: Comprehensive evaluation across 9 benchmarks
- Issue: No statistical significance testing for performance differences between models, especially since some numbers seem quite close

**Scaling Experiments (Figure 2):**
- Sound: Consistent methodology across 100k, 250k, and 500k dataset sizes
- Sound: Tested with multiple DGMs to validate trends
- Issue: Did not extend to saturation points

**Style Inheritance Analysis (Table 3):**
- Sound: Quantitative measurement of stylistic elements using HTML tag frequency
- Issue: Limited to analyzing only 80 conversations from MTBench

**General Issues:**
- Limited target model diversity (primarily using Llama-3.1-8B Base)
- Fixed hyperparameters throughout experiments without extensive tuning
- Limited evaluation of generalization to different domains or specific prompt types
- General lack of indicators of statistical significance

Supplementary Material: I did not review the supplementary material.

Relation To Broader Scientific Literature: The paper's contributions connect to several established research threads in the LLM community:
- Extends the original WildChat dataset (Zhao et al., 2024) and parallels LMSys-Chat-1M (Zheng et al., 2024)
- Their DGM choice findings complement recent work on teacher-student model alignment
- Uses MT-Bench (Zheng et al., 2023) while examining biases in LLM judges (Feuer et al., 2024)
- Investigates scaling relationships specifically for synthetic data, complementing general scaling law research

Essential References Not Discussed: N/A.
Other Strengths And Weaknesses:

**Strengths:**
- **Multi-faceted Analysis:** Comprehensive investigation combining style inheritance, scaling laws, and model comparison in a single study.
- **Effective Visualizations:** Figure 1's spider chart elegantly illustrates comparative model performance across multiple dimensions.
- **Practical Impact:** Re-Wild demonstrates that careful DGM selection can achieve better performance with fewer samples, democratizing high-quality model training.

**Weaknesses:**
- **Limited Theoretical Framework:** The paper lacks a cohesive theoretical model explaining why certain DGMs perform better than others.
- **Narrow Application Focus:** Focuses exclusively on SFT without exploring how findings might extend to RLHF or other post-training techniques.
- **Presentation Issues:** Some figures (like Figures 3-4) would benefit from quantitative scales rather than relative word sizes.
- **Terminology Inconsistency:** The paper introduces several abbreviations and naming conventions that can be difficult to track/understand without reading in depth.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. Did you conduct experiments varying only the DGM while keeping dataset size fixed versus varying dataset size while keeping the DGM fixed?
2. What theoretical explanation do you propose for why same-family models perform better as DGMs? Is it architectural similarity, tokenization consistency, or some other factor?
3. Your main experiments use Llama-3.1-8B as the target model. Did you perform any experiments with radically different architecture families (e.g., Mistral, Qwen) as targets to verify your conclusions generalize?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We thank the reviewer for the thoughtful response. To the best of our ability, we briefly address each of your comments and questions below.

To your point about confidence intervals: we agree that these would be valuable. Therefore, if our paper is accepted, we will add 95% confidence intervals using the normal approximation method to the main paper, where they are absent. We hope that this resolves your concern in this area.

We agree that direct size comparisons with other datasets should be included. The largest such datasets we are aware of are WildChat-1M and LMSys-Chat-1M. Our dataset is more than 50x larger than either of those datasets. We will include this information in our camera-ready.

We will improve our explanation of the derivation of the chat transcripts by more thoroughly explaining the contributions of the original WildChat-1M, which included the acquisition of the human multi-turn conversations used in this work.

We agree that our claim about models learning better from same-family DGMs would benefit from more extensive cross-family experiments; we will be happy to reword the claim to be more modest in scope in the camera-ready draft.

The raw quantitative data used to generate Figures 3-4 will be made available in our repository for the camera-ready release.

To your point about the selection of which judgments to analyze potentially introducing bias, we agree, which is why we report results over all of MTBench without subselection.

Our heuristic choice of SFT mix is consistent with prior work (Tulu 3); as of right now, automated methods for subselection trail human performance (Tulu 3), and it is a relatively low-cost annotation task for human experts.

We agree that the response similarity analysis would benefit from a small-scale human study; if our paper is accepted, we will conduct such a study.
We did conduct experiments varying only the DGM while keeping dataset size fixed; these can be found in our supplementary material ZIP attached to the submission. Sadly, at this time we can offer no theoretical basis for our findings; they are purely empirical. We consider this a useful direction for future work. We did perform experiments with Qwen as well as Llama; the results can be found in Appendix Table 6. To your point about RLHF; we agree with you that it would have been a valuable contribution, and we note as much in our limitations section; we consider this a very interesting and important direction for future work, and thank you for highlighting it. However, the paper, even in its current form, required considerable resources, and we hope you will take this into account. Thanks again for your time. In the event that you now feel more positively about our paper, we would appreciate it if you updated your score.
Summary: This paper constructed a large set of 50M synthetic conversations by using the initial human prompts from WildChat and various pretrained language models to generate responses and subsequent turns. Based on this data, the authors further selected a subset and mixed it with two other sources of data (MMLU Auxiliary Train and Tulu 3 Persona Hub Algebra) to form a dataset for instruction-tuning base language models. This work also performed extensive analyses to study the effects of various choices in their dataset construction pipeline, such as whether blending different models helps.

### Update after rebuttal

The authors mentioned that they'll consider adding a section addressing some of my concerns. I'm leaning positive on this paper and am keeping my score.

Claims And Evidence:
1. A critical question is whether the advantage of increased dataset scale (50x larger than the original WildChat) compensates for the potentially lower quality of synthetically-generated responses compared to human-written responses. The paper does not address whether this large-scale synthetic data genuinely surpasses smaller-scale, higher-quality human-written responses. I strongly suggest the authors conduct experiments to determine if synthetic data can effectively compensate for presumed quality deficits at larger scales. For example, the authors could add a comparison to Figure 2, where in addition to the existing bars, they add results for simply subsampling the original WildChat to 100K, 250K, and 500K, and compare the results. I'd hope to see that at smaller scales the original data is better, but beyond a certain scale the original data is no longer available and synthetic data finally catches up.
2. Section E, Table 7 shows that blending responses from multiple language models yields no benefit (compared to using the strongest teacher).
This result appears to undermine the motivation of exploring diverse synthetic responses, raising the question: is the primary benefit observed here due solely to the strong performance of a single DGM (Qwen 2.5 72B)? Methods And Evaluation Criteria: 1. Important details about how human turns within multi-turn conversations are generated remain unclear. Since pretrained language models typically generate only assistant responses, it is unclear how human dialogue turns were synthesized, how many turns were included per conversation, and whether such multi-turn interactions improve downstream performance. Clarifying these points would substantially strengthen the paper. 2. The ReWild dataset, although performing well, incorporates external datasets (MMLU Auxiliary Train, Tulu 3 Persona Hub Algebra), somewhat diluting the primary contribution of the WildChat-50M dataset itself. Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: Yes, I read Section E. Relation To Broader Scientific Literature: WildChat-50M is a substantial resource, significantly expanding the availability of synthetic conversational data for training or fine-tuning language models, greatly benefiting the research community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: On page 4 there's too much white space. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful response. To the best of our ability, we briefly address each of your comments and questions below. Regarding experiments to determine if synthetic data can surpass the performance of smaller-scale, higher-quality human-written chat responses: unfortunately, we know of no such publicly available chat dataset. The original WildChat-1M dataset contained GPT 3.5 and GPT-4 synthetic responses (we did train models on that data; the results of those experiments are available in our artifacts). The closest comparisons of which we are aware are the large-scale post-training runs exclusively on FLAN data described in (https://arxiv.org/abs/2409.15268). FLAN has human prompts and human responses, but is not a chat dataset; that work found that FLAN-trained models significantly underperformed WildChat-trained ones, controlling for dataset size. We agree that more research needs to be done in this area. As to whether the primary benefit observed in our mix is due solely to the strong performance of a single DGM (Qwen 2.5 72B), our key contributions are (1) discovering which DGMs tend to produce higher quality responses and (2) providing experimental evidence on why they perform better. We agree that we could have more thoroughly explained the contributions of the original WildChat-1M paper, which included the acquisition of the human multi-turn conversations used in this work. We will add more information on this to our camera-ready draft. For the camera-ready version, we will also correct the issue of too much white space on page 4. --- Rebuttal Comment 1.1: Comment: Sorry for the confusion in my wording, but I meant compared to ChatGPT-generated data. For example, why would WildChat-50M be useful when there's WildChat-1M? Can quantity (50M as opposed to 1M) overcome quality (presumably at least the GPT-4 portion of WildChat-1M has higher response quality than WildChat-50M)?
--- Reply to Comment 1.1.1: Comment: Thanks for the response! This is a very intriguing question. While we did not have enough GPT-4 data in WildChat-1M to do extremely large-scale comparisons, we did conduct small-scale comparisons. In those, it did not appear to be the case that GPT-4 as a DGM was more helpful than the best open-weights models we tested. This surprising result makes more sense in light of some of our other findings about how LLMs actually learn from DGMs: they inherit style rather than factuality. If the paper is accepted, we are happy to add a section discussing this to the appendix, if you wish.
Summary: The authors propose a new synthetic dataset, WILDCHAT-50M. Compared to other open datasets, WILDCHAT-50M is much larger and includes synthetic data generated from many open-source models other than GPT. - WILDCHAT-50M is the largest public chat dataset to date - It includes responses from over 50 different open-weight models - A comparative analysis was conducted using this dataset - RE-WILD, a public SFT mix, was created and outperformed a recent mixture from Allen AI Claims And Evidence: Claim: The choice of Data Generating Model (DGM) significantly impacts downstream model performance on generalist chat benchmarks. Selecting a good DGM can compensate for small dataset size and outperform more complex methods and carefully curated SFT mixes. Evidence: the authors compare the performance of six unique pretrained models from four distinct model families, including Qwen-2.5-72B-Instruct from Alibaba, Llama-3.3-70B-Instruct from Meta, Command-RPlus from Cohere, and Jamba-1.5-Mini from AI21, benchmarked on MTBench, AlpacaEval, BBH, GPQA, MATH, MUSR, IFEval, MMLU Pro, and MixEval. They show that the results have large variance and are unpredictable. - Claim: There is no benefit in blending data generating models. Certain DGMs produce higher Synthetic Data Quality (SDQ) due to factors such as comprehensiveness, clarity, tone, and prompt responsiveness. These factors are highly heritable during the SFT process, even on generalist data. - Skills like world knowledge or mathematics are only heritable when data is curated for that specific purpose. - Large Language Models (LLMs) exhibit a high degree of similarity in prompt responses, suggesting a subtle distinction between high and low SDQ. Methods And Evaluation Criteria: For benchmarking, the authors used several benchmarks: MTBench, AlpacaEval, BBH, GPQA, MATH, MUSR, IFEval, MMLU Pro, and MixEval. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The experimental setups are straightforward.
Training framework: Axolotl; evaluation framework: Evalchemy, both with standard settings. Supplementary Material: Yes, the authors gave plenty of supplemental material to support their claims. Relation To Broader Scientific Literature: Post-training techniques for language models: The paper builds upon prior work on post-training techniques, such as Supervised Fine-Tuning (SFT), which has been shown to improve the performance of language models. The authors' contribution lies in exploring the effect of different Data Generating Models (DGMs) on SFT. Importance of dataset diversity: The paper's focus on creating a large and diverse dataset (WILDCHAT-50M) is in line with previous research highlighting the importance of dataset diversity for training robust language models. The authors' contribution is in providing a standardized benchmark suite that can be used to evaluate the performance of different DGMs. Comparative analysis of language models: The paper's comparative analysis of different DGMs is similar to previous studies that have compared the performance of various language models on specific tasks. However, the authors' contribution is in providing a comprehensive evaluation of multiple DGMs on a large and diverse dataset. Open science and reproducibility: The paper's commitment to open science and reproducibility is in line with recent efforts to promote transparency and accountability in AI research. The authors' decision to release their data, artifacts, and code publicly aligns with these values. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength: - The authors presented a large synthetic dataset that benefits the open-source community. Weakness: - Most of the claims in the paper are well established by past publications. There is little novelty in the claims. - Limited post-training approaches: Only SFT (Supervised Fine-Tuning) was used, and results may differ with other post-training methods.
- Benchmark suite limitations: The benchmark suite is standardized, balanced, and large, but does not cover all use cases, particularly highly specialized tasks (e.g., coding, legal reasoning). Other Comments Or Suggestions: Check wording: "When we fine-tune Llama-3.1 8B Base on RE-WILD, 'wehow' that our models outperform the SFT mix proposed in Tulu-3"; should 'wehow' be 'we show'? I suggest using Grammarly. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful response. To the best of our ability, we briefly address each of your comments and questions below. You are correct that "wehow" is a typo and should read “we show”. We apologize for the inconvenience, and will remedy this in the camera-ready draft if we are accepted. You are also correct that our benchmark suite contains no examples of highly specialized coding or legal reasoning tasks. This exclusion was motivated by our findings about the limited usefulness of generalist chat data for such tasks (see our conclusions at the end of 3.3). To your point about our inability to include other post-training approaches in this work, we agree with you that it would have been a valuable contribution, and we note as much in our limitations section; we consider this a very interesting and important direction for future work, and thank you for highlighting it. However, the paper, even in its current form, required considerable resources, and we hope you will take this into account. Thanks again for your time. In the event that you now feel more positively about our paper, we would appreciate it if you updated your score.
Summary: The paper proposes a new synthetic chat dataset, WildChat-50M, which consists of generated responses from 50+ open weight models. The authors then created a new SFT datamix, Re-Wild, by combining WildChat-50M with two other datasets (MMLU Auxiliary Train, Tulu 3 Persona Hub Algebra). Main contribution of the paper: - Introduction of new datasets WildChat-50M and Re-Wild and their source codes - Ablation showing its SFT performance beats other open datamix on 9 benchmarks - Analysis on the effect of data generating models (DGM) efficiency and their impact on the synthetic data quality (SDQ), and hypothesis why certain DGMs outperforms the rest on certain benchmarks - Misc empirical insights on what tricks matter and what don't, e.g. choice of DGM, diversity of DGM, context window, ... Claims And Evidence: The major claim of the paper: - WildChat-50M is a useful dataset, evidenced by its derived SFT datamix, Re-Wild outperforming other open SFT datamix baselines in finetuning llama3 8b. - Various factors contributed to its source of effectiveness, including data volume, DGM category, etc. - The choice of DGM is critical and highly diversified across different benchmarks. Evidence: Figure 1-2 showing that - Overall Re-Wild outperforms strongly over other baselines. - SFT performance improves as the data volume increases. Table 2 showing that the choice of DGM differs drastically across benchmarks Methods And Evaluation Criteria: This is an empirical paper. It is well written and straightforward to follow through. The figures and tables are well presented and the introduction, related work provide thorough context on the related literature. 
While the paper is not technically novel, I'm leaning toward acceptance because - It introduces a new SFT dataset that's open source and useful - The design of this datamix is backed by extensive study and comprehensive analysis is included to inform the reader of key design choices - Efficiency is included and key parameters are transparently disclosed Thoughts on evaluation - While the reviewer acknowledges the performance gain after SFT, how well it carries over to RLHF is unknown -- would a stronger SFT base from finetuning on this synthetic data result in a stronger RLHF candidate? In an industry setup, usually SFT is followed by RLHF and the model quality after RLHF is what truly matters. Thoughts on ablations - The SFT datamix shows strong performance -- and there might have been considerable iteration cycles before it finally worked. The reviewer imagines that synthetic data generation is not straightforward and that many decisions are crucial to its success. It would be helpful to call out any common fallacies and summarize the best practices in synthetic data generation, e.g. is there some config that is easily overlooked but would ruin the entire dataset if not carefully engineered? Such empirical insights would add great benefits to the industrial community. Minor issues - Tab 1 / row 1 -- should it be WildChat-50M instead of WildChat-Q72? Theoretical Claims: no theoretical claim was proposed Experimental Designs Or Analyses: see above Supplementary Material: yes, mainly section F Relation To Broader Scientific Literature: The following are related: - LLM post training - supervised finetuning - synthetic data generation Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: see above Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful response. To the best of our ability, we briefly address each of your comments and questions below. To your point about our inability to include RLHF in this work (and therefore to answer your question about whether stronger SFT leads to stronger post-trained models), we agree with you that it would have been a valuable contribution, and we note as much in our limitations section; we consider this a very interesting and important direction for future work, and thank you for highlighting it. However, the paper, even in its current form, required considerable resources, and we hope you will take this into account. We agree that a section documenting common pitfalls and best practices in synthetic data generation, including configurations that might ruin the entire dataset, would be valuable; we do not include it here because of space constraints, however, if the paper is accepted, we will add such a section to the appendix in the camera-ready version. We are happy to confirm that Table 1, Row 1 is in fact not a typo – the name refers to samples with WildChat-1M prompts and Qwen 2.5 72B Instruct responses, following the convention introduced in Sec. 2.2. However, as we only explicitly define naming conventions for models and here implicitly use it to name a dataset, there could be confusion -- we will clarify that the names can apply to datasets generated using DGMs in the camera-ready draft. Thanks again for your time. In the event that you now feel more positively about our paper, we would appreciate it if you updated your score.
MONA: Myopic Optimization with Non-myopic Approval Can Mitigate Multi-step Reward Hacking
Accept (poster)
Summary: The paper investigates the problem of multi-step reward hacking, in which an agent trained with an ordinary reinforcement learning algorithm, due to the statement of the RL optimization problem, learns to optimize for the sum of current and future rewards, potentially learning multi-step strategies to increase its long-horizon reward in ways that are very hard for overseers to detect, compared to e.g. single-step reward hacking. In particular, the paper introduces a way of doing Myopic Optimization with Non-myopic Approval (MONA), in which the optimization objective, rather than consisting of just the instantaneous rewards, has an additional non-myopic approval reward, which represents the overseer's estimation of how much taking the action improves the agent's future prospects. Notably, the non-myopic approval cannot depend on actual observations of the action's consequences, as that would make this no different from the normal RL optimization objective, in the sense of the agent having an incentive to control *future* states and rewards, which the MONA agent does not. The paper then presents empirical evidence in the form of three experiments—test-driven development, loan application evaluation, and a block-moving task—in which ordinary reinforcement learning leads to interesting multi-step reward hacking in the form of writing easy tests for future self, basic steganography, and confusing the reward-assigning overseer, respectively—and demonstrates that identical agents trained with a MONA objective, while (as expected) obtaining significantly less observed reward during training, perform much better in the ground-truth cases and, in some of the cases, do not reward hack at all. Claims And Evidence: Yes, the claims made in the paper are generally very well-supported by theoretical and empirical evidence. There are no problematic claims that I noticed.
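To make the contrast in objectives concrete, here is a minimal sketch in standard RL notation; the symbols $r_t$, $\gamma$, and the approval term $A$ are my own shorthand for this review, not notation taken from the paper:

```latex
% Ordinary RL: the step-t objective includes all future rewards, so the
% agent has an incentive to steer future states s_{t+1}, s_{t+2}, ...
J_{\mathrm{RL}} = \mathbb{E}_{\pi}\!\left[ \sum_{k \ge 0} \gamma^{k}\, r_{t+k} \right]

% MONA: the agent myopically optimizes the instantaneous reward plus a
% foresight-based approval term that never observes realized outcomes.
J_{\mathrm{MONA}} = \mathbb{E}_{\pi}\!\left[ \, r_{t} + A(s_{t}, a_{t}) \, \right]
```

Because $A$ is computed without observing the action's actual consequences, the MONA agent gains nothing by manipulating future states, which is the incentive the paper aims to remove.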
Methods And Evaluation Criteria: Yes; all three experiments are in environments where reward hacking is a natural outcome (that is, in all of the environments, the rewards are "mis-specified" in such a way that a non-myopic agent would figure out an undesired way to increase reward.) Theoretical Claims: The proof of theorem 3.1 seems straightforwardly correct, if not particularly strong in practice. Experimental Designs Or Analyses: I think the experimental designs are valid and well-done. I especially like the analysis shown in figure 4(c), where the counterfactual impact of the nationality is directly analyzed. Supplementary Material: Briefly looked at it. A lot of the supplementary material is actually pretty great, and I kind of wish this paper eventually gets turned into a 30-page manuscript where everything the authors wanted to include is part of one narrative (e.g., C3 is super interesting) Relation To Broader Scientific Literature: Reward hacking has been observed in many contexts, leading to potentially scary multi-step behaviors like steganography (e.g. Motwani et al, 2024). This paper presents a way to decrease the likelihood of multi-step reward hacking occurring, which significantly contributes to reinforcement learning generally, agent evaluations, and AI alignment. Essential References Not Discussed: References are good Other Strengths And Weaknesses: I think that overall the paper presents a very strong approach for dealing with multi-step reward hacking, and deals very well with the difficulties of formalization of what this means, while presenting strong experimental evidence to support the method. 
Section 4.4 is very interesting—in fact, this is the main part of the paper that I'm interested in: the reason being that if I had some sort of RL task for which I was suspicious of multi-step reward hacking being possible and I wanted to use MONA, it is sort of unclear to me what kind of reward I should use such that the model still performs fine in the potentially long-horizon ground truth setting but does not do multi-step reward hacking. Essentially, in section 2, the paper claims to "attempt to start a discussion of safety-performance tradeoffs: knowingly accept a possible reduction in observed performance to improve potentially unobservable properties". I think the existence of section 4.4 is a great strength of the paper, but it's a significant weakness, in my opinion, that the actual safety-performance tradeoff is underdiscussed. There is likely a meaningful set of Pareto frontiers here between how much implicit consideration of future outcomes the MONA reward function has and how much this affects performance at different thresholds for the acceptability of multi-step reward hacking, and this is the sort of thing that would make me significantly more excited about the applicability of this method outside of toy problems. Another thing which I would like the main paper to have discussed more is section C3; while the paper does not explicitly claim that MONA has any sort of guarantees on getting rid of reward hacking, Theorem 3.1 feels sort of misleading without the corresponding discussion; in fact, I spent a fair amount of time thinking about what the theorem actually implies, thinking of how I would write in my review that this doesn't seem to guarantee anything about multi-step reward hacking not happening, and then seeing that this is already addressed in the supplemental material (I realize there were significant space constraints, but hopefully you can fit in a discussion of this for the camera-ready version!) 
Other Comments Or Suggestions: I found myself wishing that the Camera Dropbox task got a better explanation—in section 4.1, the description didn't really make sense to me, even after looking at Figure 10 (why is pushing multiple boxes into the hole bad? How can you block the camera and push the very same box into a hole? what are "multiple rewards"?) This finally made sense to me after I read the first 4 sentences in D3; I would just move those to the main paper. It might also be nice to have a more explicit argument for why single-step reward hacking is "better"—this isn't super intuitively obvious to me, especially when you mention in Section 3.3 that non-myopic approval can be implemented by modelling a reward function that's based on human preferences/scores/feedback—aren't RMs kind of notoriously reward-hackable in difficult-to-detect ways? Questions For Authors: I don't have any questions where the response would be likely to significantly change my evaluation of the paper Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are happy you found the paper insightful, and thanks for the detailed comments on improving the clarity in the main paper. We agree that the most important open question about MONA is what the safety-performance Pareto frontier looks like in practice and we’re hoping this will be studied in future work! Unfortunately, we weren’t able to discuss all of the interesting aspects of MONA in the main paper. But we agree that Appendix C3 contains an important result and we will highlight this more in the main paper. We’ll also improve the description of the Camera Dropbox environment. Thanks for the concrete suggestion for doing this!
Summary: The paper introduces a method, called MONA, that mitigates multi-step reward hacking by limiting optimization to be myopic and adding a hand-crafted non-myopic approval reward. The paper provides three case studies to demonstrate how MONA can avoid multi-step reward hacking in comparison to ordinary RL and provides some insight on how even inaccurate foresight can work. Claims And Evidence: The paper motivates its method by claiming that it "restricts the agent to strategies within the spotlight [of strategies that human experts can understand]" on line 025. It's not clear that restricting to myopic optimization will actually produce such strategies. None of the experiments provide evidence for this claim. The paper claims that MONA has only a "reasonable performance cost" and can still lead to superhuman agents. The experiments provided are not convincing in arguing that myopic optimization will be sufficient for this. The experiments do a good job of supporting the claims that MONA prevents multi-step reward hacking. However, the evaluation is hurt by relying solely on benchmarks where it's clear that the environment was designed so that multi-step optimization will lead to reward hacking. It would be nice to see MONA more broadly tested in existing reward hacking benchmarks to understand its limitations. Methods And Evaluation Criteria: The benchmarks are well designed for highlighting the potential problems of multi-step reward hacking. However, the evaluation seems overly designed to situations in which myopic optimization will succeed and the reward has been specifically setup in a way to elicit reward hacking. A broader evaluation is necessary to understand the limitations of MONA and myopic optimization. Theoretical Claims: I checked theorem 3.1 and it is sound. The result follows immediately from the construction of the CID.
Experimental Designs Or Analyses: In the "Test-driven Development" experiment, the LM is prompted to be a "bad programmer" and write an "incorrect solution". This is a strange way to prompt an LM for programming tasks and doesn't match how a LM would be used or finetuned for such a task. It's not clear to me how this might confound the results. I would feel more comfortable about this experiment if instead a weaker LM were used that couldn't solve all of the tasks to begin with even when unprompted. The setup for the Camera experiment seems a bit backwards and is not a good reflection of a real-world scenario. Here, the non-myopic approval required by MONA is dependent on having the value function of a non-reward hacking policy, in which case we've already solved the problem. I also don't think the application of noise in this scenario is compelling: isn't the optimal (myopic) policy the same regardless of how much noise is applied to the value function? Supplementary Material: None Relation To Broader Scientific Literature: The paper relates closely to other existing ideas of using human-designed approval to mitigate reward hacking (e.g. decoupled approval) but extends these ideas by including a myopic version of the underlying reward. Essential References Not Discussed: None. Other Strengths And Weaknesses: Weaknesses: The paper does a poor job of engaging with how much myopic optimization might limit the capabilities of the model. I think a more thorough investigation of this to understand the limits of MONA would substantially strengthen the paper. As discussed in other parts of my review, I think the evaluations are overly hand-designed and don't provide convincing evidence that this approach will work in general. Strengths: The paper is well written and easy to follow. I think the chosen evaluations do a good job of highlighting setting in which myopic optimization can help prevent reward hacking. 
Other Comments Or Suggestions: The definition of reward hacking given at the start of section 2 is confusing and unsatisfying. What does "understood" refer to? What does the system designer need to understand? Since reward hacking is such a central aspect of the paper, I would like to see the paper engage with other definitions of reward hacking in the literature and carefully define how their method relates to these definitions. Questions For Authors: 1. It's difficult to parse the performance of MONA in Fig 7. In the top plot, is the value of MONA given by the first x-value (mainly failing) or the second x-value (full desired behavior)? If it's the latter, what does x=0 represent (i.e. an optimization horizon of zero)? 2. It feels like the formulation of MONA is simply pushing the burden of reward hacking onto the non-myopic approval. The non-myopic approval still needs to be carefully designed to prevent its own form of reward hacking. Indeed, if the non-myopic approval somehow related to future states in the trajectory and is not perfectly specified, it could also lead to multi-step reward hacking (as mentioned in Section 4.4). Why does delegating the non-myopic aspect of the reward function to this auxiliary reward fundamentally mitigate the reward hacking problem? 3. The paper repeatedly states that it's desirable to have policies that humans can understand so that they can have proper oversight. I understand that myopic optimization will make policies weaker, but why will it necessarily make the policies easier to understand? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review and the constructive criticism! We broadly agree with the key limitations that your review identifies (and we discuss them in Appendix B). It will be important for future work to study the competitiveness of MONA and to test MONA in more realistic environments. However, these questions were out-of-scope for the present paper, which already provides multiple major contributions. Our work is the first work to demonstrate multi-turn reward hacking during RL with LLMs; the first work to show steganography arising during end-to-end RL; and, the first work testing the effect of myopic optimization to address it. We believe these contributions are novel, impactful, and of high interest to the ICML community. Our environments are designed to make it easy to study multi-turn reward hacking. The main reason for this is that multi-turn reward hacking can be difficult to detect in practice and the practical use of multi-turn RL with LLMs is very limited at the moment. As multi-turn RL is used in more realistic environments, it will become feasible to study the competitiveness of MONA in practical settings. Let us also respond to specific questions you raised below. > I would feel more comfortable about this experiment if instead a weaker LM were used that couldn't solve all of the tasks to begin with even when unprompted. We use this unusual prompt because current LLMs have been heavily finetuned for coding which leaves little room for improvement via RL training. We tried using a smaller LM for this task, but could not find a model that cannot solve the tasks but can learn to do so via RL training (likely because coding performance in these models is already strongly elicited via finetuning). > The setup for the Camera experiment seems a bit backwards and is not a good reflection of a real-world scenario. [...] 
I also don't think the application of noise in this scenario is compelling: isn't the optimal (myopic) policy the same regardless of how much noise is applied to the value function? Using a non-reward hacking policy does, of course, give MONA an unfair advantage. The point of this experiment is not to check if MONA performs better than RL without this value function, but to study how well MONA works with noisy approval feedback. Applying noise to the value function _does_ change the optimal (myopic) policy. While the value function is the expected future reward of the non-reward hacking policy, we apply noise after the computation of the expectation. This can change the optimal policy if the relative order of states’ values changes. > is the value of MONA given by the first x-value (mainly failing) or the second x-value (full desired behavior). If it's the latter, what does x=0 represent (i.e. an optimization horizon of zero)? The second x value (x=1) corresponds to MONA. In our implementation, x=0 optimizes an all-zero reward function, so it effectively gives a random policy. This is indeed confusing, we’ll make sure to clarify this when revising the paper. > Why does delegating the non-myopic aspect of the reward function to this auxiliary reward fundamentally mitigate the reward hacking problem? The approval reward is not the primary reason MONA helps with multi-turn reward hacking. In most of our experiments (except the gridworld), we use the same reward function with MONA and ORL (also cf. our response to reviewer xDwo). The MONA agent does not get additional information compared to the ORL agent. Instead, MONA helps prevent reward hacking because of the way the rewards are propagated across the trajectory. MONA does not address single-turn reward hacking, i.e., the agent might still try to achieve high approval feedback in unintended ways. 
We believe reducing reward hacking to single-step reward hacking is progress because we expect single-step reward hacking to be significantly easier to detect (as discussed in Appendix B). We do not claim that MONA “fundamentally mitigates the reward hacking problem”. We claim that it removes the incentive for multi-step reward hacking which is particularly difficult to detect. > The paper repeatedly states that it's desirable to have policies that humans can understand so that they can have proper oversight. I understand that myopic optimization will make policies weaker, but why will it necessarily make the policies easier to understand? We typically think of the approval feedback as provided by human overseers. Approval means that humans recognise the value of this action for solving the task at hand. We expect this will make the agent’s actions more understandable to humans because the overseer will likely not approve actions that they don’t understand. Of course, this effect will depend on how exactly the approval feedback is constructed. We discuss this to some extent in Appendix B where we compare different approaches to constructing approval feedback. But, we will make sure to clarify this point in the main paper.
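Returning to the noise question above, here is a toy numeric illustration of why noise applied after the expectation can change the optimal myopic policy; all values are invented for this sketch, not drawn from our experiments:

```python
# True expected values of two successor states under the
# non-reward-hacking policy (invented numbers for illustration).
true_value = {"s_left": 1.0, "s_right": 0.9}

def greedy_choice(value):
    """Myopically pick the successor state with the higher value."""
    return max(value, key=value.get)

# Without noise, the myopic agent prefers s_left.
assert greedy_choice(true_value) == "s_left"

# One fixed noise sample applied AFTER the expectation: it can
# reorder the states' values...
noise = {"s_left": -0.3, "s_right": 0.3}
noisy_value = {s: v + noise[s] for s, v in true_value.items()}

# ...which flips the optimal myopic policy.
assert greedy_choice(noisy_value) == "s_right"
```

In other words, whenever the noise swaps the relative order of two states' values, the greedy myopic choice changes, so the optimal policy is not invariant to the noise.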
Summary: This paper introduces an approach (MONA) to mitigating the risk of multi-step reward hacking (where an agent executes undesirable, multi-step plans to achieve high reward) by decomposing the reward function into a myopic task reward and a non-myopic approval reward. They demonstrate that this can help reduce multi-step reward hacking in two short-horizon text-based tasks using LLM agents and one longer-horizon grid-world task using dynamic programming/RL.

### Update after rebuttal

I am not changing my score. The authors have replied to my questions and minor concerns adequately, and I still think the paper should be accepted, though I am not quite prepared to move from "Accept" to "Strong Accept".

Claims And Evidence: While multi-step reward hacking is not yet prevalent, the authors did a good job of motivating the importance of this problem without over-claiming. Their proposed approach is supported both by some straightforward (but still helpful) theoretical results and a range of empirical results with numerous variations and ablations. One claim that I did not fully understand was:

> This is a non-trivial modification, and we believe it affects exploration and learning in subtle ways because that part of policy-space is less heavily modified by supervized fine-tuning and reinforcement learning before we begin our own training.

in reference to "The model ... being told it is a “bad programmer” that should write an “incorrect solution” to cause it to perform poorly". What is the "part of policy-space" that the authors are referring to? Is it that which is conditioned on this prompt? While I agree that SFT and RL may not target exactly situations in which an agent has been told this, I would have thought that, e.g., there is still quite a lot of overlap or transfer (e.g., because the model is in the context of having been presented a programming problem to solve).
Methods And Evaluation Criteria: The chosen evaluation methods and baselines do make sense, although it would have been great to see slightly longer-horizon tasks for the LLM agents. This would have been closer to the problem setting the authors care about and would have also (I imagine) been more interesting when it came to the specification of the approval vs. myopic rewards, which is – to the best of my understanding – one of the main challenges of actually implementing MONA. My guess is that the authors don't do this because it is hard to get RL working for LLMs on longer-horizon tasks (as they correctly note, theirs is the first time that RL-based steganography in LLMs has been observed, despite numerous past attempts). Still, with it now being possible to create reasoning models using RL, I would be interested to see how their methods apply there.

Theoretical Claims: I did check the correctness of the theoretical claims, and I believe they are all correct. If I were being pedantic, I would say that the CID in Figure 2.b) should probably be such that the approval reward is downstream of some other variable (perhaps representing a common-sense understanding of the environmental dynamics or task structure) that is an ancestor of the future states or rewards, as this seems fundamental to the idea of what approval rewards are actually useful for. This clearly doesn't affect the presence of control incentives, but might matter for other analyses. (With that said, presumably if the reward function in Figure 2.a) is also designed deliberately then there might be similar nodes/edges there and the whole thing might get a bit unwieldy, so I'm not suggesting that the authors necessarily modify the diagram, only that they consider this point.)

Experimental Designs Or Analyses: I did check the experimental designs and analyses and they seem sound to me. I have already mentioned that one of the main issues is that the LLM tasks only have two steps.
I also noted that although the authors run RL experiments for the gridworld task their results in the main body use value iteration. I assume this is probably because for the different variations and ablations value iteration is less noisy than PPO, but it would be good if the authors could clarify the reasoning behind their decision here. Supplementary Material: I reviewed all of the supplementary material. Relation To Broader Scientific Literature: This paper falls into the general literature on reward hacking and different means of reward provision and agent supervision. While the idea of more myopic agents being inherently safer has been discussed in the past and is an assumption of other prior schemes for safer AI systems, to the best of my knowledge this is the first time it has been investigated rigorously. Essential References Not Discussed: In general, I think the authors did a good job of discussing related work. As someone reasonably familiar with this area, I didn't spot any glaring omissions. It perhaps would have been nice to see references to some of the earlier, less formal discussions on this topic by Hubinger, Demski, etc., but I'm not sure I'd classify those as _essential_. Other Strengths And Weaknesses: Overall I thought this was a good paper. The idea behind MONA is a natural one but well-executed. The paper is clear and well-presented, and the results justify the authors' approach. While in some ways this is still preliminary work and there are lots of open questions, I think it's important that other researchers have something to build on, and that the challenges with myopic vs. non-myopic supervision are more widely appreciated. I also appreciated some of the more minor results in the paper, such as the demonstrations of RL-based steganography in LLMs and MONA unlearning reward hacking behaviour in the grid-world experiment. 
Aside from the issue about it ideally being nicer to have longer-horizon LLM experiments, there are two other issues I had with the paper, one of which is concrete and one of which is admittedly quite vague, and more of an open question to the authors: 1. I found it kind of confusing that the paper is premised around the idea that an agent's reward can helpfully be decomposed into $r^I$ and $r^A$ but then in the LLM experiments there is no such decomposition. Rather, $r^A$ seems to be implicitly captured by some post-hoc ground-truth evaluation that highlights the shortcomings of the reward hacking policy. The authors note that "in practice, we may choose to implement a mechanism to provide $r_{\text{MONA}}$ without explicitly distinguishing between $r^I$ and $r^A$" but at least for the purpose of exposition I would suggest more clearly writing down how the reader should interpret these two reward structures for the experiments in the paper (even if only in a footnote). 2. It seems that the efficacy of MONA fundamentally depends on how the approval reward is specified, as this is what makes MONA agents more competitive compared to standard agents that receive non-myopic rewards. Throughout the paper I had an uneasy intuition that we were sort of just 'passing the buck' of reward specification to the approval reward, and moreover that having an approval reward capturing what the overseer views to be "reasonable" isn't all that robust. E.g. for the loan applicant summaries in Figure 6, both of these seem pretty reasonable to me, at first glance. I'd be interested to hear the authors' response to this point, as I might be missing something. Other Comments Or Suggestions: Typos: - I don't think there should be a dotted information edge from $s_i$ to $a_{i+1}$ in Figure 2.b). (As a side-note I don't think you actually explain the CID notation anywhere, not that it is really essential to understanding the diagram.) 
- In D.2.2: "State and transition functions The initial"
- It's "parsable", not "parse-able"

Questions For Authors:
1. Please could you clarify the claim that "This is a non-trivial modification, and we believe it affects exploration and learning in subtle ways because that part of policy-space is less heavily modified by supervized fine-tuning and reinforcement learning before we begin our own training." as discussed further above?
2. My (admittedly limited) understanding is that current approaches to produce reasoning LLMs via RL work simply by using a sparse end-of-task reward rather than by direct process supervision. To the extent that step-by-step rewards are hard to create and/or don't actually work as well, how well do the authors expect their approach will transfer to these new methods of training more powerful foundation model agents that can plan over longer time horizons?
3. In Figure 4.a), if "MONA achieves roughly the best score possible without considering nationality" and the labels are only swapped on 30% of data points, why is MONA's return approximately 0.7/2 instead of 0.7 (compared to ORL's return of 1)?
4. In appendix D.1.5, what is the "TFO" agent? I couldn't find the definition of this acronym anywhere.
5. I didn't understand the explanation of the results in appendix F.2. Shouldn't the ORL agent end up with a higher coordination advantage compared to MONA?
6. This is more of a high-level question, but do the authors see their method as mainly combatting reward hacking during training, during deployment, or both? To what extent is there a meaningful distinction here, and which do the authors think is the most important issue?
For example, I can imagine an agent trained to be very competent at pursuing long-term plans as being more dangerous, even if given myopic objectives once deployed, but also that a myopically trained agent might eventually become sufficiently good at chaining those myopic objectives together to pursue non-myopic goals once deployed. 7. Why is PPO not used for the main grid-world experiments (as discussed above)? Code Of Conduct: Affirmed. Overall Recommendation: 4
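A minimal sketch of the distinction at issue in this review between summed-return (ORL) and myopic (MONA-style) optimization; the two-step toy environment and its numbers are invented for illustration, not one of the paper's environments:

```python
# Illustrative two-step toy: the step-1 action "hack" looks identical to
# the overseer (same immediate reward as "honest") but enables a higher
# step-2 reward via an unintended exploit.
immediate = {"honest": 1.0, "hack": 1.0}
step2_reward = {"honest": 1.0, "hack": 2.0}

# Ordinary RL optimizes the summed return, so the step-2 reward
# propagates back and reinforces the hacking first step.
orl_choice = max(immediate, key=lambda a: immediate[a] + step2_reward[a])
assert orl_choice == "hack"

# Myopic (MONA-style) optimization scores step 1 only by its immediate
# reward, so the future exploit creates no training incentive.
best = max(immediate.values())
mona_choices = {a for a in immediate if immediate[a] == best}
assert mona_choices == {"honest", "hack"}  # no incentive to prefer hacking
```

The toy only captures the incentive structure: under myopic scoring the exploit is never reinforced, which is the sense in which MONA removes the incentive for multi-step reward hacking while leaving single-step hacking unaddressed.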
Rebuttal 1: Rebuttal: Thanks for the review. We’re glad you like the paper! We’d like to clarify the role of the approval feedback in MONA and in our experiments specifically, as this partially addresses the two concerns you brought up. > I had an uneasy intuition that we were sort of just 'passing the buck' of reward specification to the approval reward, and moreover that having an approval reward capturing what the overseer views to be "reasonable" isn't all that robust. E.g. for the loan applicant summaries in Figure 6, both of these seem pretty reasonable to me, at first glance. You are right that defining a good approval reward is important in general. However in most of our experiments we focus on the effect of myopic optimization on the training incentives and do not define the approval feedback specifically for MONA (the gridworld is the only exception). In particular, in the Loan Applications experiments, the only reward on the loan summary is a monitoring penalty for explicitly using the applicant’s nationality. The reason the MONA agent avoids learning to steganographically encode nationality in the summary is because this is not explicitly reinforced and it is _not_ because the approval feedback penalizes encoding nationality (which it doesn’t). We could also define the approval feedback based on which summary an overseer views to be “reasonable” as you describe. When training with MONA, this feedback might result in the agent learning that both summaries are equally valid and it might sometimes output one or the other. But as long as the overseer's feedback is not systematically correlated with the applicant’s nationality, the MONA agent will not learn to encode nationality in the application summaries. > r_A seems to be implicitly captured by some post-hoc ground-truth evaluation that highlights the shortcomings of the reward hacking policy This is only true in the gridworld environment. 
In both the Test-Driven Development and the Loan Applications environment we do not distinguish between r_I and r_A explicitly, and the only difference between ORL and MONA here is myopic vs. multi-turn training.

Also, let us respond to your specific questions below.

> Please could you clarify the claim that "This is a non-trivial modification, [...]?

Your interpretation is correct: we think a policy conditioned on the “You are a bad programmer” prompt might be qualitatively different from a policy conditioned on the “You are an expert programmer” prompt. We agree that there is likely “a lot of overlap or transfer”. The comment in the paper was only to highlight that this difference might affect the results (even though we don’t have concrete evidence that it does).

> how well do the authors expect their approach will transfer to these new methods of training more powerful foundation model agents that can plan over longer time horizons?

To apply MONA to reasoning models, we'd train the model myopically to produce good reasoning steps (e.g., as judged by humans). Similar to the experiments in our paper, we expect this approach to underperform ordinary RL training measured by the outcome reward. In current reasoning models the outcome reward is typically trustworthy, e.g., it is the correctness of a math problem. So we don't advocate for using MONA yet; MONA will become more relevant as we start using multi-turn RL with less trustworthy reward signals.

> why is MONA's return approximately 0.7/2 instead of 0.7 (compared to ORL's return of 1)?

The decision reward is +1 for a correct decision and -1 for an incorrect decision. So the best possible reward without using nationality is roughly 0.7 * 1 + 0.3 * (-1) = 0.4. This is described in Appendix D.2.4 but we’ll make sure to highlight it in the paper more.

> In appendix D.1.5, what is the "TFO" agent?

Sorry, this is a typo and it should be “the _MONA_ agent”.

> Shouldn't the ORL agent end up with a higher coordination advantage compared to MONA?
It’s not clear which agent should have higher coordination advantage. In MONA the steps are rewarded independently but the second step is still trained to perform well on the distribution of application summaries the first step produces, which makes some “coordination” beneficial. > do the authors see their method as mainly combatting reward-hacking in training or when during deployment, or both? MONA is primarily a training method. Giving a non-myopically trained agent a myopic goal is not enough to get the benefits we describe in the paper. Using MONA during training can also help make agents easier to oversee at deployment. For example an agent that chains myopic objectives together needs to produce intermediate results which could make it easier to catch the agent pursuing undesired goals. > Why is PPO not used for the main grid-world experiments (as discussed above)? The results with PPO are much noisier, as you can see in Appendix F. So to see similarly clear trends we’d need to average many PPO runs for each optimization horizon. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response, and I apologise that my reply is quite late, as it doesn't give the authors much time to follow up. All my other questions apart from the ones below have been satisfactorily addressed. ## $r_A$ and $r_I$ in LLM Experiments > In both the Test-Driven Development and the Loan Applications environment we do not distinguish between r_I and r_A explicitly I _think_ I understand this now, but I still believe the paper is slightly confusing on this point. 
In particular, in their response I take the authors to be saying something like "actually, in the LLM experiments we don't really have $r_A$ when we train MONA, as $r_A$ is the non-myopic bit of the reward and we only train the agent myopically" (though they should definitely correct me if I am wrong), but then in the paper they also say:

> To apply MONA, we can set $r_I = r$, but we **need to** add an additional non-myopic approval reward $r_A$ before performing myopic optimization according to (3)

(emphasis mine) which seems to suggest that $r_A$ (i.e. the non-myopic approval reward) is **necessary** in order to be able to apply MONA. If I am not mistaken in my interpretation, then perhaps a slight re-wording of this sentence and/or a clarificatory footnote along the lines of the authors' rebuttal above would help.

## Coordination Advantage

> It’s not clear which agent should have higher coordination advantage. In MONA the steps are rewarded independently but the second step is still trained to perform well on the distribution of application summaries the first step produces, which makes some “coordination” beneficial.

I still didn't get this, sorry. In the paper it says:

> Not propagating reward from the decision outcome to the application summary causes MONA to learn to write neutral summaries in Step 1 but then learn to make the correct loan decision in Step 2. So the decision making in Step 2 is crucial for achieving a high reward, which leads to a high coordination advantage.

In particular, regarding the second sentence, does the MONA agent perform better in step 2 simply because it is being trained instead of being frozen? If so, I would expect basically any training scheme (that works) to lead to a coordination advantage, because a coordination advantage will (assuming there is a reasonable summary to work from) boil down to the advantage from training the agent to succeed in step 2.
If instead the frozen model _has_ already been trained to perform step 2 well, where is the MONA agent's advantage coming from?

---

Reply to Comment 1.1.1:

Comment: Thanks for the response! Let us try to address your remaining questions below.

> "actually, in the LLM experiments we don't really have $r_A$ when we train MONA, as $r_A$ is the non-myopic bit of the reward and we only train the agent myopically" (though they should definitely correct me if I am wrong)

This is almost correct. We don't explicitly define $r_A$, but our rewards do sometimes have a non-myopic component implicitly. For example, the test-correctness in the Test Driven Development environment is not a purely myopic reward; our "test correctness" reward says something about how useful the tests are for solving the problem later, so it is more like $r_A$ than $r_I$. This is what we mean in the paper when we say "Therefore, in practice, we may choose to implement a mechanism to provide $r_{MONA}$ without explicitly distinguishing between $r_I$ and $r_A$."

> (emphasis mine) which seems to suggest that $r_A$ (i.e. the non-myopic approval reward) is necessary in order to be able to apply MONA. If I am not mistaken in my interpretation, then perhaps a slight re-wording of this sentence and/or a clarificatory footnote along the lines of the authors' rebuttal above would help.

We agree this sentence is confusing and will adapt it. Thanks for pointing this out! What we want to say here is more along the lines of "in most environments without a non-myopic approval reward, myopic optimization will likely not be competitive".

> In particular, regarding the second sentence, does the MONA agent perform better in step 2 simply because it is being trained instead of being frozen?
> If so, I would expect basically any training scheme (that works) to lead to a coordination advantage, because a coordination advantage will (assuming there is a reasonable summary to work from) boil down to the advantage from training the agent to succeed in step 2. If instead the frozen model has already been trained to perform step 2 well, where is the MONA agent's advantage coming from?

Yes, this is right. The way we set up this metric we can't really distinguish between "coordination advantage" and "learned decision making advantage". We're really missing an ablation here where we freeze step 1 and train step 2 (our frozen step 2 model is not trained at all). That's a good point and we'll clarify this in the paper! The main point of this experiment was to show that the ORL agent not only learns a single-step strategy for encoding the decision in the sentiment of the summary but also learns how to exploit this better in decision making. For that point, the distinction between "coordination" and "learned decision making" is not as important.

---

Thanks again for your constructive feedback. These are important clarifications and they will help improve the paper!
Differentiable Structure Learning with Ancestral Constraints
Accept (poster)
Summary: The paper proposes a method to integrate prior knowledge of the presence/absence of certain edges or paths into differentiable structure learning frameworks. The paper presents theoretical analyses of several strategies, and the related issues, for handling such constraints in a continuous optimization regime. The authors then propose an alternative continuous characterization of the constraints along with an order-guided optimization strategy to mitigate the identified issues.

### Update after rebuttal:

I retain my view regarding the novelty of the practical algorithm of the method, which to me is an incremental, rather limited, extension of the existing frameworks (those mentioned in the discussions). I understand that the other reviewers may hold a different viewpoint regarding this. However, I highly value the theoretical analysis of the problem at hand and I think it can contribute to the understanding of the optimization behavior induced by order constraints. After reading the authors' comment, I am slightly more positive about the work and decided to raise my score to 3, though honestly a score of 2.5 would better justify the update. This is also to encourage the authors to clarify the similarities/differences of the approach compared to existing methods, if the authors in fact leverage their results.

Claims And Evidence: The paper provides solid theoretical results for most of the key claims and extensive empirical evidence overall to substantiate the effectiveness of the proposed method. However, the following points are problematic to me.

1. In Section 4.1, the authors point out an issue of violating paths when directly optimizing Eq. (18) due to the sensitivity to different initializations for $W$. In the original implementation of NOTEARS or DAGMA, which the authors are also using, $W$ is often initialized at zeros and this has so far been shown to work sufficiently well in many settings.
Furthermore, while the case in Proposition 5 may arise, a positive view from Proposition 5 is that if the current graph contains correct paths, the loss will help prevent edges that cause conflicting paths. To fully establish it as a motivation for the proposed method, the authors should provide some empirical results for the prevalence of such an issue in practice, which is currently lacking in the manuscript.

2. The claim that this is the first work that addresses the integration of ancestral constraints into differentiable structure learning is rather misleading. At least based on the authors' discussions of related works in Appendix A, we have works with the same problem setup like Wang et al. (2024), Sun et al. (2023), and Ban et al. (2024) (where ancestral constraints are part of the partial order constraints).

Methods And Evaluation Criteria: My understanding from Section 4.2 is that the proposed method involves two steps: (1) solve for $W$ with path absence constraints by Eq. (23), (2) use the resulting $W$ as initial points and solve for the final graph by Eq. (25) with path presence constraints. From Section 4.2, the task is reformulated into a structure learning objective with partial order constraints, which Ban et al. (2024) have already addressed, as also stated by the authors in lines 736-738. Furthermore, Eq. (25) corresponds to the optimization problem with a total ordering constraint and is the relaxation of such a constrained optimization problem proposed in Eq. (7) of Ban et al. Therefore, Proposition 7 seems to me purely a re-statement of Proposition 3 of Ban et al. (2024). Furthermore, the insight from Proposition 7 is not new in that the acyclicity constraint is no longer needed given knowledge of a total ordering, which has been actively exploited; some recent works include Deng et al. (2023) and Shahverdikondori et al. (2024).

*Deng, C., Bello, K., Aragam, B., & Ravikumar, P. K. (2023, July). Optimizing notears objectives via topological swaps.
In International Conference on Machine Learning (pp. 7563-7595). PMLR.*

*Shahverdikondori, M., Mokhtarian, E., and Kiyavash, N. QWO: Speeding up permutation-based causal discovery in LiGAMs. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.*

Theoretical Claims: There is no issue with the correctness of the theoretical claims.

Experimental Designs Or Analyses: The authors conduct experiments on well-established benchmarks and report the standard metrics for evaluation. My main concern is the lack of baseline comparison and that the paper only reports the performance of the proposed method. It is not surprising that incorporating additional order-based constraints yields performance improvement, which is expected. As mentioned above, there are other algorithms for integrating constraints, certainly with different specifications and not necessarily fully differentiable (e.g., O’Donnell et al., 2006, Chen et al., 2016). Therefore, it is important to have an empirical assessment of these methods as baselines to understand how well the proposed method works. In particular, the authors should compare the method with Wang et al. (2024) and Sun et al. (2023) to empirically verify the sub-optimality of the ReLU characterization.

Supplementary Material: I have reviewed the code in the supplementary material.

Relation To Broader Scientific Literature: The paper introduces a method to incorporate general ancestral constraints into differentiable structure learning frameworks. There are previous works addressing ancestral constraints, yet with some limitations: Wang et al. (2024) assume knowledge of path lengths, and Chen et al. (2016) do not consider differentiable frameworks.

Essential References Not Discussed: As for order-based constrained optimization, Deng et al. (2023) is a recent work that improves NOTEARS by leveraging prior knowledge of topological ordering.
The authors can consider this method as an alternative for the currently proposed one in step 2.

*Deng, C., Bello, K., Aragam, B., & Ravikumar, P. K. (2023, July). Optimizing notears objectives via topological swaps. In International Conference on Machine Learning (pp. 7563-7595). PMLR.*

Other Strengths And Weaknesses:

**Strengths:** The paper offers a systematic analysis of the challenges associated with incorporating ancestral constraints into differentiable causal discovery frameworks, both from a theoretical and an empirical perspective. In my view, this is the key point of differentiation of this work compared to previous studies. While the concept of applying binary masking in causal discovery is not new, the in-depth theoretical analysis presented here is particularly interesting. Given the maturity of the gradient-based causal discovery literature, it is crucial to explore the optimization behavior and the role that different constraints play in the optimization process.

**Weaknesses:** As mentioned above, the proposed objective to me bears striking resemblances to previous results, particularly Ban et al. (2024). Apart from those, the result in Eq. (16) is also highly relevant. The proposed binary-masked characterization is a threshold-dependent relaxed equivalence of the characterization in Proposition 4 of Ban et al., which states that there is no directed path from $X_i$ to $X_j$ if and only if $\sum_{k=1}^{d} \big((W \circ W)^k\big)_{i,j}=0$. Here $|W|$ and $W \circ W$ are different ways to ensure positivity of the weighted adjacency matrix (Wei et al., 2020). It is acceptable to build on or reuse existing results, but proper citations need to be included, especially when the authors have acknowledged these works in the paper.

Other Comments Or Suggestions: I currently vote for a weak reject mainly due to the questionable novelty of the proposed theoretical results in relation to the existing works.
However, I am willing to raise the score if the authors could provide explanations as well as additional empirical evidence to demonstrate the competitiveness or superiority of the proposed method. Some further comments are:

1. In line 734, the ReLU constraints are discussed in works like Wang et al. (2024) and Sun et al. (2024), which also need to be explicitly cited in the Section 3.1 discussion.
2. Furthermore, I could not find the implementation of DAGMA in the provided code. Despite the filename, the code contains a NOTEARS implementation.
3. The authors should summarize the final algorithm to highlight what the key contribution is.

Questions For Authors: The current theoretical analyses are conducted in the linear case. Could the authors further comment on the optimization behavior for the non-linear case, with respect to the results in Propositions 4 and 5?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
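The path characterization discussed under Weaknesses (no directed path from $X_i$ to $X_j$ iff $\sum_{k=1}^{d} ((W \circ W)^k)_{i,j}=0$) can be checked numerically. A minimal sketch; the function name and the toy adjacency matrix are our own illustration, not from the paper's code:

```python
import numpy as np

def has_directed_path(W, i, j):
    # Sum of the first d powers of the Hadamard square of W: the (i, j)
    # entry of the sum is positive iff a directed path X_i ~> X_j exists.
    d = W.shape[0]
    A = W * W  # non-negative weighted adjacency (elementwise square)
    total, P = 0.0, np.eye(d)
    for _ in range(d):
        P = P @ A
        total += P[i, j]
    return total > 0

# Chain graph X0 -> X1 -> X2 (edge weights are arbitrary nonzero values).
W = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, -0.5],
              [0.0, 0.0, 0.0]])
assert has_directed_path(W, 0, 2)      # path X0 -> X1 -> X2 exists
assert not has_directed_path(W, 2, 0)  # no reverse path
```

Truncating the sum at $d$ powers suffices because any simple directed path in a $d$-node graph has length at most $d$.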
Rebuttal 1:

Rebuttal: Thank you for your careful review. We first address your major concerns regarding novelty and technical differences. We abbreviate differentiable structure learning for causal discovery as DCD.

# Novelty over Partial Orders (Ban et al. (2024))

You consider path existence as part of partial orders. In fact, path existence imposes a **stronger** constraint than partial ordering for DCD; it is partial ordering that forms a subset of path existence constraints. Consider this example: Suppose variables $A$ and $B$ are not reachable through any directed path in the true graph. Still, there exists a total ordering $\pi$ satisfying the true topological order where $A \prec B$. Here, the partial order constraint $A \prec B$ is correct (not contradicting ground truth), yet the path $A\leadsto B$ does not exist. Since the existence of path $A \leadsto B$ implies $A \prec B$, but not vice versa, path existence provides strictly stronger structural information. Additionally, optimizing DCD with path existence constraints introduces unique challenges (inequivalence and inherent gradient conflicts) absent in partial-order-based DCD. Therefore, DCD with ancestral (path existence) constraints represents a novel, significantly more challenging task than DCD with partial ordering.

# Essential Technical Differences from Prior Work

- Chen et al. (2016) consider discrete score-and-search methods and thus do not encounter differentiability or the related issues described in Sec. 1.
- Sun et al. (2023) focus exclusively on edge constraints.
- Wang et al. (2024) assume the length of the path $(i,j)$ is known beforehand and formulate the loss as $\text{ReLU}(\epsilon - (|W|^k)_{i,j})$. Even ignoring this strong assumption, their formulation is theoretically incorrect and suffers from the same inequivalence issue as the naive loss $\text{ReLU}(\epsilon - \sum_k (|W|^k)_{i,j})$ discussed in Sec. 3.1. Moreover, they do not address the critical order-violation issue.
Thus, it is rational to state that this work is the first to systematically (and correctly) address DCD with ancestral constraints.

# Evidence on Order-Violation

We now provide explicit evidence of the order-violation issue. Specifically, we report:
- **@OrderViolation**: Number of constraints specifying paths that violate the recovered ordering (i.e., reversed paths).
- **@FailedPaths**: Number of unsatisfied constraints.
- **Data loss**, F1 score, and SHD.

We compare PE-NOTEARS-zero (zero initialization) with PE-NOTEARS (order-guided optimization) across varying path-loss weights. Results (30 nodes, ER2 graph, linear SEM with Gaussian noise, 80% paths) are summarized below as **PE-NOTEARS / PE-NOTEARS-zero**.

| PE loss weight  | 1           | 10          | 50          | 100          | 500          |
| --------------- | ----------- | ----------- | ----------- | ------------ | ------------ |
| F1              | 0.68 / 0.60 | 0.68 / 0.41 | 0.68 / 0.22 | 0.68 / 0.19  | 0.68 / 0.17  |
| Data Loss       | 8.4 / 21.3  | 9.0 / 45.9  | 9.1 / 115.1 | 10.3 / 160.5 | 11.4 / 276.6 |
| @OrderViolation | 0.0 / 0.1   | 0.0 / 0.8   | 0.0 / 3.7   | 0.0 / 6.7    | 0.0 / 19.0   |
| @FailedPaths    | 0.3 / 9.3   | 0.2 / 10.7  | 0.5 / 27.2  | 1.2 / 22.2   | 0.7 / 24.2   |

The order-violation issue from Example 1 and Proposition 5 occurs when the path loss significantly impacts optimization. To simulate this, we vary the path-loss weight from 1 to 500. As the path-loss weight increases, @OrderViolation (the count of order-violating paths) grows substantially, dominating @FailedPaths, aligning with our identified issue. In this case, reducing the path-loss impact with zero initialization leads to worse optimization overall. In contrast, PE-NOTEARS with order guidance consistently achieves stable and strong performance even under significant path-loss weights, aligning well with stable prior-based structure learning. These results clearly illustrate the order-violation issue arising from zero initialization and demonstrate the effectiveness of order guidance in resolving this issue.
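As a rough sketch of how the @OrderViolation count reported above can be computed (the function name and the toy constraint set are our own illustration, not from the released code):

```python
def order_violations(path_constraints, order):
    # Count required paths (i, j) whose direction is reversed by the
    # recovered topological ordering, i.e., j precedes i in the order.
    pos = {v: k for k, v in enumerate(order)}
    return sum(1 for (i, j) in path_constraints if pos[i] > pos[j])

path_constraints = [(0, 2), (1, 2), (3, 0)]  # required paths i ~> j
recovered_order = [0, 3, 1, 2]               # topological order of the estimate
assert order_violations(path_constraints, recovered_order) == 1  # (3, 0) reversed
```

A constraint counted here cannot be satisfied by any DAG consistent with the recovered ordering, which is why @OrderViolation lower-bounds @FailedPaths in the table above.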
# Questions Propositions 4 and 5 also hold in the nonlinear setting: **Proposition 4:** Nonlinear models typically define edge weights $ W $ from parameters $ \theta $ through non-negative transformations, e.g., $ W_{i,j} = \sqrt{\sum_{q=1}^m\theta_{i,j,q}^2} $. In such cases, the absolute value of edge weights directly correlates with the magnitude of parameters. Since path existence constraints and acyclicity constraints push $ |W_{i,j}| $ in opposite directions, they consequently push parameters $ \theta_{i,j,q} $ in opposing directions. Thus, the gradient conflict identified in Proposition 4 persists in nonlinear settings. **Proposition 5:** In nonlinear models, regularization ensures parameters $ \theta $ remain bounded, implying edge weights $ W $ are also bounded. Thus, the argument presented for Proposition 5 directly applies. # Other Responses Due to the character limit, please see responses to Reviewer mwJt for this part. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. I understand that the path existence constraint poses a challenge in its own right and appreciate the thorough analyses up to Section 4. I also understand the difference in the motivations of the two papers. However, when it comes to practical methodology, the technical formulation is indeed the same to me. Specifically, the final objective in Eq. (25) is the NOTEARS relaxation of the primal problem in Eq. (7) of Ban et al. The proposed objective ends up making use of partial/total ordering as done in Ban et al., since the knowledge of the ordering seems sufficient to deal with path absence/existence. This is what undermines the significance of the contribution to me. Furthermore, the improvement in performance compared to Li et al. seems insignificant to me. For the above reasons, I remain unconvinced of the practicality of the proposed method, and while I appreciate the authors' efforts, I maintain my evaluation. 
--- Reply to Comment 1.1.1: Comment: Thank you again for your detailed consideration. To clarify, **Equation (25) is not the final objective of our approach (not even a part of the approach); rather, Equation (18) serves as our final objective function**, explicitly addressing path existence-based structure learning. This is stated on line 287, right column, of our paper. Here are our detailed responses to your concerns: 1. **Partial orders alone are insufficient to handle path existence constraints.** Partial orders can equivalently be viewed as constraints on path absence (Eq. 23 in our paper). Thus, while they prevent certain erroneous edges, they **do not actively constrain the existence of particular paths**. Practically, path existence-based differentiable structure learning **typically recovers more missing edges** compared to partial-order-based approaches, even at the risk of introducing a small number of erroneous edges (see our response to reviewer mwJt, point 4, and Figure 12 in Appendix D). This is particularly crucial in practical scenarios where **discovering previously unknown causal relationships is prioritized over strict correctness**. 2. **Major technical differences to Ban et al. (2024)**: **The only technical intersection** with partial-order-based methods (e.g., Ban et al., 2024) is our adoption of solutions from partial-order-based structure learning as initializations. However, this initialization is **solely to address the order-violation issue**, which we explicitly identify as **a unique challenge arising from path existence constraints**. **Our primary method and final objective function explicitly target path existence-based structure learning (Eq. 18).** The equation **(Eq. 25) you mentioned is not part of our proposed methodological approach**. It is provided solely within Proposition 7 to clarify theoretical insights on why partial order-based solutions provide effective initialization. 3. **Comparison to Li et al. 
(2018)**: Li et al.'s method is a discrete search strategy explicitly designed to incorporate ancestral constraints by direct graph evaluation during the search. Given identical prior knowledge, it is **natural to observe comparable improvements** between discrete and differentiable structure learning methods, as **both correctly leverage prior knowledge.** Therefore, our observed performance improvements (with **differentiable structure learning consistently yielding stronger overall results**) align with expectations, highlighting the practical efficacy of our differentiable path existence-based approach. 4. **Contribution Clarification:** Our main contribution lies in enabling current differentiable structure learning methods to integrate ancestral constraints—the remaining significant structural constraints (apart from edge constraints and partial orders) not previously incorporated into differentiable structure learning. Technically, continuously optimizing with path existence constraints requires: **1) an equivalent characterization of path existence and 2) favorable optimization dynamics despite gradient conflicts between path existence and acyclicity**. Our paper identifies and addresses these issues explicitly. Practically, ancestral constraint-based differentiable structure learning combines the power of neural networks and GPU resources with the reliability of external knowledge-guided constraints to **efficiently and comprehensively recover authentic, previously unknown causal mechanisms**. Furthermore, this work provides new ideas applicable to broader AI research benefiting from NOTEARS-based causal analyses, such as computer vision [1], fault diagnosis[2], and multi-agent systems [3]. [1] Zhang, C., Jia, B., Edmonds, M., Zhu, S. C., & Zhu, Y. ACRE: Abstract causal reasoning beyond covariation. CVPR 2021. [2] Dai, E., & Chen, J. Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time Series. ICLR 2022. [3] Ruan, J., Du, Y., et al. 
GCS: Graph-Based Coordination Strategy for Multi-Agent Reinforcement Learning. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems. We appreciate your engagement with our responses and hope this clearly illustrates the practical and technical contributions of our work.
Summary: This paper introduces a framework for integrating ancestral constraints into differentiable structure learning of causal DAGs, addressing challenges in representing path existence and order violations. The authors propose a binary-masked characterization method and an order-guided optimization strategy to improve constraint adherence. Theoretical analysis and empirical evaluations on synthetic and real-world datasets demonstrate the method’s effectiveness. Claims And Evidence: All claims in the paper are well motivated and supported with theoretical analysis. Methods And Evaluation Criteria: 1. The proposed method appears well-motivated and reasonable for integrating ancestral constraints into differentiable structure learning. 2. Would it be possible to compare with Wang et al.’s work? 3. What are the additional computational overheads (e.g., running time, number of iteration to converge) compared to standard NOTEARS? 4. Section 5.4: Why does adding more constraints sometimes lead to suboptimal performance? Some discussion on this trade-off would be helpful. For example, given a large dataset with over 200 nodes, would incorporating constraints overwhelm the optimization process and lead to a performance downgrade? 5. D.1. Time Complexity: The comparison appears somewhat vague. Can I interpret the actual running time as **PO-NOTEARS + PE-NOTEARS-path**? If so, the comparison between PE-NOTEARS-path and NOTEARS seems unfair, as PE-NOTEARS-path benefits from a well-optimized initial guess. Theoretical Claims: I have not examined the proofs in detail. Experimental Designs Or Analyses: Please refer to my comments above. Supplementary Material: I've checked "A. Related Work" and "D. Complete Experimental Results and Analysis." Relation To Broader Scientific Literature: The work is important because it provides a solid foundation for differentiable causal structure learning by enabling the incorporation of constraints beyond simple edge presence or absence. 
Essential References Not Discussed: I’d encourage the authors to discuss “Scalable Differentiable Causal Discovery in the Presence of Latent Confounders with Skeleton Posterior” in the related work, as it offers an alternative approach to incorporating edge constraints into the optimization process without directly setting W_ij=0. Other Strengths And Weaknesses: Strengths - The paper is well organized and the presentation is very clear. - The method is clean and supported with theoretical analysis. Weaknesses - Some concerns in the evaluation. Please refer to my comments above. Other Comments Or Suggestions: N/A Questions For Authors: Please refer to my questions in "Methods And Evaluation Criteria." Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your careful review. Here are our responses. # Responses 2. Wang et al. formulate the path existence loss as $\text{ReLU}(\epsilon - |W|^k_{i,j})$ (Eq. (22) in their paper), assuming a known path length $k$. Without this assumption, the formulation naturally generalizes to $\text{ReLU}(\epsilon - \sum_k |W|^k_{i,j})$, precisely matching the intuitive function $\bar{p}(W)$ introduced in Sec. 3.1. Thus, the ablation labeled *-intuitive* reflects Wang et al.'s method under unknown path length. We will clarify this explicitly in the revised manuscript. 3. See point 5. 4. Specifying path existence (PE) benefits the recovery of missing edges without needing to identify them exactly, but it also risks introducing extra edges along the path. The observed degradation in F1 (or increased SHD) when adding constraints arises because the number of newly introduced extra edges outweighs the number of correctly recovered missing edges. However, since our method begins optimization from a good solution obtained by integrating partial order (PO) constraints—where extra edges have already been mitigated—the FDR of PE-NOTEARS consistently improves compared to NOTEARS. For clearer insight, we report the results (including a new case of 200 nodes) of structure learning under PO and PE constraints as **PO-NOTEARS / PE-NOTEARS (/ NOTEARS)**. Settings are ER2 graphs, 20 samples, linear Gaussian SEM, and 80% paths. | @Node | 10 | 20 | 30 | 50 | 200 | | ------------ | ----------- | ----------- | ----------- | ----------- | ------------------ | | TPR | 0.78 / 0.81 | 0.74 / 0.79 | 0.69 / 0.75 | 0.62 / 0.66 | 0.39 / 0.42 / 0.21 | | FDR | 0.09 / 0.13 | 0.13 / 0.19 | 0.15 / 0.19 | 0.22 / 0.26 | 0.20 / 0.24 / 0.53 | | PathRecovery | 0.96 / 0.99 | 0.84 / 0.98 | 0.81 / 0.96 | 0.74 / 0.89 | 0.46 / 0.61 / 0.12 | PE constraints improve the TPR over PO-NOTEARS and recover more missing causal paths, but they can also increase the FDR. 
Nevertheless, **PE-NOTEARS consistently outperforms NOTEARS**, including the case of 200 nodes. Importantly, explicitly enforcing path existence is crucial in scenarios where **uncovering previously unknown causal mechanisms** is prioritized over simply ensuring correctness for all identified causal edges. 5. You are correct. The reported runtimes for PO-NOTEARS and PE-NOTEARS-path reflect the two stages of PE-NOTEARS separately. We'll include the total PE-NOTEARS runtime in the revised manuscript. Iterations remain similar (~20–30) across NOTEARS, PO-NOTEARS, and PE-NOTEARS-path. Additionally, we've reduced complexity to $O(d^3 \log d)$, as addressed in point 3 for reviewer tD5o. ## **Partial responses to Reviewer MDE7** ## Methods We assume you refer to Eq. (18) (path existence-based DCD) rather than Eq. (25) in step (2). 1. The main contribution is identifying this order-violation issue and effectively utilizing partial orders to mitigate it, rather than proposing a novel partial order-based DCD algorithm itself. 2. Proposition 7 states that arbitrary optima of NOTEARS correspond precisely to optima under certain total orderings, fundamentally differing from Proposition 3 by Ban et al. (which states convexity given a total ordering). This clarifies that total orderings fully characterize NOTEARS optima, highlighting that partial orders refine total orderings towards better solutions. ### Experiments 1. Score-and-search methods perform poorly on SEM-generated data due to scoring mismatch (e.g., BIC). Thus, we evaluate performance on real-world data (Sachs-500), reporting the F1 scores below under the same set of random paths. | @Paths | 0 | 25 | 50 | 75 | | ---------------- | ---- | ---- | ---- | ---- | | Li et al. (2018) | 0.33 | 0.43 | 0.41 | 0.48 | | PENOTEARS | 0.42 | 0.42 | 0.49 | 0.54 | 2. The method in Wang et al. (2024) corresponds exactly to our intuitive path existence formulation without known path-lengths (See point 2, responses to reviewer mwJt). 
Hence, the ablation labeled *-intuitive* reflects their approach, which will be noted. ## References We will introduce the relevant work by Deng et al. (2024) in the paper. ## Weaknesses We will explicitly cite Proposition 6 as Proposition 2 from Ban et al. (2024). However, Eq. (16) is independent of Ban et al. (2024): it defines a threshold-based binary mask indicating absent paths, following standard practice in DCD (originally from NOTEARS' acyclicity loss forbidding self-loops, see Sec. 2.3). Although Ban et al. (2024) also use path absence, their intent (partial orders) differs from ours, where we introduce $b(W)$ specifically to address the inequivalence issue in path-existence characterization. ## Suggestions - We will add these citations in Sec. 3.1. - We apologize for the oversight regarding DAGMA codes and will correct it. - We will include a summary algorithm highlighting key contributions.
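To make the *-intuitive* ablation discussed in point 2 above concrete, the loss $\text{ReLU}(\epsilon - \sum_k |W|^k_{i,j})$ over constrained pairs might be sketched as follows. This is an illustrative reconstruction, not the released code; the function name and the margin `eps` are assumptions, and the explicit power loop shows where the naive $O(d^4)$ cost comes from.

```python
import numpy as np

def intuitive_path_loss(W, paths, eps=0.1):
    """ReLU(eps - sum_{k=1..d} |W|^k_{ij}) summed over the constrained
    pairs (i, j); d matrix products give the naive O(d^4) cost."""
    A = np.abs(W)
    d = A.shape[0]
    series, P = np.zeros_like(A), np.eye(d)
    for _ in range(d):
        P = P @ A          # P = |W|^k after k iterations
        series += P        # accumulate sum_k |W|^k
    return sum(max(eps - series[i, j], 0.0) for i, j in paths)
```

The rebuttal's point is that this relaxation, unlike the binary-masked characterization, is not an equivalent encoding of path existence.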
Summary: The paper addresses the challenge of integrating ancestral constraints into differentiable structure learning for causal directed acyclic graphs (DAGs). The key problem is how to incorporate prior knowledge about the existence or absence of paths between variables (ancestral constraints) into the learning process, which is typically formulated as a continuous optimization problem. The authors identify two main issues: the non-equivalence of relaxed characterizations for representing path existence and the order violations among paths during optimization. To tackle these challenges, the paper proposes a **binary-masked characterization method** and an **order-guided optimization strategy**. The binary-masked method ensures an equivalent representation of path existence by selectively activating constraints based on whether a path already exists. The order-guided strategy enforces partial order constraints implied by path existence, ensuring that the optimization process avoids order-violating paths and converges to favorable optima. ### Key Contributions: 1. **First Systematic Approach**: This is the first paper to systematically address the integration of ancestral constraints into differentiable structure learning, allowing the use of abstract prior knowledge to guide the discovery of fine-grained causal mechanisms. 2. **Binary-Masked Characterization**: The authors propose a binary-masked continuous relaxation that accurately represents path existence, addressing the non-equivalence issue in previous relaxed characterizations. 3. **Order-Guided Optimization**: The paper introduces an optimization strategy that enforces partial order constraints derived from path existence, ensuring that the optimization process avoids suboptimal solutions caused by order violations. ### Methodology: - **Path Existence Constraints**: The authors formulate the problem of path existence constraints using a continuous relaxation of the path existence condition. 
They show that previous relaxed characterizations fail to equivalently represent path existence and propose a binary-masked method to address this issue. - **Order-Guided Optimization**: The optimization strategy begins by enforcing partial order constraints implied by path existence, deriving a DAG that satisfies all specified path orders. This order-consistent DAG is then used as the initial adjacency matrix to optimize the path existence-constrained problem. ### Theoretical and Empirical Validation: - The paper provides theoretical justification for the correctness of the proposed approach, showing that the binary-masked characterization and order-guided optimization strategy effectively address the challenges of integrating ancestral constraints. - Experimental evaluations on both synthetic and real-world datasets demonstrate the effectiveness of the proposed method. The results show that the approach outperforms baseline methods and ablation variants, achieving better adherence to path existence constraints and higher accuracy in recovering causal structures. Claims And Evidence: To the best of my knowledge, there is sufficient evidence to the claims. Methods And Evaluation Criteria: #### **Strengths:** 1. **Novel Contribution to Differentiable Structure Learning**: - The paper makes a significant contribution by addressing the integration of **ancestral constraints** (path existence constraints) into differentiable structure learning. This is a novel and important extension of existing methods, as it allows for the incorporation of abstract prior knowledge about causal relationships, which is often available in real-world applications. - The proposed **binary-masked characterization** and **order-guided optimization strategy** are innovative solutions to the challenges of representing path existence and avoiding order violations during optimization. 2. 
**Theoretical Justification**: - The paper provides a strong theoretical foundation for the proposed methods, including proofs and lemmas that justify the correctness of the binary-masked characterization and the order-guided optimization strategy. This theoretical rigor enhances the credibility of the approach. 3. **Empirical Validation**: - The authors conduct extensive experiments on both **synthetic and real-world datasets**, demonstrating the effectiveness of their approach. The results show that the proposed method outperforms baseline methods and ablation variants, particularly in terms of **path recovery rate** and **adherence to path existence constraints**. - The experiments also highlight the robustness of the method across different settings, including varying numbers of nodes, edge densities, and sample sizes. 4. **Generalizability**: - The proposed method is shown to be compatible with different backbone differentiable structure learning algorithms, such as **NOTEARS**, **DAGMA**, and **GOLEM**. This demonstrates the broad applicability of the approach across various frameworks. 5. **Addressing Order Violations**: - The **order-guided optimization strategy** is a key strength, as it effectively addresses the issue of order violations among paths, which can lead to suboptimal solutions. By enforcing partial order constraints, the method ensures that the optimization process converges to solutions that respect the specified path existence constraints. --- #### **Weaknesses:** 1. **Non-Differentiability of Binary Mask**: - A significant drawback of the proposed method is that the **binary mask** $ b(\mathbf{W}) $ is **not differentiable**. This undermines the core idea of differentiable structure learning, as the optimization process relies on gradient-based methods. The non-differentiability of $ b(\mathbf{W}) $ could lead to instability during optimization and make the method less effective in practice. 
- The authors should consider alternative approaches to ensure differentiability, such as using smooth approximations of the binary mask or exploring other differentiable representations of path existence. 2. **Gradient Conflicts**: - The proposed constraints for ensuring path existence can lead to **gradient conflicts** with the acyclicity loss. This is a critical issue, as gradient conflicts can hinder the optimization process and result in suboptimal solutions. - A simpler and more effective solution would be to relax the problem by constraining the **absence of the reverse edge** $(j, i)$ instead of enforcing the existence of the path $(i, j)$. This can be done by setting $ p(W)_{ij} = 0 $, which avoids gradient conflicts and is easier to optimize. The authors should consider adding this approach as a baseline in their experiments. 3. **Computational Complexity**: - The proposed method involves computing $ p(W) = \sum_{k=1}^{d} |W|^k $, which has a time complexity of $ O(d^4) $. This is significantly higher than the $ O(d^3) $ complexity of standard acyclicity losses, making the method computationally expensive, especially for large graphs. One possible way is to use the approach in Zhang, Zhen, et al. "Truncated matrix power iteration for differentiable DAG learning." Advances in Neural Information Processing Systems 35 (2022): 18390-18402, which should be $O(d^3 \log d)$. A possible further speedup can be found in Zhang et al., "Analytic DAG Constraints for Differentiable DAG Learning," ICLR 2025, which is $O(d^3)$. - While the authors suggest that GPU acceleration can mitigate this issue, the high computational cost remains a limitation, particularly for large-scale applications. 4. **Limited Discussion on Path Absence Constraints**: - The paper briefly mentions path absence constraints but does not explore them in depth. Path absence constraints are easier to enforce and align well with the acyclicity loss, as they do not cause gradient conflicts. 
The authors should consider expanding their discussion on path absence constraints and comparing their performance with path existence constraints. --- #### **Suggestions for Improvement**: 1. **Address Non-Differentiability**: - The authors should explore differentiable approximations of the binary mask $ b(\mathbf{W}) $ or alternative representations of path existence that are fully differentiable. This would improve the stability and effectiveness of the optimization process. 2. **Incorporate Constraints without Gradient Conflicts**: - The authors should consider adding the simpler approach of constraining the absence of the reverse edge $(j, i)$ as a baseline. This approach avoids gradient conflicts and is easier to optimize, providing a useful comparison to the proposed method. 3. **Better Time Complexity** - The time complexity of the approach may be reduced to $O(d^3)$. Theoretical Claims: I have checked the proofs in detail and they should be correct. Experimental Designs Or Analyses: The experimental design is good and sound. Supplementary Material: I have checked the proofs. Relation To Broader Scientific Literature: I did not find any issues in this part. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: See the weaknesses and strengths above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your detailed review and thoughtful comments. Here are our responses. ### 1. Non-Differentiability of Binary Mask We provide empirical evidence that incorporating the binary masked path existence loss, $\bar{p}(W)\circ b(W)$, does not compromise optimization stability. To evaluate this, we define an edge as *unstable* if its weight lies near the threshold $\epsilon_0 = 0.3$ used to compute $b(W)$. Here, an edge is unstable when $|W_{i,j}| \in [0.29, 0.31]$. We then report the proportion of unstable edges relative to the total recovered edges for both PE-NOTEARS (which employs the binary mask) and the original NOTEARS method. Settings are: sample size $n=20$, ER2 graphs, Gaussian noise, and a path percentage $p=80$. | @Node | NOTEARS | PE-NOTEARS | | ---- | --------------- | --------------- | | 10 | $3.4 \pm 2.7$% | $5.5 \pm 4.7$% | | 20 | $5.8 \pm 3.1$% | $5.7 \pm 2.6$% | | 30 | $5.5 \pm 1.8$% | $5.4 \pm 1.5$% | | 50 | $10.0 \pm 2.0$% | $10.0 \pm 2.1$% | The results show that the proportion of unstable edges in PE-NOTEARS is comparable to that in NOTEARS. If the non-differentiability of $b(W)$ were to introduce instability, we would expect a significantly higher number of edges with weights near the threshold, where the binary mask could abruptly flip between 0 and 1. However, the results indicate that this is not the case. Thus, by demonstrating that weights rarely fall into the narrow unstable interval, we confirm that the binary masked loss remains stable. The stable behavior of the underlying continuous function $\bar{p}(W)$ ensures that most updates occur in regions where $b(W)$ is constant, preserving smooth optimization dynamics. This evidence supports that the binary mask $b(W)$ does not adversely affect the stability of the optimization process. We also explored a *purely continuous formulation* for characterizing path existence, as detailed in Appendix B.2. 
This approach leverages the logical duality between path presence and absence, ensuring a strict correspondence with the existence of at least one path. However, the dual formulation involves computing a product whose number of terms grows exponentially with the number of nodes, which leads to numerical stability issues in practice. ### 2. Gradient Conflicts The gradient conflict arises inherently between path existence and acyclicity constraints, as illustrated in Proposition 4. When enforcing path existence, we push edge weights toward larger absolute values, whereas the acyclicity constraint forbids cycles, thereby pushing edge weights in the opposite direction. A loss whose gradient is consistent with acyclicity thus **cannot simultaneously enforce the existence of specific paths**. Your suggested formulation—the absence of reversed paths—actually represents partial ordering implied by path existence, which is a **weaker** condition that does not enforce explicit paths. Indeed, optimizing with partial-order constraints serves as the first stage of our method to derive order-consistent initializations for paths, as detailed in Sec. 4.2. We also empirically compare partial-order-based and path-existence-based structure learning in Fig. 12 (Appendix D). Results demonstrate that partial-order-based methods satisfy fewer path-existence constraints than our explicit path-existence-based method. See point 4, responses to Reviewer mwJt for more details. ### 3. Computational Complexity Thank you for your suggestion. We implemented the accelerated matrix power (and gradient) computation algorithm—fast TMPI with $O(d^3\log d)$ complexity—proposed in *Truncated Matrix Power Iteration for Differentiable DAG Learning*. Below, we compare fast TMPI against the original direct matrix power operation under the setting: ER2 graph, linear SEM with Gaussian noise, and 80% prior paths. Results are presented as **Direct Matrix Power / fast TMPI** for PE-NOTEARS. 
| @Node | 10 | 20 | 30 | 50 | | -------- | ----------- | ------------ | ------------- | ------------- | | Time (s) | 29.8 / 26.7 | 106.4 / 90.3 | 269.3 / 188.3 | 973.6 / 500.8 | | F1 | 0.87 / 0.88 | 0.77 / 0.77 | 0.68 / 0.68 | 0.61 / 0.61 | We observe that PE-NOTEARS with fast TMPI achieves a significant speedup on large-scale graphs while maintaining comparable performance. This confirms that fast TMPI integrates effectively into our approach, reducing time complexity from $O(d^4)$ to $O(d^3\log d)$, making it competitive with the $O(d^3)$ complexity of NOTEARS. ### 4. Discussion on Path Absence Constraints We will further discuss the connection between path absence constraints and partial orders to provide deeper insights. Additionally, we will present a direct comparison between path existence and path absence constraints (currently shown in separate figures) to improve clarity.
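Pulling together points 1 and 3 above, the sketch below combines a doubling-based computation of $\sum_k |W|^k$ (the $O(d^3 \log d)$ idea behind fast TMPI, using only $O(\log d)$ matrix products) with a binary mask that deactivates the penalty for constraints whose path already exists in the thresholded graph. This is a simplified reading of the rebuttal under stated assumptions (function names, `eps`, and `eps0` are illustrative), not the authors' implementation.

```python
import numpy as np

def power_series(W, d):
    """sum_{k=1..m} |W|^k for some m >= d, via doubling:
    S_{2m} = S_m + A^m S_m, so only O(log d) matrix products."""
    A = np.abs(W)
    S, P, m = A.copy(), A.copy(), 1
    while m < d:
        S = S + P @ S   # extend the partial sum to length 2m
        P = P @ P       # A^m -> A^{2m}
        m *= 2
    return S

def masked_path_loss(W, paths, eps=0.1, eps0=0.3):
    """Penalize ReLU(eps - series_ij) only for constrained pairs with
    no path in the eps0-thresholded graph (the binary mask b(W))."""
    d = W.shape[0]
    series = power_series(W, d)
    hard = power_series((np.abs(W) > eps0).astype(float), d)
    loss = 0.0
    for i, j in paths:
        if hard[i, j] == 0.0:            # path absent: constraint active
            loss += max(eps - series[i, j], 0.0)
    return loss
```

The doubling identity keeps the series computation cheap, while the mask leaves the loss at zero for already-satisfied constraints, matching the "selective activation" described in the rebuttal.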
Summary: The paper addresses the problem of incorporating ancestral (path) constraints into differentiable causal structure learning methods, specifically NOTEARS-style algorithms. The authors identify two key issues with existing differentiable formulations: a non-equivalence issue in previous continuous relaxations of path-existence constraints, and an order-violation issue where constraints might fail during gradient-based optimization. They propose a binary-masked characterization of the path-existence constraint that precisely captures path presence equivalently and an order-guided optimization strategy that initializes optimization using a DAG consistent with ancestral orders. The contributions consist of (i) a principled framework for differentiable causal discovery with arbitrary ancestral constraints, (ii) a new equivalently valid path constraint formulation, and (iii) an order-guided optimization strategy to satisfy constraints and achieve higher accuracy. Claims And Evidence: The claims made by the authors are generally well supported by theoretical proofs and fair experiments. In particular, the central theoretical claim--that the masked constraint formulation exactly characterizes path existence--is substantiated by clear lemmas and theorems (Theorem 1) with complete proofs provided in the appendix. The empirical claims are also well supported by comprehensive experiments and ablation studies, demonstrating improvements in DAG accuracy (SHD and F1 scores) and better satisfaction of path constraints compared to naive methods and baseline NOTEARS. Ablations specifically isolate the effects of the proposed masked constraint and order-guided initialization, validating each independently. Methods And Evaluation Criteria: I think the proposed methods and evaluation criteria are appropriate. 
The authors select established differentiable learners (NOTEARS, DAGMA, GOLEM) as baselines, conduct extensive evaluations using synthetic benchmarks varying graph sizes/types and amounts of prior knowledge, and utilize standard accuracy metrics (SHD, TPR, FDR, F1). Importantly, they explicitly measure a "path recovery rate", directly assessing constraint satisfaction--a reasonable metric for their problem setting. Theoretical Claims: I checked in fair depth the theoretical claims provided in the main text and skimmed the appendix. The authors' derivations seem sound. Experimental Designs Or Analyses: The experimental design was sound and supports the authors' claims. Experiments varied constraints, initializations, and thresholds, thoroughly validating robustness and highlighting specific advantages. Minor points were noted, such as the absence of explicit testing for the scenario with incorrect priors (which could influence practical applicability). Nevertheless, I think the overall empirical evidence was compelling. Supplementary Material: I skimmed through most of the appendix. I paid particular attention to Appendix B which offers motivation and analyzes why alternative formulations failed, providing useful insight into their design decisions. Checked Appendix C which includes the proofs of all theoretical claims. Skimmed through Appendix D which provides extensive additional experiments (nonlinear models, absence constraints, other learners, runtime analyses). Relation To Broader Scientific Literature: I think the work is well-situated within existing literature, clearly bridging classical approaches (e.g., constraint-based or score-based methods utilizing ancestral constraints) with modern differentiable structure learning methods. It identifies clear limitations of previous differentiable methods and improves on recent works by offering a general, theoretically sound approach for arbitrary ancestral constraints. 
Essential References Not Discussed: In general the authors do a good job on referencing relevant literature. I think the authors could consider citing [1], which is an older foundational work on structural priors and partial orders, as well as [2], which is a more recent work also dealing with partial orders and notears-like differentiable structure learning. [1] Heckerman, et al. "Learning Bayesian networks: The combination of knowledge and statistical data." Machine learning 1995. [2] Deng et al. "Optimizing notears objectives via topological swaps". ICML 2023. Other Strengths And Weaknesses: Key strengths include the originality and theoretical rigor of the proposed masked path formulation, systematic empirical validation, robust experimental design, and clear presentation. The authors identified fundamental limitations of naive differentiable constraints and clearly justified their proposed solutions. The resulting method demonstrated practical effectiveness, especially in scenarios with limited data but high-quality expert knowledge. Weaknesses primarily involve computational complexity ($O(d^4)$), limiting scalability to large graphs (though GPU acceleration partly mitigates this), reliance on threshold hyperparameters $(\epsilon,\epsilon_0)$, and unaddressed robustness to incorrect or noisy priors. These aspects, however, are openly acknowledged by the authors and represent reasonable future directions rather than critical flaws. Other Comments Or Suggestions: No minor comments. Questions For Authors: 1. How sensitive are your results to the choice of edge threshold $\epsilon_0$ used for binary masks, and how should practitioners select it in practice? 2. How would the algorithm behave if provided ancestral constraints are incorrect or contradictory? Can it gracefully handle noisy or uncertain priors? 3. Did the choice of DAG from partial-order initialization significantly affect the final results when multiple DAGs satisfy given partial orders? 
Should multiple initializations be tried? 4. Could the growing strength of the acyclicity penalty potentially override satisfaction of path constraints during optimization? How was this handled? 5. Do you have ideas or preliminary results on efficiently approximating or reducing computational complexity ($O(d^4)$) of the path existence formulation to scale beyond graphs of 50 nodes? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review. We will discuss the relevant references you mentioned in the paper. Here are our responses to your questions.

# Reply to Questions

1. The edge threshold $\epsilon_0$ is a standard parameter in differentiable structure learning, set to $0.3$ in the paper following the NOTEARS default. To address your concern, we experiment with varying $\epsilon_0$ and report the results (**NOTEARS / PE-NOTEARS**) below. The settings are: 20 samples, 20 nodes, ER2 graphs, linear Gaussian SEM, and 80% paths.

| $\epsilon_0$ | 0.1 | 0.3 | 0.5 | 0.7 | 1.0 |
| ------------ | ----------- | ----------- | ----------- | ----------- | ----------- |
| FDR | 0.58 / 0.57 | 0.43 / 0.29 | 0.27 / 0.19 | 0.16 / 0.11 | 0.11 / 0.02 |
| TPR | 0.75 / 0.71 | 0.68 / 0.84 | 0.60 / 0.79 | 0.50 / 0.70 | 0.25 / 0.57 |
| PathRecovery | 0.90 / 0.91 | 0.83 / 1.00 | 0.70 / 0.98 | 0.52 / 0.88 | 0.21 / 0.72 |

When $\epsilon_0$ is set too small, noisy edge weights insignificant for causality frequently fall near the threshold, causing instability as the binary mask $b(W)$ repeatedly flips between 0 and 1. Conversely, when $\epsilon_0$ is too large, the path existence loss strongly conflicts with the data fit loss by forcing edge weights to exceed their true values; thus, data fit can override path recovery. In practice, selecting $\epsilon_0$ can be guided either by the optimal path recovery rate (e.g., optimal around $\epsilon_0=0.3,0.5$ in this case) or by the distribution of edge weights from NOTEARS (above the minimal region where edge weights are concentrated).

2. To evaluate error tolerance, we randomly introduce erroneous paths at a certain ratio (termed ErrorRatio) relative to correct ones (without creating cycles). Below, we report results comparing F1 scores and recovery rates of correct (@✅) and erroneous (@❌) paths for **NOTEARS / PE-NOTEARS**. The settings are: 20 nodes, ER2 graphs, 20 samples, linear Gaussian SEM, and 80% paths.
| ErrorRatio | 10% | 20% | 30% | 40% |
| ---------- | ----------- | ----------- | ----------- | ----------- |
| F1 | 0.62 / 0.68 | 0.62 / 0.70 | 0.62 / 0.52 | 0.62 / 0.32 |
| @✅ | 0.83 / 0.97 | 0.83 / 0.99 | 0.83 / 0.84 | 0.83 / 0.85 |
| @❌ | 0.20 / 0.50 | 0.23 / 0.61 | 0.20 / 0.40 | 0.17 / 0.38 |

When the ErrorRatio exceeds approximately 30%, PE-NOTEARS underperforms NOTEARS, exhibiting significantly reduced F1 recovery of correct paths. This result provides a practical estimate of the error tolerance of PE-NOTEARS.

3. Order-guided initialization is obtained by solving the partial order-based structure learning problem in Eq. (23), providing a stable initialization that typically does not require additional selection. If you instead refer to an arbitrary initialization that satisfies partial orders without solving Eq. (23), we tested an intuitive initialization using the matrix $\epsilon_0 A$ (where $A$ is the path mask), denoted as PE-NOTEARS-A. We report results for **NOTEARS / PE-NOTEARS / PE-NOTEARS-A** below:

| @Node | 20 | 30 | 50 |
| ------------ | ------------------ | ------------------ | ------------------ |
| F1 | 0.62 / 0.77 / 0.67 | 0.52 / 0.68 / 0.59 | 0.45 / 0.61 / 0.55 |
| PathRecovery | 0.82 / 0.99 / 0.99 | 0.83 / 1.00 / 0.99 | 0.79 / 0.98 / 0.96 |

We observe that while the simpler partial order-consistent initialization (PE-NOTEARS-A) recovers most paths, it yields less optimal DAG structures compared to order-guided optimization. This is because solving Eq. (23) achieves a good optimum simultaneously for partial orders and data fit, effectively mitigating conflicts with both acyclicity and data fit. In contrast, the partial-order-only initialization does not resolve conflicts with data fit, resulting in suboptimal performance.

4. Actually, our order-guided optimization effectively resolves the issue of acyclicity constraints overriding path existence.
To demonstrate this, we experiment with large path existence loss weights, ensuring this loss dominates the data fit loss, leaving the main trade-off between path existence and acyclicity. In the experiment, PE-NOTEARS (with order guidance) still achieves strong adherence to prior paths and maintains good structural metrics. In contrast, methods without order guidance suffer significantly from acyclicity constraints overriding path existence, resulting in numerous unsatisfied paths due to order violations. Please refer to the related results in the section *Evidence on Order-Violation* of our responses to reviewer MDE7.

5. Following the suggestion of reviewer tD5o, we have reduced the complexity to $O(d^3 \log d)$. Please refer to point 3 in our responses to reviewer tD5o for detailed results.
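As a concrete illustration of the quantities discussed in points 1 and 4, here is a minimal sketch (with illustrative names, not the authors' implementation) of the binary edge mask $b(W)$ obtained by thresholding at $\epsilon_0$, and of a path recovery rate computed from it via transitive closure:

```python
import numpy as np

def binary_mask(W, eps0=0.3):
    """Threshold a weighted adjacency matrix into a binary edge mask b(W)."""
    return (np.abs(W) > eps0).astype(int)

def path_recovery_rate(W, prior_paths, eps0=0.3):
    """Fraction of prior paths (i, j) for which j is reachable from i in b(W)."""
    B = binary_mask(W, eps0)
    d = B.shape[0]
    # Transitive closure by repeated squaring of (I + B), clipped to {0, 1}
    # to avoid integer overflow; ceil(log2(d)) squarings cover paths of any length.
    R = np.clip(np.eye(d, dtype=int) + B, 0, 1)
    for _ in range(max(1, int(np.ceil(np.log2(d))))):
        R = np.clip(R @ R, 0, 1)
    return float(np.mean([R[i, j] for (i, j) in prior_paths]))
```

For example, for the chain 0 → 1 → 2 with edge weights 0.5, the prior path (0, 2) is recovered while (2, 0) is not.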
Learning Gaussian DAG Models without Condition Number Bounds
Accept (poster)
Summary: In this paper, the authors revisit the problem of learning an $n$-variate Gaussian graphical model from samples. This problem is known to be solvable in $O(d\log n)$ samples where $d$ is the degree. However, there is a hidden polynomial dependence on the condition number of the covariance matrix of observations, which can be polynomial in $n$ in the worst case. Instead, the authors make an assumption that the sum of squares of the linear SEM coefficients ($\tau$ parameter) of a node is bounded. Together with this, they also make the usual assumptions, such as that each noise is standard Gaussian and that the coefficients are not too close to 0 ($b_{min}$ parameter). Under this assumption, they manage to give an algorithm that runs in $O(n^{2d+2})$ time and $O(d\log n)$ samples. They also show using Fano's method that their dependence on the previous two parameters is optimal, although there is a gap of $d$ between the upper and lower bounds. Subsequently, they give an alternate algorithm that runs in poly(n,d) time, but its sample complexity is worse than that of the first algorithm. They further show that their first assumption is weaker than the condition number assumption. Technically, their algorithm is split into two phases: 1) topological order recovery using a greedy method that picks the lowest variance first; 2) parent set recovery via regression, using an existing result for undirected graphs. The authors perform experiments with synthetic data to confirm their findings. The results show that their accuracy for graph recovery is better than that of the existing methods once the sample size becomes large.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes. Although, it looks to me that too few prior works (only one) have been compared against. Section E in the appendix had some more such comparisons. Maybe some of it could be brought into the main body.

Theoretical Claims: I briefly looked at Sections A and B. It looks okay at a quick glance.
I didn't go through the proofs in detail.

Experimental Designs Or Analyses: It would have been more convincing if they also had experimental results for benchmark datasets along with synthetic datasets.

Supplementary Material: I briefly looked at all of the appendices.

Relation To Broader Scientific Literature: I think the claim that $\tau$ is a better assumption has been theoretically justified. The proposed algorithm is also performing better empirically.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: In Section 2.4, the authors interpret their theoretical results in comparison with the existing literature. It would also have been nice if they could point out the main technical novelty that gave them the improved experimental result. What is the new thing they did algorithmically that gave the improvement? If they discuss this in more detail, it will make the paper more understandable.

Other Comments Or Suggestions: NA

Questions For Authors: see above

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed response and thorough examination of our work. We answer the main points that were raised below. >Although, it looks to me too few (only one) prior work has been compared against. Section E in the appendix had some more such comparisons. Maybe some of it could be brought in the main body. The reviewer is absolutely correct, the manuscript would benefit from some further discussion of prior work in the main body. We will make sure to bring some of this discussion earlier instead of only appearing in the appendix. >It would have been more convincing if they also had experimental results for benchmark datasets along with synthetic datasets. Thank you for raising this point. We agree that including benchmark datasets in the simulations would complement the theoretical picture even more. Our focus in this work was to theoretically establish the optimal sample complexity for this problem and to demonstrate how that translates to improvements over previous algorithms. Since the algorithms we were comparing against were tested on synthetic datasets, we made the same choice. >In section 2.4 the authors interpret their theoretical results in comparison with the existing literature. It would have been also nice if they can point out their main technical novelty that gave them improved experimental result. What is the new thing they did algorithmically that gave the improvement? If they discuss this in more detail, it will make the paper more understandable. Thank you for raising this important point. We will make sure to clarify the experimental improvement in the final version. Our algorithm for finding the parents of a given node $X_i$ is based on a connection that we observed between our problem and that of learning undirected gaussian graphical models. 
Thus, it proceeds by choosing a candidate neighborhood $S$ of size $d$ and then regressing $X_i$ with $X_{S\cup T}$, where $T$ ranges over all other possible neighborhoods of size $d$. It only accepts a candidate neighborhood $S$ as the true one if for all these regressions we never find a node outside $S$ with large coefficient. In contrast, the previously known algorithm chooses the subset $S$ with the smallest Mean Squared Error (MSE) in the regression of $X_i$. This is a superset of the true neighborhood, so they find all nodes in $S$ such that removing them from $S$ increases the MSE by less than $\gamma$ and delete them. Unfortunately, their choice of $\gamma = \Theta(b_{\min}^2)$ is too big, resulting in many true parents being deleted. In our analysis, we quantify the correct threshold for deletion and show that it also depends on $\tau$. This difference has a significant impact on performance. --- Rebuttal Comment 1.1: Comment: Based on the authors' response I am more convinced about the strength of the paper. So, I am updating my score accordingly.
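The parent-recovery procedure described in the rebuttal above can be sketched as follows (an illustrative implementation under simplifying assumptions — exhaustive search over size-$d$ candidate neighborhoods and a user-supplied coefficient threshold — not the authors' code):

```python
import numpy as np
from itertools import combinations

def find_parents(X, i, d, coef_threshold):
    """Return a candidate parent set S of node i (as a set), accepted only if,
    across regressions of X_i on X_{S ∪ T} for every other size-d set T,
    no node outside S ever receives a large coefficient."""
    m, n = X.shape
    others = [j for j in range(n) if j != i]
    for S in combinations(others, d):
        rest = [j for j in others if j not in S]
        accepted = True
        for T in combinations(rest, d):
            cols = list(S) + list(T)
            beta, *_ = np.linalg.lstsq(X[:, cols], X[:, i], rcond=None)
            if np.any(np.abs(beta[d:]) > coef_threshold):
                accepted = False  # some node outside S looks like a parent
                break
        if accepted:
            return set(S)
    return None
```

With enough samples, only the true neighborhood survives this test: conditioning on the true parents makes the coefficients of all nodes in $T$ vanish, while omitting a true parent from $S$ lets it reappear with a large coefficient in some regression.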
Summary: This submission investigates the estimation of Gaussian DAG structure from i.i.d. samples under the equal-variance assumption. The main findings are two-fold: the authors give a polynomial-time algorithm to recover the structure based on ideas from sparse regression, and they show that the sample complexity can be expressed in terms of the maximum sum of squares of the coefficients of the out-edges. Furthermore, they show that this complexity parameter improves upon standard analyses based on the condition number, as the latter may grow polynomially with the dimension. Hence, their method is particularly interesting in high-dimensional settings where the number of nodes of the graph is large.

Claims And Evidence: The submission develops the theoretical aspects under reasonable assumptions. The authors prove upper and lower bounds on the sample complexity to support their analysis. Numerical experiments on synthetic data confirm the benefits of their method in high-dimensional settings.

Methods And Evaluation Criteria: Evaluated on synthetic data.

Theoretical Claims: I have checked the sample complexity of Algorithm 2 (Theorem 2.6) and various lemmas in the main document.

Experimental Designs Or Analyses: I did not run their code but I believe that the reported experiments are sound.

Supplementary Material: Essentially the proof of Theorem 2.6.

Relation To Broader Scientific Literature: This paper investigates DAG models; researchers applying DAGs may be interested in these results and algorithms.

Essential References Not Discussed: Not to my point of view.

Other Strengths And Weaknesses: The submission is well written and delivers a quite exhaustive analysis, going from information-theoretic results to a polynomial-time algorithm with guarantees.

Other Comments Or Suggestions: Typos:
- Line 024: missing space in "PC-algorithm(Spirtes et al., 2001)"
- Line 070: missing space in "model selection(Spiegelhalter"; other word+reference spacings should also be checked.
- Line 149: the variance should be one: $\sigma^2=1$, w.l.o.g.
- Line 368: missing $\in$: $j \in pa(i)$

Questions For Authors: .

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging feedback and for pointing out these omissions; we will make sure to correct them in the final version.
Summary: This paper studies the sample complexity of learning linear Gaussian DAGs under the equal variance assumption. It proposes a new algorithm and provides a graph recovery guarantee independent of the condition number. Both upper and lower bounds on the sample complexity are proved. The authors also provide simulations to verify the proposed method and demonstrate improvement compared to prior results.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes, it is evaluated on a simulated dataset. The data generation process of the simulated data and the evaluation criteria make sense to me.

Theoretical Claims: I checked the proof of Lemma 2.2. It seems reasonable.

Experimental Designs Or Analyses: Yes, I checked the simulation setting. The setup seems reasonable.

Supplementary Material: I checked Section D.1.

Relation To Broader Scientific Literature: Learning graphical models can help drive deeper understanding of physical processes in biology, neuroscience, and so on. Applications in these domains often have limited sample size. Hence, understanding the sample complexity of the algorithm helps better assess the quality of the estimation and understand its limitations. Furthermore, this paper proposes a new algorithm that is shown to recover the graph without dependence on the condition number, which often grows with the number of parameters. This makes estimation in the high-dimensional setting with small sample size possible.

Essential References Not Discussed: Not that I am aware of.

Other Strengths And Weaknesses: 1. This paper is clearly written, and the problem it studies is interesting and contributes to the theoretical aspects of graphical models. 2. Algorithm 1 suffers from high computational complexity; Algorithm 2, while more efficient, has some loss in sample complexity.

Other Comments Or Suggestions: 1. For Figure 2, it would also be good to include error bars. 2.
In Algorithm 1, it would be clearer to state how T is sorted and how to handle the topological order when two nodes have the same MSE. 3. For the experiments, it would be useful to report the TPR and FPR of the edge discovery.

Questions For Authors: 1. To run Algorithms 1 and 2, knowing $b_{\min}$ is required. How is this value estimated in practice? 2. In Lemma 2.2, the authors showed that $\tau(G)\leq \kappa$. Is there an example where $\tau(G)$ is an order of magnitude smaller than $\kappa$? Since Theorem 2.1 depends on $\tau(G)$, the theorem would be more convincing with an example demonstrating that $\tau(G)$ is of a smaller magnitude than $\kappa$. 3. The simulation results are in the low-dimensional regime. It would be nice to see results in a higher-dimensional setting (when the ratio n/m is smaller and n is larger).

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging words and detailed feedback. Below we answer the main points that were raised. # For comments: >For figure 2, it would also be good to include the error bar. Thank you very much for this suggestion. We will make sure to include the error bars in the final version. >In Algorithm 1, it would be clearer to state how T is sorted and how to handle the topological order when two nodes have the same MSE. Thank you very much for this question, it is very important to clarify how $T$ is updated since that might not be clear from the pseudocode. Essentially, $T$ is a list of nodes that is initialized as empty and every time we find the node with the smallest MSE we append it at the end of the list. If two nodes happen to have the same (smallest) MSE, then we choose either of them to append to $T$. This is because having the same MSE indicates they are both valid to be the next node in the topological ordering since they have the same conditional variance given the previous nodes, so we can pick any of them as the next node. >For the experiments, it would be useful to report the TPR and FPR of the edge discovery. Answer: We agree this would be an excellent benchmark for performance, we will make sure to report these in the final version of the manuscript. # For questions: >To run Algorithm 1,2, knowing $b_{\min}$ is required. How is this value estimated in practice? We thank the reviewer for this insightful question. If we interpret the question correctly, this question is about how the algorithm would operate without knowing $b_{\min}$ beforehand. We have included a detailed discussion in appendix E.3 about adaptivity when various parameters (including $b_{\min}$) are unknown. We will make sure to make this discussion more visible in the main text. Essentially, without knowing $b_{\min}$, the algorithm would proceed with an initial estimate $b$ of $b_{\min}$. 
Then, we run the algorithm assuming $b_{\min} = b$ and, in the end, we check whether the estimated strengths of all the edges are either less than $b/10$ or larger than $b$. If we detect some edge that has strength between $b/10$ and $b$, we halve $b$ and run the process again. We can show that if our number of samples is the one specified by our main Theorems for Algorithms 1 and 2, then such a process will eventually terminate and return a $b$ such that all edges are either smaller than $b/10$ or larger than $b$.

>In Lemma 2.2, the authors showed that $\tau(G)\le\kappa$. Is there an example that $\tau (G)$ has an order of magnitude smaller than $\kappa$? Since Theorem 2.1 depends on $\tau (G)$. The theorem would be more convincing if there is an example demonstrating that $\tau (G)$ is of magnitude smaller than $\kappa$.

Thank you very much for raising this point; we absolutely agree that such an example would help demonstrate the improvement over previous bounds. Such an example is essentially provided in Lemma 2.5 (ii), where the topology is a binary tree and each edge has weight $2^{-1/4}$ (which makes $\tau$ a constant), but the condition number grows as $\kappa = \Omega(\sqrt{n})$. We will make sure to highlight this example immediately after presenting Theorem 2.1, so that it is clear to the reader why the dependence on $\tau$ is preferred over $\kappa$.

>The simulation results are in the low-dimensional regime. It would be nice to see the result in higher dimensional setting (when the ratio of n/m is smaller, and n is larger).

Thank you for pointing this out. We will make sure to include simulations with higher $n$ that better reflect the high-dimensional nature of the problem.
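The halving procedure for an unknown $b_{\min}$ described above can be sketched as a short loop; here `run_algorithm(b)` is a hypothetical callable assumed to return the estimated edge strengths when the algorithm is run with the working estimate $b$:

```python
def adaptive_b(run_algorithm, b_init=1.0, max_rounds=50):
    """Halve the working estimate b until every estimated edge strength
    is either below b/10 or above b (the separation test in the rebuttal)."""
    b = b_init
    for _ in range(max_rounds):
        strengths = run_algorithm(b)
        if all(s < b / 10 or s > b for s in strengths):
            return b, strengths
        b /= 2
    raise RuntimeError("no separating b found within max_rounds")
```

For instance, with fixed estimated strengths [0.0, 0.5, 0.6], the loop halves b from 1.0 to 0.5 to 0.25, at which point all strengths are cleanly separated.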
Predicting High-precision Depth on Low-Precision Devices Using 2D Hilbert Curves
Accept (poster)
Summary: The paper presents an approach for neural network quantisation in monocular and binocular settings. The main idea is to decompose high dynamic range depth into two low dynamic range components using a Hilbert curve and train a full precision DNN to predict these components. In practice, standard quantisation methods are used, followed by a post-processing step to reconstruct the depth from the low bit-accuracy Hilbert curve components. The approach is evaluated by testing depth estimation performance after quantisation of weights and activations; using the proposed approach, the models show superior performance despite lower quantisation levels.

## update after rebuttal

Having read all the reviews and replies, I maintain my positive score. The rebuttal provided important answers to the open points. The new material should be included in the final version.

Claims And Evidence: The problem has often been approached using ML methods. The proposed methodology is well suited to the problem. Moreover, the evaluation is done on standard datasets.

Methods And Evaluation Criteria: Further evaluation on KITTI or NYU Depth V2 is recommended.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: The experimental design is fine. There are missing evaluations.

Supplementary Material: The visual results are helpful for demonstrating the proposed idea.

Relation To Broader Scientific Literature: The output representation is novel compared to the current literature. The paper presents interesting ideas.

Essential References Not Discussed: The paper could discuss further works on monocular depth estimation with limited resources. For example:
- Fast monocular depth estimation on embedded systems (2019).
- RRNet: Repetition-reduction network for energy efficient depth estimation (2020).
- Visual domain adaptation for monocular depth estimation on resource-constrained hardware (2021).
- LightDepthNet: Lightweight CNN architecture for monocular depth estimation on edge devices (2023).
- Spatial-aware dynamic lightweight self-supervised monocular depth estimation (2023).

Other Strengths And Weaknesses: Paper strengths:
- The idea of predicting a point on a 2D parametric lower-order Hilbert curve is interesting. The paper presents a novel parametrization of the output space.
- The paper is well written and easy to follow. The method is well described.
- The approach works well in practice. The experiments show consistent and significant improvements over the standard quantisation protocol.

Weaknesses of the paper:
- The use of Hilbert curves needs further motivation. For example, one could explore alternative techniques such as Singular Value Decomposition (SVD) or Principal Component Analysis (PCA) to reduce the dimensionality of depth data and then reconstruct it. This is a minor point, but it would help the paper.
- All experiments are performed on the ScanNet benchmark. More benchmarks would be welcome to support the claims made in the paper. The method should work equally well, but it needs to be demonstrated on other datasets, e.g. KITTI or NYU Depth V2.
- The results shown in Table 1 are not easy to follow; it is hard for the reader to know what exactly is being shown, even after reading the text. The presentation of the results is crucial, and clarity needs further work.

Other Comments Or Suggestions: Not in particular.

Questions For Authors: A clearer method motivation would be helpful.

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer QXvZ for his/her positive feedback. We are encouraged that QXvZ found the paper well-written and easy to follow, recognized the novelty of the main idea, and found the experimental design sound. We address the reviewer's comments below and will incorporate all feedback in the final version.

- **“The motivation for using Hilbert curves needs further motivation. For example, one could explore alternative techniques such as Singular Value Decomposition (SVD) or Principal Component Analysis (PCA) to reduce the dimensionality of depth data and then reconstruct. This is a minor point, but it would help the paper.”**: SVD and PCA are methods for dimensionality reduction. In the context of our work, they are not applicable because depth has only one dimension. In contrast, we apply a transform that increases data dimensionality (single-channel disparity/depth is transformed into two components of the Hilbert curve). In this manner, we transform high dynamic range depth into several lower dynamic range Hilbert curve components. Such a transformation can be done in many ways. We argued that not all of them are suitable for full precision model training (e.g., representing 16-bit depth as its two 8-bit components is unsuitable, Figure 3, L141-151). We also provided arguments in favor of space-filling curves (L153-196). In addition, in the supplementary material, we discussed in detail why the Hilbert curve was specifically chosen among the many available space-filling curves (Appendix A). Our intuition is that using other curves (e.g. Peano and Quadratic Gosper) is possible but can provide benefits only in some specific applications. Therefore, for the paper's clarity and conciseness, we decided to concentrate on Hilbert curves.

- **“The paper could discuss further works on monocular depth estimation with limited resources.”**: Thank you for suggesting this additional discussion and the useful links!
In the camera-ready paper, we will change the initial phrase in the first paragraph at page 2, Line 059 to “The efficiency of DNN inference on low-end devices can be achieved by applying several strategies. One is to optimize a model architecture to reduce the number of parameters and latency, remove computationally intensive layers, or use network pruning (Fast monocular depth estimation on embedded systems (2019), RRNet: Repetition-reduction network for energy efficient depth estimation (2020), LightDepthNet: Lightweight CNN architecture for monocular depth estimation on edge devices (2023), Spatial-aware dynamic lightweight self-supervised monocular depth estimation (2023)). Another strategy consists in using low-precision computations (Li et al., 2021; Jacob et al., 2018).”.

- **“The results shown in Table 1 are not really easy to follow”**: Thank you for pointing this out! In the camera-ready paper, we will apply color coding of values for the FP32 model, W8A8 model on DSP, and W8A8 model on CPU (similar to Table 2 at page 7) and explain it in the Table 1 caption. Also, we will consider enlarging Table 1 from one to two columns, because the camera-ready paper provides one extra page for the main text.

- **“All experiments are performed on the ScanNet benchmark. More benchmarks would be welcome to support the claims made in the paper. The method should work equally well, but it needs to be demonstrated on other datasets, e.g. KITTI or NYU Depth V2.”** The NYU Depth V2 and ScanNet datasets are similar (indoor datasets collected by a Kinect device). Our choice of ScanNet is determined by its size and the availability of camera poses and a mesh for each scene. When designing the experiments, we sought a way to check performance on other datasets and in other domains. Instead of the KITTI dataset suggested by the reviewer, which belongs to the same depth estimation domain, we selected the MS COCO dataset and the human pose estimation domain.
Our results for the COCO dataset (Appendix E) support the claims made in the paper. We don't have results on the KITTI dataset at hand, and time is needed to prepare them (to support KITTI in our own training framework, train full precision models, prepare a quantization dataset for KITTI, quantize models, and perform measurements on device). If this additional experiment is required, we are ready to perform it and report our findings in the final paper version. In the short term (by 8th April), we can benchmark the existing models trained on ScanNet on the training part of the KITTI dataset. In this case, we cannot guarantee competitive full precision model quality on KITTI, and can only compare the performance of the full precision, FP16, W8A8, and W8A16 models.
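The depth-to-Hilbert-component mapping that the rebuttal discusses can be illustrated with the classic iterative integer conversion routines from the literature (a sketch, not the authors' implementation; the paper's exact parametrization may differ): a 16-bit depth value d is mapped to two 8-bit coordinates (x, y) on a 256×256 Hilbert curve and decoded back after inference.

```python
def d2xy(n, d):
    """Map distance d along a Hilbert curve of side n (a power of two)
    to 2D coordinates (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def xy2d(n, x, y):
    """Inverse mapping: (x, y) back to the distance along the curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Because consecutive positions along the curve are adjacent in (x, y), small quantization errors in the two low dynamic range components translate into small errors in the decoded depth, which is the locality property the paper relies on.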
Summary: This paper proposes a new method for high-precision depth prediction on devices with low-precision arithmetic. The authors introduce an innovative technique that represents high dynamic range depth as two low dynamic range components of a 2D Hilbert curve. This approach enables depth maps with higher bit precision than conventional quantized 8-bit models. The algorithmic framework encompasses encoding and decoding depth information using 2D Hilbert curves, as well as incorporating this methodology into neural network training. Experiments with DispNet and DPT models on stereo matching tasks demonstrate the effectiveness of the proposed method. ## update after rebuttal Thank you for your valuable feedback! Major concerns regarding evaluation metrics, baseline comparisons, and figures have been addressed. After considering your feedback, I will increase my rating. Claims And Evidence: 1) The Hilbert curve method increases bit precision of predicted depth by up to three bits: This claim is supported by theoretical analysis and empirical results. 2) W8A8 models with the proposed method outperform original W8A16 models while improving efficiency: This claim is supported by the experiments. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are not well-aligned with the problem of enabling high-precision depth prediction on low-precision devices. A significant concern is that while the algorithm in this paper is designed for stereo matching, the evaluation metrics presented focus primarily on monocular depth estimation. The authors should supplement their analysis with standard stereo matching evaluation metrics to properly assess the method's performance in its intended application domain. Theoretical Claims: N.A. Experimental Designs Or Analyses: My major confusion regarding this paper stems from the experimental design. 
1) The evaluation metrics used are standard metrics for monocular depth estimation, rather than stereo matching (which would typically include EPE, D1, and D2). This creates a fundamental mismatch between the claimed application and its evaluation. 2) The selected baseline algorithms are insufficient to represent state-of-the-art techniques. For stereo matching, they could have included IGEV (CVPR'23), while for monocular depth estimation, NDDepth (ICCV'23) would have been an appropriate comparison. 3) The paper lacks comparative experiments with other recent quantization methods, which would be essential to properly contextualize the performance of the proposed approach.

Supplementary Material: Yes, I reviewed the supplementary material (A & B), which is correct.

Relation To Broader Scientific Literature: N.A.

Essential References Not Discussed: N.A.

Other Strengths And Weaknesses: N.A.

Other Comments Or Suggestions: 1) Why does Fig. 3(f) appear significantly different from the other visualizations? The visual discrepancy requires clarification to properly understand the results being presented.

Questions For Authors: N.A.

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer RnCh for his/her thoughtful feedback. We are encouraged that RnCh found our claims supported by theoretical analysis and empirical results. We address reviewer comments below and will incorporate all feedback in the final version. - **“The evaluation metrics used are standard metrics for monocular depth estimation, rather than stereo matching (which would typically include EPE, D1, and D2)…”**: We agree that EPE and D1 metrics are typically used in the stereo matching domain. At the time of paper submission, we had both sets of metrics, including EPE and D1. Given the paper size constraint, we decided to include relative depth error and Sc (DCT cosine similarity) as primary metrics because they clearly reveal the effect of our idea application. In particular, Sc measures quality loss due to a decrease in bit-precision. For EPE and D1 metrics we have the following results. For the DispNet model, EPE/D1 is 0.29 px./1.81% for the full precision model and 0.69 px./5.34% for the W8A8 model; for h3DispNet EPE/D1 is 0.24 px./1.24% for the full precision model and 0.24 px./1.25% for the W8A8 model. The D2 metric is not applicable in our case, because we consider models that predict disparity only for the left frame. In the final paper version, we will update Table 1 and Table 2 by adding EPE and D1 metrics. - **“The selected baseline algorithms are insufficient to represent state-of-the-art techniques. For stereo matching, they could have included IGEV (CVPR'23), while for monocular depth estimation, NDDepth (ICCV'23) would have been appropriate comparisons”**: IGEV belongs to the class of stereo-matching models (RAFT-Stereo, Selective-IGEV) with recurrent refinement of predicted disparity using GRU or similar recurrent units. In this architecture, the predicted disparity is refined over multiple iterations by adding a small correction signal to the initially predicted disparity map. 
The integration of the proposed idea of depth/disparity representation as two Hilbert curve components into IGEV differs significantly from one-stage models like DispNet and DPT that we consider in the paper. While we propose to modify the output signal representation, modifying IGEV requires integration of the Hilbert components inside the model at multiple places. We think that our idea can be applied to IGEV, and this is currently an active research direction for us. However, this work is not completed, and even if completed, it could not be added to the current publication without sacrificing clarity. The situation is similar with NDDepth: this model also performs depth refinement using GRU units. Apart from this, NDDepth predicts other components, like normals, uncertainty maps, and planar regions, that we do not claim to support at this stage of our research. In the camera-ready paper, we propose to clarify this situation by adding one paragraph to the Discussion & Limitations section explaining that support of recurrent models requires further research efforts. - **“The paper lacks comparative experiments with other recent quantization methods...”** We would like to stress that we do not propose a new quantization method. Our idea is to modify a full-precision model, use *existing* quantization methods without any modifications, and add a simple post-processing stage applied to the quantized model output. The improved quantization quality provided by recent quantization methods (post-training quantization or quantization-aware training) will improve the quality of the Hilbert component quantization and the quality of the depth/disparity calculated after post-processing. For a fair comparison, we would need to find a similar approach that builds on top of existing quantization methods; however, in the available literature we found none.
Therefore, we provided the best possible experimental evidence by comparing a standard PTQ quantization method (the widely used SNPE library that quantizes models for Qualcomm chipsets) with and without our approach. - **“Why does Fig. 3(f) appear significantly different from the other visualizations. The visual discrepancy requires clarification to understand the results being presented properly.”**: The pair of images 3e, 3f shows the decomposition of 16-bit depth into two 8-bit values, with 3e being the most significant byte and 3f the least significant byte. In this representation, the least significant byte shows high-frequency variations, making it significantly different in appearance from the original depth. We added these images to illustrate a possible but inconvenient way of factorizing high-precision depth into two low-precision components. To clarify this, in the camera-ready paper, we will add the following phrase to the Figure 3 caption: “Fine details in (f) are the least significant byte of depth (a) represented in 16-bit format. High-frequency oscillations make it appear different from the original depth and difficult to predict by a DNN model”.
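As a quick illustration of the byte factorization explained above (our own sketch, not from the paper; the smooth depth ramp is an assumed toy signal), the least significant byte of even a perfectly smooth 16-bit depth map oscillates rapidly:

```python
import numpy as np

# Toy signal: a perfectly smooth 16-bit depth ramp.
depth16 = np.linspace(0, 65535, 1000).astype(np.uint16)

msb = (depth16 >> 8).astype(np.uint8)    # most significant byte (cf. Fig. 3e): smooth
lsb = (depth16 & 0xFF).astype(np.uint8)  # least significant byte (cf. Fig. 3f): oscillating

# The two 8-bit components reconstruct the original 16-bit depth losslessly.
recon = (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
assert np.array_equal(recon, depth16)

# The LSB wraps around every 256 depth levels, so even this smooth ramp
# produces hundreds of discontinuities: hard for a DNN to predict directly.
wraps = int(np.count_nonzero(np.diff(lsb.astype(np.int16)) < 0))
```

This is exactly why the rebuttal calls the byte factorization "possible but inconvenient": the reconstruction is lossless, yet the LSB channel is a high-frequency signal.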
Summary: The paper presents a method for achieving high-precision depth prediction on low-precision devices by representing depth as two components of a 2D Hilbert curve. A full-precision DNN is trained to predict these components, and a post-processing step reconstructs high-precision depth from low-precision predictions. The key findings indicate that the method can increase the bit precision of predicted depth by up to three bits and reduce quantization error by up to 4.6 times. Experiments show that the modified model, quantized to the W8A8 format, can achieve similar or better depth prediction quality than the original W8A16 model, with reduced inference time and power consumption. Claims And Evidence: - Claims: The proposed method can improve the bit precision of predicted depth, reduce quantization error, and enable efficient depth prediction on low-precision devices. - Evidence: Experiments on DispNet and DPT models provide substantial evidence. By comparing the performance of the original and modified models in terms of metrics such as Abs Rel, RMSE, δ₁, and SC, as well as their performance on CPU and DSP, the paper validates these claims. However, the assumption of independent quantization errors for Hilbert components does not fully hold in practice, which may affect the universality of some conclusions, although the overall advantages of the method are still demonstrated. Methods And Evaluation Criteria: - Methods: Representing depth as Hilbert curve components is innovative, and experiments verify its effectiveness in enhancing precision and reducing errors. Nevertheless, the method requires retraining the full-precision model and imposes certain constraints on the errors of the quantized depth prediction model, which may limit its practical applications. 
- Evaluation Criteria: Using standard depth prediction metrics (Abs Rel, RMSE, δ₁) and the newly proposed cosine similarity (SC) based on DCT coefficients to evaluate depth prediction quality is reasonable and comprehensive. Measuring quantization error using standard deviation (SD) is also appropriate for effectively assessing model performance. Theoretical Claims: The theoretical analysis of using the Hilbert curve for depth coding in the paper is reasonable. The discussion on the properties of the curve (continuity, non-self-intersection, boundedness, and self-avoidance) is sufficient, explaining the rationale for choosing the Hilbert curve. The theoretical derivation in the quantization error analysis is logically sound. However, the discrepancy between the assumption of independent quantization errors of Hilbert components and the actual data in practical applications suggests a certain disconnect between theory and practice, which requires further research and improvement. Experimental Designs Or Analyses: - Experimental Designs: The selection of DispNet and DPT models for experiments is appropriate, and the modifications to the models are clearly described.
The choice and processing of the dataset are reasonable, and the experimental settings (such as model input and output sizes, training-validation-test set division) are well-defined. Considering different models, quantization formats, and device operations (CPU and DSP) makes the experimental design comprehensive. - Experimental Analyses: The analysis of experimental results is detailed. By comparing the performance of the original and modified models across multiple indicators, evaluating the models from different perspectives (accuracy, error, runtime, power consumption), and conducting an in-depth analysis of quantization error reduction, the paper provides a comprehensive understanding. However, more attention should be paid to the impact of the discrepancy between actual data and theoretical assumptions on the results when analyzing quantization errors. Supplementary Material: Since no information about the supplementary material is available, a comprehensive evaluation cannot be conducted. It is recommended that the authors elaborate on the content of the supplementary material to better assess its supportive role in the paper. Relation To Broader Scientific Literature: The paper reviews relevant research in the field of depth prediction in the introduction, elaborating on the development trends and existing problems of current depth prediction methods. It clearly states that its method aims to address the issues of depth prediction accuracy and quantization error on low-precision devices, demonstrating a close relationship with previous studies. In the methodology section, relevant concepts such as quantization techniques and space-filling curves are cited and discussed, indicating both continuity and innovation in the research. However, when elaborating on the relationship with existing literature, the differences and advantages of this method compared to other methods for improving quantization errors could be further emphasized. 
Essential References Not Discussed: none Other Strengths And Weaknesses: Strengths: - Innovation: Representing depth using 2D Hilbert curves is novel, offering a new approach to depth prediction on low-precision devices. - Performance Improvement: Experimental results demonstrate that the method can effectively enhance depth prediction accuracy, reduce quantization errors, and decrease the inference time and power consumption of the model, showing high practical value. - Application Potential: The method is applicable to various depth prediction tasks, including monocular and binocular depth prediction, multi-view stereo, depth completion, depth quality enhancement, and depth inpainting, indicating broad application prospects. Weaknesses: - Model Training Cost: Retraining the full-precision model increases training costs and time. - Error Assumption Issue: The assumption of independent quantization errors for Hilbert components does not match actual data, potentially affecting the universality of the conclusions. - Hardware Dependence: The effectiveness of the method depends on specific hardware environments (such as devices supporting low-precision calculations), and it may not be applicable on some hardware that does not support such calculations. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer GYWs for his/her positive feedback. We are glad GYWs found our idea innovative and recognized that we provided substantial evidence to validate our claims. We are encouraged that GYWs found our approach novel, with high practical value and broad application prospects. We address reviewer comments below and will incorporate all feedback in the final version. - **“… the assumption of independent quantization errors for Hilbert components does not fully hold in practice, which may affect the universality of some conclusions....”; “… more attention should be paid to the impact of the discrepancy between actual data and theoretical assumptions on the results when analyzing quantization errors”; “The assumption of independent quantization errors for Hilbert components does not match actual data, potentially affecting the universality of the conclusions”**: We believe that some misunderstanding may stem from the statement “Quantization error compression takes place if (a) Hilbert components quantization error is independent between channels and identically distributed …” in Subsection 2.4, L212. Our main goal was to overcome the bit-precision limitation of low-end devices, and this goal is achieved regardless of the independence of quantization errors. We observe the bit-precision increase in all experiments (depth prediction and human pose estimation) and for all models (DispNet, DPT, ResNet-RS). In addition to this main goal, we assumed that the quantization error after transforming the Hilbert components to the depth/disparity map can be reduced by a factor of up to the Hilbert curve length; however, this effect requires independence of the quantization errors of the Hilbert components. If the errors in the Hilbert components are dependent, only the bit-precision increases; if they are independent, the bit-precision increases and the quantization error is reduced.
Our experiments revealed that this effect does take place, but the independence of the Hilbert components is task-dependent. We consider this result not as a limitation of the proposed idea but as an indication of its potential beyond our initial goal. Being limited by the paper size, we focused on explaining the main idea. In addition, we made our best effort to describe the observed effect of quantization error reduction and to provide our working hypothesis, supported by the available experiments: the strength of the effect depends on the relation between the quantization errors of the Hilbert curve components. We hope that further experiments can clarify this question, possibly by enforcing independence of the quantization errors through the loss function, with quantization errors modelled via QAT. In the camera-ready paper, we will stress that the assumption of independent quantization errors for the Hilbert components is not needed for achieving our main goal of increasing bit-precision. - **“Since no information about the supplementary material is available”**: According to the ICML call for papers, supplementary materials should not be submitted as a separate file but rather included as appendices after the main paper body. In our paper, the supplementary materials are present as Appendices A-E (pages 12-19), where we provide additional information on space-filling curve selection, the DPT model modification, an experiment on mesh fusion quality, more examples of predicted depth maps, and experiments on human pose estimation. Each appendix is referenced in the main paper body. We hope this additional information is available and provides valuable details of our work.
- **“…the differences and advantages of this method compared to other methods for improving quantization errors could be further emphasized.”** To address this suggestion, we propose to add a short subsection in the camera-ready paper with the following content: (1) an illustration of the required modification of a full-precision model and its training pipeline, and a block diagram of the modified model's on-device inference, indicating the parts that run on the DSP (Hilbert component prediction) and on the CPU (post-processing) and the data transfer between DSP and CPU; (2) additional text emphasizing that existing quantization methods (PTQ, QAT) can be integrated into this scheme and bring improvement in specific domains or for specific architectures. We will also emphasize the difference between PTQ, QAT, and our method. - **“The effectiveness of the method depends on specific hardware environments”** This is not quite so. Specific hardware influences all methods related to on-device model deployment because of limitations in the supported arithmetic. Our method adds a computationally simple post-processing step (a lookup table, LUT) and can be applied to any hardware that supports quantized model inference. We experimented with CPU inference, DSP inference of the FP16 model, and DSP inference of the W8A16 and W8A8 models, and in all cases found improvement in quantization quality (higher bit-precision, lower standard deviation of quantization error).
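For readers unfamiliar with the construction, the mapping between a high-precision depth value and its two low-precision Hilbert components (and the inverse used in the post-processing step) can be sketched with the standard Hilbert curve index conversion. This is the textbook d2xy/xy2d algorithm, not the authors' implementation, and the curve size `n` is an illustrative choice:

```python
def rot(s, x, y, rx, ry):
    # Rotate/flip a quadrant of size s (standard Hilbert construction).
    if ry == 0:
        if rx == 1:
            x, y = s - 1 - x, s - 1 - y
        x, y = y, x
    return x, y

def d2xy(n, d):
    # Map curve distance d in [0, n*n) to cell (x, y) on an n-by-n grid.
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        x, y = rot(s, x, y, rx, ry)
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def xy2d(n, x, y):
    # Inverse mapping: cell (x, y) back to curve distance d (the "LUT" direction).
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = rot(n, x, y, rx, ry)
        s //= 2
    return d

# With n = 256, a 16-bit depth value d in [0, 65536) maps to two components
# (x, y), each fitting into 8 bits, and xy2d reconstructs d exactly.
n = 256
```

The mapping is bijective, and consecutive depth values land in adjacent grid cells (the continuity property the review highlights), unlike the byte factorization discussed for Fig. 3(f).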
Summary: This paper focuses on high-precision depth estimation on low-precision devices. It leverages 2D Hilbert curves for a better representation of high-dynamic-range depth. Extensive experimental results demonstrate the superiority of the proposed method in both accuracy and computational overhead. Claims And Evidence: Yes, the paper clearly states the reasons for introducing the Hilbert curve and gives a proper analysis. Methods And Evaluation Criteria: Yes, the evaluation includes estimation accuracy, runtime, and power consumption, which are appropriate for the application at hand. Theoretical Claims: The paper provides a proper analysis of the expected properties of the parametric curves and loss function modifications. Experimental Designs Or Analyses: I have checked the experimental results in the main text. Supplementary Material: I have reviewed the network architecture and additional quantitative results in the supplementary material. Relation To Broader Scientific Literature: This paper targets the depth estimation problem, and the evaluation is based on two prior depth estimation works: DispNet and DPT. Essential References Not Discussed: No. Other Strengths And Weaknesses: - The paper is well-structured and well-argued. - The experimental results demonstrate the effectiveness of the proposed method. Other Comments Or Suggestions: - I suggest thickening the text in Figure 2 and the lines in Figure 4. They are too thin to be seen in the paper. Questions For Authors: - The depth estimation results shown in the paper are predicted in scenes with simple structures. Can you show the depth estimation results of some complex scenes? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer aLW4 for his/her positive feedback. We are encouraged that aLW4 found the paper well-argued and well-structured, recognized the clarity of the main idea, the effectiveness of the proposed method, and the soundness of the experimental analysis. We address reviewer comments below and will incorporate all feedback in the final version. - **“I suggest thickening the text in Figure 2 and the lines in Figure 4. They are too thin to be seen in the paper”**: Agreed; we will make this change in the camera-ready paper. - **“The depth estimation results shown in the paper are predicted in scenes with simple structures. Can you show the depth estimation results of some complex scenes?”**: Yes, we can add an example for a more complex scene. We selected a complex frame representing a kitchen with many small objects on the kitchen table and a kitchen cabinet with many shelves (ScanNet, scene0804_00, camera pose for frame #340). This example confirms the quantization error reduction after applying our approach, and we will add it to Appendix D of the camera-ready paper. --- Rebuttal Comment 1.1: Comment: Thanks for your feedback! My concerns are addressed and I will keep my rating unchanged.
MIPT: Multilevel Informed Prompt Tuning for Robust Molecular Property Prediction
Accept (poster)
Summary: The paper introduces Multilevel Informed Prompt Tuning (MIPT), a framework that enhances pretrained Graph Neural Networks for molecular property prediction. MIPT significantly outperforms existing methods, achieving higher ROC-AUC scores while reducing the number of trainable parameters. Key contributions include a multilevel prompt learning module for capturing task-specific knowledge, a noise penalty mechanism to mitigate irrelevant information, and low-rank adaptation for efficient tuning. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes; mainly experiments. Experimental Designs Or Analyses: 1. LoRA has proven its effectiveness through ablation experiments, but why weren't other feature extraction methods used? Was there a comparison with other methods to make the final decision? 2. The title of Table 3 is inconsistent with its content. The title mentions a comparison on Tox21, but the final results are based on BBBP. Additionally, why wasn't the comparison conducted across all datasets? 3. Both the abstract and the method section mention the random noise mask, so why wasn't this discussed in the ablation study? Supplementary Material: Yes. Relation To Broader Scientific Literature: 1. Advances in GNNs: MIPT builds on recent advancements in GNN architectures, which have been shown to effectively capture complex relationships in molecular data. 2. Prompt Tuning Techniques: MIPT extends prompt tuning to LoRA, contributing to the body of work that explores how prompt-based methods can enhance performance across various domains. Essential References Not Discussed: No. Other Strengths And Weaknesses: 1. The H_v(k) in the line above Eq. 2 is incorrect. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer dDZn: Thank you for your valuable and constructive comments. We have revised the paper according to your suggestions. ## Experimental Designs Or Analyses **1. Why choose LoRA**: To learn features at both the node level and the graph level, we used multi-level fine-tuning, but in order not to increase the training cost, we chose the lightweight LoRA. Although there are alternative methods such as Adapter tuning [1] and BitFit [2], a previous study [3] has shown that LoRA achieves the best balance between efficiency and performance. [1] Parameter-Efficient Transfer Learning for NLP. [2] Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. [3] LoRA: Low-Rank Adaptation of Large Language Models. **2. Not all datasets were evaluated:** We apologize for the mistake in the title. Due to space constraints, we did not include all the datasets; we will add them to the appendix. Here we supplement the performance results on the other datasets. Table. Ablation analysis of different configurations on the Tox21, Toxcast, HIV, and MUV datasets based on ROC-AUC (%).

| Module | Loss | Tox21 | Toxcast | HIV | MUV |
|-------|-------|-------|-------|-------|-------|
| LoRA | $L_{cont}$ | 75.31 | 64.34 | 75.95 | 82.44 |
| MGIP | $L_{cont}$ | 80.57 | 68.78 | 79.97 | 84.47 |
| LoRA+MGIP | $L_{cont}$ | 80.19 | 67.70 | 78.50 | 83.07 |
| LoRA | $L_{NPP}$+$L_{cont}$ | 76.40 | 68.55 | 75.03 | 79.48 |
| MGIP | $L_{NPP}$+$L_{cont}$ | 80.29 | 68.27 | 80.42 | 84.71 |
| LoRA+MGIP | $L_{NPP}$ | 80.45 | 68.52 | 79.84 | 82.45 |
| LoRA+MGIP | $L_{NPP}$+$L_{cont}$ | 80.60 | 68.81 | 81.76 | 84.96 |

**3. The ablation experiment does not include the random noise mask.** Two parts of our approach mention noise. The random node mask discussed in the method is used to augment the original features; it is included in the GIP and in the ablation experiment (MGIP).
The noise penalty mechanism, aimed at reducing the uncertainty of the method and increasing confidence, has been applied in the ablation analysis (NPP). We appreciate the reviewer for spending time reviewing our paper and offering valuable suggestions. If you have any further questions, please tell us and we are willing to address your concerns.
Summary: This manuscript introduces a novel Multilevel Informed Prompt Tuning (MIPT) framework designed to enhance pre-trained molecular encoders for molecular property prediction tasks. The key contributions include a multi-level prompt learning network and a noise penalty mechanism. The proposed prompt learning network effectively mitigates the gap between pre-training and downstream tasks, while the noise penalty mechanism addresses potential mismatches between pre-trained representations and task-specific representations. Extensive experiments on both public and real-world datasets demonstrate that MIPT achieves significant improvements in molecule-related tasks. Claims And Evidence: Yes, this manuscript provides extensive experimental results, including comparisons with baseline models, ablation studies, and hyperparameter experiments. Methods And Evaluation Criteria: Yes Theoretical Claims: The manuscript includes a theoretical proof related to the Noise Prompt Penalty (NPP) to demonstrate its robustness in mitigating the impact of noisy samples. Experimental Designs Or Analyses: In Table 2, the performance improvement on some datasets is not significant. Additional analysis should be provided. Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: The manuscript presents a clear and well-structured motivation, precisely identifying the key challenges of pre-trained GNNs in molecular property prediction. Essential References Not Discussed: [1] Pin-Tuning: Parameter-Efficient In-Context Tuning for Few-Shot Molecular Property Prediction. [2] MMGNN: A Molecular Merged Graph Neural Network for Explainable Solvation Free Energy Prediction Other Strengths And Weaknesses: Strengths 1) The manuscript presents a clear and well-structured motivation, precisely identifying the key challenges of pre-trained GNNs in molecular property prediction.
It highlights the limitations of existing methods and naturally introduces its core solution. 2) The manuscript proposes multi-level prompt learning to extract task-specific information at both the node and graph levels, enabling the model to better adapt to downstream tasks and effectively bridging the gap between pre-training and real-world applications. 3) The manuscript employs a Gaussian Mixture Model (GMM) to model the confidence distribution of samples, introducing a noise penalty mechanism to suppress irrelevant noise, allowing the model to focus on truly meaningful features and enhancing overall performance. 4) The proposed method achieves significant performance improvements in molecular property prediction tasks across multiple pre-trained models. Weaknesses 1) LoRA, as a parameter-efficient fine-tuning method, has been widely adopted in the LLM domain. The manuscript should more clearly clarify LoRA's unique advantages in molecular property prediction tasks and how it has been specifically optimized for molecular graph data. 2) How is this work different from Pin-Tuning [1]? 3) What is the role of $\epsilon^{(k)}$ in Eq. (5)? 4) In Table 2, the performance improvement on some datasets is not significant. Additional analysis should be provided. 5) Algorithm 1 needs careful proofreading. 6) The manuscript does not provide the code. [1] Pin-Tuning: Parameter-Efficient In-Context Tuning for Few-Shot Molecular Property Prediction. Other Comments Or Suggestions: The manuscript requires more careful proofreading to avoid some nitpicks. For example: 1. In line 24, "MIT" should be corrected to "MIPT". 2. In line 216, there is a missing space after "conditions". 3. In line 341, the last numerical value should not be in boldface. Questions For Authors: Please refer to Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer Cxne: Thank you for your valuable and constructive comments. We have revised the paper according to your suggestions. **W1: LoRA, as a parameter-efficient fine-tuning method, has been widely adopted in the LLM domain. The manuscript should more clearly clarify LoRA's unique advantages in molecular property prediction tasks and how it has been specifically optimized for molecular graph data.** Our work is the first to adopt LoRA for fine-tuning GNNs in molecular property prediction. In NLP, LoRA is typically employed to cut down training costs. However, our goal is to construct graph prompts that capture the relationships between graph structure and node features, thereby optimizing LoRA specifically for molecular graph data while maintaining parameter efficiency. **W2: Difference from Pin-Tuning.** Pin-Tuning is an adapter-based fine-tuning strategy designed for few-shot learning. Unlike Pin-Tuning, our method bridges pre-training and fine-tuning via multilevel graph prompts that align node/graph structures with task requirements. Additionally, we incorporate a noise penalty to mitigate mismatches between pretrained representations and downstream tasks. **W3: What is the role of $\epsilon^{(k)}$ in Eq. (5)?** In Eq. (5), $\epsilon^{(k)}$ is a learnable parameter that modulates the contribution of the node's own features relative to its neighbors during the update at the $k$-th layer. Essentially, it scales the self-feature term, allowing the network to adaptively balance the importance of a node's current representation with the aggregated information from its neighbors. **W4: In Table 2, the performance improvement on some datasets is not significant. Additional analysis should be provided.** We acknowledge that the performance improvement of our method is modest on some datasets. We are aware that public molecular datasets are limited, and different SOTA methods tackle these challenges in various ways.
For instance, Uni-Mol incorporates 3D information, while InstructMol uses pseudo-labels to enhance model confidence. In contrast, our approach mainly aims to bridge the gap between pre-training and fine-tuning. Although our method may not achieve SOTA performance on every molecular property, the enhancements it brings in transferability and robustness are still of great significance. **W5: Algorithm proofreading.** We have thoroughly proofread and revised Algorithm 1 to ensure its clarity and correctness. **W6: The manuscript does not provide the code.** We will make the code publicly available after the submission is published. **Minor issues.** We have corrected the typos and updated the descriptions in the paper. We appreciate the reviewer for spending time reviewing our paper and offering valuable suggestions. If you have any further questions, please tell us and we are willing to address your concerns.
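To illustrate the role of $\epsilon^{(k)}$ described in the answer to W3, here is a minimal toy sketch of a GIN-style update; the graph, feature sizes, and the single-linear-layer "MLP" are our own illustrative assumptions, not the paper's implementation of Eq. (5):

```python
import numpy as np

# Toy 4-node graph: edges (0,1), (0,2), (1,3), (2,3); no self-loops.
rng = np.random.default_rng(0)
num_nodes, dim = 4, 8
H = rng.normal(size=(num_nodes, dim))       # node features h_v
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency matrix

eps = 0.1                                   # epsilon^{(k)}: learnable in practice
W = rng.normal(size=(dim, dim)) * 0.1       # stand-in for the layer's MLP

# GIN-style update: h_v <- MLP((1 + eps) * h_v + sum_{u in N(v)} h_u).
# (1 + eps) rescales the node's own features relative to its neighbors.
agg = (1.0 + eps) * H + A @ H
H_next = np.maximum(agg @ W, 0.0)           # linear + ReLU as a one-layer "MLP"
```

Setting `eps` as a learnable scalar lets training decide how strongly each node's current representation is weighted against the aggregated neighbor information, which is the balancing role described above.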
Summary: The paper introduces a novel framework called Multilevel Informed Prompt Tuning (MIPT) aimed at enhancing the performance of pretrained GNNs in molecular property prediction tasks. MIPT employs a lightweight multilevel prompt learning module to capture task-specific knowledge at both node and graph levels, while incorporating a noise penalty mechanism to mitigate the impact of irrelevant information. Experimental results demonstrate that MIPT outperforms baseline models across various molecular tasks, showcasing its effectiveness, scalability, and broad applicability, while also highlighting areas for future research, particularly in few-shot learning and stability across different graph structures. Claims And Evidence: Yes Methods And Evaluation Criteria: When comparing the fine-tuning methods, the authors only compared the FT and GPF/GPF-plus strategies, and the results of other PEFT strategies should be added to illustrate the superiority of the methods. In addition, the authors should add more benchmarks on graph prompts to compare. Theoretical Claims: I carefully examined the theory and its proof and found no major errors. Experimental Designs Or Analyses: This paper presents a comprehensive evaluation of multiple benchmarks, comparing not only the SOTA model that has been pre-trained and fine-tuned but also the SOTA model specifically for molecular property prediction. The experimental design is well-conceived and effectively addresses the research objectives. Supplementary Material: Yes Relation To Broader Scientific Literature: This paper introduces the innovative concept of molecular prompt tuning, which effectively bridges the gap between pre-training and fine-tuning without altering the molecular structure, while also enhancing interpretability. 
Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths** - This paper proposes MIPT, including multi-level prompt tuning and a noise penalty mechanism, which provide a new solution for molecular knowledge transfer. - This paper is well-motivated and easy to follow. - The contributions are significant and somewhat new. **Weaknesses** - In this paper, only the benchmark of classification tasks is studied; it is necessary to further demonstrate generality on other types of tasks, such as regression tasks. - SOTA performance in Table 2 is not achieved on all datasets. - The authors do not provide open-source code. I am willing to consider raising the score based on your rebuttal to the following questions: - The paper should provide more details on how the two LoRA modules in Sec. 4.1 and Sec. 4.2 differ. - The authors should compare their work with other graph prompt benchmarks. - LoRA has already been used in many tasks, such as NLP tasks; the technical novelty of this paper is limited. Other Comments Or Suggestions: **Minor Weaknesses** - The molecular graphs shown in the upper and lower sections of Figure 1 are not identical. - “MIT” in L430 should be “MIPT”. - L696: Implementation Details are misaligned. Questions For Authors: - What is the difference between $L_{NPP}$ and $L_{cls}$ in Figure 1? - The author emphasizes the multi-level graph prompt, essentially using the LoRA module twice, so could the increase in performance be due to an increase in the number of trainable parameters? Please specify. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer kVKq:

Thank you for your valuable and constructive comments. We have revised the paper according to your suggestions.

## Weakness

**W1: Only the benchmark of classification tasks is studied; it is necessary to evaluate other types of tasks, such as regression, to demonstrate generality.**

While the primary focus of our paper was on benchmark classification tasks, we recognize the importance of demonstrating the generality of our approach across other types of tasks, such as regression. We extended our experiments to regression tasks in Table I. The supplementary results indicate that our method maintains its superiority in the regression setting.

Table I. Comparison of the performance of different tuning strategies on regression tasks based on MAE.

| Tuning Strategy | ESOL | Lipo |
|-------|-------|-------|
| FT | 1.1262 | 0.6942 |
| GPF | 1.1125 | 0.6832 |
| GPF-plus | 1.1104 | 0.6789 |
| Ours | 0.6788 | 0.6702 |

**W2: The SOTA performance in Table 2 is not available on all datasets.**

Thank you for pointing this out. Indeed, in molecular representation learning, the scope and design of different models can vary significantly, often incorporating specialized features or data. For example, UniMol leverages 3D information, and InstructMol adopts pseudo-labeling strategies to enhance confidence. While these SOTA methods focus on addressing certain limitations in molecular representation, our primary goal is to bridge the gap between pre-training and fine-tuning. We believe that even if not all molecular properties achieve SOTA performance, the results still hold significant value.

**W3: The authors do not provide open-source code.**

We will make the code publicly available after the submission is published.
**W4: More details on how the two LoRA modules in Sec. 4.1 and Sec. 4.2 differ.**

We emphasize that the node-level LoRA adaptively learns the node features of the molecule, while the graph-level LoRA is a variant of LoRA used as a prompt to adaptively learn graph-level features.

**W5: More benchmarks on graph prompt-based methods.**

We have conducted a comprehensive comparison with existing graph prompt benchmarks. As shown in Table II, our method outperforms both GPPT and GraphPrompt on several key datasets, particularly on BBBP, Toxcast, SIDER, HIV, and BACE.

Table II. Comparison of graph prompt benchmarks for the models pre-trained by Edge Prediction, based on ROC-AUC (%).

| | BBBP | Tox21 | Toxcast | SIDER | Clintox | MUV | HIV | BACE |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| GPPT | 64.13 | 66.41 | 60.34 | 54.86 | 59.81 | 63.05 | 60.54 | 70.85 |
| GraphPrompt | 69.29 | 68.09 | 60.54 | 58.71 | 55.37 | 62.35 | 59.31 | 67.70 |
| Ours | 72.73 | 80.82 | 67.44 | 79.46 | 79.29 | 80.02 | 78.68 | 82.91 |

**W6: LoRA has already been used in many tasks, such as NLP; the technical novelty of this paper is limited.**

We agree that LoRA is an excellent and widely-used technique in many domains, including NLP. However, our paper does not claim novelty for the LoRA component itself. Instead, our main contribution lies in the multilevel graph-informed prompt and the noise penalization mechanism. This combination uniquely bridges the gap between pre-training and fine-tuning in molecular graph learning.

## Questions

**Q1: What is the difference between $L_{NPP}$ and $L_{cls}$ in Figure 1?**

$L_{cls}$ (Eq. 14) is the standard classification loss (binary cross-entropy) used to train the model on the downstream task. In contrast, $L_{NPP}$ (Eq. 18) adds a Noise Penalty Mechanism to $L_{cls}$.
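The node-level vs. graph-level distinction in W4 can be illustrated with a minimal LoRA sketch (hypothetical shapes and names; the zero-initialized `B` is standard LoRA practice, not necessarily the paper's exact setup):

```python
import numpy as np

def lora_linear(x, W, A, B, alpha=1.0):
    """Frozen weight W plus a trainable low-rank update alpha * B @ A."""
    return x @ W.T + alpha * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r, n_nodes = 16, 16, 4, 10
W = rng.normal(size=(d_out, d_in))         # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01      # trainable low-rank factor
B = np.zeros((d_out, r))                   # zero init: adapter starts as a no-op

node_feats = rng.normal(size=(n_nodes, d_in))
# Node-level adapter: applied to every node embedding.
node_out = lora_linear(node_feats, W, A, B)
# Graph-level adapter: applied to the pooled (readout) representation as a prompt.
graph_out = lora_linear(node_feats.mean(axis=0), W, A, B)
```

With `B` initialized to zero, both adapters initially reproduce the frozen layer exactly; training then moves only `A` and `B`.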
**Q2: The authors emphasize the multi-level graph prompt, which essentially applies the LoRA module twice; could the performance gain be due to the larger number of trainable parameters?**

Table III. The number of tunable parameters for different tuning strategies.

| Tuning Strategy | Size of Total Tunable Part |
|----------------|---------------------|
| FT | 1.92M-2.14M |
| GPF | 0.9K-0.21M |
| GPF-plus | 3.6K-0.192M |
| Ours | 0.175M-0.295M |

While it is true that our method employs the LoRA module at multiple levels, the performance gain cannot be attributed solely to a higher number of tunable parameters. We provided the parameter analysis in Table 5 of the appendix. Although our approach uses more parameters than GPF/GPF-plus, it still uses far fewer than FT, which tunes 1.92M-2.14M parameters yet performs worse. This indicates that our performance gains stem from the unique integration of the multilevel graph-informed prompt and the noise penalization mechanism, which effectively bridges the gap between pre-training and fine-tuning in molecular graph learning.
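As a rough illustration of how the tunable-parameter budgets above scale, a low-rank adapter grows linearly in rank and layer width (the dimensions below are hypothetical, not the paper's configuration):

```python
def lora_param_count(d_in, d_out, r):
    # A low-rank adapter adds B (d_out x r) and A (r x d_in) on top of a frozen weight.
    return r * (d_in + d_out)

# Two adapter sites (node-level and graph-level), hypothetical dims:
node_level = lora_param_count(300, 300, 8)
graph_level = lora_param_count(300, 300, 8)
total = node_level + graph_level  # ~9.6K here; real prompt sizes depend on actual dims/ranks
```

The point of the exercise: adapter size is `r * (d_in + d_out)` per site, so even several sites stay orders of magnitude below full fine-tuning of a multi-million-parameter encoder.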
Summary: This paper addresses the challenge of prompt tuning for pretrained models in molecular property prediction. It introduces a multi-level prompt learning module to enhance task adaptation and a noise penalty mechanism to improve robustness, adaptability, and efficiency. Extensive experimental evaluations demonstrate the effectiveness and superiority of the proposed approach across various molecular prediction tasks.

Claims And Evidence: Yes, this paper is well-written and its claims are well-supported.

Methods And Evaluation Criteria: Yes, the evaluation is comprehensive and the experimental setup is reasonable, though I am curious about the effectiveness of the proposed method on other graph-related applications.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: The experimental setup is well-structured and includes multiple baselines.

Supplementary Material: Yes

Relation To Broader Scientific Literature: prompt learning

Essential References Not Discussed: No

Other Strengths And Weaknesses: Please refer to the other parts and the questions below.

Other Comments Or Suggestions: Please refer to the other parts and the questions below.

Questions For Authors:
1. This method does not appear to be specifically designed for molecular property prediction. Is the prompt-tuning approach also effective for other tasks?
2. How does the fine-tuning time compare across different fine-tuning strategies?
3. How much data is required to fine-tune the model effectively?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer Swha:

Thanks for your positive review of our paper and for your thoughtful comments.

**For Q1, model versatility.**

- Thank you for your positive feedback on our approach. We appreciate your recognition of the potential of our MIPT framework for other graph-level applications. Below is a comparison of the performance, based on *RMSE*, of different tuning strategies on regression tasks.

| Tuning Strategy | ESOL | Lipo |
|-------|-------|-------|
| FT | 1.1262 | 0.6942 |
| GPF | 1.1125 | 0.6832 |
| GPF-plus | 1.1104 | 0.6789 |
| Ours | 0.6788 | 0.6702 |

- In our initial research, we focused on molecular property prediction, where MIPT demonstrated exceptional performance. To explore its broader applicability, we extended our method to regression tasks following your constructive comments. The experimental results indicate that our approach maintains superior performance on these tasks. We remain committed to further investigating and validating the framework's performance across a wider range of graph-level tasks.

**For Q2, model complexity.**

Since fine-tuning time is often difficult to compare fairly due to differences in hardware and system environment, we analyze the training costs based on the number of tunable parameters.

Table I. The number of tunable parameters for different tuning strategies. \* denotes frozen parameters.

| Tuning Strategy | Size of GNN (Encoder) | Size of Prompt | Size of Graph Linear Layer | Size of Total Tunable Part |
|----------------|---------------------|---------------|----------------------|----------------------|
| FT | 1.86M | 0 | 0.6K-0.18M | 1.92M-2.14M |
| GPF | 1.86M\* | 0.3K | 0.6K-0.18M | 0.9K-0.21M |
| GPF-plus | 1.86M\* | 3-12K | 0.6K-0.18M | 3.6K-0.192M |
| Ours | 1.86M\* | 0.115M | 0.6K-0.18M | 0.175M-0.295M |

* **FT** requires the highest number of tunable parameters (1.92M-2.14M), leading to the highest training cost.
* **GPF** and **GPF-plus** significantly reduce the number of tunable parameters by freezing the GNN encoder, making them more parameter-efficient than FT.
* **Our method** also freezes the GNN encoder but introduces a larger prompt size (0.115M). Despite this, the total number of tunable parameters (0.175M-0.295M) remains significantly lower than FT, striking a balance between efficiency and effectiveness.

In summary, **GPF and GPF-plus are the most parameter-efficient**, while **our approach achieves a middle ground**, tuning more parameters than GPF but significantly fewer than FT, likely leading to a moderate training cost.

**For Q3, the data size for fine-tuning:**

In our study, we use the entire training set during the fine-tuning phase. This approach, which fine-tunes only a subset of model parameters rather than the entire network (see our discussion in **Q2**), is common practice in this field. Prior works [1,2,3,4] also employ the full training dataset for model fine-tuning, ensuring that the model benefits from as much labeled information as possible.

We appreciate the reviewer for spending time reviewing our paper and offering valuable suggestions. If you have any further questions, please let us know and we will be glad to address your concerns.
Learning Cascade Ranking as One Network
Accept (poster)
Summary: The paper introduces LCRON (Learning Cascade Ranking as One Network), a novel end-to-end training framework for multi-stage ranking systems. LCRON formulates cascade ranking as a unified network with a new surrogate loss that aligns all stages with the overall top-$k$ selection objective. In particular, it derives a differentiable lower bound on the probability that ground-truth items survive through all stages, and uses this as the primary training signal for the entire cascade. To complement the end-to-end loss, the authors design an auxiliary per-stage loss that optimizes each stage’s Recall in isolation. This single-stage loss (inspired by prior work on differentiable ranking) ensures each model selects ground-truth items from the full candidate set, mitigating issues like gradient vanishing and tightening the bound between the surrogate objective and true cascade performance. Experiments demonstrate significant improvements with LCRON over existing cascade ranking methods. On the public RecFlow benchmark (a multi-stage recommendation dataset), LCRON achieves the best end-to-end Recall under realistic streaming evaluations. In an industrial online advertising platform, LCRON delivered substantial business gains (e.g. +4.10% revenue and +1.60% user conversions compared to the strongest baseline). These results validate that aligning training with the cascade’s global objective and enforcing cross-stage consistency leads to a more effective and robust multi-stage ranking system.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: The paper’s theoretical core is the derivation of a lower bound for the joint survival probability of ground-truth items in the cascade. This is presented by defining an approximate joint probability $\hat{P}\_{CS}$ as the product of per-stage selection probabilities and showing that $\hat{P}\_{CS}$ is a provable lower bound of the true cascade selection probability $P_{CS}$.
The steps in this derivation (Eq. 7–8 in the paper) appear to be sound and mathematically correct, relying on the fact that certain fractions are always $\leq 1$. The bound is logically valid given the independence assumption between stages’ selection events. The authors claim that the auxiliary single-stage loss “tightens the bound” – i.e., reduces the gap between the lower bound $\hat{P}\_{CS}$ and the true $P_{CS}$. In Appendix A, they analyze the gap $\Delta = P_{CS} - \hat{P}\_{CS}$ and relate it to the consistency between stages​. The theory suggests that if each stage more reliably retrieves the relevant item (as enforced by $L_\text{single}$ on the full candidate set), then $\hat{P}\_{CS}$ approaches $P_{CS}$. This reasoning is sound in principle – a more consistent cascade (each stage capturing the true item) will make the product approximation more accurate. The provided equations support this claim, though they rely on expectations over permutations and some assumptions of stage-independence. One minor concern is that the paper could better explain in words some of the theoretical conclusions. For instance, the phrase “drive the reduction of this bound”​ is used to describe the effect of the auxiliary loss – while the math shows $L_{single}$ helps close the gap, a clearer intuitive explanation of how reducing the bound gap translates to better Recall would help. Additionally, the notation $\left(P_{\pi}\right)\_i$ introduced in the theoretical section is a bit confusing​. It represents the $i$th component of the sampling distribution $P_{\pi}$, but it initially looks like a stray parenthesis. This could be clarified to avoid any doubt in the correctness of the notation (see questions for authors). Overall, the theoretical content is sound; it supports the method’s design well, with just a few places where more explicit justification or clearer notation would strengthen confidence. 
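The quantities discussed in this section can be summarized compactly (a sketch in the review's notation, with $M$ stages, quotas $q_i$, and the stage-independence assumption):

```latex
\hat{P}_{CS} \;=\; \prod_{i=1}^{M} P_{M_i}^{q_i} \;\le\; P_{CS},
\qquad
\Delta \;=\; P_{CS} - \hat{P}_{CS} \;\ge\; 0 .
```

Minimizing $L_\text{single}$ makes each stage's top-$q_i$ selection more consistent with the ground truth, which (per Appendix A) shrinks $\Delta$, so the surrogate $\hat{P}_{CS}$ tracks the true objective more closely.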
Experimental Designs Or Analyses: The experimental evaluation is comprehensive and solid. The authors use both offline experiments (on a public dataset) and online experiments (live A/B test in an industrial system), which is a strong indicator of the method’s practical value. The offline experiments include streaming evaluations (multiple train-test splits over time)​, which adds credibility by simulating a real production training pipeline. These design choices make the results more trustworthy and less prone to overfitting on a single static split. The paper includes analyses to isolate the impact of each component. Notably, an ablation study removes the end-to-end loss or the single-stage losses to show their contributions. The results (as described in the text) highlight that without $L_\text{single}$ or without $L_{e2e}$, the performance drops, confirming that both are needed for the full benefit. This analysis supports the claim that the combination of losses is effective. There’s also mention of a sensitivity analysis (in Appendix C) for hyperparameters, indicating the authors checked robustness of the results with respect to loss weighting or other parameters – a good experimental practice. One aspect not fully explored is the effect of cascade depth. The experiments seem to focus on a two-stage cascade (likely because RecFlow has two ranked stages). It would strengthen the paper to see an experiment with $M=3$ stages or more, to verify that LCRON’s benefits carry over to longer cascades. For example, adding an extra intermediate ranking stage and evaluating whether LCRON still outperforms baselines (and how the losses scale) would address any concern that the method is tuned specifically to two stages. This is a missing analysis that would be valuable, although its absence does not invalidate the current results – it’s more about demonstrating scalability. Supplementary Material: Yes, appendix A, B and C. 
Relation To Broader Scientific Literature: This paper positions itself relative to three main recent approaches in multi-stage ranking: RankFlow (Qin et al., 2022), FS-LTR (Zheng et al., 2024), and ARF (Wang et al., 2024). RankFlow introduced joint optimization via iterative feedback between stages, and FS-LTR proposed training all stages on the full cascade data (to mitigate sample bias)​. However, neither explicitly optimizes the final-stage metric. ARF introduced a differentiable surrogate loss for Recall but only for a single stage​. LCRON clearly extends these ideas: it combines the full-cascade training philosophy of FS-LTR with the metric-driven loss of ARF, achieving an end-to-end training that RankFlow attempted, but in a single unified model rather than an iterative process. This represents a notable advancement, as no existing approach simultaneously addressed both the sample bias and objective misalignment before (as the authors point out)​. The cascade ranking concept isn’t new (e.g., Wang et al., 2011 introduced an efficient cascade ranker; Gallagher et al., 2019 (ICC) optimized fused stage scores via LambdaRank​). LCRON differs by focusing on directly optimizing the selection probability of relevant items. Earlier methods often optimized proxy objectives (like combined scores or separate stage objectives) and could suffer from stage inconsistency or bias. By using differentiable sorting and a probabilistic formulation, LCRON provides a more principled end-to-end solution. This is a meaningful improvement on the foundations laid by those works, aligning with a broader trend in ranking research to move from heuristic multi-stage training to theoretically grounded joint optimization. Compared to FS-LTR (which is a strong recent baseline), LCRON adds the missing piece of objective alignment. FS-LTR trained all stages together but still used traditional loss functions per stage, whereas LCRON introduces losses that correspond to Recall directly​. 
Similarly, compared to ARF’s single-stage recall optimization, LCRON shows how to incorporate that idea across an entire cascade and handle the interactions between stages (for example, by multiplying probabilities from stage 1 and stage 2). The results in the paper demonstrate that these improvements are not just theoretical – LCRON outperforms ARF and FS-LTR in practice, indicating the combination of techniques is effective. Essential References Not Discussed: Fine-Grained Stage Alignment: One relevant recent work that was referenced but not discussed in detail is the FAA: Fine-grained Attention Alignment for cascade ranking (Li et al., 2023)​. FAA addresses the cascade ranking problem by aligning representations (attention) between stages. While LCRON approaches cascade optimization from a loss function perspective, FAA’s approach is complementary – focusing on feature consistency. A brief discussion of how LCRON’s objective alignment differs from or could be combined with representation alignment (as in FAA) would strengthen the related work section. Other Strengths And Weaknesses: Strengths: The idea of training all cascade stages as one unified network with a bound-approximating loss is a notable innovation. While it builds on elements from prior work (full-stage sampling, differentiable sorting), the particular combination – especially the novel end-to-end loss formulation ($L_\text{e2e}$) and the bound-tightening strategy with $L_\text{e2e}$ – is original. This approach has not been explicitly done before in the literature, making it a fresh contribution. The contributions have high significance for both research and industry. Improving multi-stage ranking has direct implications for large-scale recommender systems and search engines. The fact that LCRON showed measurable gains in a production environment (revenue and conversions) indicates that this method can impact real systems, not just benchmark scores. 
For the research community, LCRON’s approach could inspire more work on global objective optimization in cascades and on using differentiable ranking techniques in multi-stage pipelines. It effectively addresses a gap in the literature, so its acceptance would add valuable knowledge and potentially a new baseline for others to compare against. Weaknesses: One weakness is the complexity introduced by the method. Training with differentiable sorting and multiple loss components (even if combined via learned weights) can be harder to implement and tune than traditional methods. The paper mitigates this by using the UWL scheme to automatically balance loss weights, but the approach still requires careful engineering (e.g., setting the softmax temperature $\tau$ for NeuralSort). Another minor weakness is that some claims (like robustness across model capacities) were not directly verified, as noted earlier. Lastly, the method’s benefit was clearly shown for two-stage cascades; it’s an open question how it performs with more stages or in scenarios with dramatically different stage characteristics (this could be explored in future work). These weaknesses, however, are not fundamental flaws but areas to keep in mind when applying the method. Other Comments Or Suggestions: 1. Typos/Grammar: There are a few minor typos that should be corrected for the camera-ready version. For example, the abbreviation explanation for LCRON in the introduction reads “Learnig Cascade Ranking as One Network”​, missing an “n” in “Learning.” Such small errors should be fixed for clarity. Also, “auxillary” should be “auxiliary” (noticed in a section heading or text around L1095-L1103​). 2. The notation $(P_{\pi})\_i$​ is a bit confusing – it looks like a stray parenthesis. It would be clearer to denote this as $P_{\pi}(i)$ or something like $P_{\pi_i}$ if the intent is to index the vector $P_{\pi}$. 
Additionally, in Equation 10 (the definition of $P_{M_i}^{q_i}$ via the soft permutation matrix), it might be clearer to denote the resulting probability as $\hat{P}\_{M_i}^{q_i}$since it’s obtained through an approximation (soft sorting). Consistently using the hat notation for approximated probabilities (as done for $\hat{P}_{CS}$) would help the reader keep track of what is exact vs. relaxed. Questions For Authors: 1. In the abstract, you write “design an auxiliary loss for each stage to drive the reduction of this bound”​. Could you clarify this phrasing? Does it mean that optimizing the auxiliary loss empirically tightens the lower bound (i.e., increases $\hat{P}\_{CS}$ toward $P_{CS}$)? Please elaborate on how $L_\text{single}$ concretely contributes to reducing the gap between the lower bound and the true joint probability – for instance, is there a way to measure or prove this reduction happens as $L_\text{single}$ is minimized? A bit more intuition here would help: we see the math in Appendix A, but a clearer explanation of how the auxiliary loss “drives the bound’s reduction” would be appreciated. 2. Notation $(P_{\pi})\_i$ – is this a typo? In Section 4.2, you define $P_{\pi}$ as the probability of a sampling $\pi$, and then use the notation $(P_{\pi})\_i$​. This notation was a bit hard to parse – it looks like $P_{\pi}$ might be a vector or distribution, and you’re referring to its $i$-th component. Could you confirm what $(P_{\pi})\_i$ means exactly? If $P_{\pi}$ is a distribution over sampled sets, perhaps $(P_{\pi})\_i$ is the probability that a particular item $i$ is included? The parentheses made it read like a possible typo. Should it be $P{\pi_i}$ (the probability of a specific permutation) or $P(\pi_i)$? Clarifying this would help readers follow the derivation in Eq. 6 without confusion. 3. Have you considered or tested LCRON on a cascade with three stages (or more)? 
While RecFlow provides two-stage data, it would be insightful to know how the approach scales to an additional stage. For example, if we had a retrieval model $M_1$, an intermediate ranker $M_2$, and a final ranker $M_3$, would the LCRON formulation easily extend (with $L_\text{e2e}$ using $\prod_{i=1}^{3} P^{q_i}\_{M_i}$ and each stage having $L_\text{single}$)? If you have any preliminary results or reasoning for $M=3$, please share them. This would help convince readers (and practitioners) that LCRON generalizes beyond the two-stage scenario. Are there any expected difficulties for $M>2$ (e.g., increased gradient variance or more hyperparameters), or does it plug in seamlessly? Code Of Conduct: Affirmed. Overall Recommendation: 4
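The $M=3$ extension asked about in Q3 can be sanity-checked with a toy simulation of joint recall through successive top-$q$ cuts (hypothetical scores and quotas; this mirrors the end-to-end evaluation metric, not LCRON's training loss):

```python
import numpy as np

def joint_recall(stage_scores, quotas, ground_truth):
    """Fraction of ground-truth items surviving every stage's top-q selection."""
    surviving = np.arange(len(stage_scores[0]))
    for scores, q in zip(stage_scores, quotas):
        keep = np.argsort(-scores[surviving])[:q]  # top-q among current candidates
        surviving = surviving[keep]
    return len(set(ground_truth) & set(surviving)) / len(ground_truth)

n = 100
ideal = np.arange(n, dtype=float)  # item i scored i at every stage (perfectly consistent cascade)
r = joint_recall([ideal, ideal, ideal], quotas=[50, 20, 5], ground_truth=[99, 98, 97])
# → 1.0: with consistent stages, the top items survive all three cuts
```

Inconsistent stage scores would let ground-truth items be dropped at intermediate cuts, which is exactly the gap the end-to-end loss targets.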
Rebuttal 1: Rebuttal: Thank you for your detailed and insightful review.

For the questions:

1) Yes, the abstract intends to state that $L_{single}$ helps reduce the gap between $P_{CS}$ and $\hat{P}_{CS}$. In lines 576-588 of the appendix, we explained the conditions under which the gap reduces to 0 (i.e., the equality in Eq. 14 holds). It can be seen that consistency of the top-$q_2$ sets selected by the two models is a sufficient condition for the gap reducing to 0, so optimizing $L_{single}$ helps to reduce the gap. Indeed, explaining the role of the auxiliary loss more intuitively in the abstract and introduction would make the article easier to understand. Thank you very much for the suggestion regarding the description in the abstract. However, it is a little difficult to intuitively explain why $L_{single}$ works without a mathematical description. We welcome suggestions for clearer phrasing and will revise accordingly.

2) Thank you very much for your careful evaluation of our work. It should be $P_{\pi}$ rather than ${(P_{\pi})\_i}$, which is indeed a typo that leads to confusion. $P_{\pi}$ represents the probability distribution of sampling the set $\pi$. When $\pi$ is given, $P_{\pi}$ is a scalar. In addition, to make the description more rigorous, all $\pi\sim P_{M_1}^{q_1}$ in the text should be replaced by $\pi\sim P_{\pi}$, indicating that $\pi$ is sampled from $P_{\pi}$; $P_{M_1}^{q_1}$ only represents the mean vector of the sampling distribution. We will fix these typos.

3) Limited by our industrial scenario, we can only implement two-stage experiments there. Thanks for your recognition of the comprehensive and solid experimental part. We also believe that experiments in the M>2 setting can further verify the scalability of LCRON and enhance the depth of this paper. RecFlow contains data from more stages.
We constructed a three-stage (M=3) cascade ranking system on the public RecFlow benchmark, using its prerank_neg, coarse_neg, rank_neg, rerank_neg, and rerank_pos samples (rerank_pos is the ground truth). The three stages utilized DSSM, MLP, and DIN architectures, respectively. The results are shown below, formatted as mean±std(p-value). LCRON still outperforms the baselines (statistically significant), showing the scalability of LCRON.

| Method | Joint Recall |
|--|--|
| BCE | 0.7191±0.0005(0.0000) |
| ICC | 0.6386±0.0071(0.0000) |
| RankFlow | 0.7308±0.0005(0.0000) |
| FS-RankNet | 0.6200±0.0010(0.0000) |
| FS-LambdaLoss | 0.7319±0.0038(0.0214) |
| ARF | 0.7256±0.0004(0.0000) |
| ARF-v2 | 0.7332±0.0020(0.0076) |
| LCRON (ours) | **0.7390±0.0008** |

For the weaknesses:

1) About complexity & deployment: Differentiable sorting techniques such as NeuralSort and SoftSort typically have O(n²) complexity, the same as common LTR methods like LambdaLoss, where n refers to the number of sampled items within a single impression. In real-world applications, n is usually not too large (e.g. n=20 in our system), so LCRON incurs no additional training cost compared to the baselines. For the public experiments, we used an A800 GPU. The GPU memory used for BCE, ICC, RankFlow, FS-RankNet, FS-LambdaLoss, ARF, and LCRON is 28.4/28.9/28.9/28.4/28.9/28.4/28.9 GB, respectively; the runtime of one epoch is 5358/5376/5057/5104/5076/6145/5339 s, respectively. Deployment requires training the stages in a single TensorFlow job and loading the weights into separate meta files—this is not difficult for industrial serving teams (Appendix Lines 643–646). Moreover, $\tau$ is the sole hyper-parameter of LCRON, so its use typically requires only a small amount of hyperparameter tuning. To sum up, LCRON incurs no significant training/deployment overhead in real-world applications.

2) Scalability: Please see the response to Q3 above.
3) Regarding the robustness across capacities: Our original claim intended to highlight LCRON's consistent performance across different model architectures (e.g., DSSM+MLP, DSSM+DIN), suggesting robustness on capacities. We agree that the experiments do not directly validate robustness across model capacities. To address this, we will refine this claim of the abstract in the revised version and add a discussion in the Limitations and Future Work section to guide further exploration. For the missing reference: Thank you for pointing out this work. We will discuss FAA in the Related Work section to enhance the comprehensiveness of related work. For the typos: Thank you for pointing out these typos. Additionally, we noticed that the Recall and NDCG for each single stage in the manuscript were mistakenly swapped. All noted errors will be corrected, and we will thoroughly proofread the manuscript to ensure accuracy. Due to space limitations, we only show key results in the rebuttal text. Full additional results can be found in this anonymized github link: https://anonymous.4open.science/r/2025038594/ If you have any further questions or concerns, we will make every effort to provide further clarification.
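The O(n²) cost cited in the complexity discussion above comes from the pairwise-difference matrix in the NeuralSort relaxation (Grover et al., 2019); a minimal sketch of that operator (an illustration, not the authors' implementation):

```python
import numpy as np

def neural_sort(s, tau=1.0):
    """Relaxed (row-stochastic) permutation matrix for descending sort of scores s.
    Row i softly points at the item with the i-th largest score. O(n^2) time/memory."""
    s = np.asarray(s, dtype=float)
    n = s.shape[0]
    A = np.abs(s[:, None] - s[None, :])           # pairwise |s_j - s_k|
    coeff = n + 1 - 2 * np.arange(1, n + 1)       # n-1, n-3, ..., 1-n
    logits = (coeff[:, None] * s[None, :] - A.sum(axis=1)[None, :]) / tau
    z = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax per row
    return z / z.sum(axis=1, keepdims=True)

P = neural_sort([0.1, 3.0, 1.5], tau=0.1)
# Rows sum to 1; at low tau, the row argmaxes recover the descending order (1, 2, 0).
```

Lower $\tau$ sharpens the rows toward a hard permutation matrix at the cost of steeper gradients, which is why $\tau$ is the operator's main tuning knob.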
Summary: This paper proposes LCRON (Learning Cascade Ranking as One Network), a new method for optimizing cascade ranking systems. LCRON implements end-to-end training through two surrogate loss functions ($L_{e2e}$ and $L_{single}$) to ensure that the goals of each stage are consistent with the overall goals of the system and to enhance the interaction between stages. Experiments show that LCRON outperforms existing methods in both public benchmarks and industrial applications, significantly improving ad revenue and user conversion rates.

## Update after rebuttal

Thanks for your careful response; I consider my previous score reasonable and will keep the previous rating.

Claims And Evidence: The paper's main claims are supported by convincing evidence. Extensive experiments on four datasets demonstrate LCRON's superiority over state-of-the-art methods across multiple metrics.

Methods And Evaluation Criteria: The proposed methods make sense for the problem. LCRON innovatively introduces two novel surrogate loss functions ($L_{e2e}$ and $L_{single}$) to align the training objectives across multiple stages of cascade ranking, ensuring end-to-end optimization and enhanced interaction awareness between stages. The experimental evaluation utilizes the RecFlow benchmark dataset, which is specifically designed for cascade ranking systems and includes multi-stage samples from real-world recommendation systems. Additionally, the evaluation compares LCRON against state-of-the-art baseline methods, demonstrating its effectiveness in both public benchmarks and industrial applications.

Theoretical Claims: I checked the correctness of the proofs for theoretical claims, including the derivation of the lower bound for the survival probability of ground-truth items in the cascade ranking system. The theoretical analysis is sound and aligns with the experimental results, showing that $L_{single}$ effectively reduces the gap and enhances the overall performance of the cascade ranking system.
No significant issues were found in the theoretical claims.

Experimental Designs Or Analyses: I checked the validity of the experimental designs and analyses. The experiments are conducted on the RecFlow benchmark dataset, which is specifically designed for cascade ranking systems and includes multi-stage samples from real-world recommendation systems. The results are evaluated using the Recall@K@m metric, a standard evaluation criterion for cascade ranking systems. Additionally, the paper includes an ablation study, streaming evaluation, and online A/B testing to comprehensively validate the effectiveness of LCRON. The issues are listed below under Weaknesses.

Supplementary Material: There is no supplementary material for this paper.

Relation To Broader Scientific Literature: LCRON advances cascade ranking by addressing limitations of traditional methods. It builds on interaction-aware training (e.g., RankFlow, FS-LTR) and differentiable sorting (e.g., ARF), introducing novel losses ($L_{e2e}$, $L_{single}$) for end-to-end optimization and stage-specific supervision. These contributions align with broader trends in multi-task learning and differentiable techniques, offering a robust solution for cascade ranking systems.

Essential References Not Discussed: There are no related works that are not currently discussed in the paper.

Other Strengths And Weaknesses: The paper proposes LCRON (Learning Cascade Ranking as One Network), a framework for optimizing cascade ranking systems by introducing two new surrogate loss functions ($L_{e2e}$ and $L_{single}$) to align training objectives across stages and enable end-to-end optimization. This approach is original and significant, as it addresses key limitations in traditional cascade ranking training methods, such as misaligned objectives and insufficient stage interaction. The application to real-world recommendation and advertising systems further highlights its practical significance.

Weaknesses: 1.
Comparison with Existing Techniques: While LCRON is compared with state-of-the-art methods like RankFlow and FS-LTR, a deeper comparison with other differentiable ranking techniques or multi-stage optimization approaches could further highlight its advantages. 2. Ablation Studies: Although the paper includes an ablation study, additional experiments to isolate the impact of individual components (e.g., the role of differentiable sorting or the interaction between Le2e and Lsingle) would strengthen the claims. Other Comments Or Suggestions: I would like to learn about the authors' response to the weaknesses listed above, which may give me a clearer perspective on the paper's contribution. Questions For Authors: I would like to learn about the authors' response to the weaknesses listed above, which may give me a clearer perspective on the paper's contribution. Code Of Conduct: Affirmed. Overall Recommendation: 3
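For readers less familiar with cascade ranking evaluation, the Recall@K@m metric mentioned in this review can be sketched in a few lines. The function name, the toy scores, and the two-stage top-m/top-K setup are illustrative assumptions, not the benchmark's actual implementation:

```python
def recall_at_k_at_m(retrieval_scores, ranking_scores, ground_truth, k, m):
    """End-to-end recall for a two-stage cascade: retrieval keeps the
    top-m items, ranking then keeps the top-k of those survivors."""
    items = range(len(retrieval_scores))
    # Stage 1: retrieval passes its top-m candidates downstream.
    survivors = sorted(items, key=lambda i: retrieval_scores[i], reverse=True)[:m]
    # Stage 2: ranking selects the final top-k from the survivors only.
    final = sorted(survivors, key=lambda i: ranking_scores[i], reverse=True)[:k]
    return len(set(final) & set(ground_truth)) / len(ground_truth)

# Toy example: item 1 is relevant but filtered out by retrieval, so even a
# ranking model that scores it highly cannot recover it -- the inter-stage
# interaction issue the reviews discuss.
retrieval = [10, 0, 9, 8, 7, 6, 5, 4, 3, 2]
ranking = [5, 100, 1, 2, 3, 4, 0, 0, 0, 0]
print(recall_at_k_at_m(retrieval, ranking, {0, 1}, k=2, m=5))  # 0.5
```

The toy numbers make the train/test discrepancy concrete: a high ranking score for item 1 is useless once retrieval has already cut it.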
Rebuttal 1: Rebuttal: Thank you very much for your detailed and insightful review. For weaknesses 1 & 2: To the best of our knowledge, we have already compared LCRON with existing multi-stage optimization methods. **To further validate the effectiveness of solely using differentiable sorting techniques (i.e., aligning model predictions with label permutation matrices through CE loss)**, which shares the same underlying rationale as FS-RankNet in making models fit complete orders, **we add two new baselines**. We conduct experiments on NeuralSort [1] and SoftSort [2]. In ablation studies, we separately validated the effects of $L_{e2e}$​ and $L_{single}$. Since differentiable sorting techniques form the foundation of both $L_{e2e}$ and $L_{single}$, the results of **"NeuralSort"** and **"SoftSort"**, which evaluate standalone differentiable sorting techniques, **also serve as an ablation of LCRON**. Moreover, **we further conduct experiments using the SoftSort [2] operator as the foundation of LCRON, to study its generalization capability across different differentiable sorting operators**. All additional results are shown in the following table, formatted as mean±std(p-value). Each method was run 5 times, and we conducted t-tests between LCRON (NeuralSort) and each of the other methods. 
| Methods | Joint Recall (Golden Metric) | Recall of Ranking Model | NDCG of Ranking Model | Recall of Retrieval Model | NDCG of Retrieval Model |
| --- | --- | --- | --- | --- | --- |
| **NeuralSort** | 0.8210±0.0016(0.0000) | 0.8233±0.0007(0.0000) | 0.7138±0.0004(0.0000) | 0.9469±0.0010(0.0000) | 0.6979±0.0013(0.0000) |
| **SoftSort** | 0.8103±0.0013(0.0000) | 0.8138±0.0011(0.0000) | 0.7148±0.0006(0.0000) | 0.9386±0.0003(0.0000) | 0.7066±0.0005(0.0000) |
| **LCRON(SoftSort)** | 0.8723±0.0008(0.0615) | 0.8720±0.0009(0.0485) | 0.7246±0.0096(0.3395) | 0.9703±0.0015(0.6750) | 0.7035±0.0265(0.3782) |
| **LCRON(NeuralSort)** | 0.8732±0.0005 | 0.8731±0.0004 | 0.7292±0.0008 | 0.9700±0.0004 | 0.7152±0.0009 |

The results show that LCRON(SoftSort) can also achieve significantly better performance than baselines. These experiments verify LCRON's generalization capability across different differentiable sorting operators, also suggesting that LCRON's effectiveness could benefit from more advanced differentiable sorting techniques. We will add these results to the new version. If you have any further questions or concerns, we will make every effort to provide further clarification. References: [1] Grover, A., Wang, E., Zweig, A., and Ermon, S. Stochastic optimization of sorting networks via continuous relaxations. In ICLR, 2019. [2] Prillo, S. and Eisenschlos, J. SoftSort: A continuous relaxation for the argsort operator. In International Conference on Machine Learning, pp. 7793–7802. PMLR, 2020.
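The differentiable sorting operators discussed in this rebuttal (NeuralSort, SoftSort) relax the hard permutation matrix into a row-stochastic matrix that gradients can flow through. Below is a minimal SoftSort-style sketch, simplified from Prillo & Eisenschlos [2]; the temperature and scores are illustrative:

```python
import math

def softsort(scores, tau):
    """SoftSort-style relaxed permutation matrix: row i is a softmax over
    -|sorted(scores)[i] - scores[j]| / tau. For distinct scores it
    approaches a hard (descending) permutation matrix as tau -> 0."""
    s_sorted = sorted(scores, reverse=True)
    P = []
    for si in s_sorted:
        logits = [-abs(si - sj) / tau for sj in scores]
        mx = max(logits)  # subtract max for numerical stability
        exps = [math.exp(l - mx) for l in logits]
        z = sum(exps)
        P.append([e / z for e in exps])
    return P

P = softsort([0.1, 2.0, 1.0], tau=0.01)
# At low temperature each row is near one-hot on the descending argsort.
print([row.index(max(row)) for row in P])  # [1, 2, 0]
```

In the actual operators these relaxed permutation matrices are produced from differentiable tensor ops, so a loss defined on them (as in Le2e and Lsingle) can be backpropagated to the scoring models.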
Summary: The paper addresses two key challenges in cascade ranking: (i) the misalignment of training objectives across different stages and (ii) the discrepancy between training and test environments caused by multi-stage ranking and filtering. To overcome these issues, the authors propose a novel loss function comprising: an end-to-end term that optimizes the global objective of the cascade ranking system, and stage-wise terms that help models adapt to changes in the sample space distribution introduced by upstream ranking stages. The paper adopts the model architecture for all stages from RecFlow [1], focusing its contributions on refining the training loss. [1] RecFlow, ICLR'25 Claims And Evidence: The paper introduces LCRON, a novel loss function for cascade ranking that simultaneously addresses two key challenges: misalignment of training objectives across stages and discrepancies between training and testing environments. By incorporating an end-to-end loss term alongside stage-wise losses, the method aims to improve coordination across ranking stages and enhance overall system performance. Experimental results indicate that LCRON achieves the highest end-to-end recall among the tested methods. However, a key limitation of the evaluation is that all baseline methods share the same model architecture and differ only in their loss functions. While this ensures a controlled comparison, it leaves open the question of whether the improvements stem from the proposed loss function itself or from interactions with the underlying architecture. Additionally, the most competitive baseline, FS-LambdaLoss, outperforms LCRON in both ranking and retrieval stages in terms of Recall, suggesting that while LCRON improves joint performance, its per-stage effectiveness varies. Furthermore, the overall gains in end-to-end recall, though positive, are relatively small, and validating them with statistical significance tests would strengthen the claims. 
Methods And Evaluation Criteria: The benchmark datasets used in this paper are well-suited to the cascade ranking problem, providing a realistic testbed for evaluating multi-stage ranking systems. However, the authors limit their evaluation to a two-stage setup, even though the RecFlow benchmark includes four stages. While the approach can theoretically be extended to more stages, it remains unclear how the proposed surrogate loss function performs across all stages of a fully deployed cascade ranking system. Evaluating LCRON in a true multi-stage setting would provide deeper insights into its effectiveness and scalability. Theoretical Claims: The paper provides a clear explanation of the proposed loss function. However, I am unable to verify the theoretical claims due to my limited expertise in extensive mathematical proofs. Experimental Designs Or Analyses: Please see "Claims And Evidence" and "Methods And Evaluation Criteria" Supplementary Material: I have reviewed the experiments elaborated in the supplementary. Relation To Broader Scientific Literature: The paper brings up two important challenges in cascade ranking and suggests using surrogate loss functions to tackle them. Since cascade ranking is widely used in industrial recommendation systems, this is a valuable contribution to the field. However, the approach would be more convincing with a stronger set of experiments, especially testing across more stages and a broader set of architectures, and validating the improvements with significance tests. Essential References Not Discussed: The paper discusses key references related to cascade ranking and provides a solid foundation for its contributions. It should be noted, however, that some closely related work on the interaction between retrieval and ranking stages, such as Stochastic Retrieval-Conditioned Reranking (Zamani et al., ICTIR'22), is not covered. 
Including such references could provide additional context and help position the proposed approach within the broader landscape of retrieval and ranking research. Other Strengths And Weaknesses: Strengths: - The paper identifies key challenges in cascade ranking - misalignment of training objectives across stages and discrepancies between training and test environments; and introduces a novel loss function which optimizes both end-to-end performance and stage-wise alignment through a novel surrogate loss function. - The method is evaluated on both public benchmark and an industrial dataset, with online A/B testing showing a 4.1% increase in revenue and a 1.6% increase in user conversions, highlighting its practical impact. Weaknesses: - While the RecFlow dataset includes four ranking stages, the paper evaluates LCRON only on a two-stage setup, leaving it unclear how well the method scales to fully deployed multi-stage cascade ranking systems. - All baselines share the same model architecture and differ only in their loss functions, limiting the scope of comparison, and the strongest baseline (FS-LambdaLoss) outperforms LCRON in both retrieval and ranking stages in terms of Recall, raising questions about whether the overall gains justify the added complexity. - The reported improvements in end-to-end recall are relatively small, and the paper does not provide statistical significance tests to confirm their robustness, making it difficult to determine whether the gains are meaningful or within the margin of variance. Other Comments Or Suggestions: -- Questions For Authors: Please see above sections Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks very much for the detailed and insightful review. For the weaknesses: 1) We adopted a two-stage setup (retrieval + ranking) because two-stage cascading represents the most classic form of cascade ranking, as seen in previous works like FS-LTR. From a practical perspective, to the best of our knowledge, real-world online cascade ranking systems typically employ 3–4 stages to achieve an optimal trade-off between effectiveness and efficiency. Due to real-world constraints (e.g., business requirements, team organizations), retrieval + preranking often serves as the most feasible scenario for validating and implementing multi-stage joint optimization. Thus, we believe our experiments retain generality and sufficiently demonstrate the value of our method in real-world applications. Nevertheless, we acknowledge that testing LCRON on scenarios with >2 stages could further validate its scalability and enhance the depth of this work. Due to some limitations of our industrial scenario, we cannot deploy experiments with three or more stages online. Instead, we constructed a three-stage cascade ranking system on RecFlow and verified the effectiveness of LCRON. **The detailed settings and results are in the response to reviewer VkMd. Experimental results show that LCRON still significantly outperforms baselines in this three-stage setting**. 2) **a)** We **aligned all baseline model architectures and isolated parameters across stages** strictly (as illustrated in lines 301-304). This **ensures performance improvements are not attributed to parameter sharing or architectural interactions**. Our public experiments used DSSM and DIN (with attention), while online experiments used DSSM and MLP—**both setups cover mainstream architectures (DSSM for Retrieval, MLP for Pre-ranking, attention-based models [e.g., DIN] for Ranking) in recommendation/advertising cascade systems**. We believe this sufficiently demonstrates LCRON’s generalization across architectures. 
**b)** In Table 2, the metrics under "ranking" and "retrieval" reflect individual model performance on full samples, not their combined cascade performance. **The comparison between FS-LambdaLoss and LCRON highlights that neglecting inter-stage interactions may lead to suboptimal cascade results despite strong standalone model performance**. This confirms that LCRON prioritizes end-to-end cascade effectiveness over individual model optimization. Note that end-to-end recall is the golden metric for cascade ranking, as illustrated in lines 314-322. **c) About the complexity, please see the response to reviewer VkMd (for weakness 1)**. 3) We prioritized significance testing (via unpaired t-tests with 5 runs per method) for ablation studies (Table 3), where smaller performance gaps required rigorous validation. The larger margins in baseline comparisons (Table 2) inherently imply statistical significance. Furthermore, online experiments confirmed LCRON’s statistically significant superiority over strong baselines like FS-LTR and ARF-v2 in public benchmarks. We believe these results collectively justify the robustness of Table 2’s conclusions. However, to address potential concerns about the statistical significance of improvements in Table 2, we conducted additional significance tests on the baselines. **The results are shown in the following table, formatted as mean±std(p-value). LCRON demonstrates statistically significant (p-value<0.05) superiority over all baselines on joint recall (namely end-to-end recall)**.

|Method/Metric|JointRecall@10@20↑|
|-------------|-------------------|
|BCE|0.8539±0.0006(0.0000)|
|ICC|0.8132±0.0003(0.0000)|
|RankFlow|0.8647±0.0007(0.0000)|
|FS-RankNet|0.7881±0.0007(0.0000)|
|FS-LambdaLoss|0.8666±0.0016(0.0004)|
|ARF|0.8608±0.0006(0.0000)|
|ARF-v2|0.8678±0.0009(0.0000)|
|LCRON(ours)|0.8732±0.0005|

For the missing reference: Thank you for highlighting this work. 
We notice that it focuses on improving "retrieval + ranking" cascade systems but employs a non-learnable retrieval component (BM25) paired with BERT for ranking. Specifically, it jointly optimizes the number of retrieved documents N and the ranking model. This differs from our focus on joint learning across fully learnable cascade stages. We will discuss this work in the Related Work section to clarify its distinctions from our approach. &nbsp; Due to space limitations, we only show the key results (e.g., end-to-end recall) in the rebuttal text. Full additional results can be found in this anonymized github link: https://anonymous.4open.science/r/2025038594/ If you have any further questions or concerns, we will make every effort to provide further clarification.
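The unpaired t-tests reported in this rebuttal can be reproduced from the mean±std summary statistics alone. The sketch below uses Welch's t statistic; the rebuttal does not state which t-test variant was used, so that choice is an assumption, and the numbers are the FS-LambdaLoss vs. LCRON joint-recall values from the table:

```python
import math

def welch_t(mean1, std1, n1, mean2, std2, n2):
    """Welch's t statistic for two independent samples, computed from
    summary statistics only (stds assumed to be sample stds)."""
    se = math.sqrt(std1**2 / n1 + std2**2 / n2)
    return (mean1 - mean2) / se

# LCRON vs. the strongest baseline (FS-LambdaLoss), 5 runs each.
t = welch_t(0.8732, 0.0005, 5, 0.8666, 0.0016, 5)
print(round(t, 2))  # 8.8, far above any conventional critical value
```

A t statistic this large is consistent with the near-zero p-values the authors report, even with only 5 runs per method.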
Summary: The paper "Learning Cascade Ranking as One Network" introduces LCRON, a novel approach for training cascade ranking systems in an end-to-end manner. Traditional cascade ranking architectures suffer from misalignment between training objectives across different stages and discrepancies between training and testing environments. The paper proposes a new surrogate loss function that optimizes the lower bound of the survival probability of ground-truth items through all stages, ensuring a better alignment of training objectives. The authors also introduce an auxiliary loss for each stage to improve robustness. Experimental results on public (RecFlow) and industrial benchmarks demonstrate that LCRON outperforms existing approaches in terms of recall and conversion metrics, achieving a 4.1% increase in advertising revenue and a 1.6% increase in user conversions in a real-world deployment. Claims And Evidence: The main claims of the paper are: - LCRON aligns training objectives across all cascade ranking stages: Supported by the proposed surrogate loss function, which explicitly optimizes the recall of the entire system rather than individual stages. - LCRON improves end-to-end recall compared to existing methods: Empirical results from RecFlow and industrial benchmarks confirm higher recall scores. - LCRON enhances commercial performance in real-world applications: A/B testing in a real-world advertising system shows notable revenue and conversion improvements. While these claims are mostly well-supported, the empirical results focus primarily on recall metrics, and it would be useful to evaluate additional ranking quality metrics (e.g., precision, diversity). Methods And Evaluation Criteria: The paper presents a clear research question: How can cascade ranking be trained end-to-end while aligning training objectives across all stages? 
The hypothesis, which proposes that a surrogate loss can improve ranking alignment and recall, is consistent with the methodology and results. The experimental design is appropriate for addressing this question. *Baselines*. The paper compares LCRON against state-of-the-art methods such as BCE, ICC, RankFlow, FS-LTR, and ARF. The chosen baselines are well-justified, covering both simple (BCE) and advanced (RankFlow, FS-LTR) approaches. However, it is unclear whether baseline results were obtained from previous papers or rerun under the same conditions. Explicit clarification on this would improve reproducibility. *Evaluation Metrics*. The primary evaluation metric is Recall@K@M, which is relevant for cascade ranking. The authors also report NDCG@K as a secondary metric. While these metrics align with the research goal, additional discussion on trade-offs between precision, recall, and ranking diversity would be beneficial. *Data Collection and Preprocessing*. The dataset choice (RecFlow) is appropriate, as it contains multi-stage ranking samples. Data preprocessing steps (e.g., filtering of interactions) are not detailed. If any pruning was performed, the justification should be included. *Data-Splitting and Generalization*. The train-test split is conducted over time, which is standard for ranking models. All models appear to be trained on the same splits, but cross-validation techniques are not explicitly mentioned. *Hyperparameter Optimization*. The optimization strategy is briefly discussed but lacks detail on parameter ranges and tuning procedures. It is unclear how many configurations were tested or how hyperparameters were selected, which could impact result reproducibility. *Experiment Execution and Sensitivity Analysis*. The experimental setup appears fair, but hardware details (e.g., GPU models, memory) are missing. 
There is no explicit discussion of statistical significance tests (e.g., p-values or confidence intervals) in Table 2, which would strengthen the analysis. Sensitivity analysis is limited to the temperature parameter in differentiable sorting, but other key hyperparameters (e.g., learning rate, batch size) are not explored. Theoretical Claims: The theoretical justification for LCRON’s surrogate loss function appears sound. The derivation of the lower bound on the survival probability of ground-truth items is well-structured and aligns with the recall optimization objective. Experimental Designs Or Analyses: The experimental design is strong and well-structured, with: - Comparisons against state-of-the-art baselines (BCE, ICC, RankFlow, FS-LTR, ARF) - Public and industrial benchmarks - Ablation studies to test the contribution of each component - Streaming evaluation to simulate real-world training conditions A few areas for improvement are (i) Computational cost analysis: how does LCRON’s training time compare to existing methods? (ii) Robustness to dataset shifts: given that RecFlow spans multiple time periods, an explicit analysis of temporal generalization would be beneficial. Supplementary Material: the supplementary material includes the derivation of the lower bound on the survival probability of ground-truth items, the implementation details for online experiments, as well as sensitivity analyses on hyperparameters. The sensitivity analysis of the temperature parameter in differentiable sorting is useful but could be extended to include other key hyperparameters (e.g., learning rate, batch size). Relation To Broader Scientific Literature: The paper builds upon and improves several prior works in cascade ranking, particularly: LambdaRank and Learning-to-Rank approaches, Differentiable sorting techniques (NeuralSort, SoftSort), Interaction-aware training methods (RankFlow, FS-LTR). The work is well-grounded in prior literature. 
Essential References Not Discussed: While the paper covers major references, it could benefit from discussing additional works on differentiable sorting and ranking, such as: - Blondel, M., Teboul, O., Berthet, Q., & Djolonga, J. (2020, November). Fast differentiable sorting and ranking. In International Conference on Machine Learning (pp. 950-959). PMLR. - Pobrotyn, P., & Białobrzeski, R. (2021). Neuralndcg: Direct optimisation of a ranking metric via differentiable relaxation of sorting. arXiv preprint arXiv:2102.07831. - Cuturi, M., Teboul, O., & Vert, J. P. (2019). Differentiable ranking and sorting using optimal transport. Advances in neural information processing systems, 32. - Thonet, T., Cinar, Y. G., Gaussier, E., Li, M., & Renders, J. M. (2022, June). Listwise learning to rank based on approximate rank indicators. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 8, pp. 8494-8502). Other Strengths And Weaknesses: Other Strengths: - Practical Relevance: The method has clear real-world applicability in advertising and recommender systems. - Clear Paper Structure: The organization of the paper makes it easy to follow. - Empirical Strength: The industrial deployment and A/B testing strengthen the validity of claims. Other Weaknesses: - Computational Complexity Not Analyzed: The additional cost of using differentiable sorting techniques is not explicitly measured, which could impact scalability in large-scale applications Other Comments Or Suggestions: - Consider including a formal convergence proof for LCRON's optimization process to demonstrate that it will always converge. Additionally, it would be helpful to analyze the conditions under which the optimization may fail or become unstable. - It would strengthen the paper to provide formal generalization bounds for LCRON, especially in highly dynamic ranking environments, to guarantee that the model will generalize well to unseen data, beyond the empirical results. 
Questions For Authors: 1. why didn't you evaluate additional ranking quality metrics (e.g., precision, diversity)? 2. were baseline results obtained from previous papers or rerun under the same conditions? 3. can you detail which data preprocessing steps (e.g., filtering of interactions) were performed? If any pruning was performed, the justification should be included. 4. What strategy was used for hyperparameter tuning and what were the specific ranges considered for key hyperparameters? How many configurations were tested during tuning, and what criteria were used to select the final hyperparameters? Were all baselines tuned under the same conditions to ensure fair comparisons? 5. Can you specify the computational resources used for training, such as GPU models, memory, and runtime per experiment? 6. Why are statistical significance tests (e.g., p-values, confidence intervals) missing from Table 2? Can you provide evidence to confirm the robustness of the reported improvements? Code Of Conduct: Affirmed. Overall Recommendation: 4
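For reference, the NDCG@K metric this review discusses alongside Recall@K@M follows the standard textbook definition sketched below; the paper's exact gain and discount choices are not specified here, so this is only an illustrative version:

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k for a ranked list of relevance labels: DCG of the list
    divided by the DCG of the ideal (descending) ordering."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Ranked relevances [1, 0, 1]: the second relevant item sits at rank 3
# instead of rank 2, so NDCG@3 falls slightly below 1.
print(round(ndcg_at_k([1, 0, 1], 3), 3))  # 0.92
```

Unlike recall, NDCG is position-sensitive, which is why the paper reports it as a secondary metric for the individual ranking stages.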
Rebuttal 1: Rebuttal: Thanks very much for the detailed and insightful review. For the questions: 1) In cascade ranking systems, **we can often explicitly define the ground-truth, thus optimizing the end-to-end recall directly maximizes selection efficiency. So we treat it as the golden metric**. Other intermediate metrics, such as precision or diversity, are less critical in this context. Recall and NDCG for a single stage are also intermediate metrics for observation and analysis. 2) All baseline results were obtained by re-implementing or adapting the source code under the same experimental conditions as our proposed LCRON, rather than directly citing results from previous papers. **Since none of the baseline methods had been evaluated on the RecFlow dataset under cascade ranking settings, we re-implemented them ourselves: for FS-RankNet & FS-LambdaLoss, we adapted standard implementations from the TF-Ranking library to PyTorch versions. For other baselines, when open-source code was available and runnable, we used it directly; otherwise, we implemented the baselines based on the descriptions in their respective papers**. All methods were evaluated using the same common hyperparameters (lr and batch size, optimizer, initialization method, etc.) to ensure fair comparison. 3) For the public experiments, the training data organization for the two-stage cascade ranking is described in lines 284-297. There is no additional data pre-processing step. I assume you might be asking how the data filtering is performed in a cascade ranking system, i.e., how many items the retrieval model selects to pass to the ranking model and how many items the ranking model then selects as the final output. These specific settings are detailed in lines 298–312 of the draft. 4) **Yes, we performed a grid search on the main hyperparameters for all methods to ensure fair comparisons. For the baselines, we reported the best results, and for LCRON, we included a sensitivity analysis in the appendix. 
Specifically, the parameters we tuned include: temperature for ICC (0.05,0.1,0.5,1.0); tau for ARF and LCRON (1,20,50,100,200,1000); alpha (0,0.25,0.5,0.75,1) for RankFlow; and top-k (10,20,30,40) and smooth factor (0,0.25,0.5,0.75,1) for FS-LambdaLoss**. BCE and FS-RankNet do not have independent hyperparameters. Regarding the learning rate (lr) and batch size, since all methods used the same setting for fair comparison and results on industrial applications are typically not very sensitive to these hyperparameters, we did not experiment with different lr and batch sizes. **Considering your concern, we ran an additional four sets of experiments to validate performance under different LR and batch sizes, rerunning all eight methods for each set, resulting in 32 experimental runs**. Due to time limitations, each method was run only once. Statistical significance can be assessed by referring to the mean±std and p-value from other significance tests. Due to space limitations, we only show the End-to-End Recall under different LR and batch sizes:

|Method|bs=512,lr=0.001|bs=2048,lr=0.001|bs=512,lr=0.02|bs=2048,lr=0.02|
|-----|-----|-----|------|------|
|BCE|0.8181|0.8106|0.8637|0.8582|
|ICC|0.7644|0.7459|0.4972|0.5061|
|RankFlow|0.8326|0.8166|0.8768|0.8722|
|FS-RankNet|0.7537|0.7533|0.7935|0.7884|
|FS-LambdaLoss|0.8289|0.8194|0.8777|0.8726|
|ARF|0.8288|0.8174|0.8704|0.8667|
|ARF-v2|0.8302|0.8202|0.8776|0.8725|
|LCRON|**0.8396**|**0.8247**|**0.8841**|**0.8785**|

**It can be seen that our method achieves consistently optimal results, demonstrating the robustness of our approach**. These sensitivity analysis details and the hyperparameter tuning specifics for the baselines will be added to the appendix. 5) Please refer to our response to Reviewer VkMd (for weakness 1). 6) Please refer to our response to Reviewer VSAL. The improvement of LCRON over other baselines is statistically significant. For the missing reference: Thank you for highlighting these works. 
[1] is already introduced in related work. [1] and [3] are differentiable sorting methods, but they don't produce the permutation matrix, making them incompatible as foundation components for LCRON compared to NeuralSort and SoftSort. We will discuss this in related work. Our current comparisons include state-of-the-art single-stage recall optimization methods (e.g., ARF) and joint learning approaches for cascade ranking (ICC, RankFlow, FS-RankNet, FS-LambdaLoss). [2] and [4] focus on optimizing ranking metrics (e.g., NDCG and Precision) for single-stage models, which differ from our end-to-end cascade ranking objective. We will explicitly highlight this distinction in the related work section. Due to space limitations, we only show the key results (e.g., end-to-end recall) in the rebuttal text. Full additional results can be found in this anonymized github link: https://anonymous.4open.science/r/2025038594/ If you have any further questions or concerns, we will make every effort to provide further clarification.
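The grid search described in this rebuttal can be sketched as follows. The `evaluate` callback is a hypothetical stand-in for a full train-and-evaluate run, and the tau grid mirrors the values the authors list for ARF and LCRON:

```python
from itertools import product

def grid_search(grid, evaluate):
    """Exhaustive grid search: try every combination of values and keep
    the configuration with the highest evaluation score."""
    keys = list(grid)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in objective: pretend end-to-end recall peaks at tau=50.
grid = {"tau": [1, 20, 50, 100, 200, 1000]}
best, _ = grid_search(grid, evaluate=lambda cfg: -abs(cfg["tau"] - 50))
print(best)  # {'tau': 50}
```

With several hyperparameters, `product` enumerates the Cartesian product of all value lists, which is why the rebuttal's four lr/batch-size settings times eight methods yields 32 runs.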
HEAP: Hyper Extended A-PDHG Operator for Constrained High-dim PDEs
Accept (poster)
Summary: This paper focuses on solving high-dimensional time-dependent PDEs with constraints. Traditional methods suffer from the curse of dimensionality. The proposed method effectively solves the problem by combining quadratic programming (QP) with NeuralODE. The approach can be summarized as: 1. Reformulating the PDE with constraints into a QP problem; 2. Modifying the existing QP solver APDHG with learnable parameters to improve efficiency, naming the method HEAP; 3. Training solution ansatz parameters and HEAP parameters with the NeuralODE method in a differentiable way by minimizing residuals. ## update after rebuttal I had a specific question on how to measure performance on BS equations without an explicit solution, but the authors did not reply. This is my main concern. Hence, I change my score to 3. Claims And Evidence: Yes. The experiments show their method being effective in a clear and convincing way. Methods And Evaluation Criteria: Yes. The experiments are benchmarked on heat equations. Theoretical Claims: I checked both Theorem 3.1 & 3.2 and found no issue. Experimental Designs Or Analyses: Yes, I find no issue with the experimental design. Since there is little work on solving high-dim PDEs with constraints, the authors only chose one baseline for demonstration, and I understand their choice. Supplementary Material: N.A. Relation To Broader Scientific Literature: N.A. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: **Originality** is high. There is little work on high-dim PDEs with constraints, for several reasons. The curse of dimensionality rules out traditional numerical methods. Meanwhile, constraints are not well handled by methods such as PINNs or neural operators. Therefore, this work innovatively tackles the problem by combining a traditional QP method with NeuralODE, which fills the blank to some degree. The main **weakness**, in my opinion, concerns the experiments. The authors chose the constraints as upper and lower bounds for $\theta_t$. 
However, it seems the proposed method HEAP should not be limited to such simple constraints. It would strengthen the effectiveness of HEAP if some other constraints were demonstrated, for example, some typical linear constraints $H[u](x, t)\geq 0$ as indicated in equation (1). Also, does HEAP have any boundaries for its application? Are there any constraints that HEAP cannot handle? Typos: 1. Line 164, $\Omega_A$ -> $\Omega_H$? 2. Line 127 (right), "Constraint $A$" -> Constraint $H$? Other Comments Or Suggestions: N.A. Questions For Authors: 1. As discussed in the weaknesses, are there any linear constraints that HEAP cannot handle? Code Of Conduct: Affirmed. Overall Recommendation: 3
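For context on the optimization machinery behind HEAP, below is a minimal sketch of vanilla PDHG on a toy QP with one linear inequality constraint ($\min \frac{1}{2}\|x\|^2$ s.t. $x_1 + x_2 \geq 1$). This is the plain, non-adaptive algorithm, not the paper's A-PDHG variant with learnable parameters; the problem data and step sizes are illustrative:

```python
def pdhg_qp(n_iters=5000, tau=0.5, sigma=0.5):
    """Vanilla PDHG for  min 0.5*||x||^2  s.t.  A x <= b,
    with A = [-1, -1], b = -1 (i.e. x1 + x2 >= 1).
    The optimum is x = (0.5, 0.5)."""
    A, b = [-1.0, -1.0], -1.0
    x, y = [0.0, 0.0], 0.0  # primal point, dual multiplier (y >= 0)
    for _ in range(n_iters):
        # Primal step: closed-form prox of 0.5*||x||^2 + y*(A x).
        x_new = [(xi - tau * ai * y) / (1.0 + tau) for xi, ai in zip(x, A)]
        # Dual ascent on the extrapolated point, projected onto y >= 0.
        xbar = [2.0 * xn - xo for xn, xo in zip(x_new, x)]
        y = max(0.0, y + sigma * (sum(a * xb for a, xb in zip(A, xbar)) - b))
        x = x_new
    return x

x = pdhg_qp()
print([round(v, 3) for v in x])  # close to [0.5, 0.5]
```

The step sizes satisfy the standard PDHG condition tau\*sigma\*||A||^2 < 1, which guarantees convergence for this convex problem; HEAP's contribution is, roughly, making such iteration parameters learnable and unrolling the iterations into a network.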
Rebuttal 1: Rebuttal: We appreciate your insightful comments and suggestions. Here are our responses to your questions: > Q1: It would strengthen the effectiveness of HEAP if some other constraints were demonstrated A1: The additional experiment results are reported in the rebuttal supplementary material (RSM), available at <https://anonymous.4open.science/r/HEAP/RSM.pdf>. We provide four additional experiments, two of which are Black-Scholes with no-arbitrage constraints (Eq. 1-5 of RSM), and the other two are Burgers and reaction-diffusion with rate-of-change constraints. The results are shown in Tables 1, 2, 6, 7 of RSM. --- > Q2: Does HEAP have some boundaries for applications? Are there any constraints that HEAP cannot handle? A2: HEAP is based on a PDHG algorithm variant, designed for convex constraints. Therefore, HEAP can handle any convex constraints in principle. For more general non-convex constraints, the convergence of HEAP is not guaranteed. We will add this discussion and fix the typos in the final version once the paper update is allowed. --- Rebuttal Comment 1.1: Comment: The Black-Scholes example is interesting. By the way, what is the practical background for the high-dimensional BS equation? A basket of assets? If so, what is the exact form of the solution to the high-dim BS equation? --- Reply to Comment 1.1.1: Comment: Thank you for this insightful question. The high-dimensional Black-Scholes (BS) equation is indeed used for the pricing problem of a basket of financial derivatives. We note that the BS equation in its classical form and in some special cases has a closed-form solution. Yet to the best of our knowledge, in our case with default risk, due to the piecewise nonlinear functions, the nonclassical boundary conditions, and the high dimension, there are no such closed-form solutions. Reference: J. Han, A. Jentzen, & W. E, Solving high-dimensional partial differential equations using deep learning, Proc. Natl. Acad. Sci. U.S.A. 115 (34) 8505-8510
Summary: The paper introduces HEAP (Hyper Extended Adaptive PDHG), a new neural operator designed to solve constrained high-dimensional PDEs, where solutions must meet additional constraints beyond the governing equations. HEAP learns the evolution of PDE parameters and formulates this process as a quadratic programming (QP) problem. To solve this efficiently, the method unrolls the adaptive primal-dual hybrid gradient (APDHG) algorithm into the neural network. This approach improves efficiency while ensuring theoretical guarantees for constrained optimization. Experiments on various high-dimensional PDEs show that HEAP outperforms existing neural operators in accuracy and efficiency. Claims And Evidence: Why is a constraint so important for solving a PDE? If the PDE and proper initial and boundary conditions are given, the solution is fixed; what does a constraint mean here? Are you doing optimization? Could you make the motivation and problem setting clearer? Methods And Evaluation Criteria: "The reference solution is obtained by either the explicit solution or the PINN-based numerical solver"; since the method solves equations, we should only compare with high-order numerical solutions or exact solutions. Are the authors doing so? Theoretical Claims: 3.1 to 3.3 look fine to me. Experimental Designs Or Analyses: Is there any more complex problem? Do all the test cases have exact or trusted numerical solutions? Supplementary Material: The paper does not include any. Relation To Broader Scientific Literature: The contribution is unclear to me; it needs to be clarified what the constraints mean. Are they residuals? If so, aren't we solving them? Essential References Not Discussed: 'Evolutional deep neural network' needs to be cited and discussed. Other Strengths And Weaknesses: I am not sure about the motivation of the paper: what does the constraint mean here? Why do we need it for solving a PDE rather than for optimization? Other Comments Or Suggestions: None.
Questions For Authors: What does the constraint mean here? Why do we need it for solving a PDE rather than for optimization? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the insightful question. The additional experiment results are reported in the rebuttal supplementary material (RSM), available at <https://anonymous.4open.science/r/HEAP/RSM.pdf>. Regarding your question, we provide the following explanations: > Q1: What does a constraint mean here, if the PDE has a fixed solution? A1: The purpose of constraints, both equality and inequality, is to regularize the solution space and select the meaningful PDE solutions when multiple solutions exist. Firstly, in a general sense, the IC/BCs are also constraints of the PDE, albeit equality constraints. Secondly, in practice, proper IC/BCs are sometimes not easy to obtain, while inequality constraints are usually more accessible. For example, under which IC/BCs the Navier-Stokes equation has a unique solution is still an open problem (one of the Millennium Prize Problems) [1]. In this case, inequality constraints are added to ensure a physically reasonable solution, e.g., the upper-bounded total energy (Eq. 7 in [1]). Constraints do affect the solution of PDEs, as we show in Fig. 2 of RSM, where the solution of the Black-Scholes (BS) equation with different financial constraints is compared. The BS equation is a fundamental PDE in quantitative finance (Nobel Prize 1997) [2], which does not have a unique solution in practice. Thirdly, constrained PDE problems have been formulated and studied in previous AI4PDE works [3, 4]. However, the existing methods are not scalable to high-dimensional problems, which is the focus of our work. --- > Q2: Discuss the Evolutional deep neural network (EDNN) paper A2: The EDNN also formulates the evolution of surrogate model parameters as an optimization problem, but it 1) solves the problem with a numerical solver instead of a neural network, 2) considers only equality constraints, and 3) is not directly scalable to high-dimensional problems. We will add a comparison with EDNN in the final version. --- > Q3: Is there any more complex problem?
Do all the test cases have exact or trusted numerical solutions? A3: We provide four additional experiments, two of which are Black-Scholes with no-arbitrage constraints (Eq. 1-5 of RSM), and the other two are Burgers and reaction-diffusion with rate-of-change constraints. The results are shown in Tab. 1, 2, 6, 7 of RSM. None of the additional test cases has an exact solution or a trusted numerical solution. We have added all the above discussions and the RSM to the draft, but we are not able to provide the updated draft here due to the rebuttal rules this year. We hope this response addresses your concerns, and please let us know if you have any further questions. Reference: [1] Fefferman, Charles L. "Existence and smoothness of the Navier-Stokes equation." The millennium prize problems 57.67 (2006): 22. [2] Hull, John C., and Sankarshan Basu. Options, futures, and other derivatives. Pearson Education India, 2016. [3] Hoshisashi, Kentaro, Carolyn E. Phelan, and Paolo Barucca. "Physics-Informed Neural Networks for Derivative-Constrained PDEs." ICML 2024 AI for Science Workshop. [4] Moro, Viggo, and Luiz FO Chamon. "Solving Differential Equations with Constrained Learning." The Thirteenth International Conference on Learning Representations.
Summary: This work provides a principled way of handling constraints in the framework of the Control-based solution operator (CSO) for learning PDE solution operators under constraints, called HEAP. In the original CSO, the evolution of the network parameter $\theta$ is governed by a neural network $V$ called the neural control field that directly maps the network parameter $\theta$ to its time derivative $\dot{\theta}$. Constraints can be incorporated via soft penalties. This paper instead handles the problem by performing constrained quadratic programming (QP). Assuming the constraints are linear, the CSO constrained optimization objective can be identified as a QP problem. This paper modifies an existing QP solver, the adaptive primal-dual hybrid gradient (APDHG), for the CSO QP problem, where the matrices $\{W\}$ corresponding to the linear maps to and from a linear latent space, the initial point of the QP solver, and the step sizes are predicted by a NN $V$. The output of $V$ is then used in the modified APDHG, which runs for a fixed number of iterations and produces the solution, which is the time derivative $\dot{\theta}$. By identifying the modified APDHG iterations as layers, $V$ can be seen as a hyper-network that predicts $\dot{\theta}$ from $\theta$. The paper demonstrates the superior performance of HEAP over CSO in terms of accuracy, measured by L2 relative error, and constraint satisfaction on a set of constrained PDEs of dimension up to 20. Claims And Evidence: Yes Methods And Evaluation Criteria: The paper tested their algorithm HEAP against the baseline method CSO on (1) constrained heat equations, (2) unconstrained heat, (3) Burgers, (4) reaction–diffusion PDEs in 5, 10, 15, and 20 spatial dimensions, where the training set is generated by sampling the initial condition $\theta_{0}$ from a Gaussian distribution, and the test set is generated by sampling $\theta_{0}$ from another Gaussian distribution.
The main evaluation criteria are: - PDE residual error (L2REpde) of the learned final solution, - Constraint violation (L2REcon) for physically constrained PDEs. I think the PDEs chosen and the evaluation criteria are valid; however, the dimension is only up to $20$, which still seems relatively low, and the authors did not report any metrics on computational efficiency, such as memory usage and training/inference speed. This is crucial since the improvements in L2RE are marginal in some cases, and if the proposed method incurs heavy computational cost, then its usefulness would be discounted. Theoretical Claims: Yes, I went through all theorems in Section 3.3 except for Proposition 3.3, which comes from another paper. They all seem to be coherent. Experimental Designs Or Analyses: - I find that there is too little information provided on the experimental design. What were the parameters of the Gaussian distributions used for generating the training and test sets? What were the hyperparameters used for implementing the baseline method CSO? The authors should, at the very least, report the penalty coefficients used in CSO, which could greatly impact the performance. - As mentioned before, the authors did not report any metrics on computational efficiency, like memory usage and training/inference speed. Supplementary Material: There are no supplementary materials. Relation To Broader Scientific Literature: The paper references neural PDE-operator methods (DeepONet, PINOs, etc.) and high-dimensional PDE solvers (Han et al., Yu et al., among others). They specifically build upon the recent line of "parametric PDE operator" approaches (like the CSO or Neural Control of PDE solutions). For constrained PDE training, they mention relevant works on soft/hard constraints in PDE/inverse design but highlight how those typically remain in lower dimensions. Essential References Not Discussed: None that I can think of.
Other Strengths And Weaknesses: Strengths: - The QP formulation + APDHG unrolling is a more principled way to enforce constraints than the naive penalty-based methods used in CSO, and the experiments show that the proposed method does outperform the baseline method in terms of accuracy. Weaknesses / Questions: - As mentioned, there is too little information provided on the experimental design, which greatly reduces the validity of the authors' claim. - As mentioned, the authors do not provide explicit complexity or memory scaling analysis. - The authors mention that a few iterations (K=3) suffice, but it remains unclear how robust this is across PDE families with more complicated constraints or stiffer PDE dynamics. - The method only works for linear constraints. It is unclear how to extend the QP formulation to nonlinear constraints. - Even for linear constraints, the proposed method uses an approximation that truncates the Taylor expansion of $u_{\theta_{t}}$ (Eqn. 4), which modifies the original constraint. There was no discussion anywhere in the paper of how valid this approximation is. Other Comments Or Suggestions: 1. The paper STDE (https://openreview.net/forum?id=J2wI2rCG2u) from NeurIPS 2024 seems to be the current state-of-the-art method for solving high-dimensional PDEs with PINNs, which you could include in Section 2.2 for a complete reference list. Questions For Authors: 1. Nonlinear Constraints: How easily can HEAP be extended to handle more general, non-quadratic, or nonlinear constraints, which might no longer yield a strict QP form in parameter space? 2. Scalability for d>20: Have you tested or estimated memory/time usage for truly large dimensions, e.g., d=50+ or 100, to see if the cost of forming Q, A becomes limiting? 3. Hyperparameter Ablation: How does the choice of iteration count K or extended dimension in the hypernetwork impact final PDE accuracy or constraint satisfaction? 4.
Adaptive Step Sizing: The approach includes some learnable step sizes ($\tau$, $\sigma$, etc.). Do you observe that they converge to stable values, or do they exhibit large variance across training? 5. What is the rationale for using ResNet? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the insightful questions and suggestions. The additional experiment results are reported in the rebuttal supplementary material (RSM), available at <https://anonymous.4open.science/r/HEAP/RSM.pdf>. Here we provide brief responses to your questions point by point: > Q1: Computational efficiency: memory usage and training/inference speed for d<=20, and scalability to higher dimensions. A1: We provide results for d=5, 10, 15, 20, 50, 100 in Tab. 4 of the RSM. HEAP costs 1.6x the GPU memory and about 2x the training/inference time of the baseline CSO. Time and memory usage increase nearly linearly with the dimension; thus HEAP is computationally scalable to higher dimensions. --- > Q2: Experimental design: parameters of the Gaussian distribution, hyperparameters for CSO, penalty coefficients in CSO A2: The parameters of the Gaussian distribution are mean=0, std=$\sqrt{0.5}$ for training and mean=0, std=1 for testing. The penalized terms are the PDE residual loss (with weight $w_1$), the constraint violation loss ($w_2$), and the numerical constraint loss ($w_3$). To balance their magnitudes during training, the weights are set as $w_1 = 10^{-2}$, $w_2 = 1.0$, and $w_3 = 10$. The remaining hyperparameters are displayed in Tab. 3 of RSM. --- > Q3: Hyperparameter Ablation: iteration count K, extended dimension in the hypernetwork. A3: As shown in Tab. 5 of RSM, both the iteration count K = 1, 2, 3, 4 and the extended dimension = 5, 10, 20 of the hypernetwork have a significant impact on the final PDE accuracy and constraint satisfaction. However, K=3 is a relatively robust choice across different PDE families. --- > Q4: Handling more general, non-quadratic, or nonlinear constraints A4: The effectiveness of HEAP largely depends on its underlying APDHG algorithm and the structure of the constraints.
For convex but non-linear constraints, modifications to the constraint projections may still preserve the convergence guarantees, so HEAP can be easily extended. For non-convex constraints, however, theoretical convergence becomes challenging, though heuristic adaptations like penalty methods or relaxed Lagrangian formulations might work in practice. --- > Q5: Convergence of learnable step sizes (tau, sigma, etc.) A5: As shown in Fig. 3 of RSM, the learnable step sizes converge to stable values during training after 60 batches. --- > Q6: Rationale for using ResNet A6: We follow the backbone choice of the previous work, CSO, where ResNet is chosen for its simplicity and effectiveness in training deep networks. The skip connections in ResNet help to alleviate the vanishing gradient problem. Other architectures like Transformers or LSTMs are also possible for HEAP, but we leave them for future work. --- > Q7: Add related work STDE. A7: STDE is a state-of-the-art method for efficiently estimating extra-high-dimensional gradients, which can be integrated into HEAP as a gradient estimator. We will add a discussion and a citation of STDE in the final version. --- > Q8: Validity of the approximation by truncating the Taylor expansion of the constraints (Eqn. 4). A8: The original constraint is on the function (infinite-dimensional), which must be approximated by some finite-dimensional numerical scheme for computation. The truncated Taylor expansion is actually an implicit Euler method, in the sense that the constraint is enforced at the next time step. The implicit Euler method is a widely used stable scheme for PDEs, and the error can be controlled by the square of the step size.
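As a reading aid for A8, here is a hedged sketch of how truncating a Taylor expansion turns a nonlinear constraint into one that is linear in the QP variable; the function names and the example constraint are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Hedged sketch of the Taylor truncation described in A8 (illustrative names,
# not the paper's implementation): a constraint g(theta_next) >= 0 on the
# next-step parameters theta_next = theta + dt * theta_dot is replaced by its
# first-order expansion  g(theta) + dt * grad_g(theta) @ theta_dot >= 0,
# which is linear in the QP variable theta_dot. As A8 notes, this enforces the
# constraint at the next time step (implicit-Euler style) with O(dt^2) error.
def linearize_constraint(g, grad_g, theta, dt):
    """Return (a, c) so the linearized constraint reads a @ theta_dot >= c."""
    a = dt * grad_g(theta)
    c = -g(theta)
    return a, c

# Toy constraint: keep the parameters inside the unit ball.
g = lambda th: 1.0 - th @ th
grad_g = lambda th: -2.0 * th
a, c = linearize_constraint(g, grad_g, np.array([0.5, 0.5]), dt=0.01)
# Any theta_dot with a @ theta_dot >= c keeps the linearized
# constraint satisfied at the next step.
```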
Summary: The paper introduces a novel method called HEAP, designed to solve high-dimensional PDEs that include additional constraints. The PDE solution is approximated by the evolution of the neural network parameters, which is formulated as a quadratic programming problem. To solve the QP efficiently, the method unrolls a fixed number of iterations of the APDHG algorithm. This makes the iterative solver a differentiable module that can be trained end-to-end. A hypernetwork is incorporated to estimate initial values, step sizes, and latent weights needed for the APDHG iterations. The paper presents theoretical results demonstrating that HEAP can replicate the iterative sequence of the APDHG algorithm and achieves linear convergence under suitable conditions. Experiments on several PDEs (including constrained and unconstrained heat equations, Burgers equation, and reaction-diffusion equations) demonstrate that HEAP achieves lower PDE residuals and better satisfaction of constraints compared to CSO. Claims And Evidence: 1. Theoretical proofs are provided for the claims that HEAP aligns with the APDHG algorithm and for the approximation capacity of HEAP. I have not fully checked the proofs. 2. Empirical results are provided to demonstrate that HEAP outperforms the baseline CSO in terms of lower PDE residuals and improved constraint satisfaction. The experimental evidence is clear, showing nearly consistent improvements over the baseline. Methods And Evaluation Criteria: The proposed methods and the evaluation criteria are suited to the problem of solving high-dimensional constrained PDEs. Theoretical Claims: I have not fully checked the proofs. Experimental Designs Or Analyses: The experimental design is sound for the problem at hand. However, a more detailed explanation of these problems and possible visualizations could help readers understand the importance and difficulty of these problems. Supplementary Material: No supplementary material is provided.
Relation To Broader Scientific Literature: The paper is related to AI for PDE research. No apparent relevance to broader scientific literature. Essential References Not Discussed: I can't think of apparent references that are missing. Other Strengths And Weaknesses: The method’s motivation of reformulating the parameter evolution as a QP problem and unrolling the APDHG algorithm is interesting, but the reasoning behind these choices is not strongly conveyed. A clearer explanation of how these design choices overcome limitations in existing approaches would help readers, especially those from the general machine learning community. The presentation is technical and dense, which makes it difficult for readers who are not already familiar with these methods to understand. The paper does not fully utilize its available space to offer intuitive explanations or visual aids that could help clarify the concepts. Other Comments Or Suggestions: I have no additional comments. Questions For Authors: I don't have any further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your suggestions. The additional experiment results and visualizations are reported in the rebuttal supplementary material (RSM), available at <https://anonymous.4open.science/r/HEAP/RSM.pdf>, and will be added to the draft once allowed. Here we provide responses to your questions point by point: > Q1: A more detailed explanation of the experiment problems and possible visualization. A1: Here, we briefly explain two of the existing experiment problems and a newly added problem. 1) Heat equation: The heat equation serves as a fundamental prototype PDE in physics and mathematics. As a paradigmatic parabolic PDE, it governs ubiquitous physical phenomena ranging from thermal diffusion in materials to probability distribution evolution in stochastic processes. However, solving high-dim instances presents formidable challenges: both traditional mesh-based discretization techniques and neural network methods suffer from the "curse of dimensionality", with storage and computational costs growing exponentially as the dimension increases. 2) Heat equation with constraints: The incorporation of monotonically decreasing temperature fields introduces a physically grounded regularization to the classical heat equation, elevating both its modeling fidelity and computational complexity. This constraint encodes the thermodynamic irreversibility of cooling processes in materials with latent heat barriers. Traditional numerical time-stepping schemes may violate the monotonicity condition unless rigorously coupled with projection operators or barrier methods. 3) Black-Scholes (newly added, with visualization): The Black-Scholes equation (Nobel Prize 1997) is the cornerstone of modern quantitative finance, providing a rigorous framework for pricing financial derivatives under idealized market conditions. In practice, the no-arbitrage principle ensures the absence of trivial risk-free profit opportunities in the financial market.
The no-arbitrage constraint arises naturally when modeling structured products with embedded downside protections or regulatory circuit breakers. The visualizations are Fig. 1 and 2 of RSM. --- > Q2: A clearer explanation of how the QP formulation and the HEAP algorithm overcome limitations in existing approaches. A2: 1) QP formulation: the existing approaches to high-dim PDEs formulate the parameter evolution either as a least-squares problem or as a black-box optimization, without explicitly considering the potential constraints. The QP formulation allows the incorporation of linear constraints into the parameter evolution, extending the applicability of NN-based solvers to a broader range of high-dim PDEs. 2) HEAP algorithm: the existing methods for solving large-scale QP problems are either iterative solvers like APDHG or vanilla neural operators. The former requires a large number (100+) of iterations to converge, while the latter suffers from generalization issues. HEAP combines the best of both: it unrolls the APDHG algorithm into a neural network, which achieves better accuracy than a vanilla neural operator thanks to algorithmic priors, and solves the problem within 3 iterations, far fewer than APDHG requires, thanks to knowledge learned from data. --- > Q3: Intuitive explanations for concepts. A3: Intuitions of key components: 1) Evolving operator: the solution of a high-dim PDE is represented by a surrogate subnetwork, whose parameters evolve over time to approximate the PDE. When constraints are added, the parameter evolution is formulated as a QP problem, where the objective is to minimize the discrepancy between the left- and right-hand sides of the PDE, and the linear constraints represent the given constraints. 2) APDHG algorithm: the APDHG algorithm is a primal-dual iteration algorithm for QP, where the primal variable x is the solution variable and the dual variable y is the Lagrange multiplier, i.e., the penalty of the constraints.
The primal step updates x by a gradient descent on the objective function and penalty term, while the dual step updates y by a gradient ascent on the penalty term. Both updates are projected onto the feasible set of the constraints. 3) HEAP network: HEAP unrolls a fixed number of iterations of the APDHG algorithm into a neural network, where the primal and dual variables are extended from a vector to a matrix, i.e., parallelized over a new dimension. The extension is parameterized by the output of the hypernetwork, which takes the information of the problem state as input. The HEAP network can be trained end-to-end by backpropagation. 4) Theorems: Theorem 3.1 shows that the HEAP network is truly an extended version of the APDHG algorithm, since the HEAP network can be reduced to the APDHG algorithm when the parameters are set to ones. Theorem 3.2 characterizes the convergence of HEAP in solving QP problems in terms of the required number of parameters. The proof inherits naturally from the convergence rate of the APDHG algorithm. --- Rebuttal Comment 1.1: Comment: I just realized that the authors cannot view my official comments. I am repeating my comment here in this rebuttal comment. I thank the authors for their clarifications and have increased my score accordingly. I hope these detailed explanations can be incorporated into the final version of the paper. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your constructive feedback and revised evaluation. We will ensure all detailed explanations are incorporated into the final manuscript to enhance its clarity and rigor.
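For readers less familiar with the primal-dual iteration intuited in A3, a minimal fixed-step PDHG (Chambolle-Pock) loop for a toy QP might look as follows. This is an illustrative sketch, not the authors' HEAP/APDHG implementation: it has no adaptive step sizes and no learned extension.

```python
import numpy as np

# Illustrative fixed-step PDHG (Chambolle-Pock) for a toy QP
#   min_x 0.5 x^T Q x + q^T x   s.t.   A x <= b.
def pdhg_qp(Q, q, A, b, tau=0.5, sigma=0.5, iters=500):
    x = np.zeros(Q.shape[0])
    y = np.zeros(A.shape[0])
    eye = np.eye(Q.shape[0])
    for _ in range(iters):
        x_old = x
        # Primal step: proximal (implicit) gradient step on the objective,
        # pulled towards feasibility by the multipliers y.
        x = np.linalg.solve(eye + tau * Q, x_old - tau * (A.T @ y + q))
        x_bar = 2 * x - x_old  # extrapolation
        # Dual step: gradient ascent on the constraint violation,
        # projected onto the nonnegative multipliers.
        y = np.maximum(y + sigma * (A @ x_bar - b), 0.0)
    return x

# Toy problem: min 0.5||x||^2 subject to x >= 1; the solution is x = [1, 1].
Q, q = np.eye(2), np.zeros(2)
A, b = -np.eye(2), -np.ones(2)
x = pdhg_qp(Q, q, A, b)
```

The step sizes satisfy the standard condition tau * sigma * ||A||^2 <= 1; the rebuttal's point is that HEAP learns such quantities and reaches comparable accuracy in far fewer iterations.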
Contour Integration Underlies Human-Like Vision
Accept (poster)
Summary: The authors systematically dissected where and why models struggle with contour integration by designing an experiment that tested object recognition under various levels of object fragmentation. It was found that humans exhibited an integration bias – a preference towards recognizing objects made up of directional fragments over directionless fragments. It was also found that not only did models sharing this property perform better, but this bias also increased with model training dataset size, and training models to exhibit contour integration leads to a high shape bias. Claims And Evidence: The authors did not explain why the deep learning models performed worse than humans. In vision science, contour integration is normally associated with long-range interactions, which are exploited by humans. However, the authors did not mention this in the paper. The authors may need to investigate the size of the actual receptive field utilized by those CNN models. It is known that CNNs cannot capture long-range interactions/dependencies well, while Transformers were developed to address this issue. In this context, it would be interesting to compare CNNs and Transformer networks with regard to the spatial extent that they utilize. Methods And Evaluation Criteria: I would like to see the size of the actual receptive field utilized by those CNN models in the experimental results. It would be really interesting to explore the relationship between the performance of those models and the size of the receptive field. Theoretical Claims: The role of long-range interactions used by the HVS should be discussed because they are important to contour integration. The difference in human perception between the two sets of stimuli should be analyzed. Note that perception of an outline consisting of directionless points should have a strong relationship with proximity. However, the directional version is different.
Experimental Designs Or Analyses: Again, the authors are encouraged to examine the maximal spatial extent that the deep models exploited and the relationship between this value and the performance of the model on contour integration. Supplementary Material: I did not run the code provided in the supplementary material. Relation To Broader Scientific Literature: The human subject study was similar to that conducted in (Panis et al. 2008). However, the authors did not mention this work at all. Also, they missed many important references, such as (Field et al. 1993), (Dong et al. 2021). Essential References Not Discussed: Field et al. 1993, Contour integration by the human visual system: evidence for a local 'association field'. Panis et al. 2008, Identification of everyday objects on the basis of fragmented outline versions. Dong et al. 2021, Perceptual Texture Similarity Estimation: An Evaluation of Computational Features. Other Strengths And Weaknesses: Strengths: -A large number of models were examined. -The experimental results were analyzed using statistical methods. -Two sets of stimuli were used. Weaknesses: -The difference in human perception between the two sets of stimuli should be analyzed. In my opinion, perception of directionless outlines should have a strong relationship with proximity. However, this is not the case for the directional version. -Contour integration is normally associated with long-range interactions, which can be exploited by humans. But the authors did not mention this in the paper. The authors are encouraged to investigate the size of the actual receptive field utilized by CNN models. It is known that CNNs cannot capture long-range interactions/dependencies well, while Transformers were developed to address this issue. Therefore, it would be interesting to compare CNNs and Transformer networks in terms of the spatial extent that they can exploit.
Other Comments Or Suggestions: -The difference in human perception between two sets of stimuli should be analyzed. -The authors are encouraged to investigate the size of the actual receptive field utilized by CNN models. -CNN and Transformer models should be compared in terms of the spatial extent that they can exploit. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review. We have addressed your comments individually below and point to an external link for additional figures: https://drive.google.com/drive/folders/1M_nUONfTXLmUZCHHL0PfIlIaEhpu0vLo?usp=drive_link > Long-range interactions in humans being normally associated with contour integration, and how that relates to our task. In our paper, we have intentionally omitted discussion around long-range interactions as our task was not designed to investigate these effects directly: - Our stimuli spanned only an 8x8 degree window of visual angle. - Our stimulus presentation time was only 200ms, which was followed by a 1/f noise mask. This ensured that no long-lasting integration or tracing is possible. The goal of these steps was to ensure that the types of long-range connectivity you refer to are controlled in this setting - see also below for our analysis of the human data. > The difference in human perception between two sets of stimuli should be analyzed. In my opinion, perception of directionless outlines should have a strong relationship with proximity. However, this is not the case for the directional version. Thank you for this suggestion, which we have now done. In summary, we found little difference in the scaling of the two conditions (phosphenes vs segments) in humans. The difference in performance in these two cases is mostly a shift of intercept instead of a change in slope across the number of elements (figure __*human_accuracy_split.png*__), though note that while the effect of the difference in slopes is small, it is still technically significant (_t=2.87, p<0.01_). This means that the directionality of the segments only slightly improves performance over the directionless phosphenes as element density increases. > Comparison between CNN and transformer architectures, since CNNs do not have long-range interactions while transformers do Thank you for this suggestion.
We indeed included transformers in our original work (the single largest architecture family in our paper is the vision transformer (ViT), of which we had 257 decoder-fit variants; Table 1, row 1). We have now conducted several new analyses comparing transformers and CNNs. To make results comparable, we restricted the analysis to only models trained on ImageNet-1k, although we also provide figures for all models. - We found that the overall performance is no different (CNNs: _32.49%_ on average, ViTs: _31.24%_ on average, t-test of means _p=0.1108_: not significant). This is also true even when not controlling for dataset (CNNs: _29.08%_ on average, ViTs: _29.98%_ on average, t-test of means _p=0.1889_: not significant); figure __*vit_vs_cnn_imagenet.png*__ - Integration biases between transformers and CNNs are comparable: ViT (_7.8%_) and CNN (_10.1%_) are not statistically significantly different (_t=2.45, p=0.016_) at the 0.01 criterion; figure __*vit_vs_cnn_imagenet.png*__ - The way model performance scales across the number of elements is the same across models and across conditions: only one condition of all is different between CNNs and ViTs (the 16-phosphenes condition), while the differences between the rest are statistically non-significant. __*vit_vs_cnn_imagenet_scaling.png*__ Taken together, these analyses show that long-range interactions do not play a crucial role in our task in either humans or models. > Receptive field sizes and how they relate to task performance should be studied We thank the reviewer for this nice suggestion. In addition to the experiments where we compared ViTs to CNNs (effectively two extremes of this spectrum), we have now added an explicit test for receptive field size and its relationship to the primate visual system.
We evaluated the same subset of models as in Fig 6d and 8 on two Brain-Score benchmarks that test for the similarity of the effective receptive field size of a model to a primate counterpart. These are the Grating summation field (GSF) and surround diameter (Marques 2020, Cavanaugh 2002) benchmarks measured in macaque V1. In short, we do not find a statistically significant relationship between either measure of receptive field size similarity and fragmented object recognition accuracy (GSF _r=0.2655, p=0.0653_; surround diameter _r=0.276, p=0.055_); figure __*receptive_field.png*__ We believe these additional results further bolster our conclusion that architecture plays a minimal role in contour integration, and that the receptive field size of models is not a crucial component in this study. We also thank you for pointing us to missing references, which we have included. Thank you again for your review. We believe we have addressed all of your concerns and would be grateful if you considered raising your score. --- Rebuttal Comment 1.1: Comment: Many thanks to the authors for addressing my comment. However, I still have some concerns. (1) Within the Abstract, it was stated that "Importantly, humans exhibit an integration bias – a preference towards recognizing objects made up of directional fragments over directionless fragments". This conflicts with the above response,e.g., "In summary, we found little difference in the scaling of the two conditions (phosphenes vs segments) in humans". (2) The receptive field sizes of more models should be investigated. Maybe a CC can be calculated between these sizes and the performances of the models. (3) Since the authors believed that long-range interactions did not take effect in their experiments and there was nothing to do with the size of receptive field, how to explain the phenomena found in the experiments? --- Reply to Comment 1.1.1: Comment: Many thanks for your reply and engagement which we appreciate. 
We have further comments to make on the concerns: **(1)** We want to clarify that when we talk about integration bias, we talk about the difference in performance between the conditions (Figure 6a). In our previous response (which admittedly was left slightly short due to the character limit), we mentioned that the difference in segment and phosphene performance in humans is primarily a **fixed offset** that only changes slightly with the number of elements present in the image (i.e., the _scaling_ of performance in the number of elements is approximately constant). For a figure, see here: https://drive.google.com/file/d/1fNzdf8by5Wlmdve19Drt1_ADS9_YXMBS/view?usp=drive_link What this means is that there is a performance difference across all conditions from 12% elements to 100% elements, and this difference in humans is approximately the same on average in all conditions. Thus, our previous comment is not in conflict with our statements about integration bias, but rather shows that it is stable across conditions. **(2)** Since in our previous comment we reported results for exactly this concern with 50 total models, how many models would be satisfactory? This number was chosen similarly to the other reported results in the paper for robustness (Fig 6d) and object recognition (Fig 8), and we would not want to p-hack by adding models until a number becomes significant, but rather commit to a fixed number of models. This number is already rather large given other work in the area (e.g., Dapello, Marques et al. 2020: 30 models; Linsley, et al. 2018: 18 models; Fel, Felipe, Linsley et al. 2023: 84 models, Biscione & Bowers 2023: 16 models) and is consistent with our other analyses. 
Furthermore, if we were to take the current effects of receptive field size similarity that we reported in our previous rebuttal as statistically significant (which they are not): GSF _r=0.2655, p=0.0653_; surround diameter _r=0.276, p=0.055_, the effect sizes would be rather small, with $R^2$ values of _0.070_ and _0.076_ respectively. This is in contrast to training dataset size, which we in the paper reported to have _r=0.814_ with an $R^2$ of _0.663_. Thus, even if one were to test more models and assume that the effect remained the same until it became significant, it would still fall massively short of the impact of training dataset size, and short of even the amount of compute a model uses per sample (FLOPs). **(3)** We also believe this is a very interesting question, and we have focused on the algorithmic (rather than the mechanistic) explanation in this work, since an investigation of this type has until now been missing. Our current stance on the algorithmic level is that contour integration is a bias that is helpful for solving general tasks - that as the task diversity increases, the model implements contour integration, and that this implementation improves robustness in general. This of course does not answer _how_ this contour integration is implemented on a mechanistic level, and perhaps it varies wildly based on model, too. Based on the new results regarding ViT vs CNN, as well as the receptive field size experiments, we can say that it is unlikely that receptive field size plays a crucial role: even if the effects we found were significant, they would be small compared to the total effect of contour integration we report. That being said, we think a full investigation of this would be interesting, but we also firmly believe it would warrant its own work, given the extent of investigation required and the many different model types and methods involved. Thank you again for your comment. 
We hope it resolves your concerns and hope that if so, you consider raising your score. **References** Linsley, D., Kim, J., Veerabadran, V., Windolf, C., & Serre, T. (2018). Learning long-range spatial dependencies with horizontal gated recurrent units. Advances in Neural Information Processing Systems (Vol. 31). Fel T, Felipe I, Linsley D, Serre T. Harmonizing the object recognition strategies of deep neural networks with humans. Adv Neural Inf Process Syst. 2022 Dec;35:9432-9446. Kubilius, J., Schrimpf, M., Kar, K., Rajalingham, R., Hong, H., Majaj, N., Issa, E., Bashivan, P., Prescott-Roy, J., Schmidt, K., Nayebi, A., Bear, D., Yamins, D. L., & DiCarlo, J. J. (2019). Brain-like object recognition with high-performing shallow recurrent ANNs., Advances in Neural Information Processing Systems (Vol. 32). Dapello, J., Marques, T., Schrimpf, M., Geiger, F., Cox, D., & DiCarlo, J. J. (2020). Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. Advances in Neural Information Processing Systems (Vol. 33, pp. 13073–13087). Biscione, V., Bowers, J.S. Mixed Evidence for Gestalt Grouping in Deep Neural Networks. Comput Brain Behav 6, 438–456 (2023). https://doi.org/10.1007/s42113-023-00169-2
Summary: The work investigates the difference between the human ability to generalise object recognition and that of DNNs. The study builds on experiments that test the ability to recognise objects even in the presence of fragmentation, particularly via contour integration, and the ability of DNNs to perform the same task. The experiments involved 50 individuals and 1038 models from 13 architecture families and 18 datasets; the largest models were trained on more than 5B images. The tests showed that, in general, people perform better than DNNs and that the performance trend is related to the amount of data, with a correlation of 0.814. On the other hand, architectures are less important than data in contour integration. The set of experiments shows that contour integration is learned automatically from the distribution of the data and does not depend directly on horizontal connectivity in the primary visual cortex, as previously assumed. Claims And Evidence: The work follows an interesting track leading to conclusions on contour integration through meaningful experiments on humans in a controlled laboratory setting and on models. Each experiment step is documented in detail, and metrics and statistics explain the analysis. Methods And Evaluation Criteria: Benchmarks are made on a large amount of data collected, both on the side of humans and models, significantly validating the claims. Theoretical Claims: Perhaps the only theoretical claim concerns the demonstration that contour integration is not necessarily a product of horizontal connectivity in the primary visual cortex (referring to the literature) and that the mechanism is learned from the amount of data inducing learning. I am not sure that the tests effectively support these conclusions. Experimental Designs Or Analyses: In my opinion, the experimental design is valid and also quite interesting. 
Supplementary Material: I have read the supplementary and also looked in the added code. Relation To Broader Scientific Literature: This is probably the first systematic model on the subject. It will certainly induce discussion and further research; there is little or no prior literature. Essential References Not Discussed: All required references for the main task seem to be discussed. Other Strengths And Weaknesses: The paper is well-written and very interesting. However, I found that attempting to train models directly to group elements is a bit naive. There are attempts like this in the literature, and obtaining binary contours using Gaussian filters and Otsu threshold is not quite skillful. Furthermore, besides *IN only*, it is not clear how contours were extracted in the other combinations, as there is no reference to the literature. The problem of obtaining contours, which here is considered resolved and easy, is, in fact, still an open problem even in the case of supervised models. Other Comments Or Suggestions: I'm unsure about the attempt to train models directly to group elements. Several attempts like this have been made in the literature. Moreover, obtaining binary contours using Gaussian filters and Otsu threshold is quite naive. Also, apart from "IN only", for all the other cases it is unclear how contours were extracted, as no literature is mentioned. This ease in extracting contours is a bit superficial, since no model performs well on this task yet. This blurs the conclusions in lines 368-378. Questions For Authors: Please explain clearly how you have extracted the contours on all objects in ImageNet-1K Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review. We are happy to hear that you found our experiments meaningful, and the paper well-written and very interesting. We reply point-by-point to your comments below and point to an external link for additional figures: https://drive.google.com/drive/folders/1M_nUONfTXLmUZCHHL0PfIlIaEhpu0vLo?usp=drive_link > Contour extraction is not explained in the paper We apologize for not including a detailed description of this step in our paper. We use a phosphene rendering algorithm (Rotermund et al., 2023, as cited in the paper) for extracting contours and rendering stimuli with phosphenes or segments. The contour extraction is simple: images are transformed from RGB to grayscale, and then convolved with a Gabor filter bank of 8 different orientations. Specifically, our contour image is defined by: __see *contour_equation.png*__ For the placement of phosphenes and segments, we place elements on the contours of the object preferentially depending on the strength of the contour and its directionality. We use this algorithm both for our experimental stimuli, as well as ImageNet-1k images. For ImageNet-1k images, we also perform a background removal using `rembg` (Gatis) before applying this algorithm. We will include this description in the updated manuscript. > The problem of obtaining contours, which here is considered resolved and easy is, in fact, still an open problem even in the case of supervised models How exactly we obtain contours from RGB images is not central to our argument in any way. This is for two reasons: 1) all models and humans see the same images, regardless of how the contour was extracted. 2) The contour condition is merely a control condition meant to show that the large drop in model performance is not merely due to superficial changes in data distribution (i.e., most of the image being black). 
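For concreteness, the pipeline described in this rebuttal (grayscale conversion, an 8-orientation Gabor filter bank, followed by the Otsu binarization the reviewer mentions) can be sketched roughly as below. All filter parameters here (kernel size, sigma, wavelength) are illustrative placeholders, not the values used by the Rotermund et al. (2023) rendering algorithm:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(theta, size=15, sigma=3.0, wavelength=6.0):
    """Real (even-symmetric) part of a Gabor filter at orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along the carrier
    kern = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return kern - kern.mean()  # zero-mean: flat image regions give no response

def _filter(img, kern):
    """2-D correlation with edge padding (equals convolution: kernel is even)."""
    half = kern.shape[0] // 2
    padded = np.pad(img, half, mode="edge")
    windows = sliding_window_view(padded, kern.shape)
    return np.einsum("ijkl,kl->ij", windows, kern)

def contour_map(gray, n_orientations=8):
    """Max absolute response over a bank of oriented Gabor filters."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return np.max([np.abs(_filter(gray, gabor_kernel(t))) for t in thetas], axis=0)

def otsu_threshold(values, n_bins=256):
    """Threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=n_bins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                 # mass at or below each bin
    w1 = w0[-1] - w0                     # mass above each bin
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1e-12)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1e-12)
    return centers[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]
```

A binary contour image would then be `contour_map(gray) > otsu_threshold(contour_map(gray).ravel())`; preferential placement of phosphenes/segments along strong, oriented contours is a separate step handled by the rendering algorithm itself.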
The contour extraction merely serves as an image preprocessing step that we can use to render images which are shown to humans and models to compare their performance and behavioral characteristics. > The paper is well-written and very interesting. However, I found that attempting to train models directly to group elements is a bit naive. The goal of this experiment was to causally show the primacy of training data in achieving a human-like contour bias, which we successfully demonstrate with the improved performance of the resulting models. Despite the lack of horizontal connectivity or other architectural biases, we were able to train a model simply using segments and phosphenes to exhibit a human-like integration bias. This also resulted in high shape bias, exceeding previous shape biases from other similar direct training approaches (Geirhos et al., 2018). While the approach itself is simple, it serves to strengthen our claim about the importance of dataset. > There are attempts like this in the literature, and obtaining binary contours using Gaussian filters and Otsu threshold is not quite skillful. We are not exactly sure what is meant by it not being “quite skillful”. The motivation here was to simply turn our non-binary contours extracted using Gaussian filters into binary contours, and for this purpose the methodology works very well. > Perhaps the only theoretical claim concerns the demonstration that contour integration is not necessarily a product of horizontal connectivity in the primary visual cortex (referring to the literature) and that the mechanism is learned from the amount of data inducing learning. I am not sure that the tests effectively support these conclusions. This is a very interesting thought – which experiments do you have in mind that would better support these conclusions? We believe our evidence is strong. 
We show that: - human-like contour integration emerges in models trained on large datasets without any explicit architectural mechanism being present (Figs. 6b, 7). This provides a proof of existence that such a mechanism is not strictly necessary. - We causally show that it is possible to directly train for human-like contour integration without the use of human behavioral response data (Figure 7). This is causal evidence in a controlled setting that such a mechanism is not necessary. Taken together, we believe these facts strongly support the conclusion that horizontal connectivity is not necessary for contour integration. Of course, it is still possible that the human visual system implements contour integration using a mechanism of horizontal connectivity. Our results challenge the prevailing view that this is the only way to implement contour integration, or that the existence of this mechanism is the key reason why contour integration exists - clearly, it emerges in other settings too. We thank you again for your review, and believe that we have addressed your comments. In light of this, we hope that you consider raising your score.
Summary: This paper conducts a nuanced analysis of the extent to which vision models are human-like by conducting an experiment involving categorization of degraded images, where those images are reduced to lines or to fragments that are either points or line segments. Humans are able to recognize the images with fragmented contours while these pose a challenge for many vision models. Very large models nonetheless approach human performance, although they do not show a bias for directional fragments that is shown in humans. ## update after rebuttal Thank you for clarifying. These points do not change my opinion of the paper and I will keep my score. Claims And Evidence: The core claims are justified through careful experiments and statistical analyses. Inferential test statistics and error bars are included. Methods And Evaluation Criteria: In general the methods made sense for this problem and the experiment design was creative and sensible. Theoretical Claims: There were no theoretical claims to evaluate as the primary claims are empirical. Experimental Designs Or Analyses: The basic experimental designs are relatively simple, involving construction of a set of stimuli and evaluating model performance across those stimuli. The set of models used is extensive. Supplementary Material: Yes I consulted the model details and the additional details on the experiments and training datasets Relation To Broader Scientific Literature: There is a fairly extensive literature covering comparisons of models to human performance in image classification tasks. The paper does a good job of summarizing that literature. The key contributions here are use of a novel paradigm to provide a more stringent test of these models and the demonstration that while large models actually approach human performance even in this new task there is still an interesting bias in human vision that differentiates it from the models. 
Essential References Not Discussed: I did not see any major omissions. Other Strengths And Weaknesses: The primary strengths of this paper are its novel experimental approach and interesting findings about human vision. The main weakness is that it is not clear whether there are actionable insights for improving vision models -- the main focus in the paper in addressing this point is suggesting that there may be omissions from the training data, but this hypothesis is not explored in detail. Other Comments Or Suggestions: In general the paper was clear and I appreciated the use of color in the text to differentiate models. Questions For Authors: No questions other than the issues identified above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. We are glad you find our experiments justified and carefully conducted, and that you found the findings interesting. We respond to each of your comments below point-by-point. > It is not clear whether there are actionable insights for improving models based on our results We believe there are quite a number of actionable insights for improving models based on our results: **First**, many attempts in the literature at reproducing alignment to low-level human visual cortex have focused on architectural changes in lieu of other approaches (such as data-based approaches), see e.g. Kubilius, Schrimpf et al. (2019). We show that these approaches do not pay off when trained and evaluated at scale. This gives a direct actionable insight: for modelers seeking to improve models of basic human vision, a more fruitful approach is to focus on making the training diet of the models more human-like (especially in scale) than to improve the model architecture in a specific way. **Second**, we demonstrate success in training models for contour integration directly. While this in itself is the less surprising insight, it certainly is valuable for those who simply want models that exhibit human-like contour integration for further experiments. Interestingly, we find that training for contour integration in models also leads to shape bias. **Third**, we find that models with a more human-like integration bias exhibit improved accuracy on a downstream object classification task (Figure 6b, Figure 8) – indicating that selecting models for their ability to integrate contextual information is a useful validation signal when building artificial neural networks. Taken together, our work substantially advances our understanding of contour integration in vision science, and provides a clear path to models that exhibit human-like contour integration: large training diets, or direct training. 
This finding contrasts with previous work that focuses on the architectural role of horizontal connectivity in human visual cortex and models as the mechanistic source of contour integration (e.g. Linsley et al, 2018). > “Very large models nonetheless approach human performance, although they do not show a bias for directional fragments that is shown in humans” [...] “ [the paper demonstrates] that while large models actually approach human performance even in this new task there is still an interesting bias in human vision that differentiates it from the models” We would like to clarify that the best models do indeed show a human-like bias (Figure 6b), demonstrating that large training diets can yield human-like contour integration behavior in models. Integration bias separates almost all other models from humans, as their integration bias is not human-like. This shows that object recognition itself is possible without contour integration (a surprising finding in itself: e.g. Field 1993 as pointed out by reviewer eQjn; Kovacs & Julesz 1993; Grossberg & Mingolla 1985), but the largest models learn to do contour integration nonetheless. Thanks again for your review. We believe we have addressed your comments and ask that you consider raising your score. **References from all rebuttals** Cavanaugh, J. R., Bair, W., & Movshon, J. A. (2002). Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology, 88(5), 2530–2546. https://doi.org/10.1152/jn.00692.2001 Zamir, A. R., Sax, A., Shen, W. B., Guibas, L. J., Malik, J., & Savarese, S. (2018). Taskonomy: Disentangling task transfer learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Gatis, D. (n.d.). rembg [Computer software]. GitHub. Retrieved March 31, 2025, from https://github.com/danielgatis/rembg Linsley, D., Kim, J., Veerabadran, V., Windolf, C., and Serre, T. 
Learning long-range spatial dependencies with horizontal gated recurrent units. NeurIPS, volume 31. Curran Associates, Inc., 2018. Grossberg, S., Mingolla, E. Neural dynamics of perceptual grouping: Textures, boundaries, and emergent segmentations. Perception & Psychophysics 38, 141–171 (1985). https://doi.org/10.3758/BF03198851 Kubilius, J., Schrimpf, M., Kar, K., Rajalingham, R., Hong, H., Majaj, N., ... & Dicarlo, J. (2019). Brain-like object recognition with high-performing shallow recurrent ANNs. NeurIPS (pp. 12785-12796). Field DJ, Hayes A, Hess RF. Contour integration by the human visual system: evidence for a local "association field". Vision Res. 1993 Jan;33(2):173-93. doi: 10.1016/0042-6989(93)90156-q. PMID: 8447091. Marques, T., Schrimpf, M., & DiCarlo, J. J. (2021). Multi-scale hierarchical neural network models that bridge from single neurons in the primate primary visual cortex to object recognition behavior. bioRxiv. https://doi.org/10.1101/2021.03.01.433495
Summary: The paper presents evidence suggesting that contour integration—a fundamental feature of human vision—remains largely absent in artificial vision models. To demonstrate this, the authors tested human performance on contour integration tasks and evaluated over 1,000 computational models to identify trends in machine vision systems. Notably, they found that models trained on larger datasets exhibited better contour integration capabilities. Intriguingly, certain advanced models like GPT-4 achieved performance levels comparable to humans, underscoring the role of scale in bridging this perceptual gap. Claims And Evidence: I am sympathetic to the argument that models trained on vast datasets may develop shape biases conducive to contour integration. However, given that many state-of-the-art models (e.g., GPT-4o) are trained on proprietary, non-public datasets, there remains ambiguity about whether their training data included images resembling the experimental stimuli. While the paper notes that the exact stimuli used are not publicly disclosed, the possibility of "close encounters" between test stimuli and training data—even unintentional ones—raises concerns about ecological validity. A possible way could be to verify whether performance persists under novel conditions, such as with modified backgrounds or added noise, which would reduce the likelihood of prior exposure influencing model behavior. Methods And Evaluation Criteria: Yes. Theoretical Claims: No proofs were provided in the paper. Experimental Designs Or Analyses: The experiments appear methodologically sound. I specifically reviewed the human data collection protocols and model evaluation framework, which encompass: Zero-shot evaluation via the BrainScore pipeline. Decoder fitting using fragmented ImageNet subsets, with label remapping to align with the 12 stimulus classes. Dataset size analysis to correlate scale with performance. 
Architecture size comparisons: While insightful, my primary concern lies in the potential unfairness of comparing models like RNNs—which lack access to modern large-scale training datasets—against newer architectures (e.g., transformers) trained on vastly larger corpora. This discrepancy in data availability complicates direct performance comparisons, as differences may reflect dataset scale rather than architectural superiority, leading to t-values that may understate a possible architecture impact. Experiments about integration biases and how contour integration leads to robustness. Supplementary Material: I checked the code provided and the implementation for the brainscore and decoding. Relation To Broader Scientific Literature: The paper delivers valuable insights for the community by systematically evaluating how artificial vision models align with human visual processing. It contextualizes its findings against prior work, contrasting architectural approaches (e.g., Linsley et al.’s biologically constrained networks) with data-driven explanations for emergent capabilities. Specifically, the authors highlight how integration bias—a phenomenon extensively studied in models by Geirhos et al.—can arise from large-scale training, thereby enabling certain architectures to solve contour integration tasks without explicit architectural mimicry of human vision. This dual focus on architecture and data provides a nuanced framework for understanding the mechanisms behind model performance, guiding future research toward more human-like machine vision systems. Essential References Not Discussed: I think the paper cites the most important literature. Other Strengths And Weaknesses: The paper is well written and provides good justification for the different experimental choices. It is transparent about its limitations and provides possible ways to mitigate some of the problems that can be raised. 
I think it is a good example of a task that can help understanding the gap between human and artificial vision. Maybe I missed it, but it would perhaps be worthwhile to comment on models that have been trained on other downstream tasks, such as object detection and image segmentation, and whether the data-diet hypothesis applies to them. Other Comments Or Suggestions: No. Questions For Authors: * Although it is mentioned in the limitations section, I think the potential problem of data leakage in the larger datasets can be an important point for the validity of the study. It would be good to run at least one control, perhaps under different background colors or under some noise. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your favorable review. We are happy to read that you found the paper insightful and our experiments sound, and that you see how it helps guide future research. We address your comments point by point below, and point to an external link for additional figures: https://drive.google.com/drive/folders/1M_nUONfTXLmUZCHHL0PfIlIaEhpu0vLo?usp=drive_link > Concerns about training data, which could be remedied by testing models on the task with e.g. a different background that is more unlikely to be in training datasets Thank you for the nice suggestion. We followed your proposal and made a new version of our dataset, this time with a red background instead of the standard black background (image examples in the folder __*red_images/*__). We tested the same subset of pre-trained models as in Figures 6d and 8. We ran a t-test for the difference in the group performance of the fragmented conditions and found that there was no difference in model accuracy under these two conditions (_t=0.323, p=0.749_). This shows that the specific model training distribution is not the reason for model performance here, and that the results are robust against a change of background. See figure __*normal_vs_red.png*__. > Concerns about the analysis regarding how important architecture is and how robust the analysis is We have taken two additional steps to remedy this: We now also run a mixed effects regression, which allows for the architecture type to be treated as a random variable. This analysis also finds that training dataset size is most important (_z=14.66, p<0.001_), while model compute is less important but significant (_z=7.177, p<0.001_). Importantly, the random effect for architecture type under this more controlled analysis is dominated by error rather than systematic variance (variance=0.002, standard error = 0.008), suggesting that the random factor of architecture family plays little systematic role in the mixed effects model. 
We analyzed ViT vs CNN-based architectures in more detail for a fixed training dataset size (ImageNet-1k) -- due to space constraints, we detail this analysis in our response to reviewer __eQjn__. These further analyses provide additional evidence for our claim that architecture is less important than training dataset size. > Different downstream tasks, such as segmentation We thank the reviewer for this idea. We analyzed additional models from the taskonomy (Zamir et al. 2018) model set, which provides a great degree of breadth in terms of tasks, with the downside that we only have one model per task. Nevertheless, we report these additional results here (Figure __*taskonomy_scores.png*__). In summary: - The base object classification and scene classification models reach 31% & 32% accuracy, respectively - Unsupervised segmentation reaches 29%, surface normal estimation 33%, and depth estimation 34% accuracy - Interestingly, the edge-computing model reaches only 21% accuracy In general, it seems that while the task itself appears to play a role, object recognition stands out as a good objective, with more specialized objectives often falling slightly short. Nevertheless, the differences are rather small. > Novelty of the work: “Specifically, the authors highlight how integration bias—a phenomenon extensively studied in models by Geirhos et al.—can arise from large-scale training [...]” We would like to point out that Geirhos et al. (2018) did **not** study integration bias, but rather shape bias, which is a different metric measured using an entirely different dataset. Specifically, Geirhos et al. (2018) studied the human and model responses to images with a conflicting texture and shape cue. This gives a point estimate of the amount of shape bias a model has. Our work does not study shape bias, but rather contour integration. 
Shape bias is a human preference to detect object categories by their shape, rather than their texture, while contour integration is the ability to integrate elements despite their discontinuity in image space. As such, while contour integration allows the recognition of images that would otherwise be unrecognisable, shape bias is a description of choice preferences. Our experimental setup is also different. Instead of several different conditions that are not related to each other on a single continuous scale (like in Geirhos et al.), we varied the image fragmentation. This allowed us to understand where and why neural networks fail (at the fragmentation of the contour), and to pinpoint this specifically to contour integration, a well-established “algorithm” in humans. Taken together, our work is very different from previous work, with a novel experiment, metric, and findings. We thank you again for your review. We believe these comments address your concerns, but please let us know if additional analyses would be helpful. If all concerns are addressed, we ask the reviewer to consider increasing their score.
From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories, and Applications
Accept (poster)
Summary: The authors deal with the task of studying low-rank compressions within LLM approaches, studying the low-rankness of the weight matrices of the LLM. Claims And Evidence: The authors discuss the low-rankness of the matrices within the LLM model. They illustrate this numerically, which is nicely shown in Figure 1. Methods And Evaluation Criteria: Yes this is appropriately chosen. Theoretical Claims: The authors prove something about the properties of the Hessian during training, where the full proof is given in the appendix and a sketch in the main paper. Also, the authors prove some eigenspace alignment. Experimental Designs Or Analyses: The tests seemed appropriate to me. Supplementary Material: I did not check the supplementary material in great detail, but it's nice that the proofs are included. Relation To Broader Scientific Literature: The findings are presented in comparison to other low-rank techniques for LLMs, and this seems, to the best of my knowledge, appropriate. Essential References Not Discussed: I did not miss any key references. Other Strengths And Weaknesses: **Strength** **Weakness** The authors in Section 2.4 discuss sparsity, but it is not clear to me what they mean here. If they refer to a low-rank representation, this is quite different from sparsity. If you mean low-rankness, please be specific; if you mean sparsity, what is inducing this sparsity, i.e., is it in the coefficients or in the matrices? Again, when talking about Q/K projections it is unclear to me whether sparsity is confused with low-rankness. Same comment regarding the MLP Gate Projections. Other Comments Or Suggestions: "Threshold" is misspelled throughout the figures. Questions For Authors: Would it be possible to rescale all subfigures in Figure 1 to have the largest value at 1? This would make the rank decay more comparable. Is equation (1) really meant to be the sum of sums? Is the length of $S_{W_l}$ not the same everywhere? Then the numerator sums up the singular values, right? 
How is the activate SVD defined? This is not explained but heavily used. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank you for taking the time to review our work. Next, we address the weaknesses you pointed out one by one: **1. Discussion of Sparsity in Section 2.4 and typos:** Thank you for identifying this mistake. In Section 2.4 we mean low-rank representation, and we promise to correct this in the camera-ready version of our submission. We will also correct additional typos, including "threshold". **2. Rescaling Figure 1:** We would like to highlight that we intentionally did not rescale all subfigures in Figure 1, in order to effectively illustrate the variation in eigenvalues across different component types (MLP vs. attention) of the model. **3. Clarification regarding Equation 1:** $S_{W_l}$ is the array of singular values of the weight matrix $W$ of layer $l$. $(S_{W_l} < k)$ is a $0/1$ array indicating which singular values are less than $k$ and will therefore be truncated (compressed). $sum(S_{W_l} < k)$, the number of 1's in this array, is the count of truncated singular values of layer $l$. Summing across all layers (i.e., $\sum_l sum(S_{W_l} < k)$) gives the total rank reduction in the entire model. $len(S_{W_l})$ is the number of singular values in the weight matrix of layer $l$, which may vary depending on the model's architectural choices. --- Rebuttal Comment 1.1: Comment: Thank you for the comment. My main concern regarding the scaling was that it seems sensible to scale all singular values by the largest one, as $\frac{\sigma_i}{\sigma_{\max}}$, in order to define a cut-off tolerance across the different matrices that makes the truncation comparable. --- Reply to Comment 1.1.1: Comment: We thank you for further clarifying your concern regarding the scaling; it is indeed a good suggestion. We will surely address this in our revised draft. If you feel that your remaining concerns have been resolved, we would greatly appreciate it if you could consider raising the score. Thank you once again for your time. 
We hope you have a wonderful day! Best wishes, The Authors
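To make the notation discussed in the Equation (1) exchange concrete, here is a minimal numpy sketch of the rank-reduction ratio as clarified in the rebuttal. The matrix shapes are illustrative, and the per-matrix rescaling by the largest singular value follows the reviewer's suggestion in Comment 1.1 rather than the paper's exact implementation:

```python
import numpy as np

def rank_reduction_ratio(layer_weights, k):
    # Equation (1) as clarified above: the fraction of singular values,
    # pooled across layers, that fall below threshold k and are truncated.
    # Values are rescaled by sigma_max per matrix (Comment 1.1 suggestion).
    truncated, total = 0, 0
    for W in layer_weights:
        S = np.linalg.svd(W, compute_uv=False)
        S = S / S.max()
        truncated += int(np.sum(S < k))
        total += len(S)
    return truncated / total

# Tiny random stand-ins for the LLM weight matrices (shapes are illustrative).
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)), rng.standard_normal((64, 32))]
ratio = rank_reduction_ratio(layers, k=0.1)
```

With k = 0 nothing is truncated (ratio 0), and once k exceeds 1 after rescaling every singular value is truncated (ratio 1), so the ratio interpolates the model-wide rank reduction.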
Summary: This paper proposes that repeated gradient alignment on the leading Hessian directions gradually drives large transformer models toward low-rank weight configurations. The authors formalize this tendency as a “rank collapse” that can be exploited both for compressing pretrained networks and for selectively finetuning only the most significant parameter subspaces. Their method, WeLore, implements a global thresholding rule to prune weaker singular values layer by layer. Empirically, they show that WeLore not only yields strong compression with minor performance impact, but also supports a novel partial finetuning routine that updates only the high-importance (low-rank) components. Claims And Evidence: The key claim is that top Hessian eigenvectors dominate gradient trajectories in large transformers, effectively concentrating the updates within a small subspace. Data from LLaMA-family checkpoints supports this assertion, revealing that the ratio of retained singular values decreases in deeper layers without degrading perplexity or downstream accuracy. The authors also show that once low-rank behavior emerges, subsequent training maintains it or even accentuates it, implying a persistent structural bias that WeLore can exploit. Methods And Evaluation Criteria: WeLore applies a global singular-value threshold to each layer’s weight matrices, creating an adaptive truncation that balances compression and fidelity. The authors assess the resulting models on perplexity (using the C4 dataset) and multiple downstream tasks (QA, summarization, conversation). They also measure speedups from partial finetuning, focusing on the memory and throughput gains by only updating the “highly ranked” fraction of parameters. Theoretical Claims: The analysis builds on classic Lipschitz continuity arguments and eigenvector perturbation bounds, positing that second-order curvature narrows the gradient search space over training iterations. 
By showing that the Hessian’s top eigenvalues shift slowly, the authors infer that meaningful weight updates stay within a lower-dimensional manifold, effectively turning the model’s parameters into a near-rank-deficient collection. While they provide partial proofs for these curvature-based claims, the full set of constants and domain-specific constraints (like attention mechanisms) remain only informally addressed. Experimental Designs Or Analyses: In addition to testing different compression ratios on LLaMA-7B, 13B, and Mistral-7B, the authors compare WeLore’s adaptive thresholding with uniform SVD, outlier-based factorizations, and competing methods for parameter-efficient finetuning. They track how reduced-rank layers retain strong performance across open-domain QA and abstractive summarization. Although the evaluations are thorough within the LLaMA family, no experiments on non-decoder architectures (e.g., T5) or specialized tasks (like code completion) appear. This leaves questions about broader applicability. Supplementary Material: Appendices include an overview of the steps used to approximate the Hessian’s top eigenvectors and additional proofs detailing how small step sizes limit eigenvector rotation. There is also discussion of partial SVD routines that make WeLore’s per-layer factorization more tractable in practice. Relation To Broader Scientific Literature: This work situates itself at the pretty unique intersection of Hessian-based analyses of network optimization and low-rank compression strategies. Unlike prior studies that introduce side modules (e.g., LoRA) or uniform rank constraints, WeLore leverages a single threshold that reflects global curvature trends. It also resonates with recent theoretical models linking gradient flow to emergent structure in overparameterized networks, though the paper’s emphasis on multi-layer transformers marks a specialized application. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: A notable strength is the demonstration that partial finetuning on just the “most relevant” components can match or exceed full finetuning in some tasks, potentially reducing hardware requirements in practical scenarios. Another strength is how their single threshold mechanism adjusts itself to each layer’s singular values. However, a potential weakness is that the approach has only been tested on decoder-only or Mistral-like models, so it is unclear if encoder-decoder architectures with cross-attention layers would exhibit comparable rank collapse. Also, the reliance on repeated SVD across many layers might be resource-intensive, and the discussion of approximate factorization is brief. Other Comments Or Suggestions: It would be valuable to investigate whether these low-rank patterns hold in tasks that deviate strongly from the original training domain, such as specialized code-generation or multimodal inputs. Questions For Authors: Please see several questions in my above comments. Two additional questions: 1. Can deeper cross-attention layers or encoder-decoder styles disrupt the rank collapse pattern observed in decoder-only frameworks, and if so, how might that influence WeLore’s thresholding strategy? 2. Does “freezing” the bulk of weights ever lead to underfitting for tasks that place heavy demands on representational flexibility, or does the initial training’s rank deficiency remain adequate for adaptation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would first like to thank you for taking the time to review our work. We now address your weaknesses point by point: **1. Computational overhead of SVD on large matrices:** Thank you for raising this point. We would like to highlight that WeLore-COMP is a *one-shot, data-agnostic* compression technique. WeLore-COMP requires a **one-time SVD estimation** of all weight matrices of the pretrained checkpoint, which can be saved; various compression ratios (e.g., 10%, 20%, 30%, etc.) can then be achieved by a simple linear search over the threshold k (Algorithm 1) without any need to re-estimate the SVD. 1. For empirical estimates on **LLaMa2-7B** using an Nvidia A6000 RTX GPU, a 4096 × 4096 matrix SVD decomposition takes ~2.6895 seconds. Given 7 weight matrices of approximately similar dimensions per transformer layer in LLaMa2, the SVD cost per transformer layer is ~19 seconds. With 32 layers, the total time is approximately ~10 minutes. *Note that the SVD of each layer can be computed independently and in parallel.* Therefore, with 8 GPUs, the total time reduces to 10/8 ≈ 1.25 minutes. 2. For empirical estimates on **LLaMa2-70B** using an Nvidia A6000 RTX GPU, an 8192 × 8192 matrix SVD decomposition takes ~2.799 seconds. Given 7 weight matrices of approximately similar dimensions per transformer layer, the SVD cost per transformer layer is ~20 seconds. With 80 layers, the total time is approximately ~26 minutes. *Note that the SVD of each layer can be computed independently and in parallel.* Therefore, with 8 GPUs, the total time reduces to 26/8 ≈ 3.25 minutes. We promise to include an additional discussion of the computational cost of SVD in our final draft. **2. Low-rank patterns and Datasets:** We would like to clarify that the low-rank patterns emerging in model checkpoints are a product of their pre-training. 
WeLore exploits these existing patterns for data-agnostic compression and PEFT. Across the wide range of tasks in our experiments, from commonsense tasks to open-ended tasks (MT-Bench with code-centric questions, Table 2), we found that WeLore consistently performs well given the existing low-rank patterns in pre-trained checkpoints. **3. Freezing Weights and Underfitting:** Thank you for raising this point. Across all experiments with WeLore-PEFT at different compression ratios, we did not observe underfitting, and WeLore-PEFT closely mimics the training trajectory of full finetuning. For example, as can be seen from Table 6, with 30% compression, which freezes approximately 70% of the model's weights during PEFT, WeLore outperforms full finetuning across several tasks, indicating that task adaptation can be handled by only a few components of the model. **4. Extension to encoder-decoder architectures:** Thank you for this suggestion. Decoder-only architectures such as the LLaMa and Mistral families are the most popular and most scaled-up architectures, which is why we chose them as our experimental focus. We would like to highlight that while deeper cross-attention layers or an encoder-decoder structure might exhibit differences in rank-collapse behavior compared to decoder-only models, these differences do not fundamentally invalidate the mechanism underlying WeLore's compression or PEFT strategy. The low-rank collapse is driven by the stabilization of the gradient subspace and the emergence of a clear Hessian spectral gap, a phenomenon that is intrinsic to the training dynamics rather than an artifact of a particular architecture. Even if cross-attention layers show a less pronounced rank collapse (due to their role in integrating encoder context, which can diffuse gradient signals), WeLore's design is inherently adaptive. 
In other words:
• Even if some layers (such as deep cross-attention ones) deviate from the typical rank-collapse pattern, the thresholding in WeLore is computed on a per-layer basis, automatically preserving higher ranks where needed and applying more aggressive compression where the low-rank structure is clear.
• The core mechanism, gradient-subspace alignment with the dominant Hessian directions, remains relevant across different architectures.
Thus, while encoder-decoder models might introduce variations, they do not "disrupt" the phenomenon in a way that would render adaptive thresholding ineffective. Due to the rebuttal time limit, we could not complete these experiments, but we are working on them and will include the results in our final version.
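The one-time SVD cost estimate in the rebuttal's point 1 can be reproduced schematically. The sketch below uses a small 256 × 256 matrix as a stand-in for the 4096 × 4096 LLaMa2-7B projections (so the absolute timing will not match the A6000 numbers quoted above); the layer and matrix counts follow the rebuttal, and the 8-way division models the claimed per-layer parallelism:

```python
import time
import numpy as np

MATRICES_PER_LAYER = 7   # weight matrices per transformer layer (per the rebuttal)
NUM_LAYERS = 32          # LLaMa2-7B layer count
NUM_GPUS = 8

# Small stand-in matrix; real projections are 4096 x 4096.
W = np.random.default_rng(0).standard_normal((256, 256))

t0 = time.perf_counter()
np.linalg.svd(W, compute_uv=False)
per_matrix = time.perf_counter() - t0

# Total serial cost, and the cost when the independent per-layer SVDs
# are spread across NUM_GPUS workers.
total_serial = per_matrix * MATRICES_PER_LAYER * NUM_LAYERS
total_parallel = total_serial / NUM_GPUS
```

Because each layer's decomposition is independent, the wall-clock time scales down roughly linearly with the number of workers, which is the basis of the 10 min → ~1.25 min estimate above.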
Summary: This paper studies the emergence of low-rank structures in Large Language Models (LLMs) through gradient subspace stabilization, revealing that as training progresses, gradients increasingly align with dominant Hessian eigenspaces, driving weight matrices toward low-rank factorization. The authors support this phenomenon with theoretical analysis—incorporating Hessian smoothness, KL conditions, and reversibility assumptions—and empirical evidence from LLaMA-based models (7B, 13B) and Mistral-7B. Building on these insights, they propose WeLore-COMP, a data-agnostic compression method that applies an adaptive global threshold on singular values to selectively compress "low-rank components" (LRCs) while preserving "non-low-rank components" (N-LRCs), and WeLore-PEFT, a parameter-efficient fine-tuning approach that optimizes only LRCs while freezing N-LRCs. Empirically, WeLore-PEFT matches or surpasses full fine-tuning performance while significantly reducing memory and computational costs, demonstrating the effectiveness of leveraging LLMs' intrinsic low-rank structure for efficient model compression and adaptation. ## update after rebuttal Thanks for the response. I will keep my score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The main theoretical insights (Theorems 2.1 and 2.2) revolve around bounding changes in Hessian eigenvalues/eigenvectors over training steps under certain smoothness and KL assumptions. They demonstrate that: (1) The top-r eigenvectors converge quickly in orientation (Davis–Kahan, implying small angles); (2) The gradient eventually resides in the span of these dominant eigenvectors, pushing the solution toward a lower-rank subspace. While the authors do not present a fully rigorous line-by-line proof for every constant, they cite standard results (Lipschitz continuity, KL-based gradient decay) and supply sketches in Appendix A. This level of detail is acceptable for a conference paper. 
Experimental Designs Or Analyses: Yes - All experiments. Supplementary Material: Yes - The full supplementary material. Relation To Broader Scientific Literature: This paper extends prior works on: 1. LLM compression (e.g., SVD-based approaches, low-rank factorization, pruning, quantization). 2. PEFT (LoRA, QLoRA, and other adapter-based methods). 3. Hessian/gradient-based analyses linking spectral properties to optimization geometry. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strengths: The paper provides a novel theoretical explanation by linking the emergence of low-rank structures to Hessian eigenspace alignment, offering a more principled understanding beyond heuristic-based approaches. The proposed adaptive rank thresholding achieves a better balance between compression and accuracy compared to uniform baseline methods. Extensive experiments across different model sizes, fine-tuning settings, and tasks demonstrate the robustness of the approach. Additionally, WeLore-PEFT has practical significance, as it can be directly integrated into training pipelines, reducing memory and compute costs while maintaining strong performance. ## Weaknesses: 1. The study focuses primarily on LLaMA- and Mistral-based models, limiting its generalizability to other architectures such as GPT-style models. 2. The computational overhead of performing singular value decomposition (SVD) on large matrices is not thoroughly analyzed, particularly for very large models (13B+ parameters), where approximate SVD methods may be necessary. 3. The theoretical framework relies on a reversibility assumption that may not always hold in standard Transformer layers, and the practical implications of this assumption are not fully discussed. 4. Some layers are classified as non-low-rank components (N-LRCs) with minimal rank collapse, but it remains unclear whether these could still benefit from advanced compression techniques or partial rank factorization. 
Other Comments Or Suggestions: N/A Questions For Authors: Please see weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would first like to thank you for taking the time to review our work, and for finding that it provides extensive experiments establishing robustness and has practical significance. We now address your weaknesses point by point: **1. Computational overhead of SVD on large matrices:** Thank you for raising this point. We would like to highlight that WeLore-COMP is a *one-shot, data-agnostic* compression technique. WeLore-COMP requires a **one-time SVD estimation** of all weight matrices of the pretrained checkpoint, which can be saved; various compression ratios (e.g., 10%, 20%, 30%, etc.) can then be achieved by a simple linear search over the threshold k (Algorithm 1) without any need to re-estimate the SVD. 1. For empirical estimates on **LLaMa2-7B** using an Nvidia A6000 RTX GPU, a 4096 × 4096 matrix SVD decomposition takes ~2.6895 seconds. Given 7 weight matrices of approximately similar dimensions per transformer layer in LLaMa2, the SVD cost per transformer layer is ~19 seconds. With 32 layers, the total time is approximately ~10 minutes. *Note that the SVD of each layer can be computed independently and in parallel.* Therefore, with 8 GPUs, the total time reduces to 10/8 ≈ 1.25 minutes. 2. For empirical estimates on **LLaMa2-70B** using an Nvidia A6000 RTX GPU, an 8192 × 8192 matrix SVD decomposition takes ~2.799 seconds. Given 7 weight matrices of approximately similar dimensions per transformer layer, the SVD cost per transformer layer is ~20 seconds. With 80 layers, the total time is approximately ~26 minutes. *Note that the SVD of each layer can be computed independently and in parallel.* Therefore, with 8 GPUs, the total time reduces to 26/8 ≈ 3.25 minutes. We promise to include an additional discussion of the computational cost of SVD in our final draft. **2. 
Generalizability to other architectures such as GPT-style models:** We would kindly ask for additional clarification on this weakness. Our experiments across various tasks and compression ratios on LLaMa and Mistral illustrate that WeLore generalizes to GPT-style decoder models. Decoder architectures are the most popular and most scaled-up architectures, which is why they are our experimental focus. **3. The theoretical framework relies on a reversibility assumption:** The current proof employs the reversible network structure primarily as a sufficient condition to ensure the boundedness and stability of both gradients and Hessians. In particular, the reversibility assumption is invoked to prevent unbounded growth of norms and to guarantee stable spectral properties of the Hessian sequence $\{H_t\}$. Notably, if boundedness and stability can be established through alternative means, strict reversibility may not be necessary. From a purely mathematical standpoint, the key requirements of the proof are as follows: 1. **Boundedness of $||G||$, $||W_t||$, and $||H_t||$:** It is imperative to ensure that the norms of gradients, weights, and Hessians do not diverge. When these quantities remain bounded, the spectral gap property and the Lipschitz continuity of the Hessian ensure stable eigen-decomposition and validate the application of Davis–Kahan perturbation theory. 2. **Stable Spectral Gap:** For $H_t=\nabla^2 L(W_t)$ with eigenvalues in descending order, there is a uniform $\gamma>0$ such that $\min_{1 \le i \le r,\, j>r}|\lambda_i(H_t)-\lambda_j(H_t)|\ge \gamma$. It is important to emphasize that these conditions do not inherently require reversibility. Many standard optimization settings, such as strongly convex problems or well-conditioned neural architectures, employ mechanisms that naturally ensure boundedness and prevent degenerate Hessians. 
Consequently, if uniform boundedness of parameters, gradients, and Hessians can be secured by other means (for example, weight regularization, strong convexity near minima, or alternative architectural constraints), then the strict assumption of reversibility is not essential. **4. Benefit from other compression techniques on N-LRCs layers:** Thank you for raising this point and we agree that exploration of mixed compression strategy can be highly interesting with WeLore. We have indeed conducted experiments to study WeLore in conjunction with pruning. A careful observation of popular *non-uniform layerwise pruning algorithms (https://arxiv.org/pdf/2310.05175) reveals that the majority of middle transformer blocks can be subjected to a higher pruning ratio* which is **complementary** to *WeLore low-rank reduction ratio that favours terminal blocks being low-rank friendly* (Appendix B.1). Our experimental results in Appendix B.2 reveals that **WeLore can be used in conjunction with SoTA pruning techniques** like SparseGPT, Wanda etc. due to existing orthogonal properties in mixed compression settings with minimal performance degradations.
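The Davis–Kahan step invoked in this rebuttal can be written out explicitly. Under the stated spectral-gap condition (a uniform $\gamma > 0$ separating the top-$r$ eigenvalues of $H_t$ from the rest), the standard sin-theta theorem bounds the rotation of the top-$r$ eigenspace between consecutive Hessians; this is a sketch with constants suppressed, using the rebuttal's notation:

```latex
% Davis–Kahan sin-theta bound applied to consecutive Hessians.
% V_r(H) denotes the matrix of top-r eigenvectors of H.
\left\| \sin\Theta\big( V_r(H_{t+1}),\, V_r(H_t) \big) \right\|
  \;\le\; \frac{\| H_{t+1} - H_t \|}{\gamma}
```

Combined with the Lipschitz continuity of the Hessian, $\|H_{t+1}-H_t\| \le L_H \|W_{t+1}-W_t\|$, small step sizes keep the right-hand side small, which is what makes the dominant eigenspace, and hence the gradient subspace, stable across training steps.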
Summary: This paper investigates the low-rank property of LLM weights. The authors identify that low-rank properties vary systematically across components (q/k/v/o/mlp1/mlp2/out) and network depth. Based on this observation, they develop: 1) WeLore-COMP for non-uniform compression across different layers, and 2) WeLore-PEFT for selectively fine-tuning only components with good low-rank properties. Claims And Evidence: Yes. 1. Novel theoretical analysis. The gradient subspace analysis provides valuable insights into why low-rank structures emerge in LLMs, connecting optimization dynamics to model structure. 2. Non-uniform approach. The recognition that different components have inherently different low-rank properties is insightful and leads to significant performance gains over uniform compression methods. Methods And Evaluation Criteria: Yes. Unified framework. Unlike prior work focusing separately on compression or fine-tuning, WeLore provides a cohesive approach addressing both challenges simultaneously. Theoretical Claims: Not yet. Experimental Designs Or Analyses: Yes, this paper has a comprehensive evaluation. The work includes extensive experiments across different models (LLaMA-2 7B/13B, Mistral-7B), compression ratios (10-70%), and downstream tasks. Supplementary Material: Yes, but not fully in detail. Relation To Broader Scientific Literature: Yes Essential References Not Discussed: No Other Strengths And Weaknesses: Weaknesses: 1. Figure issues and visualization clarity. Several figure problems: (1) Row 1, column 4 in Figure 2 is mislabeled as "k_proj" instead of "q_proj"; (2) Figure 1 lacks logical organization for comparing eigenvalue gaps among MLP components, attention components, and different layer depths; (3) Almost all the text is missing in Figures 3-6, so it is impossible to interpret their results properly. 2. Hyperparameter sensitivity. The threshold $k$ used for determining rank reduction lacks rigorous justification. 
The paper doesn't analyze how sensitive the results are to this parameter or provide a principled method for selecting optimal values across different models and tasks. 3. Subjective LRC/N-LRC definition. The distinction between Low-Rank Components (LRCs) and Non-Low-Rank Components (N-LRCs) relies on empirical thresholds and visual interpretation of "heavy-tail" distributions (in Figures 1-2). A quantitative metric would be better. 4. Limited comparison with quantization methods. The method lacks a thorough comparison with other compression methods (quantization, pruning). Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would first like to thank you for taking the time to review our work. We greatly appreciate that you found our theoretical analysis novel and our experimental section comprehensive, and that you identified the unique proposition of WeLore as a cohesive method handling compression and fine-tuning in a unified way. We now address the weaknesses you pointed out one by one: **1. Issues with Figure and Visualization Clarity:** We appreciate your concerns regarding the mislabeled entries and missing text in the figures of the submitted draft. We promise to address all of these issues in the camera-ready version of our submission. **2. Hyperparameter sensitivity (k):** Thank you for raising this point. We would like to highlight that our hyperparameter k is uniquely determined by the effective rank reduction ratio (ERR) required. We use a linear search over thresholds in [0, 1] with step 0.005 (i.e., np.arange(0, 1, 0.005)), following the algorithm described in Appendix D3 (Algorithm 1), preserving normalized singular values greater than the threshold k so that the target ERR is achieved. In our experiments, we found that a precision of 0.005 is sufficient to select the optimal threshold k for a given ERR without notable variation in results. We will provide this additional clarification in the final draft of our submission. For additional information, the pre-estimated singular value thresholds (k) for LLaMa-2 7B and 13B are as follows:

| Model | 10% | 20% | 30% | 40% | 50% | 60% | 70% |
| ----- | ---:| ---:| ---:| ---:| ---:| ---:| ---:|
| LLaMa-2 7B | 0.065 | 0.085 | 0.115 | 0.145 | 0.175 | 0.215 | 0.260 |
| LLaMa-2 13B | 0.065 | 0.085 | 0.115 | 0.140 | 0.180 | 0.225 | 0.270 |

**3. WeLore and other compression techniques:** We appreciate your interest in comparing WeLore with other compression techniques such as pruning and quantization. 
WeLore is a low-rank compression method with the unique benefit of hardware-friendly acceleration, and it is unclear how a given low-rank compression ratio equates to a given pruning or quantization ratio. However, we have indeed conducted experiments studying WeLore in conjunction with pruning. A careful observation of popular *non-uniform layerwise pruning algorithms (https://arxiv.org/pdf/2310.05175) reveals that the majority of middle transformer blocks can be subjected to a higher pruning ratio*, which is **complementary** to *WeLore's low-rank reduction ratio, which favours the terminal blocks as low-rank friendly* (Appendix B.1). Our experimental results in Appendix B.2 reveal that WeLore can be used in conjunction with SoTA pruning techniques such as SparseGPT and Wanda, due to orthogonal properties in mixed compression settings, with minimal performance degradation.
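The linear threshold search described in the rebuttal's point 2 can be sketched as follows. This is a schematic reconstruction, not the paper's Algorithm 1: singular values are normalized per matrix, and the smallest threshold k (stepping by 0.005) whose induced effective rank reduction ratio meets the target is returned. The toy spectra are invented for illustration:

```python
import numpy as np

def find_threshold(per_layer_singular_values, target_err, step=0.005):
    # Linear search over thresholds in [0, 1): return the smallest k whose
    # induced effective rank reduction ratio (ERR) reaches target_err.
    # Sketch of the search the rebuttal describes; the paper's actual
    # Algorithm 1 may differ in details.
    normalized = [S / S.max() for S in per_layer_singular_values]
    total = sum(len(S) for S in normalized)
    for k in np.arange(0.0, 1.0, step):
        reduced = sum(int(np.sum(S < k)) for S in normalized)
        if reduced / total >= target_err:
            return float(k)
    return 1.0

# Toy spectra: a "low-rank friendly" layer (fast decay) and one that is not.
S1 = np.array([1.0, 0.5, 0.1, 0.05])
S2 = np.array([1.0, 0.2])
k = find_threshold([S1, S2], target_err=0.5)
```

Because the singular values are computed once and reused, sweeping different target ERRs (10%, 20%, ...) only re-runs this cheap search, which is why the rebuttal can tabulate one threshold per compression ratio without re-estimating any SVD.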
MathConstruct: Challenging LLM Reasoning with Constructive Proofs
Accept (poster)
Summary: This paper introduces MathConstruct, a novel mathematical benchmark designed to evaluate LLMs' reasoning in constructive proofs from high-school competition problems. The authors curated a dataset of 127 challenging problems from various sources and converted them into a unified format with symbolic parameters, enabling parameter variations and generating a total of 480 problem variants. To rigorously assess LLMs on these tasks, MathConstruct provides Python-based verification code for each problem and incorporates a parser that prompts LLMs and delivers feedback to refine their responses toward the correct output. Experimental results reveal that current state-of-the-art LLMs (e.g., OpenAI o1) achieve only 41.08% average accuracy on the full set of 480 problems and 22.83% robust accuracy across all variants of the original 127 problems. Additionally, the authors conduct a comprehensive evaluation, employing code agents, brute-force methods, error analysis, and potential data contamination assessments. Claims And Evidence: From my perspective, all claims made in the submission are supported by clear evidence. Methods And Evaluation Criteria: I think the proposed benchmark and evaluation setting are well-motivated and make sense. Theoretical Claims: N/A Experimental Designs Or Analyses: I think the experiments are comprehensive and sound. Supplementary Material: Yes, I have reviewed the data and examined some of the logs in the supplementary material. Relation To Broader Scientific Literature: I think the proposed benchmark serves as a valuable complement to existing mathematical reasoning benchmarks. Essential References Not Discussed: I find the related work section to be thorough and well-researched. Other Strengths And Weaknesses: I find the paper is well-motivated, novel, and overall well-written. 
It primarily evaluates LLMs on informal constructive proofs, while leveraging Python-based verification methods instead of formal theorem provers, making the approach more lightweight and accessible. The experiments and analysis are also quite comprehensive. I do not see any major weaknesses in the paper. However, one potential limitation is that constructive proofs in Olympiad mathematics may be relatively rare compared to other types of proof-based or computational problems. Other Comments Or Suggestions: I would say the paper is overall very strong, but there are some minor aspects that could be improved. If I understand correctly, the phrase "symbolic problem statement in natural language" in lines 182 and 211 could be more clearly written as "problem statement in natural language with symbolic parameters." Additionally, the authors could consider evaluating DeepSeek R1 on their proposed benchmark to further enhance the comprehensiveness of the evaluation. Questions For Authors: In principle, given the original 127 problems in MathConstruct, is it possible to generate an arbitrary number of variants for each problem? What design choices guided the selection of the 480 problem variants used in the benchmark? Code Of Conduct: Affirmed. Overall Recommendation: 4
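The Python-based verification this review praises can be illustrated with a toy checker. The problem below is invented for illustration (MathConstruct's actual verifiers are problem-specific and paired with symbolic parameters): given a parameter n, the model must construct n distinct positive integers whose sum is divisible by n.

```python
def verify(construction, n):
    # Toy MathConstruct-style verifier (problem invented for illustration):
    # accept a list of n distinct positive integers whose sum is divisible by n.
    return (
        isinstance(construction, list)
        and len(construction) == n
        and len(set(construction)) == n
        and all(isinstance(x, int) and x > 0 for x in construction)
        and sum(construction) % n == 0
    )
```

Because the check is executable for any value of the symbolic parameter, a model's answer to every problem variant can be graded automatically, which is what makes the benchmark's robust-accuracy metric feasible without formal theorem provers.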
Rebuttal 1: Rebuttal: We thank reviewer 6y3x for their review. We are delighted that they recognize MathConstruct as a valuable and novel contribution. We also appreciate their feedback on the clarity of our paper and will incorporate the suggested clarifications. Below, we address their additional questions: **Q.1. Could the authors evaluate more recent models on MathConstruct?** We have evaluated newly released frontier models, including o3-mini and DeepSeek R1, and provide an updated results table below:

| Model | Avg | Robust | Cost |
| --- | --- | --- | --- |
| Llama-3.1-405B | 3.17 | 1.59 | 1.99 |
| GPT-4o-mini | 3.77 | 1.59 | 0.32 |
| Llama-3.3-70B | 3.77 | 1.59 | 0.67 |
| 3.5-Haiku | 3.37 | 1.59 | 1.37 |
| GPT-4o | 3.57 | 0.79 | 4.62 |
| 3.5-Sonnet | 4.17 | 0.79 | 4.80 |
| Qwen2.5-72B | 6.35 | 1.59 | 2.24 |
| Flash | 11.57 | 3.17 | N/A |
| QwQ-Preview | 13.89 | 7.14 | 8.34 |
| o1-mini | 25.46 | 10.32 | 51.49 |
| Flash-Thinking | 27.05 | 11.11 | N/A |
| R1 | 32.28 | 15.08 | 48.39 |
| o1 | 41.34 | 23.02 | 434.08 |
| o3-mini | **53.77** | **34.92** | 71.14 |

We observe that o3-mini significantly outperforms even o1, demonstrating stronger generalization and mathematical reasoning while being considerably more cost-efficient. R1, on the other hand, achieved an accuracy of only 32.3%. **Q.2. How many variants can be generated, and how were the 480 presented variants selected?** Many of our problems, particularly those in the "Find Inf" category, permit infinitely many variations. However, generating interesting variations is more challenging for other types of problems, as their constructions fail for most values. Additionally, for each problem, we manually defined a range of parameters to ensure two important conditions: (1) the problem is resistant to brute-force solutions, and (2) the resulting output remains within a 4,000-character limit. 
Most problems allow for at least four variations under these constraints, although a small number permit only two or three. Therefore, we generated four variations per problem, or fewer when four were not feasible. In practice, many problems could include significantly more variations. This point is illustrated clearly in Figure 7, in which we use 24 variations for each of the ten “Find Inf”-problems discussed in this subsection. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I continue to hold a positive assessment of the paper.
Summary: The paper presents MathConstruct, a new benchmark to test LLMs on constructive mathematical proofs, with a symbolic verifier checking the correctness of each problem. Unlike traditional math benchmarks that focus on problems with fixed numerical answers, MathConstruct introduces 127 modified olympiad-level math problems, where tasks require constructing mathematical objects with specific properties. The paper also evaluates 13 SOTA LLMs on the dataset, and the best LLM only achieved 41% accuracy, which highlights its difficulty. Claims And Evidence: The paper claims that MathConstruct is a challenging benchmark for evaluating LLMs in mathematical reasoning. The evidence is demonstrated by extensive experimental results on various models such as GPT-4o and o1, comparisons with brute-force approaches, and error analyses. Methods And Evaluation Criteria: (i) The methods for problem selection, symbolic encoding, and evaluation are well-defined. The benchmark is designed with strict verification criteria, ensuring that models must generate correct constructions rather than relying on memorization. (ii) Evaluation metrics like average accuracy and robust accuracy in this paper are the correctness of constructed instances on selected variations of problems. However, many problems in this benchmark (e.g. IMO Shortlist 2014 C3 in Appendix B on Page 13) have infinitely many variants (here, variants include \{(n,k) | n, k \in \mathbb{N}^+, n\geq 2, k\geq 1, k \leq \lfloor \sqrt{n-1}\rfloor \}), but evaluations are done on small subsets of the infinite variants, which is insufficient to establish that the problem is solved in general. Overall, the metrics are not suitable enough for evaluating the correctness of problem-solving. Theoretical Claims: The paper does not make any new theoretical claim. Experimental Designs Or Analyses: The experimental design seems comprehensive, covering different LLMs and evaluation settings. The error analysis is detailed, revealing common failure modes in LLMs.
Supplementary Material: The supplementary material includes the dataset and the code implementation. Relation To Broader Scientific Literature: The paper is closely related to mathematical reasoning and also contributes to formal theorem proving to some extent. Essential References Not Discussed: This paper cites all essential references in this research area, but could add the following: (i). Program-assisted LLM (PAL) paper in solution enumeration: [1] Gao, Luyu, et al. "Pal: Program-aided language models." International Conference on Machine Learning. PMLR, 2023. (ii). Datasets: [2] Li, Jia, et al. "Numinamath: The largest public dataset in ai4maths with 860k pairs of competition math problems and solutions." Hugging Face repository 13 (2024). Other Strengths And Weaknesses: Strengths: The paper introduces an interesting and challenging benchmark for evaluating LLM reasoning via constructive math problems. Weaknesses: 1. The benchmark dataset is very small in size, containing only 127 problems in total. This number of problems is insufficient to fine-tune an LLM on this task or to conduct a thorough, in-depth evaluation of constructive math problems. 2. In addition, the dataset curation relies too much on manual labor (including problem selection, verifier construction, and quality checks), making it intractable to scale up. Synthetic data-generation methods could be applied to create a large-scale, high-quality dataset. 3. The “problem variations” in this paper are still limited to specific cases of problems. For example, on Page 4 Figure 3, the “variation” consists of different values of $n$ in a problem, rather than a completely new problem adapted from the original one. 4. Regarding the limitations in evaluation metrics, check the Methods and Evaluation Criteria section. 5. The paper mentions that constructive proofs are challenging but does not propose any method to enhance or improve LLMs’ capabilities in such tasks.
It would be very interesting and a valuable contribution if the authors could implement a method that outperforms the LLM baselines. Other Comments Or Suggestions: N/A Questions For Authors: How do you mitigate potential dataset contamination, especially since olympiad problems are commonly included in pre-training data? Have you provided another set of problems with timestamps after the knowledge cutoffs of popular LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer S8Ey for their review. We appreciate that they found MathConstruct to be an interesting benchmark, and our experimental design detailed. Below, we address their other questions: **Q.1. Does robust accuracy suitably measure the correctness of problem solving for a given question?** Yes, because all included variations require the same underlying solution strategy. Furthermore, we explicitly exclude brute-forceable variations, thereby preventing models from using shortcuts specific to any single problem instantiation. Therefore, if a model correctly applies the solution strategy across all four provided problem variations, it is very likely capable of generalizing this approach to other instantiations. More fundamentally, the main goal of our benchmark does not depend on these specific metrics. Specifically, our benchmark aims to provide effective means of evaluating models on a new reasoning task, where relative accuracy between models is the primary metric of interest. **Q.2 Is the small dataset size a limitation?** No, this is not a limitation. Although our benchmark is smaller than datasets like MATH [1], it is significantly more challenging, is better verified, and incorporates variations designed to test model generalization. It is worth noting that many widely used benchmarks contain relatively small problem sets. For instance, the 2024/2025 AIME competitions, which are frequently used to evaluate reasoning capabilities, consist of only 30 problems each. Lastly, rigorous benchmarking requires carefully curated problems that are neither erroneous nor overly simplistic. Thus, the importance of high-quality examples outweighs dataset size. **Q.3. Are there ways to reduce the amount of manual labour involved in the construction of the benchmark?** We refer the reviewer to our response to **Q.1** of reviewer Aczy. **Q.4. 
What value do the variations add to the benchmark?** The variations we define have three main purposes: - **Reducing Variance**: Including variations effectively increases the benchmark size, thus reducing measurement variance. - **Problem Analysis**: We can perform several additional experiments, such as the one presented in Section 4.5, that analyze the effect of problem variations on the model solutions. - **Reducing Contamination**: Variations guard against memorization, as a model encountering a known problem from its training data must successfully solve all variations. **Q.5. Is there a way to improve the performance of LLMs?** Yes, we already included several improvements designed to enhance model accuracy. Specifically, we implement parser feedback to ensure accurate interpretation of answers and evaluate agent-based approaches. Furthermore, prompted by the reviewer, we ran additional experiments evaluating an agent-based approach that obtains detailed feedback from the ground-truth verification function. These functions log each failure with a specific reason, and therefore allow the model to correct its answer based on this feedback. Using this approach, o3-mini (the best model) improves by an additional 12% (53%->65%). We stress that this experiment should not be considered valid for real-world applications due to the lack of access to the verification function at inference time, but it does give an upper bound on what is possible. We will add a discussion about this result in the appendix of our paper. We did consider an advanced agent-based approach where the model would first implement this verifier function itself and subsequently use it to validate its solution. However, preliminary experiments checking the automation feasibility of our benchmark showed that current models struggle significantly with this task. Thus, we currently consider this out of scope for this benchmark paper. **Q.6.
Did the authors apply mitigations against contamination?** First, we emphasize that our benchmark's problems rarely appear verbatim in existing training datasets. Notably, problems beyond the “Find Any”-category underwent substantial revision to become suitable constructive problems, and some were translated from other languages. Second, we implemented a contamination detection strategy described in Section 4.4. Previous work shows that rewording mathematical problems can influence model performance in the presence of contamination [2]. Applying this strategy, we observed minimal contamination of our benchmark. **Q.7. Can you include references to these additional works?** We appreciate the reviewer’s suggestion and will incorporate the additional references. [1] https://arxiv.org/abs/2103.03874 [2] https://arxiv.org/abs/2405.16281 --- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply. While the dataset is small and the proposed methods do not fully solve the constructive problems in olympiad math, this paper is still interesting and does introduce a promising path towards informally/formally solving constructive problems. I have raised my rating to 3.
Summary: This paper proposes MathConstruct, a new benchmark for mathematical reasoning based on constructive problems from math competitions. These problems are highly interesting, yet benchmarks generally avoid them due to their non-unique answers. This paper contributes a suite of problems taken from past competitions, along with answer verifiers. The problems are shown to be generally very hard for most LLMs, and OpenAI o1 does substantially better, although also at much higher cost. The authors also propose systematic problem variations to probe for robustness, and evaluate agents with access to a Python interpreter. Most of the problems are hard to brute-force via code alone, although access to the interpreter generally enables models to do better (again at higher cost). Claims And Evidence: Yes, they are clear Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. The benchmark construction seems sound (it involved a significant amount of manual analysis from students with experience in math competitions, so that part is hard to check, yet I think it is inevitable for constructing a high-quality dataset in this domain). The experimental analysis of frontier LLMs also seems sound: I checked the examples in the appendix, and the authors put a decent amount of effort into getting parsing right, which is often a pain in these evaluations. I believe that the results reflect the models' capabilities, not any unsoundness in the pipeline (e.g. failure to parse answers, etc). Supplementary Material: Yes. I mainly looked at problems (data/revised_problems.json) and their corresponding verifiers (e.g. src/math_construct/problems/bmo_shortlist/problem_2008_n1.py). They both seem easy to use, and consistent with the description in the paper.
Relation To Broader Scientific Literature: This paper augments the literature on the evaluation of mathematical reasoning in LLMs by providing a class of problems that has been mostly ignored so far, due to the complexity of verifying answers. Although there are many benchmarks for "mathematical reasoning" in general, none of them (as far as I know) contains constructive problems. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Other strengths The paper finds that performance on standard benchmarks (e.g. MATH, and more recent results on AIME reported by various frontier labs), which would make it seem like competition problems have almost been saturated already, does not tell the full picture. Even neglected problems from the same difficulty region are still very challenging. The difference in performance between GPT-4o and Sonnet on AIME vs on MathConstruct is quite stark. The paper is clear, and the results are significant for the community at large, especially now that mathematical reasoning has become one of the main evaluations of frontier LLMs, as advertised extensively by model creators whenever they release models. ### Weakness The only issue might be that the dataset construction pipeline seems quite effortful for further work to expand. Human-curated datasets are very rarely expanded later (GSM8k -> GSM1k being a rare example). This doesn't diminish the value of the current benchmark or the results, but I think the work would be more "future proof" if some of the more labor-intensive parts could have been automated (e.g., I could imagine an LLM-based pipeline doing a first pass on (a) filtering relevant problems from recent competitions, (b) writing a verifier and ensuring that the example solution verifies, etc., perhaps with manual review by humans only at the end).
Other Comments Or Suggestions: N/A Questions For Authors: Did the authors consider automating some parts of the pipeline, as suggested above (besides the quality checks, but I mean more for the construction of the problems per se)? If so, were there significant challenges that prevented this from making into the paper? This would be good for future work to be aware of. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments, and are happy to read that they find our work novel and relevant to the community, and our framework easy to work with. We address any remaining concerns below. **Q.1. Can any parts of the pipeline be automated? What challenges did the authors encounter when trying to automate them?** Although certain elements of our pipeline can be readily automated, most aspects currently remain beyond the capabilities of state-of-the-art agents and models. We detail the three main components of the benchmark curation process below: - **Problem Curation**: One aspect of the pipeline where automation was partially successful involves the curation and filtering of constructive problems. Constructive problems can be identified according to the selection criteria specified in our paper. A small portion of our benchmark questions were automatically selected by GPT-4o-mini from a larger internal dataset, and subsequently processed manually by the authors. While there were still plenty of false positives among the selected problems, this did make the selection procedure easier for these questions. - **Verifier Function**: The most challenging part to automate is the creation of verification functions. Although we experimented briefly with automating this task, current models were unable to generate rigorous enough verification functions. Developing a robust verification function requires the identification of potential failure modes in model outputs, careful creation of efficient checkers, and creation of thorough test cases that cover all relevant edge cases. While future LLMs might be able to automate this, this currently remains out of reach for complex questions of MathConstruct. - **Problem Validation**: Another critical aspect is ensuring problem quality, which falls between problem curation and verifier function generation in terms of automation feasibility. We automated several aspects of this step. 
For example, we used LLMs to confirm that problems could not be solved through brute-force methods, as detailed in the paper. However, each problem underwent additional validation through an internal peer-review process among the authors. Manual peer review is essential for catching small issues and ensuring nothing is missed. While automation can help, it currently cannot fully replace this process, as one needs to be certain that problems are correct once processed through this review. Therefore, even in a fully automated pipeline, some form of human validation would remain necessary. In conclusion, while investigating further automation opportunities is valuable future work, we emphasize the importance of prioritizing a smaller yet more rigorously curated benchmark to accurately measure model performance.
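To make the preceding discussion of verification functions concrete, here is a minimal sketch of a verifier in the style described, which logs each failure with a specific reason. The problem ("find n distinct positive integers whose sum equals their product") and the function name are hypothetical illustrations, not taken from the benchmark:

```python
import math

def verify_sum_equals_product(n, answer):
    """Check that `answer` is a list of n distinct positive integers
    whose sum equals their product; return (ok, reason) so failures
    carry a specific cause."""
    if len(answer) != n:
        return False, f"expected {n} numbers, got {len(answer)}"
    if any(not isinstance(x, int) or x <= 0 for x in answer):
        return False, "all entries must be positive integers"
    if len(set(answer)) != n:
        return False, "entries must be distinct"
    if sum(answer) != math.prod(answer):
        return False, f"sum {sum(answer)} != product {math.prod(answer)}"
    return True, "ok"
```

For n = 3, the construction [1, 2, 3] verifies (1 + 2 + 3 = 6 = 1 * 2 * 3), while [1, 1, 4] is rejected with a distinctness error; writing such checkers rigorously for genuinely hard problems is the part the rebuttal identifies as resistant to automation.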
Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM’s Reasoning Capability
Accept (poster)
Summary: The paper introduces the concept of critical tokens in mathematical reasoning tasks, which are pivotal points in incorrect reasoning trajectories that significantly influence the final outcome. The authors propose a novel framework for identifying these tokens through contrastive estimation, and they further introduce cDPO. Reducing the occurrence probability of critical tokens through DPO is straightforward. ## update after rebuttal I will keep my positive score because my concerns have been partially addressed. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: I am not familiar enough with this area to seriously recommend anything. Essential References Not Discussed: I am not familiar enough with this area to seriously recommend anything. Other Strengths And Weaknesses: **Strength** 1. The introduction of critical tokens is a novel and insightful contribution to the field of mathematical reasoning in LLMs. The authors provide a clear definition and empirical validation of these tokens, showing their significant impact on model accuracy. 2. This paper is well-written and easy to follow. The paper provides sufficient technical details for readers to understand. **Weakness** 1. A model trained with incorrect trajectories is used to simulate the probability distribution of critical tokens, but the incorrect trajectories do not necessarily satisfy the two conditions on page 2. 2. The paper lacks a sensitivity analysis of the proposed method with respect to hyperparameters, such as the scaling factor β in contrastive estimation. Understanding how sensitive the results are to these parameters would be valuable for practitioners looking to implement the method. Other Comments Or Suggestions: This is discussed in the Strengths And Weaknesses. Questions For Authors: The Question is discussed in the Weaknesses. Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your insightful and detailed comments. Below, we address each concern and hope that our responses sufficiently clarify your questions. **Weakness** **W1. Critical Token Estimation** In our approach, the distribution for critical tokens is approximated using a model trained with incorrect trajectories. Although these incorrect trajectories do not necessarily satisfy the two conditions specified on page 2, our experimental results nevertheless indicate robust and competitive performance. We measured the Area Under the Curve (AUC) of Contrastive Estimation (CE) with respect to rollout on LLaMA-3-8B, obtaining values of: (1) 0.77 on GSM8K, and (2) 0.84 on MATH. These metrics and corresponding analyses will be explicitly included in our revised manuscript. **W2. Ablation Study for Hyperparameter Sensitivity** Thank you for pointing out this important issue. We fully agree that examining hyperparameter sensitivity will strengthen the paper, enhancing its usefulness for practitioners. Accordingly, we have now included an ablation study investigating different values of the scaling factor β using the LLaMA-3-8B model on GSM8K, as summarized in the table below. | β-value | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 | 1.75 | 2.0 | 2.25 | 2.5 | |---------|------|------|------|------|------|------|------|------|------| | cDPO | 66.5 | 69.2 | 67.9 | 67.2 | 69.5 | 70.7 | 68.4 | 66.8 | 68.9 | As further elaborated in Appendix section 'Distribution Analysis of Contrastive Estimation,' the hyperparameter β affects the mean of the modified distribution P^{ce}. The above ablation results demonstrate that selecting β in a suitable range (approximately 1.5 to 1.75) shifts P^{ce} toward a more accurate distribution, thereby improving overall model performance.
Summary: In this paper, the authors start from the observation that the existence of critical tokens influences model performance and propose a contrastive estimation method to identify the critical tokens. Finally, the authors propose cDPO to improve model performance. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strength 1. The paper is well-written and easy to follow. 2. The observation and analysis are novel and insightful. 3. Results of the proposed methods work well. ### Weakness 1. My main concern is the correctness of the baseline results. I find the baseline results are lower than those reported in the original papers. For example, in Table 2 of DeepSeekMath, its performance on MATH is 36.2, not 31.4 as reported in the paper. Also, from Table 12 of the LLaMA 3 report, LLaMA 3-8B gets 20.3 on MATH, not 16.8. This is a serious problem, and I think the authors should clarify it in the rebuttal. 2. The experiment is somewhat simple, with only three models and two benchmarks. This also weakens the persuasiveness of the proposed methods. 3. Similar to 2, there are no ablations and few analyses of the proposed methods. In short, I think the paper starts from an interesting observation and convincing analyses, but the experimental results have some problems. So, I give a weak reject to the current version, and I'll adjust the final rating based on the rebuttal. ## update after rebuttal The rebuttal partly solved my concerns. So I raise my score to weak accept. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your constructive feedback and your willingness to engage in further discussions with us. We have responded to each of the issues you raised below and have carefully addressed all your concerns. **Weakness** **W1. Baseline results** Thank you for pointing out this important issue. Upon careful investigation, we discovered that this disparity stems from a formatting inconsistency in the few-shot prompting examples. Specifically, the prompts differed slightly in spacing: - "Problem:\n" (line break) - versus "Problem: " (single space) This seemingly minor formatting difference unfortunately resulted in underestimated baseline performance on the MATH500 dataset. We have since corrected this issue. The revised performance numbers are reported below: | Model | MATH500 | |---------------------|---------| | DeepSeek-math-7B | 34.0 | | + cDPO | 35.2 | | Llama-3-8B | 18.6 | | + cDPO | 19.6 | | Llama-3-70B | 44.4 | | + cDPO | 45.0 | | Qwen-2.5-7B | 49.2 | | + cDPO | 54.0 | | Qwen-2.5-32B | 58.8 | | + cDPO | 64.8 | We will update and clarify the results accordingly in the revised manuscript. **W2. More experiments** We agree with your comment on the limited scope of experiments. To address this, we have conducted additional experiments using Qwen-2.5-7B and Qwen-2.5-32B on both GSM8K and MATH500 datasets. The expanded experimental results are detailed below: | Model | GSM8K | MATH500 | |-----------------|-------|---------| | Qwen-2.5-7B | 85.5 | 49.2 | | + cDPO | 87.5 | 54.0 | | Qwen-2.5-32B | 93.0 | 58.8 | | + cDPO | 93.5 | 64.8 | Additionally, we have also evaluated Pass@1 accuracy under various temperature settings. 
We sampled each question 10 times for each temperature setting and report average Pass@1 accuracy: | Temperature (T) | 0 | 0.25 | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 | |------------------------|------|------|------|------|------|------|------| | Llama-3-8B | 18.6 | 16.4 | 15.3 | 13.0 | 9.5 | 3.3 | 1.2 | | + cDPO | 19.6 | 20.3 | 20.1 | 19.6 | 18.7 | 19.7 | 18.3 | | DeepSeek-math-7B | 34.0 | 31.8 | 30.5 | 26.2 | 21.1 | 11.3 | 3.0 | | + cDPO | 35.2 | 34.5 | 34.5 | 34.5 | 33.9 | 32.9 | 32.8 | | Qwen-2.5-7B | 49.2 | 46.9 | 45.1 | 41.4 | 34.0 | 20.1 | 2.8 | | + cDPO | 54.0 | 54.2 | 53.6 | 52.8 | 52.9 | 53.4 | 51.9 | These results demonstrate clearly that: - cDPO consistently surpasses the baseline model performance. - cDPO maintains stability and robustness across diverse temperature settings. **W3. Ablation of the proposed methods** Thank you for the valuable suggestion. To strengthen our analysis, we have conducted an ablation study on the hyperparameter β controlling the mean of the contrastive distribution P^{ce}, as explained in the Appendix section “Distribution Analysis of Contrastive Estimation”. Using LLaMA-3-8B on GSM8K, we observed the performance effects of β values, shown below: | β-value | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 | 1.75 | 2.0 | 2.25 | 2.5 | |---------|------|------|------|------|------|------|------|------|------| | cDPO | 66.5 | 69.2 | 67.9 | 67.2 | 69.5 | 70.7 | 68.4 | 66.8 | 68.9 | These results indicate that optimal performance occurs when β is set within a range of 1.5–1.75, highlighting the need to appropriately balance the amplification and suppression of token likelihoods during training. We will present a more detailed discussion and expanded analysis on this topic in the next revision of the manuscript. --- Rebuttal Comment 1.1: Comment: The rebuttal partly solved my concerns. So I raise my score to weak accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer KGMH, Thank you for your reply and for updating your score! 
We will incorporate the experimental results presented in the rebuttal into the revised version of the paper. We truly appreciate your time and support in helping us improve our work. Best. Submission 2521 Authors
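As a brief note on the metric in the temperature study above: when each question is sampled k times, average Pass@1 is simply the per-question fraction of correct samples, averaged over questions. A minimal sketch (the function name and data layout are our own, not from the paper):

```python
def average_pass_at_1(results):
    """results[q] is a list of booleans, one per sample drawn for
    question q. Pass@1 per question is the fraction of correct
    samples; we report the mean over all questions."""
    per_question = [sum(r) / len(r) for r in results]
    return sum(per_question) / len(per_question)
```

For example, a question correct in 5 of 10 samples and another correct in all 10 yield (0.5 + 1.0) / 2 = 0.75.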
Summary: This paper introduces the concept of critical tokens, which are tokens that significantly influence the reasoning trajectories, leading to incorrect outcomes. The authors propose to use a rollout algorithm to identify critical tokens, then study the difference between critical tokens and wrong tokens. They further propose an efficient method to detect the critical tokens. Based on these findings, they propose a cDPO method that assigns more weight to critical tokens, and show that it improves performance. Claims And Evidence: The claims for critical tokens are convincing: The authors demonstrate the existence of critical tokens. The criteria chosen by the authors are very stringent. Nevertheless, they successfully identify a large number of critical tokens, showing that critical tokens widely appear in LLM generations. However, I find the claims for the efficient identification method less convincing. There is no comparison between the efficient identification approach and the "gold standard", the rollout algorithm; I can only find an efficiency comparison. There should be a table showing the correct identification probability of critical tokens for the efficient algorithm. Even if the two methods do not produce consistent identifications, it will be important to understand the cause of the difference. Methods And Evaluation Criteria: There are several concerns: 1. The use of the base model instead of the instruct fine-tuned model: Can the authors comment on the choice of the model? It would make more sense to use the instruct fine-tuned model as the baseline and for further experiments. 2. There should also be some details for evaluation (COT, k-shot, pass@k, etc). 3. Since the negative model serves only as an intermediate step and is discarded afterward, the current approach introduces significant computational overhead through the negative model used for critical token identification. [1] L. Team and A. Meta, “The Llama 3 Herd of Models,” Jul. 2024.
Available: https://arxiv.org/pdf/2407.21783 Theoretical Claims: I did not check the theoretical claims as I think the paper mostly focuses on empirical applications. Experimental Designs Or Analyses: 1. Can the authors explain more about the derivation of Equation (1)? For example, why can s_t be realized in this form? How does this form fit into the weights in the proposed cDPO? 2. When is the negative model trained? Is it prior to the cDPO training? 3. There should be more details on the implementation of the rollout method. Supplementary Material: No. Relation To Broader Scientific Literature: The critical tokens fit with previous intuitions that some tokens are more important than others, both in the general setting [1] and in the mathematical reasoning setting [2]. This may also relate to COT compression problems. [1] Lin, Zhenghao, Zhibin Gou, Yeyun Gong, Xiao Liu, Ruochen Xu, Chen Lin, Yujiu Yang, Jian Jiao, Nan Duan, and Weizhu Chen. "Not all tokens are what you need for pretraining." Advances in Neural Information Processing Systems 37 (2024): 29029-29063. [2] Xia, Heming, Yongqi Li, Chak Tou Leong, Wenjie Wang, and Wenjie Li. "Tokenskip: Controllable chain-of-thought compression in llms." arXiv preprint arXiv:2502.12067 (2025). Essential References Not Discussed: None Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive review. We sincerely appreciate your recognition of our work's novelty and contributions. Your detailed comments are very helpful; we respond to each of your concerns below. **Concerns:** **C1. Comparison Between Contrastive Estimation (CE) and the Rollout Algorithm** The AUC values of CE compared to rollout sampling on LLaMA-3-8B are as follows: 1. GSM8K: 0.77 2. MATH: 0.84 We will include these results in the next revision. **C2. Use of Base Model Instead of Instruct-Tuned Model** We primarily adhere to setting protocols established by previous studies [1, 2, 3] by using base models rather than instruct-tuned models. This choice serves two main purposes: 1. To ensure controlled and consistent model evaluations. 2. To isolate the effects specific to the post-training phase, thereby clearly assessing our method’s direct contribution without interference from any prior fine-tuning effects. **C3. Detailed Information on Evaluation Setup (COT, k-shot, pass@k, etc.)** Thank you for pointing out the need for clarification. Our experimental configurations strictly follow established practices: - 8-shot prompting for GSM8K as in [4], and 4-shot prompting for MATH500 as described in [5]; - Temperature fixed at 0 for main evaluations. 
Furthermore, we perform additional analyses by sampling each question 10 times at varying temperatures and measure Pass@1 accuracy as shown below: | Temperature (T) | 0 | 0.25 | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 | |-----------------|---|------|-----|------|-----|------|-----| | LLaMA-3-8B |18.6|16.4 |15.3 |13.0 |9.5 |3.3 |1.2 | | + cDPO |19.6|20.3 |20.1 |19.6 |18.7 |19.7 |18.3 | | DeepSeek-math-7B|34.0|31.8 |30.5 |26.2 |21.1 |11.3 |3.0 | | + cDPO |35.2|34.5 |34.5 |34.5 |33.9 |32.9 |32.8 | | Qwen-2.5-7B |49.2|46.9 |45.1 |41.4 |34.0 |20.1 |2.8 | | + cDPO |54.0|54.2 |53.6 |52.8 |52.9 |53.4 |51.9 | Key observations: - cDPO consistently outperforms baseline models across all temperatures. - cDPO demonstrates stable and robust performance under varying sampling conditions. **C4. Computational Overhead of Training the Negative Model** We appreciate your insightful observation. Identifying critical tokens via rollout sampling incurs significant computational overhead. To address this limitation, we introduce contrastive estimation (CE), a computationally efficient alternative utilizing trained positive and negative models to estimate critical tokens. To validate this approach further, we incorporate CE-based scoring into our cDPO strategy and demonstrate its effectiveness experimentally. Looking ahead, if sufficiently extensive datasets annotated with critical tokens become available, training a dedicated token-level reward predictor model can deliver an even more scalable and lightweight alternative solution. --- **Questions:** **Q1. Derivation of Equation (1)** - *Clarification of Derivation*: As thoroughly discussed in the Appendix "Distribution Analysis of Contrastive Estimation," the term s_t presented in Equation (1) is derived directly as a combination of the probability distributions P^p (positive model) and P^n (negative model). This formulation results in an improved distribution, efficiently suppressing incorrect token likelihoods while promoting correct ones. 
- *Use of Logits as Weights in cDPO*: Further, as detailed in Section 3.2 "Formulation", cDPO refines the negative portion of the DPO loss into a token-level loss, using $s_t$ as weighting factors. This mechanism explicitly guides training towards avoiding the generation of critical tokens.

**Q2. Timing for Training the Negative Model**

As depicted in Figure 3, the negative model is trained prior to initiating cDPO training. Specifically, our full experimental pipeline consists of the following two distinct phases:
1. Critical token estimation using CE, involving prior training of both positive and negative models.
2. Integration of CE-derived scores into cDPO training.

**Q3. Implementation Details of the Rollout Method**

We perform rollout sampling exactly as described in Lines 194–202:
- Given an incorrect response $T = \{t_1, t_2, ..., t_n\}$, we traverse each token $t_i$.
- At each position $t_i$, we generate $k=64$ sampled continuations from the prefix $T_{\leq i}$, employing the identical sampling configuration used for the original generation.
- The accuracy averaged over these $k$ continuations forms a score for token $t_i$.

---

**References:**

[1] ARGS: Alignment as Reward-Guided Search. ICLR 2024.
[2] Token-level Direct Preference Optimization. ICML 2024.
[3] Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision. arXiv 2024.
[4] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS 2022.
[5] Solving Quantitative Reasoning Problems with Language Models. NeurIPS 2022.
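The per-token rollout scoring described in Q3 can be sketched in a few lines. This is an illustrative sketch rather than the authors' code: `sample_continuations` and `is_correct` are hypothetical stubs standing in for model decoding and answer grading, and the toy inputs are made up.

```python
def rollout_scores(tokens, k, sample_continuations, is_correct):
    """Score each token position by the mean accuracy of k rollouts
    continued from the prefix ending at that position (Q3's procedure)."""
    scores = []
    for i in range(1, len(tokens) + 1):
        prefix = "".join(tokens[:i])
        continuations = sample_continuations(prefix, k)
        scores.append(sum(map(is_correct, continuations)) / k)
    return scores

# Hypothetical stubs: a real system would decode k continuations from the
# model and grade the final answer of each one.
def sample_continuations(prefix, k):
    return [f"{prefix} ... answer {j}" for j in range(k)]

def is_correct(continuation):
    return continuation.endswith("answer 0")

scores = rollout_scores(["2", "+", "2", "=", "5"], 8,
                        sample_continuations, is_correct)
print(scores)  # one score per token position, each in [0, 1]
```

In the actual setup described in the rebuttal, $k=64$ and the sampling configuration matches the one used for the original generation.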
BSemiFL: Semi-supervised Federated Learning via a Bayesian Approach
Accept (poster)
Summary:
1. This paper proposes BSemiFL, a federated semi-supervised learning framework. BSemiFL theoretically demonstrates why solely relying on either the global model or the local model for labeling local data is suboptimal.
2. The authors employ a Bayesian approach to evaluate the proximity of local and global models to the samples, and then dynamically weight their contributions to pseudo-label prediction based on their inferred relevance.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes, it is practically meaningful. BSemiFL enables the dynamic adjustment of the contributions from local and global models in pseudo-label generation, which contributes to addressing data heterogeneity in federated semi-supervised learning. However, its novelty is somewhat limited.

Theoretical Claims: Starting from the empirical distributions of the global dataset and the local datasets, it is theoretically proven that the ensemble loss between the global model and local models is lower than the maximum individual loss of either the global model or any local model.

Experimental Designs Or Analyses: Yes, I have checked it. The main contribution of this paper lies in its ensemble strategy. In the experimental section, the authors compare their proposed strategy with other ensemble approaches and demonstrate its superior performance.

Supplementary Material: Yes, I have verified it. The supplementary material includes the theoretical proof as well as the pseudocode.

Relation To Broader Scientific Literature: Previous studies in the field of FSSL have also utilized both local and global models for pseudo-label generation, such as [1,2]. However, these works did not provide theoretical analysis to demonstrate the limitations of relying on a single model.

Essential References Not Discussed: FedDB [3] also employs Bayesian analysis in the federated semi-supervised learning setting.
Compared to this work: FedDB explicitly models class prior bias arising from imbalanced data, aiming to mitigate its impact throughout the training process. In contrast, BSemiFL focuses solely on weighting the predictions of global and local models, without addressing the internal class bias of the model or adjusting the prior distribution within the predictions. Moreover, the analysis in BSemiFL is limited to its aggregation strategy and does not provide a comparative evaluation against the debiasing strategy proposed in FedDB.

Other Strengths And Weaknesses:

Strengths:
1. The writing is clear and fluent.
2. One of the main contributions of this paper lies in its ensemble strategy. In the experimental section, the authors appropriately validate this contribution by comparing their method with other ensemble strategies.

Weaknesses:
1. The theoretical analysis has certain limitations. Specifically, the paper assumes that the local data distribution is known. However, in semi-supervised settings, the local distribution is typically unknown. Although pseudo-labels can assist in estimating the distribution, they are prone to errors. The impact of pseudo-label noise is not considered in the knowledge derived from the local models.
2. The novelty is somewhat limited. The limitations of local and global models under data heterogeneity have become widely recognized in federated learning. Although this work attempts to extend such insights to FSSL, the characteristics of FSSL are not sufficiently reflected, particularly due to the assumption of known local data distributions.

Other Comments Or Suggestions: This paper ignores the cumulative amplification of pseudo-label errors across multiple rounds: the theoretical analysis focuses on a single re-labeling step, overlooking the long-term accumulation and propagation of pseudo-label noise through iterative training rounds, which can degrade model performance over time.
Questions For Authors: Currently, many foundation models, such as CLIP, have been introduced into FL and FSSL, where their prior knowledge helps alleviate the problem of client data imbalance. In this context, how do you evaluate the improvements brought by the method proposed in this paper compared to foundation models in pseudo-label generation? Furthermore, do you think the theoretical analysis in this paper could be extended to incorporate considerations related to foundation models? [1] Cho, Y.J., Joshi, G. and Dimitriadis, D., 2023. Local or global: Selective knowledge assimilation for federated learning with limited labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision. [2] Liu, Y., Wu, H. and Qin, J., 2024, March. Fedcd: Federated semi-supervised learning with class awareness balance via dual teachers. In Proceedings of the AAAI Conference on Artificial Intelligence. [3] Zhu, G., Liu, X., Wu, X., Tang, S., Tang, C., Niu, J. and Su, H., 2024, January. Estimating before debiasing: A Bayesian approach to detaching prior bias in federated semi-supervised learning. In Proceedings of the International Joint Conference on Artificial Intelligence. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your review.
```
Q1. FedDB [3] also employs Bayesian analysis in FSSL. FedDB explicitly models and mitigates class prior bias from imbalanced data. BSemiFL does not address internal class bias. Additionally, BSemiFL's analysis is confined to its aggregation strategy and lacks a comparative evaluation.
```
Although [3] also employs Bayesian methods, the scenarios in which the two methods are applicable are entirely different. The scenario considered by [3] is "labels at client," which may not be suitable for our "labels at server" scenario. The core step of [3] involves using a single model to annotate local data and suppress the prediction probabilities of the majority class to correct annotation errors of that single model.
- However, this method requires calculating the prior probability of the majority class, which relies heavily on labeled data from the "labels at client" setting. When there is no labeled data locally, an initial miscalculation of the prior probability will exacerbate errors during the suppression process, and there would be no way to correct these priors based on labeled data later on. In contrast, our proposed method does not depend on local labeled datasets but instead leverages a Bayesian weighting approach between the local and global models to jointly correct prediction probabilities.
- The fundamental idea of [3] is a weighted inference method for addressing class imbalance, which has inherent limitations. For example, in a three-class classification task, assuming class C1 is the majority class and classes C2 and C3 are minority classes with equal sample sizes, the trained model might tend to classify C2 and C3 as C1. Although [3] penalizes the majority class, due to insufficient training on minority classes, it still cannot distinguish between C2 and C3, leading to misclassification. This is why widely used methods in the field of class imbalance rely on oversampling rather than weighting.
Our method adaptively assigns the labeling of the majority class to the local model and the labeling of minority classes to the global model, thereby avoiding such errors.

CIFAR10
| | dir d=0.1 | shard k=2 |
|-|-|-|
|FedDB|66.78|66.14|
|Ours|76.19|78.31|

Thus, [3] and our method apply to different scenarios. Indeed, combining the error correction of a single model from [3] with the correction of integrated annotations from two models in our method could potentially lead to further improvements and broader applicability across various scenarios.
```
Q2. The theory has limitations: the paper assumes a known local distribution. Using pseudo-labels to estimate the distribution introduces noise.
```
Although the pseudo-label-estimated distribution contains some noise, as training progresses, the model's accuracy improves, and this estimation error gradually decreases. Moreover, while the theorem has certain limitations, it still demonstrates the effectiveness of the method to a certain extent. It shows that under approximate conditions, the method outperforms using a single model or a simple average of two models.
```
Q3: Novelty is somewhat limited. The limitations of data heterogeneity have been recognized in FL. Although this work seeks to extend it to FSSL, the characteristics are not sufficiently reflected.
```
This paper investigates multiple characteristics of FSSL, including but not limited to:
- The first systematic exploration of the advantages and disadvantages of local-only and global-only labeling methods, as well as the underlying reasons, which has not been addressed in traditional FL.
- The first proposal of an adaptive weighting integration of global and local models tailored to the labeling needs in FSSL, a unique feature in FSSL.
- Our theoretical analysis specifically targets the accuracy of labeling in FSSL. Additionally, we adopt an approximation of pseudo-label distributions and do not require prior knowledge of the local data distribution.
```
Q4.
The paper ignores the cumulative amplification of pseudo-label errors across multiple rounds. The theory focuses on a single re-labeling step, overlooking the long-term accumulation of noise.
```
Our method involves labeling in conjunction with the global model, which can be refined using the global labeled dataset, thereby reducing such noise. This is also reflected in SemiFL, where using only the global model can gradually lead to better performance without amplifying cumulative errors.
```
Q5. Many foundation models have been introduced into FL. Their knowledge helps alleviate the imbalance. How to evaluate the improvements of the method compared to foundation models in pseudo-label generation? Can the theory incorporate foundation models?
```
Our method can be combined with foundation models to a certain extent. For example, we can treat the foundation model as the global model and integrate it with the local model to label local data. Considering the differences between foundation models and global models, new theoretical and experimental analyses may be required.
Summary: This paper focuses on semi-supervised scenarios in Federated Learning. It delves deeply into the performance dominance and limitations of the global and local models for re-labeling the local data, from both theoretical and empirical perspectives. The authors then propose a novel method which re-labels the local data through the collaboration between the local and global model in a Bayesian approach. Established theories demonstrate the effectiveness of their proposed method. Experimental results also show that their method greatly improves the performance.

## update after rebuttal
After the Reviewer-author discussion phase, I maintain my score and explicitly support acceptance.

Claims And Evidence: This paper presents the challenge of using a single model from both theoretical and empirical perspectives.

Methods And Evaluation Criteria: The method is well elaborated and has strong motivations. The designed evaluations are sufficient to verify that the method can solve their identified problem, meeting the criteria.

Theoretical Claims: The theories clearly illustrate the limits of using single models and guarantee the effectiveness of the proposed method. The proofs are clear and correct.

Experimental Designs Or Analyses: The experimental designs are comprehensive and reasonable to evaluate the proposed method. Specifically, the experimental datasets and tasks are widely adopted in this SSFL area, which meets the criteria. Besides, the compared baselines are sufficient to demonstrate the effectiveness of the proposed method.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: This paper focuses on the integration of semi-supervised learning and federated learning, with particular attention to the issue of inaccurate labeling caused by NonIID data. Therefore, it is related to works on semi-supervised learning in centralized scenarios [1] and efforts addressing NonIID data in federated learning [2].

[1] Z. Zhu et al.
The rich get richer: Disparate impact of semi-supervised learning. ICLR 2022.
[2] S.P. Karimireddy et al. Scaffold: Stochastic controlled averaging for federated learning. ICML 2020.

Essential References Not Discussed: The references are sufficient.

Other Strengths And Weaknesses:

Strengths:
1. The empirical and theoretical analysis of the limits and benefits of using single models is sound and clearly presents the motivations.
2. The proposed method of using a Bayesian-based ensemble of the global and local model to re-label the local data is interesting and innovative.
3. The theoretical validation of the performance is solid and guarantees the effectiveness of the proposed method.
4. The presentation is good and clear, making the paper easy to follow.
5. The experiments compare their method with many baselines over different datasets and NonIID settings, which are sufficient.

Weakness and Concerns:
1. The details of the designed ensemble strategies in Figure 5(a) are not specified. It seems that these methods are designed by this paper itself. However, the details are not clear. For example, what's the meaning of the majority voting? In my opinion, there are only two models, so the concept of a "majority" is unclear.
2. I'm curious as to why your method performs worse than Orchestra when it comes to CIFAR100 (100, 2)?

Other Comments Or Suggestions: No.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you very much for your review and valuable suggestions.
```
Q1: The details of the designed ensemble strategies in Figure 5(a) are not specified. It seems that these methods are designed by this paper itself. However, the details are not clear. For example, what's the meaning of the majority voting? In my opinion, there are only two models, so the concept of a "majority" is unclear.
```
We apologize for the lack of clarity in some of our descriptions. In Figure 5(a):
- **Simple average** refers to assigning equal weights (0.5) to the results produced by the global model and the local model, and then computing a weighted sum.
- **Random** refers to randomly assigning two weights to the results produced by the two models and then computing a weighted sum.
- **Majority Vote** means that for a given unlabeled data point, we only assign a pseudo-label if both the global model and the local model produce the same pseudo-label for that data point.
```
Q2: I'm curious as to why your method performs worse than Orchestra when it comes to CIFAR100 (100, 2)?
```
Thank you for your feedback. First, our method achieves a performance of **39.09±3.12** on CIFAR100 (100, 2), while Orchestra achieves **39.93±2.17**. These results are based on multiple experiments, so the performance of our method is not inferior to that of Orchestra. Regarding the slightly lower average value compared to Orchestra, we believe this is because Orchestra is a federated self-supervised learning method. In this specific scenario, its better performance is mainly due to the fact that the client data distribution is highly concentrated in a few categories. This local consistency makes it easier for local clustering to capture clear category patterns. In contrast, our method does not rely on clustering but instead leverages the collaboration between the global model and the local model to assign pseudo-labels.
Given the higher complexity of CIFAR100 images compared to other datasets, our method exhibits slightly higher fluctuations than Orchestra.
Summary: This paper aims to solve the problem in semi-supervised Federated Learning where local data labels are absent at clients. The authors first theoretically and empirically demonstrate the limitations and benefits of the local model and the global model for re-labeling the local data. They propose a new method which re-labels the local data through the collaboration between the local and global model in a Bayesian approach. Finally, they theoretically and empirically demonstrate the effectiveness of their proposed method.

Claims And Evidence: This paper claims that using a single model (e.g., a local or global model) for re-labeling the local data will lead to poor generalization or personalization. They verify this from both theoretical and empirical perspectives. Besides, they claim that a Bayesian-based ensemble can solve this problem. Their established theories demonstrate that the achieved performance outperforms a single global or local model or their simple average.

Methods And Evaluation Criteria: They propose using the weighted ensemble of the local and global model to re-label the local data, where the weights are calculated by the Bayesian approach. They evaluate the proposed method by comparing 8 baselines over both shard- and Dirichlet-based NonIID distribution settings, which are sufficient to me. Besides, the ablation study also verifies the effectiveness of the Bayesian-based ensemble.

Theoretical Claims: Their theories can be divided into two main parts. The first part includes Theorems 4.1 and 4.2, which claim that using a single model (e.g., a local or global model) either cannot achieve generalization or cannot fill the distribution gap. The second part includes Theorems 6.1 and 6.2, which claim that their proposed method outperforms using a single model or the simple average of two models. The claims are consistent with the empirical observations and the proofs are correct.
Experimental Designs Or Analyses: They use SVHN, CIFAR10, and CIFAR100 as the evaluation datasets and ResNets as models, which is a generally adopted benchmark. They compare their proposed method with 8 baselines including both SSFL and unsupervised FL methods, which are sufficient to verify the effectiveness. The analysis also explains the reasons for the observed phenomena.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: The field this paper focuses on is de facto a combination of semi-supervised learning and federated learning. Although the proposed method achieves great effectiveness for SSFL, they only adopt a basic semi-supervised learning method in FL. In fact, more recent advanced SSL methods [1] can be considered.

[1] BEM: Balanced and Entropy-Based Mix for Long-Tailed Semi-Supervised Learning. CVPR 2024.

Essential References Not Discussed: The references are sufficient.

Other Strengths And Weaknesses:

Strengths:
1. The idea of using a Bayesian-based method is interesting and novel. The design of a weighted ensemble to leverage the benefits of both models makes sense.
2. The motivation is clear. The analysis reveals the underlying principles behind the problem of using a single global or local model.
3. The techniques are sound and the established theories are solid, which correspond to the empirical observations.
4. The paper is well organized and the writing is good.
5. The evaluations are sufficient, with many recent baselines and various settings.

Weaknesses:
1. The proposed method requires additional interaction items beyond the models exchanged between the clients and the server. The risk of privacy leakage is not discussed.
2. There exists an SSFL method using a Bayesian approach [1]. What's the difference between your proposed method and their method?

[1] Estimating before Debiasing: A Bayesian Approach to Detaching Prior Bias in Federated Semi-Supervised Learning. IJCAI 2024.

Other Comments Or Suggestions: No

Questions For Authors: See concerns.
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your review and valuable suggestions.
```
Q1: More recent advanced SSL methods [1] can be considered. [1] BEM: Balanced and Entropy-Based Mix for Long-Tailed Semi-Supervised Learning. CVPR 2024.
```
BEM [1] mainly proposes a novel hybrid method for rebalancing the class distribution in terms of data quantity and uncertainty. This method can be integrated into our proposed approach.
```
Q2: The proposed method requires additional interaction items beyond the models exchanged between the clients and the server. The risk of privacy leakage is not discussed.
```
Although our method requires more interaction terms, it does not actually increase the risk of privacy leakage. Compared to traditional FL methods, as shown in line 681, our method only needs to additionally transmit the K-dimensional probability distribution vector $Q_s=[Q_s^1,...,Q_s^K]$ of the server-side public dataset categories on the global model. In the SSFL scenario, the data on the server side is generally non-private. Therefore, in practice, our method does not result in additional client privacy leakage compared to traditional FL methods.
```
Q3: There exists an SSFL method using a Bayesian approach [1]. What's the difference between your proposed method and their method? [1] Estimating before Debiasing: A Bayesian Approach to Detaching Prior Bias in Federated Semi-Supervised Learning. IJCAI 2024.
```
First, the scenario of our method differs from that of the referenced article: in our scenario, we assume that clients only have unlabeled data, while the server has a small amount of labeled data. In contrast, [1] assumes that clients possess both supervised and unsupervised data, while the server does not have any data. Second, regarding the application of Bayes' theorem:
1. Our method leverages Bayes' theorem to assign pseudo-labels to local data based on the knowledge of both the local model and the global model.
Specifically: we first fine-tune the global model on the server using the labeled data available there, and then compute the probability distribution of the server-side data on the global model:
$$
Q_s^k = \sum_{i=1}^{N_s} f_{\hat{p}}(k|x_i, w_t).
$$
This parameter, along with the global model, is distributed to the clients. Subsequently, we calculate:
$$
\hat{p}(x_i^m|k) = \frac{f_{\hat{p}}(k|x_i^m, w_t)}{f_{\hat{p}}(k|x_i^m,w_t) + Q_s^k}.
$$
For each data point in the local client, we compute:
$$
\hat{p}(x_i^m) = \sum_{k=1}^K \hat{p}(x_i^m|k)\hat{p}_s(k).
$$
Similarly, for the local model, we compute the empirical distribution (using pseudo-labels from the previous round) and the probability distribution of the local data: $Q_m^k = \sum_{i=1}^{S_m} f_{\hat{p}_m}(k|x_i,w_t^m)$. Then, analogously to the global case, we obtain:
$$
\hat{p}_m(x_i^m) = \sum_{k=1}^K \frac{f_{\hat{p}_m}(k|x_i^m, w_t^m)}{f_{\hat{p}_m}(k|x_i^m, w_t^m)+Q_m^k}\,\hat{p}_m(k).
$$
Next, for each local data point, we obtain the confidence levels of the global model and the local model as:
$$
\alpha_i^{m} = \frac{\hat{p}(x_i^m)}{\hat{p}(x_i^m)+\hat{p}_m(x_i^m)}, \quad 1-\alpha_i^{m} = \frac{\hat{p}_m(x_i^m)}{\hat{p}(x_i^m)+\hat{p}_m(x_i^m)}.
$$
Finally, denoting $C=f_{\hat{p}}(\cdot|x_i^m, w_t)$ and $D=f_{\hat{p}_m}(\cdot|x_i^m, w_t^m)$, we derive the final probability distribution:
$$
\hat{y}_i^m = \alpha_i^m C + (1-\alpha_i^m) D.
$$
2. In contrast, in FedDB, the use of Bayes' theorem occurs during local training, where the knowledge of the local model is utilized to optimize pseudo-labeling via APP-U (the Average Prediction Probability of Unlabeled Data), thereby reducing label prior bias. Specifically, it is argued that the imbalance in the local dataset introduces bias when the local model assigns pseudo-labels to unlabeled data, favoring classes with more data.
To address this issue, the authors propose applying Bayes' theorem to rewrite: $$ p_s(y|x) = \frac{e^{z(x)[y]}}{\sum_{k=1}^{K} e^{z(x)[k]}} $$ as: $$ p_s(y|x) = \frac{p_s(y) p_s(x|y)}{\sum_{k=1}^{K} p_s(k) p_s(x|k)}. $$ To mitigate the imbalance issue, a regularization-like term is introduced, yielding: $$ \hat{p} = \frac{p(y|x)/\overline{p}}{\sum_{k=1}^{K} {p(k|x)}/\overline{p}_k}, $$ Thus, our method primarily uses Bayes' formula to combine the knowledge of the global model and the local model through pseudo-labeling, whereas the work in [1] focuses on addressing data imbalance issues. --- Rebuttal Comment 1.1: Comment: Thank you for the author's response. After reading the author's rebuttal, the main concerns I had have been addressed, and I will maintain my score. --- Reply to Comment 1.1.1: Comment: Dear reviewer kFQg, We appreciate it a lot that the valuable suggestions and comments you provided. We will incorporate these revisions into our paper. Best wishes, Authors
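As an illustration of the weighting construction described in the rebuttal above, here is a small numeric sketch. It is not the authors' implementation: the class priors $\hat{p}_s(k)$ and $\hat{p}_m(k)$ are assumed uniform for simplicity, and the toy probability vectors and $Q$ values are made up.

```python
def confidence(f_x, Q):
    """Confidence of a model on sample x: sum_k [f(k|x) / (f(k|x) + Q_k)] * prior_k,
    with an assumed uniform prior over the K classes."""
    K = len(f_x)
    return sum((f_x[k] / (f_x[k] + Q[k])) / K for k in range(K))

def bayesian_pseudo_label(f_global, Q_global, f_local, Q_local):
    """Mix the two softmax outputs with the Bayesian weight alpha."""
    p_g = confidence(f_global, Q_global)
    p_l = confidence(f_local, Q_local)
    alpha = p_g / (p_g + p_l)  # weight assigned to the global model
    mixed = [alpha * g + (1 - alpha) * l for g, l in zip(f_global, f_local)]
    return mixed, alpha

# Toy 3-class example: the local model is confident on its majority class.
f_g, Q_g = [0.2, 0.5, 0.3], [0.3, 0.4, 0.3]
f_l, Q_l = [0.7, 0.2, 0.1], [0.8, 0.1, 0.1]
mixed, alpha = bayesian_pseudo_label(f_g, Q_g, f_l, Q_l)
print(alpha, mixed)  # alpha lies in (0, 1); mixed still sums to 1
```

The mixed distribution stays a valid probability vector because it is a convex combination of the two model outputs.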
Summary: This study focuses on the semi-supervised learning paradigm within Federated Learning (FL), with an emphasis on re-labeling techniques. Theoretical and empirical results demonstrate that the local model has higher re-labeling accuracy on local data. Furthermore, this paper proposes a Bayesian approach to re-label the local data using both the local and global models. Specifically, Bayesian inference is used to weight the two models and obtain a weighted combination of pseudo-labels. Theoretical analysis shows a lower labeling error, and experiments also give SOTA results.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims:
- It is unclear how we can get $\hat{p}(x_i,k)$ in the Section 5.1 part; the notation is confusing. It would be better to use consistent notation to represent this Bayesian inference process, because the process is the same for the global model and the local model.
- In equation (20), what is the meaning of $p$ and $\hat{p}$? It is also unclear how to derive equation (20) from the Hoeffding inequality.

Experimental Designs Or Analyses:
- The experiments on the "Impact of Labeled Dataset Size" lack the specific heterogeneity of the experiment, the number of clients, and other specific settings.
- The paper lacks $\alpha$ visualization experiments demonstrating whether labeling depends more on the global model as the global model improves in performance, so it is recommended to add these experiments.

Supplementary Material: Yes, I reviewed all of the supplementary material

Relation To Broader Scientific Literature: New insights about the relationship between the global and local models for re-labeling data in SSFL.

Essential References Not Discussed: The related work in the "labels at client" part lacks some recent citations, e.g., [R1, R2].

References:
[R1] Zhang, Yonggang, et al. "Robust Training of Federated Models with Extremely Label Deficiency." *The Twelfth International Conference on Learning Representations*.
[R2] Bai, Sikai, et al.
"Combating data imbalances in federated semi-supervised learning with dual regulators." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 38. No. 10. 2024.

Other Strengths And Weaknesses:

Weaknesses:
- What is the meaning of "the global model can progressively improve the re-labeling performance by introducing the extra data knowledge of other clients" in the abstract? What is the difference between a local model with a high re-labeling capability and a global model with a progressively higher re-labeling capability?
- In the motivation part, the experiments do not clearly explain which model is trained and what training and test data are used, so it is a little difficult to follow. For example, how are the global model and the local model obtained in Figure 2a, and what are their respective test data?

Other Comments Or Suggestions: See weaknesses

Questions For Authors: No

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your review and valuable suggestions.
```
Q1: How to get $\hat{p}(x_i,k)$ in 5.1? It is better to use consistent notation.
```
$\hat{p}(x_i,k)=\hat{p}(x_i,y_i=k)$ represents the joint probability distribution of the sample $x_i$ and the label $y_i$, i.e., the probability that the sample takes the value $x_i$ while the label simultaneously takes the value $k$. In fact, our notation is consistent throughout the paper. However, due to the relatively large number of variable types involved, it may give an impression of complexity. We will further optimize the explanation and representation of the variable notations moving forward.
```
Q2: What are $p$ and $\hat{p}$ in Eq.(20)? It is unclear how to get (20) from Hoeffding.
```
Eq.(20) involves derivation steps that are commonly used in empirical risk analysis, so we omitted them. Since $L_p(f)$ is the expectation of $L_{\hat{p}}(f)$, by applying Hoeffding's inequality, we can derive:
$$
\mathbb{P}\left( |L_{\hat{p}}(f) - L_p(f)| \geq \epsilon \right) \leq 2 \exp(-2S\epsilon^2)
$$
Here, $\epsilon > 0$ is an arbitrary positive number. Let:
$$
\delta = 2 \exp(-2S\epsilon^2)
$$
Solving for $\epsilon$:
$$
\epsilon = \sqrt{\frac{\log\frac{2}{\delta}}{2S}}
$$
This implies that with probability at least $1 - \delta$, the following inequality holds:
$$
|L_{\hat{p}}(f) - L_p(f)| \leq \sqrt{\frac{\log\frac{2}{\delta}}{2S}}.
$$
Decomposing the above absolute value inequality yields formula (20). We will include these steps in the revised version.
```
Q3: The experiments on "Impact of Labeled Dataset Size" lack the specific heterogeneity, the number of clients, and other settings.
```
The total number of clients is 100, with 10% activated in each round. The heterogeneity among clients is set to a shard distribution with $k=2$. The threshold is 0.7.
```
Q4: $\alpha$ visualization.
```
We add the following experiment.
When each client is selected for training, we record the average value of α for all data points in its local dataset. We here present the corresponding values for five randomly selected clients in the table below. As can be seen, in the initial rounds, the local model achieves higher accuracy on the local data compared to the global model, resulting in a relatively smaller weight α for the global model. In later rounds, the values of α gradually stabilize around 0.5, which aligns with the gradual improvement in the accuracy of the global model. |Client ID&Round|10|20|40|60|Last| |-|-|-|-|-|-| |18|0.4531|0.4880|0.5060|0.5146|0.5287| |26|0.4643|0.4857|0.4951|0.5101|0.5061| |46|0.4477|0.4781|0.4827|0.5045|0.5069| |72|0.4585|0.4756|0.4926|0.5088|0.5093| |94|0.4602|0.4829|0.4943|0.5023|0.5303| ``` Q5:Meaning of “global model can progressively...”. The difference between a local model with a high capability and a global model with a progressively higher capability. ``` - This statement is relative to the local model. In Fig.2b, we observed that when labeling data using only the local model, some labeling errors occur. When these labeled data are used to train the local model, the errors are carried forward, and during the next round of labeling, the same errors persist, making it impossible for the model to self-correct. On the other hand, the global model, which incorporates knowledge from other clients, can correct these errors. - Since the local model converges quickly while the global model converges more slowly, there is a significant difference in accuracy between the two in the early stages of training, but they gradually converge later on. However, due to the impact of heterogeneity, the error of the global model on the local dataset remains consistently higher than that of the local model. ``` Q6:Motivation does not clearly explain which model to train and what data are used. How are the global and local model obtained in Fig.2a? 
```
All motivation experiments used the WideResNet-28-2 model. As indicated in lines 151 and 208, we used CIFAR10. All experiments were conducted to train the global model.
- In all experiments, the global model is obtained on the server using FedAvg, while the local model is obtained after further local training of the global model.
- In all figures, the global model is tested using the combined local data from all clients (i.e., the global data), whereas the local model is tested using the local data corresponding to its respective client. This is explained in the captions.
Thus:
- In Fig.2a, as in lines 150–154, the test data for the global model is the global data, while the test data for the local model is the local data corresponding to its respective client.
- In Fig.2b, the global model is tested using two different annotation methods, and the test data used is the global data.
- In Fig.2c, the local model is tested using its corresponding local data.
``` Q7: Lacks in some recent citations in “labels at client” [1,2]. ```
Thank you for the reminder. We will include them.
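The ε–δ conversion used in the reply to Q2 above can be sanity-checked numerically; a minimal sketch (the sample size $S$ and confidence level $\delta$ below are illustrative, not values from the paper):

```python
import math

def hoeffding_eps(S: int, delta: float) -> float:
    """Deviation eps such that |L_phat(f) - L_p(f)| <= eps holds with
    probability at least 1 - delta, for a bounded loss and S samples."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * S))

# Setting delta = 2 * exp(-2 * S * eps^2) and solving for eps recovers
# the same expression, so the two forms are consistent.
eps = hoeffding_eps(S=2000, delta=0.1)
assert abs(2.0 * math.exp(-2.0 * 2000 * eps ** 2) - 0.1) < 1e-12
# Quadrupling the labeled set halves the deviation bound.
assert abs(hoeffding_eps(8000, 0.1) - eps / 2.0) < 1e-12
```

The monotone dependence on $S$ is what drives the "Impact of Labeled Dataset Size" discussion: more labeled samples tighten the gap between empirical and expected risk.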
Quadratic Upper Bound for Boosting Robustness
Accept (poster)
Summary: The paper presents a new adversarial training scheme based on a simple quadratic upper bound (QUB) to the standard adversarial loss, aimed at improving robustness in the context of Fast (single-step) Adversarial Training (FAT). The authors demonstrate that, when applied to various adversarial training schemes from previous work, QUB can increase the smoothness of the loss landscape at a relatively small runtime cost and, in some cases, enhances the adversarial robustness of the network. ### Update after rebuttal I thank the authors for their reply, for the clarifications, and for acknowledging the limitations of QUB. I still think this is very much a borderline paper, but I am increasing my score to weak accept. I would nevertheless encourage the authors to prominently feature the large-epsilon experiments and the inability of QUB to prevent catastrophic overfitting in the next version of the paper. I would be eager to also see the outcome of the ELLE + QUB results. Claims And Evidence: The main claim made in the paper is that QUB *can* improve the robustness of FAT methods. Indeed, judging from Table 1 and the experiments in the appendix, this appears to be the case when applied to a series of FAT methods. However, robustness is instead decreased for weaker FAT methods (FGSM-RS), and remains unvaried when used on the strongest considered FAT algorithms (FGSM-PGI-MEP and FGSM-UAP). Furthermore, as acknowledged by the authors in Section 2.3, one of the main points of FAT methods is to prevent catastrophic overfitting. It doesn't seem to me that this is explored in the paper. Indeed, the negative results when used on FGSM-RS may suggest otherwise. In addition, I would suggest that the authors scale down the claims made concerning the derivation of the proposed scheme (the intro states "we prove the convexity of cross-entropy"; as later acknowledged in Section 3, this is well known).
Methods And Evaluation Criteria: While different datasets and architectures are used, I am concerned by the fact that the authors chose to focus on perturbations of size 8/255, for which previous work shows a very small gap between FAT methods and multi-step baselines: see (de Jorge Aranda et al., 2022), for instance. An analysis for larger $\epsilon$ values is shown in the appendix, but it focuses on PGD-AT, which does not share the failure cases of FAT methods on larger perturbations. The practical utility of QUB in this context is unclear, considering that TRADES attains better robustness-accuracy trade-offs in Table 1. Theoretical Claims: QUB is derived using Taylor's theorem, stated in equation (B.1). However, it seems to me that this statement of the theorem omits the remainder term (the equality is not exact in general). Regardless, QUB can be alternatively (and, arguably, more straightforwardly) derived using the notion of $\beta$-smoothness, which is a fairly common tool in the optimisation literature [1]. I believe the authors should acknowledge this. [1] Lecture Notes: Optimization for Machine Learning. Elad Hazan, arXiv:1909.03550 Experimental Designs Or Analyses: See "Methods And Evaluation Criteria". Supplementary Material: I went over the paper appendix. Relation To Broader Scientific Literature: As discussed by the authors (section 2 is fairly comprehensive), this work fits within a long series of works aiming to improve the robustness of FAT schemes. In particular, it is very related to works integrating smoothness regularisers into the loss function (for instance, ELLE): this is particularly related to the third QUB term. QUB is only one way to provide an upper bound to the adversarial loss. For instance, upper bounds can be alternatively derived using deterministic certified training algorithms. A concurrent work explores the utility of these methods in the context of FAT [2]. This could be acknowledged for the sake of completeness. 
[2] On Using Certified Training towards Empirical Robustness, De Palma et al., arXiv:2410.01617 Essential References Not Discussed: No essential references are omitted. Other Strengths And Weaknesses: I found the idea of using the quadratic upper bound to enhance robustness to be intuitive and interesting. However, its empirical utility compared to previous work in the area remains to be fully determined (see questions). Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: - Can QUB prevent catastrophic overfitting? This can be proved by applying it on top of vanilla FGSM on CIFAR-10 for 8/255. - Could the authors analyse performance for larger perturbation values on top of other FAT schemes (the appendix only uses PGD-AT)? - Could the authors provide results for using QUB on top of ELLE (a state-of-the-art FAT regularizer) or N-FGSM (arguably the best-performing FAT method without any runtime overhead)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### 1. On the Scope and Effectiveness of QUB across FAT Methods We appreciate the reviewer’s careful analysis and valuable feedback. As the reviewer pointed out, QUB improves robustness in several FAT methods, but the improvement is limited in strong methods (e.g., FGSM-PGI) and even decreases in FGSM-RS. We acknowledge this limitation. However, our primary focus was not to universally improve all FAT methods or to fundamentally resolve catastrophic overfitting. Instead, QUB aims to provide a practical, lightweight strategy to promote output-level smoothing and improve generalization without increasing attack complexity or training cost. In particular, FGSM-RS is structurally prone to catastrophic overfitting, and we observed that QUB could not fully prevent this behavior. This does not undermine our method, which aims to provide a complementary regularization rather than a comprehensive solution. Nonetheless, we agree that QUB’s effect is method-dependent, and this observation will help us better clarify the scope and limitations of our approach. In the final version, we will elaborate this point and position QUB as a practical, scalable method that can provide improvement under certain conditions, without claiming universal effectiveness. ### 2. On the limited analysis for larger values of ε Please refer to our response #2 to Reviewer TY46’s comment ### 3. TRADES comparison We agree that TRADES achieves a better robustness-accuracy trade-off than PGD+QUB in some cases (Table 1). However, in our experiments, TRADES requires approximately 1.3 times longer training time compared to PGD+QUB, which can limit its practicality in resource-constrained environments. In contrast, QUB improves robustness with lower computational overhead by modifying only the loss function without changing the attack process. 
Our work aims to provide a simple and flexible strategy that can be easily integrated into various FAT methods without increasing attack complexity. ### 4. On QUB derivation and smoothness-based alternatives We agree with the reviewer that the same bound can be more intuitively derived based on the $\beta$-smoothness property of the cross-entropy loss, and that this approach could simplify the theoretical presentation (although in this case, we first need to calculate a bound on the $l_2$-norm of the Hessian and then invoke the lemma that if the norm of the Hessian is bounded by a constant, then the gradient is Lipschitz continuous with the same constant). In the current manuscript, we chose to derive QUB from Taylor’s theorem to explicitly connect the upper bound formulation to the local behavior of the loss landscape. Nevertheless, we acknowledge that a smoothness-based perspective is equally valid and may offer a more straightforward interpretation. We will briefly mention this alternative view in the revised appendix for completeness. ### 5. On Related Work and Clarification of Theoretical Claims We appreciate the reviewer’s careful reading and valuable suggestions, which will help us improve the clarity and completeness of the manuscript. First, we acknowledge that QUB is closely related to prior works incorporating smoothness regularizers in adversarial training, such as ELLE, and shares conceptual similarities with certified training methods that provide deterministic upper bounds. We will revise the related work to explicitly mention these connections and cite the work suggested by the reviewer. Second, we acknowledge that the convexity of cross-entropy is a well-known property, and the current phrasing in the introduction may have overstated this point. We will revise the statement to avoid overstating our contributions. As for the higher-order terms in (B.1), the current statement is correct as it is.
Notice that the Hessian is evaluated at some point $z$ between $x$ and $y$. If $z$ is replaced with $x$, then (B.1) would only give an approximation with missing high order terms, as the reviewer mentioned. ### 6. On Evaluation of QUB Combined with Other FAT Methods As the reviewer suggested, we evaluated QUB in combination with other recent FAT methods such as ELLE and N-FGSM. When applied to N-FGSM, QUB-static and QUB-decreasing improve robust accuracy by +4.85% and +2.68%, respectively with compromised standard accuracy by 1.63% and 1.25%, respectively. This further demonstrates that QUB can be effectively integrated with many FAT methods and exhibits the same trend as observed in our main experiments. Regarding ELLE, we agree that combining QUB with other smoothness-based regularizers is a meaningful direction. Although we could not finish experiments with ELLE within the rebuttal period, we will include the result in the final version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their reply, for the clarifications, and for acknowledging the limitations of QUB. I still think this is very much a borderline paper, but I am increasing my score to weak accept. I would nevertheless encourage the authors to prominently feature the large-epsilon experiments and the inability of QUB to prevent catastrophic overfitting in the next version of the paper. I would be eager to also see the outcome of the ELLE + QUB results. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for reading our response and for the thoughtful follow-up. We are especially grateful for recognizing our effort. As the reviewer pointed out, the current version has limitations, particularly in handling catastrophic overfitting and in the scope of effectiveness under large epsilon values. We fully acknowledge these aspects and will explicitly highlight them in the final version of the paper. 
To address the reviewer’s suggestions, we plan to clearly present the results with large ε and discuss the limitations of QUB in cases such as FGSM-RS, where it fails to prevent catastrophic overfitting. Moreover, we have completed experiments combining QUB with ELLE, a recent smoothness-based regularizer designed to prevent catastrophic overfitting by minimizing linear approximation error over a wide ε range. Specifically, we integrated QUB into ELLE-A, which uses FGSM-based adversarial training with two cross-entropy losses—one for adversarial examples and another for regularization. We evaluated the effect of replacing either or both losses with QUB. Our findings show that QUB can enhance robustness even when combined with ELLE. For instance, using QUB-decreasing only for adversarial loss improved RA by +2.31% (with a moderate SA drop of −1.53%), while replacing the ELLE regularization loss with QUB improved both RA (+2.00%) and SA (+0.65%). The best robust accuracy (+2.62%) was achieved when both losses were replaced with QUB, confirming that QUB complements smoothness-based regularizers effectively. These results demonstrate that QUB not only scales to recent FAT methods like ELLE-A, but also adapts well in regularizer-based training pipelines, supporting its versatility and practical value across a wide range of adversarial training frameworks. We would like to thank the reviewer again for constructive feedback, and assert again that we will improve the clarity and completeness in the final submission.
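On the derivation points discussed in this thread (Taylor with exact remainder vs. $\beta$-smoothness): the two routes meet at the same inequality. A sketch, writing $f(x)$ for the logits and $z$ for a point between the clean and perturbed logits (the $1/2$ Hessian bound is the standard one for softmax cross-entropy):

```latex
% Taylor's theorem with exact (Lagrange) remainder, Hessian H evaluated
% at an intermediate point z between f(x) and f(x+\delta):
\mathcal{L}(f(x+\delta)) = \mathcal{L}(f(x))
  + \big(f(x+\delta)-f(x)\big)^{\top}\nabla_{f}\mathcal{L}(f(x))
  + \tfrac{1}{2}\big(f(x+\delta)-f(x)\big)^{\top} H(z)\,\big(f(x+\delta)-f(x)\big)
% For softmax cross-entropy, H = \mathrm{diag}(p) - pp^{\top} satisfies
% \|H\|_2 \le 1/2 (equivalently, the loss is 1/2-smooth in the logits), so
\mathcal{L}(f(x+\delta)) \le \mathcal{L}(f(x))
  + \big(f(x+\delta)-f(x)\big)^{\top}\nabla_{f}\mathcal{L}(f(x))
  + \tfrac{1}{4}\big\|f(x+\delta)-f(x)\big\|_2^2
```

The last line is the QUB objective; bounding the quadratic form by $\tfrac12\|H\|_2\|\cdot\|_2^2$ is exactly where the $1/4$ coefficient comes from.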
Summary: This work demonstrates the convexity of the cross entropy loss and derives its upper bound. Additionally, it applies this to fast adversarial robust training to enhance its adversarial robustness across multiple baselines. Claims And Evidence: The author presents adequate theoretical evidence to support the method employed. However, I hold certain reservations regarding the author's motivation. The author brings up the issue of current FAT in lines 21-22 of the Introduction: FAT frequently encounters catastrophic overfitting. (The model becomes overly robust.) Nevertheless, the QUB designed by the author is a stronger attack method. (This phenomenon is also mentioned by the author in Section 3.3 and the analysis of Table 1.) Thus, does the author's solution clash with the current problem of FAT as stated in the introduction? Methods And Evaluation Criteria: The dataset and evaluation method used by the author are common in the field of robustness and are reasonable. This method presents a training technique that involves using QUB initially and then reverting to normal AT (QUB-decreasing). Can this be directly equivalent to normal AT with a gradually weakening attack strength (for instance, setting the number of attacks or the strength to gradually decrease during the training process)? Theoretical Claims: The author's justification for his theory is reasonable and correct. Experimental Designs Or Analyses: 1. The QUB loss employed by the author appears to be a method that has a relatively lower computational cost compared to the general AT. However, why is it that the time of QUB-static is typically higher than the baseline as shown in Table 1? 2. In the supplementary materials, the experiments conducted on other datasets lack time information. It is recommended to add this time information. 3. It appears that the author's performance improvement on Tiny ImageNet is comparatively more effective than on CIFAR-10 and CIFAR-100.
Do the authors possess any analysis regarding this phenomenon? Is it correlated with the input size of the image? Supplementary Material: I reviewed the proof of the convexity of the loss and the derivation of the upper bound in the supplementary material, as well as the experimental performance on other different datasets. Relation To Broader Scientific Literature: Improving the effectiveness of FAT may help with the robust training of current large vision language models. Essential References Not Discussed: Sufficient references have been cited Other Strengths And Weaknesses: The author's presentation of the method is clear. However, the introduction to related work is somewhat excessive. It is advisable to allocate more space to offer a clearer introduction to the insight or motivation. Other Comments Or Suggestions: No other comments Questions For Authors: 1. Demonstrating the necessity and suitability of QUB loss more clearly. 2. Demonstrate the advantage of training time on each dataset and clarify the magnitude of the difference that QUB makes in comparison to directly increasing the attack strength at the beginning of training. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### 1. Clarifying Motivation We appreciate the reviewer’s comment regarding potential conflict between the stated motivation and the proposed method. We acknowledge that the terms “overly robust” and “excessively robust,” used in different parts of the paper, were not clearly differentiated. In the introduction, “overly robust” refers to catastrophic overfitting in FAT, where the model performs extremely well on the specific single-step attack used during training, but fails to generalize to unseen perturbations. In contrast, in Section 3.3, “excessively robust” refers to the behavior of QUB in later training phases. Because QUB is an upper bound, it may overestimate the actual adversarial loss, seeking to further minimize the loss even when the AT loss is already near zero. This can lead to undesired degradation of standard accuracy with marginal improvement of robustness. Ultimately, both terms reflect our motivation to alleviate the overfitting tendency of FAT, and QUB was introduced as a practical strategy toward this goal. ### 2. Distinction Between QUB-Decreasing and Gradual Attack Weakening We appreciate the reviewer’s insightful question. Both QUB-decreasing and attack strength scheduling in AT aim to balance stability during training. Attack strength scheduling in AT achieves this by progressively reducing the perturbation magnitude or the number of attack steps, thereby directly controlling the strength of the perturbation $x'$. This approach focuses on reducing the loss at specific attack points. In contrast, QUB-decreasing adjusts the loss function itself rather than the attack inputs. It begins with the QUB-loss to encourage a flatter loss landscape via its upper bound, promoting generalization to unseen attacks. In later stages, it transitions to the standard AT loss to avoid over-regularization and preserve standard accuracy. 
Therefore, unlike attack scheduling, QUB-decreasing operates at the gradient and loss landscape level, encouraging broader generalization beyond specific attack points. This is what differentiates QUB-decreasing from attack scheduling, and a comprehensive comparison of the two strategies could be an interesting direction for future work. ### 3. Computational Overhead of QUB and Reporting Training Time As the reviewer pointed out, QUB-static shows longer training time than single-step baselines in Table 1. This is because QUB has two terms beyond the standard cross-entropy loss: an $L_2$ distance between logits and a gradient-alignment term. Nonetheless, QUB remains computationally lighter than full loss landscape regularization, such as the smoothing term (Eq. (8) in Section 3.2), as it approximates the smoothing effect without explicit gradient backpropagation. Although QUB-static increases training time compared to single-step baselines, it achieves smoothing effects more efficiently than conventional loss landscape regularization. We will also include the missing training time information in the final version. ### 4. Interpretation of Stronger Improvement on Tiny ImageNet We carefully examined the reviewer’s observation that the performance improvement on Tiny ImageNet appears larger than on other datasets. The reported results were based on the average of two predefined random seeds. During the experiments, we observed an unusually large improvement in the FGSM-PGI setting, possibly due to seed-induced fluctuations. To verify the consistency of the observed improvement, we conducted additional experiments with more random seeds. We found that the improvement from QUB varied significantly across seeds, likely due to favorable initialization rather than a dataset-specific effect. We will clarify this in the revision and avoid overinterpreting the result. ### 5.
Clarifying the Necessity and Suitability of QUB The main goal of this work is to enhance Fast Adversarial Training (FAT) with minimal additional cost. While FAT is efficient, it often overfits to a single adversarial pattern and fails to generalize. QUB addresses this limitation by minimizing an upper bound on the AT loss, encouraging loss smoothing and broader robustness. The QUB-decreasing strategy further improves the trade-off between robustness and standard accuracy by transitioning to AT loss in the later training phase. Empirical results demonstrate that QUB performs consistently better under AutoAttack, especially against unseen perturbations. We will clarify this positioning more explicitly in the revised manuscript to better reflect QUB’s role as a practical and lightweight enhancement to FAT. ### 6. Clarifying Related Work vs. Motivation We agree that the related work is overly detailed and may better be reduced to better highlight motivation. We will streamline the related work section and improve the motivation in the final version.
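The QUB-decreasing transition described in point 2 above can be sketched as a weighted combination of the two objectives. The exact $\lambda_t$ schedule of the paper's Algorithm 2 is not reproduced here; a linear decay is assumed purely for illustration:

```python
def qub_weight(t: int, T: int) -> float:
    """Weight on the QUB loss at epoch t of T; decays linearly to 0,
    so training moves from the smoother QUB objective to plain AT loss."""
    return max(0.0, 1.0 - t / T)

def qub_decreasing_loss(qub: float, at: float, t: int, T: int) -> float:
    lam = qub_weight(t, T)
    return lam * qub + (1.0 - lam) * at

# Early epochs optimize the QUB upper bound; late epochs revert to AT loss.
assert qub_decreasing_loss(2.0, 1.0, t=0, T=100) == 2.0
assert qub_decreasing_loss(2.0, 1.0, t=100, T=100) == 1.0
```

Note the contrast with attack-strength scheduling: here the adversarial example generation is untouched and only the training objective is interpolated.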
Summary: This paper proposes a novel adversarial training method called Quadratic Upper Bound (QUB), defined as follows: $$ \mathcal{L}_{\text{QUB}} = \mathcal{L}(f(x)) + (f(x + \delta) - f(x))^T \nabla_f \mathcal{L}(f(x)) + \frac{1}{4} \| f(x + \delta) - f(x) \|_2^2. $$ By incorporating the QUB loss into existing adversarial training (AT) methods, the authors achieved notable improvements over baseline approaches. Claims And Evidence: The main claim of the paper can be summarized as follows: **(Main claim)** The proposed QUB loss can be helpful in mitigating catastrophic overfitting. While the theoretical work is valid, the empirical validation is not sufficient to conclusively prove that the proposed method effectively mitigates catastrophic overfitting. First, if QUB loss truly helps prevent catastrophic overfitting, the authors should conduct experiments with larger epsilon values and longer epochs, as demonstrated in [1]. Since the experiments only use a fixed $\epsilon=8/255$, it is difficult to claim that they provide a comprehensive evaluation. Second, I wonder whether QUB alone can achieve robustness. In Algorithm 1, QUB loss is optimized without any adversarial training (AT) loss. However, in the main table (Table 1), there is no standalone QUB training; rather, QUB is only used in combination with existing adversarial training methods. In this regard, I am also curious about the difference between static and decreasing weight. I suspect that Algorithm 1 is incorrect and actually uses a fixed weight for QUB loss. Please correct me if I am mistaken. Lastly, I recommend further analysis of the third term, $\| f(x+\delta)-f(x) \|_2^2$, which resembles logit pairing [2]. Since logit pairing can lead to gradient masking, applying it to adversarial training might be detrimental. **Suggestions:** 1) Please move all experiment tables from the Appendix to the main text, using smaller font sizes if needed.
Since they are crucial for verifying the effectiveness of QUB, it is essential to provide them in the main text. 2) Please indicate the improvement when using QUB with existing methods. For example, 47.33 (+2.42). This will make the table easier to read. 3) Please check the significant figures in the tables. In Table 1, 47.8 should be 47.80. Moreover, right-aligning the values would improve readability. - [1] Andriushchenko, Maksym, and Nicolas Flammarion. "Understanding and improving fast adversarial training." Advances in Neural Information Processing Systems 33 (2020): 16048-16059. - [2] Kannan, Harini, Alexey Kurakin, and Ian Goodfellow. "Adversarial logit pairing." arXiv preprint arXiv:1803.06373 (2018). Methods And Evaluation Criteria: Refer to Claims And Evidence. Theoretical Claims: There is no problem. Experimental Designs Or Analyses: Refer to Claims And Evidence. Supplementary Material: I've read the entire supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Refer to Claims And Evidence. Code Of Conduct: Affirmed. Overall Recommendation: 3
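Taking $f(x)$ to be the network's logits (the reading under which the $1/4$ coefficient follows from the standard Hessian bound for softmax cross-entropy), the loss stated in the summary above can be sketched in NumPy; shapes and values are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, y):
    return -np.log(softmax(z)[y])

def qub_loss(z_clean, z_adv, y):
    """Clean CE + first-order term + (1/4) * squared logit distance."""
    p = softmax(z_clean)
    grad = p.copy()
    grad[y] -= 1.0          # gradient of softmax-CE w.r.t. the logits
    d = z_adv - z_clean
    return cross_entropy(z_clean, y) + d @ grad + 0.25 * (d @ d)

rng = np.random.default_rng(0)
z, z_adv, y = rng.normal(size=5), rng.normal(size=5), 2
# With no perturbation the bound collapses to the clean loss ...
assert np.isclose(qub_loss(z, z, y), cross_entropy(z, y))
# ... and it upper-bounds the adversarial CE, since ||Hessian||_2 <= 1/2.
assert qub_loss(z, z_adv, y) >= cross_entropy(z_adv, y)
```

The gap `qub_loss(z, z_adv, y) - cross_entropy(z_adv, y)` is exactly the looseness of the bound at a given point, which gives one way to measure tightness empirically.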
Rebuttal 1: Rebuttal: ### 1. On whether QUB effectively mitigates catastrophic overfitting We appreciate the concern regarding catastrophic overfitting. As clarified in the main text, our goal is not to directly prevent catastrophic overfitting, but to propose a practical and lightweight strategy that enhances stability and generalization within the Fast Adversarial Training (FAT) framework. As observed in Table 1 (e.g., FGSM-RS results), QUB does not completely prevent catastrophic overfitting. In particular, QUB fails to improve the performance of FGSM which is known to suffer from catastrophic overfitting. We agree that the presentation may have caused some confusion. Although QUB is theoretically motivated to promote broader generalization by minimizing an upper bound on the AT loss, its benefits may be limited when the attack is not sufficiently informative, as in FGSM-RS which generates a random attack. This is indeed the limitation of addressing structural attack weakness through loss-level regularization alone. This does not contradict the theoretical validity of QUB, but rather highlights the challenge of addressing structural limitations of attack generation through loss-level regularization alone. As shown in Section 3.2, QUB helps flatten the loss landscape, contributing to generalization under perturbation constraints—even if it does not fully resolve catastrophic overfitting. ### 2. On the effect of increasing perturbation size (ε) and longer training epochs We appreciate the reviewer’s suggestion to further analyze the behavior of QUB under stronger perturbations and extend training durations. We conducted additional experiments under the same setting as Appendix E, using FGSM-RS and FGSM-PGI instead of PGD-AT. For FGSM-RS, which is prone to catastrophic overfitting, we again confirmed that QUB-decreasing does not fundamentally prevent overfitting, as discussed in our response to Comment #1. 
While QUB-decreasing improves robust accuracy at ε = 4/255 (+4.18%), it failed to mitigate overfitting at larger ε values and even showed degraded performance compared to vanilla AT. In contrast, for FGSM-PGI, which does not suffer from catastrophic overfitting by design, QUB-decreasing consistently improves robust accuracy across all perturbation sizes: +1.33% at ε = 4/255, +0.52% at ε = 8/255, +0.14% at ε = 12/255, and +0.10% at ε = 16/255. We also observed that overfitting does not occur with training epochs of up to 200. These results further support our clarification in Section 4.1 that QUB is not intended to fundamentally prevent catastrophic overfitting but serves as a practical regularizer to improve the robustness of FAT. ### 3. On the role of QUB as a standalone loss and clarification of Algorithm 1 We agree that the role of QUB and Algorithm 1 need to be further clarified. The QUB loss is not added to the traditional AT loss as a regularizer, but rather replaces it. That is, in our QUB-static method, we retain the adversarial example generation (e.g., FGSM-RS, FGSM-PGI) but train solely with the QUB loss, as stated in Algorithm 1. QUB-decreasing, by contrast, gradually transitions from QUB loss to AT loss during training, as outlined in Algorithm 2. We will revise the text to clarify this in the final version. ### 4. On the potential for gradient masking due to the third term in QUB The reviewer’s observation regarding the third term in the QUB loss, $|f(x+\delta) - f(x)|_2^2$, and its resemblance to logit pairing is insightful and appreciated. While this term is indeed structurally similar to logit pairing, our QUB formulation is carefully designed to avoid gradient masking through the joint interaction of all three terms. The first term ensures correct classification with respect to the ground-truth label, while the second term aligns output changes along the direction of the gradient. 
Together, these terms mitigate the masking effect that would arise from the third term alone. To empirically assess whether QUB leads to gradient masking, we evaluated our models using AutoAttack, which includes a gradient-free component (Square Attack). As shown in Table 1, QUB outperforms baseline AT models under this evaluation, indicating that the model remains robust to both gradient-based and non-gradient-based attacks. In summary, while the third term may resemble logit pairing, it is not used in isolation and does not dominate the behavior of the overall QUB loss. The complete formulation ensures robust learning without relying on gradient masking. ### 5. On suggestions for improving table formatting and clarity We appreciate the reviewer’s suggestions. We agree that the proposed improvements (i.e., moving auxiliary tables to the Appendix, adding improvement margins, and unifying numerical formatting) will enhance the readability of the paper. We will incorporate these revisions in the final version. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response. While I understand that the goal is not to directly prevent catastrophic overfitting, I would like to point out that the phrasing in the abstract—"mitigate the problem of degraded robustness under FAT"—can easily lead readers, especially those familiar with the field, to associate it with catastrophic overfitting. Although the issue is not fully resolved, the additional experiments and clarifications provided by the authors suggest that QUB has promising potential for future methods that aim to address this challenge. Furthermore, the explanation regarding QUB’s standalone application has been clarified. That said, I believe Table 1 requires improvement. For example, Table 1 should list FGSM-RS and FGSM-PGI as inner maximization strategies, and it should also be clearly explained in the main text or algorithm. 
A more detailed explanation—or a revision of the table—would be necessary for clarity. Thank you for the interesting work. I am updating my score to Weak Accept. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful and constructive feedback. We’re glad that our explanations were helpful, and we appreciate your understanding of our intent regarding catastrophic overfitting. As you pointed out, the phrasing in the abstract may lead to misinterpretation. We will revise it in the final version to clearly reflect the focus of our work and avoid potential confusion. Regarding Table 1, we agree that improvements are needed. We will update it to explicitly list FGSM-RS and FGSM-PGI as inner maximization strategies and clarify their roles in the main text to better illustrate the application of QUB. We truly appreciate your suggestions on how to make our work clearer and more accessible. Thank you again for your valuable time, detailed feedback, and for updating your score.
Summary: This paper provides a new theoretical upper bound of the adversarial training loss and proposes a method to improve the existing fast adversarial training. Specifically, the paper focuses on the problem of catastrophic overfitting, or the degraded robustness after fast adversarial training. To overcome this problem, the paper derives a new upper bound of adversarial training loss, called Quadratic Upper Bound (QUB), and proposes a new adversarial training that minimizes the QUB loss rather than traditional adversarial training losses. The derivation of QUB uses the convexity of the cross entropy loss function, which is commonly used in practice, and additional bounding of Hessian. The paper proposes a new loss term, i.e., QUB loss, and a new training strategy that uses QUB loss with traditional adversarial training loss (or AT loss). A set of experiments demonstrates the effectiveness of the training with QUB loss and the effect of QUB losses in flattening loss landscape and better adversarial sparsity. Claims And Evidence: Proofs support the theoretical statements, and the experiments demonstrate the effectiveness of the proposed methods. Methods And Evaluation Criteria: The models used for evaluation are all ResNet variants, and more evaluations on recent model architecture would be needed. Theoretical Claims: I briefly checked the proofs, and I don’t find any specific issue in the proof part. Experimental Designs Or Analyses: The presented experiments are well-designed, and the analyses look correct. Supplementary Material: I checked the appendices to see the proofs. Relation To Broader Scientific Literature: The paper presents interesting tricks to upper-bound the adversarial training loss. Although the tricks are limited to the context in which we use cross-entropy loss and a softmax layer, this context is extremely common in ML practice. Essential References Not Discussed: The paper cited the needed references well. 
Other Strengths And Weaknesses: ### Strengths 1. To the best of my knowledge, the paper’s findings are novel contributions. 2. The paper presents useful tricks for theoretically analyzing the adversarial training loss, e.g., a constant bound for the Hessian and a chain-rule application to simplify the terms. 3. Experiments use various settings with many baseline methods. In particular, the proposed method was tested under a powerful attack such as PGD50-10. ### Weaknesses 1. While the suggested upper bound is an impressive achievement, we need more understanding of this bound. For example, how tight is the bound in practice? If it is not tight, under what conditions does this bound overestimate the AT loss? 2. The model architectures are limited to ResNet variants, and all of them are dated. To demonstrate the practical value of the proposed method, we need a more thorough evaluation with more recent model architectures. 3. While the QUB loss improves the baseline methods, the performance of other methods seems better in many cases. Other Comments Or Suggestions: 1. Please consider adding experiments with more recent model architectures other than ResNet variants. 2. Another variant of QUB is to use some fixed ratio between the QUB and AT losses. The ratio can be 50-50, but other ratios can be tried to find the best fit. Here, QUB can be interpreted as a regularizer term that would encourage a flatter loss landscape throughout the training process. 3. Similarly, other training strategies can improve the performance, e.g., changing the $\lambda_t$ progression. 4. To show the effectiveness of QUB-static, we want to know whether QUB is tight enough that minimizing the QUB loss effectively reduces the AT loss. While this tightness would depend on different conditions, both the QUB and AT losses can be measured during training, so they can be plotted to show that the QUB loss can effectively serve as a surrogate for the AT loss. 
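Suggestions 2 and 3 above could be prototyped along these lines (a sketch with hypothetical names; `qub_loss` and `at_loss` stand in for the paper's loss values):

```python
def combined_loss(qub_loss, at_loss, lam):
    """Suggestion 2: mix the QUB and AT losses at a fixed ratio.

    lam = 0.5 gives the 50-50 split; other ratios can be swept to find
    the best fit, with QUB acting as a flatness-encouraging regularizer.
    """
    return lam * qub_loss + (1.0 - lam) * at_loss


def linear_lambda(epoch, total_epochs, start=1.0, end=0.0):
    """Suggestion 3: one possible lambda_t progression, moving the
    weight from QUB-heavy early in training to AT-heavy at the end."""
    return start + (end - start) * epoch / max(total_epochs - 1, 1)
```

In a training loop this would be used as, e.g., `combined_loss(qub, at, linear_lambda(epoch, epochs))`.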
Questions For Authors: * How tight is the proposed QUB bound? If the QUB bound can become loose, under what circumstances does this happen? Code Of Conduct: Affirmed. Overall Recommendation: 3
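For context on the tightness question, a generic second-order bound of the kind the summary describes (a convex cross-entropy loss whose Hessian is bounded by a constant) takes the form below; this is a sketch of the standard smoothness bound, not necessarily the paper's exact QUB:

```latex
% If \nabla^2_x \mathcal{L}(x) \preceq K I for all x, then for any perturbation \delta:
\mathcal{L}(x + \delta) \;\le\; \mathcal{L}(x)
  + \nabla_x \mathcal{L}(x)^{\top} \delta
  + \frac{K}{2} \, \lVert \delta \rVert_2^2 .
```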
Rebuttal 1: Rebuttal: ### 1. On the practical tightness and overestimation behavior of the QUB loss We appreciate the reviewer’s emphasis on the importance of validating QUB as a practical upper bound. To evaluate the practical tightness of the proposed QUB loss, we compared the QUB and AT losses during training. For most experiments, the QUB loss consistently decreases in parallel with the AT loss while maintaining a small and nearly constant gap. Specifically, for each image, the mean difference between the QUB loss and the AT loss during training is approximately 0.0013, calculated as the average of epoch-wise mean differences. This suggests that the upper bound is fairly tight in practice, without significant overestimation. Notably, with FGSM-PGI, the gap remains stable throughout training, confirming the reliability of QUB as a practical upper bound. However, in settings prone to overfitting, such as standard FGSM, QUB initially tracks the AT loss closely but begins to overestimate it after catastrophic overfitting occurs. In these cases, the QUB loss value becomes 2-3 times larger than the AT loss, limiting its effectiveness as an upper bound in the later stages of training. Except in such extreme cases, we found that QUB reliably serves as a practical surrogate loss that balances efficiency and robustness across various FAT settings. ### 2. On performance limitations and the potential for dynamic QUB-AT combinations We applied QUB in two forms: (1) using the QUB loss alone throughout the training process (QUB-static), and (2) gradually transitioning from the QUB loss to the AT loss during training (QUB-decreasing). We adopted such simple applications of QUB in order to demonstrate its applicability with minimal computational demand. However, we acknowledge that such static forms of the method can be suboptimal for fine-grained performance tuning. As a result, in some experiments, QUB-based models perform comparably to or even slightly worse than the baselines. 
We find the reviewer’s suggestion of dynamically combining the QUB and AT losses (e.g., via fixed or adaptive weighting schemes) to be a promising direction. In particular, tuning the balance based on attack characteristics or training stage may allow QUB to better complement different adversarial settings. Although our method may not reach SOTA-level robustness in its current form, it demonstrates the potential of QUB as a lightweight regularizer that enhances training stability and robustness when integrated into various adversarial training pipelines. ### 3. On the limitation of using only ResNet-based architectures We acknowledge that all models used in our experiments are ResNet variants. This choice was made to align with the existing literature and to ensure a consistent and fair comparison with prior FAT methods adopting ResNets. However, we agree that evaluating the generalization of QUB on recent architectures such as Vision Transformers or ConvNeXt is essential for understanding its broader applicability. We plan to conduct experiments with those architectures; however, we have not been able to obtain the results as of this rebuttal, primarily due to computational constraints. Given that QUB is a loss-level modification that does not rely on architectural assumptions, we expect it to be easily adaptable to a wide range of models, and we will include the additional results as soon as the experiments are finished.
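The tightness measurement described in point 1 (the average of epoch-wise mean differences between the QUB and AT losses) can be sketched as follows (a hypothetical helper, not the authors' actual logging code):

```python
def epochwise_mean_gap(per_epoch_qub, per_epoch_at):
    """Average of epoch-wise mean differences between QUB and AT loss.

    per_epoch_qub / per_epoch_at: one list per epoch, each holding the
    per-image loss values recorded during that epoch.
    """
    epoch_means = []
    for qub_vals, at_vals in zip(per_epoch_qub, per_epoch_at):
        diffs = [q - a for q, a in zip(qub_vals, at_vals)]
        epoch_means.append(sum(diffs) / len(diffs))
    return sum(epoch_means) / len(epoch_means)
```

A persistently growing gap (e.g., QUB reaching 2-3x the AT loss) would flag the post-catastrophic-overfitting regime described in point 1.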
Are Large Language Models Ready for Multi-Turn Tabular Data Analysis?
Accept (poster)
Summary: The authors create a synthetic conversational dataset with dialogues about data tables. The authors scrape the tables from Kaggle. To generate the conversations, the authors organize a multi-agent, multi-turn conversation where each agent is an LLM instance prompted to play a specific role. The authors then perform extensive human expert validation of the generated conversations and ground-truth formulation as MCF or code. A thorough comparison to 8 competing datasets/benchmarks is performed. Having the dataset in place, the authors propose Adaptive Conversation Reflection (ACR) - an agentic setup that learns from the conversational dataset and improves the scores over the plain LLM and CoT-LLM. Claims And Evidence: All the claims of the paper are well supported. 1. Reasoning over tabular data is an important field of study 2. The dataset creation procedure is rigorous enough. Even though the original dialogues are synthetic, they are well validated and curated by human experts. 3. Formal (objective) automated evaluation ground truth and metrics are worked out. 4. The feature coverage compared to the baseline benchmarks is rich. 5. ACR seems to improve the scores over a fairly strong baseline of LLM-CoT. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: The code and the dataset are in place and well organized. Relation To Broader Scientific Literature: - Essential References Not Discussed: I am not aware of any essential literature that was missed. Other Strengths And Weaknesses: The idea of having a “private” version of the pandas API is interesting and helps to assess how well LLMs close the open/closed-source domain gap. Other Comments Or Suggestions: The citation “Li et al., 2024a” is incorrect; it should be “Li et al., 2023”. The term “Action” is not very clear; the mentioned “Clarification” scenario would be a more suitable main name. 
Questions For Authors: Writing could be slightly improved. 1. The name Decision Company is not clear; specifically, why “decision”? 2. The consistency of the terminology is not ideal. The authors write (L094): “efficient creation of COTA”, whereas (L034): “In this paper, we introduce COnversational Tabular data Analysis (COTA)”. The derived phrase “creation of analysis” is not correct. 3. “Action mode” and “Action types” lack mathematical notation. How are they related to the notation of the Task Formulation section? 4. What is the definition of a “logic”? What are the curly brackets in the “Re-Org One-Shot Reasoning” section? (This is the most IMPORTANT question). Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Legal Compliance (e.g., GDPR, copyright, terms of use)'] Ethical Review Concerns: What are the licenses of all the tables that you harvested from Kaggle? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Additional Evidence in Anonymous Link: https://anonymous.4open.science/r/additional_materials-E646**. We will use `B.X` to index evidence in the following rebuttal: **C1: The name Decision Company is not clear, specifically, why “decision”?** **A1:** The name "DECISION COMPANY" refers to the multi-agent sandbox environment created by the authors to simulate realistic data analysis scenarios. This environment is designed to support data-driven decision-making processes. The name reflects its purpose: facilitating decisions through data analysis in a company-like setting where different agents (Administrator, Client, Data Scientist, AI Chatbot) interact to answer analysis questions for decision making. **C2: Suggested Formula of Action Types** **A2:** Thank you for the suggestion. Given that our benchmark evaluates LLMs on separate actions, we initially did not formulate each action individually. For more mathematical rigor, we can indeed extend our notation as follows: Let $s_i \in S$, where $s_i$ represents the selected action mode from an enumerated set of available actions listed in Section 2. The task formulation can then be extended to generating answers: $a_{it} = f_\theta(u_t, H, T, s_i)$ This means that, given the conversation history $H$, tabular data $T$, and the current query $u_t$, the agent's response $a_{it}$ is conditioned on the specific action mode $s_i$. **C3: What is the definition of a “logic”? What are curly brackets in “Re-Org One-Shot Reasoning” section? (This is the most IMPORTANT question).** **A3:** Thanks for this question. In Figure 11, "logic" represents the intermediate reasoning process that connects natural language queries to executable code or analytical answers. More formally, $m_{t-1}$ denotes the **inferred** reasoning pathway between a user query $u_{t-1}$ and the corresponding answer $a_{t-1}$ from the previous turn. 
This intermediate representation functions as pseudocode: a structured thought process that bridges natural language intent and formal execution. The curly brackets in the "Re-Org One-Shot Reasoning" section serve as an organizational construct. The notation $p_{t-1} = (u_{t-1}; \lbrace m_{t-1}; a_{t-1} \rbrace)$ mathematically represents our one-shot example structure, where: - $u_{t-1}$ is the previous user query - $\lbrace m_{t-1}; a_{t-1} \rbrace$ represents the pairing of inferred logic and answer. **In the prompt, this part is specifically highlighted with special symbols** to lead LLMs to follow this reasoning procedure. By structuring examples to show the reasoning process $m_{t-1}$ followed by its corresponding answer $a_{t-1}$, we enable the model to learn the pattern of first generating logical reasoning steps before producing final answers. This significantly improves the performance of LLMs on conversational data analysis tasks in a simple manner. **C4: Data Resources & Dataset License** **A4:** Thank you for this important question regarding our dataset resources and licensing. We have indeed provided comprehensive license information in our paper's Appendix B. Specifically, in Appendix B.1, we clearly state that the COTA dataset is available under the CC BY-SA 4.0 license (Creative Commons Attribution-ShareAlike 4.0 International). Regarding the source data, Appendix B.2 documents that all tabular data utilized in constructing COTA were obtained from Kaggle under either: 1) Public Domain Mark designation, or 2) CC BY (Creative Commons Attribution 4.0 International) licensing Also, we provide all detailed licenses in `B.1` in the Anonymous Link. We would appreciate it if you could go through it. Thanks.
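The one-shot structure $p_{t-1} = (u_{t-1}; \lbrace m_{t-1}; a_{t-1} \rbrace)$ described in A3 could be assembled roughly as follows (a sketch with hypothetical delimiters; the actual prompt markup in the paper may differ):

```python
def build_one_shot_prompt(prev_query, prev_logic, prev_answer, current_query):
    """Re-Org One-Shot Reasoning sketch: pair the inferred logic m with
    its answer a inside special symbols, so the model reasons first."""
    example = (
        f"User: {prev_query}\n"
        f"{{Logic: {prev_logic}; Answer: {prev_answer}}}\n"  # the {m; a} pairing
    )
    return example + f"User: {current_query}\nAssistant:"
```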
Summary: This paper introduces CoTA, a benchmark to evaluate the effectiveness of LLMs in multi-turn conversational tabular data analysis scenarios. The authors' motivation is to address the lack of realistic, quantitative evaluation datasets by creating conversational data through an innovative multi-agent sandbox environment. CoTA includes diverse conversation scenarios to rigorously assess the conversational abilities of LLMs. Additionally, the authors propose ACR, a self-generated reflection strategy to improve LLM performance, achieving notable enhancements over baseline approaches. Claims And Evidence: 1. The authors argue that the ACR method significantly improves conversational agent performance, and this is empirically evidenced. 2. The authors claim that CoTA is a scalable and realistic benchmark for conversational data analysis. However, the authors should demonstrate its scalability in terms of cost (since they use LLMs for generation). In addition, while its purpose is very similar to that of text-to-SQL tasks, CoTA lacks a comparison with them. Methods And Evaluation Criteria: 1. They use accuracy as the metric, to measure the correctness of code generation and answers. Theoretical Claims: As best as I know, there are no theoretical claims. Experimental Designs Or Analyses: The authors tested several advanced LLMs, including Mistral, Llama, Claude, and GPT families, across four conversation scenarios, i.e., Normal, Action, Private, and Private Action. They also performed extensive error analysis. Supplementary Material: I reviewed the supplementary material, e.g., examples of generated conversations and evaluation scripts. Relation To Broader Scientific Literature: The work is closely related to the literature on LLM-based code generation, such as text-to-SQL. 
Essential References Not Discussed: I think Spider 2.0 [1] should be discussed in the paper, since it is also a benchmark that aims at realistic text-to-SQL scenarios where the tabular data is really messy and the generated code (i.e., SQL) is really long. [1] Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows, ICLR 2025. Other Strengths And Weaknesses: **Strengths** 1. Comprehensive benchmark addressing realistic multi-turn conversational scenarios. 2. Effective use of a novel multi-agent sandbox for dataset creation. 3. Extensive experimental results, including multiple model comparisons and error analysis. **Weaknesses** 1. Limited explicit discussion of scalability constraints or the cost implications of human-in-the-loop annotations for widespread practical use. 2. Relies heavily on GPT-4-based agents for conversation generation, possibly limiting the diversity of the generated data. Other Comments Or Suggestions: See above weaknesses. Questions For Authors: 1. What are the main differences in benchmark characteristics compared to Spider 2.0? 2. Can you provide the cost of generating the conversations, and also the cost of the human annotators? 3. Is it possible to generate conversations using open-sourced models like Llama? 4. What can be the practical applications of such conversational tabular analysis? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Paper Reference in: https://anonymous.4open.science/r/additional_materials-E646**. **C1: Comparison with Spider 2.0** **A1:** In summary, our benchmark differs from Spider 2.0 in several important aspects: - We focus on **conversational multi-turn** interactions for Python code, while Spider 2.0 primarily evaluates **single-turn** text-to-SQL capability; - Our benchmark includes more data analysis and scientific questions, including statistical and machine learning tasks such as clustering and linear regression; - Spider 2.0 emphasizes schema understanding and retrieval since its input data is larger (> 1000 columns), whereas our work prioritizes conversational compatibility and realistic data science problems. We will add this discussion to the Related Work section and Tab. 1 if we are fortunate enough to be given one additional page in the camera-ready version. Thanks! **C2: Annotation Cost** **A2:** Thanks for asking. Similar to Spider 2.0 and BigCodeBench for complex task annotation, we recognize annotators through authorship acknowledgment rather than direct financial compensation. Fortunately, to maintain data privacy and prevent GPT API abuse, annotators accessed our controlled annotation system, which documented a total working time of 3,677 minutes and accumulated GPT usage costs of 71.39 USD. While we did not provide monetary compensation to annotators, we can estimate the equivalent human expert cost by referencing established rates from research involving real data analysis experts [13-14], which indicates an average rate of 0.37875 USD per minute. Using this conversion, total human annotation would amount to 1,392.66 USD. Therefore, the comprehensive benchmark development cost totals 1,464.03 USD. 
This represents a cost-effective and scalable approach, with each data point in COTA costing approximately **1.45 USD**, significantly more economical than comparable benchmarks such as BIRD-SQL (**6.13 USD**) and TableBench (**6 USD**), despite COTA's increased complexity and deeper required expertise. **C3: Is it possible to generate conversations using open-source models like Llama?** **A3:** We appreciate this question. Currently, we find that weaker LLMs struggle with this task. When we initially experimented with GPT-3.5-Turbo (the most popular LLM at the time), it exhibited significant hallucination issues after 2.5 conversation turns on average and failed to follow the Analysis Plan generated in Section 3.1. Correcting its output required more expert effort than having the experts write the conversations entirely themselves would have. Therefore, we utilized GPT-4, the most capable model available when we started this project, as the base model for sandbox construction. As shown in Figure 3, our sandbox can support very long conversations (14.15 turns on average), similar to realistic complex task-solving scenarios. External human evaluations in Section 5 confirm the quality of these conversations. Recently, we observed that newer models like Llama 3.3 70B demonstrate improved capabilities for high-quality data annotation and data analysis questions, as shown in Table 3. This model could potentially serve as a replacement for GPT-4 in future studies. Thanks for your suggestion. **C4: Potential Limited Diversity** **A4:** The main reason why we use GPT-4 is its broad and diverse knowledge due to intensive pre-training. In the paper, we show the scenario diversity evaluation in Tab. 2 of **Section 5** and a detailed fine-grained diversity analysis in **Appendix P.2** in terms of domain, result type, action, query, and package, which shows that our work already covers the most comprehensive set of data analysis aspects compared to related works, as far as we know. 
**C5: What are the practical applications of conversational tabular analysis?** **A5:** Tabular data analysis is ubiquitous in daily operations. Automatic tabular data analysis can help users make informed decisions through natural language interactions, without requiring specialized skills in complex tabular data understanding, coding, or even domain knowledge; this can also improve efficiency for data scientists and related users. As we stated in L 23-32, users rarely express their intentions completely in a single turn and often have follow-up questions based on previous responses or other actions (as we summarized and listed in the paper). Therefore, conversational capability is necessary for complex tasks. All leading AI assistants, like ChatGPT, Claude, and DeepSeek, support multi-turn interaction because conversation is the most natural form of human communication. Therefore, conversational tabular data analysis can accelerate and improve decision making in finance, health care, and policy making, where records and data are usually stored in more structured formats. In this case, a comprehensive, scalable (to prevent data leakage) benchmark should be proposed to help users understand the capabilities of LLMs. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal; I have raised my score to 3. I strongly recommend that the authors incorporate the discussion, especially regarding annotation costs and the use of other LLMs like Llama or GPT-4. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for reading our paper and our rebuttal. We will incorporate both the annotation cost analysis and the discussion about LLM usage into the paper. Your comments and suggestions make our work more rigorous. Thanks! Best,
Summary: # Summary This paper constructs a novel benchmark for the task of "conversational tabular data analysis." The benchmark is constructed using a complex process of interacting agents, aided with human annotation, starting from a set of 5 "data sources" and 18 "topics". This ultimately yields a benchmark of approximately 1k examples, over which the authors conduct a benchmarking study. This is a very important area of research and it requires both rigorous empirical studies, and reliable high-quality benchmarks; the current study is an attempt at both. However, I have some concerns about the current study. The dataset creation process is extremely complex. This has two consequences: (1) it is difficult for the authors to describe it with sufficient detail and clarity; (2) it makes it difficult to assess whether their design choices (of which there are many) affect the results of the benchmark. I also have some concerns about the value of the empirical insights gleaned from this study. Overall, I think that this is an important direction, but that the paper is not ready for acceptance in its current form. # Major comments * The dataset design process is almost incomprehensibly complex, and it is made even more difficult to understand by the writing in the paper. There are so many points that aren't clearly described; here is just a sample: - The authors state (L78) that "multiple choice questions" are a response type. How can this be evaluated, and how can all responses be distilled into either code generation or multiple choice types? What is an example of this kind of response? This is not clear. - The authors introduce 6 action types, saying simply that "we identify 6 common actions during conversations". How are these identified? How do we know that these represent a complete action space for conversational tabular data analysis tasks? 
These actions are also not clearly described; for example, they define the Plot_QA action by simply saying "The Plot_QA action helps users understand plot-derived insights" -- but this is not a definition. * The data used as the basis for the benchmark are not clearly described. Relegated to the Appendix, in B.3 and Figure 9(a), we see that there are "18 topics and 5 sources of COTA". This still does not answer basic questions, such as: how many tables comprise the benchmark, and exactly which ones? What is the difference between a table and a "source"? This is critical, fundamental information about the benchmark that is not clearly provided in the paper, and it makes the results nearly impossible to reliably assess. * Simple procedures, such as evaluation, are also not clearly described. For example, the paper says that "Each question is provided by an expected result type, such as dataframes, lists, or various plot types" but does not enumerate the expected result types. Similarly, the evaluation metrics are not clearly defined. For example, the definition of AccR, one of the core evaluation metrics, is simply limited to "we extend Acc to include a recall-based adjustment for instances involving private libraries" which does not give any details about the metric. * Collectively, the above issues make the empirical results nearly impossible to assess. Furthermore, the authors' evaluation of the existing empirical results is quite limited and does not seem to lead to new insights about how to improve models. # Minor comments * The paper relies a lot on annotations by the authors. For example, the authors do the human-sandbox annotation in Section 2, and the error analysis in 7. It would be more reliable (less prone to bias) to have external annotators conduct these annotation steps. * Why is only GPT-4-32k shown in Figure 7? 
It is not the best-performing model, and there isn't any other clear differentiator that makes this model particularly interesting to plot versus the others. # Typos etc. * "Conversational Tabular Data Analsis" in abstract does not need to be capitalized. * Section 1: "Among the vast types of data available, tabular data stands out as one of the most prevalent and interpretable formats organized by rows and columns" -- isn't tabular data the only format organized by rows and columns, by definition? * In several places the paper uses the word "codes" where the authors seem to mean "code" (as in, Python code). Claims And Evidence: See above. Methods And Evaluation Criteria: See above. Theoretical Claims: See above. Experimental Designs Or Analyses: See above. Supplementary Material: See above. Relation To Broader Scientific Literature: See above. Essential References Not Discussed: See above. I would suggest additional references to existing works that apply language models to tabular data tasks (of which there are several). Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Additional Evidence & Paper Reference in: https://anonymous.4open.science/r/additional_materials-E646**. We will use `B.X` to index evidence in the following rebuttal: **C1: Why is the annotation complex, and why are annotators also authors?** **A1:** Because the task that we are researching is complex. Unlike traditional NLP or mathematical tasks that rely on common knowledge or basic skills, our benchmark demands specialized expertise in tabular data analysis, statistical reasoning, coding proficiency, and domain experience. These requirements make conventional crowdsourcing approaches impractical for several reasons: - The technical depth required exceeds what typical annotation platforms can support. - The extended timeline is necessary for comprehensive task development and refinement. - The iterative workflow requires continuous expert feedback and adaptation. This is also reflected in very recent related work: BigCodeBench and Spider 2.0 (ICLR 2025), which Reviewer K6vh mentioned. These projects similarly implemented sophisticated workflows for their tasks and recognized substantial annotator contributions through authorship. Our benchmark presents greater complexity than previous efforts. Therefore, our benchmark warrants a more sophisticated construction method and authorship recognition for annotators' intellectual contributions, which is also the trend in complex-task benchmarking. **C2: Code Generation & Multiple-Choice Annotation** **A2:** In COTA, we deliberately designed two distinct answer types to enable objective and consistent evaluation: - Code Generation: This involves LLMs generating Python code to analyze tabular data. This code is evaluated through execution-based test case scripts. - Multiple-Choice Questions: After code execution, users often need to interpret results and make decisions. 
Rather than accepting free-form text analysis, which is difficult to evaluate objectively, we convert analytical questions into multiple-choice format. For example, after analyzing ATP tennis data, instead of asking "What trends do you see in player performance?" (subjective, and requiring an LLM-as-judge to evaluate), we frame it as:
```
Which surface shows no significant performance trend over time?
A. Hard
B. Grass
C. Clay
D. Carpet
E. None of above
```
The evaluation process for multiple-choice questions is straightforward - we compare the model's selected option against the ground truth. Each multiple-choice question has a single correct answer determined during dataset construction and verified by our expert annotators. This eliminates the ambiguity of subjective assessment, which could otherwise introduce evaluation bias. **C3: 6 action annotation and Plot_QA** **A3:** Our identification of these 6 common actions stems from multiple rigorous sources of evidence rather than mere assertion. As we discussed in L187-188, annotators summarized actions and injected them into conversations based on two sources: - The annotators have >10 years of data analysis experience (L135), ensuring strong expertise. This expertise is further validated by our high initial inter-annotator agreement rate (detailed in line 217). Also, human expert (**outside annotator**) evaluation in Section 5 (**Action-wise Metrics**) shows high Action Commonness; the detailed evaluation script is in Appendix P. - The action types were also derived through systematic analysis of prior literature and empirical observation. Each action is grounded in, but not limited to, established research: **Update Code (for debugging)**: given that code generation is critical in data analysis [1, 2, 3]. **Fast Fail**: see Appendix J, [4, 8] **Clarification:** [6-9] **Best Guess:** [4, 5] **PlotQA:** charts/plots are among the most common data formats in data analysis [1, 9]. **C4: Data Source Description** **A4:** Please see Tab. 
1-5 of `B.1` in our anonymous link; the benchmark contains 45 large professional tables (>5k rows, >40 columns on average), and a "source" refers to a domain. **C5: Details about metrics and Result Type:** **A5:** - Evaluation Metrics: We would be grateful if you could take a look at Appendices K and M. - Result Type Enumeration: We have catalogued all result types and their distributional characteristics in Figure 13(b) of Appendix K. To improve navigability, we will add direct cross-references in the revised manuscript. Thank you. **C6: Why is only GPT-4-32k shown in Figure 7?** **A6:** The visualization demonstrates how the Inter-Agent approach with ACR (light blue) generally outperforms both the base model and standard Agent configurations across most categories. GPT-4-32k serves as the case study because it represents a middle-ground performer (and one of the most accessible resources). We have results for Claude-3.5-Sonnet, which show similar findings, in Tab. 6 of `B.2`. **C7: More References** **A7:** Due to the page limit, we have included a more comprehensive literature review in Appendix O. We will expand Section 8 in the camera-ready version if accepted.
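The multiple-choice scoring described in A2 (matching the model's selected option against a single verified ground-truth letter) can be sketched as follows (a hypothetical helper, not the benchmark's actual evaluation script):

```python
import re

def score_multiple_choice(model_output, gold_letter):
    """Extract the first standalone option letter A-E from the model's
    reply and compare it to the verified ground-truth answer."""
    match = re.search(r"\b([A-E])\b", model_output)
    return match is not None and match.group(1) == gold_letter

def mc_accuracy(outputs, golds):
    """Acc over multiple-choice turns: fraction of matching selections."""
    hits = sum(score_multiple_choice(o, g) for o, g in zip(outputs, golds))
    return hits / len(golds)
```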
Summary: This paper proposed a benchmark, namely COTA, and a multi-agent environment named Decision Company to evaluate the performance of LLMs in the task of conversational tabular data analysis. The paper used the proposed benchmark and multi-agent environment to evaluate the performance of 8 LLMs on this task. I have concerns about Client Persona Generation. In this step, the 'Persona' is actually generated via prompt engineering with an LLM. Although expert supervision is involved, the output generated by LLMs may still differ from real humans. Moreover, the agents are given certain information about a fictional person, which may not be representative of all professionals in that role. The authors may consider using RLHF to enable the agents to mimic professionals in a better way. Also, the proposed benchmark does not include other fields which may be more challenging, such as medical, marketing, and supply chain. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Do not apply. Experimental Designs Or Analyses: Yes. I checked the validity of the experimental design. Supplementary Material: Yes, I examined the prompts for Persona generation. Relation To Broader Scientific Literature: This paper contributes to the task of conversational tabular data analysis. It provides a dataset and a methodology for its evaluation. Essential References Not Discussed: No. Other Strengths And Weaknesses: Consider enhancing the Persona Generation step. Also consider including more domains. Other Comments Or Suggestions: No. Questions For Authors: Claude-3.5-Sonnet outperforms other models according to Figure 6. Is this also the case when using other datasets, as shown in Table 1? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Concern 1: Client Persona Generation** **Ans:** Thank you for this valuable feedback. While LLMs were utilized in persona generation, we implemented a rigorous multi-stage validation process specifically to address potential discrepancies between LLM-generated content and real human behavior. Our approach includes: expert supervision at every stage of persona development, where data analysis professionals with 10+ years of experience reviewed and modified the personas based on their real-world client interactions; high inter-annotator agreement (92.78%), suggesting strong consistency in the evaluation of these personas by multiple domain experts; and comprehensive human evaluation on multiple metrics, including Scenario Diversity and Reasonableness (0.96), Conversation Topic Coherence (0.93), and Conversation Naturalness (0.95), demonstrating the real-world applicability of these personas. Furthermore, our human-in-the-loop approach was intentionally designed as a practical middle ground between purely synthetic data generation and expensive expert crowdsourcing. **Concern 2: Persona Representativeness** **Ans:** We appreciate the suggestion regarding RLHF for better mimicking professionals. Our current approach prioritizes diversity and domain expertise. The personas were intentionally designed to cover a wide range of domain-relevant scenarios (18 analysis topics across 5 common domains) with varied backgrounds and needs. In fact, the proposed DECISION COMPANY framework already incorporates expert feedback loops that serve a similar purpose in guiding agent behavior toward realistic professional practices. Our evaluation with 10 data analysis experts outside the author team validated that the scenarios and interactions were highly representative of real-world data analysis tasks.
For future work, we are exploring how RLHF techniques could further enhance the quality of agent-based simulations within our framework, especially for extending COTA to additional domains. **Concern 3: Domain Coverage** **Ans 3:** Thanks for the suggestion. Our current focus on financial, sports, food, consumer electronics, and housing was intentional for three key reasons: - These domains feature widely available open-source tabular datasets with minimal privacy and ethical concerns, enabling broad accessibility of the benchmark. - These domains represent common data analysis scenarios that don't require highly specialized domain expertise to validate, ensuring reliable quality assessment. - Additionally, our current data already covers advanced topics such as Healthcare (Food) and Finance (Credit Card, Bank), shown in Fig. 9. We view COTA as a foundation that can be expanded to more challenging domains. The evaluation framework, conversation action types, and metrics we've developed will transfer well to specialized domains in future extensions of this work. **Question:** **Ans 4:** Thanks for asking this. Due to limited resources at this time, we tested the performance of Claude-3.5-Sonnet against GPT-4 and CodeLlama on CoSQL, a conversational text-to-SQL benchmark.

Performance (EX) comparison on other datasets:

| Model Name | CoSQL |
|----------|----------|
| CodeLlama | 35.7 |
| GPT-4 | 68.2 |
| Claude-3.5-Sonnet | 58.6 |

From this table, we can observe that Claude-3.5-Sonnet did not outperform GPT-4, which suggests that different advanced models may be strong in different programming languages due to different training corpora. In our Python-based data analysis code generation, Claude-3.5-Sonnet is the strongest model. --- Rebuttal Comment 1.1: Comment: I appreciate that the authors provided explanations and extra experiments. I will keep my original rating. --- Reply to Comment 1.1.1: Comment: Thanks for reading our responses and for the acknowledgment.
We do appreciate your suggestions and time in reading our work in detail. Best,
Double Machine Learning for Causal Inference under Shared-State Interference
Accept (poster)
Summary: This paper unifies the set of problems in causal inference with interference where the outcomes of individuals depend on others' treatment assignments only through an observed shared state. The paper assumes that units arrive sequentially and models this shared-state problem using a Markov chain. The paper then proposes a doubly robust machine learning meta-estimator to estimate causal quantities of interest and proves the estimator's asymptotic properties. The paper shows that both average direct effects and global average treatment effects can be estimated by this meta-estimator. The paper conducts numerical experiments to demonstrate the effectiveness of the proposed estimator. ## update after rebuttal The authors have addressed my concern. I will retain my score. Claims And Evidence: On line 83, could the authors explain why it is realistic to assume that the covariate X_t is independent of the hidden state H_t, perhaps using the example described in the introduction section? Is there a way to relax this? This also applies to the invariance of the conditional distributions of H_t and D_t. On line 182, the authors assume that the nuisance estimators can be generated via an auxiliary sample of data. In reality, when is this auxiliary sample of data available? Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: Yes, I checked the correctness of the proofs. Experimental Designs Or Analyses: Yes, I did. Since the authors have derived the asymptotic normality of the estimator, they should also report the actual coverage of the estimator in the experiments. Supplementary Material: Yes, I have reviewed Section A, Section B, and Section C. Relation To Broader Scientific Literature: The problem studied in this paper appears to unify applications across various fields, as suggested by the authors in the introduction. However, the paper does not include an analysis of a real-world dataset.
While the oracle causal effect is elusive in real data, it would still be beneficial for the authors to demonstrate their method on at least one such dataset. Additionally, I am curious to see how the authors justify their assumptions and the existence of auxiliary data used for estimating the nuisance parameter in a real-data setting. If the authors do this, I will raise my score. Essential References Not Discussed: Could the authors discuss more about how prior works in various fields relate to the concept of "shared-state interference", and how the paper unifies this concept (perhaps in the appendix)? Other Strengths And Weaknesses: It is a theoretically rigorous paper with significant practical implications. The assumptions made are generally reasonable. Other Comments Or Suggestions: line 82: "independent of the indentities of..." line 85: "...develops a DML theorem a discrete choice model..." line 151: "Hence, they are nuisances." is redundant line 307: "Next, we our validate our results" line 316: "averate treatment effect" The word "adjustment" only appears once, in the title of Section 5. Consider replacing the title with "Inference on Global Average Treatment Effect". Questions For Authors: Is this the first paper to propose the concept of shared-state interference? Which existing papers are most closely related to this one? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We will incorporate your suggestions, including providing coverage rates in Appendix F, in our updated manuscript. In the coverage rate plots, our consistent variance estimator approaches the target coverage rate as $T$ grows, while the coverage rates for naive estimators are near zero due to their bias. * **Dependence of $X_t$ and $D_t$ on $H_t$.** (Copying this from our response to reviewer c4Lj.) This problem formulation would indeed be more general and would be a valuable direction for future work. We made the choice to have $H_t$ affect $Y_t$ but not $X_t$ and $D_t$ for two reasons. First, we found it hard to imagine situations in which $H_t$ exerts significant influence on unit characteristics and treatment assignments: most often, unit characteristics should be innate to the unit (e.g. their price sensitivity or preferences for particular types of content) and treatment assignments should be exogenous (we imagine treatment conditions like new features on a platform or inducements like discounts). Second, our context presents the simplest possible change from canonical iid models, and so our work can be directly instructive for comparison with models where no shared state is present by simply removing the shared-state variable from the outcome structural equations. Assuming dependence of $X_t$ and $D_t$ on $H_t$ would be possible under Theorem 3.1 as long as the data $W_t$ still obeys geometric ergodicity and detailed balance. * **Existence of auxiliary data.** Auxiliary data may exist when there are multiple similar systems or markets where unit behavior will be similar. For example, on a social media platform with multiple forums, the data from one forum might be used to construct nuisance estimates for measuring treatment effects in another forum.
This is an assumption common to other methods using machine learning for inference or uncertainty quantification (see, e.g., Angelopoulos et al 2023), but of course, this is a limitation on how widely applicable any such method requiring auxiliary data can be. On the other hand, conceptually, the auxiliary sample assumption cleanly separates the machine learning from application of the learned predictors for inference. Relaxations of this requirement may be possible, but we wanted to preserve the conceptual clarity in our paper, so we leave these extensions for future work. * **Real-world validations.** Analysis of real-world settings would be a valuable contribution for future work. We wanted to include simulations for the sake of simplicity and brevity. * **Related work.** We are not aware of other methods that formalize causal inference through shared-state interference as we do. Several other papers formalize interference through markets or through recommender systems as we note in the related work, but these are context-specific. For example, Munro 2024 allows for causal inference under shared-state interference in settings like auctions where the shared-states are prices and allocations of goods exhibit a cutoff structure. Our work is complementary by offering a context-agnostic approach that does not require, e.g., knowledge of the allocation mechanism if the data generating process satisfies our Markov chain assumption. A. N. Angelopoulos, S. Bates, C. Fannjiang, M. I. Jordan, and T. Zrnic. Prediction-Powered Inference, Nov. 2023. URL http://arxiv.org/abs/2301.09633. arXiv:2301.09633 [cs, q-bio, stat]. E. Munro. Causal Inference under Interference through Designed Markets. 2024. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my questions and responding to my comments. I suggest clearly indicating the 95% threshold on the Y-axis. 
--- Reply to Comment 1.1.1: Comment: As far as we are aware, we aren't allowed to update the pdf after submission. See: https://icml.cc/Conferences/2025/PeerReviewFAQ According to the same page, we *are* allowed to share links. The figures we will add along with the corresponding explanatory text are available at the following link: https://docs.google.com/document/d/e/2PACX-1vRiWR2njelj5qIPR8LseWr2gJkLetrSuTK4ks_i2PMaVBfgtTn1zXQ9BY9jQR_Uzo5WkYGam9E7dm66/pub
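For readers following this thread, the doubly robust (AIPW-style) score and the coverage check discussed above can be sketched in a few lines. This is an illustrative iid toy setting with a made-up linear DGP and a known propensity, not the paper's shared-state estimator or its simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, tau=2.0):
    # Toy iid DGP (an assumption for this sketch): linear outcome in one
    # covariate plus a constant treatment effect, randomized treatment.
    x = rng.normal(size=n)
    d = rng.binomial(1, 0.5, size=n)
    y = tau * d + x + rng.normal(size=n)
    return x, d, y

def aipw(x, d, y, mu1, mu0, e=0.5):
    # Doubly robust score: plug-in predictions plus propensity-weighted
    # residual corrections; averaging it debiases imperfect nuisances.
    psi = (mu1(x) - mu0(x)
           + d * (y - mu1(x)) / e
           - (1 - d) * (y - mu0(x)) / (1 - e))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(psi))

# Nuisances are fit on a separate auxiliary sample, mirroring the
# auxiliary-data assumption discussed in the rebuttal.
xa, da, ya = simulate(5000)
b1 = np.polyfit(xa[da == 1], ya[da == 1], 1)
b0 = np.polyfit(xa[da == 0], ya[da == 0], 1)
mu1 = lambda v: np.polyval(b1, v)
mu0 = lambda v: np.polyval(b0, v)

x, d, y = simulate(5000)
est, se = aipw(x, d, y, mu1, mu0)
ci = (est - 1.96 * se, est + 1.96 * se)  # nominal 95% interval
```

Repeating the last two lines over many replications and counting how often `ci` contains the true effect gives the kind of empirical coverage rate the reviewer asked about.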
Summary: This paper addresses causal inference under unit interference in systems like markets and recommendation platforms. To model inter-individual interference without strong assumptions, the authors introduce shared-state variables, assuming individual outcomes depend on others only through these shared states. The authors apply double machine learning (DML) for efficient inference, using an auxiliary data sample for nuisance estimators instead of cross-fitting, to account for the sequential nature of sampling. The methodology is used to estimate the average direct effect (ADE) and the global average treatment effect (GATE), with simulations showing DML’s advantages in debiasing and variance reduction. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, the proofs for the theorems on the asymptotic properties of DML appear to be largely correct. Experimental Designs Or Analyses: This paper presents simulation results for estimating the average direct effect (ADE) and the global average treatment effect (GATE). However, the simulation settings—particularly the one-dimensional covariance structure—seem somewhat simplistic. Exploring more complex scenarios would enhance the generalizability of the results. Additionally, while the evaluation focuses on the magnitude of bias and variance, it overlooks an examination of the consistency of variance estimation. Including the coverage rate of parameter estimates as an additional metric could provide valuable supporting evidence. Supplementary Material: Yes, I mainly review the proof of theorems and simulation details. Relation To Broader Scientific Literature: This paper combines shared-state interference with the DML method, both of which have been extensively discussed in the causal inference literature. 
A related work, *Causal Inference under Interference through Designed Markets*, applies localized debiased machine learning (LDML) for causal inference under shared-state interference in a two-sided market. In this context, the current paper extends the application of DML to causal inference in interference scenarios. Essential References Not Discussed: I believe this paper includes the vast majority of essential references. Other Strengths And Weaknesses: Strengths: This paper clearly introduces the two core aspects of DML: Neyman orthogonality and cross-fitting. It also points out the difference between shared-state interference and the traditional i.i.d. data generation mechanism, which requires additional modifications to the DML procedure. Weaknesses: The notation in some places of this paper needs to be made consistent. For example, in Section 4.1, Equation 4.1 is written as $Y_t(D_t, H_t)$, while Equation 4.5 uses $Y_t(D_t, X_t, H_t)$. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We will incorporate this feedback in our updated manuscript. * **Application-based simulations.** Exploration of more complex and real-world scenarios (using either simulated or real data) would be a valuable direction for future work. Our focus in this paper is on the novel theoretical methodological contribution. Our simulation settings were intended to be simple, to demonstrate how naive estimates can be biased even in simple, one-dimensional settings. Thank you for the suggestion to add coverage rates of variance estimates. We will include this in the appendix. * **Notation.** Thank you for pointing out the notational inconsistencies. We will fix these and adhere to potential outcome notation where the dependence on $X_t$ is implicit.
Summary: This paper introduces a double machine learning (DML) estimator for sequentially collected samples where dependencies follow a Markovian structure through a shared-state variable $H_t$. Claims And Evidence: 1. The paper has a strong theoretical foundation. 2. The empirical simulations lack specific explanations (e.g., of the data generating processes). No empirical evidence for the fast convergence (debiasedness) or double robustness is provided. 3. Intuitive explanations of Assumptions C1 and C2 are required for assessing the claims, since they are key assumptions in the paper. 4. Even though it is mentioned that the HT estimator is unbiased for estimating the GATE, the HT estimator appears biased in Figure 2. Could you please explain the simulation details? Methods And Evaluation Criteria: 1. The empirical simulations lack specific explanations (e.g., of the data generating processes), so they are hard to assess. 2. It would be great if the experiments were done with real-world examples. Theoretical Claims: 1. The theoretical results are sound. Experimental Designs Or Analyses: 1. The empirical simulations lack specific explanations (e.g., of the data generating processes), so they are hard to assess. 2. Even though it is mentioned that the HT estimator is unbiased for estimating the GATE, the HT estimator appears biased in Figure 2. Could you please explain the simulation details? Supplementary Material: I checked the proofs, and they are sound. Relation To Broader Scientific Literature: This paper tackles an interesting problem, where samples are accumulated in a Markovian sense. However, I think the paper makes pretty strong and unrealistic assumptions, such as that $H_t$ only affects $Y_t$, not $X_t$ and $D_t$. If you assume so, what are the most difficult challenges? Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: 1. I think $Y_t(D_t, H_t) = f^*(D_t, X_t, H_t) + \tilde{Y}_t$ is a wrong formulation. $Y_t$ should depend on $X_t$. 2. $D_t$ is not affected by $H_{t-1}$? This doesn't make sense. 3. Is $W_{aux}$ iid, while $W_1, \ldots, W_T$ are Markovian-dependent? 4. Why has Assumption C.2 been made with high probability? I mean, can we just assume $\gamma = 0$? Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, as well as their comments about the soundness of theoretical results and strength of foundation for the work. We will incorporate your comments into our work. We clarify several points and answer questions below. * **Data generating process details.** We provide specific explanations of the data generating process in Appendix F, as we note in Line 300, column 2. * **Naive Horvitz-Thompson (HT) versus switchback HT estimators.** The HT estimator in figure 2 is a naive HT estimator which merely takes a difference in means across treatment and control observations. The SB estimator in figure 3 is a different estimator which averages only over observations where the last $m$ have been all treatment or all control. The naive HT estimator, in general, may be biased since it does not account for the shared state, but the SB estimator is unbiased. We will clarify this distinction, especially since the SB estimator is a HT-style estimator but is different from the naive HT estimator. * **$H_t$ only affects $Y_t$, not $X_t$ and $D_t$.** This problem formulation would indeed be more general and would be a valuable direction for future work. We made the choice to have $H_t$ affect $Y_t$ but not $X_t$ and $D_t$ for two reasons. First, we found it hard to imagine situations in which $H_t$ exerts significant influence on unit characteristics and treatment assignments: most often, unit characteristics should be innate to the unit (e.g. their price sensitivity or preferences for particular types of content) and treatment assignments should be exogenous (we imagine treatment conditions like new features on a platform or inducements like discounts). Second, our context presents the simplest possible change from canonical iid models and so our work can be directly instructive for comparison with models where no shared-state is present by simply removing the shared-state variable from the outcome structural equations. 
Assuming dependence of $X_t$ and $D_t$ on $H_t$ would be possible under Theorem 3.1 as long as the data $W_t$ still obeys geometric ergodicity and detailed balance. * **Notation.** We use standard potential outcome notation with an exposure mapping, where potential outcomes are typically not written as a function of covariates (even though outcomes depend on covariates). In other words, $Y_t$ *does* depend on $X_t$, but this dependence is omitted from the notation. We note that we inadvertently included $X_t$ in the potential outcome notation of some equations, like 4.5, and we will fix this. Thanks for the question. * **Independence of $W_{aux}$.** $W_{aux}$ need not be iid as long as the nuisance parameter learners converge at appropriate rates. The tradeoff between quality (say, independence) and quantity (number of observations) of data is implicit in the nuisance learner rate assumptions, but exploration of this question would be a valuable direction for future work. * **Choices of $\gamma$.** You can assume $\gamma = 0$ as a special case of Theorem 3.1. We include the high-probability bounds because they follow the results in Chernozhukov et al. and because they allow for the possibility that $\eta$ falls outside the nuisance realization set with some probability. $\gamma$ might be useful if, as we note in our response to Reviewer YLJC, $m$ may be mis-specified with some probability.
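The contrast drawn above between the naive HT estimator (a raw difference in means) and the switchback estimator (averaging only over observations whose last $m$ assignments match the current one) can be seen in a small simulation. The $m$-dependent DGP below is a hypothetical toy example, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)

T, m, tau = 20000, 3, 1.0
period = 5 * m  # switch period well above the carryover order m

# Switchback design: blocks of `period` units share one random assignment.
d = np.repeat(rng.binomial(1, 0.5, size=T // period + 1), period)[:T]

# Toy shared state: the outcome responds to the average of the last m+1
# treatments, so units interfere through recent assignments.
carry = np.convolve(d, np.ones(m + 1) / (m + 1), mode="full")[:T]
y = tau * carry + rng.normal(size=T)

# Naive HT: difference in means, ignoring the shared state.
naive = y[d == 1].mean() - y[d == 0].mean()

# Switchback estimator: keep only units whose last m assignments equal the
# current one, so the shared state has settled into the current condition.
stable = np.array([t >= m and d[t - m:t + 1].min() == d[t - m:t + 1].max()
                   for t in range(T)])
sb = y[stable & (d == 1)].mean() - y[stable & (d == 0)].mean()
```

Under this toy DGP the naive contrast is attenuated by the block-transition units, while the switchback estimate concentrates around the global effect `tau`.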
Summary: This paper studies causal inference in the presence of *shared-state interference*, a common structure in real-world systems such as online marketplaces and recommender platforms, where outcomes of individuals are influenced by a low-dimensional global variable (e.g., price, inventory, recommendations). The authors formalize this structure and develop a general semiparametric framework under which treatment effects can still be estimated efficiently using **Double Machine Learning (DML)**. The core contribution is an extension of the DML theorem (Chernozhukov et al., 2018) to settings where units arrive sequentially and interfere through a shared Markovian state. The paper derives asymptotic normality and consistent variance estimators for two estimands: the **Average Direct Effect (ADE)** and the **Global Average Treatment Effect (GATE)**, using switchback experiments. Simulations confirm the theoretical properties and demonstrate that the proposed estimators outperform naive plug-in and Horvitz-Thompson alternatives. Claims And Evidence: The paper claims that: - Causal inference under shared-state interference can be handled with an appropriately extended DML framework. - Under certain assumptions (Markovian dynamics, ergodicity, sample splitting or auxiliary data), valid and efficient estimation of ADE and GATE is possible. - The proposed DML estimators outperform naive baselines in both bias and variance. These claims are well-supported: - The extension of the DML theorem is rigorous and builds on a solid foundation. - The authors provide precise conditions and detailed proofs for asymptotic normality and consistency. - Simulations effectively illustrate both downward and upward biases in naive estimators, and the superior performance of the DML estimators. 
Methods And Evaluation Criteria: The methodology is carefully constructed: - The formalization of shared-state interference is novel and expressive, capturing dynamic systems where individuals influence and are influenced by a global state. - The use of DML with orthogonalization and bias correction is well-motivated. - The paper defines clear estimands (ADE and GATE), and designs plug-in and debiasing terms to achieve robustness. - Simulation settings reflect realistic assumptions (e.g., market congestion, algorithmic competition). Evaluation is primarily based on simulation, which is reasonable given the lack of real-world counterfactuals. Theoretical Claims: Theoretical contributions are central to this paper: - The paper extends DML to settings with shared-state interference by leveraging Markov chain properties. - Theorems 3.1, 4.1, and 5.1 establish conditions for asymptotic normality of the ADE and GATE estimators. - The variance expressions are derived under both geometric ergodicity and m-dependence. The proofs are detailed, with attention to assumptions like Neyman orthogonality, Gateaux derivatives, and mixing conditions. The adaptation of CLT results for Markov chains is particularly well-handled. Experimental Designs Or Analyses: The simulations are thoughtfully designed: - Two estimands (ADE and GATE) are estimated across multiple conditions. - Competing estimators (plug-in, Horvitz-Thompson) are included for comparison. - The authors use random forests as flexible nuisance estimators, with auxiliary datasets to avoid sample leakage. Figures clearly show the bias and variance behavior of each estimator. Although limited to synthetic data, the experimental results strongly support the theoretical claims. 
Supplementary Material: The supplementary material includes: - Detailed proofs (e.g., ergodicity, variance estimators) - Simulation setup and code - Discussion of Markov chain properties and variance estimation The appendices are rigorous and informative. They significantly bolster the technical validity of the main claims. Relation To Broader Scientific Literature: The paper is well-situated in the literature: - Builds on Chernozhukov et al. (2018) for DML - Extends recent work on interference in experiments (e.g., Johari et al., Farias et al., Munro 2024) - Addresses limitations of social network-based interference models, offering a complementary approach via global state dynamics The shared-state framework fills a notable gap in modeling real-world algorithmic and market-based interference. Essential References Not Discussed: No critical omissions were found. The paper cites essential works in DML, interference modeling, ergodic Markov chains, and semiparametric inference. It also offers comparison with network-based approaches, positioning its contribution as orthogonal and novel. Other Strengths And Weaknesses: **Strengths:** - Novel and realistic modeling of shared-state interference - Theoretical rigor and clear assumptions - Methodologically sound extension of DML - Strong simulation results - Clean writing and logical structure **Weaknesses:** - No real-world experiments, though the authors acknowledge this. - Some assumptions (e.g., availability of auxiliary data) may not always hold in practice. - The setup assumes known m in m-dependence, which could be hard to estimate in some applications. Other Comments Or Suggestions: NA Questions For Authors: 1. **Estimating m in Practice**: While the paper assumes known m for m-dependence, in real-world experiments m may be unknown. Can your method accommodate estimation of m, or how sensitive are results to misestimation? 2. 
**Auxiliary Data Requirement**: The use of an auxiliary dataset for nuisance estimation avoids dependence issues, but may not always be feasible. Could you discuss alternative strategies, such as block-splitting or approximate independence? 3. **Applicability to Recommender Systems**: Can your shared-state model be concretely instantiated in modern large-scale recommender systems, such as those used in e-commerce or social media? Have you explored any such datasets? Clarifying these would enhance the paper’s applicability and help practitioners adopt your framework. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! We also thank you for noting the expressivity of our formalism, its relevance to practical settings, and the gap in research on causal inference in algorithmic systems and markets. We respond to each of your questions below: * **Estimating $m$ in Practice**: We note that for the procedure to be valid, we just need an upper bound on $m$, so that the switch period can be chosen to be greater than this upper bound. The literature on switchback experiments proposes procedures for estimating $m$ if it is not known. See, e.g., Bojinov et al 2022, Section 4.4. Informally, the procedure involves computing the estimator using different candidate values of $m$ and running a series of hypothesis tests to see whether the results are the same for different values. If the results are different, then the smaller candidate for $m$ can be shown to be less than its true value, with high probability. The uncertainty introduced by an unknown $m$ can be incorporated in our high-probability bound by choosing $\gamma$ to account for the Type 2 error of concluding that $m$ is smaller than it actually is. * **Auxiliary Data requirement**: We will add more discussion of this point to the paper. It would be a worthwhile direction for future research to explore how approximately independent data (such as blocks observed far apart) may be used to train nuisance predictors that are approximately independent from the data they are evaluated on. * **Applicability to Recommender Systems**: Application of our framework to real-world empirical contexts would be a valuable direction for future work. Given the theoretical focus of the paper and the constraints of space, we don’t do this. We strongly believe that our framework is applicable to relevant empirical settings. I. Bojinov, D. Simchi-Levi, and J. Zhao. Design and Analysis of Switchback Experiments, Apr. 2022. URL http://arxiv.org/abs/2009.00148. arXiv:2009.00148 [stat].
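The candidate-$m$ check cited in the first bullet (Bojinov et al. 2022, Sec. 4.4) can be illustrated informally: compute switchback-style estimates under several candidate carryover orders and look for the point where they stop changing. The DGP and helper below are toy assumptions for the sketch, not the cited procedure verbatim (in particular, the formal version uses hypothesis tests rather than visual comparison):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy DGP with true carryover order m_true = 3 (an assumed example).
T, m_true, tau = 40000, 3, 1.0
d = np.repeat(rng.binomial(1, 0.5, size=T // 20 + 1), 20)[:T]
carry = np.convolve(d, np.ones(m_true + 1) / (m_true + 1), mode="full")[:T]
y = tau * carry + rng.normal(size=T)

def sb_estimate(y, d, m):
    # Switchback-style estimate computed as if the carryover order were m:
    # average only over units whose last m assignments match the current one.
    keep = np.array([t >= m and d[t - m:t + 1].min() == d[t - m:t + 1].max()
                     for t in range(len(d))])
    return y[keep & (d == 1)].mean() - y[keep & (d == 0)].mean()

# Under-specified candidates (m < m_true) retain residual bias; any
# m >= m_true (an upper bound on the true order) yields agreeing estimates.
estimates = {m: sb_estimate(y, d, m) for m in range(6)}
```

In practice one would replace the visual comparison of `estimates` with the hypothesis tests described in the cited reference.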
Non-invasive electromyographic speech neuroprosthesis: a geometric perspective
Reject
Summary: The authors demonstrate a system that translates silently articulated speech into both text and audio. The method collects electromyogram (EMG) signals from multiple articulatory sites on the face and neck during alaryngeal speech and uses a GRU with CTC loss to perform sequence-to-sequence decoding. Experiments are conducted to demonstrate the effectiveness of this work. Claims And Evidence: Yes, the authors supported their claims by conducting experiments. Methods And Evaluation Criteria: Yes, both the method and evaluation criteria make sense. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, I read the appendix; no supplementary material was provided by the authors. Relation To Broader Scientific Literature: Limited to the domain of HMI in general. Essential References Not Discussed: N/A Other Strengths And Weaknesses: * a) While the proposed work demonstrates effectiveness in EMG-to-text and EMG-to-audio decoding, its contribution to the representation learning community is limited. The approach is largely built upon existing technologies; for instance, the use of a GRU with CTC loss has already been reported by Willett et al. in their brain language decoding framework, and the language corpora are similarly adopted. Moreover, the manifold of SPD matrices is well explored in prior work by Gowda et al. Overall, the novelty of this work does not appear to align well with the scope of ICML. * b) The experimental details are insufficient. Given that the proposed approaches aim to address the needs of individuals who have lost intelligible speech due to laryngectomy, neuromuscular disease, stroke, or trauma, it is unclear whether the enrolled participants are healthy subjects or patients. Since muscle movement patterns can differ significantly between healthy individuals and patients, simply simulating silent speech may not accurately capture the EMG patterns in the target population.
* c) Compared to prior work by Gowda et al., the primary difference in this study is the use of silent speech data instead of audible articulation. This difference alone does not represent a substantial contribution. * d) The paper lacks comparisons with alternative decoding algorithms beyond GRU-based or other state-of-the-art EMG decoding approaches, making it difficult to appreciate the unique contributions of the proposed algorithm. Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Other expertise'] Ethical Review Concerns: Human participants are involved. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Response to comment a)** We thank the reviewer for raising this point. Reference [1] presents a high-performance BCI (brain-computer interface) system that records neural activity from the motor cortex at single-neuron resolution. Their approach, involving CNNs for feature extraction and RNNs for temporal modeling, aligns with widely-used sequence-to-sequence architectures. Our approach differs fundamentally due to its focus on *non-invasive* surface EMG. Unlike intracortical recordings, EMG reflects the summation of motor unit action potentials from multiple muscles, resulting in lower-resolution signals with interference. To address this, we introduce SPD matrix representations that encode second-order channel correlations and provide a compact, discriminative signal representation. This forms the basis for sequence-to-sequence modeling in our EMG-to-phoneme translation task. Our work performs *phoneme-by-phoneme translation* of silently articulated EMG. This represents a substantial leap over prior efforts that focused on isolated word or phoneme classification (such as in [2]) and generalizes to full language corpora. Even though we cannot record activity at the resolution of individual neurons, our results demonstrate that fine-scale speech decoding is still feasible with non-invasive signals with appropriate representation—an important and novel contribution for speech neuroprostheses. Overall, we believe our contributions are important and substantial for the field of EMG, which is emerging as a new modality alongside *images*, *audio*, and *video*, as recently demonstrated by [5]. **Response to comment b)** We appreciate the reviewer’s concern. Our study involved healthy volunteers, in line with institutional ethical guidelines and approvals. As with other early-stage BCI research (e.g., [3] and [5]), healthy subjects are commonly used to demonstrate feasibility.
Our goal is to establish that high-fidelity phoneme decoding is possible from surface EMG, paving the way for future clinical applications. We respectfully note that this study serves as a *first proof-of-concept* of such capabilities. Please refer to the rebuttals for **reviewers 3dnc, GLiL** for an explanation of signal distribution shift across individuals. **Response to comment c)** We respectfully disagree with the reviewer’s interpretation. Reference [2] addresses closed-set classification tasks (e.g., 36 words or 38 phonemes), whereas our work focuses on *open-set phoneme sequence decoding* from continuous EMG spanning general English language corpora. These are fundamentally different problems. Classification selects from a fixed list of categories, while sequence decoding must infer variable-length output sequences aligned with temporally evolving input. We have clearly described this distinction in the manuscript, including the increased complexity and novelty of the decoding task. Therefore, the comment that our work does not differ from [2] is factually incorrect. **Response to comment d)** We have made every effort to compare our work against representative baselines and relevant prior research in both invasive and non-invasive BCI domains. Specifically, we include comparisons with state-of-the-art invasive BCI systems such as [1], as well as recent advances in non-invasive BCI, including [3]. To the best of our knowledge, the only reproducible and publicly available baseline for EMG-based speech neuroprostheses prior to our work is [4], and we have conducted direct comparisons with this approach in our study. It is important to note that the field of speech BCI, particularly non-invasive EMG-based speech decoding, is still in its early stages, with relatively few prior works and limited established benchmarks. 
Within this context, we believe our comparisons are both fair and comprehensive, and we have taken care to position our contributions appropriately relative to the current state of the field. [1] Willett, F. R. et al. A high-performance speech neuroprosthesis. *Nature*, 620(7976), 1031–1036, 2023. [2] Gowda, H. T., et al. Geometry of orofacial neuromuscular signals: speech articulation decoding using surface electromyography. *arXiv preprint* arXiv:2411.02591, 2024. [3] Defossez, A., et al. Decoding speech perception from non-invasive brain recordings. *Nature Machine Intelligence*, 5(10):1097–1107, 2023. [4] Gaddy, D., et al. Digital voicing of silent speech. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (*EMNLP*), pp. 5521–5530, 2020. [5] Ctrl-labs at Reality Labs, et al. A generic noninvasive neuromotor interface for human-computer interaction. *bioRxiv* (2024): 2024-02.
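As an editorial aside, the rebuttal's point that SPD matrices encode second-order channel correlations can be made concrete with a small sketch. This is an illustrative reconstruction, not the authors' pipeline: the window length, the shrinkage constant `eps`, and the `spd_features` name are all assumptions.

```python
import numpy as np

def spd_features(emg, win=50, eps=1e-3):
    """Turn multichannel EMG (channels x samples) into one SPD matrix per
    non-overlapping window: a sample covariance capturing second-order
    channel correlations, plus diagonal loading so every eigenvalue is
    strictly positive (i.e., the matrix is positive definite)."""
    c, n = emg.shape
    mats = []
    for start in range(0, n - win + 1, win):
        cov = np.cov(emg[:, start:start + win])  # (c, c) channel covariance
        mats.append(cov + eps * np.eye(c))       # guarantee positive definiteness
    return np.stack(mats)                        # (num_windows, c, c)
```

Each window then lives on the SPD manifold, where geometry-aware tools such as the Fréchet mean mentioned in the reviews apply.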
Summary: The paper introduces a method for converting EMG signals into phoneme sequences without requiring audible speech. The core idea is to use CTC loss and inference, which obviates the need for explicit alignment between the input EMG signals and the output phoneme sequences. The proposed method leverages the functional connectivity of the EMG signals by modeling them as symmetric positive definite (SPD) matrices on a Riemannian manifold. Recurrent neural networks (GRU variants) take these (normalized) SPD matrices and output phoneme probability distributions at each time frame. The proposed approach is evaluated on three datasets, including one where subjects articulate using NATO codes. Claims And Evidence: * The paper’s central claim is that the use of SPD representations, combined with CTC loss, significantly improves performance in silent speech decoding. However, the experimental results appear to mix the contributions of the CTC loss and the SPD-based feature modeling. The claims would be strengthened if the paper included ablation studies that isolate the performance gains attributable to SPD vs. CTC. * While the authors suggest that modeling the relationship of EMG signals on the SPD manifold is advantageous, the empirical evidence supporting the benefits of SPD is not entirely convincing – such as cross-subject generalization, invariance to individual differences, investigation of the changes in the adjacency matrix, etc. Additional experiments that explicitly demonstrate the advantages of the unique characteristics of the method would be beneficial. Methods And Evaluation Criteria: * The proposed method is evaluated on three datasets (Sections 4.1–4.3). Although the datasets are relatively small compared to common ML datasets, they are consistent with those used in prior work, thereby supporting the authors’ choice of data. * One weakness is the limited comparison to prior works. 
Furthermore, many details are explained in table captions (e.g., Table 4), which impairs readability. It is recommended to put such details into the main text or organize them in a table. * The metrics in the paper, “Levenshtein distance”, “WER”, and “decoding accuracy”, are used somewhat interchangeably. Please consider unifying the metrics. From my perspective, Levenshtein distance captures the same result as WER (as in speech recognition convention). However, accuracy should not be computed as 1-WER; it should be measured as a binary decision per utterance/word: either entirely correct (1) or wrong (0). Additionally, please consider reporting both PER and WER for all experiments for comprehensive understanding. * Figure 8, which shows inversely proportional results between decoding accuracy and Levenshtein distance, seems somewhat trivial. Theoretical Claims: * The paper utilizes established components, such as GRU variants and SPD matrices, to model the inter-sensor relationships. Both have been discussed/proposed in previous studies, and details are properly explained in the Appendix. The novelty of the paper lies in its integration of these components, including modifications such as the use of an approximated common eigen basis, computation of the Fréchet mean on the training set, and the application of CTC-based training and inference. These methodological adaptations are sound and well-motivated. Experimental Designs Or Analyses: * Although I am not an expert in EMG-based models, the experimental design appears to follow standard practices in the domain, particularly those established in previous works. Supplementary Material: * The supplementary material seems comprehensive; however, some mathematical derivations might require further investigation by domain experts. Relation To Broader Scientific Literature: The paper is closely related to speech recognition literature, especially using CTC for alignment-free phoneme recognition. 
Essential References Not Discussed: The paper adequately discusses relevant references. Other Strengths And Weaknesses: Please see other sections. Other Comments Or Suggestions: * The paper focuses on EMG-to-phoneme translation, rather than generating the final speech itself. Text-to-personalized audio synthesis relies on the off-the-shelf models (Appendix G). Clarifying the scope of the paper would make its contributions clearer. * Please format the references to include publication years – for example, “Gaddy & Klein; Gaddy & Klein” seems to be repeated twice, despite referring to different papers. * Based on the results in Figures 3, 4, and 6, model scale-up does not bring noticeable performance improvement. Can you add some explanation on this observation? Questions For Authors: * In speech recognition, CTC decoding often utilizes a beam width of 50–100. Have the authors experimented with larger beam widths, and if so, what was the impact on performance? * It seems that each trained model is only applicable to the specific person whose data is used during training. Is this a common approach in the domain or is it a limitation of this work? * Although not directly related, the problem and the proposed approach seems to be related to the sparse sensing literature. It would be interesting to find/discuss the relationship between two. Code Of Conduct: Affirmed. Overall Recommendation: 2
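For concreteness, the metric distinction the reviewer draws above (sequence-level edit distance vs. per-utterance binary accuracy) can be sketched as follows. The `per` and `utterance_accuracy` helpers are hypothetical names for illustration, not functions from the paper.

```python
def levenshtein(a, b):
    """Edit distance between two token sequences
    (insertions, deletions, substitutions), via the standard DP."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def per(refs, hyps):
    """Phoneme error rate: total edit distance over total reference length."""
    return sum(levenshtein(r, h) for r, h in zip(refs, hyps)) / sum(len(r) for r in refs)

def utterance_accuracy(refs, hyps):
    """Binary decision per utterance: 1 only for an exact sequence match."""
    return sum(r == h for r, h in zip(refs, hyps)) / len(refs)
```

A single-phoneme substitution leaves PER small but zeroes out that utterance's accuracy, which is the reviewer's point.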
Rebuttal 1: Rebuttal: **Response to claims and evidence** *Comment 1)* We thank the reviewer for their comment and the opportunity to clarify. It is important to note that the CTC loss and the SPD matrix representation in our pipeline serve fundamentally different purposes. The use of CTC enables us to demonstrate that direct phoneme sequence translation from silently articulated EMG ($E_S$) is feasible—without relying on aligned EMG ($E_A$) or audio ($A$). This represents a significant advancement in the modeling paradigm itself. On the other hand, the SPD matrix representation is employed during feature extraction, as it effectively captures the structure of EMG signals in a way that reflects articulatory dynamics. As demonstrated in [2], such representations are well-suited to distinguishing subtle orofacial movements involved in speech, including variations in tongue and jaw positions. These articulatory features are essential for phoneme production, and SPD matrices allow us to model them in a geometry-aware manner. For example, on the manifold of SPD matrices, unsupervised methods like *k*-medoids clustering can naturally distinguish distinct articulatory patterns. Given the nature of the task, the CTC loss and SPD feature representation operate at different stages of the model and are tightly coupled in the end-to-end framework. Therefore, it is not possible to decouple these components for ablation in isolation. *Comment 2)* As shown in [2], signal distribution shift across individuals is a covariate shift. This shift can be captured by SPD matrices and can be interpreted as the change of basis. While we mention this in the *discussion section*, the central claim of the paper is that SPD matrix representation can lead to efficient design of neural networks. We support that central claim by designing a single layer recurrent network, and demonstrate 2.4x improvement in WER using a model that is 25x smaller on a limited vocabulary corpora compared to [4]. 
**Response to methods and evaluation criteria** *Comment on limited comparison:* Please see the response to **reviewer cEe2**. *Comment on measurement metrics:* - Levenshtein distance and PER are the same and are used interchangeably. - Accuracy = 1 - WER. They are indeed calculated as a binary decision per utterance/word, as the reviewer has mentioned. *Comment on figure 8:* In figure 8, Levenshtein distance is PER and decoding accuracy is 1 - WER, and they are not trivially related. **Response to other comments or suggestions** In the revised version, we will further clarify regarding EMG-to-audio. We acknowledge that we use off-the-shelf models as explained in Appendix G. We will prominently articulate this in the main text. We will also reformat the citations. Please see the response to **reviewer 3dnc** regarding model scaling. **Response to questions for authors** 1) Yes, we have explored decoding with higher beam widths. The performance saturates around a beam width of 5. For example, on the large language corpora, we observe a 2 percentage point improvement in PER between beam widths 1 and 20 (with most of the improvement happening when we go from 1 to 5, and eventually plateauing). 2) Zero-shot and few-shot decoding techniques are not explored in the field of speech EMG, with our work being one of the first. However, in the related field of decoding hand gestures using EMG, some works demonstrate zero-shot learning (a model trained on a set of subjects generalizes to unseen subjects), such as [5]. In [5], they show that a model trained on subject $A$'s EMG does not generalize to subject $B$. However, models trained on EMG from thousands of subjects show some generalization capability, but still require personalization. In contrast, [6] argues that pretrained models like the one in [5] are not of much use, and that hand gestures can be decoded from EMG in an unsupervised manner on the manifold of SPD matrices using a simple $k$-medoids clustering algorithm. 
Theoretically, EMG can be interpreted as being defined by a set of orthogonal axes that span the space $\mathbb{R}^{|\mathcal{V}|}$, and since there are infinitely many such orthogonal bases, generalization across subjects is difficult. This difficulty arises from the purely additive nature of EMG signals. As such, we believe that this is not a limitation specific to our work. 3) We appreciate the reviewer’s insightful suggestion and will consider investigating the relationship between sparse sensing and our approach in future work. [6] Gowda, H. T., and Lee M. Miller. "Topology of surface electromyogram signals: hand gesture decoding on Riemannian manifolds." *Journal of Neural Engineering* 21.3 (2024): 036047.
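The beam-width discussion above (a beam width of 1 corresponds to greedy decoding) can be illustrated with the minimal CTC collapse rule: take the argmax symbol per frame, merge consecutive repeats, then delete blanks. This is a generic sketch assuming blank index 0, not the authors' decoder.

```python
def ctc_greedy_decode(frame_probs, blank=0):
    """Beam-width-1 CTC decoding: per-frame argmax, collapse runs of the
    same symbol, then remove the blank token. `frame_probs` is a list of
    per-frame probability vectors over {blank} + phoneme inventory."""
    best = [max(range(len(p)), key=p.__getitem__) for p in frame_probs]
    out, prev = [], None
    for s in best:
        if s != prev and s != blank:  # new non-blank symbol starts here
            out.append(s)
        prev = s
    return out
```

Note that a blank between two identical argmax symbols is what allows repeated phonemes (e.g., the two 1s below) to survive the collapse.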
Summary: This paper introduces a non-invasive EMG decoder that can produce text or audio from silent speech, without EMG responses from any audible speech. The authors found an efficient sparse decomposition of the responses by analyzing their geometry, and GRUs operating on this manifold significantly improved word error rates. Claims And Evidence: 1. The authors claim "EMG-to-audio" decoding by first decoding a sequence of phonemes, then personalizing the output of an off-the-shelf synthesizer to the specific subject. Since the authors claim to create more than just an "EMG-to-phonemes" decoder, there should be some evaluation of the faithfulness of the speech synthesis system as well. 2. The authors claim: > modeling dynamics of EMG signals using neural ODEs is beneficial and allows for better abstraction of the data. It's true that GRU_C outperforms the other GRUs at lower model sizes for the Nato and Small-Vocab datasets. But, seeing as the other GRUs end up outperforming GRU_C at larger sizes and are more stable, does this claim still hold? It's not (please correct if this is a wrong assumption) that the larger model size makes the models significantly harder to train, so I'm not convinced that the neural ODE is really helping much. Methods And Evaluation Criteria: The evaluation methods match previous literature cited by the authors (Gaddy & Klein, 2020; Willett et al.) to allow for near-direct comparison. Data_Large is actually _more_ challenging than in Willett 2023 because the speech rate is faster. I have one concern about how the final results are shown in the tables. There are 2 parameters the authors are sweeping along: GRU architecture and hidden unit size. It seems that they are taking the minimum _test_ performance along both dimensions to report e.g. WER in Table 2. But the test performance of these models is not monotonic over model size (especially for GRU_C) or strongly ordered over architectures (GRU_C often trades places with A/B). 
How does one know which architecture/size to pick? Taking the minimum _test_ performance seems like it would introduce test set leakage, but using the validation performance instead would remedy that. Theoretical Claims: I read through the claims in the main text. Experimental Designs Or Analyses: The setup for the experiments and 3 datasets was sound. Supplementary Material: I reviewed sections A-C, E, and G. Relation To Broader Scientific Literature: This work shows that, under the right representation, EMG can rival many impactful BCIs from recent years that are more invasive. Essential References Not Discussed: I'm not aware of any missing references. Other Strengths And Weaknesses: Other strengths: 1. The paper is well written, and great care is taken to explain the diagonalization of $\mathscr{E}$. 1. This work removes the need to collect strongly aligned data under multiple conditions (audible vs. silent speech), easing the complexity of data collection. Other weaknesses: 1. The appendix is somewhat unpolished, and is missing key connections with the main text (see the next section for examples). Other Comments Or Suggestions: 1. I would suggest adding years to citations, at least where it is ambiguous without clicking on the link. For example, "Gaddy & Klein" can refer to either 2020 or 2021. (Also, there is no year in the bibliography for the 1st Gowda citation.) 2. The caption in Figure 1 was very helpful for understanding this decomposition, maybe even without the graphic. 3. I don't see a reference from the main text to Appendix C -- this would be helpful in Sec. 3.1 or 3.2. This specific section of the appendix covers many aspects, and it would be worth adding a reference in the main text for all these instances. 4. The aspect ratios for the appendix figures do not match the main text. And since they're far from where they're first referenced on page 7, the captions should be more descriptive (e.g. which dataset). Questions For Authors: 1. 
In any of the 3 datasets, is any data collected for the same subject across multiple sittings? If so, would $Q$ need to be re-estimated like it's a new subject, or can you take advantage of previously collected data for that subject? 2. During data collection, are the articulators (tongue, lips, etc.) still moving during silent speech? (I assume yes.) 3. How fast were the GRU variants to train/run? Given the neural ODE, was GRU_C noticeably more expensive? 4. What values are used for the GRU hidden dimensionality? And why are the ranges of model sizes different in Figs. 3/4 vs. 5/6? 5. Please clarify how a single model was chosen for Table 2 and for Table 4 among the model sizes and architectures. (See "Methods And Evaluation Criteria" for details of my concerns.) 6. Appendix G references a "40-second reference audio clip", whereas Sec. 3 paragraph 2 talks about "an audio clip of about 3-5 minutes, not necessarily containing the same phonemic content as L, recorded before their clinical condition". Are these describing the same recording, and if so, why is there a discrepancy in the duration? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Response to claims and evidence / Methods and evaluation criteria** *Comment 1* Our speech style conversion module follows the implementation provided in [7]. We trained the models using the publicly available code released by [7]. In the revised version, we will move the content currently in Appendix G into the main text and further clarify that we use the model from [7]. We will also clarify that speech style conversion is not the main contribution of our work; rather, our core contribution lies in the EMG-to-phoneme sequence translation framework. *Comment 2* As shown in [2], EMG signals exhibit structured representations on the manifold of SPD matrices, particularly with respect to articulator placement. For example, distinct orofacial movements underlying speech articulation naturally cluster and can be separated using unsupervised algorithms. Building on this insight, we demonstrate that a single-layer recurrent network is sufficient for EMG-to-phoneme translation, and that model size can be effectively controlled via the GRU hidden dimension. Our motivation for analyzing phoneme error rate (PER) across model sizes was to test whether performance follows the power law behavior described in [8]. Following standard practice, we plotted these trends on the test set; however, the same trends hold on the validation set as well, and we will include those plots in the revised manuscript. We observe that both $GRU_A$ and $GRU_B$ exhibit power law scaling, where the error $E$ decreases as a function of model size $N$ or data size $D$. Specifically, for models with limited capacity but sufficient data, performance improves as $E = \frac{c_1}{N^{c_2}}$ (Figures 4 and 6), while for models with sufficient size but limited data, the error follows $E = \frac{c_1}{D^{c_2}}$ (Figure 8). 
However, the law breaks down and plateaus when bottlenecked by either $N$ or $D$, explaining why simply increasing model size does not lead to noticeable gains, as correctly pointed out by **reviewer GLiL**. In contrast, $GRU_C$ does not follow this trend. While it performs well at smaller model sizes, its performance degrades as $N$ increases without increasing $D$. We agree with the reviewer that training larger models is not inherently more difficult. We will revise the sentence to: “Although modeling EMG using neural ODEs shows benefits at smaller model sizes, its performance decreases when the model is scaled without increasing the data size.” For the SMALL-VOCAB dataset, the fitted parameters are: $GRU_A$: $c_1 = 0.2539$, $c_2 = 0.4931$ $GRU_B$: $c_1 = 0.2319$, $c_2 = 0.2939$ Despite the simplicity of our model—where size is governed by a single hyperparameter—we observe that scaling laws still hold, consistent with trends seen in larger language models. Given these scaling patterns, for $GRU_A$ and $GRU_B$, model performance can be predicted reliably from data and model size. **Response to other comments or suggestions** We thank the reviewer for the suggestions. We will incorporate them in the revised manuscript. **Response to questions for authors** *Comment 1* All data in this study were collected in a single session, with sensor electrodes placed only once. In scenarios involving multiple sessions, the matrix $Q$ may need to be re-estimated, as precise electrode placement is critical. Even small shifts can result in signals from different muscle units, leading to distributional shifts. This remains an open question in non-invasive BCIs. In contrast, invasive BCIs benefit from surgically fixed electrodes. An important future direction is to explore whether few-shot learning methods can help adapt to such changes. For further discussion, please see our response to **reviewer GLiL**. *Comment 2* Yes. The articulators were moving during the data collection. 
*Comment 3*: Yes. $GRU_B$ and $GRU_C$ were slower to train compared to $GRU_A$:
- $GRU_A$: ~2 minutes, converged in 100 epochs
- $GRU_B$: ~20 minutes, converged in 50 epochs
- $GRU_C$: ~20 minutes, converged in 25 epochs

(All timings measured on an NVIDIA RTX 4090.) *Comment 4*: For SMALL-VOCAB, we used hidden unit dimensions ranging from 230 to 496. For NATO-WORDS, the range was 78 to 253. This difference is due to the number of electrodes: SMALL-VOCAB used 31 (input: 31×31), and NATO-WORDS used 22 (input: 22×22). *Comment 5*: The GRUs follow the same power law behavior on the LARGE-VOCAB dataset. We trained $GRU_A$ with various hidden sizes (training time: ~30 minutes on RTX 4090), and observed $c_1 = 0.5925$, $c_2 = 0.028$. *Comment 6*: The reported 3–5 minute duration refers to a worst-case scenario. For voice cloning, we used a 40-second clip. We will clarify this in the revised manuscript. [7] Choi, H.-S., et al. Neural analysis and synthesis: Reconstructing speech from self-supervised representations. Advances in Neural Information Processing Systems, 34:16251–16265, 2021. [8] Kaplan, J., et al. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
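The power-law fits discussed in this rebuttal ($E = c_1/N^{c_2}$) amount to a least-squares line in log-log space: $\log E = \log c_1 - c_2 \log N$. A minimal sketch of such a fit follows; the model sizes and errors below are synthetic, for illustration only, and `fit_power_law` is not a function from the paper.

```python
import numpy as np

def fit_power_law(sizes, errors):
    """Fit E = c1 / N**c2 by least squares on log E = log c1 - c2 * log N.
    Returns (c1, c2)."""
    slope, intercept = np.polyfit(np.log(sizes), np.log(errors), 1)
    return np.exp(intercept), -slope
```

On real sweeps the fit would only be meaningful over the regime where neither model size nor data size is the bottleneck, as the rebuttal notes.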
BDC-CLIP: Brownian Distance Covariance for Adapting CLIP to Action Recognition
Accept (poster)
Summary: In this work, the authors propose an adapter-based action recognition method built on top of CLIP visual and text encoders. To better capture local details, they propose to utilize all visual patch tokens and word tokens and employ Brownian Distance Covariance (BDC) as a similarity metric between video and text representations. They validate the proposed method on several public datasets and tasks. ## update after rebuttal The additional evidence is somewhat weak as it is based on only a couple of samples or one action class. Still, ~20% accuracy on SSv2 is far from being useful, similar to other CLIP-based methods. So I would say this is still a downside of this work. Therefore, I would like to keep my original rating. Claims And Evidence: Some of the claims made are not fully supported by evidence. - “BDC captures both linear and nonlinear relations, enabling it to model the complex dependencies in the video-language embedding space” -> There is no evidence of BDC enabling complex dependency modeling in the video-language embedding space. The paper only shows favorable task performance, some feature space visualization and attention map visualization. - “…enabling fine-grained multi-modal context modeling in space, time, and language” -> Similar to the first claim above, there is no evidence of *fine-grained multi-modal context modeling* in space, time, and language in the paper. Methods And Evaluation Criteria: The proposed method is sensible as we can reuse a vision-language pre-trained model with relatively little fine-tuning effort. Employing BDC as a similarity metric to model non-linear relations between tokens makes sense and is proven effective in action recognition to some extent. A downside of the method is the limited capability of temporal dynamics modeling, similar to prior CLIP-based works such as Action-CLIP, X-CLIP, TC-CLIP, etc., as shown in the SSv2 experiments in Tables 2 and 3. Theoretical Claims: There are no theoretical claims made. 
Experimental Designs Or Analyses: The proposed method needs to be further validated by more experiments. - What happens if we do not employ temporal attention in eq (4)? (effect of the proposed temporal modeling method) - What happens if we use 3/4, 2/4, 1/4 of the patch tokens or half of the word tokens instead of using them all? - What happens if we turn the text BDC matrix computation and the visual BDC matrix computation on and off (or replace them with cosine similarity)? Which BDC computation is more important? - What happens if we turn text token weighting and visual token weighting on and off? Supplementary Material: I reviewed the supplementary material including experimental setup, hyper-parameter settings, and further experiments on direct CLIP fine-tuning without K400 pre-training, and additional attention map visualization. Relation To Broader Scientific Literature: Since this work is built on top of CLIP [Radford et al., ICML 2021], it is related to the vision-language alignment and multi-modal learning field. Similar to prior works using CLIP, the proposed method also shows limitations in modeling temporal dynamics, as demonstrated in the Something-Something-V2 experiments in Tables 2 and 3. All the CLIP-family methods, including the proposed one, show unfavorable performance on SSv2: ~20% accuracy, while a SOTA method with a similar backbone (ViT-B) shows >70% accuracy on this dataset. 
Other Comments Or Suggestions: - L195: “Let the values be V=[…” -> where does V come from? - L211-213: “we first achieve the embeddings of reduced dimension …” -> awkward sentence - L417: “... models are valuated ...” -> evaluated? Questions For Authors: Please address my concerns on the unsupported claims, temporal modeling capability of the method, missing empirical validation. 1) I do not understand how to construct $\textbf{b}^t=Vech(\textbf{B}^t) \in \mathbb{R}^{d(d+1)/2}$ in L190-192. Can the authors elaborate on this? Code Of Conduct: Affirmed. Overall Recommendation: 3
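Regarding question 1): half-vectorization of a symmetric $d \times d$ matrix simply stacks its entries on or below the main diagonal into a vector of length $d(d+1)/2$. A minimal numpy sketch of the operation (the `vech` helper is illustrative, not from the paper):

```python
import numpy as np

def vech(B):
    """Half-vectorization: collect the entries of a symmetric d x d matrix
    that lie on or below the main diagonal, giving a d*(d+1)//2 vector."""
    rows, cols = np.tril_indices(B.shape[0])  # lower-triangular index pairs
    return B[rows, cols]
```

Because the matrix is symmetric, the discarded upper triangle is redundant, so no information is lost.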
Rebuttal 1: Rebuttal: Dear Reviewer sjZc, We sincerely thank you for your constructive and insightful comments, particularly your positive feedback & decision. > ### Q1: The paper lacks evidence demonstrating that BDC enables modeling of complex dependencies in the video-language embedding space. Thanks for this concern. In BDC-CLIP, we compare two sets of $d$-dimensional tokens from language and vision. According to Zhelezniak et al. (2019), this setup can be viewed as modeling dependencies between $d$-observation samples from two random variable sets (textual vs. visual tokens). Furthermore, **Szekely & Rizzo (2009) show that BDC captures all types of statistical dependencies--linear and nonlinear--without assumptions on joint distributions.** Hence, by applying BDC to video-language alignment, BDC-CLIP can effectively model complex dependencies in the shared embedding space. *The scatterplots and token distributions (kindly see [Figure R1](https://anonymous.4open.science/r/rebuttal-D15F/Figure_R1.jpg)) illustrate that our model can effectively learn nonlinear relationships.* > ### Q2: The paper does not provide evidence to support the claimed fine-grained multi-modal context modeling across spatial, temporal, and language dimensions. The set of textual tokens naturally encodes key nouns and verbs describing people and objects, while the set of patch tokens captures crucial spatial regions in each video frame. As BDC can measure any form of statistical dependency between textual and visual tokens, **it provides a principled way to capture rich relations across spatial and language dimensions.** Further, by averaging BDC matrices over consecutive frames, our method models temporal evolution of the token-level correspondences. 
The heatmaps (kindly see [Figure R2](https://anonymous.4open.science/r/rebuttal-D15F/Figure_R2.jpg)) show how the model focuses on horses and players in the {playing polo} action--highlighting the relevant objects and interactions in both text and video frames. **These results suggest that BDC-CLIP indeed learns fine-grained, action-centric context spanning space, time, and language.**   > ### Q3: BDC-CLIP needs to be further validated by more experiments: 1) without (w/o) temporal attention (TA), 2) using 3/4, 2/4, 1/4 of patch tokens (PT) or half of the word tokens (WT), 3) turn off text or visual BDC matrices, and 4) turn off text and visual tokens weighting (TW). Thank you for the suggestions. 1) W/O TA, BDC-CLIP’s performance drops consistently across 3 datasets, indicating TA provides valuable temporal modeling. 2) Subsampling patch tokens at ratios 3/4 and 1/4--or taking only half the word tokens--also degrades results, suggesting retaining all tokens is important for fine-grained alignment. 3) Removing either text or visual BDC significantly hurts performance, with a more pronounced drop from removing visual BDC, suggesting the visual adapter is more important. 4) Disabling TW for either text or vision negatively impacts performance, underscoring how focusing on more informative tokens benefits our approach. 
|Method|K600_Zeroshot|HMDB51_2shot|HMDB51_16shot| SSv2_2shot|SSv2_16shot| |:-:|:-:|:-:|:-:|:-:|:-:| |BDC-CLIP|73.8$\pm$0.8|66.1|73.9|8.9|16.8| | w/o TA|73.3$\pm$0.8|65.5|73.4|8.3|16.2| |3/4 PTs|73.7$\pm$0.8|64.5|73.7|8.6|16.5| |1/4 PTs|73.7$\pm$0.9|64.7|73.8|8.5|16.5| |1/2 WTs|73.4$\pm$0.8|65.5|73.2|8.6|16.4| |w/o text BDC|73.6$\pm$0.8|65.2|72.3|7.9|16.0| |w/o visual BDC|73.8$\pm$0.8|64.3|71.5|7.5|14.3| |w/o text TW|73.2$\pm$1.1|65.7|73.2|7.6|16.2| |w/o visual TW|74.0$\pm$0.7|66.1|73.4|8.1|16.1| > ### Q4: Similar to prior works using CLIP, BDC-CLIP shows limitations in modeling temporal dynamics as demonstrated on SSv2 in Table 2 and 3, achieving ~20% accuracy while SOTA method with similar backbone (ViT-B) shows >70% accuracy. Kindly note our reported ∼20% or lower accuracy in Tables 2 and 3 are from few-shot and base-to-novel settings, while the mentioned SOTA results (>70%) rely on fully supervised (FullS) training. **These differing settings are not directly comparable**, and it remains unclear how those FullS approaches would fare in few-shot or base-to-novel settings.   > ### Q5: Essential References Not Discussed. Thanks for highlighting the three works that will be cited and discussed in our revision. Briefly, **they focus on adapting pure vision-pretrained models for fully-supervised (FullS) action recognition**. In contrast, **BDC-CLIP relies on multimodal alignment between vision and language,** achieving strong performance in, beside FullS setting, zero-shot, few-shot and base-to-novel settings. > ### Q6: How to construct $\mathbf{b_t}$ in L190-192? $\mathbf{B_t}$ is a symmetric $d\times d$ matrix, so half-vectorization (Vech) collects the elements on or below its diagonal into a $d(d+1)/2$-dimensional vector $\mathbf{b_t}$. Kindly see the [Wikipedia article](https://en.wikipedia.org/wiki/Vectorization_(mathematics)#Half-vectorization) for details on Vech. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal response. 
I have read the rebuttal and the other reviews. The rebuttal partially resolved my concerns. I appreciate the partial evidence of BDC enabling the modeling of complex dependencies and fine-grained multi-modal context in https://anonymous.4open.science/r/rebuttal-D15F/Figure_R1.jpg and https://anonymous.4open.science/r/rebuttal-D15F/Figure_R2.jpg. However, the evidence is somewhat weak as it is based on only a couple of samples or one action class. I understand that Tables 2 and 3 are few-shot and base-to-novel results. Still, ~20% accuracy on SSv2 is far from being useful, similar to other CLIP-based methods. So I would say this is still a downside of this work. Therefore, I would like to keep my original rating.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer sjZc,

Thank you for your thoughtful feedback. We are glad our rebuttal has partially resolved your concerns, and we appreciate this opportunity to clarify the remaining points.

> ### Q1': I appreciate partial evidence on BDC enabling modeling of complex dependencies and fine-grained multi-modal context modeling capability in [Figure R1](https://anonymous.4open.science/r/rebuttal-D15F/Figure_R1.jpg) and [Figure R2](https://anonymous.4open.science/r/rebuttal-D15F/Figure_R2.jpg). The additional evidence is somewhat weak as it is based on only a couple of samples or one action class.

Our vision–language matching framework uses **BDC (Szekely & Rizzo, 2009)**, a robust statistical metric capable of capturing *any* form of dependency between random variables. By measuring similarity among all visual tokens (fine-grained spatial regions) and all textual tokens (linguistic elements) across frames, our approach models rich multimodal context spanning space, language, and time. We believe **it is BDC's strong theoretical properties** that enable modeling of complex dependencies in the shared embedding space, thereby enabling fine-grained multimodal alignment.
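To make this concrete, the per-set BDC computation can be sketched in a few lines. This is a simplified illustration with toy token shapes only; the exact normalization and token counts in our paper may differ.

```python
import numpy as np

def bdc_matrix(tokens):
    """Sketch of a BDC matrix for a token set of shape (n_tokens, d):
    pairwise Euclidean distances between the d channel vectors, then
    double-centered. A simplified sketch; the paper's exact
    normalization may differ."""
    X = tokens.T                                                 # (d, n_tokens)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # (d, d) distances
    return D - D.mean(0) - D.mean(1)[:, None] + D.mean()

def vech(B):
    """Half-vectorization: stack the on/below-diagonal entries of a
    symmetric matrix into a d(d+1)/2 vector (cf. our response to Q6)."""
    i, j = np.tril_indices(B.shape[0])
    return B[i, j]

vis = np.random.default_rng(0).normal(size=(197, 8))  # e.g. [CLS] + 196 patch tokens, d = 8
B = bdc_matrix(vis)
print(B.shape, vech(B).shape)  # a d x d symmetric matrix and its d(d+1)/2 vector
```

Each frame's visual tokens, and likewise the word tokens of a sentence, yield such a symmetric $d\times d$ matrix whose half-vectorization gives the $d(d+1)/2$-dimensional vector described in our response to Q6.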
These capabilities are **further supported by our detailed ablation studies and broad experiments** (zero-shot, few-shot, base-to-novel, fully supervised) *presented in the main paper.* **Our additional evidence—scatterplots, token distributions, and attention heatmaps—serves as an *intuitive illustration* of these strengths**. We apologize for limiting these examples to just a couple of samples or one action class, *due to the rebuttal's time constraints.* **We plan to add more examples in the revised version** to further highlight our method's modeling ability.

> ### Q2': Still, ~20% accuracy on SSv2 is far from being useful, similar to other CLIP-based methods. So I would say this is still a downside of this work.

We understand your concerns about SSv2 performance and *acknowledge this limitation.* However, **we believe these results should not be viewed as a fundamental drawback of our proposed method.** Rather, they reflect the **extreme difficulty of few-shot (*all-way K-shot*) and base-to-novel recognition on SSv2**—scenarios that challenge a wide range of approaches, including other CLIP-based methods (e.g., Action-CLIP, X-CLIP, TC-CLIP). We see these lower numbers not as a reason to dismiss CLIP-based approaches, but as a call to explore new ideas and strategies that can ultimately lead to *practical* success in demanding scenarios. We hope our **BDC-CLIP will serve as a stepping stone** for developing more robust techniques on SSv2 and similarly challenging tasks.

Once again, we thank you for your time and feedback, and we trust this additional response clarifies your remaining concerns.

Sincerely,
The Authors
Summary: This paper proposes BDC-CLIP, a framework that introduces Brownian Distance Covariance (BDC) to address the limitations of current CLIP-based video models. BDC-CLIP can leverage all the visual and textual embeddings and construct non-linear relations for vision-language modeling. BDC-CLIP achieves state-of-the-art performance on multiple video benchmarks under various settings.

## update after rebuttal
Thanks for your rebuttal. The concerns have been resolved. I will increase the score to 4.

Claims And Evidence: The claims are well-supported in the paper.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Yes, in Sec. 4.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The proposed method can achieve state-of-the-art performance on a wide range of video benchmarks.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
### Strengths
- The motivation is clear and intuitive, aiming to address an intrinsic limitation of current video models.
- The achieved performance is great on a range of benchmarks and experimental settings.
### Weaknesses
- The core technique, i.e., Brownian Distance Covariance, was proposed by a previous paper. Therefore, the technical contribution of this paper seems a bit limited.
- Additional ablation experiments should be added. In the paper, I cannot see how the performance gradually improves on top of a vanilla baseline with all the proposed modules. The authors should elaborate more on this.
- What features will the model use for inference: global features, local features, or BDC matrices?
- The proposed method should be compatible with image-only tasks, e.g., few-shot image classification. I suggest the authors conduct some experiments on that to further demonstrate the effectiveness of the paper.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Dear Reviewer LE28,

We sincerely thank you for providing constructive and insightful comments. In particular, we appreciate your positive comments, including **"The motivation is clear and intuitive"** as well as **"The achieved performance is great on a range of benchmarks and experimental settings."**

> ### Q1: The technical contribution appears somewhat limited, as the core technique, Brownian Distance Covariance, was already introduced in prior work.

Thanks for your concern. To the best of our knowledge, our work is the first attempt to introduce BDC into both the text and visual encoders of CLIP for vision-language alignment. **Our work highlights, *in foundation models such as CLIP,* the potential of advanced metrics (e.g., BDC) over the ubiquitous cosine similarity.** In contrast, DeepBDC (Xie et al., 2022) does not concern the CLIP framework, while BDC-Adapter (Zhang et al., 2023) only uses BDC in the single vision modality. Besides, we design a **temporal attention mechanism for BDC representations that previous works lack**, as they focus on image recognition. *Kindly see Lines 151-164 in the paper for a detailed discussion of the differences from previous works.* In light of the above clarifications, we would be grateful if you could re-evaluate our technical contributions.

> ### Q2: Additional ablation experiments should be added. In the paper, I cannot see how the performance gradually improves on top of a vanilla baseline with all the proposed modules. The authors should elaborate more on this.

Thank you for the comment. Kindly note that we present the **performance variation on top of a vanilla baseline *in Table 6a.*** Following your suggestion, we **add additional ablation experiments** to further illustrate the effect of the proposed modules.
Specifically, we evaluate BDC-CLIP in the following settings: 1) without temporal attention (TA), 2) using 3/4 or 1/4 of patch tokens, or half of the word tokens, 3) turning off the text or visual BDC matrices, and 4) turning off text and visual token weighting (TW). ***Kindly refer to our response to Q3 of Reviewer sjZc for the results and discussion.***

> ### Q3: What features will the model use for inference: global features, local features or BDC matrices?

Our inference relies primarily on the BDC matrices produced by the two adapters for both video-language classification and purely visual classification. In addition, as in prior works, we also incorporate global features from the backbone encoder to enable a standard CLIP-like classification branch.

> ### Q4: The proposed method should be compatible with image-only tasks, e.g., few-shot image classification. I suggest the authors conducting some experiments on that for further demonstrating the effectiveness of the paper.

Thanks for your comment. **As suggested, we extend BDC-CLIP to few-shot image recognition.** Specifically, we remove the temporal attention module in the visual encoder to fit the image recognition task; as training images are scarce, we adopt a parameter-efficient technique as in CLIP-LoRA, attaching a LoRA module (rank 2, alpha 1) to each transformer block of both the textual and visual encoders. Following previous works, we conduct a comparison on 11 datasets with ViT-B/16 as the visual encoder in the 16-shot setting. From the table below, we see our BDC-CLIP improves over the strong baseline of CLIP-LoRA by 1.9\%, while outperforming the second-best method (i.e., LLaMP) by 1.1\%. Notably, BDC-CLIP stands out across all 11 datasets.
**The comparison suggests that BDC-CLIP, which uses BDC and all patch tokens, is general and effective for both video and image recognition tasks.**

|Method|ImageNet|Aircraft|Food|DTD|UCF|Cars|Pets|SUN|Flowers|Caltech|EuroSAT|Avg|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|TransCLIP (Zanella et al.)|71.8|38.6|86.9|65.1|82.1|79.8|92.4|74.7|94.4|94.0|83.0|78.4|
|ProGrad (Zhu et al.)|72.1|43.0|85.8|68.8|82.7|71.9|36.8|75.1|96.6|95.9|83.6|79.9|
|CLIP-LoRA (Maxime et al.)|73.6|54.7|84.2|72.0|86.7|86.3|92.3|76.1|98.0|96.4|92.1|83.0|
|LLaMP (Zheng et al.)|73.5|56.1|87.6|74.2|86.8|86.1|94.2|77.0|98.1|97.1|91.3|83.8|
|BDC-CLIP (ours)|75.0|57.3|88.1|76.5|87.7|86.5|94.4|78.3|98.4|97.3|93.9|84.9|

* Zanella M, Gerin B, Ayed I. Boosting vision-language models with transduction. In NeurIPS, 2024.
* Zhu B, Niu Y, Han Y, et al. Prompt-aligned gradient for prompt tuning. In CVPR, 2023.
* Maxime Z, Ismail B A. Low-rank few-shot adaptation of vision language models. In CVPRW, 2024.
* Zheng Z, Wei J, Hu X, et al. Large language models are good prompt learners for low-shot image classification. In CVPR, 2024.

---

Rebuttal Comment 1.1:

Comment: Thanks for your rebuttal. The concerns have been resolved. I will increase the score to 4.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer LE28,

We appreciate that our rebuttal has addressed your concerns, and we are grateful for your decision to raise your score. Thank you for supporting our work.

Sincerely,
The Authors
Summary: This paper proposes BDC-CLIP, a novel framework for video-language alignment based on Brownian Distance Covariance (BDC). Unlike cosine similarity, BDC can capture both linear and nonlinear correlations. BDC-CLIP leverages all visual and textual tokens to model both linear and nonlinear relationships in the multimodal embedding space, thereby capturing rich contextual information across spatial, temporal, and linguistic dimensions. The framework also introduces a temporal BDC attention mechanism that integrates patch-wise spatial cues and frame-wise temporal dynamics.

## update after rebuttal
I tend to give a borderline score (between 2 and 3), but since ICML only has a weak accept option, I have raised the score to 3.

Claims And Evidence: Previous methods align video and language based on the cosine similarity between the average of frame-level [CLS] tokens in the video and the sentence-level [EOS] token. This limits the alignment to coarse semantic matching. In contrast, BDC-CLIP aligns the two modalities using Brownian Distance Covariance (BDC), which considers all visual and textual tokens. This approach captures fine-grained spatio-temporal cues crucial for action recognition.

Methods And Evaluation Criteria: BDC-CLIP introduces two core components: (1) a video BDC adapter and (2) a text BDC adapter, which are aligned using Brownian Distance Correlation. (1) Video BDC Adapter: By leveraging all visual tokens (i.e., [CLS] and patch tokens), BDC-CLIP computes a BDC matrix as a frame-wise representation and designs a temporal attention mechanism to model frame-to-frame dynamics. (2) Text BDC Adapter: BDC-CLIP exploits all textual tokens (i.e., [EOS] and word tokens) to compute a BDC matrix as the text representation. Finally, BDC-CLIP aligns the video and text representations using Brownian Distance Correlation.

Theoretical Claims: The idea of using Brownian Distance Covariance (BDC) for aligning video and language is theoretically reasonable.
Experimental Designs Or Analyses: The paper conducts experiments on five widely used action recognition datasets: Kinetics-400, Kinetics-600, HMDB-51, UCF-101, and SSv2. The model is evaluated across various downstream tasks, including zero-shot, few-shot, base-to-novel generalization, and fully-supervised settings. The proposed BDC-CLIP achieves state-of-the-art performance on these tasks.

Supplementary Material: I have read the supplementary material.

Relation To Broader Scientific Literature: None.

Essential References Not Discussed: None.

Other Strengths And Weaknesses:
Strengths: The paper is well-written and easy to understand. The proposed method achieves state-of-the-art results on multiple datasets.
Weaknesses:
1. The main innovations of BDC-CLIP include: (i) a video BDC adapter and a text BDC adapter, which serve as post-processing modules for the visual and text encoders to enhance alignment; (ii) the use of Brownian Distance Covariance to strengthen alignment. However, I have two concerns regarding the novelty of the paper: (1) The idea of adding adapters after the CLIP encoder is not entirely new. Many existing works have explored similar techniques, including approaches related to relationships between sets of embeddings, global tokens, and local tokens. These are common strategies for processing CLIP tokens. (2) The use of Brownian Distance Covariance primarily originates from image-based few-shot classification tasks and has been adapted for video action recognition in this work.
2. The motivation of this paper is not clearly articulated. The authors claim that previous methods rely on cosine similarity and only use global tokens. However, this is not entirely accurate, as many existing works already explore different similarity computations and incorporate local tokens. The paper does not sufficiently clarify this issue.
3. The paper states that BDC can capture both linear and nonlinear correlations, but it is unclear what exactly is meant by "linear" and "nonlinear" in this context. How do "linear" and "nonlinear" correlations correspond to specific methods in the proposed framework? There is a lack of experimental validation to demonstrate the advantage of capturing both linear and nonlinear correlations.

Other Comments Or Suggestions: None.

Questions For Authors: The paper states that BDC can capture both linear and nonlinear correlations, but it is unclear what exactly is meant by "linear" and "nonlinear" in this context. How do "linear" and "nonlinear" correlations correspond to specific methods in the proposed framework? Which components in the method explicitly capture these correlations? There is a lack of experimental validation to demonstrate the advantage of capturing both linear and nonlinear correlations. It would be beneficial to include ablation studies or quantitative analyses to verify this claim.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Dear Reviewer oB1H,

We sincerely thank you for your constructive and insightful comments, especially your positive feedback that **"The paper is well-written and easy to understand"** and **"The proposed method achieves state-of-the-art results on multiple datasets."**

> ### Q1: However, I have two concerns regarding the novelty of the paper: (1) The idea of adding adapters after the CLIP encoder is not entirely new. Many existing works have explored similar techniques, including approaches related to relationships between sets of embeddings, global tokens, and local tokens. These are common strategies for processing CLIP tokens. (2) The use of Brownian Distance Covariance primarily originates from image-based few-shot classification tasks and has been adapted for video action recognition in this work.

Thank you for sharing these concerns.

1) **BDC Integration into CLIP.** As noted in Section 1 of our main paper, **our primary contribution is introducing BDC for vision-language alignment in foundation models like CLIP.** While past works such as DeepBDC [Xie et al., 2022] and BDC-Adapter [Zhang et al., 2023] apply BDC in single-modality, image-based few-shot classification, **none** address the cross-modal alignment challenge of video and text. Our approach goes beyond the predominantly used cosine similarity, demonstrating how BDC effectively captures statistical dependence between two distinct modalities. We also appreciate that **Reviewer GB1s recognized this novelty**, calling it the ***"novel integration of BDC into the CLIP framework."***

2) **BDC-Based Temporal Adapter for Local and Global Tokens.** Our second contribution is **a temporal adapter tailored to BDC representations** that leverages both local (patch/word) tokens and global ([CLS]/[EOS]) tokens to adapt CLIP to video recognition. To the best of our knowledge, no prior work has applied BDC to modeling relationships among local tokens for CLIP-based video recognition.
If such related work exists, we would welcome further references to ensure comprehensive coverage. Given these points, **we respectfully request a reconsideration of our paper's novelty,** which combines BDC-based cross-modal alignment with a temporal adapter that captures fine-grained token interactions for video action recognition.

> ### Q2: The motivation of this paper is not clearly articulated. The authors claim that previous methods rely on cosine similarity and only use global tokens. However, this is not entirely accurate, as many existing works already explore different similarity computations and incorporate local tokens. The paper does not sufficiently clarify this issue.

Thank you for noting this issue. As we state in our Abstract (Lines 21-23) and Section 1 (Lines 57-71), most existing methods still rely on cosine similarity and focus on global tokens. To our best knowledge, **for CLIP-based video recognition,** *OST (Chen et al., 2024) is the only approach that departs from cosine similarity by using optimal transport*, but it aligns **frame-level [CLS] tokens with sentence-level [EOS] tokens rather than local (patch/word) tokens,** and its primary goal is to enhance textual descriptors. We have not encountered prior CLIP-based video recognition works that leverage local tokens for cross-modal alignment, but would appreciate any pointers if they exist. It is worth noting that **Reviewer LE28 affirmed that our *"motivation is clear and intuitive."***

> ### Q3: The paper states that BDC can capture both linear and nonlinear correlations, but it is unclear what exactly is meant by "linear" and "nonlinear" in this context. How do "linear" and "nonlinear" correlations correspond to specific methods in the proposed framework? There is a lack of experimental validation to demonstrate the advantage of capturing both linear and nonlinear correlations.

Thank you for raising these points.
**We address them in detail in (and kindly refer to) our response to Q1 from Reviewer GB1s,** where we explain *from the statistical perspective* the distinction between linear and nonlinear correlations (i.e., cosine similarity vs. BDC) and provide both theoretical rationale and qualitative evaluations (e.g., scatterplots, density contours, and heatmaps) to illustrate BDC's ability to capture more complex dependencies. Our extensive experiments in Section 4 further show that capturing complex dependencies--including linear and nonlinear correlations--yields superior performance compared to strong baselines (i.e., VIFI-CLIP and TC-CLIP) based on cosine similarity, which can only model linear correlations.

---

Rebuttal Comment 1.1:

Comment: Thank the authors for the response. Some of my concerns have been addressed. However, there are still some issues remaining. For example, in the response to reviewer GB1s, the term "linear and nonlinear correlations" is not accurate enough. I cannot quite grasp what is meant by "nonlinear correlation"—my understanding is that it refers to the correlations between two sets of tokens. Regarding the novelty of the paper, the integration of BDC into CLIP seems more like an engineering improvement rather than a theoretical innovation. I tend to give a borderline score (between 2 and 3), but since ICML only has a weak accept option, I have raised the score to 3.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer oB1H,

Thank you for your thoughtful feedback and for raising your score. We're pleased that some of your concerns have been addressed and appreciate the chance to clarify the remaining points.

> ### Q1': For example, in the response to reviewer GB1s, the term "linear and nonlinear correlations" is not accurate enough. I cannot quite grasp what is meant by "nonlinear correlation"--my understanding is that it refers to the correlations between two sets of tokens.
We appreciate your perspective and would like to clarify that our intended meaning encompasses a broader concept. From a statistical perspective:

* **Linear correlation** measures the degree to which two random variables (RVs) increase or decrease together in a linear manner—commonly quantified by the Pearson Correlation Coefficient (PCC). If the joint distribution is Gaussian, which features elliptical density contours, PCC fully characterizes their linear dependence.
* **Nonlinear correlation** encompasses any statistical dependence beyond what PCC can capture, such as higher-order or non-monotonic relationships. In non-Gaussian distributions, these dependencies are often nonlinear.

As formalized in (Zhelezniak et al., 2019), **token similarity can be measured through statistical correlations.** Specifically, each token embedding can be viewed as a sample of observations from an RV, and cosine similarity (CS) is practically equivalent to PCC. Notably, CS/PCC performs optimally under linear or Gaussian assumptions. **However, our scatterplots clearly show that correlations between visual and textual tokens can be complex—nonlinear and non-Gaussian (refer to [Figure R1](https://anonymous.4open.science/r/rebuttal-D15F/Figure_R1.jpg)).** In such scenarios, CS/PCC, limited by its linear nature, inherently fails to capture these richer statistical dependencies.

Therefore, **existing CLIP adaptations** that rely on CS between the global textual [EOS] token $\mathbf{w_0}$ and the visual [CLS] token $\mathbf{p_0}$ across frames cannot effectively model nonlinear correlations. Our **BDC-CLIP** targets fine-grained vision-language alignment by measuring the similarity between all textual tokens $S_{\text{txt}} = \\{\mathbf{w_0}, \ldots, \mathbf{w_M}\\}$ and all visual tokens of one frame $S_{\text{img}} = \\{\mathbf{p_0}, \ldots, \mathbf{p_N}\\}$. Extending the framework of Zhelezniak et al. (2019), we view token embeddings $S$ as a collection of $d$-dimensional samples from a set of scalar RVs $R = \\{W_0, \ldots, W_M\\}$. Accordingly, **the similarity between $S_{\text{txt}}$ and $S_{\text{img}}$ can be naturally measured by the correlation between $R_{\text{txt}}$ and $R_{\text{img}}$.** BDC (Szekely & Rizzo, 2009) provides a rigorous, general-purpose metric capable of capturing any form of statistical dependence—linear, nonlinear, Gaussian, or non-Gaussian—thus overcoming the inherent limitations of CS/PCC.

> ### Q2': Regarding the novelty of the paper, the integration of BDC into CLIP seems more like an engineering improvement rather than a theoretical innovation.

We recognize that integrating BDC into CLIP is not a theoretical innovation. However, our work ***introduces two significant contributions:***

* **First Integration of BDC into Text & Visual Encoders for Vision–Language Alignment.** By replacing the ubiquitous cosine similarity with BDC in a foundation model like CLIP, we demonstrate the advantages of advanced statistical metrics for multimodal matching.
* **A Temporal Attention Tailored for BDC Representations.** We propose a custom temporal adapter that operates on BDC matrices using all tokens for video recognition, differing substantially from prior approaches that rely on temporal attention using only global tokens.

Additionally, **our BDC-CLIP achieves state-of-the-art performance** across zero-shot, few-shot, base-to-novel, and fully supervised settings. We believe **these contributions are non-trivial to the community** and could spark broader interest in exploring alternative metrics for multi-modality alignment in foundation models.

Thank you once more for your time and valuable feedback. We trust these clarifications resolve your remaining concerns.

Sincerely,
The Authors
Summary: This paper introduces BDC-CLIP, a framework designed to adapt CLIP for video action recognition by using Brownian Distance Covariance (BDC). The authors claim that traditional methods, relying on cosine similarity on global tokens, lack the capacity to capture complex spatio-temporal relations in video data. Their proposed solution overcomes these limitations by leveraging BDC, which captures both linear and nonlinear relationships among all visual and textual tokens. Through extensive experimentation, the authors demonstrate state-of-the-art performance across various scenarios.

## update after rebuttal
I agree with the observations made by the other reviewers. The authors have clearly described their proposed method and provided detailed experimental results. However, the theoretical explanation and the visual evidence supporting the benefits of the proposed approach remain relatively weak. That said, the rebuttal addressed many of my initial concerns, and I appreciate the authors' efforts in clarifying their contributions and presenting additional analysis. If the authors can incorporate stronger theoretical insights and more comprehensive visualizations in the final version of the paper, it would significantly strengthen the paper. At this stage, I maintain my Weak Accept recommendation.

Claims And Evidence: The primary claim of this paper revolves around the effectiveness of using Brownian Distance Covariance (BDC) for video-language alignment. This claim is supported by the ablation study in Table 6. However, some important details lack sufficient evidence. In the original statement: "BDC can capture both linear and nonlinear correlations, enabling it to model the complex dependencies that exist between video and language embeddings." It is necessary to clarify: What exactly do linear and nonlinear correlations refer to in this context? How do they specifically manifest in video action recognition tasks?
What qualitative or quantitative evaluation methods can be used to assess them?

Methods And Evaluation Criteria: Yes, the method and evaluation criteria make sense.

Theoretical Claims: The authors leverage known theoretical foundations of Brownian Distance Covariance. However, their usage remains heuristic; no novel theoretical contributions or proofs are provided. Therefore, no theoretical analysis was performed or required for validation.

Experimental Designs Or Analyses: The experimental designs make sense. They follow previous works.

Supplementary Material: The supplementary material was reviewed, covering the detailed experimental setup and further experiments.

Relation To Broader Scientific Literature: The paper situates itself within recent trends of adapting image-language pretrained models for video understanding tasks. The use of BDC for video-language alignment is indeed novel in this context.

Essential References Not Discussed: Essential related works are cited and discussed.

Other Strengths And Weaknesses:
Strengths:
1. Comprehensive evaluation compared with previous methods, plus an ablation study
2. The novel integration of BDC into the CLIP framework
Weaknesses:
1. Marginal empirical improvements relative to computational overhead: The improvements shown over state-of-the-art baselines, while consistent, remain modest. Given the additional computational complexity and overhead introduced by integrating and computing BDC matrices, the incremental improvement may not be sufficient justification for adoption in real-world scenarios where computational efficiency is crucial.
2. Theoretical insights or deeper conceptual understanding are marginal or missing. First, a Preliminary Knowledge section is needed to introduce Brownian Distance Covariance (BDC).
Following that, a thorough analysis of the theoretical motivation and insights should be provided, explaining the advantages of using BDC in video action recognition, how it captures non-linear relationships, and why non-linear relationship modeling is crucial for video action recognition tasks.

Other Comments Or Suggestions: A deeper analysis focused on interpretability, explaining why exactly BDC works better in practice, is strongly recommended.

Questions For Authors: None

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Dear Reviewer GB1s,

We sincerely thank you for providing constructive and insightful comments. Especially, we are grateful for your positive feedback, including **"The *novel* integration of BDC into the CLIP framework"** and **"Comprehensive evaluation compared with previous methods and ablation study,"** alongside **"The authors demonstrate state-of-the-art performance across various scenarios."**

> ### Q1: What exactly do linear and nonlinear correlations refer to in this context? How do they specifically manifest in video action recognition tasks? What qualitative or quantitative evaluation methods can be used to assess them?

Thanks for raising these insightful concerns.

1) **Statistical perspective on linear & nonlinear correlations**

* **Cosine Similarity (CS) $\approx$ Pearson Correlation Coefficient (PCC).** As formalized in (Zhelezniak et al., 2019), a token embedding can be viewed as a sample of observations from some scalar random variable (RV), and CS is practically equivalent to PCC, which can capture only linear correlations between RVs.
* **Current CLIP-Based Methods.** Most existing adaptations apply CS on global tokens ([CLS]/[EOS]) across frames, only modeling linear dependencies between two scalar random variables (one representing each token). This coarse alignment often overlooks more complex, fine-grained cues.
* **BDC-CLIP.** We measure the similarity between the set of all textual tokens $S_{\text{txt}}=\\{\mathbf{w_{0}},\ldots, \mathbf{w_{M}} \\}$ and all visual tokens of one frame $S_{\text{img}}=\\{\mathbf{p_{0}},\ldots, \mathbf{p_{N}} \\}$. Extending the statistical framework of Zhelezniak et al. (2019), we view the textual embeddings $S_{\text{txt}}$ as a set of samples of $d$ observations from some theoretical set of scalar RVs $R=\\{W_0,\ldots, W_M\\}$. BDC can quantify any kind of statistical dependency between the two sets of RVs $R_{\text{txt}}$ and $R_{\text{img}}$ (Szekely & Rizzo, 2009).
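To make the contrast concrete, the following toy example (an illustration of ours, not taken from the paper) shows PCC missing a purely quadratic dependence that the sample distance correlation detects:

```python
import numpy as np

def dist_corr(x, y):
    """Biased sample distance correlation (Szekely & Rizzo, 2009) for 1-D arrays."""
    a = np.abs(x[:, None] - x[None, :])                  # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double-centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                               # squared distance covariance
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = x ** 2                       # purely nonlinear (quadratic) dependence
pearson = np.corrcoef(x, y)[0, 1]
print(f"Pearson: {pearson:.3f}, dCor: {dist_corr(x, y):.3f}")
# Pearson is near zero here, while distance correlation is clearly positive.
```

Here `dist_corr` uses simple double-centering of the pairwise distance matrices, the textbook estimator; it is a sketch for intuition rather than our implementation.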
2) **Manifestation in Video Action Recognition**

* Human actions exhibit rich contextual information across both spatial and temporal dimensions, involving dynamic interactions among people, objects, and the environment.
* CS coarsely measures linear associations between the two modalities via global tokens, so it may miss crucial details unless they lie along a single linear direction of the embedding space.
* BDC, however, captures subtler interactions among fine-grained elements between language and vision, which can be highly nonlinear in the common embedding space.

3) **Qualitative Evaluation**

* We give **scatterplots and density contours** of some example visual vs. textual tokens; **kindly see [Figure R1](https://anonymous.4open.science/r/rebuttal-D15F/Figure_R1.jpg).** The plots show complex, nonlinear relations and clearly non-Gaussian distributions, revealing that nonlinear relation modeling is crucial and that BDC can better capture these complex dependencies.
* We provide **heatmaps of example text-video frame pairs** for some action categories. These heatmaps highlight both key words (in shades of green) and salient spatial regions, evolving over time; **kindly see [Figure R2](https://anonymous.4open.science/r/rebuttal-D15F/Figure_R2.jpg).** The visualizations suggest BDC-CLIP can learn complex, fine-grained multimodal context in space, time, and language.

**Kindly refer to the 2nd paragraph of Section 1 in our main paper for analysis of why nonlinear relation modeling is crucial for video action recognition.**

> ### Q2: Marginal Empirical Improvements Relative to Computational Overhead.

We appreciate the concern. We would like to emphasize that BDC-CLIP achieves substantial improvements over SOTA methods while maintaining a competitive computational profile.
Specifically, BDC-CLIP significantly outperforms TC-CLIP: for zero-shot recognition, it achieves +4.7% on HMDB-51 and +1.5% on UCF-101; for the base-to-novel task, the gaps in harmonic mean (HM) are 2.6%, 2.0%, and 2.1% on HMDB-51, UCF-101, and SSv2, respectively. Meanwhile, BDC-CLIP uses fewer parameters (0.99x) and achieves higher throughput (1.12x), with only a slight increase in GFLOPs (1.04x). Overall, these gains justify BDC-CLIP’s use in real-world scenarios.

> ### Q3: First, a Preliminary Knowledge section is needed to introduce Brownian Distance Covariance (BDC). Following that, a thorough analysis of the theoretical motivation and insights should be provided.

Thanks for the thoughtful suggestions. In the revised paper, we will add a separate section, *in the Appendix (due to the page limit in the main paper),* introducing background knowledge on BDC (Szekely & Rizzo, 2009). Then, we will highlight BDC’s theoretical advantages over PCC, which is practically equivalent to cosine similarity. Also, we will integrate the response to Q1, making clear what linear and nonlinear correlations mean in CLIP-based video recognition, while providing qualitative evaluation showcasing the importance of learning complex relations.
Falcon: Fast Visuomotor Policies via Partial Denoising
Accept (poster)
Summary: The paper presents Falcon, an innovative approach that accelerates diffusion-based visuomotor policies without sacrificing their performance. Conventional diffusion policies rely on multiple denoising steps, which can hinder real-time decision-making. Falcon addresses this issue by exploiting the sequential dependencies among actions, enabling the denoising process to start from partially denoised actions rather than from a standard normal distribution. This approach reduces the number of required sampling steps, improving inference speed without the need for extra training. Claims And Evidence: Overall, the claims made in the Falcon paper are clearly articulated and largely supported by experimental evidence provided by the authors. The paper's main claims revolve around its ability to achieve accelerated inference speed through partial denoising, while maintaining performance and multimodal expressiveness. The authors suggest Falcon is promising for real-world robotics. However, all validations are conducted exclusively in simulation environments. Real-world robot experiments are absent. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria presented in the Falcon paper generally make sense for the stated problem of accelerating diffusion-based visuomotor policies for robotic tasks. The authors evaluate Falcon on diverse and representative benchmark datasets, each suitable for validating different aspects of visuomotor policy performance. While the current evaluation criteria are robust and appropriate, the authors could further strengthen the evaluation: Include at least limited real-robot experiments to demonstrate practical utility, robustness, and generalization beyond simulated environments. Maintaining a buffer of previously denoised actions could significantly increase memory consumption. This may limit practical deployment in resource-constrained scenarios. 
Theoretical Claims: The derivations and mathematical transformations in the paper appear to be correct, and consistent with existing literature on diffusion-based methods. Experimental Designs Or Analyses: Overall, the experimental designs and analyses are sound and valid, and align well with standard practice in evaluating diffusion-based visuomotor policies. The chosen benchmark environments and evaluation criteria are appropriate. Additionally, the baseline methods (DDPM, DDIM, DPMSolver) are well-selected and relevant, ensuring fair comparisons. There are some limitations: Experiments were conducted exclusively in simulated environments, leaving uncertainty regarding real-world applicability and robustness. Computational efficiency is primarily measured by NFE, with no explicit runtime or memory consumption data provided. The proposed method relies on hyperparameters, requiring task-specific tuning for optimal results, potentially complicating practical adoption. Supplementary Material: Yes, I have reviewed Section C. Relation To Broader Scientific Literature: The contributions of the Falcon paper are closely related to several areas of existing research in diffusion models and robotic policy learning. Falcon builds explicitly upon the foundational formulations of Denoising Diffusion Probabilistic Models (DDPM, Ho et al., 2020) and Denoising Diffusion Implicit Models (DDIM, Song et al., 2020). Existing methods such as DDIM (Song et al., 2020a) and DPMSolver (Lu et al., 2022) accelerate diffusion sampling by reducing the number of iterative denoising steps. Falcon explicitly integrates with and further enhances these existing solvers. Diffusion policies (Chi et al., 2023) inherently model multimodal action distributions. Falcon retains this strength through careful initialization and partial denoising, contrasting with distillation methods that often reduce multimodality. 
Essential References Not Discussed: Yes, there is an important related work that the Falcon paper did not mention, namely the "Streaming Diffusion Policy (SDP)" proposed in the paper titled "Fast Policy Synthesis with Variable Noise Diffusion Models". SDP also proposes to accelerate diffusion-based policy generation through partial denoising, sharing conceptual similarities with Falcon. Specifically, SDP leverages the insight that partially denoised action trajectories can be generated significantly faster, enabling efficient inference without a large reduction in policy quality. Importantly, SDP is validated in both simulated and real-world robotic tasks, clearly demonstrating its effectiveness in realistic conditions. Other Strengths And Weaknesses: Falcon creatively combines existing ideas from diffusion models, Tweedie's formula, and sequential dependency exploitation. Although each component individually is known, the particular combination is novel and cleverly addresses a key practical challenge—slow inference speed. However, the paper does not sufficiently address the complexity, memory overhead, and scalability concerns introduced by buffer management. Furthermore, the approach lacks validation through real-world robotic tasks, as evaluations remain limited to simulated environments. Other Comments Or Suggestions: Figure 1 is somewhat complex and could benefit from simplification or clearer annotation. Questions For Authors: Falcon currently requires manual tuning of hyperparameters. Could you clarify how sensitive Falcon's performance is to these hyperparameters across diverse tasks, and discuss whether you've explored any automated or adaptive methods for tuning them? Could you clarify the computational and memory overhead of maintaining the latent buffer in practical deployment scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 3
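The review above credits Falcon with combining diffusion models and Tweedie's formula. Under the standard DDPM convention $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$, the one-step clean-sample estimate that Tweedie's formula provides can be sketched as follows (a generic illustration of the formula, not the paper's exact implementation):

```python
import numpy as np

def tweedie_x0(x_t, eps_pred, alpha_bar_t):
    """One-step posterior-mean estimate of the clean sample x0 from a noisy
    x_t, given the model's noise prediction eps_pred, under the DDPM
    convention x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)

# Sanity check: with a perfect noise prediction, x0 is recovered exactly.
x0 = np.array([0.3, -1.2, 0.8])
eps = np.array([0.5, -0.1, 1.0])
alpha_bar = 0.6
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
x0_hat = tweedie_x0(x_t, eps, alpha_bar)
```

Such a one-step estimate is what allows a partially denoised action to serve as a cheap reference without running the full reverse chain.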
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed and thoughtful feedback. We appreciate your recognition of Falcon’s novelty and practical motivation, and we are grateful for your constructive suggestions on deployment, which helped us strengthen the final version of our work.

**Real-World Validation**

To address the concern about practical utility, we conducted a **real-world dexterous grasping experiment** using a physical robotic setup (see **Fig. 1 of the [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**). Falcon, DDPM, and SDP were evaluated using the same Unet backbone with planning horizon $T_p=16$. As shown in **Table 1 on our [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**, all methods achieved 100% success, while **Falcon reduced runtime by 3.07x compared to DDPM**, and **outperformed SDP in both runtime (0.14s vs. 0.23s) and memory usage (3730MB vs. 3808MB)**, demonstrating strong real-world performance.

**Memory and Computational Overhead**

Falcon maintains a **small buffer of 20–50 actions** (**see Appendix D in our submission**), while SDP requires a much larger horizon × noise-level buffer (e.g., 16×100). As shown in **Table 7 of the [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**, Falcon’s memory overhead is **+12MB**, significantly lower than SDP’s **+26MB**, making Falcon more suitable for real-time and resource-constrained deployment.

**Hyperparameter Sensitivity and Usability**

Falcon introduces two scalar hyperparameters: $\epsilon$ (reuse threshold) and $\delta$ (exploration rate), which control the balance between inference speed and action accuracy. Our ablations (**Figure 3, left and middle in submission**) show that **Falcon performs robustly when $\epsilon \in [0.001, 0.01]$ and $\delta \in [0.1, 0.2]$**, across all tested environments. 
These parameters are intuitive, require **no retraining or fine-tuning**, and generalize well across tasks. We will include these practical guidelines in the revision.

**Comparison with Streaming Diffusion Policy (SDP)**

We thank the reviewer for highlighting SDP [1], which is indeed an important related work. We would like to clarify that SDP was already **cited and briefly discussed in Section 5 of our submission**. We appreciate the opportunity to strengthen this comparison further. While both Falcon and SDP leverage partial denoising for policy acceleration, Falcon offers **2 core advantages:**

- **Training-free and plug-and-play**: Falcon does **not require task-specific training** or modifications to the diffusion model. In contrast, **SDP must be retrained** on each task with a specially designed noise corruption scheme (see SDP Section 3.3) to enable recursive denoising. This limits SDP’s adaptability and ease of deployment.
- **Lower memory overhead**: Falcon uses a **threshold-based selection mechanism** to maintain a compact buffer (20–50 entries). SDP, by design, stores a full horizon × noise-level buffer (e.g., 16×100 entries). As shown in **Table 7 on our [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**, Falcon adds only **+12MB** of memory compared to DDPM, while SDP adds **+26MB**—a significant difference for resource-constrained settings.

In both **simulated tasks** (**Tables 4–8 in our [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**) and **real-world experiments** (**Table 1 in our [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**), Falcon achieves **comparable or higher success rates than SDP**, while also achieving **higher speedups and lower runtime** (e.g., 0.14s vs. 0.23s in real-world grasping). 
These results demonstrate that Falcon not only accelerates diffusion policies but also transfers effectively to physical robot systems. **Figure Improvement (Regarding Figure 1)** Thank you for the helpful suggestion. We have revised **Figure 1** to improve clarity and visual structure, now included in our updated submission and in the **[link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede) (Figure 3)**. The new diagram adopts a cleaner modular layout and color-coded elements to distinguish the two-stage mechanism: (1) reference action estimation and (2) threshold-based candidate selection. We believe this revision significantly improves readability. **Closing Remarks** We once again thank the reviewer for their valuable feedback. If our responses have addressed your concerns, we would be sincerely grateful for your consideration in revising the score. Please let us know if further clarification would be helpful—we would be happy to provide it. **References** [1] Fast Policy Synthesis with Variable Noise Diffusion Models --- Rebuttal Comment 1.1: Comment: The authors have addressed most of my questions. I will keep my current rating for this paper. --- Reply to Comment 1.1.1: Comment: We appreciate your efforts in reviewing our paper and rebuttal, and thank you for your valuable feedback!
Summary: This paper presents Falcon, Fast visuomotor policies via partial denoising. This approach improves diffusion policies by accelerating action generation while preserving the multimodal generation capability. Accelerations are mainly provided by using partial denoised actions to reduce denoising steps. Falcon is a training-free algorithm and can be plugged in to further improve efficiency on top of existing techniques. The proposed algorithm has been evaluated on three different simulated robotics datasets that expose different challenges and help evaluate the contributions of this paper. ## update after rebuttal I confirm my score. Authors addressed comments and added clarity and results to the original submission. Claims And Evidence: Claims are supported by clear and convincing evidence, through thorough analysis and grounding the proposed approach in relevant literature. Methods And Evaluation Criteria: The proposed method is supported by clear and convincing evaluations, through experiments on three robotics datasets, ablations of parameters, analysis of results presented in a clear way. Note: the two parameters (epsilon and delta) play a key role in obtaining speed and high success score. Is there an analysis of their relationship or a hypothesis regarding how to set guaranteed ranges for these two in order to achieve high scores and speed? Theoretical Claims: I reviewed the theoretical claims and equations, although I did not verify all mathematical derivations in detail. While it is possible that I may have overlooked some aspects, the theoretical claims appear to be correct to the best of my understanding. Note: alpha (page 2) is not defined - please add a definition or relevant reference. Experimental Designs Or Analyses: Experiments are meaningful and informative. Results are analyzed in a compelling way and ablations are useful and to the point. 
Supplementary Material: N/A Relation To Broader Scientific Literature: This work is relevant for the robotics community where diffusion policies can play an important role. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well presented and significant. Claims, theoretical contributions and experiments are presented in a clear way. It can have impact on robotics applications. Other Comments Or Suggestions: See note on definition of alpha. Questions For Authors: Please refer to the other comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful suggestion and for the generous score. We deeply appreciate your recognition of our work, and your feedback has helped us identify opportunities to clarify and strengthen our contributions. **On the Relationship Between $\epsilon$ and $\delta$** In Falcon, $\epsilon$ and $\delta$ play complementary roles in controlling the reuse of partial denoised actions. - **$\epsilon$ serves as a selection threshold** that determines whether a previously denoised action is temporally aligned enough to be reused. Smaller ε enforces stricter matching (favoring accuracy), while larger $\epsilon$ allows more aggressive reuse (favoring speed). - **$\delta$ is the exploration rate**, specifying the probability of discarding the reused action and instead sampling from standard Gaussian noise. This mechanism injects diversity into the action sequence and prevents over-reliance on suboptimal reuse. While our ablations (**Figure 3, left and middle in our submission**) analyze these parameters independently, we agree that their joint effect is important. In our experiments, we typically fix **$\delta = 0.1$ or $\delta = 0.2$** , and find that **$\epsilon \in [0.001, 0.01]$** works robustly across tasks. This setting consistently balances acceleration and success rate. We will explore their interaction more formally in future work. **On the Definition of $\alpha$** Thank you also for pointing out the missing definition of $\alpha$. It refers to the variance schedule in the forward diffusion process, as commonly defined in DDPM[1] or DDIM[2]. We will include the formal definition and cite the appropriate reference in the revised version. We once again thank the reviewer for their thoughtful and constructive feedback. Your comments have been instrumental in helping us refine the clarity and usability of our method. 
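The complementary roles of $\epsilon$ and $\delta$ described above can be sketched in a few lines (an illustrative reading of the mechanism; the function name, buffer layout, and Euclidean distance metric are our own assumptions, not the authors' implementation):

```python
import numpy as np

def select_start(buffer, reference, eps=0.005, delta=0.1, rng=None):
    """Choose a denoising starting point.

    With probability delta, explore: start from standard Gaussian noise.
    Otherwise, reuse the buffered partially denoised action closest to the
    reference estimate, provided it lies within threshold eps; a warm start
    lets the sampler skip the early denoising steps.
    """
    rng = rng or np.random.default_rng()
    if buffer and rng.random() >= delta:
        dists = [float(np.linalg.norm(c - reference)) for c in buffer]
        i = int(np.argmin(dists))
        if dists[i] < eps:
            return buffer[i], True               # warm start (reuse)
    return rng.standard_normal(reference.shape), False  # cold start
```

Smaller `eps` rejects more candidates (accuracy-leaning), larger `eps` reuses more aggressively (speed-leaning), while `delta` injects the diversity discussed above.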
**References** [1] Denoising Diffusion Probabilistic Models [2] Denoising Diffusion Implicit Models --- Rebuttal Comment 1.1: Comment: Thank you for addressing my and other reviewers' comments. --- Reply to Comment 1.1.1: Comment: Thank you for your time and effort! We really appreciate your support in our work!
Summary: This paper introduces Falcon, a method that accelerates the diffusion process by denoising partially noisy actions at each step using a one-step adaptive mechanism. Extensive experiments validate the speed improvement of the Falcon method on robot datasets. ## Update after rebuttal I'm satisfied with the authors' rebuttal which addresses most of my main concerns, including the comparisons with related baselines and the real-world experiments. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The provided propositions are clear and have been proven. Experimental Designs Or Analyses: The main experiments in the paper are conducted on the RoboMimic dataset, with comparatively fewer analyses on other datasets. I would have liked to see Falcon's performance on more complex datasets, such as Dexterous Grasping or Long-Horizon tasks. Supplementary Material: I have reviewed all supplementary materials, which detail Falcon’s algorithm under different diffusion solvers and provide additional experiments. Relation To Broader Scientific Literature: Acceleration of diffusion policies is crucial in the field of robotics, as lower latency benefits practical deployment in real-world robotic systems. Essential References Not Discussed: Streaming Diffusion Policy (ICRA 2025, https://arxiv.org/pdf/2503.04051), which also leveraged the insight that generating a partially denoised action trajectory is substantially faster than a full output action trajectory, and was released last year. Other Strengths And Weaknesses: **Strengths:** - The paper is well-written and easy to read. - The simulation experiments are sufficient and thorough. - The motivation is clear and reasonable, aiming to accelerate the diffusion denoising process using partially noisy actions. The corresponding mathematical tools support the proposed method. 
**Weaknesses:** - One of the major issues with this paper is that despite having a strict selection mechanism (Line 12 in Algorithm 1), the accuracy drop caused by the partial noise selected through this mechanism is significant. Specifically, in Table 1 (DDIM + Falcon), this not only raises questions about the effectiveness of the threshold mechanism but also calls for a more detailed explanation from the authors about why this limitation still results in performance degradation with carefully selected partial noise candidates. - This paper needs a more detailed comparison with existing work on streaming diffusion policies (SDP [1]), which uses step-wise partial noise without filtering as a buffer to achieve acceleration. A basic partial noise arrangement, such as the one in SDP where the noise level decreases as $t$ increases, should serve as a baseline to validate the effectiveness of the threshold mechanism. - There is a lack of more comparative results on the hyperparameter epsilon. The existing results indicate that the method is highly sensitive to this hyperparameter, which poses challenges for deployment on different tasks in the future. - As a robot learning paper, it is difficult to be highly convinced of the method's real-world effectiveness without physical experiments. - Please consider comparing with previous partial denoising-based methods [1][2]. If some of the works are considered concurrent works, feel free to disregard them. --- *References:* [1] Fast Policy Synthesis with Variable Noise Diffusion Models. https://arxiv.org/pdf/2406.04806 [2] Responsive Noise-Relaying Diffusion Policy: Responsive and Efficient Visuomotor Control. https://arxiv.org/pdf/2502.12724 Other Comments Or Suggestions: Overall, this paper presents an interesting idea of using partial noise to accelerate diffusion policies. However, the emergence of existing work (SDP) and the performance drop caused by the filtering mechanism reduce the novelty of this work. 
Nevertheless, I still look forward to the authors providing a more fundamental explanation of why Falcon causes uncontrollable performance degradation and potential solutions for hyperparameter sensitivity. Additionally, I would like to see more real-world experiments. I will adjust my future rating based on the improvements made. If I have misunderstood any points, I am open to discussion. Questions For Authors: Please carefully read the weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We appreciate your recognition of Falcon’s motivation and your suggestions for strengthening the comparison and evaluation. Below, we address each concern in turn.

**On the Effectiveness of the Threshold Mechanism (DDIM+Falcon)**

The performance drop in **Table 1** for DDIM+Falcon stems from the threshold $\epsilon$ being set relatively high in our original configuration to maximize speedup. This allowed partial denoised actions with weaker temporal dependency to be reused, leading to performance degradation. This does not indicate a flaw in the mechanism itself, but rather reflects a trade-off between efficiency and accuracy. To address this, we conducted additional experiments with smaller $\epsilon$ values. As shown in **Table 9 in the [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**, reducing $\epsilon$ (e.g., **0.005 → 0.001, 0.01 → 0.003**) significantly improved success rates in Transport **(0.74 → 0.81)** and Tool Hang **(0.51 → 0.54)**, while retaining acceleration. We also note that **Figure 3 (left)** in our main paper presents an ablation study showing this trade-off trend. For more task-specific results, **Figure 5 in the [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)** further supports this behavior. We will incorporate the updated results in the revised version.

**Comparison with SDP[1]**

We would like to clarify that **SDP was already cited and briefly discussed in Section 5 of the original submission**. We appreciate the opportunity to provide a more detailed comparison. Falcon offers **two key advantages** over SDP:

- **Training-free:** Falcon requires no retraining or noise-corruption design. In contrast, **SDP must be retrained per task** with a handcrafted noise corruption scheme (SDP Sec. 3.3), limiting flexibility. 
- **Less memory cost:** **Falcon’s thresholding mechanism allows it to maintain a small buffer (e.g. 50, see Appendix D)**, while SDP stores a full horizon × noise-level buffer (e.g., 16×100). As shown in **Table 7 on our [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**, Falcon adds only +12MB of memory compared with DDPM, but SDP adds +26MB.

Finally, we implemented SDP under our setting and provide comparisons in **Tables 4–8 on our [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**. Falcon achieves comparable task performance while offering higher speedups and lower memory overhead.

**On Sensitivity to $\epsilon$ and Deployment Concerns**

We agree that $\epsilon$ is a key hyperparameter, and we have already provided a detailed analysis of its impact in **Figure 3 (left)** in our submission and **Figure 5 in [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**. These results show that:

- Falcon’s performance remains stable within a **broad $\epsilon$ range (e.g., 0.001–0.01)** across tasks;
- The accuracy-speed trade-off is smooth and interpretable, making $\epsilon$ easy to configure in practice;
- The same $\epsilon$ values generalize well across different environments, reducing per-task tuning burden.

Although Falcon does rely on $\epsilon$, it is **training-free**, and introduces only a few parameters to adjust—a significantly lighter requirement compared to training-based acceleration methods. We believe this controllable trade-off offers a practical balance between deployment flexibility and performance.

**On Real-World Deployment Evaluation**

To assess real-world applicability, we conducted a **dexterous grasping experiment** with a physical robotic platform (**Figure 1 in [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**), composed of a RealMan 7-DoF arm, a PsiBot 6-DoF hand, and dual RealSense cameras. 
DDPM, SDP(DDPM), and Falcon+DDPM were deployed using the same Unet architecture, with $T_p=16,T_o=1,T_a=8$. As shown in **Table 1 on our [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**, all methods achieved 100% success rate, but **Falcon achieved 3.07× speedup** and outperformed SDP in both runtime **(0.14s vs. 0.23s)** and memory consumption **(3730MB vs. 3808MB CPU)**. These results confirm that Falcon not only accelerates diffusion policies in simulation but also transfers efficiently to real-world robotic execution. **Others** We consider Paper [2] as concurrent works. **Closing Remarks** We once again thank the reviewer for the detailed comments. If our responses have addressed your concerns, we would be sincerely grateful for your consideration in revising the score. Please let us know if further clarification is needed. **References:** [1] Fast Policy Synthesis with Variable Noise Diffusion Models [2] Responsive Noise-Relaying Diffusion Policy --- Rebuttal Comment 1.1: Comment: Dear authors, I have read the comments of each reviewer and checked the rebuttal file very carefully. I am truly impressed and grateful that the authors were able to address all concerns—including those regarding real-world experiments—in such a short timeframe. I understand how much effort and dedication this must have required, as preparing such comprehensive responses under tight deadlines is both mentally taxing and time-consuming. The authors have fully resolved my concerns, and I will increase my score without hesitation. Although the rebuttal satisfactorily addresses the current issues, I would like to highlight a few points for further discussion that could help refine the final version of this paper and enhance its professionalism. 
**Suggestions about Threshold Mechanism:** As I expected, the proposed method demonstrates stronger capabilities than the baselines after the ε is carefully selected, which is reasonable given its carefully designed adaptive mechanism. The results show that smaller values of ε (~0.00X) consistently yield better performance. However, this raises an important question (which I also mentioned in Weakness 3): the selection of ε remains somewhat difficult to control precisely. The results suggest that an optimal ε must be determined for each individual task, which would require extensive manual pre-testing if the number of downstream tasks is large. To address this, I offer the following suggestions: - Investigate whether a more universal ε can be derived—one that accommodates all candidate actions while maintaining sufficient performance, from a theoretical perspective. For instance, integrating the probability density function (i.e., the area) uniformly could ensure that the gap between the actual action and the reference action remains within an acceptable range. - Explore the development of a more sophisticated ε-selection algorithm, such as one that adapts dynamically based on the observation or robot state input. - (A small suggestion for future work) Extend this mechanism to more generalized diffusion policies (e.g., RDT or π-zero, though the latter is based on flow matching) to eliminate the need for per-task ε selection. **Suggestions about Real-world Experiments:** I sincerely appreciate the authors’ efforts in conducting real-world experiments within such a constrained timeframe. However, I noticed one additional issue in the rebuttal: the reported success rates for all methods are 100%. This may give the impression that the experimental tasks are overly simplistic, potentially obscuring the algorithm’s ability to differentiate itself from baselines. 
Additionally, the real-world results show only marginal differences in acceleration (including speed and memory usage) between SDP and Falcon, which might be negligible in practice. To strengthen the paper, I suggest incorporating more complex real-world tasks in future versions. This would better demonstrate Falcon’s superior performance not only in accuracy but also in computational efficiency. --- Once again, I commend the authors for their diligent work and thoughtful revisions. I am confident that addressing these remaining points will further elevate the impact and clarity of this already impressive research. I look forward to seeing the final version and the continued advancements in this direction. Best, Reviewer 4p76 --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the encouraging follow-up and the generous score increase. To ensure clarity, we have uploaded **new results and figures** to our supplementary site: https://anonymous.4open.science/api/repo/506Falcon/file/material2.pdf?v=6a54d6d0 Below, we respond to your suggestions: **Generalization to RDT (Foundation Diffusion Policy)** To demonstrate Falcon’s compatibility with task-generalizable diffusion models, we applied it to RDT [3] (a pretrained VLA model), integrating Falcon into RDT’s DPMSolver++ without retraining. As shown in **Table 10, Fig. 6, and Fig. 7 of [link2](https://anonymous.4open.science/api/repo/506Falcon/file/material2.pdf?v=6a54d6d0)**, Falcon significantly accelerates inference (**31× and 34× vs. DDPM**) **on PickCube and PushCube tasks** while preserving success rate, showing strong generalization. 
------

| Method | Task | Success Rate | NFE | Time(s) | Speedup | GPU MEMS (MB) |
| ------------------------------ | -------- | ------------ | ---- | ------- | ------- | ------------- |
| DPMsolver++ (5 steps) | PickCube | 0.75 | 5.00 | 0.08 | 20× | 15479 |
| DPMsolver++ (5 steps) + Falcon | PickCube | 0.75 | 3.22 | 0.04 | 31× | 15483 |
| DPMsolver++ (5 steps) | PushCube | 1.00 | 5.00 | 0.08 | 20× | 15479 |
| DPMsolver++ (5 steps) + Falcon | PushCube | 1.00 | 2.91 | 0.05 | 34× | 15483 |

**Table 10**: Falcon is evaluated with 20 rollouts on the ManiSkill benchmark ($T_p=64, T_a=32, T_o=2, \epsilon=0.02,|\mathcal{B}|=2$). Speedup is relative to 100-step DDPM.

------

**More Challenging Real-World Evaluation**

Building on your suggestion, we conducted an additional complex real-world experiment involving **precise object insertion (see Fig. 1 of [link2](https://anonymous.4open.science/api/repo/506Falcon/file/material2.pdf?v=6a54d6d0))**. The robot must insert a **square stick** into a tall **chip can**—a task requiring accurate 3D alignment. Even minor errors in angle or height result in failure. We trained DDPM and SDP on 50 human demonstrations, and applied Falcon on top of the trained DDPM (no retraining needed). Falcon matched DDPM in success rate (90%) while delivering **2.86× faster inference** (see **Table 11** and **Fig. 8** in the **[link2](https://anonymous.4open.science/api/repo/506Falcon/file/material2.pdf?v=6a54d6d0)**). 
------

| Method | Speedup | NFE | Sampling Time per action (s) | GPU MEMS (MB) | Success rate |
| --------------- | --------- | ---------------- | ---------------------------- | -------------- | ------------ |
| DDPM | 1.00× | 100.00 ± 0.00 | 0.43 ± 0.01 | 3735.76 ± 0.48 | 90% |
| SDP(DDPM) | 1.95× | 50.00 ± 0.00 | 0.22 ± 0.01 | 3743.28 ± 1.26 | 85% |
| **Falcon+DDPM** | **2.86×** | **25.57 ± 7.10** | **0.15 ± 0.12** | 3731.50 ± 5.08 | 90% |

**Table 11**: Each entry is evaluated with 20 rollouts in the mean $\pm$ standard deviation format. Falcon is set with $T_p=32, T_a=16, T_o=1, \epsilon=0.02, |\mathcal{B}|=20$.

------

**On $\epsilon$ Selection Strategy**

We deeply appreciate your suggestions regarding the difficulty of manually tuning $\epsilon$. In practice, we observed that values of $\epsilon \in [0.001, 0.05]$ yield good performance across tasks (see **Fig. 5 in [link2](https://anonymous.4open.science/api/repo/506Falcon/file/material2.pdf?v=6a54d6d0)** and **Fig. 3 in the submission**), with a predictable trade-off: **larger $\epsilon$ increases speedup**, while **smaller $\epsilon$ favors accuracy**. To ease this tuning process, we recommend a **binary search strategy**—given the monotonic behavior of performance with respect to $\epsilon$, this approach has **logarithmic time complexity** and requires only a small number of trials. Since Falcon is **training-free**, this tuning is lightweight and incurs minimal cost in practice. We fully agree with the importance of automating $\epsilon$ selection, and your suggestion has inspired us to pursue more **adaptive strategies** that leverage observation statistics or score distributions during inference. Due to the limited rebuttal timeframe, we were unable to fully implement and test these ideas, but we are actively exploring this direction as part of our future work and will continue extending Falcon to more complex real-world tasks.
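The binary search over $\epsilon$ described above can be sketched as follows. This is a minimal illustration, not the authors' code: `success_rate` is a hypothetical evaluator standing in for actual Falcon rollouts, and is assumed to be monotonically non-increasing in $\epsilon$.

```python
# Hypothetical sketch of binary-search tuning for epsilon.
# `success_rate(eps)` stands in for running Falcon rollouts at a given eps;
# it is assumed monotonically non-increasing in eps (larger eps = faster but
# less accurate), matching the trade-off described in the rebuttal.

def tune_epsilon(success_rate, target, lo=0.001, hi=0.05, tol=1e-4):
    """Find the largest eps in [lo, hi] whose success rate stays >= target."""
    if success_rate(lo) < target:
        return None  # even the most conservative eps misses the target
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if success_rate(mid) >= target:
            lo = mid   # mid is acceptable; try a larger (faster) eps
        else:
            hi = mid   # mid degrades accuracy; shrink the interval
    return lo

# Example with a toy monotone success-rate curve (boundary at eps = 0.01):
eps = tune_epsilon(lambda e: 1.0 - 10.0 * e, target=0.9)
```

Because each iteration halves the search interval, only about $\log_2((hi - lo)/\mathrm{tol}) \approx 9$ rollout evaluations are needed here, which is what makes the tuning lightweight in practice.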
We are grateful for your thoughtful feedback, which helped us better articulate and extend the applicability of Falcon. If our updates have resolved your remaining concerns, we would greatly appreciate your further support and score reconsideration. **References** [3] RDT-1B: a diffusion foundation model for bimanual manipulation
Summary: The authors propose a method to speed up inference with diffusion policies using a scheme where the denoising chain for action sequences is initialized to a partially noised sequence predicted from a previous timestep. The proposed method is supposed to preserve the multimodality of the diffusion policy while achieving anywhere from 2 to 7x speedup. The authors evaluate their method on a variety of simulated robotics tasks. Claims And Evidence: The claims regarding significant speedup are clear. However, ultimately I do not find the proposed method convincing. The authors motivate their method by claiming that other more principled approaches to speed up diffusion inference, such as ODE solvers, suffer from numerical discretisation errors, and that distillation techniques cannot represent multimodal policy distributions. These claims are never properly justified in the paper. The proposed method itself does not have much theoretical motivation either. Methods And Evaluation Criteria: The datasets and tasks which are evaluated are a good starting point, but there are some issues. Firstly, there is no evidence provided that the behavior policy for these datasets is actually multimodal and requires diffusion policies. Section 4.5 does not truly evaluate multimodality of policy distributions, since trajectory-level multimodality can even be achieved through per-action unimodal policies like Gaussians. It is important to have a Gaussian policy baseline to compare against. One of the important claims of the paper is that the proposed method preserves multimodality whereas other distillation methods do not, and this claim is never backed up. Theoretical Claims: There are no theoretical claims made by the paper. The proposed method is justified primarily through the authors' intuition and empirical evaluation.
Experimental Designs Or Analyses: As mentioned earlier, the tasks themselves are probably fine, as long as a Gaussian baseline is included to show that they actually have multimodal behavior policies. Table 2 compares number of function evaluations, and DPMSolver and DDPM are shown to use more evaluations than Falcon. However, we can see that the NFE for DPMSolver is already quite close to that of Falcon. No comparison is shown where the ODE in DPMSolver is integrated with fewer steps. Does it actually lead to loss in performance in these tasks? There is also no distillation method that is compared against (consistency distillation, progressive distillation, etc.) except a very limited evaluation in Fig 4; such methods in theory preserve the same marginals as the original diffusion model and so should see minimal performance drop. Supplementary Material: I only skimmed through the experimental results in the supplementary material. Relation To Broader Scientific Literature: The paper is related to fast sampling of diffusion models. Some important works in this class are ODE based sampling [1], ODE distillation [2, 3], and progressive distillation [4] among many others. [1] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, https://arxiv.org/abs/2206.00927 [2] Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning, https://arxiv.org/abs/2309.16984 [3] Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion, https://arxiv.org/abs/2310.02279 [4] Progressive Distillation for Fast Sampling of Diffusion Models, https://arxiv.org/abs/2202.00512 Essential References Not Discussed: The literature in fast diffusion sampling is vast, and for flow models there are even more. The authors should additionally cite progressive distillation at least, since it is a very popular and important paper in the field. Other Strengths And Weaknesses: ### Strengths 1.
The authors evaluate on many different tasks, which is appreciated ### Weaknesses 1. The proposed method is explained poorly in my opinion. I think this has more to do with the method not having sufficient theoretical justification. 2. Some confusing statements which are not really justified, such as "Moreover, distillation-based approaches are inherently training-intensive and task-specific, meaning they cannot generalize effectively to accelerate unseen tasks or adapt to diverse visuomotor applications." in the introduction. Other Comments Or Suggestions: Minor error: in line 432 the consistency policy reference instead points to consistency trajectory models, which is a different paper from the consistency policy paper (which is referenced correctly in section 4.5). Questions For Authors: 1. Why do you say distillation techniques are task specific and not generally applicable? Most diffusion distillation techniques are applicable to any diffusion model. 2. How do you know your proposed method requires less function evaluations than DPMSolver? You can manually adjust the number of steps for the ODE solvers using different integration schemes. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and detailed comments. We appreciate the constructive feedback and address the key concerns below.

**Clarifying the Motivation and Positioning of Our Method**

Our contribution is not to replace distillation or ODE solvers, but to offer a practical, training-free alternative that leverages temporal structure for faster inference in diffusion policies. While we discuss limitations of prior methods, Falcon is not motivated by them. As stated in our abstract, *“The core insight is that visuomotor tasks exhibit sequential dependencies between actions at consecutive time steps.”* Falcon leverages this via partial denoising to reduce sampling steps. We fully recognize that **distillation methods are powerful**, particularly in settings with large data and compute [2,4,5,6]. In such settings, they deliver substantial speedups while maintaining performance. We respect this line of work and do not position Falcon as a replacement. Instead, Falcon targets low-resource regimes. In these cases, distillation may be infeasible, and ODE solvers can suffer from large discretization errors under very low NFE [1]. Falcon addresses this gap by offering a plug-and-play solution that also integrates well with ODE solvers in low-step regimes **(Section 4.3)**. We acknowledge that our initial wording may have been overly strong regarding distillation. While methods like CP [2, **Sec. 5**] may face challenges in preserving multimodality, this is task-dependent. We will revise accordingly, and thank the reviewer for pointing this out.

**Lack of Theoretical Foundation**

Although Falcon lacks a full theoretical derivation, its **core components are grounded in well-established principles**. Falcon is built on the insight that in sequential decision-making, when past actions are strongly correlated with the current one, initializing from them can reduce sampling steps.
Falcon exploits this via reusing past denoised actions. Crucially, the choice of which past action to reuse is not heuristic. It is determined by Falcon’s **thresholding mechanism**, based on **Tweedie’s formula**, a foundational result in empirical Bayes theory. As noted in **Remark 1 of [7]**, Tweedie’s formula yields the Bayes-optimal posterior mean under Gaussian noise, guiding our selection of the cleanest available prior. This forms the theoretical core of our partial denoising module. Thus, while Falcon lacks a full pipeline of theory, its **main mechanism—Tweedie-based partial action reuse—is mathematically justified** and validated through strong empirical performance.

**On Expressing Multimodal Distributions**

1. The BlockPushing task has inherently multimodal expert trajectories (**BeT [3], Table 2**).
2. We added a **Gaussian baseline** using the same network. It fails on both goals **(p1: 0.02, p2: 0.01)**, while Falcon succeeds **(p1: 0.99, p2: 0.97)** (see **Table 3** in the [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)), showing the need for multimodal modeling.
3. We evaluated **Consistency Policy** on PushT and found it biased toward one mode, indicating that some distillation methods may struggle to retain multimodality (see **Figure 4** in the [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)).

**Comparisons with Reduced-Step DPMSolver**

To directly assess whether lower-step DPMSolver leads to performance degradation, we matched its NFE to that of DPMSolver+Falcon. Under this constraint, DPM-Solver* exhibited significant performance drops—**10.6%** in Square_ph, **35.6%** in Square_mh, and **6.8%** in Transport_ph—while Falcon maintained high success rates (**see Table 2 in [link](https://anonymous.4open.science/api/repo/506Falcon/file/material.pdf?v=69217ede)**).
This confirms that aggressive NFE reduction in ODE solvers can hurt performance, and Falcon helps mitigate this degradation. **Missing or Incorrect References** We will add the missing citation for Progressive Distillation [6], and correct the mistaken reference on Line 432. **Closing Remarks** We are grateful for the reviewer’s thoughtful comments, which helped us improve both the clarity and positioning of our work. If our responses have addressed your concerns, we would greatly appreciate a reconsideration of the score. Please let us know if further clarification is needed. **References** [1] PFDiff: Training-Free Acceleration of Diffusion Models Combining Past and Future Scores [2] Consistency Policy: Accelerated Visuomotor Policies via Consistency Distillation [3] Behavior Transformers: Cloning k Modes with One Stone [4] Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning [5] Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion [6] Progressive Distillation for Fast Sampling of Diffusion Models [7] Diffusion Posterior Sampling for General Noisy Inverse Problems --- Rebuttal Comment 1.1: Comment: I thank the authors for the response, however many of my original problems with the paper remain. I fundamentally do not think the contribution is significant, however I can raise the score to weak reject. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for the follow-up response. The concerns you raised—including **comparisons with distillation methods**, **the expressiveness of Gaussian baselines**, and **lower-step DPMSolver**—have been addressed through new experiments and revisions. 
Regarding the concern that our method is not convincing, we have conducted extensive evaluations including: (1) **comparisons with SDP[1] and Consistency Policy[2]**, (2) **Gaussian baseline and multimodal trajectory analysis**, (3) **two real-world robot experiments (dexterous grasping and insertion)**, and (4) **a vision-language-action foundation model RDT[3] experiment**. Several of these were also requested by other reviewers, and we are glad to report that they are now included in the updated version. Once again, we truly appreciate your feedback—and if our updates have addressed your concerns, we would be sincerely grateful for your reconsideration of the score.

------

All referenced experiments are available in our supplementary material: https://anonymous.4open.science/api/repo/506Falcon/file/material2.pdf?v=6a54d6d0:

**Distillation comparison**: see **Fig. 4**
**Gaussian baseline**: see **Table 3**
**Lower-step DPMSolver**: see **Table 2**
**SDP comparisons**: see **Tables 4–7**
**Two real-world robot experiments**: see **Figs. 1, 2, 8** and **Tables 1, 11**
**Vision-language-action foundation model experiment (RDT)**: see **Figs. 6, 7** and **Table 10**

------

References:
[1] Fast Policy Synthesis with Variable Noise Diffusion Models
[2] Consistency Policy: Accelerated Visuomotor Policies via Consistency Distillation
[3] RDT-1B: a diffusion foundation model for bimanual manipulation
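As a reference for the Tweedie-based thresholding discussed in the first rebuttal: under the standard variance-preserving (DDPM) noise-prediction parameterization—our notation here, not necessarily Falcon's exact one—Tweedie's formula gives the posterior mean of the clean action as

$$
\hat{x}_0 \;=\; \mathbb{E}[x_0 \mid x_t]
\;=\; \frac{x_t + (1 - \bar\alpha_t)\,\nabla_{x_t}\log p(x_t)}{\sqrt{\bar\alpha_t}}
\;\approx\; \frac{x_t - \sqrt{1 - \bar\alpha_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar\alpha_t}},
$$

where $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$ and the learned $\epsilon_\theta$ approximates $-\sqrt{1-\bar\alpha_t}\,\nabla_{x_t}\log p(x_t)$. This one-step estimate of $x_0$ is what makes it possible to score how "clean" a candidate prior action already is before deciding how many denoising steps it still needs.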
Autoencoder-Based Hybrid Replay for Class-Incremental Learning
Accept (poster)
Summary: In this paper, the authors propose a method named hybrid autoencoder (HAE) and a strategy named autoencoder-based hybrid replay (AHR). HAE is an autoencoder learnt with charged particle system energy minimization (CPSEM) equations and a repulsive force algorithm (RFA). This autoencoder is used both for decoding stored embeddings for replay and for encoding inputs for classification. AHR stores data embeddings rather than raw data. Claims And Evidence: The theoretical part was not described clearly. The experiment part is relatively reasonable, but more metrics should be added. Methods And Evaluation Criteria: The authors used several baselines on different benchmark datasets with accuracy. They also compared memory usage among methods. Theoretical Claims: This part is confusing, especially the messy notations. It is hard to find their definitions. For example, the authors stated that complexity was O(cte), but it is hard to find the meaning. They mentioned that their space complexity was O(0.1t). It is a very strange and unprofessional way to describe the complexity: O(0.1t) is O(t). Experimental Designs Or Analyses: The experiment design is relatively reasonable. As I mentioned above, more metrics should be considered, such as how performance drops as more tasks are learnt. Supplementary Material: I reviewed the appendix. Relation To Broader Scientific Literature: Storing embeddings rather than raw data is not new in continual learning. The idea of learning an autoencoder to reconstruct for replay makes it better. Essential References Not Discussed: There are some works that have the idea of storing embeddings, like “Skantze G, Willemsen B. Collie: Continual learning of language grounding from language-image embeddings[J]. Journal of Artificial Intelligence Research, 2022, 74: 1201-1223.” Other Strengths And Weaknesses: The strengths are the idea of learning an autoencoder to reconstruct stored embeddings, and the experiments, which include many baselines.
However, the description of the method and the notations are confusing. The experiment part should include more metrics. Other Comments Or Suggestions: See other parts. Questions For Authors: What is O(cte)? Is there any misunderstanding of O(0.1t)? Why did you just say the embedding size is 1/10 of the raw data? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We have carefully considered the points raised and we offer the following responses and clarifications:

### On the Theoretical Clarity

- **Regarding O(cte)**: The notation "O(cte)" was intended as shorthand for "constant time complexity" (O(1)). We agree this was unclear and will explicitly replace "O(cte)" with the correct, widely recognized notation **"O(1)"** in the revised manuscript.
- **Regarding O(0.1t)**: The notation "O(0.1t)" was deliberately chosen to emphasize that our hybrid replay method uses approximately 10 times less memory compared to standard non-hybrid replay methods. Such notation, indicating proportional improvements in complexity, is commonly used in professional literature and textbooks to highlight practical differences in resource usage.

### Incorporating Additional Evaluation Metrics

We commit to incorporating metrics such as **average forgetting**, **forward transfer**, **incremental confusion maps**, and **accuracy curves over time** to provide a better understanding of any biases (recency/primacy).

### Clarification on Hybrid Method Novelty

Indeed, hybrid replay strategies have been explored previously, and we cited relevant works like **i-CTRL** [1] (2022), **REMIND** [2] (2020), and its variant **REMIND+** [3] (2021) in our paper.
To provide clearer context and comparison, we will include the following comparative analysis:

| | Classification Approach | Quantization | Learning Scenario | Latent Space Representation | Architectural Simplicity |
| ---------- | ----------------------- | ------------ | ----------------- | --------------------------- | ------------------------ |
| **AHR** | Classification within the latent space of the encoder using Euclidean distance. | Easy integration possible, potentially improving performance. | Offline | Structured latent space (Lennard-Jones Potential). | Minimalistic and clean design. |
| **Remind** | Classification after decoding with cross-entropy loss. | Applies quantization to latent exemplars for compression. | Online | Unstructured latent space (CNN-based). | Complex architecture, complicating implementation and scalability. |
| **Remind+** | Classification after decoding with cross-entropy loss. | Uses quantization for feature compression. | Online | Unstructured latent space (autoencoder-based). | Complex architecture, hindering scalability. |
| **i-CTRL** | Classification within latent space using Euclidean distance. | Easy integration possible, potentially improving performance. | Offline | Structured latent space (Linear Discriminative Representation). | Minimalistic and clean design. |

### Additional Points and Clarifications

* **Relevant References:** We will discuss the relevant work by Skantze and Willemsen [4] to provide further context.
* **On Latent Space Dimension:** We clarify that the compression ratio varies depending on the dataset and the underlying network architecture used for feature extraction.
The specific compression ratios are:

| Dataset | MNIST | SVHN | CIFAR-10 | CIFAR-100 | miniImageNet |
| :-------------------- | :---: | :--: | :------: | :-------: | :----------: |
| Compression Ratio (%) | ≈ 40 | ≈ 10 | ≈ 10 | ≈ 10 | ≈ 10 |

Thank you very much for your thorough review and feedback. We respectfully invite you to review the comments from the other reviewers as well, who have positively acknowledged key strengths and contributions of our work. We hope this broader context might offer additional perspectives on the significance and novelty of our contributions, leading to a more balanced and constructive reassessment. We greatly value your insights and remain committed to addressing your concerns comprehensively.

**References**
[1] Incremental learning of structured memory, Tong et al., 2022.
[2] Remind your neural network to prevent forgetting, Hayes et al., 2020.
[3] Acae-remind for online continual learning, Wang et al., 2021.
[4] Collie: Continual learning of language grounding. Skantze et al., 2022.
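As an illustrative sketch of the hybrid-replay data flow discussed in this thread—store latent exemplars instead of raw inputs, decode them for rehearsal, and classify by Euclidean distance to class centroids in latent space—consider the following toy example. This is not the authors' implementation: the encoder/decoder here are a random linear projection and its pseudo-inverse standing in for a trained HAE, and all names are hypothetical.

```python
import numpy as np

# Toy stand-ins for HAE's encoder/decoder: a random linear projection and
# its pseudo-inverse. Only the data flow is illustrated, not the real model.
rng = np.random.default_rng(0)
D, d = 100, 10                      # raw dim vs. latent dim (~10x compression)
W = rng.standard_normal((d, D)) / np.sqrt(D)
encode = lambda x: x @ W.T
decode = lambda z: z @ np.linalg.pinv(W).T

# Store latent exemplars per class instead of raw data.
memory = {}

def store_exemplars(label, raw_batch):
    memory[label] = encode(raw_batch)          # shape (n, d) instead of (n, D)

def replay(label):
    return decode(memory[label])               # reconstruct for rehearsal

def classify(x):
    # Nearest class centroid in latent space, by Euclidean distance.
    z = encode(x)
    centroids = {c: zs.mean(axis=0) for c, zs in memory.items()}
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))

# Usage: two well-separated toy classes.
store_exemplars(0, rng.standard_normal((20, D)) + 5.0)
store_exemplars(1, rng.standard_normal((20, D)) - 5.0)
pred = classify(np.full(D, 5.0))
```

The point of the sketch is the memory accounting: each stored exemplar occupies `d` floats rather than `D`, which is where the roughly 10× reduction in replay-buffer size comes from.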
Summary: The paper proposes an Autoencoder-Based Hybrid Replay (AHR) strategy for class-incremental learning (CIL), addressing catastrophic forgetting (CF) and task confusion (TC) while reducing memory complexity. The core innovation is a Hybrid Autoencoder (HAE) that compresses exemplars into a latent space using a repulsive force algorithm (RFA) inspired by charged particle systems. This allows efficient storage and recovery of exemplars via a decoder designed for memorization. Extensive experiments validate the effectiveness of this method. Claims And Evidence: Most claims are generally supported by rigorous experiments and ablation studies. However, the paper claims that AHR is applicable to task-free CIL, but this is only briefly mentioned in Figure 2 and lacks relevant experimental proof. Methods And Evaluation Criteria: Yes, the proposed method makes sense for the issue at hand. Theoretical Claims: Yes; I have not found its theoretical claims to be incorrect. Experimental Designs Or Analyses: Experiments are comprehensive but have some shortcomings: 1. The baselines compared are somewhat outdated, with most of the methods from around 2020 and only one from 2023. What is the relative performance of AHR compared to the most recent methods? 2. Across the five datasets, the network architectures used are a dense network and ResNet32; how does AHR perform with ViT-based models? 3. Benchmarks use balanced class splits; performance on imbalanced data (common in real-world CIL) is untested. Supplementary Material: The supplementary material contains hyperparameters and experimental details and provides source code, which makes it reproducible. Relation To Broader Scientific Literature: The work effectively builds on prior CIL strategies and addresses the shortcomings of previous approaches. Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths: 1. Innovative combination of physically-inspired RFA and autoencoder provides new ideas for CIL. 2. Extensive experiments on different benchmarks have demonstrated the effectiveness of AHR. 3. The proposed AHR practically focuses on memory/computation efficiency and achieves good results. Weaknesses: 1. Referring to the questions posed in the Claims And Evidence and Experimental Designs Or Analyses sections. 2. The effect of some hyperparameters on the performance is missing, such as λ in Eq. 1. Other Comments Or Suggestions: None Questions For Authors: Please refer to the weaknesses. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough review and insightful comments. We are particularly grateful for the recognition of AHR's strengths. Below, we clarify and address issues pointed out by the reviewer: ### On Applicability to Task-Free CIL We'd like to clarify that **Figure 2** is intended solely as a *visualization* to illustrate the concept of a task within our **task-based CIL** framework. We do **not** claim that our current approach performs equally well in *task-free* settings. Nevertheless, due to the inherent **compression capabilities** of our proposed architecture, which facilitate the storage of larger and more diverse exemplars, it could potentially, with minor adjustments, be adapted for use in *task-free* settings. We will explicitly clarify this point in the revised paper and leave a detailed exploration of this possibility to **future work**. ### On the Choice and Recency of Baselines Our selection aimed to include **representative and relevant methods** from the CIL literature, particularly focusing on **hybrid replay strategies** where latent or compressed representations are stored. To this end, we included methods like **i-CTRL** [1] (2022), **REMIND** [2] (2020), and its variant **REMIND+** [3] (2021), which align closely with the concept of AHR. However, we remain **open to incorporating additional recent baselines** if the reviewer can point to specific, highly relevant examples where a direct comparison would be particularly insightful. Furthermore, we believe AHR's core contribution, using an **efficient hybrid replay**, is a concept potentially **orthogonal and complementary** to other advancements in CIL backbone architectures. One could envision integrating AHR's lightweight decoder mechanism with the latent representations of other contemporary CIL models to potentially enhance their performance under strict memory constraints, leveraging our memory-efficient replay approach. 
### On Performance with ViT Architectures Our current experiments utilize standard backbone networks like ResNet32 commonly employed in the CIL literature. This choice was primarily driven by the goal of ensuring **fair and direct comparability** with a wide range of existing CIL methods, many of which report results using these architectures. While we believe AHR's core mechanism, efficient exemplar storage, is **conceptually compatible** with features extracted by ViTs, rigorously evaluating this combination was beyond the scope of the current study due to the aforementioned reasons of comparability and computational cost. We will note the exploration of AHR with ViT backbones as a valuable avenue in the '**Limitations and Future Work**' section. ### On Performance with Imbalanced Data Distributions While our current experiments utilize standard balanced benchmarks for clear comparability, we believe AHR's core contribution, its highly **efficient hybrid replay mechanism**, is conceptually **orthogonal** to the challenge of class imbalance itself. Critically, AHR's ability to store **significantly larger and potentially more diverse sets of exemplars** within a fixed memory budget remains a powerful tool, regardless of the underlying class distribution. Since maintaining knowledge of past classes especially minority classes, in an imbalanced setting is vital, the enhanced replay capability offered by AHR is anticipated to be **advantageous** even under such conditions. However, we acknowledge that we have **not explicitly evaluated** AHR under imbalanced scenarios in the present work. Thoroughly investigating AHR's performance in such settings, potentially in combination with techniques specifically designed for imbalance, is indeed crucial. We recognize this and will explicitly identify the evaluation of AHR under **class imbalance** as an important area for **future investigation** in the '**Limitations and Future Work**'. 
### On Hyperparameter Sensitivity (λ)

Concerning the weighting factor λ in Equation 1, which balances the reconstruction and classification losses: while detailed ablation studies were omitted from the main paper due to space constraints, we performed sensitivity analyses during development. The results for the **CIFAR-10 5/2 split benchmark** demonstrate the impact of varying λ:

| λ Value | 0 | 20 | 40 | 60 | **80** | 100 | 120 |
| :---------------- | :---: | :---: | :---: | :---: | :-------: | :---: | :---: |
| Avg. Accuracy (%) | 42.03 | 63.93 | 70.51 | 73.70 | **77.12** | 76.53 | 72.26 |

We commit to including a more comprehensive sensitivity analysis for λ in the **appendix** of the revised paper. Thank you for reading our response.

**References**
[1] Incremental learning of structured memory via closed-loop transcription, Tong et al., 2022.
[2] Remind your neural network to prevent catastrophic forgetting, Hayes et al., 2020.
[3] Acae-remind for online continual learning with compressed feature replay, Wang et al., 2021.
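To make the role of λ explicit: assuming Eq. 1 takes the usual weighted-sum form (schematic notation, not necessarily the paper's exact symbols),

$$
\mathcal{L} \;=\; \mathcal{L}_{\text{recon}} \;+\; \lambda\,\mathcal{L}_{\text{cls}},
$$

the sweep above shows that dropping the classification term entirely (λ = 0) costs over 35 accuracy points relative to the best setting (λ = 80), while overweighting it (λ = 120) also degrades performance, so both objectives are needed.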
Summary: The paper tackles the problem of class incremental learning, where the model sees a sequence of tasks of different classes, and needs to adapt to them sequentially while minimizing catastrophic forgetting and task confusion. During testing, the model does not have access to the task ID. To solve this problem, the authors propose to train a novel autoencoder architecture that is used for both replay and classification. The contributions of the paper are as follows: * The authors propose a hybrid autoencoder, that is used both for computing the latent representation to store in memory and replay, and for classification. This autoencoder is coupled with a physics-inspired approach: charged particles system energy minimization, and repulsive force algorithm to incrementally add components to the latent space memory. The goal of this approach allows for the different classes to be away from each other in the latent space, allowing for a simple use of the Euclidean distance to the class centroids to classify samples during test. * These autoencoders are used in an approach that combines exemplar and generative replay ideas. As mentioned above, the approach stores exemplars from the latent space, reducing the memory footprint of the approach. When replay data is needed, the autoencoder is used to decode samples from the memory. Contrarily to generative replay, the decoder is trained for memorization and not to generate new samples, which hedges the approach against some of the drawbacks of generative replay. * The approach is tested on 5 benchmarks, and compared to 10 baselines, showing an improvement in most cases. The authors also conduct some ablation studies, mainly testing the importance of the use of the physics inspired approach to structure the latent space. ## Update after rebuttal The authors answered most of my questions and addressed most of my concerns. I also thank the authors for their reactivity during the rebuttal. 
With the added evaluations and additional experiments in the revised version, I am happy to increase my score to 3. Claims And Evidence: The paper's main claim is to achieve better performance in the class-incremental setting while reducing the memory footprint and keeping compute linear, as in competitor methods. The authors prove this claim empirically through extensive experiments on several datasets, comparing to a good number of baselines. While I have some comments on the choice of baselines and used benchmarks (see next sections), I think the evidence is clearly provided and relatively convincing. Methods And Evaluation Criteria: Strengths: * The whole approach is relatively novel to the best of my knowledge. * In particular, the use of the physics-inspired approach to structure the latent space is interesting and seems to add a significant effect. * The experiments are extensive. I particularly appreciated the ablation studies. Weaknesses: * The authors made the choice of not changing the class centroid positions. While this works well for the tested benchmarks, I think it is mainly due to the fact that classes are perfectly disjoint across tasks in these artificial benchmarks. In real applications, this might not be the case, and allowing for a more flexible adaptation of the latent space could not only be more generalizable, but also have other effects (e.g. help reduce the degradation of the decoder). * Regarding the benchmarks, despite their wide use in the literature, they are highly artificial and have a very limited representation of real-world applications. There are multiple attempts to propose alternative benchmarks for continual learning and related topics (e.g. meta-learning) that are more realistic. I will provide the references in the dedicated section below. They also consist of relatively short sequences. Recent works have indicated the impact of the sequence length on the model behavior (see reference below too).
* For evaluation, the authors base their analysis on accuracy at the end of the sequence only. It would be interesting to have a more granular approach, with metrics such as forgetting or forward transfer, to test if the approach has a recency or primacy bias, if it increases knowledge accumulation, etc ... Theoretical Claims: The paper is empirical in nature. The main theoretical contribution is in deriving the different objectives and algorithms that constitute the approach. Except for the first comment under Weaknesses in the previous section, I didn't detect any other issues. Experimental Designs Or Analyses: My main critique of the experimental design is the choice of benchmarks and metrics as explained above. As mentioned above, I appreciated the ablation studies that highlight the importance of different components of the approach. These results can be improved with additional tests. For example, it seems from the paper introduction and claims that the latent space size is fixed to 2.5 or 10% of the input size. If this is not the case, this should be made clearer. If it is the case, it would be interesting to test the impact of the latent space dimension on the results. Regarding presentation, the curves can be improved and made more readable. In particular, the choice of colors is not optimal. For example, the finetuning with replay exemplars baseline and the proposed approach (AHR-up) have the same color in figure 3. Supplementary Material: I checked the experimental details, and the extended literature review. Relation To Broader Scientific Literature: While the paper already includes a detailed literature review, there are some missing references as mentioned above. Nevertheless, the paper is relatively well situated in the literature. In particular, the comparison to several generative and exemplar replay approaches and strategies is interesting. Benchmarks: * Meta-Album: Multi-domain Meta-Dataset for Few-Shot Image Classification, Ullah et al. 
2022 * A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark, Zhai et al. 2020 * NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research, Bornschein et al. 2023 On the impact of sequence length * Challenging Common Assumptions about Catastrophic Forgetting, Lesort et al, 2023 Essential References Not Discussed: Regarding the idea of latent replay, and related baselines, I think the authors are missing an important reference: * Continual Learning with Foundation Models: An Empirical Study of Latent Replay, Ostapenko et al. 2022 Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: All of my questions are related to my previous comments: 1. Can the authors comment on the choice of not changing the class centroids? 2. How would the method behave and how should it be modified to be applicable in more realistic scenarios and benchmarks? 3. How does the method impact other continual learning metrics? 4. How does the latent space dimension impact the results? Code Of Conduct: Affirmed. Overall Recommendation: 3
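The centroid-based test-time classification described in the summary (Euclidean distance to class centroids in the latent space) can be sketched as follows; this is our own illustrative sketch, not the authors' code, and all names are hypothetical:

```python
import numpy as np

def nearest_centroid_predict(latents, centroids, class_labels):
    """Classify latent codes by Euclidean distance to stored class centroids.

    latents: (n, d) encoded test samples; centroids: (c, d) one row per class;
    class_labels: length-c sequence mapping centroid index to class id.
    """
    # pairwise squared distances (n, c) via broadcasting; argmin over centroids
    d2 = ((latents[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return [class_labels[i] for i in d2.argmin(axis=1)]
```

Because prediction only needs the centroids, the classifier itself requires no retraining as tasks arrive, which is part of the appeal of structuring the latent space so that classes stay well separated.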
Rebuttal 1: Rebuttal: We are impressed by the reviewer's high-quality feedback, which reflects a deep understanding of the field and engagement with our work. We particularly appreciate the fairness of their critique. In response, we offer the following clarifications: ### On the Choice of Not Changing Class Centroid Positions We acknowledge the importance of considering more realistic evaluation scenarios where task data may not be perfectly *class-disjoint*, in which case the idea of fixing class centroids (successfully employed in recent works [1]) is **suboptimal**. However, our decision to employ the **standard, well-established CIL benchmarks** (which typically feature *disjoint* classes) was primarily driven by the necessity of ensuring **fair, direct, and easily interpretable comparisons** with the large body of existing literature evaluated under these protocols. We commit to adding a dedicated discussion in the '**Limitations and Future Work**' section of our revised paper, where we state that while the proposed fixed-centroid approach demonstrates strong performance under the standard *disjoint* CIL setting, future work should investigate **adaptive centroid mechanisms** to optimize performance for *non-disjoint*, real-world data distributions [2,3,4]. We will also mention that in such realistic **non-disjoint** benchmarks, the decoder's degradation could significantly be **mitigated** because the overlap of classes across tasks could lead to the decoder **revisiting** older classes. ### On the Choice of Benchmarks and Sequence Length We acknowledge the **limitations of standard CIL benchmarks**, particularly their often *artificial nature* and relatively *short task sequences*. We thank the reviewer for providing references to **more realistic benchmarks** [2, 3, 4] and studies on the **impact of sequence length** [5]. 
We commit to incorporating a discussion of these limitations and citing these important works in the '**Limitations and Future Work**' section. ### Incorporating Additional Evaluation Metrics We will incorporate more **granular evaluation metrics** beyond final accuracy and agree that metrics such as **average forgetting**, **forward transfer**, **incremental confusion maps**, and potentially **accuracy curves over time** can provide better understanding of any potential biases (recency/primacy). We commit to including these evaluations in the **revised version** of our paper. ### Additional Points and Clarifications * **On Latent Space Dimension:** We clarify that the compression ratio varies depending on the dataset and the underlying network architecture used for feature extraction. The specific compression ratios and latent space sizes used in our experiments are summarized below: | Dataset | MNIST | SVHN | CIFAR-10 | CIFAR-100 | miniImageNet | | :--------------- | :---: | :--: | :------: | :-------: | :----------: | | Compression Ratio (%) | ≈ 40 | ≈ 10 | ≈ 10 | ≈ 10 | ≈ 10 | | Latent Space Size | 20 | 307 | 307 | 307 | 2117 | * **On Figure Readability and Color Choices:** In the revised paper, we will ensure **the distinctness of color palettes**, potentially incorporating different line styles or markers where appropriate. * **Inclusion of a Latent Replay Study:** We will include reference [6] and discuss how it compares with our proposed work. ### Kind Request for Reassessment We believe our proposed Autoencoder Hybrid Replay method remains highly effective, **even under realistic benchmark conditions**, due to its ability to store **a significantly larger and more diverse set of exemplars**. Extensive research supports that *exemplar diversity* effectively mitigates catastrophic forgetting. 
Additionally, we have devised solutions for handling non-class-disjoint benchmarks with adaptive centroids, which we could not include here due to space constraints but will gladly discuss during the discussion period. We have strived to address the key concerns, regarding benchmarks and evaluation metrics, through clarification and planned revisions (expanded limitations/future work section, additional metrics). We hope these responses and commitments are satisfactory. If you feel that our clarifications and planned updates satisfactorily address your concerns, we kindly request you to consider increasing your score. This would greatly support the acceptance of our work and help us contribute meaningfully to the field. Regardless of your decision, we are sincerely grateful for your thoughtful feedback. Thank you. **References** [1] Combating Inter-Task Confusion and Catastrophic Forgetting, Moslem et al. 2025. [2] Meta-Album: Multi-domain Meta-Dataset, Ullah et al. 2022 [3] A Large-scale Study of Representation Learning, Zhai et al. 2020 [4] NEVIS'22: A Stream of 100 Tasks, Bornschein et al. 2023 [5] Challenging Common Assumptions, Lesort et al., 2023 [6] Continual Learning with Foundation Models, Ostapenko et al. 2022 --- Rebuttal Comment 1.1: Comment: I thank the authors for their careful consideration of my comments, and for revising their paper accordingly. Regarding additional benchmarks: I agree that the used benchmarks are standard and widely used, and I also agree comparing the methods on these benchmarks is needed and useful. It is however important for the community to go beyond the limitations and the artificial settings of these benchmarks, in order to improve our understanding of more realistic scenarios, and develop methods that can have wide practical implications. Regarding varying the size of the latent space, while there is a difference between MNIST and the other datasets, the ratio is the same for all the other datasets. 
It is important to state how this ratio has been chosen (my guess is based on the memory requirements?), and how changing it would influence the behavior of the method. Again, thank you very much for your answers. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's suggestion to go beyond the *commonly used benchmarks* for evaluation as they have *significant limitations*. We commit to conducting *two additional experiment scenarios* and including them in the appendix of the revised manuscript: - **Non-class-disjoint scenario:** We plan to incorporate a simulation scenario in which task sequences feature **realistic class overlaps**, following the data distributions specified in references suggested by the reviewer (e.g., Ullah et al., 2022; Zhai et al., 2020). By juxtaposing our results across both *standard* and *realistic* simulation scenarios, we will discuss how the performance of our proposed **AHR** method is affected when operating in **non-class-disjoint scenarios**. Additionally, we will provide insights into the reasons for any observed performance variations, indicating AHR's strengths and potential limitations in *real-world applications*. - **Long task sequence scenario:** Recognizing the limitations of the short task sequences, we will conduct simulation scenarios that study longer sequences of incremental learning tasks following the approach outlined by Lesort et al. (2023). Our baseline comparisons for these additional scenarios will include representative latent hybrid replay methods, specifically: - **i-CTRL** (Tong et al., 2022) - **REMIND** (Hayes et al., 2020) - **REMIND+** (Wang et al., 2021) - **Latent Replay** (Ostapenko et al., 2022), as explicitly recommended by the reviewer. These baselines align closely with our proposed AHR method in terms of their main contribution, allowing for rigorous and meaningful comparisons. 
Detailed experimental setups, including task definitions, evaluation metrics, and architectures, will be provided in the appendix. --- We greatly appreciate the reviewer’s inquiry regarding our choice of latent space dimension. Specifically, we set the latent space dimension to **307** for SVHN, CIFAR-10, and CIFAR-100, and **2117** for miniImageNet. These latent space dimensions were chosen to achieve approximately a *10-fold memory compression*, with the *ultimate goal* of clearly demonstrating that an **order-of-magnitude memory reduction** is achievable with our proposed architecture across diverse datasets. For the simpler MNIST dataset, we utilized an even more substantial compression rate of approximately **40-fold**. However, during development, we found that achieving significantly greater compression (e.g., **100-fold**) was impractical given the constraints of the standard ResNet-32 architecture. Specifically, the ResNet-32 encoder naturally compresses the input images (32 × 32 × 3 for SVHN, CIFAR-10, and CIFAR-100; and 84 × 84 × 3 for miniImageNet) into a *64-dimensional latent representation*. Thus, attempting a latent dimension as low as 30 (corresponding to roughly two orders of magnitude compression) proved ineffective. Consequently, we consider the natural compression of **64 dimensions** provided by ResNet-32 as an approximate *lower bound* for latent space dimensions when working with SVHN, CIFAR-10, CIFAR-100, and miniImageNet (using ResNet-32 architecture). Additionally, there are two further reasons why setting the latent dimension to **64**, despite being the natural lower bound, is not ideal for our architecture: - **Decoder Complexity:** Our decoder architecture is intentionally designed to be *lightweight*, composed of only three simple convolutional layers. Achieving effective reconstruction from latent representations demands that the input images not be excessively compressed. 
Overly compressing images would necessitate employing a **deeper decoder**. - **Class Separation and Structured Latent Space:** Within our architecture, a clear separation of classes within the latent space (structured latent space) is crucial for accurate classification and class-conditioned exemplar decoding. Empirically, we found that *larger latent spaces* facilitate significantly better class separation, particularly when dealing with extensive sequences of tasks. We'll include results and discussions on the latent space dimensions in a dedicated section in the revised paper. We're grateful to the reviewer and we'd be encouraged if they raised their score. Thank you very much for reading our response. **References:** - Ullah et al., 2022. Meta-Album: Multi-domain Meta-Dataset for Few-Shot Image Classification. - Zhai et al., 2020. A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark. - Lesort et al., 2023. Challenging Common Assumptions about Catastrophic Forgetting. - Tong et al., 2022. Incremental learning of structured memory via closed-loop transcription. - Hayes et al., 2020. Remind your neural network to prevent catastrophic forgetting. - Wang et al., 2021. Acae-remind for online continual learning with compressed feature replay. - Ostapenko et al., 2022. Continual Learning with Foundation Models: An Empirical Study of Latent Replay.
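The compression ratios quoted in this thread follow directly from the raw input sizes; a quick arithmetic sanity check, assuming single-channel 28×28 MNIST inputs (our reading of the ≈40× figure), 32×32×3 inputs for SVHN/CIFAR, and 84×84×3 for miniImageNet:

```python
def compression_ratio(height, width, channels, latent_dim):
    """Raw input size divided by latent size, both counted in scalars."""
    return (height * width * channels) / latent_dim

ratios = {
    "MNIST": compression_ratio(28, 28, 1, 20),          # 784 / 20  ~ 39x
    "CIFAR-10": compression_ratio(32, 32, 3, 307),      # 3072 / 307 ~ 10x
    "miniImageNet": compression_ratio(84, 84, 3, 2117), # 21168 / 2117 ~ 10x
}
```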
Cover learning for large-scale topology representation
Accept (poster)
Summary: This paper introduces "cover learning," an innovative unsupervised learning method designed to represent the large-scale topological structure of geometric datasets. It extends and addresses limitations present in traditional Topological Data Analysis (TDA) methods, specifically those relying on geometric complexes and Mapper graphs. The authors identify the fundamental optimization challenge of learning topologically-faithful covers and provide theoretical grounding for optimizing such covers using fuzzy covers, which facilitate gradient-based optimization. The paper proposes ShapeDiscover, a practical algorithm utilizing fuzzy cover optimization, which outperforms standard TDA methods in both quantitative topological inference and qualitative topology visualization. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: To the best of my knowledge, the proofs are correct. Experimental Designs Or Analyses: **W1.** It seems that the experiments mainly use default parameters and vary only two parameters (maximum cover size and threshold $\lambda$). A systematic sensitivity analysis would help in clarifying how sensitive the performance of ShapeDiscover is to variations in parameters, providing more guidance on parameter tuning. **W2.** While diverse datasets are considered, there is no explicit analysis of how ShapeDiscover performs in the presence of noise, which is a critical consideration for practical data analysis. Please include experiments (or theoretical results) with varying levels of artificial noise and assess the robustness and stability of the inferred topological structures. **W3.** A valuable avenue not explored is the integration of ShapeDiscover-generated topological representations with downstream machine learning tasks, especially deep learning. 
For example, investigating how ShapeDiscover’s topological representations could enhance the performance of neural networks on classification tasks (e.g., MNIST) would substantially broaden the practical relevance of this work. Supplementary Material: Yes. The proofs and the experimental details. Relation To Broader Scientific Literature: The paper situates itself within the broader scientific domain of TDA, explicitly addressing limitations of established methods such as geometric complexes and Mapper graphs. It connects to foundational concepts in computational geometry and algebraic topology, especially the nerve theorem and persistence theory (Edelsbrunner & Harer, 2022; Oudot, 2015). Moreover, by employing fuzzy covers and optimization frameworks common in modern machine learning, the paper bridges classical topology with contemporary computational methods, facilitating broader applicability. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths: **S1.** The paper introduces a robust theoretical framework linking cover learning to fuzzy cover optimization, grounded firmly in geometry, topology, and optimization theory. **S2.** ShapeDiscover demonstrates superior performance in recovering accurate topological structures with fewer simplices compared to traditional methods like Vietoris–Rips and Mapper. **S3.** ShapeDiscover is capable of handling larger datasets more efficiently compared to existing methods, making it practical for real-world applications in large-scale topology visualization. Weakness: **W4.** Despite improvements, the topological persistence optimization remains computationally intensive, potentially limiting scalability for exceptionally large datasets. Other Comments Or Suggestions: N/A. Questions For Authors: **Q1.** The algorithm primarily utilizes 0-dimensional homology for computational efficiency, how well does it generalize to higher-dimensional homology inference? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback! - W1: This will be addressed in the ablation study/sensitivity analysis we will perform. Please see "Main comment A" in the response to reviewer mPj7 for what we plan to include, as well as preliminary findings. - W2: We will include this. In our experience so far, the algorithm is robust to noise for both visualization and topological inference purposes, when noise is additive, or not too many outliers are present. We will assess this methodically, and see how much and what type of noise is required to break the algorithm. - W3: We are actively working on this; please see "Main comment C", in the response to reviewer 5NTw. - W4: It is true that topological persistence optimization is computationally intensive, and hasn't been tested on exceptionally large data. We want to point out that 0-dimensional persistence optimization is feasible on medium size data. In the MNIST experiment, we are running the topology loss on the full dataset, and on 15 cover elements (ie 15 independent times), for ~250 iterations, which effectively means that we run the topology loss 250 * 15 = 3750 times on a dataset with 60,000 points, and this took ~150 seconds. We expect scalability to larger data to not pose a big problem, since the (0-dimensional) topological loss has complexity O(n log n) (n being vertices + edges in the knn graph which is in O(nk)). There is an easy way to reduce computation time significantly: Run the topology loss stochastically, say every ~10 iterations, which would cut down topology optimization to 15 seconds, in the above example (since we use topological optimization with big steps, the gradient of the topological loss is not sparse, so running the loss even a few times does achieve the desired effect). 
Although we have experimented with this, we do not mention it in the paper since we want to focus on the main message, and leave further tuning, optimization, and other approaches to cover learning for future work. Having said this, higher-dimensional persistence optimization is, at this point in time, significantly less efficient, but we believe there is hope; please see our response to Q1. - Q1: Experiments A and B have examples in which the method uncovers the underlying 0-, 1-, 2- and 3-dimensional homology with many fewer simplices than the main (to our knowledge) approaches to homological inference, even though our topological loss only ensures connectivity (as it only uses 0-dimensional homology). So our method is good at higher-dimensional homological inference even if the topological loss is only 0-dimensional; we believe that this is because small measure, plus regular geometry, plus topological connectivity, in practice, is forcing the cover elements to be contractible, but at this point this is a conjecture. Using higher-dimensional homology in the topological loss would be computationally more expensive; we expect that, with several standard computational shortcuts (mostly subsampling and topological optimization with big steps), 1-dimensional homology could be added to the loss and have the algorithm run in the order of minutes on datasets of ~10,000 points. Moreover, we believe there is hope for faster approaches: As we mention in the conclusions, topological regularization only seeks to simplify topology, and thus one does not need to compute, e.g., a full persistence diagram, it is enough to find a persistent feature and to produce a gradient that will make it less persistent; we hope that probabilistic algorithms, sparse spectral methods, or ideas like the ones in (Chen, Kerber, Computational Geometry, 2013) could be used for this. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. 
After carefully reviewing the manuscript and all accompanying reviews and rebuttals, I have decided to maintain my score and recommend a weak acceptance of the paper.
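For context on the O(n log n) complexity claim for the 0-dimensional loss in the rebuttal above: 0-dimensional persistence of a graph filtration reduces to union-find over edges sorted by filtration value. A minimal sketch of that standard computation follows; the function name and example filtration are ours, not the paper's code:

```python
def zero_dim_persistence(n_vertices, edges):
    """0-dimensional persistence of a graph filtration via union-find.

    `edges` is a list of (filtration_value, u, v); all vertices are born at 0.
    Returns the finite (birth, death) pairs: one merge event per edge that
    joins two previously separate components.
    """
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pairs = []
    for value, u, v in sorted(edges):  # sorting dominates: O(E log E)
        ru, rv = find(u), find(v)
        if ru != rv:                   # two components merge: one class dies
            parent[ru] = rv
            pairs.append((0.0, value))
    return pairs
```

On a graph with n vertices and c surviving components, exactly n - c finite pairs are produced; the sort is what gives the near-linear O(n log n) behavior mentioned in the rebuttal.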
Summary: This paper proposes a novel algorithm for learning a subset cover of a dataset with respect to its geometric and topological properties. The authors develop a gradient optimization procedure for learning the fuzzy cover of a dataset with required properties; the fuzzy cover induces a simplicial filtration (by grade of membership), and by thresholding it at a certain pre-defined level, they obtain the desired (crisp) cover. Experiments on different datasets show that the proposed approach yields better covers than other existing algorithms. ## update after rebuttal I think that with the proposed changes this paper will be an interesting contribution to the research area. I will keep my original score (of 4). Claims And Evidence: Main claims of the paper are supported with enough evidence. Methods And Evaluation Criteria: The proposed algorithm is adequate for the problem of cover learning. Theoretical Claims: All mathematical statements are given with valid proofs. Experimental Designs Or Analyses: Seems reasonable. Supplementary Material: Code for reproducing the experiments from the paper is provided in the supplementary materials. Relation To Broader Scientific Literature: The task of cover learning is relatively unexplored in Topological Data Analysis, but something similar was previously covered in other works; however, the problem setting and the approach proposed here are highly novel. All related works are cited in the paper. Essential References Not Discussed: None, as far as I know. Other Strengths And Weaknesses: Strengths: 1. Solid theoretical part. 2. Large number of experiments on a diverse collection of datasets. 3. Extensive background information provided in Appendices introduces the reader to mathematical concepts that are used in the paper. Weaknesses: 1. Some uncertainty about potential practical applications of the proposed method. 
Other Comments Or Suggestions: Can you provide some information on time and memory complexity of the proposed algorithm (theoretical or empirical estimations)? Questions For Authors: 1. Are there any other applicable quality metrics, aside from number of vertices and simplices reported in Tables 1 and 2? 2. Can the cover constructed by the optimization procedure be influenced by initialization? If so, can you provide the mean and variance for values in Tables 1 and 2? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback! - About time and memory complexity: On Table 4 in the Appendix, we report the times it took to run our experiments. Separately to this, we will include a complexity analysis in the paper. - About practical applications: please see "Main comment C", at the end of this response. - About other applicable quality metrics for Tables 1 and 2: Our goal was to do topological inference with small simplicial complexes (much smaller than SOTA), and we do not know of other standard ways of quantifying simplicial complex size besides number of simplices. Please let us know if there are other measures we can compute. - About dependence on initialization: When our method is initialized with a good clustering, results are stable across runs, and, for experiment A, there is indeed no variance. When the method is initialized with a random fuzzy cover, the results can vary across runs. This will be addressed in the ablation study/sensitivity analysis we will perform. Please see "Main comment A" in the response to reviewer mPj7 for what we plan to include, as well as preliminary findings. **Main comment C: Applications and downstream tasks** We start by explaining the rationale behind our choice of applications. A main selling point of cover learning is that it improves upon the two main TDA methodologies with a single approach, by effectively addressing their main shortcomings: the sometimes prohibitive size of geometric complexes, and the difficulty in tuning and the lack of higher dimensional information in Mapper graphs. The two applications in the paper (sections 5.1 and 5.2) were chosen to demonstrate this. 
The effectiveness in downstream tasks (eg shape classification) of the TDA methodologies we are improving upon has been established in the literature, so we decided to limit the scope of the paper to emphasize the theory we develop (Section 3), and the computational methods to make optimization feasible (Section 4). Let us also mention that, besides machine learning, topological inference has had significant scientific applications such as (Gardner et al., Nature 2022) and (Benjamin et al., Nature 2024), and that Experiment B shows that our method (re)discovers the main finding of the former paper with ease. In addition to the above, here are further applications we are actively exploring, and will be happy to mention in the paper: - Point cloud vectorization: Typically, point clouds ought to be vectorized in a permutation-invariant way. One way to do this is to construct a graph on the point cloud (eg knn graph) and then apply methods from graph machine learning. Large point clouds result in large graphs, from which the global geometry is hard to extract (due to high dimensionality of the learning problem). Our method provides much smaller graph/simplicial complex representations of point clouds, which can potentially simplify the learning step (eg by requiring less training data). - Local+global dimensionality reduction: Popular modern dimensionality reduction algorithms such as tSNE and UMAP operate essentially locally, in the sense that they optimize the embedding using interactions between a few data points (typically two); with these algorithms, the preservation of global structure happens as a by-product of the preservation of local structure. We believe that a local optimization procedure, as above, could be combined with a global one which is tasked with ensuring preservation of global topology, as captured by a small simplicial complex given as the nerve of a cover learned from the data. 
- Simplifying parameter selection in clustering: A main difficulty with many standard clustering algorithms (eg k-means) is that the number of final clusters needs to be chosen by the user. An l-element cover gives rise to a graph with l vertices, and if the cover is good, the connected components of this graph represent the large-scale cluster structure of the data, and give rise to a clustering of the data. So the chosen number of cover elements l is just an upper bound for the number of output clusters, and the final clustering (and the number of clusters) depends on the intrinsic geometry of the data, and not on the arbitrarily chosen number l. - Input for simplicial neural networks: Cover learning produces a compact simplicial complex on geometric data, and can thus be combined with simplicial neural networks, such as (Roddenberry et al, ICML 2021), (Chen et al, AAAI, 2022), (Maggs et al, ICLR 2024), (Gurugubelli, Chepuri, ICLR 2024). --- Rebuttal Comment 1.1: Comment: Thank you for the provided clarification. I have also read other reviews and responses to them. I think that with the proposed changes this paper will be an interesting contribution to the research area. I will keep my original score.
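The clustering application sketched in the rebuttal above (read the clustering off the connected components of the nerve of a cover) can be made concrete in a few lines; the function and the toy cover below are illustrative, not taken from the paper:

```python
def nerve_clusters(cover):
    """Group cover elements (sets of point indices) by nerve-graph connectivity.

    Two elements are adjacent in the nerve iff they intersect; the connected
    components of this graph give the final clusters, so the number of cover
    elements is only an upper bound on the number of clusters.
    """
    k = len(cover)
    parent = list(range(k))  # union-find over cover elements

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(k):
        for j in range(i + 1, k):
            if cover[i] & cover[j]:        # nonempty overlap => nerve edge
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(k):
        clusters.setdefault(find(i), set()).update(cover[i])
    return list(clusters.values())
```

For example, the cover `[{0, 1}, {1, 2}, {5, 6}, {6, 7}]` has four elements but only two nerve components, so two clusters come out, as the rebuttal describes.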
Summary: The paper aims to generate topologically faithful simplicial complexes for geometric datasets by reducing the problem to cover learning. By formally defining a set of three goals for cover learning, extending to the space of fuzzy covers (“softening” the inclusion of an element in a subset, which allows them to be parametrized by functions over real numbers), and demonstrating a way to estimate these goals as standard loss functions, the authors develop a framework through which these goals can be optimized with standard neural networks. Based on this idea, they implement a cover learning algorithm, ShapeDiscover, which outperforms other models quantitatively by requiring fewer vertices and simplices to achieve the same homology recovery quotient on synthetic geometric datasets. This model also provides more intuitive topological representations than previous cover learning approaches. Claims And Evidence: The paper supports all of its mathematical claims with sufficient proofs, either included in the main body or the appendix. Its claims on improvements to visual representation, while less rigorous, are supported by sufficient experimental evidence. Methods And Evaluation Criteria: The defined goals are reasonable for the problem of cover learning; generating a cover with measure-theoretically small sets, geometrically regular sets, and a homologically faithful nerve are all well-defined and meaningful objectives in cover learning. Theoretical Claims: All theoretical claims are supported with correct proofs. Experimental Designs Or Analyses: I think the proposal here is at the end of the day a kind of coarse graining or sketching of the graph used to compute homology etc. Given this, they should be comparing to other coarse graining methods like diffusion topology from (Huguet et al. 2023 SIAM; Brugnone et al. IEEE Big Data 2019), also Reeb graph methods like Pascussi et al. ACM Trans on graphics, also PAGA from Wolf et al. 2019. 
Supplementary Material: The appendix (which I take to be supplementary material) is good because it offers a thorough background as well as detailed proofs. Relation To Broader Scientific Literature: The contributions of the paper are highly relevant to topological data analysis, and can be used alongside recent methods which utilize simplicial complexes to draw geometric insights from data, including [Maggs et al. 2024] published in ICLR 2024. To the best of my knowledge, no previous methods have enabled the parameterization of cover learning under traditional machine learning frameworks. Essential References Not Discussed: All of the ones mentioned in this review: Huguet et al. 2023 Wolf et al. 2019 Brugnone et al. 2019 Pascussi et al Also topological encoders from Moore et al. 2020 Other Strengths And Weaknesses: Strengths: the paper is very well-structured and intuitive to follow. The initial definition of three main goals in cover learning using words, followed by concise mathematical expressions for each aim, allows for a clear derivation for the paper’s resulting loss function. Representing sets in covers as vertices over a k-dimensional simplex, and representing fuzzy sets as all points contained in this simplex allows for a clean parameterization for these covers using softmax. Weakness: A lack of ablation studies and comparisons; an interesting study would be to exclude certain components of the loss function and observe impacts on performance. In addition, it would be interesting to see the performance of just the “FuzzyCoverInitialization” using spectral clustering, and see how much further this is optimized after gradient descent. Other Comments Or Suggestions: None. Questions For Authors: 1. From my understanding, the number of vertices (i.e. the number of subsets in the cover) is a fixed parameter; is this correct? 
For the results in Table 1, does the model have to be completely trained from scratch to test each vertex number 1, 2, … k until the desired homology quotient is achieved? 2. Would this method yield more fruitful results if it used different resolutions of covers, in keeping with the topological theme? Code Of Conduct: Affirmed. Overall Recommendation: 3
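The softmax parameterization of fuzzy covers praised in the strengths above can be made concrete with a minimal sketch (our own illustration, not the paper's code; all names are ours): each data point carries a learnable logit vector, and softmax maps it to a membership point in the (k-1)-simplex, one coordinate per cover element.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax: rows become points in the simplex.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# n data points, k cover elements: each row of `logits` parameterizes a
# fuzzy membership vector, i.e. a point in the (k-1)-simplex.
rng = np.random.default_rng(0)
n, k = 5, 3
logits = rng.normal(size=(n, k))   # free parameters, optimizable by gradient descent
membership = softmax(logits)       # shape (n, k), rows sum to 1

# Thresholding at some lambda recovers an ordinary (crisp) cover: point i
# belongs to cover element j whenever membership[i, j] >= lam.
lam = 0.5
crisp_cover = membership >= lam
```

Since softmax outputs are unconstrained compositions of the logits, gradient descent on any loss of `membership` stays inside the space of fuzzy covers by construction.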
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback! - About missing references: We will include them in the paper and comment on similarities and differences. Here are the main ideas we took from the papers (please correct us if we have misinterpreted something): - (Huguet et al. 2023)(Brugnone et al. 2019): Coarsening is done by simplifying the global structure of the point cloud (for example, by emphasizing cluster structure). The main difference is that our method produces a graph (simplicial complex) which encodes the global structure of the point cloud, and which does not have the data points as vertices (but rather groups of points). - (Pascussi et al. 2007): The technique is based on the Reeb graph, which is also the main motivation for Mapper, but unlike Mapper it operates on a simplicial complex. The main difference is that, since they approximate a Reeb graph, they produce a one-dimensional simplicial complex, and thus, like Mapper (and as explained in Sec. 2 and App. D), it cannot be used to do higher dimensional topological inference. - (Wolf et al. 2019): This paper presents an end-to-end method for single-cell RNA-seq; the relevant part of their method is described in their section "Graph partitioning and abstraction" (p. 7), where they explain how they coarsen an initial knn graph to a smaller graph; this is done by computing a clustering of the original graph, and then adding edges between clusters using a measure of connectivity between different clusters. For visualization purposes, their output serves purposes very similar to ours (eg our Fig. 5), and is thus very related. Since it is not a goal of theirs, their method is not suitable for higher dimensional topological inference. - (Moor et al. 2020): This is an autoencoder-based dimensionality reduction algorithm, with the novelty being the usage of a topological loss on top of the classical reconstruction loss.
Two connections to our work: One is producing a coarsened representation of the data; here the difference is that they output an embedding in low dimensional space, while we output a graph (simplicial complex) with groups of data points as vertices. The second one is the usage of a persistence-based loss; the main difference with our usage is that we use it purely as regularization (we enforce trivial local topology, motivated by the nerve theorem), whereas they use it to directly enforce the topology of the low dimensional representation to be similar to that of the original data. This work will also be mentioned in our section "A.5. Topological persistence optimization". - About our method being a coarse-graining/sketching method: This is a good interpretation. We want to emphasize the fact that our coarsening strategy serves two distinct purposes: topological inference (via homology) and visualization. In particular, we compute homology directly on the coarse representation (small simplicial complex), as opposed to computing homology of a large initial graph that is then coarsened only for visualization. Moreover, our strategy builds on the nerve construction (standard in topology), which lends itself to further theoretical analysis (eg consistency of covering algorithms, addressed in future work). - About ablation study: This will be implemented. Please see "Main comments A and B" in the response to reviewer mPj7 for what we will include, as well as preliminary findings. - About the number of cover elements parameter: The parameter is fixed, and in our experiments we simply retrained the model for each choice. Two comments: First, in practice, the chosen number is just an upper bound for the number of cover elements: gradient descent can converge to solutions in which some cover elements are empty; this is a feature as it simplifies the choice of the parameter.
Second, if an l-element cover is already available, we can use it as initialization to optimize for a (l+n)-element fuzzy cover (eg set the new cover elements uniformly at random and normalize to get a fuzzy cover). - About different resolutions of covers: (We believe we understand what you mean, but please let us know if the following does not address your question.) Indeed! And this is what the method does. As described in Section 3, paragraph "Fuzzy covers", optimization is done over the space of fuzzy covers, which is just a different name for a persistent/multiresolution/multiscale cover, ie a family of nested covers, in our case indexed by [0,1]. This is very much in line with persistent topology/TDA, and is what allows us to perform the quantitative experiment A, in which we show that our method compares favorably to all (to our knowledge) TDA approaches to topological inference based on persistent homology. The question might be about other applications of the fact that we have a multiscale cover, since, for example, we don't leverage this in the visualization examples. This is something we are actively working on.
Summary: This paper focuses on learning a representation of the large-scale structure of geometric datasets, specifically achieved by learning a cover of the dataset. The performance of existing methods such as the 1D Mapper or Differentiable Mapper is sensitive to the choice of hyperparameters. To tackle this issue, the authors propose a non-convex optimization program with a new loss function that no longer involves the filter function. The new loss consists of three terms that try to ensure the resulting learned cover is satisfactory from different aspects. Empirical results demonstrate the proposed method is able to output meaningful and competitive covers compared to SOTA. Claims And Evidence: Yes. All the claims are supported by either empirical results or theoretical analysis. Methods And Evaluation Criteria: I think the proposed method appears to be sound and effective, as supported by several experiments on different datasets. Theoretical Claims: I didn't check the correctness of the theoretical claims but they appear to have no issue. Experimental Designs Or Analyses: The current results look sound. However, as most of the results are visualizations of the output from different methods, there are several extra experiments (apologies if I missed something) that could be further investigated to enhance the soundness of the proposed algorithm: 1. As the authors stated that initialization is important, could we provide more evidence on this, e.g. what are the results with/without a good initialization, or with different clustering methods for initialization? 2. Similarly, can we have some plots that compare the embedding g at initialization and after the optimization? In experiments I feel there are some cases (e.g. MNIST) where spectral clustering is already able to provide a very meaningful result. 3.
As in Table 4, it looks like the topology loss contributes most to the runtime; could we have some comparison of the output with different weights on the loss function components, which can further help us understand the significance of these proposed components? 4. Besides visualizing the output, is there any way to perform some downstream tasks, e.g. clustering based on the optimized embedding (representation), to further showcase the strengths of the proposed method? Supplementary Material: I roughly went through Appendix A (background) and Appendix G (other experiment results). Relation To Broader Scientific Literature: The key contribution of this paper is a new topology representation learning method, which could benefit various downstream tasks such as clustering and synchronization. Essential References Not Discussed: I am not aware of any outstanding missing references in the manuscript. Other Strengths And Weaknesses: Overall, I think this paper is well-written and easy to follow. The proposed method appears to be novel and sound. My main concern is with the experiments, which has been elaborated in previous sections. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback! - About most of the results being visualizations: Three out of five experiments concern visualizations. We just want to emphasize the fact that the two other experiments concern topological inference, with experiment A being a quite thorough comparison with the main competitors based on sparse simplicial complexes. - About evidence on initialization being important: This will be addressed in the ablation study and sensitivity analysis. Please see "Main comment A", below. - About comparing initialization and output after optimization: This will be addressed too. Please see "Main comment A" below. Please also see "Main comment B", below, about differences between the clustering used for initialization and our output cover. - About using different weights for the loss components: Please see "Main comment A", below - About downstream tasks: Please see "Main comment C", in the response to reviewer 5NTw. **Main comment A: Ablation study and sensitivity analysis** We will add the requested ablation study and sensitivity analysis. We now outline our preliminary findings, and please see below for a sample experiment supporting these. We also emphasize that, in the paper, our default parameters work well in a wide range of datasets. - Initialization (random vs clustering). In terms of quality of output, this does not play a big role for small datasets (eg 2-sphere), but it does for larger data (eg MNIST). In terms of convergence speed, clustering initialization always leads to faster convergence (order of x10 or more for large datasets). - Ablation. - Measure loss and Regularization loss: These are required for obtaining reasonable results. - Topological loss: When using random initialization, it is required. When using a good clustering as initialization, the topological loss does not play a big role, and good results can be obtained without it. 
- Geometry loss: In our experiments thus far, it does not play an important role. We believe that regular geometry is already enforced by the regularization loss; this is explained in Appendix F.2. - Sensitivity to parameters. - Number of cover elements: The algorithm is robust to this choice. This parameter is just an upper bound, since the algorithm is allowed to return empty cover elements which are just discarded. If one wants to force more cover elements to be non-empty, one can increase the weight of the measure loss (since measure loss enforces many small sets as opposed to few large ones). - Regularization weight: The algorithm is robust to this choice. A large value will enforce larger cover elements, which can be used to have larger intersections between cover elements, in case the output is too disconnected. - Number of neighbors for nearest neighbor graph: The algorithm is robust to this choice, as is usually the case with this parameter in unsupervised learning algorithms. - Threshold (lambda): This parameter is only required for producing a single graph (or simplicial complex), for, eg visualization, and it is not fixed for topological inference. The algorithm is sensitive to this parameter, and the number of edges (intersections between cover elements) can change significantly when going from lambda = 1 to lambda = 0. The default 0.5 usually leads to good results, but sometimes there might be too many intersections for easy visualization; in that case the solution is to increase lambda (as in Fig. 4 and Fig. 5). Sample experiment (the experiments in the paper will be more thorough): We run our method on the 2- and 3-sphere datasets and quantify topology recovery as in Table 1 - Initialization and topology loss: - Results can be reproduced with clustering initialization and without topology loss. - With random initialization and no topology loss, topological recovery always fails. 
- With random initialization and topology loss, recovery is successful most of the time (>80%). - Geometry loss: The results in Table 1 can be reproduced without the geometry loss. - Number of cover elements: Topology recovery is still successful for much larger values than the ones used in the Table (>30). - Regularization weight: Results are replicated with smaller and larger values (5, 10, 20). - Number of neighbors: Results are replicated with smaller and larger values (8, 15, 30). **Main comment B: About the clustering used at initialization** Although initialization plays an important role for large datasets, it does not, by itself, solve the cover learning problem: There are no intersections between the clusters in a clustering, so the nerve is discrete and contains no topological information (above dim 0), meaning that a clustering by itself wouldn't be useful for topological inference (experiments A and B) or visualization of interesting geometric structure (eg flare structure in Fig. 5).
High-Fidelity Simultaneous Speech-To-Speech Translation
Accept (poster)
Summary: The paper introduces Hibiki, a decoder-only model for simultaneous speech-to-speech (S2ST) and speech-to-text (S2TT) translation. Unlike offline approaches, Hibiki translates speech in real-time using a multistream language model that synchronously generates text and audio tokens. The model leverages contextual alignment to determine optimal delays for translation, improving fluency and speaker similarity. Experimental results show strong performance in French-English translation, with real-time capabilities on both GPUs and mobile devices. Claims And Evidence: The paper claims Hibiki achieves state-of-the-art translation quality, speaker similarity, and naturalness, supported by BLEU scores, speaker similarity metrics, and human evaluations. The contextual alignment method is validated through ablation studies, demonstrating its impact on latency-quality trade-offs. However, the claim that Hibiki provides an optimal balance between latency and accuracy is questionable, as Seamless achieves lower latency (LAAL and End Offset). So Hibiki does achieve better quality, but it might sacrifice latency to some extent. I think this is okay, though, because the authors add a human evaluation and show that Hibiki is preferred. Methods And Evaluation Criteria: The paper employs standard evaluation metrics for S2ST, including BLEU for translation quality, speaker similarity (cosine similarity), and MOS for human evaluation. It also uses LAAL (Length-Adaptive Average Lagging) to measure latency, ensuring a fair comparison with existing models. One issue is that the experiments are limited to French-English, making it unclear how Hibiki generalizes to other languages. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is generally strong, with comprehensive comparisons to Seamless and StreamSpeech. Ablation studies effectively highlight the impact of alignment strategies and classifier-free guidance.
However, latency trade-offs need more discussion, as Hibiki has higher lag than Seamless. Additionally, the alignment-aware TTS system lacks detail, making it difficult to verify how timing constraints are enforced during synthesis. The missing Appendix C (mentioned in section 3.2 line 203) further limits transparency. Supplementary Material: Yes, I saw the visualization of context-aware alignment. Relation To Broader Scientific Literature: Hibiki builds on prior work in S2ST, alignment modeling, and multistream processing. It extends Seamless (Barrault et al., 2023) and StreamSpeech (Zhang et al., 2024a) with an adaptive alignment approach and improved speaker transfer. Its multistream modeling is inspired by Moshi (Défossez et al., 2024), originally designed for full-duplex spoken dialogue. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: - I want to know more about the alignment-aware TTS: how exactly is it implemented to take the alignment into account during synthesis? - For the text-only pretraining, I am wondering what the machine translation performance is after this training. Since the model is trained from scratch, I am assuming it is always trained to perform the translation task? Or is it trained as a general LM and then adapted into a decoder-only MT model? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. # Updates in the revised version of the paper We first inform the reviewer that we will update the reported results of Hibiki-M in [Table 2](https://hibiki-s2st.github.io/e.png) after fixing an issue that further improves performance. Following the reviewers' suggestions and thanks to the extra page provided for the camera ready, we will revise the paper on the following aspects: * Clarify the **model's architecture, configurations and the nature/size of the datasets used.** * Add **quality/latency trade-off curves** by varying hyperparameters of our contextual alignment method. * Add **COMET** evaluation scores. * Extend experiments to the **English->French** direction. * Improve the **references section** and discuss similar works pointed out by the reviewers. # Comments ## On the generalization of the contextual alignment method to many language pairs We acknowledge that our method is only illustrated by a single language direction in the paper. However, no aspect of our method is specific to the language pair, apart from the fact that MADLAD -- which we used to derive contextual alignment -- performs well on these languages. We expect our method to work as strongly on pairs of languages where SOTA text translation models perform well and can thus allow us to derive reliable contextual alignments. Given the massively multilingual nature of MADLAD or even more recent systems like GemmaX2-28-9B ([Cui et al., 2025](https://arxiv.org/abs/2502.02481)), we expect this approach to be a good candidate for scaling to many language pairs. We provide the reviewers with examples of [contextual alignments with other languages](https://hibiki-s2st.github.io/d.png). As a first step towards more language directions, we have extended our experiments to the English->French direction and provide [experimental results](https://hibiki-s2st.github.io/b.png) that we will add to the revised paper.
## On the quality/latency trade-off As mentioned in Section 3.2.2 (l.183), we enforce a 2s delay between words associated through contextual alignment as we found it to provide a good balance between latency and translation quality. We acknowledge that this choice can be reconsidered and that trade-off curves would provide a clearer picture to the reader. **We thus produced a trade-off curve by varying the delay, as asked by the reviewers.** The results reported in [this quality/latency study](https://hibiki-s2st.github.io/c.png) show that Hibiki provides an overall better trade-off than Seamless. We will add this figure to the revised version of the paper. ## On the alignment-aware TTS We acknowledge that a non-negligible part of the technical details inherited from [Défossez et al. (2024)](https://arxiv.org/abs/2410.00037) were not exposed in our paper as we preferred to focus on the data creation pipelines and the experimental protocol and results. We will take the opportunity of the extra page provided for the camera ready to improve the clarity of technical details such as the alignment-aware TTS. Moreover, we would like to highlight that there is no missing Appendix to our paper, as we referred to Appendix C of [Défossez et al. (2024)](https://arxiv.org/abs/2410.00037) at Section 3.2 l.203. We will improve the formulation in the updated version of the paper. As a brief summary of the explanations given in Appendix C of [Défossez et al. (2024)](https://arxiv.org/abs/2410.00037), one can force text tokens directly in the text stream of a TTS model derived from the Moshi architecture. Thanks to the contextual alignment, we can ensure that a given text token is fed at the right timestamp (that is not too early) using `PAD` tokens to delay its insertion. ## On text-only pretraining The text-only pretraining phase is that of standard next token prediction; there is no adaptation to an MT model in the text-only pretraining phase.
In early development, we tried alternating batches of text translation and speech translation (starting from a pretrained text model); however, while this model did perform quite well in text translation, this did not result in any measurable improvement of the speech translation. We hypothesize that this lack of transferability is due to the fact that MT samples were built by concatenating the source and target texts in the text stream, which is radically different from what is seen in the text stream for a speech translation sample (where the source is audio-only and the target is time-aligned). This highlights a challenge in aligning text and speech representations in speech-text LLMs for each modality to benefit the other, which we believe will be critical to extend speech translation to more language pairs as text translation data is much more accessible than speech translation data.
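The PAD-based delaying mechanism described in this rebuttal (feeding a text token into the text stream no earlier than its aligned timestamp) can be sketched as follows. This is our own illustrative simplification, not the actual Moshi/Hibiki implementation; the function name and token representation are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): delay each text
# token until its aligned frame by filling the text stream with PAD.
PAD = "<pad>"

def build_text_stream(aligned_words, n_frames):
    """aligned_words: list of (token, earliest_frame) pairs, sorted by frame.
    Returns a length-n_frames text stream where each token appears at its
    aligned frame (or later, never earlier), with PAD elsewhere."""
    stream = [PAD] * n_frames
    cursor = 0
    for token, frame in aligned_words:
        pos = max(cursor, frame)   # never emit before the aligned frame
        stream[pos] = token
        cursor = pos + 1
    return stream

# e.g. "hello world" aligned at frames 2 and 5, over 8 frames
print(build_text_stream([("hello", 2), ("world", 5)], 8))
# ['<pad>', '<pad>', 'hello', '<pad>', '<pad>', 'world', '<pad>', '<pad>']
```

The `max(cursor, frame)` step captures the "not too early" constraint: a token may slip later if earlier tokens crowd it, but it can never precede its contextual-alignment timestamp.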
Summary: This paper introduces Hibiki, a decoder-only model for simultaneous speech-to-speech/text translation. Hibiki adapts the architecture of the full-duplex dialogue model Moshi to simultaneous translation by modeling source speech as user input and target speech as agent response. To train Hibiki, the authors synthesize trajectories of simultaneous translation by leveraging the log probabilities of a pretrained machine translation model. The experimental results on the Fr-En direction of the CVSS dataset demonstrate that 1) Hibiki shows higher speech quality and voice transfer than strong baselines like Seamless while having larger latency; 2) Hibiki is able to conduct efficient batched inference and the distilled version is even able to run on a smartphone in real time. ## update after rebuttal Some of my major concerns have been addressed. However, the latency remains somewhat high, as shown in the quality-latency trade-off presented in the rebuttal. Combined with the limited coverage of language directions, I will maintain my current score. Claims And Evidence: > Claim 1: The architecture of a full-duplex dialogue model Moshi can be adapted for simultaneous translation. This claim is supported. It is natural to regard source speech as user speech input and target translation speech as agent response output. Also, the experiments show that this modeling is able to conduct simultaneous translation effectively. > Claim 2: Decoder-only architecture enables efficient inference. This claim is also supported. It is true that prior architectures are hard to batch efficiently at inference due to their complex policy design, while a decoder-only model with an implicit policy makes it much more convenient. Methods And Evaluation Criteria: **Method** 1. The method adapts a dialogue model to simultaneous speech-to-speech translation naturally. 2. The translation and source-target alignment are both generated by the MADLAD-3B model. The authors do not provide a quality analysis here. 3.
Hibiki does not support adjusting the latency during inference, which means a separate model needs to be trained for each latency level. **Evaluation** 1. Dataset: CVSS is a common dataset for evaluating speech-to-speech translation. 2. Latency metric: LAAL and Offset are commonly used metrics for latency evaluation. 3. Translation quality metric: BLEU is a widely used metric for translation quality evaluation. However, BLEU is still an n-gram based method, and is outperformed by many later neural metrics like COMET and MetricX, as shown in recent WMT workshops. 4. Speech quality metric: The human evaluation is conducted on only 30 speech samples, which may not demonstrate enough statistical significance. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: 1. Hibiki is trained on more and refined speech data compared to the baseline StreamSpeech. The comparison is not fair. 2. The comparison in Table 2 is not that informative, since the comparison is not at the same latency. A vast literature in both simultaneous speech-to-text and text-to-text translation [e.g., 1-2] already shows that the quality can be much higher if the allowed latency is higher. 3. There are other existing ways to build the source-target alignment, like the one introduced in [3], but the authors do not compare with them. 4. The experiments only tested the Fr-En direction, but simultaneous translation could behave very differently on different language directions due to differences in linguistic structures. More language directions are needed. [1] Papi, S., Turchi, M., Negri, M. (2023) AlignAtt: Using Attention-based Audio-Translation Alignments as a Guide for Simultaneous Speech Translation. Proc. Interspeech 2023, 3974-3978, doi: 10.21437/Interspeech.2023-170 [2] Donglei Yu, Xiaomian Kang, Yuchen Liu, Yu Zhou, and Chengqing Zong. 2024. Self-Modifying State Modeling for Simultaneous Machine Translation.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9781–9795, Bangkok, Thailand. Association for Computational Linguistics. [3] Wang, M., Vu, T. T., Wang, Y., Shareghi, E., & Haffari, G. (2024). Conversational SimulMT: Efficient simultaneous translation with large language models. arXiv preprint arXiv:2402.10552. Supplementary Material: No. Relation To Broader Scientific Literature: 1. Hibiki is one of the first decoder-only models for simultaneous speech-to-speech translation and exhibits advantages in efficient batched inference. Similar findings have been reported before in simultaneous text translation [1-2], but not in simultaneous speech-to-speech translation. 2. Synthesizing source-target alignment is not a new idea; [1] previously proposed a word-alignment-based approach. However, the perplexity-based method introduced in this paper is new, as far as I know. [1] Wang, M., Vu, T. T., Wang, Y., Shareghi, E., & Haffari, G. (2024). Conversational SimulMT: Efficient simultaneous translation with large language models. arXiv preprint arXiv:2402.10552. [2] Yu, D., Zhao, Y., Zhu, J., Xu, Y., Zhou, Y., & Zong, C. (2025). SimulPL: Aligning Human Preferences in Simultaneous Machine Translation. arXiv preprint arXiv:2502.00634. Essential References Not Discussed: The key contribution of this paper is the decoder-only architecture for simultaneous speech-to-speech translation and a synthetic alignment building method. Both of these are discussed by [1] in the context of simultaneous text-to-text translation, but [1] is neither cited nor discussed. [1] Wang, M., Vu, T. T., Wang, Y., Shareghi, E., & Haffari, G. (2024). Conversational SimulMT: Efficient simultaneous translation with large language models. arXiv preprint arXiv:2402.10552. Other Strengths And Weaknesses: The writing needs improvement. The authors assume some prior knowledge of the Moshi model, the RQ-Transformer and related techniques.
It would be better to have a figure illustrating the model architecture, so that it is easier to understand for a broader audience. Other Comments Or Suggestions: 1. Figure 2 is a bit confusing at first glance. A more complete illustration of the architecture would be helpful. 2. What are the noise augmentation techniques used in Hibiki? 3. Lines 311-316: the description of EOS tokens is confusing. Questions For Authors: 1. Is there a way to adjust the latency of Hibiki at inference time? If so, does it provide a better quality-latency trade-off curve than Seamless? 2. Is Hibiki able to generalize to unbounded speech? By unbounded speech I mean streaming speech input with infinite length. 3. Is Hibiki still better than StreamSpeech when using only CVSS training data? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. # Updates in the revised version of the paper We first inform the reviewer that we will update the reported results of Hibiki-M in [Table 2](https://hibiki-s2st.github.io/e.png) after fixing an issue that further improves performance. Following the reviewers' suggestions and thanks to the extra page provided for the camera ready, we will revise the paper on the following aspects: * Clarify the **model's architecture, configurations and the nature/size of the datasets used.** * Add **quality/latency trade-off curves** by varying hyperparameters of our contextual alignment method. * Add **COMET** evaluation scores. * Extend experiments to the **English->French** direction. * Improve the **references section** and discuss similar works pointed out by the reviewers. # Comments ## On the generalization of the contextual alignment method to many language pairs We acknowledge that our method is only illustrated by a single language direction; however, no aspect of our method is specific to the language pair, apart from the fact that MADLAD -- which we used to derive contextual alignment -- performs well on these languages. We expect our method to work as strongly on pairs of languages where SOTA MT performs well and allows for reliable contextual alignments. Given the massively multilingual nature of MADLAD, it is a good candidate for scaling to many language pairs. We provide the reviewers with examples of [contextual alignments with other languages](https://hibiki-s2st.github.io/d.png). As a first step towards more language directions, we have extended our experiments to the English->French direction and provide [experimental results](https://hibiki-s2st.github.io/b.png) that we will add to the revised paper. ## On the quality/latency trade-off and controllable latency As mentioned in Section 3.2.2 (l.183), we enforce a 2s delay between words associated through contextual alignment.
We acknowledge that this choice can be reconsidered **and produced a trade-off curve by varying the delay, as asked by the reviewers.** The results reported in [this quality/latency study](https://hibiki-s2st.github.io/c.png) show that Hibiki provides a better trade-off than Seamless. We will add this figure to the revised version of the paper. We also acknowledge that the proposed version of Hibiki does not allow for inference-time latency control. We could rely on conditional training, as we did for the speaker similarity, to simultaneously train the model on multiple latency levels, making it possible to control the latency at inference by changing the conditioning. We will add this mention to the limitations section. ## On references We acknowledge the contributions made by [Papi et al. (2023)](https://arxiv.org/abs/2305.11408), [Wang et al. (2024)](https://arxiv.org/abs/2402.10552) and [Yu et al. (2025)](https://arxiv.org/abs/2502.00634). We also acknowledge the progress made in streaming and speech translation for complex language pairs such as English-Japanese, as highlighted in [Ahmad et al. (2024)](https://arxiv.org/abs/2411.05088). We will add these references to the related work in the updated version of our paper. ## On the usage of a single text translation model We used a single model for translation and alignment as we expect this model to be the most appropriate to derive a reliable likelihood-based alignment. However, we acknowledge in Section 4.6.2 that we may overfit MADLAD, and diversifying the models used to generate and align data may improve the robustness of our system. ## On neural evaluation of quality We added [COMET evaluations](https://hibiki-s2st.github.io/a.png) and executed the `comet-compare` script, which gave the following system ranking: MADLAD-3B > Hibiki > Seamless with a p-value < 0.05.
## On statistical significance of human evaluation

Indeed, we may get a more robust estimate from more samples; however, the gap between approaches (as demonstrated by error intervals) is wide enough that we consider these results trustworthy. We also encourage the reviewer to listen to the example webpage.

## Answer: *What are the noise augmentation techniques used in Hibiki?*

We use samples from [freesound.org](https://freesound.org) that are randomly added with various intensities to the input audio during training. We will add details about this in the revised version of the paper, and the associated code will be released with the training code.

## Answer: *Is Hibiki able to generalize to unbounded speech?*

The updated version of Hibiki that we will release is trained with windowed attention to extrapolate beyond a few minutes.

## Answer: *Is Hibiki still better than StreamSpeech using only CVSS training data?*

We acknowledge that we never trained Hibiki on CVSS data only, as we aimed to handle real-world use cases with longer and more diverse speech inputs, while CVSS only contains single sentences of a few seconds.
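The fixed alignment delay discussed earlier in this rebuttal can be sketched as a simple scheduling rule (an illustrative sketch, not the authors' implementation; the function name and data layout are assumptions): each target word starts at least `delay` seconds after the source word it is aligned to, and start times stay monotone.

```python
# Illustrative sketch of a fixed contextual-alignment delay (hypothetical
# helper, not Hibiki's actual code): given (source_time_s, target_word)
# pairs sorted by source time, schedule each target word at least `delay`
# seconds after its aligned source word, keeping starts non-decreasing.

def apply_alignment_delay(alignments, delay=2.0):
    schedule = []
    prev_start = 0.0
    for src_t, word in alignments:
        # target word may not start before source word + delay,
        # and the schedule must stay monotonically non-decreasing
        start = max(src_t + delay, prev_start)
        schedule.append((start, word))
        prev_start = start
    return schedule

# example: source word times aligned to target words, 2s delay
print(apply_alignment_delay([(0.5, "hello"), (1.5, "world")]))
# [(2.5, 'hello'), (3.5, 'world')]
```

Varying `delay` here corresponds to sweeping the quality/latency trade-off curve mentioned in the rebuttal.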
Summary: This paper introduces a model named Hibiki for real-time speech-to-speech translation. Hibiki employs a multi-stream architecture to synchronously process source and target speech, and generates both text and audio through multi-task learning. Trained with a weakly supervised method, Hibiki demonstrates SOTA performance on a French-to-English translation task, achieving good translation quality, speaker fidelity, and naturalness. Its simple inference process supports batch processing and real-time deployment on devices.

Claims And Evidence: All the claims seem reasonable; however, the proposed methods are only validated on French-English, and their effectiveness on other languages needs further study.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally make sense.

Theoretical Claims: I have checked all equations and they are all correct.

Experimental Designs Or Analyses: I have reviewed the experimental designs and analyses presented in the paper, and they generally appear to be sound and valid.

Supplementary Material: I reviewed the Appendix of the paper and the demo page.

Relation To Broader Scientific Literature: The paper studies an important question in the speech domain: real-time speech-to-speech translation.

Essential References Not Discussed: I think all the related works are discussed in this paper.

Other Strengths And Weaknesses:

Strengths
- Hibiki integrates simultaneous speech-to-speech and speech-to-text translation into a single decoder-only model, simplifying inference.
- Achieves strong BLEU scores, outperforming previous models in both offline and real-time speech translation.
- Produces fluent, well-paced speech with better voice preservation than prior models.
- Uses a weakly-supervised alignment method to determine optimal delays, improving real-time accuracy.
- Simple inference process allows for batched GPU translation and real-time on-device deployment.
Weaknesses
- Relies heavily on synthetic training data, requiring high-quality ASR, MT, and TTS models.
- The paper's writing needs to be improved and is a little difficult to follow; many details are derived from the Moshi paper.
- Currently evaluated only on French-English, raising questions about generalizability.
- Speaker similarity is improved but not perfect, and accent transfer may not always be desirable.

Other Comments Or Suggestions:
- Testing on more language pairs and domains would strengthen claims of generalizability.
- A more intuitive explanation of multistream decoding would help readers understand the model's structure.
- Incorporating real interpreter speech in training or fine-tuning could improve performance.
- Exploring alternative speaker adaptation methods might enhance voice retention without needing classifier-free guidance.

Questions For Authors:
- How well does the approach generalize to other languages, especially low-resource ones?
- Can the latency-quality trade-off be adjusted at inference, or is it fixed based on training?
- Does using stochastic sampling for decoding lead to inconsistent translations across different runs?
- How tightly are the text and speech token streams aligned? Does each word directly correspond to a speech segment?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: We thank the reviewer for their constructive feedback.

# Updates in the revised version of the paper

We first inform the reviewer that we will update the reported results of Hibiki-M in [Table 2](https://hibiki-s2st.github.io/e.png) after fixing an issue that further improves performance. Following the reviewers' suggestions and thanks to the extra page provided for the camera ready, we will revise the paper on the following aspects:

* Clarify the **model's architecture, configurations and the nature/size of the datasets used.**
* Add **quality/latency trade-off curves** by varying hyperparameters of our contextual alignment method.
* Add **COMET** evaluation scores.
* Extend experiments to the **English->French** direction.
* Improve the **references section** and discuss similar works pointed out by the reviewers.

# Comments

## On the generalization of the contextual alignment method to many language pairs

We acknowledge that our method is only illustrated on a single language direction; however, nothing in our method is specific to this language pair beyond the fact that MADLAD -- which we used to derive contextual alignments -- performs well on these languages. We expect our method to work as strongly on pairs of languages where SOTA MT performs well and allows for reliable contextual alignments. Given the massively multilingual nature of MADLAD, it is a good candidate for scaling to many language pairs. We provide the reviewers with examples of [contextual alignments with other languages](https://hibiki-s2st.github.io/d.png). As a first step towards more language directions, we have extended our experiments to the English->French direction and provide [experimental results](https://hibiki-s2st.github.io/b.png) that we will add to the revised paper.

## On the quality/latency trade-off and controllable latency

As mentioned in Section 3.2.2 (l.183), we enforce a 2s delay between words associated through contextual alignment.
We acknowledge that this choice can be reconsidered **and have produced a trade-off curve by varying the delay, as asked by the reviewers.** The results reported in [this quality/latency study](https://hibiki-s2st.github.io/c.png) show that Hibiki provides a better trade-off than Seamless. We will add this figure to the revised version of the paper. We also acknowledge that the proposed version of Hibiki does not allow for inference-time latency control. We could rely on conditional training, as we did for speaker similarity, to simultaneously train the model on multiple latency levels, making it possible to control the latency at inference by changing the conditioning. We will add this mention to the limitations section.

## On incorporating real interpreter speech in training or fine-tuning

Real interpreter speech would indeed be an ideal source of data. However, given the scarcity of such data in terms of volume, number of speakers, covered languages, etc., we believe that developing pipelines for synthetic paired data generation is the best path towards scaling speech translation to more languages and conditions.

## On speaker similarity and Classifier-Free Guidance (CFG)

Accent transfer is indeed a limitation that we mention in the comments on the ablation on classifier-free guidance in Section 4.6. We expect that both a high speaker similarity and a reduced accent can be achieved by labeling our data with a speaker identification system that is invariant to accent and more accurate in terms of identity. While CFG offers fine-grained control over the strength of conditioning, it also doubles the computational cost at inference. [Cideron et al. (2024)](https://arxiv.org/abs/2410.06084) have proposed distilling the post-CFG logits into a student model. Since our submission, we have experimented with this method, distilling the logits with $\gamma = 3$ into a student model such that the latter can run without CFG.
In our long-form evaluations, while **Hibiki-M without CFG reaches a speaker similarity of 0.33, after CFG-distillation it reaches 0.38 (without CFG), close to the 0.39 obtained with CFG.** This suggests we can use distillation to remove the need for CFG at inference.

## On decoding with stochastic sampling

Stochastic sampling indeed induces variability in the output, some of which is desirable (e.g. acoustic diversity) while some is undesirable (unreliable translation). We thus use a lower top-k inference parameter on the text stream compared to the audio streams to disentangle acoustic and linguistic diversity, keeping the former high while lowering the latter.

## On the alignment of text and speech tokens

As described in Section 3.4.4 of [Défossez et al. (2024)](https://arxiv.org/abs/2410.00037), special `PAD` and `EPAD` (End of PADding) tokens are inserted in the text stream to account for the difference between the constant framerate of the audio tokens and the variable rate of text tokens.
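The classifier-free guidance discussed in this rebuttal combines conditional and unconditional logits with a guidance strength $\gamma$. Below is the standard CFG formula as a minimal sketch; $\gamma = 3$ is the value mentioned above, while the example logits and function name are illustrative assumptions rather than Hibiki's implementation:

```python
# Standard classifier-free guidance on logits: move away from the
# unconditional prediction toward the conditional one by a factor gamma.
# Distillation trains a student to reproduce these guided logits so that
# inference needs a single forward pass instead of two.

def cfg_logits(cond, uncond, gamma=3.0):
    return [u + gamma * (c - u) for c, u in zip(cond, uncond)]

cond = [1.0, 0.0, -1.0]    # logits with speaker conditioning
uncond = [0.5, 0.0, -0.5]  # logits without conditioning
print(cfg_logits(cond, uncond))  # [2.0, 0.0, -2.0]
```

With $\gamma = 1$ this reduces to the conditional logits; larger $\gamma$ strengthens the conditioning at the price of a second (unconditional) forward pass, which the distillation described above removes.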
Summary: This paper proposes a state-of-the-art speech-to-speech translation system called Hibiki. This is a chunk-based decoder-only model based on the Mimi codec, and a number of techniques (alignment-related, synthetic data creation, classifier-free guidance, etc.) are introduced to achieve state-of-the-art performance on several public benchmarks. The method seems to be applied only to En-Fr.

## Update after rebuttal

I acknowledge the authors' efforts in presenting the trade-off curve and providing additional English-to-French translation results, and I have accordingly raised my score. I also encourage the authors to follow through on their stated commitments, including open-sourcing their code.

Claims And Evidence: This paper proposes a number of techniques:
- target text scaffolding
- contextual alignment based on an off-the-shelf MT system and its application to target text/audio alignment
- alignment-aware TTS generation
- speaker similarity improvement in TTS generation
- conditional training with classifier-free guidance

The effectiveness of these techniques is validated experimentally (e.g., ablations in Tables 4/5).

Methods And Evaluation Criteria: The paper presents four evaluation metrics: fidelity, measured through subjective MOS scores (Table 3), speaker similarity, ASR-BLEU, and latency measures such as LAAL. Given that this study focuses on simultaneous speech translation, it is crucial to analyze the performance-latency tradeoff, particularly through ASR-BLEU and LAAL. However, Table 2 provides only a single condition, making it difficult to thoroughly examine this tradeoff. I suggest that the authors illustrate trade-off curves by varying latency control parameters and compare their method against competitors, discussing the advantages and limitations of each approach.

Theoretical Claims: This paper does not have theoretical claims.
Experimental Designs Or Analyses: As mentioned earlier, the paper should focus more on the performance-latency tradeoff rather than drawing conclusions based on a specific latency setup. For example, Section 4.6 states that Hibiki outperforms Seamless, but a 1.4-second latency is quite large for a speech interface. If generating trade-off curves or testing various latency conditions is not feasible, the authors should at least soften their claims to account for this limitation.

Supplementary Material: I checked Figure 7 in the appendix section to verify the contextual alignment examples.

Relation To Broader Scientific Literature: Simultaneous speech-to-speech translation is one of the most important human language technologies for removing language barriers in the world.

Essential References Not Discussed: The paper sufficiently cites related work. However, I would like to highlight that advancements in simultaneous speech-to-speech translation have not been driven solely by major industries but also by contributions from various researchers in the IWSLT community. I recommend that the authors acknowledge these efforts by citing relevant IWSLT summary papers.

Other Strengths And Weaknesses:

Strengths
- Achieves state-of-the-art performance in simultaneous speech-to-speech translation. The results in Table 1 are impressive, as the proposed approach outperforms the offline system despite operating in a streaming setting.
- Proposes several techniques to enhance performance, with their effectiveness validated through an ablation study.

Weaknesses
- The training procedure is complex, making it difficult to reproduce the results. While Section 1 states that the authors will release the code, models, and dataset, it is unclear whether the release will include the full dataset creation process and detailed training configurations. I recommend the authors clarify this.
- The performance-latency tradeoff between Seamless and the proposed method is not clearly analyzed (see my comments above).
- The alignment methods appear to be tailored to a specific language pair, raising concerns about their applicability to other language pairs.

Other Comments Or Suggestions:
- Section 3.1.1 requires some prior knowledge of the Mimi codec but is well written.
- Section 3.1.4 is difficult to understand. As this section presents the main proposed architecture, it requires more detailed explanations, such as equations or figures, to enhance clarity.
- Section 4.6 presents strong results, but it would be more informative if the authors included details on the training data for each system and the number of parameters.

Questions For Authors:
- Section 3: "$X$ is padded" -- what happens when $X$ is longer than $Y$? Is $Y$ padded in that case?
- Will the implementation in Section 4.2 be open-sourced?
- "We build a French-English speech translation dataset of approximately 40K hours in each language." Which data sources were used? Will this dataset be released?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: We thank the reviewer for their constructive feedback.

# Updates in the revised version of the paper

We first inform the reviewer that we will update the reported results of Hibiki-M in [Table 2](https://hibiki-s2st.github.io/e.png) after fixing an issue that further improves performance. Following the reviewers' suggestions and thanks to the extra page provided for the camera ready, we will revise the paper on the following aspects:

* Clarify the **model's architecture, configurations and the nature/size of the datasets used.**
* Add **quality/latency trade-off curves** by varying hyperparameters of our contextual alignment method.
* Add **COMET** evaluation scores.
* Extend experiments to the **English->French** direction.
* Improve the **references section** and discuss similar works pointed out by the reviewers.

# Comments

## On the generalization of the contextual alignment method to many language pairs

We acknowledge that our method is only illustrated on a single language direction in the paper. However, nothing in our method is specific to this language pair beyond the fact that MADLAD -- which we used to derive contextual alignment -- performs well on these languages. We expect our method to work as strongly on pairs of languages where SOTA text translation models perform well and can thus allow us to derive reliable contextual alignments. Given the massively multilingual nature of MADLAD, or of even more recent systems like GemmaX2-28-9B ([Cui et al., 2025](https://arxiv.org/abs/2502.02481)), we expect this approach to be a good candidate for scaling to many language pairs. We provide the reviewers with examples of [contextual alignments with other languages](https://hibiki-s2st.github.io/d.png). As a first step towards more language directions, we have extended our experiments to the English->French direction and provide [experimental results](https://hibiki-s2st.github.io/b.png) that we will add to the revised paper.
## On the quality/latency trade-off

As mentioned in Section 3.2.2 (l.183), we enforce a 2s delay between words associated through contextual alignment, as we found it to provide a good balance between latency and translation quality. We acknowledge that this choice can be reconsidered and that trade-off curves would provide a clearer picture to the reader. **We thus produced a trade-off curve by varying the delay, as asked by the reviewers.** The results reported in [this quality/latency study](https://hibiki-s2st.github.io/c.png) show that Hibiki provides an overall better trade-off than Seamless. We will add this figure to the revised version of the paper. We also acknowledge that the proposed version of Hibiki does not allow for inference-time latency control. We could rely on conditional training, as we did for speaker similarity, to simultaneously train the model on multiple latency levels, making it possible to control the latency at inference by changing the conditioning. We will add this mention to the limitations section.

## On the release of code and data

We acknowledge that some critical parts of our framework are especially challenging to reproduce, notably the synthetic data generation using contextual alignment, and **we will release our code for these steps along with training and inference code, trained models and around 900h of synthetic paired data with voice preservation**, corresponding to our speech translation fine-tuning dataset introduced in Section 4.2. To build the speech translation training dataset, we relied on various data sources and will release the portions whose licenses allow redistribution.

## On references

We thank the reviewers for their suggestion and acknowledge the contributions made by [Papi et al. (2023)](https://arxiv.org/abs/2305.11408), [Wang et al. (2024)](https://arxiv.org/abs/2402.10552) and [Yu et al.
(2025)](https://arxiv.org/abs/2502.00634), which are particularly relevant to our work. We also acknowledge the progress made in streaming and speech translation for complex language pairs such as English-Japanese, as highlighted in [Ahmad et al. (2024)](https://arxiv.org/abs/2411.05088). We will add these references to the related work in the updated version of our paper.

## Answer to: *Section 3: "X is padded" - What happens when X is longer than Y? Is Y padded in that case?*

At this level of explanation (l.106), we also assume that *the modeling of Y knowing X should be causal*. This implies that Y is longer than X.
Calibrated Value-Aware Model Learning with Probabilistic Environment Models
Accept (poster)
Summary: The paper investigates value-aware model learning (VAML), particularly examining the MuZero loss and Iterative VAML (IterVAML) within a unified framework termed (m, b)-VAML. The authors present theoretical insights, showing that standard (m, b)-VAML losses are generally uncalibrated surrogates when applied to stochastic models. The authors introduce a correction term leading to a calibrated version called CVAML to handle this. Additionally, the authors explore deterministic versus stochastic models, empirically demonstrating stochastic models' advantages in certain settings.

Claims And Evidence: The claims are supported by evidence on a few environments: Garnet MDPs and two environments from the DMC suite. Although these results are insightful, a broader set of environments, possibly including more complex ones, would strengthen the claims.

Methods And Evaluation Criteria: The proposed method is simple yet makes sense for the given problem. As noted above, the provided evaluation criteria are not very thorough, but I think they are enough for a proof of concept.

Theoretical Claims: I have checked the theoretical claims' correctness and found no issues.

Experimental Designs Or Analyses: I have checked the soundness of the experimental designs and found no issues.

Supplementary Material: I have skimmed through the supplementary materials, especially the proofs and implementation details.

Relation To Broader Scientific Literature: Addressing calibration in surrogate losses for stochastic models is highly relevant to the reinforcement learning community, especially for those focusing on model-based approaches.

Essential References Not Discussed: I did not find any essential references that were not discussed.

Other Strengths And Weaknesses:

Strengths
- The paper provides a nice unifying view of the value-aware model learning methods.
- The paper identifies a calibration issue that many value-aware model learning methods can suffer from.
- The proposed method is simple yet seems to address the issue effectively, given the empirical results.

Weaknesses
- Although the empirical experiments are insightful, they are somewhat limited to specific domains and scenarios. A broader set of environments or more extensive empirical validation might strengthen the claims about stochastic vs. deterministic models.

Other Comments Or Suggestions:
- The term sg is not defined until equation (2), even though it is first used in equation (1).
- In Table 1, the $m$ and $b$ conditions in the header columns seem to be shifted by one cell.

Questions For Authors:
- L271: Is it correct that the indices $i$ and $j$ are inside the softmax, not outside?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
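The calibration issue this review summarizes admits a one-line decomposition (a sketch consistent with the review's description, not quoted from the paper): for a fixed bootstrap target $t$ and $k$ i.i.d. model samples $\hat{x}^{(i)} \sim \hat{p}$,

```latex
\mathbb{E}\left[\left(\frac{1}{k}\sum_{i=1}^{k} V(\hat{x}^{(i)}) - t\right)^{2}\right]
= \left(\mathbb{E}_{\hat{p}}\left[V(\hat{x})\right] - t\right)^{2}
+ \frac{1}{k}\,\operatorname{Var}_{\hat{p}}\left[V(\hat{x})\right].
```

The first term is the quantity a value-aware loss should minimize; the second is the spurious variance term that makes the sampled loss an uncalibrated surrogate for stochastic models. Since it depends only on the learned model $\hat{p}$, it can be estimated with extra model samples and subtracted, which is the correction leading to CVAML.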
Rebuttal 1: Thank you for your review. We are thankful you took the time to engage with our paper! For your concerns about the number of environments and experiments, please refer to our reply to reviewer v9zD. Graphs can be found here: https://drive.google.com/file/d/178cVcy05grmQ-dZCFu1p8ixIItgghoxG/view?usp=sharing
Summary: The paper analytically investigates *value aware model learning* (VAML) and puts it into relation with the MuZero loss by defining a generalizing (m,b)-VAML loss. The authors show that both approaches, that is, (1,0)- and (1,1)-VAML, are *uncalibrated* because averaging over samples introduces a variance term, akin to Bellman Residual Minimization (BRM). However, in contrast to BRM, this variance term can be estimated with samples from the learned model, and the authors call the corresponding correction the *corrected VAML loss* (CVAML). CVAML outperforms VAML in a variety of studied cases of (m,b)-VAML on randomly generated toy MDPs. Furthermore, the authors prove that (1,0)-VAML generally allows learning deterministic transition models in latent spaces, even if the true model is stochastic. Classical auxiliary losses for latent space models introduce a bias for all but linear value functions, though. Experiments on two DMC environments demonstrate that using stochastic models can in some cases still be empirically beneficial, although the difference between VAML and CVAML auxiliary losses does not seem to be significant.

In summary, the paper discusses an important aspect of stochastic models in RL. It is generally well written and decently evaluated, but both aspects suffer in places, which makes some contributions very hard to follow. I would recommend acceptance if some ambiguities in the formal notation and in the experimental description were rectified. I have not read the proofs in detail, though, so if other reviewers find faults, I would also be fine with rejecting the paper.

Claims And Evidence:
1. "Iterative VAML and MuZero value-aware model losses are not calibrated, but can be". This claim is shown analytically for simple cases of both losses and evaluated on many randomly generated toy MDPs. The corrected Iterative VAML loss outperforms the corrected MuZero loss significantly, but only on unrealistic toy tasks.
2.
"Deterministic (1,0)-VAML losses in latent spaces are sufficient to be value aware, even if the true dynamics are stochastic". This claim is analytically proven. The authors also find empirically that stochastic models can improve performance in one out of two DMC environments, but attribute this to induced robustness against deviations. Practical differences between CVAML and VAML seem to be insignificant, though.
3. "Auxiliary losses for latent dynamics models help learning, but introduce biases except for value functions that are linear in the latent space". This claim is mostly argued at an example. The empirical evidence only compares model learning as an auxiliary task without clarifying whether the value function was linear in the last layer. The practical relevance of this claim is therefore questionable.

Methods And Evaluation Criteria: The claims are theoretically derived and empirically validated. Both parts can be improved by being more precise in formalism and details.

Theoretical Claims: The first theoretical claim is based on a simple, but elegant and to my knowledge unpublished, insight that learned stochastic models suffer from an additional variance term. The presented formalism, in particular the unified (m,b)-VAML loss (Equation 2), is very complex and in some parts unclear. I understand that the authors wanted to present a unified loss, but it is a bit unclear over which training sets the loss is defined, and how certain inputs and indices affect the loss (as in Proposition 4). The claim that it unifies the IterVAML loss is also quite questionable, as this is only true for $k=1$. However, the main claims on calibration (Propositions 2, 3 and 4) are interesting and correct (with some creative reformulations; please add links to the proofs in the appendix, though). Again, the definition of the TD loss in Equation 4 is hard to parse without any expectation over $x^{(m)}$.
The second theoretical claim (Proposition 5) is interesting, and I have not seen it stated like this before. It might be worth mentioning that bijective latent mappings are hard (or impossible) to come by. I did not follow the proof in detail here, though.

Experimental Designs Or Analyses: The empirical evaluation of the first claim is a bit underwhelming, as randomly generated MDPs rarely have properties similar to real applications. Even an evaluation on gridworlds could improve the analysis. Nonetheless, the results are fairly clear and support the claim. The evaluation of the second claim is a bit unclear: the authors train a model-free algorithm (TD3) with a VAML model-based auxiliary loss. This somewhat contradicts the insight from their claim 3, unless the auxiliary loss is applied to the last layer of the value function (I could not find a mention thereof). Details are generally scarce here. MuZero would have made a much clearer test example, as it actually uses a value-equivalent model for inference. The results are also not very convincing, as CVAML is not significantly better than VAML (except maybe for humanoid, but there the deterministic version is very strong). Please also add the results of vanilla TD3, as it is currently unclear whether VAML improves the performance at all.

Supplementary Material: I scanned over the proofs in the supplementary material, but did not check details.

Relation To Broader Scientific Literature: The paper is in parts very well written and discusses a wide range of relevant literature. The authors are commended for discussing many variants and differences to other methods in the main text.

Essential References Not Discussed: None that I know of.

Other Strengths And Weaknesses: The paper contains a small number of typos and inaccuracies that need to be fixed (see detailed comments).
I also believe the formal notation can be massively improved, as sometimes the authors use losses which have not been explicitly defined, or where the general definition is ambiguous as to how they should be read.

Other Comments Or Suggestions:
- l.61L: the MDP is missing an initial state distribution
- l.105L: it must be $\mathbb E_{\hat x^{(m)} \cdots}$
- Equation 2 is very (and IMHO unnecessarily) confusing. There are 8 parameters, but it seems you don't actually need $x^{(m)}$ (it is always drawn from $\mathcal P^\pi$). Furthermore, this loss only makes sense if you define an expectation over it. How about this: $\hat{\mathcal L}(m, b, p, p', V, V'|x) := \mathbb E[ (V(x^{(m)}) - [[T^b_{P^\pi} V'](x'^{(m)}))^2 | x \sim p, x' \sim p']$?
- Table 1: the (m,b) tuples in the second row have to be shifted one column to the right
- The TD-loss in Equation 4 was very confusing to me. First, it should use $\hat V$, not $V$, as the value is learned. Second, an expectation over it (in the real model) still contains a variance term that cannot be reduced without double-sampling. I think you wanted to define $(\mathbb E_{\hat p}[V(x^{(m)})] - \mathbb E_{\mathcal P^\pi}[r^{(m)} + \gamma V_{tar}(x^{(m+1)})])^2$ to use in Proposition 4.
- Proposition 4 uses a surrogate loss with $\hat p = \mathcal P^\pi$, which does not contain a learned model $\hat p$! I assume you wanted to still use $\hat p$ of MuZero here. With this, the (m,1) loss is indeed uncalibrated.
- Please refer to the appendix where the proofs can be found!
- I don't fully understand the issue exemplified in Equation 5. I would assume that the value loss is minimized for all $m \in \{0, \ldots\}$, so also for $m=0$. Your conclusion seems to be motivated by an empirical finding, not by this theoretical insight. Maybe there is another cause for it?
- Figure 2: I assume these are (1,0)-VAML losses, not (0,1), which would make no sense?
- Insight 1 mentions $\hat{\mathcal L}^{\hat p}_{0,j}$, but it is unclear what changing the superscript would do in Equation 2. Do you mean you learn the value by unrolling the model $\hat p$ for $j$ steps?
- l.265L: mention which distribution the weights $\omega_{i,j}$ are drawn from
- l.248R: how is the value estimated? Analytically from the transition model?
- l.260R: why does (1,1)-VAML perform so poorly in deterministic environments?
- Proposition 5: It is not clear which loss $\hat{\mathcal L}_{IterVAML,1}$ refers to. The expected loss? The approximated one? For which $k$?
- Equation 6: the (m,b)-VAML loss has a different signature than before. What does it mean specifically?
- l.363R: you mean (1,0)-updates!
- Please clarify that the performance improvement of the corrected loss (not calibrated loss) in Section 7 is **not** significant. A trend, not a reliable result!
- l.836: You are proving Proposition 4, not 2! The following propositions have the wrong numbers (except 5).

Questions For Authors:
- Why did you test latent CVAML on a model-free algorithm? Why not MuZero?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Thank you for your kind and thorough review. We are grateful you took the time to engage very thoroughly with our paper! For your concerns about the number of environments and experiments, please refer to our reply to reviewer v9zD. These extended experiments also address concerns about significance. Graphs can be found here: https://drive.google.com/file/d/178cVcy05grmQ-dZCFu1p8ixIItgghoxG/view?usp=sharing

Notation: We share your concerns about the complexity of the notation. The loss has a lot of moving parts, and several past papers struggled to write it out properly in a non-confusing manner. We truly appreciate your suggestions for improvement and will adapt them where possible. As ICML sadly doesn't allow us to upload an updated PDF, we can only sketch out what we plan to change. All unmentioned typos will, of course, be corrected. Thank you for providing a thorough list.

Equation 2: We are happy to simplify the equation as suggested!

Equation 4: You are correct about $\hat{V}$. We will change that. The right-hand side is with respect to the target network and does not contain the model, so it does not have a double-sampling issue. We will clarify this further by adding $[]_\mathrm{sg}$.

Equation 6: We dropped the dependency on $\hat{x}^{(m)}$ (as you suggested for Eq. 2) here already. We will unify this across the paper. We also split the model explicitly into an encoder and a latent transition model; we'll clarify this.

Some concrete replies to confusing bits:

Proposition 4: We are analysing the value learning part of the MuZero loss here, so we simplified the setup by assuming that the learned model matches the ground-truth model. Our result specifically states that even if you assume that the model is perfect (equal to the ground-truth model), the MuZero value learning component will not learn the correct value function.

Equation 5: We will clarify the writing here.
Our concern is that the MuZero loss only enforces that the expectation of the next model state value function will match the bootstrapped target. Your insight is correct if each state x appears in the dataset as x^{(0)}. However, this might not be true due to the dataset being used. Then we cannot guarantee that model-generated states will have correct value estimates. This is not an empirical finding, simply a consequence of the loss formulation. Proposition 5: As we are looking at a deterministic model here, $k$ doesn’t change the loss (every sample would be the same), so we dropped it to declutter the presentation. Insight 1: Indeed, we will clarify the writing here. We meant a regular model-based target estimate with a j-step model rollout, similar to the one used in [1][2][3] L. 248: The ground truth for calculating the MSE is computed via the analytical solution over the transition matrix. l.260R: Why does (1,1)-VAML perform so poorly in deterministic environments? Note that while the environment is deterministic, the model is not. For the corrected (1,1) loss, we find that it is much more prone to get stuck in local minima due to the averaging effects of the MuZero style value update. On the large-scale experiments, we see a similar pattern even with deterministic models. We debated this question a bit among ourselves. Our conclusion is that the (1,1) loss updates the value function on a model-generated sample $\hat{x}^{(1)}$ while the (1,0) loss only updates the value functions on samples which actually appear in the replay buffer. As the model will have errors, this can cause additional difficulties with the loss function even on deterministic environments, as the sample $\hat{x}^{(1)}$ and target $T[V](x^{(1)})$ don’t match even if the model is not stochastic (simply due to model errors). Updating values on generated samples (not just with model-generated targets) therefore introduces an additional source of error. Running TD3: This seems to be a misunderstanding. 
We will improve the writing, thanks for noticing! We did not run TD3, but we understand why this might appear to be the case. We ran TD-MPC1 with two modifications: we replaced SAC in TD-MPC1 with TD3 (which is likely where the confusion arises), and we used the model for value learning (similar to the way [1],[2],[3] use the model). This means the model here is used both for value improvement and also at inference time, as the MPC procedure uses the model and the value estimate for planning. We chose TD-MPC1 over a MuZero variant for a very simple reason: computational efficiency. MuZero is notoriously slow due to the overhead of MCTS and reanalyze, and few efficient open-source implementations exist. As our contribution is first and foremost a theoretical one, we decided to use the more lightweight TD-MPC algorithm, which still contains all the relevant parts of the MuZero algorithm (loss, model-based search, + added model-based value improvement). [1] MBPO https://openreview.net/forum?id=BJg8cHBxUS [2] Dreamer https://openreview.net/forum?id=S1lOTC4tDS [3] MAD-TD https://openreview.net/forum?id=6RtRsg8ZV1
Summary: This paper examines a systematic issue in value-aware model learning (VAML) for reinforcement learning (RL). Core Idea of VAML: Unlike standard model learning, which maximizes log-likelihood, VAML optimizes for a model that results in zero value-function error. In other words, even if the model does not accurately capture the true transition dynamics of the MDP, it is considered "perfect" as long as it produces no value error. Key Contribution of the Paper: The paper highlights a critical issue with VAML-style objectives: when naively estimated empirically, they introduce additional bias. Specifically, the VAML objective requires an expectation over next states inside the squared error term, but empirical estimation places the expectation outside the square. This shift leads to overestimation due to Jensen’s inequality (since the squared function is convex). While the paper describes this as an "uncalibrated" estimate, a more standard term would be "biased." Connection to Value Function Learning: The paper makes a useful connection between this issue and value function learning in RL. A similar overestimation problem arises in residual learning, as described in Baird’s seminal work. Baird’s solution—using two next-state samples—mitigates this bias, though it is impractical in a model-free setting. However, in model-based RL, we can sample from the learned model and explicitly correct for the bias, effectively estimating and subtracting the overestimation error. Practical Implications: While the proposed bias mitigation approach is promising, its practical benefits remain uncertain. Reducing bias often comes at the cost of increased variance, and it is not always clear whether a zero-bias estimator is preferable to one with lower variance. The paper presents empirical evidence suggesting that this tradeoff is worthwhile in certain domains, but further theoretical and empirical validation is needed to confirm its general effectiveness. 
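The overestimation described in this summary can be reproduced with a toy numerical sketch (my own illustration, assuming a Gaussian model value; none of the numbers come from the paper): placing the expectation outside the square inflates the objective by exactly the variance of the model's value samples, as Jensen's inequality predicts.

```python
import random

# Toy illustration (not from the paper): the model's next-state value V(x')
# is Gaussian with mean mu and std sigma; t is the bootstrapped target.
random.seed(0)
mu, sigma, t = 1.0, 0.5, 1.0
samples = [random.gauss(mu, sigma) for _ in range(200_000)]

# Expectation OUTSIDE the square (naive empirical estimate): E[(V - t)^2]
naive = sum((v - t) ** 2 for v in samples) / len(samples)

# Expectation INSIDE the square (the intended objective): (E[V] - t)^2
mean_v = sum(samples) / len(samples)
calibrated = (mean_v - t) ** 2

# By Jensen's inequality the naive estimate overshoots by Var[V] = sigma^2.
print(naive - calibrated)  # ≈ 0.25
```

Here the intended objective is essentially zero (the mean value matches the target), yet the naive estimate reports an error of sigma^2, which depends only on the model's stochasticity.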
Claims And Evidence: The main claim—that VAML-style losses suffer from bias—is well-supported by both theoretical analysis and empirical results. The authors make a sharp observation about how the placement of the expectation affects estimation and introduce a clear argument grounded in Jensen’s inequality. This insight is particularly valuable because it connects VAML’s estimation issues to a well-known challenge in RL: overestimation bias in value function learning. The link to Baird’s residual learning framework strengthens the argument and provides a historical precedent for this type of error. While the paper does not explicitly frame the issue in terms of Jensen’s inequality, doing so makes the reasoning even more intuitive. However, the secondary claim—that removing this bias in the prescribed manner is necessarily beneficial—is less well-supported. While the paper demonstrates that the proposed bias correction technique improves estimates in certain domains, the broader implications remain unclear. A key concern is the classic tradeoff between bias and variance: reducing bias can increase variance, sometimes to the detriment of overall performance. The authors acknowledge this tradeoff but do not thoroughly analyze how their correction method affects variance in different settings. While empirical results show an advantage in some cases, they do not provide a comprehensive theoretical justification or broader empirical validation across a wide range of environments. Further, it is not always the case that a lower-bias estimator is preferable to one with slightly higher bias but significantly lower variance. Some RL methods explicitly tolerate bias in favor of stability and better long-term learning dynamics. A deeper analysis of this tradeoff—both theoretically and through more diverse empirical settings—would strengthen the paper’s argument for the practical benefits of its proposed correction. 
Methods And Evaluation Criteria: Yes Theoretical Claims: I did check the proofs to the best of my ability. I think sometimes the notation is overly complicated. The main text could have presented things in the simplest case possible, and should not have held off the main result on overestimation for so long. In equation (1), the sg notation is not immediately defined (deferred to the next page for some reason). It is also somewhat confusing, because the term we are applying sg to has no dependency on the model itself, so what does it even mean to apply sg to that term? Experimental Designs Or Analyses: My intuition about these VAML models is that while the point about likelihood models being an overkill is a fair one, in practice the correct VAML model might vary radically from one state to the other. This is because the correct next state to choose is the one with the appropriate value function, and this can be a confounding factor. For example, two very different next states may have the same value function. Thus, when doing VAML in the function approximation case, generalization might become a big issue (two input states having radically different outputs). Thus, I find it interesting that this paper is applying VAML to the latent space case where the issue explained above may be less severe. On the negative side, only a small portion of previous work in this space actually uses stochastic VAMLs. If the model is deterministic the bias issue vanishes, as identified by the paper. So while the bias issue is indeed present, it is not quite applicable in practice in light of the majority of papers using deterministic models anyway. I think that the experimental results are quite limited. I would have loved to see the impact of this bias reduction idea in a much larger set of benchmarks. 
Other than the toy setting, the paper is evaluating the effectiveness on just two domains, which I feel is not sufficient to make a strong argument about the usefulness of the idea in practice. I still lean on the acceptance side in light of the nice observation and despite the weakness on the experimental side of things. Supplementary Material: Yes Relation To Broader Scientific Literature: The connection to residual gradient method makes the result better situated. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: I have an ongoing internal debate about the fundamental difference between VAML-based models and classical likelihood-based models, and I would love to hear the authors' perspective on it. To me, VAML seems to introduce a kind of chicken-and-egg problem. The goal in VAML is to learn a model that minimizes value function approximation error when used for planning or learning. However, in the control setting, once the policy changes, the optimal model should also change. This is because the value function under the new policy might have different structure, meaning the previously learned model may no longer be optimal for minimizing value estimation error in the new setting. As a result, every policy update implies a corresponding update to the model, and every model update affects the policy, creating a tightly coupled iterative process. This stands in contrast to classical likelihood-based models, where the objective is to approximate the true environment dynamics, which remain fixed regardless of policy updates. In this case, learning the model is a separate, more stable problem, and once the model is learned, it does not need to be modified with every policy update. Given this difference, I’m curious how the authors view the stability and convergence properties of VAML-style models in iterative control settings. 
Does the need to repeatedly adapt the model with every policy change introduce instability or inefficiencies? Additionally, do the authors see scenarios where this iterative adaptation of the model and policy could be an advantage rather than a liability? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your kind and thorough review. We are grateful for your comments. For your concerns on the number of environments, please refer to our reply to reviewer v9zD. Graphs can be found here https://drive.google.com/file/d/178cVcy05grmQ-dZCFu1p8ixIItgghoxG/view?usp=sharing Bias-variance tradeoff: This is an excellent question and indeed the reason we chose to use the more technical term “calibration” instead of the term bias. We didn’t want to use “bias” as we felt it would be conflated too much with the phenomenon of bias and variance due to model architecture. However, in our case, we are talking about the bias of an estimator, not one stemming from a model class. As ML and statistical terminology are not always used in a consistent way across the literature, we felt that using the more specific idea of a calibrated surrogate loss would improve understanding. Our method does not induce a classic bias-variance tradeoff, as we can easily drive both terms to 0. Instead, we have a variance-compute tradeoff, as we can always draw more samples from our model to reduce the variance of the estimator, but that comes at the expense of additional computational steps. However, even for statistical estimators like ours, the unbiased estimator is indeed not guaranteed to be a minimum variance one for a fixed sample size. So there might be model classes and learning problems where the biased estimate does better with a fixed number of samples. However, as our method allows us to decrease the variance arbitrarily (with additional computational resources) we believe that it is still an important extension to the uncorrected estimator, which will result in non-zero error even in the limit of infinite model samples. Deterministic models: You are correct that the majority of published algorithms use a deterministic model. However, this is more of a historical issue than a reflection of the superiority of deterministic models. 
Most benchmark environments have a negligible amount of transition noise, and so using a stochastic model is often unnecessary. However, this is an artifact of the benchmarks the community has settled on, not really a condemnation of stochastic models. General concerns about VAML: The questions you raise are important and we believe unsettled in the literature so far. To the best of our knowledge, all current SOTA model-based RL approaches use some form of decision-aware model learning (DAML) loss [1][2][3][4][5]. Note that Dreamer [6] is an outlier, but seems to be outperformed in many cases by one of these approaches. However, constantly changing the target is indeed a concern. There is one interesting thing about the difference between (corrected) MuZero and IterVAML: For the true value function, the ground truth model is also a perfect model under the (corrected) VAML (MuZero and IterVAML) loss. However, for any other value function, this is still true for the IterVAML loss, but not for the MuZero loss. This might be an interesting insight as the VAML loss has at least one stable solution, while the MuZero loss does not enjoy this property. We added ablations to investigate your question further and were unable to achieve strong performance without the VAML or auxiliary components, so both seem important for these architectures. [1] Efficient MuZero https://openreview.net/forum?id=OKrNPg3xR3T [2] Efficient MuZero v2 https://arxiv.org/abs/2403.00564 [3] TD-MPC1 https://www.nicklashansen.com/td-mpc/ [4] TD-MPC2 https://openreview.net/forum?id=Oxh5CstDJU [5] MAD-TD https://openreview.net/forum?id=6RtRsg8ZV1 [6] Dreamer https://openreview.net/forum?id=S1lOTC4tDS
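The variance-compute tradeoff discussed in this rebuttal can be sketched numerically (a toy Gaussian model of my own, not from the paper): the Baird-style two-sample product estimator is unbiased for the calibrated squared error, and averaging more model samples buys lower variance at the cost of extra compute.

```python
import random

# Toy sketch (my own; assumes the model's next-state value is Gaussian).
# The two-sample product (V(x1) - t) * (V(x2) - t), with x1, x2 drawn
# independently from the model, is unbiased for (E[V] - t)^2.
random.seed(1)
mu, sigma, t = 1.0, 0.5, 0.0  # true calibrated loss: (mu - t)^2 = 1.0

def two_sample_estimate(k):
    # Average of k independent two-sample products; larger k means
    # lower variance at the cost of more model samples.
    total = 0.0
    for _ in range(k):
        v1, v2 = random.gauss(mu, sigma), random.gauss(mu, sigma)
        total += (v1 - t) * (v2 - t)
    return total / k

# Across many trials the estimator centers on 1.0 regardless of sigma,
# unlike the single-sample squared error, which would center on 1.25.
trials = [two_sample_estimate(8) for _ in range(50_000)]
print(sum(trials) / len(trials))  # ≈ 1.0
```

This matches the rebuttal's point: the bias can be driven to zero, and the remaining variance can be reduced arbitrarily by drawing more model samples per update.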
Summary: Value-aware model learning losses penalize the model when its value-function prediction is wrong. This work theoretically investigates these losses and shows that generally they are uncalibrated surrogate losses. They devise corrective measures for the losses. They provide experimental results on the DMC control suite for their loss variant. They also show that learning a deterministic model can be sufficient but learning a calibrated stochastic model is more beneficial. Claims And Evidence: The claims are well supported by the theoretical evidence. They clearly pose the research questions and conduct relevant analysis. Methods And Evaluation Criteria: As the core of the Value-Aware Model Learning (VAML) framework, they consider the (m, b)-VAML family of losses and exhaustively consider the value categories for m and b. The analysis of variance and bias for a stochastic model of the environment is crucial to understand loss performance. Further, the discussion on auxiliary losses makes the discussion comprehensive. Theoretical Claims: They claim that the (m, b)-VAML family of losses is uncalibrated under a stochastic model. They successfully demonstrate that the error is dependent on the model’s stochasticity rather than the environment. They nicely carried out the analysis to show the advantage of the calibrated losses. Experimental Designs Or Analyses: The experiments are relevant to the research questions. However, the experiments are limited to two DMC environments. To make the analysis robust, more experiments on diverse environments are crucial. Supplementary Material: The supplementary material contains detailed proofs and implementation details. Relation To Broader Scientific Literature: The scope of the paper is a bit narrow. It considers a certain aspect of model-based RL. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper identifies key insights that would be very helpful to understand the relation between the model representation and loss functions. The proposed corrective loss components are of marginal novelty. Other Comments Or Suggestions: The paper should clearly state the definition of calibration early on. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. For your concerns on the number of environments, please refer to our reply to reviewer v9zD. Additional graphs can be found here https://drive.google.com/file/d/178cVcy05grmQ-dZCFu1p8ixIItgghoxG/view?usp=sharing Regarding your concern about novelty: while we agree that our focus is on model-based RL and therefore “narrow” in the sense that it targets a well-defined subarea of research, our results are, to the best of our knowledge, novel and unpublished. Model-based RL is also a mainstay topic at all major machine learning conferences, and we are therefore confident that this venue is an excellent place to disseminate our research.
Summary: The paper studies the family of value-aware model learning losses in model-based reinforcement learning, including MuZero loss. By theoretical analysis, it shows these losses are essentially uncalibrated surrogate losses. Then it proposes corrections for the losses. Experiments are conducted to show the correction to the losses is effective to obtain strong models. Claims And Evidence: The claims are well supported by evidence. Methods And Evaluation Criteria: The proposed methods and the evaluation criteria make sense. Theoretical Claims: Yes. The proofs for the theoretical claims are correct. Experimental Designs Or Analyses: The experiment designs and analyses are generally sound. However, in Figure 4 the CVAML is not significantly better than VAML, which makes the claim less convincing. Supplementary Material: No. Relation To Broader Scientific Literature: The paper is related to the losses used in value-aware model training. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strength - The paper provides sound theoretical proof of the nature of losses used in the value-aware model learning, and proposes effective correction to them. - The paper is well-written and easy to follow. Weakness - More experiments are needed to support the claims, e.g. about stochastic and deterministic models. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. We are using this reply as a general reply since all reviewers raised similar concerns and this review is the first one that appears on OpenReview. As ICML does not allow an updated manuscript, but all reviewers are mostly concerned about empirical questions, we provide additional experiments under this URL https://drive.google.com/file/d/178cVcy05grmQ-dZCFu1p8ixIItgghoxG/view?usp=sharing ### Number of environments We acknowledge that our experiments cover fewer environments than other, more empirically focused papers. However, we see our main contribution in the theoretical aspects of our work. Therefore, our focus was not on providing a broad benchmarking comparison. We want to point out that we did not compare on 2, but on the 7 most challenging DMC tasks. We acknowledge that this was not stressed clearly in the paper and will update the writing accordingly. The graphs in Figure 4 show aggregated results over several tasks on each locomotion robot. We will provide individual curves in an updated document in the rebuttal URL. We would like to stress that this is already a broader comparison than, e.g., papers which compare on the standard OpenAI Gym MuJoCo environments [1,2,3], and the DMC dog and humanoid tasks are significantly more challenging. [1] CrossQ https://openreview.net/forum?id=PczQtTsTIX [2] MBPO https://openreview.net/forum?id=BJg8cHBxUS [3] ALM https://openreview.net/forum?id=MQcmfgRxf7a To provide additional evidence for our claims, we ran the following additional experiments: - We ran all the environments to 2,000,000 environment steps. This creates a clearer picture: the humanoid confidence intervals for (1,1)-VAML and CVAML do not overlap anymore. - We ran additional experiments as requested by the reviewers: - We ablated the TD-MPC1 model used by removing the auxiliary loss and the decision-aware loss respectively. We show that both variants decrease in performance. 
- We added a model-free TD3 baseline using the TD-MPC1 architecture (without model) for a full comparison. We show that this does not result in strong performance, especially on hard humanoid tasks. As requested by reviewer 9s7Q, we added a grid world experiment in which we conduct policy iteration. A general note regarding the scope of the empirical comparison: If the reviewers feel that specific environments would greatly aid in understanding the implications of our findings, we are happy to add them. We did so in the case of the gridworlds mentioned by reviewer 9s7Q. Otherwise, without concrete recommendations, it is hard to scope a proper reply to a general call for “more experiments”. Thus, we were wondering whether you had any particular environments in mind.
CodeIO: Condensing Reasoning Patterns via Code Input-Output Prediction
Accept (oral)
Summary: The paper reports on work on generating training data for reasoning tasks from code. The method generates training examples from code by using an LLM to generate the query, input-output pairs with their reasoning chains, and input predictions from the output together with their reasoning chains. The data is used for fine-tuning an LLM before instruction tuning. Experiments show that the method obtains improved performance on many reasoning benchmarks for multiple LLMs. ## update after rebuttal Thanks for the rebuttal. From the rebuttal, it seems that the process is creating more training examples to reinforce other data sources, rather than data that create behaviour change. I still think this is useful work and maintain my score. Claims And Evidence: The paper claims that the training examples constructed as they describe in the paper provide universal reasoning primitives. Experiments show consistent improvements over many benchmarks, hence support a claim that the method is useful, although "universal" is a strong claim and is not defined in the paper. Methods And Evaluation Criteria: The experiments are done over a wide range of reasoning benchmarks and LLMs, hence seem appropriate. Theoretical Claims: No theoretical claims in the paper. Experimental Designs Or Analyses: Multiple ablation studies are done and they appear appropriate. Supplementary Material: I only scanned through the supplementary material. Relation To Broader Scientific Literature: There is a lot of work on code generation but to my knowledge, the use of code to generate examples to improve general reasoning is new. Essential References Not Discussed: I am not aware of essential references that are not discussed. Other Strengths And Weaknesses: The paper shows that code can be used to generate training examples that are useful for learning to reason. Of interest are the particular types of training examples that are useful. 
The paper shows that one type of useful example generated using code is example inputs together with the chain of thought to generate the output. This is interesting but somewhat expected. The other reasoning pattern that is useful is less expected -- from an output generate a valid input, with the corresponding chain of thought. Other Comments Or Suggestions: It would be useful for the authors to summarize the list of insights gained from their work in the introduction. Questions For Authors: What are the helpful reasoning patterns provided by the examples that are generated from code? A qualitative study of validation set (not test set) problems that change from incorrect to correct after inclusion of the new training set may provide useful insights. Code Of Conduct: Affirmed. Overall Recommendation: 3
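To make the two kinds of training example discussed in this review concrete, here is a minimal sketch (the function, the I/O pair, and the candidate input are my own hypothetical illustration, not taken from the paper's dataset):

```python
# Hypothetical illustration of the two task types; the function and the
# I/O pair below are invented for this sketch, not drawn from the paper.
def count_evens(nums):
    """Reference function that gets turned into training examples."""
    return sum(1 for n in nums if n % 2 == 0)

io_pair = {"input": {"nums": [3, 4, 7, 8, 10]}, "output": 3}

# Output prediction: given the function and the input, predict the output
# (the model writes a natural-language chain of thought; execution checks it).
assert count_evens(**io_pair["input"]) == io_pair["output"]

# Input inference: given the function and the output, propose any feasible
# input; the proposal is verifiable by simply running the code.
candidate = {"nums": [2, 4, 6]}
assert count_evens(**candidate) == io_pair["output"]
```

The second task is the less expected one the review highlights: the inferred input need not be unique, only feasible, which is exactly what makes code execution a cheap verifier for the chain of thought.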
Rebuttal 1: Rebuttal: Thank you for the time and effort you spent reviewing our paper, and for recognizing our contributions. Below are our responses: # Q1 > "Universal gains/effectiveness" is a strong claim and is not defined in the paper. Thanks for this comment, we will change this statement containing “universal” to a more precise one, e.g., “CodeI/O and CodeI/O++ demonstrate performance improvements across models of various sizes and architectures on most benchmarks, although we also observed nearly unchanged or even decreased performance on a small number of tasks.” # Q2 > Summarize the list of insights. Thanks for the suggestion. We try to summarize some as follows: 1) Learning reasoning patterns via input/output prediction in natural language offers more general and robust reasoning abilities than raw code pre-training or function generation. 2) Code's verifiability improves data quality through regenerating incorrect predictions and concatenating multi-turn responses, encouraging model self-reflection. 3) Learning from code via input/output prediction enhances broader downstream reasoning tasks compared to knowledge-intensive instruction tuning (e.g., WebInstruct, OpenCoder-SFT) or single-domain data (e.g., OpenMathInstruct-2). We will merge them in the introduction and update our paper accordingly in the next version. # Q3 > Helpful reasoning patterns in CodeI/O and qualitative study. We provide some examples of generated responses in the CodeI/O dataset with certain reasoning patterns as follows: *Visual aided reasoning:* ``` … We can visualize the tree structure as follows: 1 / \ 1 1 / \ None 1 / \ / \ None None 1 1 / \ None 1 ... ``` *Verification and self-correction:* ``` … 5. Fifth Attempt: Let’s try poly=[4,6,10] and x=2. Using Horner’s method: \[ ret=4 ret=4*2+6=14 red=14*2+10=38 \] This gives us 38, which is close but still not exact. 6. Final Attempt: Let’s try poly=[4,6,10.7] and x=2. 
Using Horner’s method: \[ ret=4 ret=4*2+6=14 red=14*2+10.7=38.7 \] This gives us the exact output 38.7. Thus, the feasible input is {“input”: {“poly”: [4,6,10.7], “x”:2} ``` *State-space searching and backtracking:* ``` … #### DFS Execution: - Start from node 0: - Sum at node 0: `1` - Move to node 2: - Sum at node 2: `1 + 1 = 2` - Move to node 1: - Sum at node 1: `2 + 8 = 10` - Move to node 3: - Sum at node 3: `10 + 4 = 14` - Move back to node 1: - Sum at node 1: `10` (backtrack) - Move to node 4: - Sum at node 4: `2 + 4 = 6` - Move back to node 2: - Sum at node 2: `2` (backtrack) - Move back to node 0: - Sum at node 0: `1` (backtrack) … ``` *Decision tree traversal:* ``` … - 197: Check if it is prime. - 197 is not divisible by 2 (it's odd). - Check divisibility by 3, 5, 7, etc., up to the square root of 197 (approximately 14.03). - 197 is not divisible by 3 (197%3!=0). - 197 is not divisible by 5 (197%5!=0). - 197 is not divisible by 7 (197%7!=0). - 197 is not divisible by 11 (197%11!=0). - 197 is not divisible by 13 (197%13!=0). - Since 197 is not divisible by any of these numbers, it is a prime number. … ``` *Sub-task decomposition* ``` … Given the output list [27, 19, 28, 18, 25, 54] … Let's break down each output value: 1. For 27: 27=4X+Y Possible pairs (X, Y) that satisfy this equation: - X=6, Y=3 (since 4*6+3=27) 1. For 19: 19=4X+Y Possible pairs (X, Y) that satisfy this equation: - X=4, Y=3 (since 4*4+3=19) … ``` These examples in CodeI/O show that diverse reasoning patterns can be captured in the training set. However, there is no significant behavioral change when comparing models that are only instruction-tuned (baseline) and models with extra first-stage CodeI/O training. For example, most changes from incorrect to correct answers are due to avoiding simple and obvious mistakes, as follows: *Question:* ``` Jenna starts out with 8 sapphires. She trades 3 sapphires for two rubies. 
If sapphires are worth $800 and rubies are worth $1200, how much money are all her jewels worth? ``` *Baseline wrong response (it does not notice that 3 sapphires have been traded):* ``` … Adding the value of her sapphires and rubies, the total value of all her jewels is $6400 (sapphires) + $2400 (rubies) = $8800. So the answer is $\boxed{8800}$. ``` *CodeI/O correct response:* ``` … So, after the trade, Jenna has $6400 - $2400 = $4000 worth of sapphires left. … Adding the value of her sapphires and rubies together, Jenna's total worth of jewels is $4000 + $2400 = $6400. So the answer is $\boxed{6400}$. ``` We hypothesize that 2nd stage instruction tuning on shared data leads models to converge to similar states rather than develop distinct response patterns. More in-depth analysis is needed to further understand this subtle behavior change, and we leave it as important future work.
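As a quick sanity check of the Horner's-method trace quoted in this rebuttal, a few lines of Python (my own, not from the dataset) reproduce the quoted evaluations:

```python
def horner(poly, x):
    # Evaluate a polynomial with coefficients [a_n, ..., a_0] at x
    # via Horner's rule, mirroring the step-by-step trace quoted above.
    ret = 0
    for coeff in poly:
        ret = ret * x + coeff
    return ret

# The two attempts traced in the quoted response:
print(horner([4, 6, 10], 2))    # → 38
print(horner([4, 6, 10.7], 2))  # ≈ 38.7
```

Both quoted attempts check out, which illustrates the rebuttal's point that such reasoning traces are mechanically verifiable by execution.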
Summary: The paper introduces a training paradigm where models are taught to predict input–output pairs from code and accompanying test cases. The key idea is to leverage the structured nature of code to instill reasoning skills while preserving procedural rigor. In practice, the authors transform raw code into executable functions and frame tasks as either predicting execution outputs from a given function and query, or inferring feasible inputs from desired outputs. They also incorporate a multi-turn revision mechanism intended to correct initial errors. Experimental results, including comparisons of query + code versus chain-of-thought (CoT) + code setups, demonstrate promising improvements—though the gains from additional revision turns appear to taper off. ## Update after rebuttal I confirm that I have read the author response and I appreciate the detailed, easy-to-follow rebuttal. I still hold a positive opinion about this paper, thanks! Claims And Evidence: The paper claims that using structured code formats for input–output prediction can better capture reasoning signals compared to conventional pre-training on raw code. It also asserts that multi-turn revision improves output quality by correcting initial errors. However, while there is experimental evidence supporting some of these claims (e.g., performance gains in certain setups), concerns remain: * The evidence that diverse reasoning patterns—especially those not naturally aligned with procedural logic—are fully captured remains limited. * The diminishing gains after the first revision turn suggest that the multi-turn mechanism may not robustly address error propagation. * Reliance on generated responses (via Deepseek-v2.5) raises questions about the consistency and validity of the training data. Methods And Evaluation Criteria: I think the methods at hand are sound. The proposed method focuses on recasting code execution as a natural-language input-output prediction task. 
Several key elements are (1) transforming code into executable functions and defining dual prediction tasks (output prediction and input inference), and (2) implementing a multi-turn revision strategy to refine reasoning chains. Theoretical Claims: I don't think the paper makes theoretical claims, as it seems to follow the typical SFT process. Experimental Designs Or Analyses: I have checked the experiment designs and I think they are valid and extensive. One concern is that in Table 3, I’m curious about the performance of using query+code in the prompt and letting the response be both CoT+Code. Since it seems like these two setups have both promising results across different metrics. Supplementary Material: No, I have not checked the appendix. Relation To Broader Scientific Literature: I think this paper contributes to the intersection of code generation and CoT-like reasoning. It builds on recent successes in using structured training data (from code and math tasks) to enhance reasoning in language models and connects with work on chain-of-thought prompting and iterative refinement. However, it could benefit from a deeper discussion of how its approach compares to methods that tackle abstract, non-linear reasoning, and why code-centric logic might limit exposure to broader reasoning patterns. Essential References Not Discussed: I think the paper might want to discuss more regarding the recent advances in multi-turn revision or iterative refinement to address the error propagation in LLM outputs. Other Strengths And Weaknesses: Strengths: * Innovative combination of structured code training with natural language reasoning tasks. * Utilization of dual tasks (input inference and output prediction) that tap into the logic of executable functions. * A comprehensive experimental evaluation that explores different prompt and revision strategies. * The paper is easy to follow. 
Weaknesses: * Potential bias due to the selection criteria for code examples, which may not capture the full diversity of reasoning patterns. * The multi-turn revision mechanism shows diminishing returns, and error propagation remains a concern. * Key concepts (e.g., the deterministic reverse function) are insufficiently defined and justified, which may undermine the generality of the approach. Please see the detailed feedback in the QA section. Other Comments Or Suggestions: Please see the detailed feedback in the QA section. Questions For Authors: I think this paper should be considered a good attempt at combining a structured format with natural language reasoning tasks. However, I doubt that diverse reasoning patterns can be fully captured by the code format, as some reasoning cases, such as theorem proving, do not closely mirror procedural code logic. Another concern is that although the authors aggregate code from multiple sources, the selection criteria (e.g., filtering for complexity) might bias the dataset toward certain types of reasoning or programming styles, potentially limiting the model's exposure to broader reasoning scenarios. To be clearer, recasting code execution as natural language input-output prediction may oversimplify or misrepresent non-procedural reasoning. By focusing on code-centric logic, the approach might neglect abstract, non-linear reasoning patterns inherent in broader tasks, calling into question the generality of the improvements. Another concern regards the multi-turn revision: although it is intended to correct initial errors, the reported gains diminish after the first turn. This suggests that the added complexity may not translate into substantial performance improvements, and the error propagation risks remain insufficiently addressed. In Table 3, I'm curious about the combination of using query+code in the prompt and letting the response be both CoT+Code? 
It seems that these two setups both have promising results. I have some concerns regarding the collection of the training data, especially the fact that all the responses for input-output pairs are generated by Deepseek-v2.5. Figure 1 mentions that the CoTs can undergo optional revisions to further enhance the reasoning chains. I want to see more evidence for this claim, as I think the quality of the responses is vital for the whole system; specific concerns include (1) how to ensure that the generated responses follow a valid logic flow from the user query, and (2) how to balance the complexity of the generated CoTs. I'm confused by the statement from Line 167 to 178: what do you mean by the deterministic reverse function? Why is it important? I'm curious, compared to CodeI/O, how many instances can be fixed in CodeI/O++? The results in Table 1 (w/ CodeI/O and w/ CodeI/O++) are quite similar, and in some setups CodeI/O++ even results in worse performance; does this mean that even with additional verification and regeneration, some errors still cannot be fixed? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the time and effort you spent reviewing our paper, and for recognizing our contributions. Below are our responses: # Q1 > Diverse reasoning patterns may not be fully captured by code & Compare to other methods that tackle abstract, non-linear reasoning Thanks for this comment. We agree that not every reasoning pattern exists in code. However, as shown in the case study (response to reviewer z6HH Q3), many foundational patterns can indeed be identified. We also tested on domains that are rare in code, e.g., medical reasoning (response to reviewer qybs Q7), and observed gains, indicating that the improvement is generalizable to some extent. Regarding comparison with other methods that tackle abstract, non-linear reasoning, we provide a discussion about neuro-symbolic systems (response to reviewer qybs Q4). # Q2 > Code-centric logic might limit exposure to broader reasoning patterns: the selection criteria (e.g., filtering for complexity) might bias the dataset Thanks for this comment. Actually, the selection criteria are set to mitigate bias rather than enhance it. In the CodeMix source, most samples are pure algorithms, so we deliberately filtered out pure algorithms in the PyEdu source and did not apply complexity-based filtering. However, we also acknowledge that certain biases may still exist, as we only include executable code with proper JSON input/output formats. We leave this issue for future work to explore. # Q3 > Concerns on error propagation in multi-turn revision and discussion on recent advances in related topics. Thanks for this comment. We listed the statistics about multi-turn revision in Fig 7, Appendix D of the submission. Actually, only a small number of errors (16% in input pred and 10.7% in output pred) can be fixed. The value becomes even smaller if a further round of revision is conducted. As a result, only a slight improvement from this revision is expected. 
Potential reasons may be that DeepSeek-V2.5 still lacks the ability to revise. We did not tune this part with great effort, as we only wanted to try this as a preliminary attempt to utilize the verifiable nature of code. On the other hand, there are also directions we could integrate into our workflow to enhance multi-turn revision effects, for example, involving multi-agent debate and discussion [1], interactive critiquing with diverse tools [2], or using models with strong self-reflection abilities [3]. We also plan to explore them in our future work to address error propagation more effectively. [1] Improving Factuality and Reasoning in Language Models through Multiagent Debate; ICML 2024 [2] CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing; ICLR 2024 [3] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning; arXiv:2501.12948 # Q4 > The combination of using query+code in the prompt and letting the response be both CoT+Code in Table 3 Thank you for this question. The two Code parts in the prompt and response are actually identical - both refer to the reference code (see Table 8 for an example). If we included Code in both parts, the model would just learn to copy a block of content, which would bring no benefit in learning. Therefore, we did not include this variant in the submission. # Q5 > Quality concerns on the generated responses in CodeI/O: 1) whether they follow a valid logic flow from the user query, 2) how to balance the complexity of the generated CoTs Thank you for this comment. Our observations show that responses with correct predictions usually demonstrate valid logical flow. While removing incorrect responses seems intuitive for improving quality, our results (Table 2, rows 1 and 5) show this actually degrades performance. We hypothesize that such filtering, though ensuring valid logic, reduces exposure to diverse reasoning patterns, particularly difficult ones. 
Besides, our two-stage training approach helps mitigate these flaws, as high-quality second-stage data can remediate first-stage issues. Regarding CoT complexity balancing, we find that CoT complexity naturally corresponds to the input/output prediction complexity. Our sampling across data sources implicitly covers different complexity levels, though we didn't explicitly balance this in a fine-grained manner; we agree this is an important direction and leave it for future work to explore. # Q6 > What is the deterministic reverse function (Line 167 to 178) and why is it important? Reverse functions take an output and return a feasible input for the original functions. If they are deterministic, one output maps to exactly one input, and we can just use the execution trajectory (i.e., print the executed lines of code and the intermediate variables sequentially) as the perfect responses. However, since multiple inputs often produce the same output, truly deterministic reverse functions rarely exist. This is partly why we use DeepSeek-V2.5 to generate responses directly. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed, easy-to-follow responses. I have read them and still hold a positive opinion about this paper. Thanks! --- Reply to Comment 1.1.1: Comment: Thanks for your kind words in the response and your positive feedback on our paper once again!
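The (non-)determinism of reverse functions discussed in Q6 can be made concrete with a minimal sketch. This is an illustrative, hypothetical example (`count_evens` and `reverse_count_evens` are not from the paper's dataset):

```python
# Illustrative sketch of why deterministic reverse functions rarely
# exist: many inputs map to the same output, so a "reverse" can only
# return *some* feasible input, not a unique one.

def count_evens(nums):
    """Forward function: input -> output."""
    return sum(1 for x in nums if x % 2 == 0)

def reverse_count_evens(k):
    """One possible reverse: any list with exactly k even numbers works."""
    return [2] * k

# [4, 6, 1] and [2, 2] both yield 2, so no unique input can be recovered
# from the output alone, and an execution trace cannot serve as the
# single gold response for input prediction.
assert count_evens([4, 6, 1]) == count_evens(reverse_count_evens(2)) == 2
```

Because the output-to-input mapping is one-to-many, a sampled model response can only be checked for feasibility (does the predicted input reproduce the output?), not for equality with a reference input.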
Summary: The paper introduces CODEI/O, a novel approach designed to enhance the reasoning capabilities of large language models by leveraging code input-output prediction. The key idea is to transform code into a format where models predict inputs or outputs given a function, while reasoning in natural language using CoT rationales. The methodology aims to expose models to fundamental reasoning patterns, such as logic flow planning, state-space searching, and modular decomposition, without being constrained by code-specific syntax. The paper further enhances this method with CODEI/O++, which incorporates multi-turn revisions to refine CoTs. Experiments across multiple reasoning benchmarks demonstrate that CODEI/O and CODEI/O++ improve performance not only on code-related tasks but also on more general reasoning challenges, achieving more balanced and generalizable results than existing datasets. Claims And Evidence: Most of the claims are well evaluated. One issue is that the paper claims that CODEI/O systematically condenses reasoning patterns embedded in code and enhances reasoning capabilities across various domains. For the claim, "We believe that real-world code programs reflect the integration of a wide range of reasoning patterns across diverse contexts, making them an ideal source for training while minimizing the risk of overfitting", the paper should give more evidence. For example, some case studies or key observations could be provided to explain what similar patterns they share. Methods And Evaluation Criteria: Yes. The methods make sense for enhancing reasoning capabilities. The experimental setup, including the comprehensive benchmarks and a comparison against strong baselines such as OpenMathInstruct, WebInstruct, and OpenCoder-SFT, is robust and provides clear evidence of the effectiveness of CODEI/O. Theoretical Claims: The paper does not provide theoretical claims. Experimental Designs Or Analyses: Yes. 
The experiments are well-designed. However, the paper lacks some qualitative analysis to show why the CODEI/O dataset is more effective than others. Moreover, the potential data leak risks from the collected code should be discussed. Supplementary Material: The supplementary material is not explicitly provided, but the dataset construction and benchmark details are comprehensively discussed in the given appendix. Relation To Broader Scientific Literature: The paper positions CODEI/O as a bridge between code reasoning and broader natural language reasoning, distinguishing itself from prior works that focus purely on code execution (e.g., CRUXEval) or task-specific data augmentation. It builds upon previous work on structured reasoning datasets but extends the concept by systematically curating input-output mappings from code to enhance general model training. The authors reference relevant studies on code reasoning, execution-based learning, and dataset construction. Essential References Not Discussed: One potential gap is a comparison with neuro-symbolic integration methods, which also attempt to abstract structured reasoning. Some discussion and comparison could be added. Other Strengths And Weaknesses: The approach is novel in that it reformulates reasoning as a structured code input-output prediction task with natural language CoTs, a departure from standard pre-training or direct instruction tuning. The work has strong implications for improving LLMs' general reasoning ability beyond just code-related tasks. Potential Weaknesses: - While the method is well-validated experimentally, there is limited discussion on potential biases and data leaks in the collected dataset. - Further insights into model interpretability and why CODEI/O leads to improvements across different reasoning tasks would be valuable. - The training effort should be clearly reported, such as the number of GPUs used and the training time. Other Comments Or Suggestions: N.A. 
Questions For Authors: 1. How does the performance of CODEI/O compare when tested on completely unseen reasoning categories that may not align with the selected code dataset (e.g., medical domain or others)? 2. Have you analyzed whether the model is truly learning structured reasoning patterns, or if it is simply memorizing input-output mappings? Can you provide qualitative examples of reasoning improvements or the potential patterns? 3. How does CODEI/O perform when fine-tuned on smaller models (e.g., 1.5B parameters)? 4. Could you provide the training cost details? 5. How do you handle the potential bias or data leaks in the collected code dataset? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the time and effort you spent reviewing our paper, and for recognizing our contributions. Below are our responses: # Q1 > Evidence to show code integrates diverse reasoning patterns We conduct a case study on the CodeI/O dataset, and witness typical examples with certain reasoning patterns. Please refer to our response to Q3 of Reviewer z6HH for more details. # Q2 > Qualitative analysis to show why the CODEI/O dataset is more effective than others We compare samples from CodeI/O and other baseline datasets. Key differences are: 1) Structured Reasoning Process: CodeI/O shows a standard problem-solving method with clear steps, while other datasets show more varied and less ordered reasoning. 2) Algorithmic Thinking Pattern: CodeI/O shows orderly step-by-step processes, unlike the more direct shortcuts in other datasets. 3) Complete Reasoning Traces: CodeI/O gives full traces that record the whole reasoning process, making it better for training models to explain their thinking. These differences stem from both the input/output prediction task and our DeepSeek-V2.5 response generation. We will add these analyses to our next paper version. # Q3 > Data leak risks We conduct a strict 13-gram-based leakage detection on CodeI/O data following [1]; the results are as follows: |Benchmark|Leakage Ratio (%)| |-|-| |LeetCode-O|21.5(950/4414)| |KorBench|5.1(64/1250)| |MATH/MMLU/CRUXEval|0.1| |Others|0| We see most of the benchmarks are not leaked. Upon manual inspection of the two benchmarks with high ratios: - KorBench overlaps only contain general descriptions like Sudoku rules or common letter sequences ("A B C D...") rather than specific questions – our training tasks and the benchmark tasks are completely different. 
- LeetCode-O overlaps stem from sibling problems sharing common descriptions (e.g., Two Sum I & II), even though we have removed all original problems from our training data. To further detect whether the gains on these two are due to data leakage, we calculated the sample-wise accuracy gains of CodeI/O compared to the baseline on both the full set (F) and the non-leaked set (UN). The results are as follows: | |LeetCode-O (F)|LeetCode-O (UN)|KorBench (F)|KorBench (UN)| |-|-|-|-|-| |Qwen|3.8|3.9|5.8|6.1| |LLaMA|9.4|9.4|1.0|0.9| |DSLite|5.3|5.7|-1.2|-1.3| |Gemma|3.7|3.9|1.4|1.3| The similar gains on both the full and unleaked subsets across all models confirm that our improvements are not affected by data leakage. These analyses will be in our next paper version. [1] Evaluation data contamination in LLMs: how do we measure it and (when) does it matter? arXiv:2411.03923 # Q4 > Discussions on neuro-symbolic methods Thanks for this comment. Our work can also be regarded as neuro-symbolic integration, specifically in the "neuro:symbolic->neuro" category per [1]. We use symbolic rules (Python execution) to guide LLM training, but rely solely on neural components in inference. This mainly differs from other categories like Symbolic[Neuro], Neuro[Symbolic], or Neuro|Symbolic, which utilize both techniques in inference. Further discussion will be included in the next revision. [1] Towards Cognitive AI Systems: a Survey and Prospective on Neuro-Symbolic AI, arXiv 2401.01040 # Q5 > Further insights into interpretability & Qualitative examples of reasoning improvements Thanks for this advice. We conduct a case study on model behavior in response to Q3 of Reviewer z6HH. Please kindly refer to that for details. Also, as the input-output prediction accuracy is only about 50% in CodeI/O data, it's hard to memorize the data and hack the benchmarks, and the gains should mostly come from the underlying reasoning logic flow. 
# Q6 > Training costs |Model|# of GPUs (40GB A100)|Stage1 (CodeI/O) (hrs)|Stage1 (CodeI/O++) (hrs)|Stage2 (hrs)| |-|-|-|-|-| |Qwen|80|5.8|7.5|3.5| |LLaMA|80|6.15|8.4|4.0| |DSLite|80|4.4|7.0|2.5| |Gemma|160|14.0|18.5|7.5| # Q7 > Performance on unseen reasoning categories We test on two medical reasoning tasks for complex clinical diagnosis: MedQA (US subset) [1] and MedBullets [2]. The results on Qwen 2.5 Coder 7B are as follows, indicating CodeI/O can also improve categories that may not align with code: |Data|MedQA|MedBullets| |-|-|-| |Stage2 Only|47.2|40.9| |CodeI/O|49.3|42.5| |CodeI/O++|49.2|42.5| |OMI2|48.1|39.9| |WI|47.8|40.3| |PyEdu|46.2|42.5| |OC-SFT-1|48.0|38.6| [1] What disease does this patient have? A large-scale open domain question answering dataset from medical exams; arXiv:2009.13081 [2] Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions; arXiv:2402.18060 # Q8 > Performance on smaller models We test Qwen 2.5 1.5B as follows: |Data|Avg| |-|-| |Stage2 Only|37.4| |CodeI/O|38.3| |CodeI/O++|38.5| |OMI2|37.7| |WI|37.1| |PyEdu|37.8| |OC-SFT-1|37.4| The results show CodeI/O is still effective, though with smaller gains than in larger models. This suggests smaller models may lack sufficient capacity to fully leverage the reasoning patterns in our datasets.
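The 13-gram overlap check used for the leakage analysis in Q3 can be sketched as follows. This is an illustrative simplification, not the authors' exact pipeline (which follows the cited contamination-measurement protocol):

```python
# Sketch of n-gram-based leakage detection: a benchmark item is flagged
# if it shares any n-token (here whitespace-tokenized) n-gram with a
# training document.

def ngrams(text, n=13):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_leaked(train_doc, bench_item, n=13):
    """True if any n-gram occurs verbatim in both texts."""
    return bool(ngrams(train_doc, n) & ngrams(bench_item, n))
```

As the manual inspection above illustrates, a verbatim 13-gram match can still be benign (shared boilerplate such as Sudoku rules), so a flag from `is_leaked` is a candidate for review, not proof of contamination.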
Summary: In this work, the authors develop CodeI/O and CodeI/O++ to improve the reasoning capabilities of Large Language Models. The proposed CodeI/O approach trains models to predict code inputs and outputs in natural language. Evaluation on a variety of benchmark datasets and base models shows that CodeI/O improves reasoning capabilities even in non-code domains. Additionally, the authors provide extensive ablation studies to justify the design choices when building CodeI/O and CodeI/O++. Claims And Evidence: The main claims in the paper are all supported by extensive experimental evidence across a variety of benchmark tasks and base models. Additionally, all of the claims about building the finetuning dataset are supported by extensive ablation studies. One minor point is that the authors claim "CodeI/O and CodeI/O++ exhibit universal effectiveness across model sizes and architectures," which is not exactly true given that the proposed method does underperform on some base models/tasks. Methods And Evaluation Criteria: I will say that I am less familiar with this research area, but the selection of benchmark datasets and base models seems reasonable to me. Evaluating across multiple different base models provides stronger support for the proposed method, and evaluating on tasks outside of just code reasoning demonstrates that CodeI/O provides improvements to general reasoning performance. Theoretical Claims: N/A -- no theoretical claims are made. Experimental Designs Or Analyses: All of the experiments seemed well designed to me, but I am less familiar with this research area and may not be aware of any weaknesses/flaws. In my view, the main studies are well designed and provide a fair evaluation, and the ablation studies provide important insight into the design of the finetuning datasets and the scaling of the proposed method. Supplementary Material: N/A -- no supplementary material provided. 
Relation To Broader Scientific Literature: Reasoning capabilities are incredibly important to continue to improve LLM performance, and the proposed method provides a strong way to improve LLM reasoning performance across domains. Essential References Not Discussed: I am not familiar enough with this research area to be able to comment on any missing literature. Other Strengths And Weaknesses: Strengths: - S1: CodeI/O and CodeI/O++ provide consistent performance improvements across a variety of benchmark tasks and base models. - S2: The authors provide several strong ablation studies that demonstrate impressive performance scaling and provide insights into the design of the finetuning dataset. - S3: The paper is well written, easy to understand, and has nice figures. Weaknesses: - W1: Most of the base models used in the paper are relatively small, and the proposed method has the smallest improvement in performance on Gemma 2 27B. As such, it is unclear if the results demonstrated in the paper will scale to larger models. Other Comments Or Suggestions: ### Update After Rebuttal The authors provided some additional experiments which show the method still works for larger models. Overall this is a strong work and I recommend acceptance of the paper. Questions For Authors: - Q1: Can the authors attempt to evaluate the proposed method with a larger base model? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the time and effort you spent reviewing our paper, and for recognizing our contributions. Below, we have listed our responses to your questions and comments. # Q1 > The claim "CodeI/O and CodeI/O++ exhibit universal effectiveness …" is not exactly true given that the proposed method does underperform on some base models/tasks. Thanks for this comment, we will change this statement to a more precise one, e.g., “CodeI/O and CodeI/O++ demonstrate performance improvements across models of various sizes and architectures on most benchmarks, although we also observed nearly unchanged or even decreased performance on a small number of tasks.” # Q2 > Can the authors attempt to evaluate the proposed method with a larger base model? Thanks for this suggestion. We have trained a larger model, LLaMA3 70B. The results are as follows: | | Wino-Grande | DROP | GSM8K | MATH | GPQA | MMLU-STEM | LC-O | CRUX-I | CRUX-O | BBH | BBH-ZH | Zebra-Logic | Kor-Bench | Live-Bench | AVG | |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-| | Baseline (Instruction Tuning) | 75.9 | **86.5** | **93.6** | 68.9 | 44.2 | 85.8 | 24.8 | 65.0 | 72.6 | 83.9 | 88.6 | 20.0 | 51.8 | 42.6 | 65.6 | | CodeI/O + Instruction Tuning | **76.3** | 85.8 | 93.1 | **70.1** | **45.3** | **85.9** | **31.4** | **68.8** | **76.1** | **85.9** | **88.7** | **22.6** | **53.8** | **44.1** | **67.4** | The results show that CodeI/O also works well on larger models. Although we see some performance drop on a small set of benchmarks, gains are obtained on most of them, indicating an overall improvement in reasoning ability. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing this additional experiment with a larger model. This is a good result to include in the revised manuscript, however it does not change my evaluation and I will maintain my score. 
--- Reply to Comment 1.1.1: Comment: Thank you for recognizing the additional experimental results we provided, and we appreciate your positive feedback on our paper once again!
Reducing Confounding Bias without Data Splitting for Causal Inference via Optimal Transport
Accept (poster)
Summary: This paper focuses on the causal inference task, specifically in the binary and continuous treatment settings. The authors argue that data sparsity can hinder covariate distribution alignment across different treatment groups, leading to biased outcome predictions. Instead, they push all conditional marginals forward to the marginal distribution, where the former are implemented via the generalized propensity score. Theoretically, they provide a bound for optimization. Extensive experiments are conducted to evaluate the method. Claims And Evidence: N/A Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, the proof of Theorem 4.2 is correct. Experimental Designs Or Analyses: 1. In Tables 1 and 2, the large performance gaps between different baselines seem strange. 2. A visualization of the balance results for mitigating the selection bias is missing. Supplementary Material: This paper has no supplementary material. Relation To Broader Scientific Literature: This paper proposes a new method to help address selection bias in treatment effect estimation, which has wide applications in healthcare, economics, biology, and so on. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths:** 1. The proposed method is straightforward and easy to understand. 2. The writing in this paper is well done, and the prerequisite knowledge is comprehensively provided. **Weaknesses:** 1. Instead of optimizing the traditional balance term between $q_0(x)$ and $q_1(x)$, this paper enforces all conditional marginal distributions $q_t(x)$ to be aligned with the marginal distribution $q(x)$. However, $q_t(x)$ is approximated by the generalized propensity score, which is generally difficult to predict and has high variance. Thus, the effectiveness of this method is questionable. 2. Figure 1 does not highlight the unique technical contribution of this paper, such as the computation of the generalized propensity score. 3. 
In Tables 1 and 2, the performance gaps between different baselines are too large; they are on different scales. Additionally, the in-sample experiment is missing. 4. Why do the results of different models in Table 4 show only minor differences, while in Tables 1 and 2, the differences between models are much larger? 5. A visualization of the balance results is missing, which is crucial because it demonstrates that balance can be achieved without data splitting. Other Comments Or Suggestions: N/A. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We appreciate the reviewer for the valuable comments. We will revise the submission according to the comments and responses. **Q1** A visualization of the balance results is missing. **A1** Thank you for the valuable comments. We have further conducted experiments to visualize the embeddings before and after representation learning based on t-SNE. The results show that our method can reduce the distribution discrepancy caused by the confounding bias. Please kindly refer to https://anonymous.4open.science/r/ICML_figure_commit-3B64. **Q2** Regarding the conditional distribution estimation based on the generalized propensity score. **A2** 1) We exploit the generalized propensity scores to estimate the conditional distribution, which leverages all the training samples for distribution modeling and alignment without data splitting. The data splitting issue becomes even more severe in the continuous treatment setting, since multiple groups are considered and each group receives only a part of the samples, hampering the performance of distribution estimation and confounding bias reduction. Therefore, our method can achieve better performance. The experimental results demonstrate the effectiveness of our method. In addition, Theorem 4.3 also shows that more training samples, i.e., a large $n$, help to achieve a better error bound. 2) Our experimental results demonstrate the effectiveness of our method involving propensity score estimation. In addition, existing studies have demonstrated that propensity scores are helpful for representation learning [a][b]. We design a different approach to leverage propensity scores, i.e., to reduce the discrepancy between conditional and marginal distributions based on propensity scores. **Q3** The computation of the generalized propensity score is missing in Figure 1. **A3** Due to space limitations, we present the details of the computation of the generalized propensity score in Section A of the appendix. 
We will revise Figure 1 to highlight the computation of the generalized propensity score. **Q4** Regarding the in-sample results. **A4** Thank you for the valuable comments. We have added the results of the in-sample experiments on the binary IHDP dataset in the following table. Our method achieves promising performance on the in-sample setting. | | PEHE | MAE | AMSE | |----------|------------------|-----------------|-------------------| | CFR | 1.0462 ± 1.0905 | 0.4966 ± 0.4711 | 1.0437 ± 1.0215 | | GANITE | 8.0017 ± 5.3730 | 5.3945 ± 1.0848 | 13.3972 ± 10.6536 | | DKLite | 5.0756 ± 6.0795 | 0.2252 ± 0.2440 | 5.5372 ± 6.1226 | | CausalOT | 10.2003 ± 4.5611 | 2.7824 ± 1.4760 | 8.2140 ± 9.1621 | | ESCFR | 1.0019 ± 1.6507 | 0.4434 ± 0.5371 | 2.0842 ± 1.6892 | | ORIC | 0.8463 ± 0.7730 | 0.3539 ± 0.3996 | 0.8173 ± 0.7391 | **Q5** The differences of the models are minor in Table 4 while larger in Tables 1 and 2. **A5** In general, the outcome values of the real-world data IHDP and News in Tables 1 and 2 are large, while the outcome values of the simulation data are small. As a result, the differences of the models on real-world data are larger compared with those on simulation data. Similar observations can be drawn from existing studies [c][d]. [a] Counterfactual Regression with Importance Sampling Weights, IJCAI 2019. [b] Counterfactual representation learning with balancing weights, AISTATS 2021. [c] Perfect match: A simple method for learning representations for counterfactual inference with neural networks. arXiv:1810.00656 [d] GANITE: Estimation of individualized treatment effects using generative adversarial nets. ICLR 2018.
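The balance visualization mentioned in A1 can be approximated with a short sketch. Here a 2-D PCA projection stands in for t-SNE, and the `treated`/`control` arrays are synthetic placeholders rather than the paper's learned representations:

```python
import numpy as np

rng = np.random.default_rng(0)
treated = rng.normal(loc=1.0, size=(100, 8))   # toy pre-balancing representations
control = rng.normal(loc=-1.0, size=(100, 8))

# Project all representations onto the top-2 principal components.
X = np.vstack([treated, control])
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
emb = Xc @ vt[:2].T                            # shape (200, 2)

# A large gap between group means along the first component indicates
# residual confounding bias; after balancing, the groups should overlap.
gap = emb[:100, 0].mean() - emb[100:, 0].mean()
```

Scatter-plotting `emb` colored by treatment group (with t-SNE in place of the PCA step) produces figures of the kind linked in A1.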
Summary: This paper proposes an effective algorithm for estimating treatment effects while reducing confounding bias, applicable to both binary and continuous treatments. It employs optimal transport methods to utilize all available samples for estimating confounding bias, thereby mitigating bias and avoiding the decrease in estimation accuracy associated with splitting training samples into smaller groups for distribution alignment. Claims And Evidence: The proposed theorem on the confounding bias bound strongly supports the algorithm's design, and its proof appears valid. Experiments compare the performance of the proposed algorithm with baseline methods in both binary and continuous treatment scenarios. Note that the algorithm's performance advantage diminishes in binary cases and in the limited continuous treatment cases shown in Appendix E. The authors may consider providing further explanations regarding this observation. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to address causal inference under confounding bias. ORIC leverages optimal transport to align distributions without splitting data, overcoming the limitations traditional methods face due to reduced sample sizes; this is a reasonable approach, since distribution estimation relies on sufficient data. The use of neural networks for representation learning and the Sinkhorn algorithm for Wasserstein distance computation is theoretically grounded. Theoretical Claims: The theoretical framework provides an upper bound on the confounding error related to distribution differences characterized by optimal transport, supporting the algorithm's design. Additionally, experiments compare the proposed algorithm with multiple baseline methods, demonstrating its effectiveness. Experimental Designs Or Analyses: I examined the experimental designs and analyses in Section 5 and Appendices E and F. 
For continuous treatments, Synthetic/IHDP experiments ran 100 trials, and News ran 20, sufficient for statistical reliability. Binary treatment experiments on IHDP (100 trials) and News (50 trials) followed similar standards. Comparisons with multiple baselines (e.g., KNN, BART, VCNet, CFRNet) are comprehensive, with appropriate metrics. Sensitivity analysis in Appendix E.2 tests hyperparameters, demonstrating robustness. The ablation study (Table 3) isolates the impact of the Wasserstein component. The design is sound, data generation protocols are clear, and no obvious methodological flaws are apparent. Supplementary Material: The authors have not provided supplementary materials. Relation To Broader Scientific Literature: This article falls within the field of causal effect estimation, particularly regarding the use of distribution alignment to reduce confounding bias. Essential References Not Discussed: The related work and citations in this paper are comprehensive. Other Strengths And Weaknesses: Strengths: S1: The article is well-organized, and the mathematical expressions are precise. S2: The proposed theorem on the confounding bias bound strongly supports the algorithm's design and is proven to be valid. S3: The paper conducts experiments comparing the proposed algorithm with multiple baseline methods, demonstrating its effectiveness. Weaknesses: W1: The advantages of the algorithm over distribution alignment methods using different treatment groups need further comparison and explanation. W2: The effectiveness of the algorithm concerning data splitting issues should be further demonstrated. Experimental results indicate that the performance advantage decreases in binary cases and in limited continuous treatment scenarios. W3: The proposed algorithm introduces significant computational complexity through nested loops to calculate the optimal solution for the Wasserstein distance. The paper should clarify this point. 
Other Comments Or Suggestions: Page 2, Line 67, and Page 6, Line 291: There is an inconsistency in the full name of the proposed algorithm ORIC.
Questions For Authors: 1. I understand that the key to this paper is linking quantified confounding bias to the Wasserstein distance, which relies on several assumptions, such as the ignorability assumption for quantifying confounding bias. How does the algorithm perform when unobserved confounders exist, and the difference between $q_t$ and $q$ cannot quantify confounding bias? 2. The paper uses the Sinkhorn algorithm, which may slow down training. How does it perform on large-scale datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer for the valuable comments. We will revise the submission according to the comments and responses.

**Q1** The performance advantage of the proposed method diminishes in binary cases and limited continuous treatment cases.

**A1** This observation is reasonable. Compared with settings involving more treatment values, in the binary or limited continuous treatment settings more training samples fall into each group, which yields better performance of distribution modeling and alignment. For example, in the binary treatment setting, each group has around half of the samples and can thus achieve good performance. However, with a larger number of treatments, each group receives fewer samples, which tends to decrease the performance.

**Q2** The advantage of the algorithm over distribution alignment methods using different treatment groups.

**A2** Distribution alignment across different treatment groups splits training data into different subpopulations, which reduces the amount of training data and hampers the performance of distribution modeling and confounding bias reduction. The issue becomes more severe in the continuous treatment setting, since many different treatment values are involved and each group receives only a small part of the training samples. Different from them, our method considers all the training samples in each treatment group, which leverages more training data for distribution modeling and alignment. The experimental results demonstrate the advantage of our method, and Theorem 4.3 also shows that more training data, i.e., a large $n$, helps to achieve a better error bound.

**Q3** Regarding the computational complexity of the Wasserstein distance and training on large-scale data.

**A3** The computational complexity of the Sinkhorn algorithm is in $O(n^2d)$, where $n$ and $d$ are the numbers of samples and features. The following table presents the running time results.
Our method achieves moderate time efficiency. For large-scale data, it is feasible to consider optimal transport on mini-batch samples, as shown in [a].

1) Continuous treatment setting on synthetic data $(\beta = 0.25)$

| Methods | Time |
|----------|-------|
| ORIC | 135s |
| VCNet+TR | 23s |
| VCNet | 17s |
| ADMIT | 47s |
| ACFR | 24s |
| DRNet | 26s |
| GPS+MLP | 25s |
| MLP | 18s |
| GPS | 9s |
| BART | 7s |
| KNN | 8s |

2) Binary treatment setting on the IHDP-1000 data

| Methods | Time |
|-----------|-------|
| ORIC | 76s |
| CFRNet | 47s |
| DragonNet | 41s |
| DKLITE | 4s |
| ESCFR | 165s |
| CausalOT | 4s |
| GANITE | 4s |
| BART | 0.2s |
| OLS | 0.2s |
| KNN | 0.3s |

**Q4** Regarding the inconsistency in the full name of the proposed method.

**A4** We are sorry for the confusion. We will revise the submission accordingly.

**Q5** Regarding the existence of unobserved confounders.

**A5** Since we characterize the confounding bias by measuring the discrepancy between $q_t(x)$ and $q(x)$, the ignorability assumption is required. If unobserved confounders exist, the confounding bias cannot be fully captured by considering $q_t(x)$ and $q(x)$ only. We will investigate the situation with unobserved confounders in the future.

[a] Improving mini-batch optimal transport via partial transportation, ICML 2022.
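The $O(n^2 d)$ figure quoted in A3 can be made concrete with a minimal Sinkhorn sketch: building the pairwise squared-distance cost matrix costs $O(n^2 d)$, and each alternating scaling iteration costs $O(n^2)$. This is an illustrative NumPy implementation under assumed toy data, not the authors' code; all names are hypothetical.

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iters=1000):
    """Entropy-regularized OT between histograms a and b with cost matrix M.

    Returns the transport plan P and the transport cost <P, M>.
    """
    K = np.exp(-M / reg)              # Gibbs kernel, shape (n, m)
    u = np.ones_like(a)
    for _ in range(n_iters):          # alternating marginal scaling, O(n*m) each
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return P, float((P * M).sum())

# toy point clouds in R^d: cost-matrix construction is the O(n^2 d) step
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(6, 3)) + 1.0
M = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
M = M / M.max()                       # rescale costs for numerical stability
a, b = np.full(5, 1 / 5), np.full(6, 1 / 6)
P, cost = sinkhorn(a, b, M)
```

After convergence, the row and column sums of `P` match the prescribed marginals `a` and `b`, which is the property the algorithm trades against the entropic smoothing strength `reg`.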
Summary: This paper proposes a novel method for causal effect estimation based on optimal transport. The method reduces the confounding bias without data splitting, which is different from the existing methods that partition training data into multiple groups according to treatments. Theoretical and empirical results are provided to evaluate the performance of the proposed method. Extensive experiments on both binary and continuous treatment settings are conducted.
Claims And Evidence: The claims are well supported by the theoretical analysis and extensive experiments.
Methods And Evaluation Criteria: Benchmark datasets of both binary and continuous treatment settings are used in the experiments, and multiple evaluation metrics are adopted.
Theoretical Claims: The theoretical claims are correct and well presented.
Experimental Designs Or Analyses: The experimental designs and analyses are sound.
Supplementary Material: The implementation details, proofs of the theorems, and experimental details in the supplementary material have been reviewed.
Relation To Broader Scientific Literature: The paper makes contributions to causal effect estimation, especially confounding bias reduction in binary and continuous treatment settings. The proposed method can be applied in different areas, such as policy making and healthcare.
Essential References Not Discussed: As far as I know, the essential references are well discussed and cited.
Other Strengths And Weaknesses: Strengths
1. The paper proposes a novel method to reduce confounding bias without data splitting, which is an under-explored and interesting topic.
2. Extensive theoretical analyses regarding confounding bias, outcome estimation error, and effect estimation error are provided.
3. The experiments are sufficient. Both binary and continuous treatment settings are considered, multiple evaluation metrics are adopted, and many baseline methods are compared.
4. The paper is well organized.
Weaknesses
1. The balanced representation learning relies on the Wasserstein distance. It may bring extra computation to solve the optimal transport problem.
2. The analysis of effect estimation error in Section B only considers the binary treatment setting. Although I understand that studies of continuous treatment usually consider potential outcome estimation instead of effect estimation, it would be better to analyze the effect estimation error of the continuous treatment setting.
Other Comments Or Suggestions: 1. It seems that the analysis in Section B is related to the binary treatment setting. Is it feasible to derive similar results regarding the continuous treatment setting? 2. In Section E.1, I suggest replacing the notation $\mathcal{R}$ by $\mathbb{R}$, which is consistent with the main part of the submission.
Questions For Authors: 1. It would be better to evaluate the efficiency in terms of the running time results. 2. Is it possible to extend the proposed method to more complex settings, such as bundle or graph treatments? 3. Based on Section B, is it feasible to derive some theoretical results regarding the effect estimation error of the continuous treatment setting?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer for the valuable comments. We will revise the submission according to the comments and responses.

**Q1** Regarding the effect estimation error of the continuous treatment setting.

**A1** We analyze the effect estimation error of the continuous treatment setting below. Following [a], we define the effect estimation error $e_{\tau}^{G}$ in the continuous treatment setting:
\begin{align} e_{\tau}^{G} = E_{t \sim p(t | t \neq 0)} E_{x \sim q(x)}[ l( h_{t}(x) - h_{0}(x), \mu_{t}(x) - \mu_{0}(x))] \end{align}
Following a similar procedure to Appendix B, we have:
$e_{\tau}^{G} = E_{t \sim p(t | t \neq 0)} E_{x \sim q(x)}[ l( h_{t}(x) - h_{0}(x), \mu_{t}(x) - \mu_{0}(x))]$
$\leq E_{t \sim p(t | t \neq 0)} [E_{x \sim q(x)}[ l( h_{t}(x), \mu_{t}(x))] + E_{x \sim q(x)} [ l( h_{0}(x), \mu_{0}(x))]]$
$= E_{t \sim p(t | t \neq 0)}[ \varepsilon_{q}( h_{t}) + \varepsilon_{q}( h_{0})]$
$= E_{t \sim p(t)}[ \varepsilon_{q}( h_{t})]$
$\leq \int_{\mathcal{T}} \varepsilon_{q_{t}}( h_{t}) p(t) dt + \int_{\mathcal{T}} \mathcal{W}(c, q_{t}, q) p(t) dt,$
where the first inequality holds due to the triangle inequality property, and the second inequality holds because of Eq. (12) in the paper.

**Q2** Replace $\mathcal{R}$ with $\mathbb{R}$ in Section E.1.

**A2** Thank you for the valuable suggestion. We will revise the submission accordingly.

**Q3** Regarding the running time results.

**A3** The following table presents the running time results. Our method achieves moderate time efficiency.
1) Continuous treatment setting on synthetic data $(\beta = 0.25)$

| Methods | Time |
|----------|-------|
| ORIC | 135s |
| VCNet+TR | 23s |
| VCNet | 17s |
| ADMIT | 47s |
| ACFR | 24s |
| DRNet | 26s |
| GPS+MLP | 25s |
| MLP | 18s |
| GPS | 9s |
| BART | 7s |
| KNN | 8s |

2) Binary treatment setting on the IHDP-1000 data

| Methods | Time |
|-----------|-------|
| ORIC | 76s |
| CFRNet | 47s |
| DragonNet | 41s |
| DKLITE | 4s |
| ESCFR | 165s |
| CausalOT | 4s |
| GANITE | 4s |
| BART | 0.2s |
| OLS | 0.2s |
| KNN | 0.3s |

**Q4** Is it possible to extend the proposed method to more complex settings, such as bundle or graph treatments?

**A4** Thank you for the valuable comments. In general, bundle or graph treatments remain open problems. Inspired by the assumption in networked interference [b][c], it is feasible to aggregate the information of bundle or graph treatments into a continuous treatment value, so that the proposed method can be employed. We will investigate this challenging problem in the future.

[a] Estimating heterogeneous treatment effects: Mutual information bounds and learning algorithms, ICML 2023.
[b] Identification and estimation of treatment and interference effects in observational studies on networks, JASA 2021.
[c] Learning individual treatment effects under heterogeneous interference in networks, TKDD 2024.
Summary: This paper extends CFRNet to use all samples for each treatment when computing loss functions. Thus, it fits the continuous treatment setting better as it does not suffer from the sample splitting problem. However, this is at the cost of modeling propensity scores and density estimation for $q(x)$ and $q_t(x)$. The theoretical results extend those in CFRNet to their setting without sample splitting. Experiments show the proposed method leads to smaller errors in PEHE, MAE and AMSE.
Claims And Evidence: See below.
Methods And Evaluation Criteria: The method is an extension of the loss function proposed by CFRNet to continuous treatment. 1. The main concern with the proposed method is the computational cost of the Wasserstein distance $\mathcal{W}(c_{\phi},\hat{q}_t, \hat{q})$ and the overhead of density estimation to obtain $\hat{q}_t$ and $\hat{q}$. For the Wasserstein distance, basically for each treatment $t$, each plan $\pi^t$ is an $n$-by-$n$ kernel. I am not sure how many values of $t$ and plans there are. Can the authors discuss the time and space complexity for computing the Wasserstein distance?
Theoretical Claims: 1. The authors claim $q_t(x) > 0$ for all $x$ given Asm 3.3, which seems not true: if $p(x)=0$, then $q_t(x)=0$ regardless of $p(t|x)$. 2. Although it can be true intuitively, the claim that a small $\mathcal{W}(c,q_t,q)$ leads to poor outcome prediction performance due to losing information for outcome prediction is not well supported by theoretical analysis in this work. 3. Theorem 4.3 is an extension of Theorem 1 in [1] to continuous treatment, which says the outcome estimation error is upper bounded by the sum of the estimation error on factuals and the distance between the treated and controlled groups in the representation space. I guess the main difference is that Theorem 4.3 allows soft $p(t)$, that is, for each $x$ you estimate the propensity score $p(t|x)$ and plug it into the upper bound.
Could the authors clarify the difference between their contribution and existing work in terms of Theorem 4.3?
[1] Shalit, Uri, Fredrik D. Johansson, and David Sontag. "Estimating individual treatment effect: generalization bounds and algorithms." International conference on machine learning. PMLR, 2017.
Experimental Designs Or Analyses: The experiments include both continuous and binary treatment. 1. For the binary treatment case, I think the proposed method is very similar to the CFRNet with Wasserstein distance, except it uses a soft propensity score to compute the error and the Wasserstein distance. I wonder how it can be much better than CFRNet in this case, especially considering the error of propensity score models and density estimation models can lead to error in counterfactual outcome prediction. 2. It would be better to add an experiment to show how the error in propensity score models and density estimation models impacts the final performance of the proposed method ORIC.
Supplementary Material: No
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer for the valuable comments. We will revise the submission according to the comments and responses. **Q1** The time and space complexity for computing the Wasserstein distance. **A1** 1) In practice, to avoid heavy computation, we consider a set $\widehat{\mathcal{T}}$ including sampled treatment values evenly distributed in the continuous treatment space $\mathcal{T}$. The number of treatment values is set as $[20, 100]$, as shown in Figure 3. 2) For each $t \in \widehat{\mathcal{T}}$, we apply the Sinkhorn algorithm to compute the Wasserstein distance. Let $n$ and $d$ be the numbers of samples and features, the time complexity is in $O(n^2d)$, and the space complexity is in $O(n^2 + nd)$. 3) We report the running time results in **A3** to Reviewer L5M9. Our method has moderate time efficiency. **Q2** Regarding $q_t(x)>0$. **A2** The condition $p(x) > 0$ is implicitly embedded in Asm. 3.3. This is because the conditional density could be rewritten as $p(t|x) = \frac{p(x,t)}{p(x)}$, where we assume $0 < p(t|x) < 1$. If $p(x) = 0$, then $p(t|x)$ would be undefined. **Q3** Regarding the claim that a small $\mathcal{W}(c,q_t,q)$ leads to poor outcome prediction performance. **A3** Only optimizing the Wasserstein distance without incorporating outcome prediction loss can easily lead to balanced but trivial latent representations, such as mapping all samples to a single point. This is known as the over-balancing issue, as stated in [a][b]. **Q4** Difference between Theorem 4.3 and Theorem 1 in [c]. **A4** 1) [c] reduces the discrepancy between the subpopulations $q_1(x)$ and $q_0(x)$, each of which is modeled by the samples in one group. Different from it, we reduce the discrepancy between the marginal distribution $q(x)$ and the conditional distribution $q_t(x)$, which is modeled by all the samples equipped with the propensity scores. 
This difference is significant since data splitting is avoided and more data are leveraged for conditional distribution modeling. The data splitting issue becomes even more severe in the continuous treatment setting since multiple groups are considered, and each group receives only a part of the samples, hampering the performance of distribution estimation and confounding bias reduction. 2) [c] applies IPM to measure the confounding bias, while our analysis applies the Wasserstein discrepancy to measure the confounding bias. Although IPM can be implemented as the Wasserstein-1 distance, the Wasserstein discrepancy based on different underlying cost functions cannot be represented as IPM. **Q5** Advantage of the proposed method compared with CFRNet. **A5** 1) CFRNet splits training data into two groups to estimate $q_1(x)$ and $q_0(x)$. Different from it, our method leverages all the training data to estimate $q_1(x)$ and $q_0(x)$. Therefore, more training data are involved in conditional distribution modeling. Theorem 4.3 also shows that more training samples, i.e., a large $n$, helps to achieve a better error bound. 2) CFRNet reduces the discrepancy between $q_1(x)$ and $q_0(x)$. Different from it, we aim to reduce the discrepancy between $q_1(x)$ and $q(x)$, and the discrepancy between $q_0(x)$ and $q(x)$, which can be naturally applied into the continuous treatment setting without considering pairs of different treatments and data splitting. 3) Existing studies have demonstrated that propensity scores are helpful for representation learning [d][e]. We design a different approach to leverage propensity scores, i.e., to reduce the discrepancy between conditional and marginal distributions based on propensity scores. **Q6** How the error in propensity score models and density estimation models impact the final performance of the proposed method. 
**A6** Following existing works on optimal transport such as [f], we adopt $q(x)= \frac{1}{n}$ to avoid density estimation. We compare the results using the ground-truth and predicted propensity scores. Since the ground-truth propensity score is unknown, we modified the IHDP dataset to assign treatments according to it. Our method can achieve comparable results even with errors in the propensity score models.

| | PEHE | MAE | AMSE |
|---------------|-----------------|-----------------|-----------------|
| Ground-truth ps | 1.4624 ± 0.1222 | 0.1662 ± 0.1255 | 2.1058 ± 0.1526 |
| Estimated ps | 1.3400 ± 0.0800 | 0.2012 ± 0.1477 | 1.9894 ± 0.1329 |

[a] On learning invariant representations for domain adaptation, ICML 2019.
[b] Counterfactual representation learning with balancing weights, AISTATS 2021.
[c] Estimating individual treatment effect: generalization bounds and algorithms, ICML 2017.
[d] CounterFactual Regression with Importance Sampling Weights, IJCAI 2019.
[e] Counterfactual representation learning with balancing weights, AISTATS 2021.
[f] Optimal transport for domain adaptation, TPAMI 2017.
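The no-splitting idea defended in this rebuttal admits a short sketch: every sample contributes to the conditional distribution $\hat{q}_t$ through a propensity-derived weight, while $\hat{q}$ is the uniform empirical distribution $\frac{1}{n}$, and the two weighted histograms over the same $n$ samples are aligned with Sinkhorn. The toy propensity model and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sinkhorn_cost(a, b, M, reg=0.1, n_iters=1000):
    """Entropic-OT cost between two histograms over the same support."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return float((P * M).sum())

rng = np.random.default_rng(1)
n, d = 8, 4
X = rng.normal(size=(n, d))                 # ALL n samples are kept (no splitting)
propensity = rng.uniform(0.1, 0.9, size=n)  # stand-in for an estimated p(t | x_i)
a = propensity / propensity.sum()           # weights defining q_t over all samples
b = np.full(n, 1.0 / n)                     # empirical marginal q(x) = 1/n
M = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
M = M / M.max()                             # rescale costs for numerical stability
bias_term = sinkhorn_cost(a, b, M)          # Wasserstein surrogate of confounding bias
```

Because both histograms live on the same $n$ support points, no sample is discarded; only the weights change with $t$, which is exactly the contrast with group-splitting methods drawn in A5.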
Modeling All-Atom Glycan Structures via Hierarchical Message Passing and Multi-Scale Pre-training
Accept (poster)
Summary: The paper introduces a hierarchical GNN for all-atom glycan modeling supported by a multi-scale pre-training strategy.
Claims And Evidence: The paper is well-written and easy to understand.
Methods And Evaluation Criteria: Yes
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: I notice that when a similar pre-training strategy is applied to RGCN (as PreRGCN), its performance actually gets worse. Could you clarify why this pre-training fails to help—or even harms—RGCN, while it improves the proposed model?
Supplementary Material: Yes, I reviewed the visualization appendix and additional discussion on pre-training and efficiency.
Relation To Broader Scientific Literature: The paper applies techniques in graph learning like hierarchical message passing and multi-scale pre-training to the field of glycans.
Essential References Not Discussed: I did not identify any.
Other Strengths And Weaknesses: Although glycans have unique features and the field is new, I am still worried about the novelty. Hierarchical message passing and multi-scale pre-training have already been explored in graph representation learning. I hope the authors can provide a clearer explanation of what is truly new beyond applying existing GNN methods to the glycan domain.
Other Comments Or Suggestions: I'm also concerned about the efficiency trade-off. The authors note that their all-atom approach is about 20% slower than the baseline methods, so I'm wondering how this slowdown affects broader usability or scalability for large-scale glycan datasets.
Questions For Authors: Please refer to previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your valuable comments! We respond to your concerns below:

>**Q1: Why is the pre-training method less helpful to RGCN than to GlycanAA?**

We deem that **the smaller benefit of pre-training to RGCN mainly owes to its lower model capacity**. Compared to GlycanAA, which models both monosaccharide-level and atomic-level glycan structures, RGCN only models monosaccharide-level structures and thus has lower capacity. As shown in the figure at [this URL](https://anonymous.4open.science/r/GlycanAA-73E6/pretrain_caption.png), the lower capacity of RGCN leads to its inferior pre-training performance (i.e., lower accuracy and higher cross-entropy loss on masked monosaccharide prediction) against GlycanAA. These shortcomings of RGCN (i.e., lower model capacity and inferior pre-training performance) make it benefit less on downstream tasks after pre-training, which is consistent with previous findings in other domains [a,b,c].

[a] A simple framework for contrastive learning of visual representations. ICML, 2020.
[b] Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL, 2019.
[c] Strategies for pre-training graph neural networks. ICLR, 2020.

>**Q2: What is truly new beyond applying existing GNN methods to the glycan domain?**

We argue that the proposed methods are **clearly motivated** and **carefully designed** to **handle the complexity of modeling atomic-level glycan structures**, instead of simply applying existing techniques to the glycan domain.
(1) For the model part, *our model design is inspired by the fact that the backbone structure of a glycan mainly determines its biological properties, and the atomic-level structures of individual monosaccharides provide auxiliary information.* Following this principle, **we perform three steps of message passing to progressively enhance global backbone structural features with local atomic structural features**, and the enhanced backbone features are finally read out for the glycan representation. (2) For the algorithm part, *our pre-training algorithm is motivated by the fact that understanding the interactions between monosaccharides and their corresponding atoms is important for effective hierarchical learning, while only performing supervised learning on downstream tasks cannot guarantee acquiring such interactions.* Therefore, in our pre-training algorithm, **we encourage the model to learn the interactions between monosaccharides and atoms.** Specifically, we first perform an interactive masking process where each selected monosaccharide is masked along with its corresponding atoms, and, on such a masked glycan, the model learns to recover masked monosaccharides with the hints brought by the recovery of some of their atoms.

>**Q3: How does the extra cost of GlycanAA affect its usage on large-scale glycan datasets?**

Table A: Efficiency and effectiveness comparison between RGCN and GlycanAA on GlycanDomain-60K.

|Model|Total processing time (s)|Macro-F1|
|:----:|:----:|:----:|
|RGCN|436.70|0.282|
|GlycanAA|489.98|0.425|

This question is great. To study the usability of GlycanAA on large-scale glycan datasets, we construct the GlycanDomain-60K dataset. We first collect all existing glycans deposited in the GlyTouCan database whose structures are complete (GlyTouCan is a regularly updated glycan database containing all discovered glycans), summing up to 60,152 samples.
We then annotate each of them with a domain label (Eukarya, Virus, Bacteria or Archaea) based on their nearest neighbor in the domain classification dataset of the GlycanML benchmark, where the nearest neighbor is determined by a motif matching algorithm depicted in [d]. We name this large-scale glycan dataset with domain annotations as GlycanDomain-60K. On this dataset, we compare the efficiency and effectiveness of RGCN (the most competitive monosaccharide-level baseline) and GlycanAA (our all-atom-level encoder). In specific, we respectively use the glycan domain classifier with RGCN and GlycanAA backbones to predict the domain labels of all glycans in GlycanDomain-60K, where both models are trained on the domain classification task of GlycanML. In Table A, we report the total processing time and the Macro-F1 score of predictions for these two models, where the test is done on a machine with 48 CPU cores and 1 NVIDIA GeForce RTX 4090 GPU under the batch size 256. According to the results, *GlycanAA achieves a 50% higher Macro-F1 score than RGCN with less than 1 minute more processing time.* Therefore, **GlycanAA is applicable in processing large-scale glycan datasets, which achieves outstanding performance with little efficiency trade-off.** In the revised paper, we will add this analysis to better illustrate the usability and scalability of GlycanAA on large-scale glycan datasets. &emsp; [d] A motif-based analysis of glycan array data to determine the specificities of glycan-binding proteins. Glycobiology, 2010. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. Some of my concerns have been addressed. However, I remain unconvinced for the Q1. Your response indicates that RGCN—pretrained only at the monosaccharide level—suffers in capacity. However, [1] suggests that motif-level pretraining can boost downstream performance rather than limit it. [1] Zhang, Zaixi, et al. 
"Motif-based graph self-supervised learning for molecular property prediction." Advances in Neural Information Processing Systems 34 (2021): 15870-15882.

--- Reply to Comment 1.1.1: Comment: Dear reviewer,

Thanks for your feedback. We would like to clarify that **the PreRGCN model studied in this work is pre-trained using a masked motif (monosaccharide) prediction task, which is in principle different from the autoregressive motif generation task used for pre-training in [1].** By analogy to the NLP domain, the BERT models based on masked language modeling (similar to our pre-training method) and the GPT models based on autoregressive generation (similar to the pre-training method of [1]) are two different kinds of models. In this work, we show that, **for the glycan domain, mask-modeling-like pre-training benefits atomic-level modeling (PreGlycanAA) more than monosaccharide-level modeling (PreRGCN), while monosaccharide-level modeling still benefits (PreRGCN outperforms the non-pretrained RGCN on 9 out of 11 downstream tasks).** Of course, autoregressive-generation-like pre-training is a promising way to boost glycan modeling at both the atomic and monosaccharide levels. We leave this exploration as important future work.

Best, Authors

[1] Zhang, Zaixi, et al. "Motif-based graph self-supervised learning for molecular property prediction." Advances in Neural Information Processing Systems 34 (2021): 15870-15882.
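The interactive masking step described earlier in this rebuttal (masking a monosaccharide together with part of its atoms, so that the recovered atoms serve as hints for the masked monosaccharide) can be sketched as follows. This is a toy illustration under an assumed data layout; the function name and the mask ratios are hypothetical, not the authors' implementation.

```python
import numpy as np

def interactive_mask(n_monos, atoms_of_mono, mask_ratio=0.15,
                     atom_keep_ratio=0.5, rng=None):
    """Select monosaccharides to mask together with part of their atoms.

    atoms_of_mono: list mapping each monosaccharide index to its atom indices.
    Returns (masked monosaccharide indices, masked atom indices); the atoms
    that are kept act as recovery 'hints' for their masked monosaccharide.
    """
    if rng is None:
        rng = np.random.default_rng()
    k = max(1, int(round(mask_ratio * n_monos)))
    masked_monos = rng.choice(n_monos, size=k, replace=False)
    masked_atoms = []
    for m in masked_monos:
        atoms = np.asarray(atoms_of_mono[m])
        n_mask = int(round((1 - atom_keep_ratio) * len(atoms)))
        masked_atoms.extend(rng.choice(atoms, size=n_mask, replace=False))
    return masked_monos, np.asarray(masked_atoms, dtype=int)

# toy glycan: 4 monosaccharides, each owning a few atom indices
atoms_of_mono = [[0, 1, 2], [3, 4], [5, 6, 7, 8], [9, 10]]
monos, atoms = interactive_mask(4, atoms_of_mono, mask_ratio=0.5,
                                rng=np.random.default_rng(0))
```

A model would then replace both index sets with mask tokens and be trained to predict the masked monosaccharide types, optionally after first recovering some of the masked atoms.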
Summary: The paper introduces GlycanAA, a novel framework for All-Atom Glycan Modeling using hierarchical message passing and self-supervised pretraining. It models glycans as heterogeneous graphs where atom nodes represent local structures and monosaccharide nodes represent the global backbone structure. GlycanAA employs Hierarchical Message Passing to capture atomic-level interactions and glycosidic bonds in a unified framework. The pre-trained model, PreGlycanAA, uses Multi-Scale Mask Prediction for self-supervised learning, enhancing representation power.
Claims And Evidence: Yes. The claims are supported by extensive benchmarking on the GlycanML dataset, as shown in Table 1.
Methods And Evaluation Criteria: Yes. The GlycanML benchmark is suitable for evaluating glycan properties.
Theoretical Claims: No. This paper is not a theoretical paper.
Experimental Designs Or Analyses: Yes. The experimental design is valid, with clear definitions of training, validation, and testing protocols. The ablation studies demonstrate the importance of hierarchical message passing and the superiority of monosaccharide-wise readout.
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: Builds on previous monosaccharide-level GNNs and small-molecule encoders. Integrates self-supervised learning, commonly used for proteins and small molecules, into glycan modeling.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths: 1. This paper introduces a new way of modeling glycans as heterogeneous graphs. 2. The model effectively leverages both local atomic information and global structural information. 3. The multi-scale mask prediction effectively captures glycan dependencies. Weaknesses: 1. Only uses glycosidic bonds for modeling backbone structures, potentially missing other structural features. Have you considered other graph construction methods? How are the graph edges obtained, e.g., based on distances? 2.
It would be better to add a figure presenting glycans and their importance. 3. Overhead of All-Atom Modeling: Computationally more expensive than monosaccharide-level modeling. Though the efficiency comparison is presented in Table 2, it would be better to compare it with the baselines. In Table 1, are the results tested by yourselves? If yes, it would be helpful to provide comparisons of training and testing time.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Can the model handle more complex glycans with diverse glycosidic linkages? 2. What happens if structural features (e.g., torsion angles) are included in the model?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments and constructive suggestions! We respond to your questions below:

>**Q1: Can the model handle more complex glycans with diverse glycosidic linkages?**

We note that **the proposed GlycanAA model can handle any glycan, no matter how complex its structure is**. Basically, for a given glycan, GlycanAA extracts each of its monosaccharides as a node in the backbone-level graph, and, for each monosaccharide, its fine-grained atomic-level structure is further modeled by an atomic graph; GlycanAA constructs relational edges between different monosaccharides to capture glycosidic linkages, where **it uses 84 types of relations to model all possible glycosidic bonds that connect atoms at different sites with different stereochemical configurations**. Glycans with complex structures can be well modeled in this way. For example, starch is a complex kind of glycan composed of hundreds of glucoses (monosaccharide units) and thousands of $\alpha$1-4 and $\alpha$1-6 glycosidic bonds. By using the proposed backbone- and atomic-level graph modeling and multi-relational glycosidic bond modeling approaches, GlycanAA can well capture (1) the local structure within each glucose unit and (2) the global structures of starch formed by $\alpha$1-4 glycosidic bonds for its linear parts and $\alpha$1-6 glycosidic bonds for its branching parts.

>**Q2: What happens if structural features are included in the model?**

Table A: Performance comparison between GlycanAA and GlycanAA-torsion on taxonomy prediction tasks. The Macro-F1 score for each task and the mean Macro-F1 score over all tasks are reported.
|Model|Domain|Kingdom|Phylum|Class|Order|Family|Genus|Species|Mean Macro-F1|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|GlycanAA|0.642|0.683|0.484|0.429|0.291|**0.221**|**0.198**|**0.157**|0.388|
|GlycanAA-torsion|**0.651**|**0.687**|**0.486**|**0.437**|**0.302**|0.215|0.190|0.154|**0.390**|

This question is great. First, we claim that **GlycanAA has a generic model framework which can easily incorporate various structural features**. Here, we take the featurization of torsion angles of glycosidic bonds as an example. Specifically, by using the Carbohydrate Builder of GLYCAM, we obtain the 3D conformation of each glycan in the taxonomy prediction dataset. For each glycosidic bond in a glycan structure, it defines two torsion angles $\phi$ and $\psi$, as shown in the figure at [this URL](https://anonymous.4open.science/r/GlycanAA-73E6/torsion_caption.png). To include these two torsion angles in GlycanAA, we respectively compute their sine and cosine values and map the resulting four values to the hidden space for message passing. We name this model variant *GlycanAA-torsion*. In Table A, we compare the performance of GlycanAA and GlycanAA-torsion on eight taxonomy prediction tasks. According to the results, GlycanAA-torsion outperforms GlycanAA on 5 tasks with at most 210 taxonomy categories, while GlycanAA-torsion is inferior on 3 tasks with at least 415 taxonomy categories. These results demonstrate that including torsion angle features can enhance the model's ability to fit the data, but it can also make the model more prone to overfitting, especially in complex tasks. **We will include this study in the revised paper to inspire more future work on glycan structure modeling**, e.g., constructing distance-induced glycan graphs based on the vicinity of atoms and monosaccharides in 3D glycan conformations.
**We will also include the figure presenting glycosidic torsion angles in the revision for better understanding of glycan structures.** >**Q3: The selection of baseline for efficiency study.** We clarify that, in the efficiency study, we select the best-performing baseline (without pre-training), i.e., RGCN, for comparison with GlycanAA. Compared to RGCN, GlycanAA performs better on all 11 benchmark tasks at a moderate 20% increase in computational cost, demonstrating that **GlycanAA achieves remarkable performance gains against the most competitive baseline under acceptable efficiency trade-offs**. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. Most of my questions are tackled.
Summary: This paper proposes a hierarchical graph model for atom-level glycan modeling. It employs self-supervised learning to enhance the model's capability. The self-supervised learning framework uses multi-scale mask prediction as its task. Subsequently, the pre-trained model is utilized for downstream tasks. The hierarchical graph network effectively models atom-level glycan structures. Through this pre-training and fine-tuning process, the proposed model surpasses previous state-of-the-art methods. Claims And Evidence: The submission includes claims that are supported by clear and convincing evidence. Experimental results and analyses validate these claims, demonstrating the effectiveness of the proposed hierarchical graph model. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the application. Theoretical Claims: This paper does not propose theoretical claims. Experimental Designs Or Analyses: The experimental design makes sense. However, an ablation study focusing on the proposed hierarchical graph during the pre-training stage could further demonstrate the paper's effectiveness. For instance, comparing the loss curves with and without the atom-level graph would provide valuable insights. Supplementary Material: I have reviewed all of the supplementary material. Relation To Broader Scientific Literature: No Essential References Not Discussed: The study titled [ProNet'22] employs hierarchical graph networks for protein 3D modeling. This paper compares the proposed method with GearNet, which was originally designed for protein 3D structures. Therefore, the authors should consider discussing [ProNet'22] and potentially using it as a baseline for further comparison. ProNet'22: Learning Hierarchical Protein Representations via Complete 3D Graph Networks Other Strengths And Weaknesses: No. Other Comments Or Suggestions: No. 
Questions For Authors: See the "Experimental Designs Or Analyses" and "Essential References Not Discussed". Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Appreciate your insightful comments and valuable suggestions! We respond below: >**Q1: An ablation study focusing on the proposed hierarchical graph during the pre-training stage is recommended.** This suggestion is great. By removing the atom-level modeling part, the obtained model variant of GlycanAA essentially performs relational message passing among monosaccharides, which is basically RGCN. Therefore, we compare the pre-training performance of GlycanAA and RGCN by using the proposed mask prediction algorithm, where the monosaccharide mask ratio is set to 0.3 for both models. In the figure of [this URL](https://anonymous.4open.science/r/GlycanAA-73E6/pretrain_caption.png), we present the accuracy and cross entropy loss curves of pre-training for these two models. According to the results, compared to RGCN, GlycanAA performs clearly better in pre-training with higher accuracy and lower cross entropy loss, thanks to its higher model capacity (i.e., modeling both monosaccharide-level and atom-level glycan structures). By checking the benchmark results in Table 1 of the paper submission, we can observe that the pre-trained GlycanAA (i.e., PreGlycanAA in the table) achieves clearly larger performance gains on downstream tasks after pre-training, compared to the pre-trained RGCN (i.e., PreRGCN in the table). This correlation between higher model capacity, higher pre-training performance and larger performance gains on downstream tasks is also reported in other domains [a,b,c]. We will add this study to the revised paper version, so as to give more insight into pre-training glycan representations. &emsp; [a] A simple framework for contrastive learning of visual representations. ICML, 2020. [b] Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL, 2019. [c] Strategies for pre-training graph neural networks. ICLR, 2020.
> **Q2: ProNet is a related work, which should be discussed and compared with.**

Table A: Performance comparison between ProNet, GearNet, GearNet-Edge and GlycanAA on benchmark tasks. The best and second-best results are denoted by **bold** and *italic*, respectively.

|Model|Domain|Kingdom|Phylum|Class|Order|Family|Genus|Species|Immunogenicity|Glycosylation|Interaction|Weighted Mean Rank|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|ProNet|0.627|*0.590*|*0.438*|0.380|0.242|0.192|0.146|0.128|*0.778*|*0.930*|*0.252*|*2.31*|
|GearNet|0.471|0.577|0.395|*0.389*|0.256|0.189|0.165|0.136|0.740|0.892|0.248|3.81|
|GearNet-Edge|*0.628*|0.573|0.396|0.384|*0.262*|*0.200*|*0.177*|*0.140*|0.768|0.909|0.250|2.88|
|GlycanAA|**0.642**|**0.683**|**0.484**|**0.429**|**0.291**|**0.221**|**0.198**|**0.157**|**0.792**|**0.950**|**0.288**|**1.00**|

&emsp; Thanks for pointing this out. ProNet [d] is a representative model for protein structure modeling, which simultaneously captures the amino-acid-level, backbone-level and all-atom-level structures of a protein. In its implementation, ProNet passes the structural features at higher resolution (i.e., backbone-level and all-atom-level features) to the structural features at lower resolution (i.e., amino-acid-level features), and the graph convolution at lower resolution is biased by the features passed from higher resolution. To investigate such a modeling approach in the glycan domain, we implement a ProNet for glycan modeling which follows the original architecture with four interaction blocks, and, in each interaction block, atom-level features are passed to monosaccharide-level features to bias the graph convolution operation. In Table A, we compare this ProNet with GearNet, GearNet-Edge and GlycanAA on benchmark tasks. According to the results, ProNet outperforms GearNet and GearNet-Edge in terms of weighted mean rank, while it is inferior to GlycanAA on all benchmark tasks.
This result again demonstrates the effectiveness of the proposed hierarchical relational message passing scheme in GlycanAA, which well captures different kinds of dependencies within a glycan. In the revised paper, we will supplement the above discussion and comparison for the interests of a broader audience. &emsp; [d] Learning Hierarchical Protein Representations via Complete 3D Graph Networks. ICLR, 2023.
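For intuition, the cross-level biasing idea discussed above can be sketched roughly as follows (the shapes, mean pooling, and single tanh convolution are illustrative assumptions we made for the sketch, not ProNet's or GlycanAA's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 4, 8                                       # monosaccharide nodes, feature dim

# Each monosaccharide carries its own atom-level graph; here we keep only
# the atom features and mean-pool them per monosaccharide.
atom_feats = [rng.normal(size=(rng.integers(5, 12), d)) for _ in range(M)]
mono_feats = rng.normal(size=(M, d))              # monosaccharide-level features
adj = (rng.random((M, M)) < 0.5).astype(float)    # glycosidic-bond adjacency

atom_bias = np.stack([a.mean(axis=0) for a in atom_feats])  # (M, d) pooled atoms
h = np.tanh(adj @ (mono_feats + atom_bias))       # lower-level convolution biased
                                                  # by higher-resolution features
```

The key point is only the information flow: pooled atom-level features enter as an additive bias before the monosaccharide-level graph convolution.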
Linear Contextual Bandits With Interference
Accept (poster)
Summary: This paper is the first to study contextual bandits with interference. The authors propose a linear contextual bandit framework and introduce an algorithm called LinCB to address the regret minimization/estimation problem under this framework. Experimental results based on the MovieLens dataset further demonstrate the effectiveness of the proposed methods. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: I have reviewed some of the proofs in the appendix. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Some references might need to be discussed. For example, in Shiliang Zuo's Federated Multi-Armed Bandits, they consider a setting where the number of agents varies over time, which is similar to the setting considered by the authors. Other Strengths And Weaknesses: Strengths: 1. This paper proposes the first study on contextual bandits with interference. 2. The proofs I have checked do not have any obvious flaws. 3. The authors consider both the regret minimization and estimation problems. 4. The experimental results are based on a real-world dataset. Weaknesses: 1. I believe there are issues with the discussion of references in this paper. In line 86, the cited work Dubey et al., 2020 does not belong to the MAMAB problem; rather, it falls under a multi-agent (kernelized) contextual bandits problem. If, as the authors claim, Dubey 2020 studies interference-related problems, then that paper might be the first to investigate contextual bandits with interference. This contradicts the authors' assertion that their work is the first interference-related paper, which leaves me somewhat confused. How does the learning framework in that paper differ from the one in this work? 2. The authors have not clearly discussed the distinction between multi-agent bandits and bandits with interference.
IMHO, most federated/distributed/MA bandit studies focus on leveraging the communication topology to design communication algorithms that accelerate the learning process. In contrast, interference bandits (e.g., [Jia et al., 2024; Agarwal et al., 2024]) focus more on learning the strength of interference between different nodes. The authors could add a discussion on this aspect. 3. The authors assume that the interference strength matrix $W_t$ is known. Based on my understanding of previous interference papers, such as Jia et al. 2024, Agarwal et al. 2024, and Leung et al. 2024, they do not assume that the interference strength between agents is known; instead, their task is more focused on indirectly learning this interference strength. Could you discuss references that consider a similar setting? I also look forward to the authors adding more discussion on the scenario where $W_t$ is unknown. 4. Theorem 4.2 refers to Assumptions A.1 - A.3, which the authors have placed in the appendix. I suggest that the authors include these assumptions (and the related discussion) in the main text. 5. A typical regret upper bound often includes the time horizon $T$, whereas this paper only provides an upper bound in terms of $\bar{N}_T$. The authors could add a remark discussing the relationship between the given upper bound and a more conventional upper bound that explicitly includes $T$. Other Comments Or Suggestions: I find this to be an interesting paper and am inclined to accept it. I look forward to the authors' responses to my questions. ## update after rebuttal: The authors have addressed my concerns, so I recommend acceptance. Questions For Authors: See the S/W section Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your thoughtful questions and the time you spent reviewing our paper. We really appreciate your insights and are happy to discuss any further ideas or questions you may have. **Answer to W1**: Thank you for your insightful perspective on the literature of Dubey et al. (2020). We would like to clarify a minor error in the related work section: Dubey et al. (2020) did not model interference. By definition, interference occurs when an agent’s action influences the rewards of others. However, in their framework, the reward of agent $v$ is defined as $f_v = F(a_v, z_v)$, which depends solely on the agent’s own action $a_v$ and contextual information $z_v$, with no influence from other agents’ actions. We appreciate your attention to this detail and will remove this paper from the interference-related literature in the main paper. Consequently, we reaffirm that our work is the first to address interference in contextual bandits. **Answer to W2**: Multi-agent bandits and bandits with interference are related but focus on different aspects of the problem, with some overlap but distinct emphases: * **Multi-agent bandits** primarily deal with the interaction between multiple agents (typically fixed in number) and how they share information to enhance bandit learning. The central focus is on fostering collaboration or competition among agents. * **Bandits with interference** stem from the concept of "interference" in causal inference. The key consideration is whether one agent's action impacts the reward of others. Not all multi-agent bandit works address interference. In Sec 2, under *Cooperative Multi-Agent Bandits*, we classify prior literature: Paragraph 1 lists works that do not address interference, and Paragraph 2 includes studies that do, even if not explicitly framed as interference problems. For example, Bargiacchi et al. 2018 and Verstraeten et al. 2020 implicitly handle interference, similar to Jia et al. (2024) and Agarwal et al. 
(2024). In summary, the defining criterion for bandits with interference is straightforward: *Does one agent's action affect another’s reward?* If yes, interference exists and must be accounted for. **Answer to W3**: * First, we would like to clarify that our work and (Jia et al., 2024; Agarwal et al., 2024; Leung, 2022) all assume different known structures on interference, albeit in different ways. Without any assumptions on the nature of the interference pattern, both estimation and bandit learning would be infeasible due to the high-dimensional action space, $K^{N}$. Specifically, (Jia et al., 2024) and (Leung, 2022) assume that the strength of interference decays according to a specific notion ($\psi$) of distance between units, while (Agarwal et al., 2024) imposes a sparsity assumption on network interference, restricting interference to only a small neighborhood of size $s$. In contrast, we introduce an assumption in a more intuitive manner by modeling pairwise interference through a matrix $W$, which simplifies both modeling and interpretation. While this assumption may appear strong, it provides flexibility, as the entries of $W$ can take any value within $[-1,1]$. From this perspective, our formulation is more general. * Additionally, we would like to emphasize that (Jia et al., 2024) and (Leung, 2022) do not "indirectly learn" the interference strength. For instance, in (Jia et al., 2024), all regret bounds are derived under the assumption of a known $\psi$, which prescribes a specific functional form for interference decay. * In Sec 2, the second paragraph highlights works that quantify interference in a similar setting to ours, such as (Getis, 2009; Valcu & Kempenaers, 2010; Su et al., 2019). This structure is widely used in network and interference-related literature.
* Lastly, due to space constraints, regarding the case where $W_t$ is unknown: aside from the second paragraph of Sec 7, we kindly refer the reviewer to our detailed discussion in response to **Reviewer f3ef, Q1**. We hope that this plausible extension, along with the tolerance of LinCBWI w.r.t. the misspecification of $W_t$ in the added simulation, will help alleviate your concern. **Answer to W4**: Thanks for your suggestion regarding assumptions placement. We will move them to the main paper and we believe this would improve the readability. **Answer to W5**: The regret bound actually depends on $T$ only implicitly through $\bar{N}_T$. In the special case where each round involves a fixed number of units, $N_t\equiv n$, then $\bar{N}_T=nT$, leading to a regret bound of approximately $O(n^{1/2}T^{1/2})$, up to logarithmic terms. **Others**: We noticed that the reviewer referenced Shiliang Zuo's *Federated Multi-Armed Bandits*, but we were only able to find a paper with the same title by a different author. Could you kindly provide more details or clarify the reference? We would be happy to include and carefully discuss this work in the final version of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for your answer — it resolved my concern. Yes, I made a mistake earlier; the first author of the paper titled Federated Multi-Armed Bandits is Chengshuai Shi. Additionally, I look forward to seeing a more detailed discussion of the case where the interference strength is unknown in future versions of the paper — this is certainly an exciting direction for future work. I will keep my current score and am inclined to recommend acceptance. --- Reply to Comment 1.1.1: Comment: Dear Reviewer SPrL, Thank you for your clarification regarding the related work *Federated Multi-Armed Bandits* by Chengshuai Shi. We agree that this is a highly relevant and interesting paper, and we will incorporate it into the related work section of our final version. 
This work is among the first to connect federated learning with multi-armed bandits in dynamic agent settings. However, it focuses on standard MAB rather than contextual bandits, which is the setting considered in our work. Moreover, the issue of interference is not addressed in their framework, as each action $k$ taken by a client (or agent) $m$ only influences its own local reward $\mu_{k,m}$. The shared information among clients is used to collectively improve estimates of the global arms, rather than to model inter-agent dependencies. Regarding the case where the interference structure is unknown, we will include the discussion provided in the rebuttal in the final version of our paper. We would also like to note that this direction raises several important and technically rich questions, such as identifiability, convergence guarantees, and statistical inference under uncertainty, which we believe warrant a dedicated follow-up study. Within the scope of the current paper, our primary aim has been to establish a principled framework that bridges interference and contextual bandits, supported by rigorous theoretical analysis and extensive simulations. We believe our current contributions represent a well-substantiated and meaningful step toward addressing this important gap in the existing literature. Once again, we sincerely appreciate the time and effort you devoted to reviewing our work, as well as your thoughtful feedback and constructive suggestions throughout the process. Best, Authors from submission 4971
Summary: This paper investigates the problem of interference in linear contextual bandits, where the actions taken for one unit influence the rewards of others. This paper leverages an adjacency matrix to model the interference structure and proposes three online algorithms LinEGWI, LinUCBWI, and LinTSWI. The authors establish several theoretical guarantees, including regret bounds, asymptotic properties of the OLS estimator, and statistical inference for the optimal policy value. The proposed methods demonstrate superior empirical performance over classical linear contextual bandit approaches in both synthetic experiments and a real-world MovieLens-based dataset. ## update after rebuttal Overall, I appreciate the technical novelty of this paper, so I will maintain my current positive score. Claims And Evidence: All claims are well-supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: I have not checked all the proofs in detail. I did not identify any obvious errors. Experimental Designs Or Analyses: The experimental designs or analyses are sound and valid. Supplementary Material: N/A. No supplementary material is provided. Relation To Broader Scientific Literature: This work advances the field by proposing a framework for linear contextual bandits under interference. In contrast, prior studies only consider interference in multi-armed bandits or adversarial contextual bandit settings. The proposed algorithms (Algorithm 1) appear to be adapted from Shen et al., 2024. Shen et al. Doubly Robust Interval Estimation for Optimal Policy Evaluation in Online Learning. 2024. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths 1. To my knowledge, the use of an adjacency matrix to quantify interference in linear contextual bandits is novel. 2. 
This paper derives several theoretical guarantees, such as sublinear regret bounds that match the minimax optimal rate and asymptotic normality of the OLS estimator. 3. The paper includes extensive simulations demonstrating the advantages of the proposed algorithms. The paper also shows that without interference, the proposed algorithms reduce to classical algorithms with comparable performance. Weaknesses 1. This paper assumes that the interference matrix is known, which may be impractical in real-world settings. 2. The algorithm design seems to be heavily influenced by Shen et al. 2024 (Doubly Robust Interval Estimation for Optimal Policy Evaluation in Online Learning). 3. The theoretical results rely on several assumptions that are non-standard in the bandit literature, especially the clipping assumption (Assumption A.2), which appears to be quite strong. Other Comments Or Suggestions: N/A. Questions For Authors: 1. How does the computational overhead of the proposed algorithms compare to that of classical linear bandit algorithms? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your thoughtful questions and the time you spent reviewing our paper. We really appreciate your insights and are happy to discuss any further ideas or questions you may have. **Answer to W1**: Regarding the assumption that the interference matrix is known, we clarify this in three aspects: **First**, in many real-world applications, interference is either known or can be precomputed before applying bandit learning. For instance, in the context of COVID-19, geographic proximity naturally defines community connections, while in movie recommendation systems, social networks provide side information that quantifies pairwise interference. Such structural information is often available through expert knowledge or can be inferred from covariates before deploying a bandit algorithm. **Second**, the assumption of a known interference structure is widely used in both classical interference literature and its bandit extensions. In single stage, several works rely on this assumption (Manski, 2013; Aronow & Samii, 2017; Su et al., 2019; Bargagli-Stoffi et al., 2020), and it has been adopted in bandit settings as well (Jia et al., 2024). While it is ideal to learn the interference structure purely from data, there is always a trade-off between what is pre-specified as an assumption and what is left for the model to infer. In other words, there is no "free lunch"--the choice of assumption depends on the specific problem setting and modeling priorities. **Lastly**, we give a practical direction to the case where the interference matrix is unknown in the second paragraph of Sec 7. Due to space constraints, we kindly refer the reviewer to our detailed discussion in response to **Reviewer f3ef, Q1**. 
**Answer to W2**: We clarify that our work differs substantially from Shen 2024 in several key aspects: 1.**Problem Motivation**: Our work addresses the fundamental challenge of **interference** in bandits, whereas Shen 2024 focuses on statistical inference for the optimal value in a standard bandit setting, which is an entirely different problem. 2. **Scope of Application**: Shen 2024 considers inference in a two-arm bandit setting, limiting its applicability in real-world scenarios with multiple arms. In contrast, our work accommodates general **multi-arm** settings, making it significantly broader in scope. 3. **Our Contributions**: Our work is the first to explore interference in contextual bandits. * From a statistical inference perspective, extending Shen 2024 to incorporate interference and multi-arm settings is **nontrivial**, particularly in establishing theoretical results due to the data dependency introduced by interference. Our work provides statistical guarantees in this broader context, but its contributions extend well beyond merely building on Shen 2024. Although we generalize their results, our study is fundamentally distinct, with a broader focus that does not rely heavily on their approach. * Moreover, if statistical properties (as discussed in Sec 4.3-4.4) are not relevant to a particular application, the clipping step (Line 10 of Alg. 1) can simply be omitted without affecting our main contributions, making our work entirely independent of Shen 2024. **The core novelty of our work lies in the problem formulation, the design of bandit algorithms, and regret analysis in this novel setting, all supported by extensive simulations and quasi-real data analysis. 
While we establish statistical inference results, they primarily reinforce our findings rather than serving as the sole core contribution.** **Answer to W3**: Regarding Assumption **A.2**, which may be of particular concern, we emphasize that: * **The assumption is not restrictive** due to the small multiplication factor applied on the right-hand side, $p_t$. In Alg. 1, we specify that the clipping rate $p_t$ only needs to satisfy the condition of not decaying faster than $O(\bar{N}_t^{-1/2})$. This means that any decreasing sequence, such as $p_t = O(\bar{N}_t^{-3/7})$ or $O(\bar{N}_t^{-2/5})$, or even a small fixed value (e.g., $p_t = 10^{-3}$) suffices. Given that $p_t$ is small and decreases over time, Assumption A.2 naturally holds as the sample size grows. * **A.2 is actively enforced in our algorithm**. Specifically, Line 10 of Algorithm 1 ensures that if an arm is explored insufficiently (determined based on $p_t$ and a smallest eigenvalue comparison), the algorithm enforces additional exploration, preventing extreme imbalance and ensuring statistical consistency. * **Empirical validation**: In our simulation studies, we simply set $p_t \equiv 0.01$ and observed that Line 10 is rarely triggered. This indicates that the assumption does not pose practical concerns. **Answer to "Questions For Authors"**: Due to space constraints, we refer the reviewer to our response to **Reviewer f3ef, Q2**, which includes a detailed analysis and a comparison plot at https://anonymous.4open.science/r/LinCBWI_ICML_rebuttal-C829/.
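For intuition, the clipping safeguard discussed above can be sketched as follows (the frequency-based trigger is a simplified stand-in we invented for illustration; the paper's Line 10 of Algorithm 1 uses a smallest-eigenvalue comparison, and the names and floor value below are hypothetical):

```python
import numpy as np

def maybe_force_explore(counts, n_bar, K, rng, p_min=0.01):
    """If some arm's empirical selection frequency has dropped below the
    clipping rate p_t, return a uniformly random arm; otherwise return
    None to keep the greedy/UCB/TS choice."""
    p_t = max(p_min, n_bar ** -0.5)     # decays no faster than O(n_bar^{-1/2})
    freqs = counts / max(counts.sum(), 1)
    if freqs.min() < p_t:
        return int(rng.integers(0, K))  # forced uniform exploration
    return None

rng = np.random.default_rng(0)
# Balanced arms: no forced exploration is triggered.
assert maybe_force_explore(np.array([500, 500]), n_bar=1000, K=2, rng=rng) is None
# A starved arm triggers a uniformly random pull.
assert maybe_force_explore(np.array([0, 1000]), n_bar=1000, K=2, rng=rng) in (0, 1)
```

As noted above, with a small fixed rate such as $p_t \equiv 0.01$ the forced-exploration branch is rarely taken in practice, so the safeguard mainly serves the statistical-inference guarantees.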
Summary: The paper explores the intersection of causal inference and multi-armed bandits, specifically in the setting of multi-agent bandits with interference among agents. According to the authors, this is the first work in the literature to incorporate contextual information (i.e., the covariates of units). Under certain assumptions, particularly linearity, the authors propose an algorithm with three exploration variants to address the problem. Both theoretical analysis and experimental results demonstrate the effectiveness of the proposed approach. ## Update After Rebuttal After reviewing the authors’ response, I have decided to maintain my current score. Claims And Evidence: Yes. Methods And Evaluation Criteria: The work is mostly theoretical but also contains some experiments with a reasonable design. Theoretical Claims: I checked the flow of some parts of the proofs, but I’m not certain all are correct. Experimental Designs Or Analyses: The design of the conducted experiments seems sound. Supplementary Material: No. Relation To Broader Scientific Literature: The key contributions of the paper are in the areas of online decision-making (bandits) and causal inference. Essential References Not Discussed: I am not aware of any missing prior works. Other Strengths And Weaknesses: Strengths: 1. The paper addresses an important and novel problem at the intersection of causal inference and bandits. 2. Rigorous theoretical analysis is provided, ensuring performance guarantees for the proposed algorithm. 3. The problem is well-motivated with real-world examples, and the literature review is thorough. Additionally, the paper discusses potential directions for future work. Weaknesses: 1. Some parts of the paper are difficult to follow due to notation and presentation of results, and these could be improved (see the suggestions section).
Other Comments Or Suggestions: It would be better to use $\top$ for the transpose operation of a matrix (e.g., $\beta^{\top}$ instead of $\beta’$). The notation and formulas in Subsection 3.1 are not very clear. I believe this section could be rewritten for better clarity. Questions For Authors: 1. I did not fully understand why you modeled the inference using Equation 1. To what extent does this formulation restrict the problem? Could you discuss alternative formulations for the reward and the challenges they pose for algorithm design? 2. In Lines 2 and 11 of Algorithm 1, why do you sample actions from Bernoulli(0.5)? Shouldn't it be sampled uniformly over $[k]$? 3. In Theorem 4.2, how did you derive the third part? 4. In subsection 3.1, is it possible to express $f(X_{tj}, a_{tj})$ in a different form? For example, by a simple inner product if each action represents a vector. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your thoughtful questions and the time you spent reviewing our paper. We really appreciate your insights and are happy to discuss any further ideas or questions you may have. **Q1**: Eq. (1) models the reward of unit $i$ in round $t$ as $R_{ti}=\sum_{j=1}^{N_t} W_{t,ij} f_{tj}+\eta_{ti}$, where the reward is a weighted sum of sub-payoff functions $ f_{tj}=f(X_{tj},A_{tj})$, with weights $W_{t,ij}$ determined by the interference matrix $W_t$. Specifically, the $i$th row of $W_t$ captures the influence of all other units on unit $i$, where each element $W_{t,ij}$ represents the strength of unit $j$’s contribution to $i$’s reward. The term $f_{tj}$ reflects unit $j$’s individual contribution, and multiplying by $W_{t,ij}$ quantifies its effect on $i$. Summing over all units $j\in\\{1,\dots,N_t\\}$ yields the final reward $R_{ti}$. This formulation is intuitive, easily decomposable across individuals, and widely used in network and interference-related literature (Cliff & Ord, 1981; Getis, 2009; Su et al., 2019). There are several alternatives to Eq (1). One approach is to generalize it as $R_{ti} = g(\boldsymbol{W_t}, \boldsymbol{X_t},\boldsymbol{A_t}; \theta) + \eta_{ti}$, where $g$ is parameterized by $\theta$, potentially using more flexible models such as neural networks. While this model is learnable given sufficient data, it would invalidate Eq. (2), which, through the decomposition in Eq. (1), effectively consolidates all interference-related information of unit $i$ into the interference weight $\omega_{ti}$ and simplifies decision making. Other directions, instead of using a matrix $W_t$ to quantify pairwise interference levels, include assuming alternative interference-related structures such as partial interference or exposure mapping (see the first paragraph of Section 2). 
However, the fundamental challenge of the interference problem is that, without any structural assumption, $R_{ti}$ could be arbitrarily influenced by all units' contextual information and actions $(\boldsymbol{X}_t, \boldsymbol{A}_t)$, making the problem too general to be learnable. Some assumptions (whether in the form of Eq. (1) or other models) are necessary to impose structure on the interference, depending on what best aligns with real-world data. Given the structured formulation of $\boldsymbol{W}_t$ and its interpretability within a linear framework, we find Eq. (1) to be a natural and intuitive starting point. **Q2** Thanks for pointing that out! This is actually a typo that occurred while generalizing our setting from the $2$-arm case to the $K$-arm case. The sampling should indeed be uniform over $[K]$, as you correctly mentioned. We will revise this in the final version of our paper. **Q3** Thanks again for your careful reading and good catch. In EG, $\kappa_{ti} (\omega_{ti},X_{ti})$ should be $\epsilon_{ti}(K-1)/K$, instead of $\epsilon_{ti}/2$ (which holds only when $K=2$). This is because the probability of exploration is $\epsilon_{ti}$ in EG, and the fraction allocated to randomly exploring the optimal arm is $1/K$ of $\epsilon_{ti}$. Thus, the correct expression should be $ \kappa_{ti} (\omega_{ti},X_{ti}) = (1 - 1/K)\epsilon_{ti} = \frac{\epsilon_{ti} (K-1)}{K}$. We will correct this in the final version of our paper. **Q4** Section 3.1 expresses the payoff function as $f(X_{tj},a) = X_{tj} \beta_a$, which assumes linearity with respect to the contextual information $X_{tj}$ for each action $a \in \mathcal{A}$. This linear formulation is widely adopted in the linear contextual bandits literature, such as (Chu et al., 2011; Agrawal & Goyal, 2013). If I understand your suggestion correctly (i.e., representing $f$ as a simple inner product when each action is expressed as a vector), then Section 3.1 is already aligned with this idea. 
If we encode the action as a dummy variable vector, the function can be rewritten as an inner product, which is equivalent to your suggestion. There are several ways to extend this linear payoff assumption. First, $X_{tj}$ can be transformed using basis functions (e.g., polynomial features) to incorporate higher-order representations as needed, which is a straightforward extension. Another direction is to integrate neural bandits into this framework by modeling $f(X_{tj}, A_{tj})$ using a neural network. While this approach allows direct adaptation of existing neural bandit algorithms to interference-aware settings, deriving theoretical guarantees (particularly asymptotic properties, as established in Section 4) becomes significantly more challenging. This is also why we begin with a linear payoff function. **Regarding your suggestions about notation and writing**: We sincerely appreciate your feedback on Section 3.1. We will refine this section by adding details, clarifying concepts like $\widetilde{X}_t$, and updating the transpose notation to $\beta^\top$ for better clarity and readability.
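As a concrete illustration of the Eq. (1) discussion in **Q1**, the linear interference reward model can be simulated in a few lines of numpy. This is only a sketch: the dimensions, the per-arm coefficients `beta`, and the noise scale below are made-up assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 5, 3, 2                      # units, context dim, arms (illustrative)
beta = rng.normal(size=(K, d))         # hypothetical per-arm payoff coefficients

X = rng.normal(size=(N, d))            # contexts X_t
A = rng.integers(0, K, size=N)         # actions A_t
W = rng.uniform(-1, 1, size=(N, N))    # interference matrix W_t
np.fill_diagonal(W, 1.0)               # a unit's own contribution gets weight 1

# Sub-payoffs f_tj = f(X_tj, A_tj) = X_tj beta_{A_tj} (linear payoff, Sec. 3.1)
f = np.einsum("jd,jd->j", X, beta[A])

eta = 0.1 * rng.normal(size=N)         # noise eta_ti
R = W @ f + eta                        # Eq. (1): R_ti = sum_j W_t[i,j] f_tj + eta_ti
```

Row $i$ of `W` collects the influence weights on unit $i$, so `W @ f` performs exactly the sum over $j$ in Eq. (1).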
Summary: The paper introduces a framework to address Linear Contextual Bandits with interference, where the actions of one unit can affect the rewards of others. The authors bridge the gap between causal inference and online decision-making by explicitly modeling interference through a linear structure involving an interference matrix. They propose three online algorithms, LinEGWI, LinUCBWI, and LinTSWI, which extend classical algorithms by incorporating interference-aware reward modeling. They establish regret bounds and provide numerical results on synthetic data generated based on the MovieLens data. Claims And Evidence: The claims made in the paper are supported by theoretical derivations and empirical evidence, and no problematic claims were found. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for addressing interference in contextual bandit problems. The authors clearly define interference via an adjacency matrix, allowing flexible modeling of different real-world problems. Theoretical Claims: I skimmed through the theoretical results but did not thoroughly verify all proofs in detail. Experimental Designs Or Analyses: I reviewed the experimental setup and analyses presented in Sections 5 and 6. The experimental design is clear and reasonable, although no baselines for LinCB under interference are available, and hence the algorithms are compared to their non-interference algorithmic baselines. Supplementary Material: I skimmed through the supplementary materials provided. Relation To Broader Scientific Literature: This paper extends recent works in the bandits-with-interference literature to the Linear Contextual setting. Essential References Not Discussed: The authors have cited the most relevant previous works in the area. Other Strengths And Weaknesses: Strengths: - First work addressing contextual bandits under general interference. 
- Novel asymptotic analysis of estimators under complex dependencies induced by interference. Weaknesses: - The computational complexity of the proposed algorithms is unclear, which is typically an important aspect in this subclass of bandits. - The way the proposed "Pseudo-true-reward" generating process (as described in App. B3) is carried out is a bit dubious (see questions below). Other Comments Or Suggestions: Typos: - Lines 31-32: "a synthetic data generated" → "synthetic data generated". - Lines 106-107: "to to" → "to". - Line 412: "of interference of interference" → "of interference". - Line 151-152: "i.e." → "i.e., ". Questions For Authors: 1. The authors state that their algorithm can be adapted in case of an unknown interference matrix, inspired by techniques on low-rank factorisation. Could you please discuss at a high level what impact you believe learning such a matrix would have on the algorithm's performance? Is there any way to quantify the impact in case of a misspecified interference matrix? 2. Can you characterize (even roughly) the computational complexity of the proposed online algorithms LinEGWI, LinUCBWI, LinTSWI? 3. Why, if there are more observations from the same user, is the corresponding element in the interference matrix set to 1? Is there any way to evaluate the fit of your model (i.e., the reward you design in Points I, II in App. B3) to the actual data? Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your thoughtful questions and the time you spent reviewing our paper. We really appreciate your insights and are happy to discuss any further ideas or questions you may have. **Q1**: In the 2nd paragraph of Sec 7, we briefly mentioned how we could proceed to jointly estimate $(\Phi, \beta)$ when $W_t$ is unknown. A simple way to proceed is through **Alternating Optimization**: initializing $\Phi$, optimizing for $\beta$, then fixing $\beta$ to update $\Phi$, and repeating until convergence. Several interesting questions arise. First, identifiability of $(\Phi, \beta)$ needs careful consideration, especially since $\widetilde{X}_t$ depends on $\Phi$, which may introduce issues in distinguishing their effects in $\widetilde{X}_t\beta$ without additional assumptions. Second, the convergence of the alternating updates requires further investigation. Finally, the sample size for accurate estimation of both parameters, and consequently, for effective action learning and reward accumulation, would likely be higher than in the case where $W_t$ is known. We expect that **the algorithm-wise implementation is relatively straightforward, but how to establish the theory behind it would be an interesting future work**. To evaluate the impact of a misspecified interference matrix, we conducted a sensitivity analysis, summarized in Sec 1 of https://anonymous.4open.science/r/LinCBWI_ICML_rebuttal-C829/. The data follows the same simulation setup as Sec 5.2, except we manually set a misspecified matrix for LinCBWI: $\breve{W}_t = W_t + \Xi_t$, where each element of $\Xi_t$ is generated from $Unif(-b,b)$. Any values in $\breve{W}_t$ exceeding $[-1,1]$ are clipped. Notably, LinCBWI remains robust, converging to the optimal decision with negligible error even under substantial misspecification ($b \leq 0.5$). Even in the extreme case where $\breve{W}_t \sim \text{Unif}(-1,1)$ (completely unrelated to $W_t$), LinCBWI slightly outperforms classical LinCB. 
This suggests that **incorporating interference structures, even when poorly estimated, enhances flexibility and provides deeper insights into the bandit framework**. We hope LinCBWI’s tolerance to misspecification alleviates concerns about unknown interference. **Q2:** The computational complexity of LinCBWI is approximately $O(d^2K^2\bar{N}_T)$. Specifically, for each unit $i$ in round $t$, the primary computational costs arise from matrix inversion (Line 7 of Algorithm 1) and the smallest eigenvalue computation (Line 10), both requiring $O(d^3K^3)$ in standard implementations. However, using optimization techniques such as iterative eigenvalue decomposition and the Sherman-Morrison update can reduce this to $O(d^2K^2)$. Multiplying by the total number of units $\bar{N}_T$ gives the overall complexity. To empirically validate this, we compared the runtime of LinCBWI with LinCB in Sec 2 of https://anonymous.4open.science/r/LinCBWI_ICML_rebuttal-C829/. In classical LinCB, the complexity is $O(d^2K\bar{N}_T)$. LinCBWI introduces an additional factor of $K$ due to interference, requiring a joint update of $\beta_a$ for all $a \in \mathcal{A}$. Since our algorithm consistently runs within seconds, **computational complexity is unlikely to be a bottleneck**. **Q3:** When multiple observations are collected from the same user within a single round, each movie recommendation $A$ and its corresponding reward $R$ generate a new data tuple $(X,A,R)$, meaning a user can contribute multiple tuples per round with the same $X$ but potentially different $(A,R)$. These observations can be viewed as coming from "two persons with the same brain". As a result, these data tuples naturally exhibit interference with the highest weight (set to 1), since they originate from the same individual. Regarding the question of how to evaluate the fit of reward design in 'I' and 'II' of Appendix B.3, there is insufficient data to directly assess their closeness using the existing dataset. 
This limitation arises because we are working with an offline dataset to simulate an online bandit setting, where the "actual" reward is typically assumed to be observable. In our setup, $R_{ti}$ may depend on the actions of all units in round $t$, leading to $K^{N_t}$ possible reward realizations. However, in an offline dataset, we can observe only a **single** realization among these possibilities. As a result, the "actual" reward is not directly identifiable; we can only approximate a pseudo-real data environment to capture certain aspects of real-world behavior. Notably, the only two existing works on interference in bandit settings (Jia 2024; Agarwal 2024) have only conducted synthetic experiments. By incorporating semi-real data analysis, our approach takes a step forward that better approximates real-world applications. **Finally**, thank you for pointing out the typos in our writing. We will incorporate them into the final version of our paper and sincerely look forward to your feedback. --- Rebuttal Comment 1.1: Comment: Thanks for your answer and the additional results, it resolved my concern. I increased the score accordingly. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your time and thoughtful review. We're glad to hear that our responses addressed your concerns, and we thank you for your updated evaluation. Best, Authors from submission 4971
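To make the Sherman-Morrison point from **Q2** concrete: maintaining the inverse of a design matrix under rank-one observation updates costs $O(p^2)$ per update instead of the $O(p^3)$ of full re-inversion. Below is a minimal, generic numpy sketch; the dimension and data are arbitrary placeholders, not tied to Algorithm 1.

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, in O(p^2) instead of O(p^3)."""
    Au = A_inv @ u                 # A^{-1} u
    vA = v @ A_inv                 # v^T A^{-1}
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(1)
p = 4
A = np.eye(p)                      # ridge-style initialization, as in LinUCB-type updates
A_inv = np.linalg.inv(A)

x = rng.normal(size=p)             # a new (interference-adjusted) feature vector
A_inv = sherman_morrison_update(A_inv, x, x)   # rank-one update A <- A + x x^T

# Matches direct inversion of the updated matrix
assert np.allclose(A_inv, np.linalg.inv(A + np.outer(x, x)))
```

Applied once per incoming observation, this keeps the per-round cost of the inverse at $O(d^2K^2)$ as stated in the rebuttal.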
Importance Sampling for Nonlinear Models
Accept (poster)
Summary: The paper introduces a framework to generalize norm-based and leverage-score-based importance sampling from linear models to nonlinear settings. It uses the adjoint operator of a nonlinear map to do so. The authors demonstrate that their sampling methods provide guarantees analogous to linear models, enabling approximation results for nonlinear mappings. ## Update after Rebuttal I appreciate the response provided by the authors and hope that the next version incorporates the required changes. Hence, I maintain a positive assessment. Claims And Evidence: The examples and propositions are supported with clear discussions. Methods And Evaluation Criteria: Yes it does. It is based on the adjoint operator for non-linear maps, which complements the use of non-linear leverage scores. The experiments in the paper compare the performance of a model trained on a smaller sample of training data. However, the classification task could be compared with more sampling methods. Theoretical Claims: The propositions and examples seemed correct, which was useful for understanding the idea and its practical relevance. In the main theorem, the running time analysis is missing. Experimental Designs Or Analyses: The experiment section supports the theoretical claims. - For the classification task, comparisons with L1 Lewis weights (like Mai, Musco, Rao, 2021) and leverage + uniform (like Munteanu, Schwiegelshohn, Sohler, Woodruff, 2018) are missing. Supplementary Material: Section A.1 and A.2 Relation To Broader Scientific Literature: Importance sampling has been extensively studied in linear model problems such as least squares regression, clustering, etc. This result is a key step towards non-linear subspace embedding, which is due to the adjoint operator. Essential References Not Discussed: All major relevant results are cited and discussed in the paper. 
Other Strengths And Weaknesses: - $\textbf{Strengths:}$ The paper introduces an innovative way of importance sampling for nonlinear models using the adjoint operator. Further, it offers theoretical guarantees backed by experiments. - $\textbf{Weakness:}$ The experiments are compared with limited sampling methods. The theorem misses the running time discussion. For data in higher dimensions the quadratic dependence on the coreset size maybe a bottleneck for practical purposes. Other Comments Or Suggestions: Refered to questions. Questions For Authors: - Can it be extended to losses other than squared loss? - Is it possible that the nonlinear leverage scores can be utilized for active learning strategies or transfer learning? If so, then you may also discuss this in the main paper to strengthen it. - What are the primary computational bottlenecks when directly calculating nonlinear importance scores? How can it be handled for high-dimensional data? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer JXa1, we sincerely appreciate the time you devoted to reviewing our paper and your comments. We aim to address your comments and questions in detail below. ### Additional Sampling Methods in Classification Thank you for the suggestion. While adding more sampling methods would ideally strengthen our arguments, we find that this is unfortunately not feasible, as most prior works are limited in the settings they apply to and do not extend to the nonlinear examples we have considered in the paper. We would like to note that many existing methods are more restricted in the types of models they apply to, typically being linear or kernel-based, while our approach applies to general nonlinear models via the adjoint perspective. For example, L1 Lewis weights have been studied for linear predictor models, e.g., logistic regression, but are not trivially extended to the nonlinear predictors we consider, e.g., single index model. To our knowledge, the only prior works applicable to our setting are that of Gajjar et al. (2023, 2024). However, their results rely on linear leverage scores, and there is no general construction for nonlinear importance sampling like we offer in the paper. In this light, the plots in our paper labeled as ”LS - L,” i.e., those based on linear leverage scores, represent the expected performance from Gajjar et al. (2023, 2024). ### Run Time Analysis Our theoretical focus was primarily on sample complexity and its effect on loss approximation, which in turn reduces the time complexity when training a sub-sampled dataset. Analyzing runtime depends on various components that are outside the scope of this work, such as the choice of optimization algorithm, its hyperparameters, and the algorithm used for approximating the nonlinear scores. We will add a brief discussion in the revision, noting that the cost of approximating these scores is analogous to the linear case and its associated components. 
### Utilizing Nonlinear Scores for Active/Transfer Learning Thank you for raising this important suggestion. While we did not explicitly discuss active or transfer learning in our paper, the general concept of “nonlinear importance scores” can naturally be extended to these areas. In a one-shot active learning setting, for instance, selecting the most “informative” points would help reduce labeling effort. For transfer learning, one could use these scores to identify samples in a source domain that are most representative of a target domain’s features. We will add a discussion note in the final version outlining these possibilities to strengthen the paper. ### Quadratic Dependence on $p$ We appreciate your comment. We recognize that many subspace embedding results, including ours, exhibit an $\mathcal{O}(p^2)$ sample-size term. However, as an upshot, we provide a loss approximation guarantee of the form (2), which to our knowledge has not been done prior to our work in the context of nonlinear predictor models such as neural networks (please see page 3, left column, lines 148–156, as well as Remark 3.2 on page 7). Achieving lower complexity for fully general nonlinear embeddings while maintaining a loss approximation guarantee of the form (2) remains an open challenge. In our work, we primarily operate in an underparameterized regime, where the number of features is not excessively large compared to the number of observations, i.e., $n \gg p $, which might help mitigate this issue to some extent. While we have noted this limitation in Remark 3.2, we will emphasize it more clearly and reference recent advances in randomized embeddings to highlight potential approaches and future directions for addressing this bottleneck in broader high-dimensional nonlinear settings. 
### Extension Beyond Squared-loss Our Appendix A.3 indicates that the framework extends conceptually to other positively homogeneous losses, but the complete theoretical guarantees for general losses remains for future work. ### Primary Computational Bottlenecks For Directly Calculating Nonlinear Importance Scores Thanks for your question. For arbitrary nonlinear models, computing the adjoint operator may require numerical integration if no closed-form expression exists. However, in more structured settings, such as single-index models or certain ReLU-based networks, explicit formulas described in the main paper allow for standard matrix operations to calculate row-norm or leverage scores, similar to the linear case. For row-norm scores, one simply evaluates the norm of each row in the “nonlinear dual matrix.” For leverage scores, a QR or SVD-like factorization of the dual matrix is needed, for which standard randomized NLA techniques (e.g., approximate factorizations) can help speed up these factorizations. We will highlight these computational considerations in the next revision of our paper. ### Additional Links https://anonymous.4open.science/r/ICML2025Review/
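To illustrate the leverage-score computation described above: for a tall matrix, the $i$th leverage score equals the squared norm of the $i$th row of the $Q$ factor from a thin QR factorization. The sketch below uses a random Gaussian matrix as a stand-in for the nonlinear dual matrix; the shapes and data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
F = rng.normal(size=(n, p))      # stand-in for the (nonlinear) dual matrix, n >> p

# Leverage scores: tau_i = ||Q[i, :]||^2, where F = Q R (thin QR)
Q, _ = np.linalg.qr(F)           # default 'reduced' mode: Q has shape (n, p)
tau = np.sum(Q**2, axis=1)

# Sanity checks: scores are the diagonal of a projection matrix,
# so they lie in [0, 1] and sum to the rank p
assert np.all((tau >= -1e-12) & (tau <= 1 + 1e-12))
assert np.isclose(tau.sum(), p)

# Normalized scores give the importance sampling distribution
probs = tau / tau.sum()
```

Randomized NLA techniques (e.g., sketched factorizations) can replace the exact QR when $n$ is very large, as noted in the rebuttal.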
Summary: The paper proposes a sampling method for important data points by extending the norm and leverage scores from linear models to nonlinear models, reducing the computational complexity. Claims And Evidence: The proposed methods for important data points in nonlinear models offer several advantages, such as reduced computational complexity, enhanced explainability, and improved outlier detection. These claims are supported by both theoretical analyses and experimental results. Methods And Evaluation Criteria: The paper evaluates the proposed method on four standard datasets, but these datasets are relatively small, and the image sizes are quite tiny. While the results on these smaller datasets are useful for demonstrating the method's functionality, they may not fully represent the challenges of real scenarios. Evaluation on larger datasets with higher-dimensional data would provide stronger evidence for the proposed approach. Theoretical Claims: The theoretical aspects of the paper are reasonable and well-justified. Experimental Designs Or Analyses: see Methods And Evaluation Criteria Supplementary Material: The supplementary material provides additional details of theory and experiments. Relation To Broader Scientific Literature: The paper effectively bridges the gap between existing methods for linear models and nonlinear models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper extends the concept of importance sampling from linear models to nonlinear models, which is highly meaningful, as most neural networks used today are nonlinear. The proposed method can improve the efficiency of training these models. The paper provides a robust theoretical foundation for the proposed method, which supports its claims effectively. The experiments demonstrate that the proposed method performs well on small subsets of datasets. 
However, it would be beneficial to compare the convergence speed of the proposed method, as this would provide additional insights into its practical efficiency. Other Comments Or Suggestions: see above Questions For Authors: Does the proposed method improve convergence of model training? How does the proposed method perform on larger datasets? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer Z4FF, we sincerely appreciate the time you took to review our work. To the best of our ability, we aim to address your comments and questions in detail below. ### Experiments with High-dimensional Data Thank you for your observations and feedback. As mentioned at the outset in the introduction, we make the assumption that $n \geq p$, i.e., the **underparameterized** setting. While the construction of our Adjoint Operator in Definition 3.1 remains valid for all dimensions, the nonlinear scores in Definitions 3.3 and 3.4 only provide useful information when $n \geq p$. Otherwise, if $p \geq n$, the nonlinear leverage scores for all data points would be the same, meaning that all data points are considered equally important in fitting the model. In this case, our sampling distributions imply a uniform sampling scheme rather than a non-uniform importance sampling scheme. This inherent limitation is also present across the field of randomized numerical linear algebra (RandNLA) and widely used importance sampling methods such as leverage scores [1]. In fact, for many of the theoretical guarantees in RandNLA to be meaningful, the number of data points must often be exponentially larger than the dimension. Unfortunately, extending importance sampling methods like leverage scores in a meaningful way to overparameterized settings, even in the linear case, remains an open problem. In this light, we would like to clarify that in our setting, “high dimensional” is interpreted in terms of the number of observations rather than the number of features. While many of the datasets considered in our experiment contain large numbers of observations, the underparameterized regime limits us from using more conventional datasets that would result in overparameterization and large number of features. [1] Woodruff, D. P. (2014). Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1–2), 1-157. 
### Improving Convergence of Model Training Thank you so much for your suggestion. We note that our main focus in the paper and our theoretical results concentrate on sampling complexity and its effect on loss approximation rather than the convergence speed of a given optimization method used for training. As you agree, convergence speed can be influenced by many factors, such as the choice of the optimization algorithm, selection of hyper-parameters, computing/network architecture, etc. While our results are not concerned with these factors, to address your question, we have provided a plot showing training time (in seconds) and relative error with respect to sample size for subsampling using nonlinear leverage score (the results are consistent across different nonlinear sampling methods), offering insight into the tradeoff [(please see here)](https://anonymous.4open.science/r/ICML2025Review/Convergence.pdf). As shown, training a model sampled according to nonlinear leverage scores achieves $10^{−2}$ relative error faster than the time it takes to train the model on the entire dataset. This speedup, coupled with the reduction in the cost of labeling the additional data, showcases the advantages of our nonlinear scores in reducing overall costs of training. ### Additional Links 1. New Datasets (Numerical Experimentation): [Please see here](https://anonymous.4open.science/r/ICML2025Review/NewDatasets.pdf) 2. Existing Classification Dataset (Numerical Experimentation): [Please see here](https://anonymous.4open.science/r/ICML2025Review/Fig2Quant.pdf)
Summary: This paper introduces a novel family of distributions that extends leverage score distributions—widely used in subset selection for linear models—to nonlinear models. The key component of this construction is a newly defined nonlinear adjoint operator, which satisfies the identity: $L(\theta) = \|\hat{F}^{*}(\theta) \hat{\theta}\|^2,$ where $\hat{\theta}:= [\theta|1]$. This mirrors the classical case in linear regression, where the loss function satisfies: $L(\theta) = \|X \theta - y\|^2 = \|\hat{X} \hat{\theta}\|^2,$ where $\hat{X}:= [X|-y]$. Leveraging this formulation, the paper explores importance sampling based on these newly introduced nonlinear leverage scores. Specifically, it investigates conditions under which the following bound holds with high probability: $L(\theta_S) \leq L(\theta^*) + O(\epsilon),$ where $S$ is the index set of $s$ samples drawn via probability weights determined by the nonlinear leverage scores, also referred to as nonlinear norm scores. The authors focus on two classes of nonlinear models: generalized linear predictors and ReLU neural networks. In both cases, they demonstrate that nonlinear leverage scores can be upper bounded by linear leverage scores. Furthermore, they establish necessary conditions for the high-probability validity of the above bound when using the quadratic loss in a regime where the sample size scales nearly linearly with the dimension. Finally, numerical simulations illustrate the advantages of sampling based on nonlinear leverage score distributions over traditional linear leverage score distributions. ## Update after rebuttal I would like to thank the authors for their response. I increase the score from 2 to 3. Claims And Evidence: - While the claims about the novelty of the construction are valid, the claims about the theoretical guarantees are overstated, since they are only valid for the squared loss and are only applicable to a limited family of nonlinear models. 
- The claims about the numerical performance are barely substantiated by the shown results: while we can observe that the use of nonlinear leverage scores leads to an improvement compared to the use of the classical leverage scores, the improvement is marginal and barely noticeable (e.g., Figure 1-b). Moreover, for the classification task, the comparison between nonlinear leverage scores and linear leverage scores is at best qualitative. Methods And Evaluation Criteria: A quantitative evaluation of the methods was only conducted for two datasets. The rest of the experimental section is based on a qualitative comparison that is not relevant for illustrating the theoretical guarantees established in Section 3. Theoretical Claims: All proofs were examined, but not carefully checked, and no glaring mistakes were identified. Experimental Designs Or Analyses: See Methods/Evaluation criteria. Supplementary Material: All proofs were examined, but not carefully checked, and no glaring mistakes were identified. Relation To Broader Scientific Literature: The use of importance sampling to build coresets for linear models has been extensively studied in recent years, leading to the development of a new subfield at the intersection of machine learning and randomized linear algebra. While linear leverage scores provide a robust solution to the subsampling problem from both theoretical and empirical perspectives, extending this framework beyond linear models has remained a persistent challenge for the community. Indeed, unlike in the linear case—where the leverage score distribution is independent of the optimal solution to the underlying optimization problem—existing approaches for non-linear models typically involve a chicken-and-egg problem, as the sampling distribution depends on the very parameter being optimized. While the proposed construction of the non-linear adjoint operator is elegant, it does not resolve this fundamental issue. 
Essential References Not Discussed: Existing literature on coresets is missing from the section dedicated to related work (Section 2). For instance: * Determinantal Point Processes for Coresets https://jmlr.org/papers/volume20/18-167/18-167.pdf * On Coresets for Logistic Regression https://arxiv.org/abs/1805.08571 Overall, a comparison with sensitivity sampling schemes in the line of [Langberg and Schulman 2010] is missing. Langberg, M. and Schulman, L.J., 2010, January. Universal $\epsilon$-approximators for integrals. In Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms (pp. 598-607). Society for Industrial and Applied Mathematics. Other Strengths And Weaknesses: The article is well written, and it is easy to follow for someone who is familiar with the literature. The construction of the nonlinear adjoint operator is elegant, and the study of this operator would be of interest per se. A notable weakness is the lack of an extensive empirical validation. The comparison with linear leverage scores was restricted to two datasets, and the observed advantage of the newly introduced scores remains unclear. Other Comments Or Suggestions: - Questions For Authors: 1) Is there a way to strengthen Theorem 3.1 by assuming that sampling is done according to the $\tau_i(\theta)$ and not $\tau_i$? 2) Could you provide an empirical comparison, similar to Figure 1, for a task other than regression? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer Nx6D, we are grateful for the time and effort you devoted to reviewing our paper. We sincerely hope to address your comments and questions in detail below. ### Scope of Theoretical Guarantees 1. Thank you for your observation. While the examples in the paper (single-index models, ReLU networks) illustrate how the adjoint operator can be computed explicitly from Proposition 3.1, we would like to clarify that the method extends conceptually to broader classes of functions via its natural definition (Definition 3) through numerical integration. 2. While our approach can conceptually extend to broader nonlinear models, we agree that our current theoretical guarantees focus on squared losses. Appendix A.3 outlines initial steps toward more general losses, but fully developing the theory in this setting remains an open challenge we are pursuing. We will make sure to clearly state these limitations in the revised version. ### Numerical Experimentation (Figure 1) 1. We agree that incorporating additional datasets would enhance the quality of our empirical evaluations. To address this, we have now included two additional datasets–one for classification & one for regression [(Please see here)](https://anonymous.4open.science/r/ICML2025Review/NewDatasets.pdf). 2. We believe our method can offer substantial performance gains in many cases. The perception otherwise may stem from our choice of axis scales, which make the graphs appear less visually impressive. In our numerical experiments, we used a log-scale on the Y-axis. However, we should have better highlighted that even a small shift on a log-scale can represent a significant absolute difference. For example, in the California Housing Prices dataset, comparing linear row-norm with non-linear row-norm, the shift from −0.75 to −2 in log-scale corresponds to a relative error reduction from 18% to 1%. We will clarify this more clearly in the text. 
### Qualitative Interpretation of Classification Tasks (Figure 2) We agree that our current results for the classification task rely on qualitative interpretation rather than quantitative analysis. Our main goal was to demonstrate the perceptual interpretability and diagnostic power of our framework, highlighting that our nonlinear scores capture meaningful information about the data that would otherwise be unavailable. To our knowledge, this is the first approach of its kind, introducing a new interpretability & diagnostics paradigm for classification with nonlinear models. However, we fully agree that incorporating quantitative metrics would strengthen our work. To address this, we have added two additional figures that compare the quantitative performance on two existing datasets, namely SVHN & QD [(Please see here)](https://anonymous.4open.science/r/ICML2025Review/Fig2Quant.pdf). ### Fundamental Challenges Our theoretical guarantees are flexible in that the underlying sampling distribution only requires approximations to the nonlinear scores that are independent of the parameter being optimized. In many cases, as demonstrated in Examples 3.3 and 3.4, such parameter-independent estimates can be obtained, making our approach less susceptible to the “chicken-and-egg” problem you mentioned. However, obtaining a solution $\theta_{S}^{\star}$ that satisfies (2) requires solving a constrained optimization problem, where the constraint set must be large enough to contain the true solution $\theta^{\star}$. Hence, while our approach relaxes the dependency of the sampling distribution on the parameters being optimized, it still relies on a constraint that implicitly assumes some prior knowledge of $\theta^{\star}$. As a result, the core issue you highlighted remains an inherent challenge. Addressing this “chicken-and-egg” problem in broader nonlinear contexts remains an open problem. We will ensure that these limitations are discussed more prominently. 
### Literature on Coresets Thank you for pointing this out, and we apologize for the oversight. Although we briefly mentioned the idea of coresets and their applicability beyond linear embeddings on page 2, we will ensure that the final version includes a dedicated related work section on coreset frameworks. ### Strengthening Theorem 3.1 Thank you for the thoughtful question. Our approach aims to construct a sampling distribution that is independent of the parameter being optimized. Accordingly, our theory allows for an approximation of $\tau_i(\theta)$ in the form of $\tau_i$, which serves as a uniform bound for all $\theta$. As you noted, this can lead to conservative guarantees, potentially requiring a larger sample size than necessary. In our experiments (Figure 2 (k, i)), we observe that importance scores become more informative as the model nears optimality. Motivated by this, we are exploring how to replace the uniform bound with one at the optimal point, $\tau_i(\theta^{\star})$, and its implications for theory. A full development is left for future work.
Multi-band Frequency Reconstruction for Neural Psychoacoustic Coding
Accept (poster)
Summary: This paper proposes multi-band frequency spectral residual vector quantization (MBS-RVQ) for quantizing latent speech across different frequency bands. Additionally, the results demonstrate the performance of zero-shot text-to-speech models using the proposed Neural Audio Codec. ## Update after rebuttal While the proposed methods could enhance RVQ, the improvement is incremental, and the codec still requires many residual layers, which is a burden for downstream tasks. Recently, many codecs, such as LLASA, have been designed with only a single layer. I could not find any advantage in terms of efficiency. I will maintain my score. Claims And Evidence: Using a multi-band audio representation is not new for neural audio codecs. Specifically, Spectral Codecs [1] proposed a multi-band spectral codec that encodes disjoint mel bands separately and quantizes them using frequency-wise vector quantization. HALL-E [2] introduced a Multi-Resolution Requantization (MReQ) method to quantize the latent representation from low to high frequencies. PyramidCodec [3] quantized the latent representation hierarchically by employing RVQ on multi-scale features. Language-Codec [4] also separates the latent representation and quantizes the parts individually. [1] Langman, Ryan, et al. "Spectral Codecs: Spectrogram-Based Audio Codecs for High Quality Speech Synthesis." arXiv preprint arXiv:2406.05298 (2024). [2] Nishimura, Yuto, et al. "HALL-E: hierarchical neural codec language model for minute-long zero-shot text-to-speech synthesis." ICLR, 2025. [3] Jianyi Chen, Zheqi Dai, Zhen Ye, Xu Tan, Qifeng Liu, Yike Guo, and Wei Xue. 2024. PyramidCodec: Hierarchical Codec for Long-form Music Generation in Audio Domain. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 4253–4263, Miami, Florida, USA. Association for Computational Linguistics. [4] Ji, Shengpeng, et al. "Language-codec: Reducing the gaps between discrete codec representation and speech language models."
arXiv preprint arXiv:2402.12208 (2024). Methods And Evaluation Criteria: The model comparison is not entirely fair because other codecs, such as Encodec, DAC, HiFi-Codec, and Mimi, were not trained using four RVQs. However, the comparison may have been conducted using only four RVQ levels for the baselines. Furthermore, while Encodec and Mimi use causal convolutional layers for streaming generation, Muffin employs non-causal convolutional layers with a greater number of layers, which makes the comparison somewhat unfair. Please provide more details on the configurations used for the other models. Theoretical Claims: This paper utilizes residual vector quantization, a well-established method. Experimental Designs Or Analyses: Please include details such as token rate, codebook size, codebook number, and frame rate, following LLASA [5]. [5] Ye, Zhen, et al. "Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis." arXiv preprint arXiv:2502.04128 (2025). Supplementary Material: . Relation To Broader Scientific Literature: . Essential References Not Discussed: [1] Langman, Ryan, et al. "Spectral Codecs: Spectrogram-Based Audio Codecs for High Quality Speech Synthesis." arXiv preprint arXiv:2406.05298 (2024). [2] Nishimura, Yuto, et al. "HALL-E: hierarchical neural codec language model for minute-long zero-shot text-to-speech synthesis." ICLR, 2025. [3] Jianyi Chen, Zheqi Dai, Zhen Ye, Xu Tan, Qifeng Liu, Yike Guo, and Wei Xue. 2024. PyramidCodec: Hierarchical Codec for Long-form Music Generation in Audio Domain. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 4253–4263, Miami, Florida, USA. Association for Computational Linguistics. [4] Ji, Shengpeng, et al. "Language-codec: Reducing the gaps between discrete codec representation and speech language models." arXiv preprint arXiv:2402.12208 (2024). Other Strengths And Weaknesses: . Other Comments Or Suggestions: An ablation study for the modified snake function is not conducted.
Typo on line 034: "12.5 kHz" might be "12.5 Hz." Questions For Authors: . Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your time and the effort you’ve put into helping us improve our presentation. **Novelty (it appears there is some misapprehension):** Our quantizer fundamentally differs from plain RVQ by employing a multi-band split directly at the latent level, guided by psychoacoustic features such as content, formant articulation, and speaker characteristics. Unlike other multi-band codecs (e.g., Spectral Codec, which splits at the input level, or HALL-E and PyramidCodec, which use multi-scale down-sampling), our approach, **as other reviewers accurately noted, performs a spectral split in the latent space. This design allows the codec to construct psychoacoustically disentangled features, with each codebook optimized specifically for a different frequency band (supported by theory and empirical observations), clearly distinguishing it from previous work**. As demonstrated in Theorem 3.1, our method not only enhances the elegance and robustness of the codec but also pushes the boundaries of performance. We believe this novel perspective is an important contribution to the community and bridges the gap to traditional codec work such as MP3. **Fairness (it appears there is some confusion, but we are happy to clarify):** Baseline alignment: we intentionally align our work with HiFi-Codec (our baseline) by matching its codebook count of 4. HiFi-Codec represents SOTA speech performance, making it a rigorous and relevant benchmark. Additionally, we retrained the baseline on the same dataset to ensure robust performance and to control for any potential data bias, thereby ensuring a fair comparison. Benchmarking other work: for our comparisons with other codecs (e.g., DAC and EnCodec), we adopt the “early codebooks” approach, consistent with recent studies such as WavTokenizer, SemantiCodec, and SpeechTokenizer. MUFFIN performs well even when compared against results reported in other papers on the same dataset, showing consistency with the literature.
Computationally: RVQ (the benchmarked quantizer) mathematically encourages each codebook to be as self-sufficient as possible. Each stage minimizes its own L1/L2 reconstruction error without “looking ahead” to future stages. This independent optimization ensures that the early codebooks in any model, whether trained with 4 or 32 codebooks, are directly comparable, since each stage is forced to capture as much residual information as possible. Lastly, Mimi is compared at its officially reported count of 8 codebooks to further validate our results. Overall, our comparative tables are constructed on a scientifically fair basis, allowing for meaningful insights rather than merely demonstrating superior performance or over-claiming our work. **Streaming:** Our streaming capability is built on a fully CNN-based model that follows Encodec’s design exactly, so it is **inaccurate to say that ours is non-causal**. Specifically, our system processes audio in small window frames (3.5 seconds) that are non-causal within each frame, i.e., using the global window context, but causal over past windows for all streaming applications. Since CNNs are fundamentally local feature extractors, they do not inherently capture global context as transformers do. This locality is advantageous in streaming applications, as it allows for more stable and consistent performance when operating under strict causal constraints. By contrast, self-attention models (e.g., WavTokenizer), although trained in a non-causal manner, must be adapted to causal computation during streaming, which can lead to instability. Our approach leverages the stability of CNNs in local processing, ensuring robust streaming even when constrained to causal operation. **Following LLASA:** We fully agree with the reviewer’s concern regarding the importance of these metrics.
Due to space constraints, we included the detailed table in Appendix F of the original submission, where we demonstrate their impact comprehensively, together with latency metrics (MACs) and model size. We will update the references raised and fix the typo in line 034. **Ablation of Snake Activation:** While the modified snake activation is not the core contribution of this work, we agree that reporting the ablation helps improve the quality of the presentation. We will include the results below in Appendix C.

**LibriTTS (test-clean)**

| Model | STFT | MEL | PESQ | STOI | UTMOS | ViSQOL |
|-|-|-|-|-|-|-|
| Added amplitude & bias (Ours) | 1.555 | 0.692 | 2.996 | 0.954 | 4.017 | 4.516 |
| Added amplitude | 1.603 | 0.744 | 2.928 | 0.945 | 3.943 | 4.448 |
| Vanilla | 1.635 | 0.760 | 2.876 | 0.940 | 3.905 | 4.409 |
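For context, a minimal sketch of the *vanilla* snake activation from the literature (the bottom row of the ablation); the added amplitude-and-bias variant is the paper's own modification, whose exact form is not reproduced here:

```python
import math

def snake(x: float, alpha: float = 1.0) -> float:
    """Vanilla snake activation: x + (1/alpha) * sin^2(alpha * x).

    alpha controls the frequency of the periodic component; learnable
    variants make it a trainable per-channel parameter.
    """
    return x + (1.0 / alpha) * math.sin(alpha * x) ** 2
```

In practice this would be applied element-wise to tensors inside the decoder; the scalar form above just shows the functional shape.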
Summary: MUFFIN is an improved RVQ-based neural audio codec (NAC) using a multi-band spectral split for each RVQ sub-layer to better disentangle different frequency bands into separate RVQ sub-layer codebooks ("psychoacoustically guided"). This enables improved bitrate allocation based on psychoacoustic studies, bridging traditional codec design (MP3, Opus) and NAC toward perception-oriented architectural design. Claims And Evidence: The claims and evidence are mostly presented adequately via quantitative and qualitative analysis, including sub-layer reconstruction and PCA analysis of each codebook. However, I would like to see an ablation study disabling the "MBS" portion of MBS-RVQ while keeping the rest of the design unchanged. In other words, it would be better to have a MUFFIN model trained only on plain RVQ (EnCodec, DAC, or HiFi-Codec) without the psychoacoustic guidance as a comparison; since the use of "MBS" is a core claim in this work, such an ablation study seems to be the most important experiment. Methods And Evaluation Criteria: The work follows conventional metrics for codec reconstruction, which makes sense. WER is a nice addition, as some recent low frame-rate codecs do not perform well on this metric even though they are good on the acoustic reconstruction metrics, including UTMOS and ViSQOL. I suggest the authors also consider SECS as a viable metric. Theoretical Claims: The theoretical claim of psychoacoustic evidence for perceptual speech characteristics is based on well-established literature; it is not necessarily a new claim, but it serves as a good reference to bridge existing theory into a neural model design. Experimental Designs Or Analyses: Since the authors retrained HiFi-Codec with the same configuration, but not the others, I think it would be good to annotate this in the evaluation result tables. Supplementary Material: I reviewed the appendix and the demo page.
Relation To Broader Scientific Literature: The findings can potentially draw the NAC community's attention to psychoacoustics, which has been well studied for decades, bringing domain-specific knowledge into neural design rather than disconnecting from an established past. Essential References Not Discussed: BigVGAN [ICLR'23] is the first work that introduced the Snake activation into the audio decompression domain (as a mel spectrogram vocoder), and in fact, it is also the first study that proposed a learnable scaling factor β (called SnakeBeta) in its official implementation. However, the current manuscript attributes this only to follow-up studies (DAC and Stable Audio). Since this paper introduces a further study of periodic activation function design, I suggest the authors include the above-mentioned original reference. Other Strengths And Weaknesses: Please see the Claims And Evidence and Questions sections. Other Comments Or Suggestions: None Questions For Authors: As mentioned in the claims section, while most of the paper is well structured, I feel the no-MBS ablation is important to add to make the paper's core claim stronger. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your dedication to carefully scrutinizing our work; it means a lot to us. **Disabling MBS:** We agree that an ablation study disabling “MBS” is important to better demonstrate its contribution to the reconstruction performance. Part of this analysis has already been presented in Table 5, where we compare MUFFIN with plain RVQ to evaluate WER, STOI, and the behavior of each individual codebook. To further strengthen our empirical evidence, we will include detailed reconstruction performance results in a new appendix, as shown below, focusing on speech reconstruction.

**LibriTTS (test-clean)**

| Model | STFT | MEL | PESQ | STOI | UTMOS | ViSQOL |
|-|-|-|-|-|-|-|
| MUFFIN | 1.555 | 0.692 | 2.996 | 0.954 | 4.017 | 4.516 |
| RVQ | 1.627 | 0.768 | 2.856 | 0.940 | 3.875 | 4.328 |
| MUFFIN (12.5 Hz) | 1.663 | 0.807 | 2.360 | 0.932 | 4.074 | 4.225 |
| RVQ (12.5 Hz) | 1.755 | 0.879 | 2.260 | 0.924 | 3.785 | 4.017 |

**LibriTTS (test-other)**

| Model | STFT | MEL | PESQ | STOI | UTMOS | ViSQOL |
|-|-|-|-|-|-|-|
| MUFFIN | 1.615 | 0.758 | 2.658 | 0.934 | 3.444 | 4.454 |
| RVQ | 1.683 | 0.810 | 2.544 | 0.917 | 3.318 | 4.268 |
| MUFFIN (12.5 Hz) | 1.725 | 0.875 | 2.086 | 0.904 | 3.560 | 4.129 |
| RVQ (12.5 Hz) | 1.863 | 0.963 | 1.940 | 0.815 | 3.399 | 3.993 |

**IEMOCAP**

| Model | STFT | MEL | PESQ | STOI | UTMOS | ViSQOL |
|-|-|-|-|-|-|-|
| MUFFIN | 1.399 | 0.675 | 2.178 | 0.806 | 1.903 | 4.000 |
| RVQ | 1.510 | 0.793 | 2.039 | 0.715 | 1.805 | 3.883 |
| MUFFIN (12.5 Hz) | 1.429 | 0.754 | 1.726 | 0.723 | 2.026 | 3.612 |
| RVQ (12.5 Hz) | 1.584 | 0.835 | 1.644 | 0.645 | 1.917 | 3.455 |

The tables above show consistent improvements from using MBS, demonstrating the effectiveness of MUFFIN (supported by the theorem). ___ **Using SECS for evaluation:** We acknowledge that our current evaluation metrics for the codec do not include human evaluations, and we agree that SECS may be a viable addition.
The metrics we adopted follow previous work (e.g., Codec-SUPERB), which argued that these objective measures provide sufficient coverage. Nevertheless, we appreciate your suggestion and have attempted it. However, given the STOI scores of the reconstruction results and the nature of our task (i.e., reconstructing existing speech rather than generating entirely new speech), it can be challenging for human evaluators to reliably distinguish subtle quality differences, as shown in the demos (especially without cherry-picking samples). A similar issue has been discussed in [1]. Therefore, we find human evaluation difficult to implement here and believe that relying on objective metrics, with their more precise distance measures, is more appropriate for evaluating the codec’s performance. However, we are also careful with our evaluations and have indeed used human evaluation for our TTS outputs, as shown in Table 6, where naturalness (MOS) and speaker similarity can be meaningfully assessed. We believe this is a common and valid concern, and we will add a new appendix section discussing the difficulties of evaluating codec performance with human listeners, while also citing SECS. [1] Varadhan, Praveen Srinivasa, et al. "Rethinking MUSHRA: Addressing Modern Challenges in Text-to-Speech Evaluation." arXiv preprint arXiv:2411.12719 (2024). ___ **Annotation of off-the-shelf models in tables and snake activation references:** This makes perfect sense, and we will update the manuscript with the provided references to enhance the credibility of our results. We deeply appreciate your effort in pointing out our weaknesses. ___ --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I find the no-MBS ablation (disabling only MBS while keeping the other details of MUFFIN identical) helpful for readers to understand the merit precisely. The acoustic metrics seem to agree with the motivation, with consistent improvements.
Can the authors present the audio reconstruction demos of this baseline using the samples from (F) Psychoacoustic Codebook Auditory Analysis in the demo? This will also help readers evaluate the claimed weak disentanglement MBS brings (vs. plain RVQ) by disabling it, and form their own opinion about its perceptual significance. To clarify regarding SECS, I meant speaker encoder cosine similarity (also noted as SIM-o in the zero-shot TTS literature) using a speaker encoder model (WavLM-TDCNN), originally proposed in VALL-E, which became one of the golden metrics (alongside CER/WER) to measure speaker similarity. This can be placed alongside S-MOS in Table 6. Since the authors have already conducted human evaluations, adding the objective SIM-o metric will strengthen the results of MUFFIN used as a speech LM tokenizer. --- Reply to Comment 1.1.1: Comment: We appreciate the opportunity to engage with your feedback once again and take your valuable comments seriously. **Demos:** We have included the ablation audio in section (F) and fully agree that providing such materials enhances the immersive experience for the reader. We invite you to be our first reader to revisit the demo page and compare the results of a plain RVQ model, which optimizes purely for residual error. In this setup, most of the information is forced into the first codebook, while the subsequent codebooks capture only minor residuals, often lacking meaningful representation. In contrast, our proposed MBS approach is inspired by psychoacoustic studies. It organizes auditory information by frequency bands, which may encourage a more natural, unsupervised separation of perceptually relevant features. This design can help the model capture semantically useful representations without relying on explicit labels, potentially easing the burden of manual annotation.
Furthermore, it supports more effective neural optimization and reconstruction, in line with psychoacoustic principles exploited in traditional codec designs such as MP3. ___ **SECS metrics:** Thank you for the clarification regarding SECS and its relation to SIM-o in the zero-shot TTS literature. Following your suggestion, we have calculated SECS using Resemblyzer and updated Table 6 accordingly:

| Systems | WER | MOS | S-MOS | SECS |
| - | - | - | - | - |
| VALL-E w/ Encodec | 21.05% | 3.91 | 3.70 | 0.5914 |
| VALL-E w/ HiFi-Codec | 32.35% | 4.00 | 4.04 | 0.5874 |
| VALL-E w/ MUFFIN | 12.20% | 4.18 | 4.19 | 0.6099 |

We appreciate your suggestion to include SECS as an objective metric for speaker similarity and acknowledge its increasing adoption in recent literature. While SECS can certainly provide complementary insights, we would also like readers to know that such metrics can be sensitive to factors such as the choice of speaker encoder, background noise, and linguistic content, which may introduce ambiguity in interpretation. These considerations explain why we prioritized human evaluations using S-MOS in the initial Table 6, which directly assess perceived speaker similarity and capture aspects often overlooked by embedding-based metrics, including prosody, speaking style, and emotional nuance, as highlighted in prior studies. **Nevertheless, we agree that combining both measures provides a more comprehensive and robust evaluation of speaker similarity, and we are happy to include SECS in our updated report.** ___ We hope that your concerns have been well addressed. If not, please let us know, as we are eager to further improve our work and strengthen its potential impact on future research.
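For readers unfamiliar with the metric: SECS is the cosine similarity between speaker embeddings of the reference and synthesized utterances (here extracted with a speaker encoder such as Resemblyzer or WavLM-TDCNN). A minimal, illustrative sketch, assuming the embeddings have already been computed; the function name is ours:

```python
import math

def secs(emb_ref, emb_syn):
    """Speaker-encoder cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(emb_ref, emb_syn))
    norm_ref = math.sqrt(sum(a * a for a in emb_ref))
    norm_syn = math.sqrt(sum(b * b for b in emb_syn))
    return dot / (norm_ref * norm_syn)
```

Identical speaker embeddings give a score of 1.0; unrelated speakers typically score much lower, which is why the table reports values around 0.6.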
Summary: The paper introduces MUFFIN, a neural psychoacoustic codec leveraging Multi-Band Spectral Residual Vector Quantization (MBS-RVQ) and a modified snake activation function. By decomposing latent representations into psychoacoustically motivated frequency bands, MUFFIN optimizes bitrate allocation and achieves state-of-the-art audio reconstruction quality across speech, music, and environmental sounds. Extensive experiments demonstrate superior performance over existing codecs (e.g., HiFi-Codec, Encodec) in both standard and high-compression settings, with applications in zero-shot text-to-speech synthesis. Claims And Evidence: Yes, both the experimental data provided by the author and the audio provided on the homepage demonstrate the effectiveness of their method. Methods And Evaluation Criteria: Yes Theoretical Claims: I haven't checked all the proofs because I'm not very familiar with this field and haven't found any errors yet. Experimental Designs Or Analyses: I checked the main experiments in the paper and refer to the section on Other Strengths and Weaknesses for detailed opinions. Supplementary Material: I didn't review the supplementary material, but I checked the provided homepage. Relation To Broader Scientific Literature: The key contributions of MUFFIN are deeply rooted in and extend the broader scientific literature on neural audio coding, psychoacoustics, and multi-band signal processing. By introducing MBS-RVQ, leveraging psychoacoustic principles, and proposing novel architectural improvements (e.g., modified snake activation), MUFFIN addresses longstanding challenges in the field and sets a new standard for high-fidelity, efficient audio compression. Its applications in zero-shot TTS and potential integration with LLMs further underscore its relevance to cutting-edge research in speech and audio processing. 
Essential References Not Discussed: I am not deeply versed in this field, but I believe the author has provided a fairly comprehensive citation of relevant work. Other Strengths And Weaknesses: Strengths: MBS-RVQ effectively disentangles speech attributes (content, speaker identity) into distinct codebooks, aligning with psychoacoustic principles. This is a significant advancement in neural audio coding. The Lipschitz continuity analysis of the encoder and ablation studies (e.g., t-SNE visualizations, codebook-specific reconstructions) validate the design choices. MUFFIN outperforms baselines across metrics (PESQ, STOI, UTMOS) and datasets (LibriTTS, IEMOCAP, GTZAN), particularly at high compression rates (12.5 Hz). The codec’s efficiency (lower MACs than HiFi-Codec) and compatibility with LLMs (via tokenized representations) highlight its potential for real-time and generative applications. Weaknesses: The ESC-50 dataset (3 hours) is small compared to speech/music datasets, raising concerns about generalizability to environmental audio. Automated MOS (UTMOS/ViSQOL) is used instead of human evaluations, which are critical for perceptual quality claims. While MACs are reduced, latency and real-time performance are not quantitatively compared to streaming-focused codecs like AudioDec. Although misuse risks (e.g., deepfakes) are acknowledged, concrete mitigation strategies are absent. Other Comments Or Suggestions: - Include human subjective evaluations (MOS) to strengthen perceptual quality claims. - Expand environmental sound experiments with larger datasets (e.g., AudioSet). - Discuss latency benchmarks relative to real-time codecs (e.g., OPUS, AudioDec). - Clarify ethical safeguards (e.g., watermarking synthesized audio) in the impact statement. Questions For Authors: - How does MUFFIN handle non-stationary or transient sounds (e.g., percussive elements in music), given the focus on speech-centric psychoacoustics? 
- Could the environmental sound performance be improved with a larger dataset, or is the current approach inherently biased toward speech/music? - What are the practical limitations of the 12.5 Hz variant in real-time streaming, given the increased downsampling rate? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your constructive comments and thoughtful concerns, which help to improve the impact of this work and spark further discussion. **Using a larger audio set:** We agree with the valid concern regarding the size of the environmental audio dataset, and we welcome further discussion on this topic. Our findings indicate that integrating both speech and music data helps to overcome the low-resource setting while achieving SOTA environmental audio performance, even with a smaller set, compared to various off-the-shelf models (DAC, EnCodec) trained on much larger collections (see Table 4). This can be attributed to our training setup, which follows existing work in using short 1 s segments that capture brief vocal or instrumental passages. These segments tend to be somewhat similar in their audio characteristics to some environmental audio, thereby reducing the reliance on distinctively large audio datasets. Moreover, our interest is in the vocal domain, where psychoacoustic features such as vocal timbre and articulation are the highlight of the neural psychoacoustic codec. Thus, we do not consider a larger environmental audio dataset in this work, in contrast to speech and music (which cover singing vocals). **Human evaluation and latency:** We agree that the absence of human evaluations is a common concern. To address this, we will add a dedicated subsection in Appendix F explaining how the objective metrics, widely adopted in the literature, correlate with human perceptual quality, thereby demonstrating that our report is self-sufficient without human evaluation of the codec. Further detailed clarifications are also provided in our response to Reviewer grYU (using SECS for evaluation). Similarly, our latency benchmarks, presented in Appendix F, are based on MACs and model parameters, which provide a good objective measure of inference time while normalizing for factors such as GPU specifications. **Transient audio:** We appreciate the reviewer’s thoughtful observation.
While psychoacoustic studies have primarily focused on speech, we agree that applying similar analysis to non-stationary or transient sounds, such as those in music, is both important and intriguing. To explore this, we extended our decomposition approach to a variety of musical genres, including singing, classical, jazz, and symphonic music. Consistent with the psychoacoustic framework used in speech analysis, we observed that: • Codebook 1: primarily captures vocal content and coarse rhythmic beats. • Codebook 2: emphasizes vocal clarity and mid-frequency information. • Codebook 3: encodes pitch details reflective of the singer’s unique characteristics. We have updated our demos to include samples that support these observations. Interestingly, instrumental content does not clearly separate across Codebooks 2 and 3, suggesting that our psychoacoustic-guided representation is particularly effective in disentangling vocal attributes (speech and singing), but less so for purely instrumental channels. This finding reinforces the theoretical value of psychoacoustic principles for modeling vocal properties, an area that remains underexplored in neural codecs. While applying this framework to instrumental music remains challenging, we believe this opens new research directions. Further investigations, beyond the scope of the current study, will be discussed in our future work section to encourage continued exploration of this promising line of research in appendix. **Practical limitation of 12.5 Hz:** Achieving high compression rates in audio codecs often challenges the preservation of the full spectrum of human hearing, potentially leading to muffled sounds or perceptible artifacts (especially so for streaming with reference to MiMi's performance). To address this, integrating psychoacoustic models can enhance reconstruction quality by optimizing compression across multiple frequency bands, focusing on perceptually significant components. 
However, implementing such models typically necessitates more complex and deeper neural networks to effectively quantize and encode the nuanced psychoacoustic information without significant loss. This increased complexity can lead to larger model sizes, which may offset the benefits of efficient compression by demanding more computational resources and storage capacity. Therefore, a careful balance must be struck between leveraging psychoacoustic properties for improved audio quality and managing the trade-offs related to model complexity and compression efficiency. This could spark more research work to investigate in this area. --- Rebuttal Comment 1.1: Comment: Thank you for the author's reply. I will maintain my score.
Towards a Formal Theory of Representational Compositionality
Accept (poster)
Summary: This paper introduces a notion of compositionality grounded in algorithmic complexity. The authors propose to treat compositionality employing Kolmogorov complexity related to representations and to a discrete language that is used to make the conversion. The contribution rests mainly in bridging how such a measure can capture compositionality, grounding many observations from intuitions in cognitive science and AI, and contrast it to topological similarity. The idea is very appealing and makes sense. The authors also present some empirical investigation of how this can be used to characterize compositionality in both synthetic and real-world representations. Claims And Evidence: The main claim is the proposal of a theory of representation compositionality that can capture intuitions from several previous works. This rests in using Kolmogorov complexity and relating representations to a sort of composition of symbols from a language. The examples and the long discussion with related literature provide evidence that Kolmogorov complexity can be useful, and overall, the idea is simple enough to adapt to different research areas. Based on the discussion and on the results, it is a particular interesting proposal. Methods And Evaluation Criteria: Experiments on natural language are limited to one model and one dataset. Theoretical Claims: I found the theory pretty clear and the authors did a good work to introduce it. Some limitations are discussed by the authors, concerning the representations Z and the language W. - **Z is continuous, W is discrete.** This is one limitation of the proposed approach. The theory of compositionality the authors propose is grounded on discrete symbols and cannot capture the complexity of real numbers. This is sensible, especially if representations capture factors of variations like the continuous values of the color of an object, its size, or whatever. 
It looks like this creates a complication to treat it in this framework that can only be addressed by leveraging quantization. Moreover, Kolmogorov's complexity does not easily adapt to continuous numbers. - **The measure of compositionality requires more intuition.** I found the examples clarifying but it is not clear how values of C can be interpreted. Specifically, while C=1 is the lowest possible value, that corresponds to trivial cases, what is the meaning associated to higher values of C and is there an upper limit for C? - **Usefulness to ood generalization and combinatorial generalization?** The idea of compositionality is especially intriguing when related to ood generalization. Is there a relation with representation compositionality? This aspect is only mentioned but it is worth expanding. Experimental Designs Or Analyses: The synthetic experiments are confirming the theory. Since data are generated according to the theory, I am not particularly surprised by the representation compositionality but it is interesting to see the topological counterpart not behaving as expected. As I am not an expert in this field, I wonder if other measures of compositionality can be used in those experiments. The algorithmic side of the paper should be more central and is now hidden in the supplementary, but it would be beneficial to know how to measure C. This can be of help with new real world experiments. Can the authors comment on the values obtained on different languages? What is a.u.? As far as I understood, C=1 reveals low compositionality and all languages reveal pretty similar representation compositionality (around 1 and 1.25)? Does this mean that the model does not attain any compositionality? Supplementary Material: I checked the examples and experimental design of synthetic experiments. Relation To Broader Scientific Literature: There is an interesting link to _combinatorial generalization_ [1]. 
In that problem, based on a limited set of observations, e.g., over variations of a few object factors, the model is tested on whether it generalizes to new, unseen combinations of these factors. It would be interesting to see if there is a relation between this generalization and representation compositionality. [1] The role of Disentanglement in Generalisation, Montero et al., ICLR (2021) Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: I suggest the authors include some examples from the supplementary material in the main text, even in a shorter version. They are quite helpful in grasping what C is measuring. Questions For Authors: - **I have asked some questions in the previous sections.** - **Relation to interpretability.** It would be interesting to connect this notion of compositionality and the use of language W to the current focus on probing the interpretability of neural/LLM representations. Many works consider the so-called "Linear Representation Hypothesis", whereby concepts (or symbols) are linearly encoded in model representations [2,3]. If these interpretable concepts are seen as elements of a language, is it possible to measure some form of representation compositionality based on that language? Do the authors have an intuition on this matter? [2] The Linear Representation Hypothesis and the Geometry of Large Language Models, Park et al., ICML (2024) \ [3] All or None: Identifiable Linear Properties of Next-token Predictors in Language Modeling, Marconato et al., AISTATS (2025) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the constructive review. **Limitation:** $Z$ **is continuous,** $W$ **is discrete** While continuous values are problematic for Kolmogorov complexity, this may not be a significant limiting factor in practice: for instance, tokenization methods for continuous data (e.g., VQ-VAE) often exhibit surprisingly low information loss. We take the point, however, that some representation attributes (e.g., “size”) might be inherently continuous and difficult to compress using a discrete $W$. We leave open the possibility that our definition can be extended to drop the requirement of a discrete $W$. **Meaning associated to higher values of** $C$ It is easiest to get some intuition by fixing the denominator (which depends on $f$ and a reconstruction error) and imagining how the numerator $K(Z)$ might be scaled. A concrete example comes from disentanglement: each word in the vocabulary has a vector embedding encoded in a lookup table of $f$, and $f$ simply concatenates or adds these vector embeddings. If we consider representations modeled by increasingly longer sentences, $f$ remains identical but $K(Z)$ keeps increasing due to increases in $-\log_2 p_w(W)$ (longer sentences have higher entropy). This is precisely what is occurring in Fig. 2b leftmost plot. The meaning of a higher $C$ is therefore that expressivity (loosely, the number of different things that can be represented) increases, but the semantics $f$ of how things are represented through parts stays the same. In theory, $C$ could be unbounded, but this requires the length of sentences to approach $\infty$, which is unlikely to be an optimal compression scheme for a representation. **Relationship to OoD compositional/combinatorial generalization?** We believe there is a relationship—we discuss this in Appendix E. Testing the empirical relationship between the two is an interesting direction for future work.
We also give some hypotheses about how to specify inductive biases for compositional representation (which could improve OoD compositional generalization) in Appendix F. **Algorithmic approaches for measuring** $C(Z)$ **are in Appendix B, but should be more central** Given that we did not apply the methods in Appendix B (this is a direction left for future work, which we were transparent about in the Experiment and Conclusion sections), we thought it appropriate not to include it in the main text. The current paper focuses on validating $C(Z)$ in synthetic settings where the optimal compression of $Z$ is known from a ground-truth generative model (lines 238-246 RHS), and otherwise computes $C^L(Z)$ on real data (which is easier than $C(Z)$ because $W$ is given, lines 360-365 LHS). Our methods for estimating $C^L(Z)$ are discussed in the main text, Section 4.2. **Values for $C^L(Z)$ obtained on different natural languages**

> What is a.u.?

Apologies, this should be in the paper: a.u. stands for arbitrary units. To compute $C^L(Z)$, we need to estimate $K(Z)$ in the numerator. For the emergent languages experiment in Section 4.2 this was simple to do (lines 352-355 RHS), but for the natural languages it is difficult. Instead, we make the (commonly-held) assumption that all languages are equally expressive in their abilities to express ideas and identify referents, which translates to equal $K(Z)$ (lines 431-435 LHS). The units are therefore “arbitrary” in Fig. 4 because we use an arbitrary constant numerator in place of $K(Z)$ that is shared among the different languages. We used the $K(Z|W,f)$ obtained from German as this constant, which also explains why German has a $C^L(Z) = 1$ in these arbitrary units: it does not mean that German obtains the trivial lowest possible compositionality, because we don’t know the true $K(Z)$ for German or the other languages.
While assuming that all languages have equal $K(Z)$ simplifies analysis, it is a limitation that only allows us to compare the *relative* $C^L(Z)$ for different languages without knowing their absolute values. **Suggestion: include some examples from the supplementary material in the main text to help grasp what C is measuring** Thank you for the suggestion—we will do this. **Interpretability and linear representation hypothesis** Thank you for the great question—we have indeed thought about this! Our hypothesis is that DNNs often linearly represent these concepts because this in fact *maximizes* compositionality according to our definition: $f$ is simple because it need only store and sum word embeddings. However, we might find that far more concepts are compositionally represented and interpretable if we relax the strong assumption of linearity and allow for semantics $f$ that are more flexible, yet still simple. Interesting future work inspired by our definition could for instance train a flexible DNN as $f$ to predict LLM activations from latent concepts, and then use the same methods we applied in Sections 4.2 & 4.3 to estimate $K(f)$ and compositionality. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions and for clarifying my perplexities about the measure of $C$. > The units are therefore “arbitrary” in Fig. 4 because we use an arbitrary constant numerator in place of $K(Z)$ that is shared among the different languages. We used the $K(Z|W,f)$ obtained from German as this constant, which also explains why German has a $C^L(Z) = 1$ in these arbitrary units: it does not mean that German obtains the trivial lowest possible compositionality, because we don’t know the true $K(Z)$ for German or the other languages. While assuming that all languages have equal $K(Z)$ simplifies analysis, it is a limitation that only allows us to compare the relative $C^L(Z)$ for different languages without knowing their absolute values. 
This is not clear to me, and it should be mentioned in the text. Can you expand on this? Instead of arbitrary units, is it the case that you are considering "relative units"? As a minor stylistic note, German should appear as the first column if it is the relative term. This also suggests a limitation in measuring the compositionality of languages in absolute terms. Can you further elaborate on this implication? It suggests that we do not know how models encode linguistic structure in an absolute sense, but only that these models achieve a higher representation compositionality for Japanese compared to German. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our paper and our reply in-depth, and for recognizing our contribution.

> [The "arbitrary units" in Fig. 4] is [still] not clear to me, and it should be mentioned in the text. Can you expand on this?

We will take care to significantly clarify this point in the text. The crux of the issue is that it is difficult to estimate $K(Z)$--which appears in the numerator of the $C^L(Z)$ term we are trying to measure--for arbitrary data such as natural language representations. This would require the application of sophisticated compression methods that attempt to tightly bound $K(Z)$. While this is by no means impossible (numerous powerful compression algorithms exist in ML, both variational and prequential in nature), we instead opted to make the simplifying assumption in this current experiment that all natural languages have the same $K(Z)$. As described previously, we used the $K(Z|W)$ obtained from German as this constant numerator in $C^L(Z)$; crucially, this numerator is "arbitrary" in the sense that we don't know what the true numerator $K(Z)$ actually is.
We emphasize that if the assumption of constant numerator $K(Z)$ is correct (i.e., languages are equally expressive, which linguists believe they are), then knowing this numerator would still be essential for computing the absolute score $C^L(Z)$, but we do not need to know it to compare the *relative* $C^L(Z)$ between languages.

> Instead of arbitrary units, is it the case you are considering "relative units"?

Yes, precisely. In fact, we will switch the naming to "relative units" in the text and figures as this is more clear.

> As a minor stylistic note, German should appear as the first column if it is the relative term.

Good point, we'll make that change.

> This also suggests a limitation in measuring the compositionality of languages in absolute terms. Can you further elaborate on this implication?

It is indeed a limitation of our particular experiment, which did not attempt to estimate the $K(Z)$ of natural languages for the sake of simplicity. It is not, however, a fundamental limitation of the measure $C^L(Z)$, since there do exist methods for estimating $K(Z)$ (i.e., practical compression algorithms).

> It suggests that we do not know how models encode linguistic structure in them, in an absolute sense, but that these models just achieve higher representation compositionality of Japanese compared to German.

Precisely. We do not know the expressivity with which models encode linguistic structure $K(Z)$ in the absolute sense, but assuming that this expressivity is roughly equivalent between languages as linguists believe it is, we know that the models represent Japanese text with relatively more compositional structure than German text. We hope that this discussion has clarified our results on natural language presented in Fig. 4.
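To make the "relative units" computation discussed in this thread concrete, here is a minimal sketch. The per-language code lengths below are hypothetical placeholders, not values from the paper: each language's relative $C^L(Z)$ is a shared constant numerator divided by that language's estimated denominator, with German's own denominator reused as the constant so that German lands at exactly 1.

```python
# Hypothetical denominators (in bits) for each language; these numbers are
# illustrative placeholders, not measurements from the paper.
denominators = {"German": 120.0, "English": 115.0, "French": 112.0, "Japanese": 96.0}

# The true numerator K(Z) is unknown, so German's denominator is reused as a
# shared constant: the resulting scores are only meaningful relative to one
# another ("relative units"), not as absolute compositionality values.
shared_numerator = denominators["German"]

relative_C = {lang: shared_numerator / d for lang, d in denominators.items()}
```

Under this convention German scores exactly 1 by construction, and a language with a smaller denominator (better compression given its sentences) scores above 1; only the ratios between languages carry meaning.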
Summary: The paper builds a more rigorous definition of compositionality, compared to the ones proposed by linguists. It claims to be the first to do so, although this may be debatable. This seems to be the first serious attempt based on Kolmogorov complexity, and it is more agnostic of the learning model architecture than its predecessors, at the cost of being hard to compute and often even to approximately bound. However, the authors cover a few examples of toy model applications, and they delineate a plan for future research in order to further "concretize" this notion. Claims And Evidence: I think that the claims are mostly correct and well defended. I don't agree that this is exactly the first attempt at rigorously defining compositionality (this sounds like an unbearably bold statement), and the authors cite a few works that have already tried general enough (in my view) approaches. However, I agree that the work is new and worth reading. As a minor point, the authors don't discuss any alternatives to Kolmogorov complexity as a basis for quantifying compositionality. This is a gap in the justifications which could lead to the notion being questioned in the long term. However, right now, most people agree that KC is "the" canonical measure. I feel that the presentation of previous theories of compositionality is oversimplified, and therefore several statements misrepresent them. The paper states that "structure" is not defined in the usual definition, then goes on to immediately contradict that by hinting at the correct statement, that structure is actually defined under assumptions on sentence parsing/type/semantics. Furthermore, even if the abstract formulas do not have many assumptions, still e.g. the core setup summarized in Fig. B.1 involves several strong assumptions without which the actual value of C(Z) is not computable or realistic (as mentioned after Fig. B.1).
But the paper's abstract claims that the new quantity is (a) quantitative and (b) conceptually simple (meaning it has no strong structural assumptions, which seems to be the main difference from previous works). It seems that there is a strong trade-off between (a) and (b): when we want (a) we have to give up (b) and impose structural assumptions, and when we want (b) we have to give up (a). This trade-off is fine for me, but it should be highlighted for transparency. I would like to know the view of the authors on whether this trade-off is an inherent problem of KC-based metrics, or of the notion of compositionality itself: can we get rid of the trade-off? The related claim that "sentences are simply strings" is also an oversimplification, which may lead to confusion between semiology and computer science approaches. Better to say "we consider sentences as simply strings". This simplification (forgetting that sentences are uttered by intelligent agents in order to communicate) is related to the simplification of treating KC as an unquestioned measure of compositionality. Using KC is fine and valuable, but the fact that it is a modelling simplification should at least be hinted at, particularly in the parts where the paper refers to modelling "brains" and human interactions at several places in the document. Methods And Evaluation Criteria: The datasets used are a bit on the toy model side. It would be useful to have a more thorough discussion of limitations due to the hardness of computing KC in practice. The authors show some practical improvements like using prequential coding, VQ-VAE ideas, and others, but no experiment or comparison is present to show what can help and where. Why do you decide that "prequential coding is the answer"? I can't find where the use of this method is justified in the paper.
I think that the discussion about heuristics behind the blue curves in figure 2 is the main validation of the metric, and the rest of the experiments are not very enlightening:
- There is a graph about how compositional different languages are according to this metric. How do we know that this is realistic? Is there no other benchmark or consensus about e.g. Japanese being slightly more compositional than the others? In other words, I don't see how the experiment about actual languages is relevant to validating this particular metric.
- It is not clear at all that TopSim is the correct counterpart of this particular complexity measure: why is that so? Also, what does the comparison to TopSim say, in terms of evaluation? This differential in behavior is not discussed, so it's not clear why there is a graph comparing the metrics. The authors limit themselves to a series of heuristic descriptions of their own metric, and no description of "what TopSim did wrong". This being the case, why did they put the graph of the TopSim metric in the paper?

Theoretical Claims: I don't think that there are theorems/proofs in the mathematical sense. I checked the calculations and they are OK. Experimental Designs Or Analyses: See "methods and evaluation criteria". Another point is that in the paragraph on "vocabulary size" (line 302-304 more or less) the text says there should be a comparison between sentence length and vocabulary size. But no graph highlights that scaling/comparison. It would be good to validate the sentence with some data. Supplementary Material: Yes, I reviewed it all. Relation To Broader Scientific Literature: I think that this paper will give a good starting benchmark for the line of future research hinted at by the authors. In particular, it invites the community to improve upon the computability/approximability of abstract KC-based metrics. Essential References Not Discussed: I didn't find anything concrete to point out.
Other Strengths And Weaknesses: Nothing comes to mind. Other Comments Or Suggestions: line 057: compressed "more easily" or just "more"? line 162: the \mathcal N notation was not introduced, so one has to wonder for a bit and then notice that you talk about Gaussians around that point. Maybe say it's a Gaussian density before using the symbol. line 345-348: the description of "depth" is too short, I couldn't understand it fully. Maybe expand a bit. Questions For Authors: See the parts "claims and evidence" and "methods and evaluation criteria". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the constructive review. **Tempering claims on novelty and highlighting limitations** Reviewer JsGE made a similar comment—in retrospect, we agree. While we believe that existing definitions of compositionality suffer from pitfalls that ours addresses, it is unfair to claim that ours is the first and premature to claim that it is the “true” or most useful one. We plan to edit our paper accordingly:
1. Remove premature claims in the abstract and elsewhere that frame our definition as uniquely “correct” or “the first formal definition”.
2. Add a new section “Comparisons to prior work” that systematically points out advantages (e.g., generality, no assumptions on the structure of $f$) and limitations. This should help provide a neutral framing of our specific contributions.
3. Add a new section discussing “Limitations”, including challenges the reviewer has mentioned such as:
   1. The difficulty of estimating KC.
   2. The need for “strong assumptions” in practice to estimate $C(Z)$ as in Fig. B.1.

**Fundamental tradeoff between (a) easily estimating complexity and (b) making strong structural assumptions?** There is indeed a fundamental tradeoff. KC requires an uncomputable search over all programs, so compression schemes impose constraints on the search space: the more constraints, the smaller the search space. We will highlight this in the paper. However, we also emphasize that the advantage of an abstract definition like ours is precisely that it allows estimators to make different tradeoffs. To some extent DNNs mitigate the tradeoff: they make few structural assumptions, but are easy to train and have inductive biases for simplicity resulting in excellent compression (Wilson 2025, Goldblum 2023). This is why we advocate for using DNNs in Appendix B.
**Oversimplification of the “intuitive” definition?**

> The paper states that "structure" is not defined in the usual definition, [but] structure is defined under assumptions on sentence parsing/type/semantics.

Our point is that the intuitive definition refers to the structure of a complex expression as if it were provided. Of course this definition is alluding to grammars, but it does not specify how such grammars are inferred for a given representation. Our definition addresses this because it precisely defines “structure” as an intrinsic property of the representation through the semantics $f$ that optimally compress it.

> "sentences are simply strings" also is oversimplifying […] Better say "we consider sentences as simply strings"

Agreed—we will change to your wording. **Alternatives to Kolmogorov complexity** Better notions of complexity might be developed, but for the moment KC is indeed “the canonical measure”. We will better justify it, especially for unfamiliar readers. We note the presence of additional introductory material for KC in the supplement. **Why use prequential coding to measure KC?** We mention it “provides good estimates in practice”. Empirically, it provides tighter bounds on the KC of DNNs than other methods (Blier 2018), and we are using DNNs in our experiments to parameterize $f$. If other compression schemes with better bounds become available, they should be used instead. We will clarify this in the text. **Validating the natural language results?** There is no consensus on which natural languages are more/less compositional, and it is in fact a longstanding debate in linguistics. The purpose of this experiment is not to validate our compositionality metric, but rather to demonstrate it as a tool to help resolve this debate. This is explained in lines 286-296 LHS. Our results suggest that these languages are roughly equally compositional (limitations in Appendix J).
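The prequential coding idea referenced in this rebuttal can be illustrated in its simplest form. The sketch below uses a Laplace-smoothed categorical predictor over symbols — far simpler than the DNN-based estimators the rebuttal refers to, and purely illustrative: the code length charges each symbol -log2 of the probability the model assigned to it *before* seeing it, then updates the model.

```python
import math
from collections import Counter

def prequential_code_length(symbols, alphabet_size):
    """Total code length in bits when each symbol is encoded with the
    predictive distribution of a Laplace-smoothed categorical model
    fit online to the previously seen symbols."""
    counts = Counter()
    bits = 0.0
    for t, s in enumerate(symbols):
        p = (counts[s] + 1) / (t + alphabet_size)  # Laplace (add-one) smoothing
        bits += -math.log2(p)
        counts[s] += 1
    return bits

# A sequence dominated by one symbol yields a much shorter code (a tighter
# upper bound on its complexity) than a balanced sequence of the same length
# under this memoryless model.
regular = prequential_code_length("a" * 9 + "b", alphabet_size=2)
balanced = prequential_code_length("ab" * 5, alphabet_size=2)
```

The same accounting carries over to richer model classes: replacing the categorical counter with a trainable predictor (e.g., a DNN) gives the prequential bounds the rebuttal cites, where more structure in the data means fewer total bits.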
**Comparisons to TopSim** We compare to TopSim because it is frequently used in the literature as a measure of compositionality (we cite several works). It is a valid competing metric because, like $C^L(Z)$, it depends on the pair $(W, Z)$. While we do not investigate “what TopSim did wrong” and why in-depth, we include it simply to show how it, as a competing metric, gives more counter-intuitive results. Whenever it deviates from the results of our definition, we flag this (e.g. that it gives nonsensical results for the natural language experiment, namely that Japanese is negatively compositional). **Other comments**

> line 302-304 says there should be a comparison between sentence length and vocabulary size. It would be good to validate the sentence by some data

We will add a new result showing these curves for different sentence lengths.

> line 057: compressed "more easily" or just "more"?

“more”—we will change it.

> line 162: the $\mathcal{N}$ notation was not introduced […] say it's a gaussian density before

We will do that.

> line 345-348: the description of "depth" is too short

It is better defined in Appendix H.2 with a concrete grammar as an example. We will define it more clearly in the main text. --- Rebuttal Comment 1.1: Comment: Thank you for the response, it is along the lines that I was expecting, and confirms my initial understanding of the paper. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our paper and our reply in-depth, and for recognizing our contribution.
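For readers unfamiliar with the TopSim baseline debated in this thread: it is commonly computed as the Spearman correlation between pairwise distances in meaning space and pairwise distances in message space. Below is a minimal self-contained sketch (stdlib only, with a toy two-attribute language) — this is the standard formulation, not the paper's implementation, and the toy data are invented for illustration.

```python
from itertools import combinations

def _ranks(values):
    # Average ranks, handling ties (needed for a proper Spearman correlation).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def edit_distance(a, b):
    # Standard Levenshtein distance between two message strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def topsim(meanings, messages):
    # Spearman correlation between pairwise Hamming distances of meanings
    # and pairwise edit distances of the corresponding messages.
    pairs = list(combinations(range(len(meanings)), 2))
    d_meaning = [sum(u != v for u, v in zip(meanings[i], meanings[j])) for i, j in pairs]
    d_message = [edit_distance(messages[i], messages[j]) for i, j in pairs]
    return spearman(d_meaning, d_message)

# Toy language over two binary attributes: a perfectly compositional naming
# scheme (one symbol per attribute value) versus a scrambled assignment.
meanings = [(0, 0), (0, 1), (1, 0), (1, 1)]
compositional = ["xp", "xq", "yp", "yq"]
scrambled = ["xp", "yq", "xq", "yp"]
```

Because TopSim only asks whether the meaning-to-message map preserves distances, the compositional naming scheme scores 1 while the scrambled one scores much lower; it says nothing about the complexity of the semantics, which is exactly where it diverges from the $C^L(Z)$ measure discussed above.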
Summary: This paper argues that a quantitative measure of compositionality, beyond the traditional colloquial definition, is needed for a more precise understanding of the concept. The authors propose a measure of representational compositionality based on optimal compression using Kolmogorov complexity. Specifically, the measure is defined as the ratio of the Kolmogorov complexity of the representation to the Kolmogorov complexity of a compositional function that maps the underlying structure and parts to the representation. The authors show how the intractable Kolmogorov complexity may be approximated in practice and demonstrate through simulation studies that meaningful estimates of the proposed measure are possible. Their numerical experiments show that the measure aligns with intuitive expectations of compositionality, capturing dependencies on sentence length, vocabulary size, representational dimensionality, and disentanglement. Claims And Evidence: The authors propose a quantitative measure of representational compositionality as the ratio of the Kolmogorov complexity of a representation to the Kolmogorov complexity of the representation given a set of sentences. While the proposed framework is well-motivated and provides an interesting perspective on compositionality through the lens of optimal compression, there are some aspects that could benefit from further clarification and support. A key concern is that the framework seems to address the question of whether an efficient compositional code can *generate* a representation, rather than measuring *how compositional* a given representation is itself. This distinction suggests that the measure may reflect the potential for compositional generation rather than the intrinsic compositionality of a representation. To put it as a question: if we want to study compositionality, should we aim to measure the "compositionality" of $W$, $f$, or $Z$?
It seems more natural to ask if a (or what) representation $W$ can be composed into an expressive $Z$ by $f$, rather than whether $Z$ can be generated by some compositional code. The claims that the framework addresses issues with the colloquial definition of compositionality (such as expressivity, compression, and the intrinsic nature of constituent parts) are intuitive and reasonably supported by the definition. However, claims about structure-preserving maps and modularity are weakly supported and not directly justified by the definition. Crucially, none of the five claims are validated or examined through numerical experiments. The authors state that $p_w$, $W$, and $f$ are "not free parameters" because they are intrinsic to the representation in that they best compress $Z$. However, in practice, the measure is highly sensitive to, and determined by, the specific choice of $f$ and $W$ (even when they are the result of a training procedure). It is unclear how the measure would generalize beyond synthetic datasets with an underlying generative model — and even in those cases, it’s plausible that some choices of $f$ and $W$ could lead to better representational compositionality scores than the actual generative model itself. Or in other words, the Kolmogorov complexity remains intractable, even when it is decomposed into data and a model. Lastly, the claim that the absolute value of the measure is interpretable raises some questions. The authors state that the lowest possible compositional score is 1 in the emergent language experiment, representing an arbitrary mapping from sentences to representations. However, in the natural language study, German reaches a compositionality score of ~1 — which seems counterintuitive, as natural languages are typically considered compositional. This suggests that there is a strong dependency between the calculation of $K(f)$ and $K(Z|W,f)$ and the interpretability of the absolute values of C(Z).
Methods And Evaluation Criteria: The authors evaluate the proposed measure across four distinct tasks: a synthetic lookup table, a context-free grammar task, an emergent language task from multi-agent training, and a natural language task using a multilingual large language model (LLM). These evaluation criteria are reasonable for demonstrating the versatility of the proposed measure across different domains. The authors show that the measure aligns with intuitive expectations of compositionality, capturing relations between compositionality and factors such as sentence length, vocabulary size, representational dimensionality, and disentanglement. Comparing the measure to topological similarity also seems reasonable, as it provides a useful benchmark for assessing structural relationships between representations. No simulations are provided to justify the statements about expressivity, compression, the intrinsic nature of constituent parts, structure-preserving maps, and modularity. No simulations or benchmarks are provided to show the limitations of the measure, explicitly providing insights into when it fails, when it contradicts our intuitions, and when the interpretability of absolute values breaks down (see above). Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: The authors did not provide any supplementary material, and the code is not made available. However, the appendix effectively clarifies the theoretical and practical underpinnings of the proposed measure. I especially appreciated the sections on Background on Kolmogorov complexity and Examples of compositional representations, which were informative and well-explained. Additionally, the experimental details and hyperparameters provided in the appendix are thorough and detailed, contributing to the overall clarity and reproducibility of the work.
Relation To Broader Scientific Literature: Compositionality, the idea that complex expressions derive meaning from their parts and structure, has its roots in cognitive science and linguistics. Chomsky’s theories on language productivity (Chomsky, 1956) and Fodor’s Language of Thought hypothesis (Fodor, 1975) highlight the systematic nature of thought and language, however, without providing a definition of compositionality that goes beyond intuition (Szabo, 2022). A quantitative theory of compositionality that goes beyond symbolic algorithms and provides a concrete measure is particularly important, as artificial neural networks like LLMs, as well as brains, capture and process compositional structures through abstract, non-symbolic representations. The presented paper builds on these foundations by proposing a Kolmogorov complexity-based measure of representational compositionality, linking insights from cognitive science, AI, and neuroscience. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Overall, the paper makes a strong case for the potential of the proposed measure. However, it would greatly benefit from a dedicated Limitations section that clearly and explicitly outlines the limitations of the proposed measure and highlights potential pitfalls through additional simulation studies. This would provide a more balanced perspective and help clarify the conditions under which the measure is / is not reliable and interpretable. Other Comments Or Suggestions: The following sentence is difficult to follow and could benefit from a rewrite: "But where do these expressions and their constituent parts come from when considering neural representations themselves such as in the Language of Thought hypothesis, where thoughts are encoded in distributed patterns of neural activity?" The idiom "he kicked the bucket" may be less familiar to some readers; consider using a more widely known example.
The sequence should consistently follow the same order, i.e., either K(p) + K(X|p) or K(X|p) + K(p). Typo in section 4: "we will first illustrate [...] where where [...]". Grammar: "(e.g., that it is linear, a hierarchical, etc.)". Questions For Authors:
1. In practice, do we want to measure the "compositionality" of $W$, $f$, or $Z$? It seems more natural to ask how a given representation $W$ can be composed into a meaningful $Z$ with $f$, rather than whether $Z$ can be generated by some optimal compositional code. It seems like there might be a distinction between measuring compositionality of the representation versus the process that generates it. Could you shed some light on this?
2. How, why, and when do you expect the measure to fail or produce misleading results?
3. In Section 4, what do you mean by "disentanglement," and how is it measured?
4. Could you clarify the interpretation of a compositionality score? When does it lead to an absolute value that can be interpreted and when not?
5. The claims about structure-preserving maps and modularity are not directly justified by the definition nor evaluated numerically. Do you have plans to study these aspects more explicitly?

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the constructive review. **Q1 / Should we aim to measure the compositionality of** $W$**,** $f$**, or** $Z$**?** In our definition $C(Z)$, we are interested in the compositionality of representation $Z$ where no $W$ or $f$ are provided. We believe that our definition does in fact measure $Z$’s compositionality in terms of whether it can be expressed as a simple function of parts. We would reframe your statement as “our framework addresses the question of whether there exists an efficient code $W$ and simple model $f$ that can *generate* a given representation $Z$”, and we have argued in our paper that this is precisely how the compositionality of $Z$ should be defined.

> It seems more natural to ask if a (or what) representation $W$ can be composed into an expressive $Z$ by $f$, rather than whether $Z$ can be generated by some compositional code

This is an interesting and related question! Whether $Z$ can be compressed through $W$ and $f$ measures compositionality, but whether there exist *other* codes $W'$ that can be composed into an expressive $Z'$ by $f$ is related to “productivity” in cognitive science (the ability to generate novel and meaningful representations using the same semantics) and compositional generalization in AI. We discuss this in Appendix E. **Q5 / Claims about structure-preserving maps and modularity are not supported** While we were not explicit about this, both were tested experimentally in Section 4.1. We will clarify in revisions. Lookup table semantics with disentanglement=1 are structure-preserving maps, and disentanglement>1 increasingly warps structure. This is why topological similarity, which only tests whether $f$ is a structure-preserving map, agrees with our definition for the disentanglement results in Fig. 2b rightmost plot.
The context-free grammars construct representations with modular semantics because every production rule is a separate module and each $z$ is constructed through a composition of these modules (Fig. 2c). As expected, increasing the number of modules decreases compositionality as the semantics become too complex (Fig. 2d rightmost plot), unlike highly compositional languages in which a small number of grammar rules provide immense expressivity.

> none of the five claims are validated or examined through numerical experiments

The other 3 claims are simply restatements of our definition:

- Expressivity and compression: the numerator is expressivity, the denominator is compression with respect to parts
- Constituent parts are intrinsic to $Z$: $W$ is defined through $Z$ in terms of optimal compression
- Systematicity and generalization: functions with low complexity generalize better, and compositionality is maximized by low $K(f)$ in the denominator

**Sensitivity to modeling choices**

In our definition $f$ and $W$ are intrinsic to $Z$, but in practice modeling choices must indeed be made for these components and the sensitivity of our measure should be tested. This is a valid criticism, and in our revised paper we will have a dedicated section acknowledging limitations to be addressed in future work (following your suggestion to include such a section). We note, however, that the abstract definition can be useful in and of itself. Kolmogorov complexity, for instance, is uncomputable but still conceptually useful.

**Q4 / German reaches a minimal compositionality score of ~1, which seems to invalidate claims about interpretable absolute values**

This is a lack of clarity on our part—German does not have a compositionality of 1. In Fig. 4, a.u. stands for arbitrary units. To compute $C^L(Z)$, we need to estimate $K(Z)$ in the numerator. For the emergent languages this is simple to do (lines 352-355 RHS), but for natural languages it is not.
Instead, we make the commonly-held assumption that all languages are equally expressive, which translates to equal $K(Z)$ (lines 431-435 LHS). The units are therefore "arbitrary" because we use an arbitrary constant numerator in place of $K(Z)$ that is shared for all languages. We use the $K(Z|W,f)$ obtained from German as this constant, which also explains why German has a $C^L(Z) = 1$ in these arbitrary units. While assuming equal $K(Z)$ simplifies analysis, it is a limitation that only allows us to compare the *relative* $C^L(Z)$ for different languages without knowing their absolute values—we will clarify this in the text.

**Q2**

We expect the definition to fail when implemented with improper modeling assumptions (e.g., wrong DNN architecture for $f$), poor training hyperparameters, or insufficient data for training DNNs. We will expand on this in a new Limitations section.

**Q3**

Disentanglement (defined in lines 248-258 RHS) refers to the size of the n-grams used to generate our synthetic lookup table representations. For instance, if disentanglement=2, the lookup table has an entry for all possible *pairs* of words and $z$ is generated by concatenating these pair embeddings.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' thoughtful and detailed response in addressing the concerns I raised. Their clarifications have been valuable in better understanding the scope and limitations of the proposed approach. I also appreciate that the authors recognize the need to temper some of the claims in general (see also their responses to reviewers JsGE and q4Av) and to more precisely reflect the conditions under which they hold. In particular, the definition heavily depends on modeling choices. Additionally, claims regarding structure-preserving maps and modularity seem to be supported primarily in highly controlled toy setups, rather than in more complex or real-world scenarios.
Similarly, while the interpretability of the compositionality score holds in controlled settings, it does not generalize well, as evidenced by its breakdown in the natural language experiments ... That said, I fully agree that "the abstract definition can be useful in and of itself," and in light of these clarifications, I have decided to revise my overall recommendation. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our paper and our reply in-depth, and for recognizing our contribution. One minor clarification we would like to make: we don't believe that the interpretability of the compositionality score breaks down in our natural language experiments. The absolute value of the score no longer has meaning, but it is still valuable for assessing the *relative* compositionalities of natural languages and gives interpretable results (unlike an alternative metric like topological similarity, which is also a relative score but gives counter-intuitive results in our natural language experiments).
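Since topological (topographic) similarity recurs throughout this exchange, a minimal self-contained sketch of the metric as it is commonly defined in the emergent-communication literature may be useful: correlate pairwise distances in meaning space with pairwise distances in signal space. The toy meanings and signals below are purely illustrative, and Pearson correlation is used for self-containment, though Spearman's rank correlation is the more common choice.

```python
from itertools import combinations
import math

def hamming(a, b):
    """Distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def topographic_similarity(meanings, signals):
    """Correlation between meaning-space and signal-space pairwise distances."""
    pairs = list(combinations(range(len(meanings)), 2))
    md = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    sd = [hamming(signals[i], signals[j]) for i, j in pairs]
    return pearson(md, sd)

# A perfectly compositional toy language: each meaning feature maps to one symbol.
meanings = [(0, 0), (0, 1), (1, 0), (1, 1)]
signals = ["aa", "ab", "ba", "bb"]
print(topographic_similarity(meanings, signals))  # ~1.0 for this toy language
```

As the rebuttal notes, the metric only scores a given $W \rightarrow Z$ mapping (a language system), which is why it is a relative score and cannot be applied when only $Z$ is given.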
Summary: This submission frames compositionality as a quantitative measure of how compressible a representation is into a specific family of probabilistic models. This quantitative measure of compositionality is tested in three settings: One in which the generative model of the data is known and specific parameters can be controlled, and two in which it is not known. Claims And Evidence: The abstract boldly claims that: "Our definition has the potential to inspire the design of novel, theoretically-driven models that better capture the mechanisms of compositional thought." This is quite a bold claim for a paper that has three small experiments, one on procedurally generated data. I believe a better description of this paper is: a model-based quantitative measure of compositionality. Methods And Evaluation Criteria: adequate Theoretical Claims: n/a Experimental Designs Or Analyses: > Figure 4. Compositionality of natural language systems. This is not natural language data but instead elicitations from an LLM. That should be made clear. Supplementary Material: no Relation To Broader Scientific Literature: Realizes compositionality as compression, following closest to the work of Kirby that shows that compositional language structures emerge via cultural transmission as a balance between expressive communication and efficient (compressed) encoding (Kirby, 2015; 2019). Essential References Not Discussed: Several important works in cognitive science: 1. Cognition as compression in a probabilistic framework, efficient coding: Chater N, Vitányi P. Simplicity: a unifying principle in cognitive science? Trends Cogn Sci. 2003 Jan;7(1):19-22. doi: 10.1016/s1364-6613(02)00005-0. PMID: 12517354. Feldman J. The simplicity principle in perception and cognition. Wiley Interdiscip Rev Cogn Sci. 2016 Sep;7(5):330-40. doi: 10.1002/wcs.1406. Epub 2016 Jul 29. PMID: 27470193; PMCID: PMC5125387. And a methodological connection in computer science: 1. 
Connections between compression and induction of simple grammars: Adriaans, P., Jacobs, C. (2006). Using MDL for Grammar Induction. In: Sakakibara, Y., Kobayashi, S., Sato, K., Nishino, T., Tomita, E. (eds) Grammatical Inference: Algorithms and Applications. ICGI 2006. Lecture Notes in Computer Science, vol 4201. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11872436_24

Other Strengths And Weaknesses:

#### Strengths
Very clear.

#### Weaknesses
Claims are too advanced for what is shown relative to prior work.

Other Comments Or Suggestions:

> Thus, in practice, Z must be discretized to some finite precision and a discrete approximation of the Normal distribution must be used (e.g., the Skellam distribution).

I think you mean "can be used" instead of "must be used". Specific approximations, including this one, are not justified a priori.

> we first collected a dataset of English sentences describing natural images (COCO, 2024)

Did you collect the dataset? Or did you make use of it?

> We introduced a novel definition of compositionality, representational compositionality, that is grounded in algorithmic information theory.

Isn't compositionality always about representations?

Questions For Authors:

1. Given that connections between compression and compositionality have been made by Kirby, including in emergent communication, what is the advancement that your work brings? If it is a specific methodology for approximating compositionality, and/or a drop-in replacement for topological similarity, then the paper should be reframed to make that claim central, rather than overclaiming about a new approach to explaining compositionality in language and thought.
2. There is almost no interpretation of the "natural" language results in Figure 4 other than no significant differences per the measure of representational compositionality. What are we supposed to conclude from this experiment?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the constructive review.

**Tempering our claims**

Reviewer q4Av made a similar comment—in retrospect, we agree. While we believe that existing definitions of compositionality suffer from significant pitfalls which our definition addresses, it is unfair to claim that ours is the first to rigorously define compositionality and premature to claim that it is the "true" or most useful definition in all circumstances. It is also premature to claim that it has the potential to dramatically improve AI models, and such claims are best left for future work that actually attempts this. We plan to edit our paper accordingly in the following ways:

1. Remove premature claims in the abstract and elsewhere that frame the definition as uniquely "correct", being "the first formal definition", or having the potential to dramatically improve AI.
2. Add a new section "Comparisons to prior work" that systematically compares our work to other definitions of compositionality that have been proposed, pointing out both advantages (e.g., generality, no assumptions on the structure of $f$) and limitations (e.g., computability, sensitivity to modeling choices in practice). This should help provide a neutral framing of our specific contributions.
3. Add a new section discussing additional "Limitations" of our definition.

**Discussing additional references**

The Chater and Feldman references are only tangentially related, as we don't define compositional representations as simple (only that they can be compressed as a simple function of parts)—i.e., $C(Z) \neq K(Z)$. However, we will add these references when discussing compression more broadly (paragraph on "expressivity and compression"). The Adriaans & Jacobs reference is highly relevant as it pertains to finding simple grammars that explain data—the function $f$ in our framework—which is required to compute $C(Z)$. Thank you! We will include this.
**Advances upon Kirby et al.'s work on compositionality**

1. Our definition provides a formal and quantitative framing of ideas in Kirby's work, clearly defining what is meant by a language (symbols $W$, meanings $Z$, and their mapping $f$), effective communication (high $K(Z)$ that can represent many things, low information loss during communication $K(Z|W, f)$), and language simplicity (low $K(f)$). Lines 196-202 RHS made this contribution explicit, but we will make this more central throughout the paper (especially in the abstract, introduction, and discussion) so that readers understand how we are building on existing literature.
2. Kirby's work presents ideas around the compositionality of *language systems, or mappings from a given* $W \rightarrow Z$, which pertains to our $C^L(Z)$ definition. In this sense, $C^L(Z)$ is a drop-in replacement for topological similarity. While this is relevant to natural language, it is not directly applicable to questions about the compositionality of *representations, where only $Z$ is given*, such as in testing the Language of Thought hypothesis. Our definition $C(Z)$ (but not Kirby's theory or topological similarity) remains applicable in this case.

In our revisions, we will summarize these points.

**Interpretation of natural language results**

The motivation for the natural language experiments is stated on lines 386-392 LHS: it is unknown whether different languages are equally compositional, partly because we lack principled definitions of compositionality. We therefore took this as an opportunity to apply our definition to a real problem in linguistics. Our results in Fig. 4 suggest that these natural languages are roughly equally compositional. We will edit our paper to relate these results back to the original motivation for the experiment.
We also wanted to show how another measure often used as a proxy for compositionality of language systems, topological similarity, gives different and counter-intuitive results.

**Other comments**

> I think you mean [a discrete approximation of the Normal distribution] "can be used" instead of "must be used"

Yes, thank you; we will correct the text.

> Did you collect the [English sentences] dataset? Or did you make use of it?

We made use of the existing dataset; we will correct the text.

> Isn't compositionality always about representations?

It is discussed more broadly in the literature, or at least would require a much broader notion of "representation" than the one used in our paper. It is sometimes about data [e.g., 1], functions [e.g., 2], or mappings from externally-defined latents to a representation as in $C^L(Z)$ [e.g., 3]. We can clarify this in the text to better situate our definition in the broader AI and cognitive science communities.

[1] Aitchison 1982. The Statistical Analysis of Compositional Data
[2] Lepori 2023. Break It Down: Evidence for Structural Compositionality in Neural Networks
[3] Ren 2023. Improving Compositional Generalization Using Iterated Learning and Simplicial Embeddings
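On the Skellam point in the exchange above: the Skellam distribution is the law of the difference of two independent Poisson variables, so its mean is $\mu_1 - \mu_2$ and its variance $\mu_1 + \mu_2$, which is what makes it a natural integer-valued approximation of a Normal. A self-contained sanity check of those moments (a generic illustration using Knuth's Poisson sampler, not code from the paper):

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication algorithm; fine for small rates."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_skellam(mu1, mu2, rng):
    """Difference of two independent Poisson draws."""
    return sample_poisson(mu1, rng) - sample_poisson(mu2, rng)

rng = random.Random(0)
draws = [sample_skellam(8.0, 8.0, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# Sample moments should approximate those of Normal(0, 16): mean ~ 0, variance ~ 16
```

This supports the reviewer's wording fix: the Skellam is one discrete approximation that *can* be used, not the only one that must be.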
Hierarchical Reinforcement Learning with Targeted Causal Interventions
Accept (poster)
Summary: This paper considers Hierarchical Reinforcement Learning (HRL) by leveraging causal discovery to improve training efficiency in long-horizon tasks with sparse rewards. In particular, the subgoal structure is modeled as a causal graph and an algorithm to learn this hierarchy is introduced. Instead of random subgoal interventions during exploration, the proposed method prioritizes interventions based on their causal importance in achieving the final goal. This targeted intervention strategy significantly reduces training costs. The paper provides theoretical analysis, showing improvements for tree-structured and Erdős-Rényi random graphs. Experiments show that the proposed framework outperforms existing HRL methods in terms of training efficiency. Claims And Evidence: There is reasonable evidence for the main claims, both in terms of theory and experiments. Methods And Evaluation Criteria: The evaluation seems reasonable, but still limited to only one test environment. The considered grid-world setup with subgoals is a quite standard test environment for hierarchical RL. Theoretical Claims: The theoretical claims seem reasonable. I did not manage to read the proofs in the appendix in much detail, only skimmed through some main steps, and I did not see any issues. Experimental Designs Or Analyses: The considered grid-world setup with subgoals is a quite standard test environment for hierarchical RL. It could be better explained how the hyper-parameter tuning is done, especially for the baseline algorithms. More description is needed in the plots; e.g., there seem to be some confidence bounds that are not explained. Supplementary Material: I skimmed through it quickly. Relation To Broader Scientific Literature: Mostly seems reasonable. It seems that prior work on reward machines could be relevant; it also formalizes subgoal structures and uses them to guide reinforcement learning by providing a structured decomposition of tasks.
Reward machines leverage automata-based representations to define subgoals and shape rewards, which could complement the proposed causal discovery approach by offering an alternative way to encode and utilize hierarchical structures in HRL. Icarte, Rodrigo Toro, et al. "Reward machines: Exploiting reward function structure in reinforcement learning." Journal of Artificial Intelligence Research 73 (2022): 173-208. Essential References Not Discussed: not that I am aware of. Other Strengths And Weaknesses: Strengths: The paper reads very well, and based on the results, the suggested approach significantly improves on reasonably selected baseline algorithms. The approach intuitively makes sense. Weaknesses: The problem formulation is quite limiting; it seems to cover mostly this type of grid-world example with subgoals. I cannot see this used in any type of realistic setting, or maybe I am wrong, but it would then be interesting to see a more meaningful application than simple grid-world toy environments. Also, compared to the baseline algorithms, the considered algorithms are clearly more sample efficient, but there seems to be a high computational burden in the sub-goal structure discovery; it would be useful to contrast this with what is done in the baseline algorithms, and maybe compare not only the number of system probes but also other dimensions. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and address the concerns under the following headings.

___

## 1- Reviewer: The evaluation seems reasonable, but still...

As the reviewer mentioned, grid-world environments are widely used in HRL research, as they capture key challenges such as subgoal discovery and hierarchical planning. In our work, we primarily focus on the 2D Minecraft environment as other environments (such as Mini-Behavior environments) are relatively simple and do not adequately showcase the strengths of our method.

___

## 2- Reviewer: It could be better explained how the hyper-parameter...

Due to the space limitations in the main text, we have provided a detailed description of the experiments and hyper-parameter tuning in Appendix G.

___

## 3- Reviewer: It seems that prior work on reward machines...

We agree with the reviewer that the reward machine can be viewed as an alternative way to represent hierarchical structures in HRL and can be effectively used for training policies. As the reviewer noted, our work takes a causal approach, representing subgoal structures within the framework of structural causal models. In the revised version, we will include a discussion on the line of research about reward machines in the related work.

___

## 4- Applicability of the framework to realistic settings

We acknowledge that our framework currently assumes an explicit definition of EVs, which may seem restrictive in some deep RL applications. However, in practice, many environments provide state vector–based observations that are inherently disentangled—meaning that each state dimension corresponds to an independent feature. This makes it significantly easier to extract EVs. Common benchmarks such as Atari, MuJoCo, and 2D-Minecraft all naturally possess this property. Moreover, many hierarchical RL methods—such as LESSON, DSC, and SkilD (mentioned by the reviewer)—already rely on such structured representations.
In scenarios where EVs are not directly provided, one can leverage disentangled representation learning, which aims to reconstruct a latent, disentangled state representation from observations. Methods such as CausalVAE, DEAR, and the reference [9] suggested by the reviewer q4Ho specifically focus on learning disentangled representations from raw visual input. Therefore, they enable our framework to be applied in such settings too. Note that, in the submitted version, we conducted a sensitivity analysis to evaluate the impact of missing EVs on performance. As shown in Figure 12 (Appendix G.4), HRCh (SSD) remains fairly robust even with up to 20% of EVs missing. We would like to emphasize that one of the main messages of the current work is that learning the causal structure among subgoals enables more efficient training of hierarchical policies. By leveraging this structure to guide exploration, our approach can be adapted to a wide range of applications where hierarchical decision-making is required.

___

## 5- Reviewer: Also, compared to the baseline algorithms, the considered algorithms are clearly more simple efficient, but there seem to be high computational burden...

Regarding computational complexity, our experiments indicate that although our method introduces some additional computation compared to some HRL baselines, this is offset by a significant reduction in the number of system probes and overall training cost. Below, we provide a runtime comparison between various methods to reach a success rate of 0.5 in Figure 6(a) of the paper:

| **Method** | **Average Time (mins)** |
|--------------|-------------------------|
| HER | 249.1 |
| HAC | 185.5 |
| OHRL | 133.8 |
| CDHRL | 352.3 |
| HRC_h (ours) | 33.7 |
| HRC_b (ours) | 38.1 |

**Table**: Runtime Comparison of HRL Algorithms
Summary: This paper tackles the long-horizon RL tasks with hierarchical abstractions. Specifically, the authors propose Hierarcahical RL via Causality (HRC) which enables the agents to prioritize some causally impactful subgoals over the others. Among the HRC framework, the authors also develop a new subgoal-based causal discovery approach and derive the theoretical guarantees for it. Compared to the existing hierarchical causal RL baselines, HRC empirically outperforms them in both synthetic data and MineCraft tasks. Claims And Evidence: The major claim about the theoretical property (training cost bound in interventional causal discovery) and empirical performance of HRC are generally supported by Theorem 7.4, as well as Figure 5, 6 and Table 2, respectively. Methods And Evaluation Criteria: **Method**: The paper’s method involves a hierarchical reinforcement learning framework that leverages causal discovery to learn the subgoal structure underlying long-horizon, sparse-reward tasks. Specifically, it introduces a new causal discovery algorithm tailored for HRL, and then uses targeted causal interventions—prioritizing subgoals via ranking rules (such as causal effect and shortest path ranking)—to guide exploration and improve training efficiency. **Criteria**: The evaluation criteria include cost complexities for theoretical cost analysis. For the empirical performance on benchmark environment, the authors use metrics like success ratio for the task reward and structural Hamming distance (SHD), missing edges, and extra edges for causal graph accuracy. The proposed methods and evaluation criteria make sense to me. They directly address the challenges of sparse rewards and inefficient exploration in HRL by exploiting causal relationships to guide subgoal selection, and the combination of theoretical and practical evaluation provides a well-rounded assessment of the framework’s efficacy in complex, long-horizon tasks. 
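For readers less familiar with the graph-accuracy metrics mentioned above (structural Hamming distance, missing edges, extra edges), here is a minimal sketch of how they are commonly computed from adjacency matrices; the toy graphs are illustrative and not taken from the paper.

```python
def graph_errors(true_adj, est_adj):
    """Return (shd, missing, extra) for two DAG adjacency matrices.

    Each unordered node pair contributes at most one error: a missing edge,
    an extra edge, or a reversed edge (which counts toward SHD only).
    """
    n = len(true_adj)
    shd = missing = extra = 0
    for i in range(n):
        for j in range(i + 1, n):
            t = (true_adj[i][j], true_adj[j][i])
            e = (est_adj[i][j], est_adj[j][i])
            if t == e:
                continue
            shd += 1  # one edit (add, delete, or reverse) fixes this pair
            if t != (0, 0) and e == (0, 0):
                missing += 1
            elif t == (0, 0) and e != (0, 0):
                extra += 1
    return shd, missing, extra

# True graph: 0 -> 1 -> 2. Estimate: 0 -> 1, 2 -> 1 (reversed), 0 -> 2 (extra).
true_adj = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
est_adj = [[0, 1, 1], [0, 0, 0], [0, 1, 0]]
print(graph_errors(true_adj, est_adj))  # (2, 0, 1)
```

Reporting the three numbers separately, as the paper's experiments do, distinguishes overly sparse recoveries (many missing edges) from overly dense ones (many extra edges).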
Theoretical Claims: The paper's main theoretical claims are that by exploiting the causal structure among subgoals using a novel, HRL-tailored causal discovery algorithm, the proposed HRC framework can significantly lower the training cost compared to random exploration. In particular, 1. Under Assumption 4.2, the authors provide formal guarantees that the subgoal structure is identifiable up to discoverable parents (Proposition 8.3 and Theorem 8.4). 2. Under Assumptions 7.1, 7.2, and 7.3, the targeted causal interventions based on ranking rules yield lower cost complexities—in tree and semi-Erdős–Rényi graph models—compared to naive exploration strategies (Theorem 7.4). I reviewed the proofs provided for Theorem 7.4 (the cost complexity analysis) and Theorem 8.4 (the identifiability of the causal subgoal structure). The derivations appear logically sound and make appropriate use of standard techniques. While I did not perform a line-by-line verification, I did not find any obvious mathematical errors. However, in general deep RL applications, the assumptions may limit generality in more practical tasks, e.g., when the subgoal is not explicitly defined and is hard to abstract from pure observation, or when direct intervention is not practical in the simulator. Experimental Designs Or Analyses: The experiments compare different variants of the proposed algorithm (using various ranking rules and causal discovery methods) against several state-of-the-art HRL baselines using metrics like training cost (system probes), success ratio, and structural Hamming distance for recovered causal graphs in the synthetic experiments and 2D-MineCraft evaluation. 1. The evaluation protocol is comprehensive in evaluating both the success ratio and causal discovery quality. 2. The ablation study is not explicitly mentioned.
Since the pipeline in Algorithm 1 is pretty long, it might be helpful to conduct thorough ablation studies and discuss how important each module is and how likely some assumptions would hold in practice, such as Assumption 7.1. 3. I'm not sure whether OR, AND could cover all the possible subgoal relationships in all long-horizon decision-making problems. The authors may include this in the limitation discussions. 4. It might be helpful if the authors could compare HRC with other causal RL approaches listed in some of the essential references [5, 6, 7]. Supplementary Material: I checked sections C and E for the detailed algorithm implementation, section F for causal discovery, and section G for experiment details. For section D, I checked the overall skeleton of the proof and the logical chain is consistent. Sections A and B also help to cover some missing related works and the problem definition in the main text. Relation To Broader Scientific Literature: Beyond the hierarchical RL and causality community, Definition 4.1 and Assumption 7.1 remind me that the controllability of the subgoal seems quite similar to reachability analysis [1] in the stochastic control domain. It might be interesting to reveal the inner connection between HRC and stochastic control methods.

> [1] Amin, Saurabh, et al. "Reachability analysis for controlled discrete time stochastic hybrid systems." Hybrid Systems: Computation and Control: 9th International Workshop, HSCC 2006, Santa Barbara, CA, USA, March 29-31, 2006. Proceedings 9. Springer Berlin Heidelberg, 2006.

Essential References Not Discussed: In neuro-symbolic reinforcement learning, there are some related works that also use similar formulations of target tasks [1, 2]. There are also some works on causal discovery from interventional data that derive theoretical bounds [3, 4].
There are some causal RL works [5, 6, 7, 8] that use either causal discovery or explicit hierarchical structure in the policy, and other causal RL works that use controllability-related state abstraction [9, 10], which is relevant to the task domain that the authors are tackling.

> [1] Jiang, Zhengyao, and Shan Luo. "Neural logic reinforcement learning." ICML 2019
>
> [2] Kimura, Daiki, et al. "Neuro-symbolic reinforcement learning with first-order logic." ACL 2021
>
> [3] Yang, Karren, Abigail Katcoff, and Caroline Uhler. "Characterizing and learning equivalence classes of causal dags under interventions." ICML 2018
>
> [4] Brouillard, Philippe, et al. "Differentiable causal discovery from interventional data." NeurIPS 2020
>
> [5] Scherrer, Nino, et al. "Learning neural causal models with active interventions." NeurIPS workshop on Causal Inference & Machine Learning, 2021
>
> [6] Wang, Zizhao, et al. "SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions." NeurIPS 2024
>
> [7] Lin, Haohong, et al. "BECAUSE: Bilinear Causal Representation for Generalizable Offline Model-based Reinforcement Learning." NeurIPS 2024
>
> [8] Hu, Jiaheng, et al. "Disentangled unsupervised skill discovery for efficient hierarchical reinforcement learning." NeurIPS 2024
>
> [9] Zhang, Amy, et al. "Learning invariant representations for reinforcement learning without reconstruction." ICLR 2021
>
> [10] Wang, Tongzhou, et al. "Denoised mdps: Learning world models better than the world itself." ICML 2022

Other Strengths And Weaknesses:

**Strengths**

1. The paper is generally well-written and easy to follow; it is well-structured and well-motivated.
2. The baseline comparison is comprehensive, and it considers both the task success ratio and causal discovery quality.
3. The theoretical derivation is generally sound and solid, which makes the results potentially more generalizable in broader subgoal-based causal discovery applications.
Moreover, the authors verify the theoretical results with empirical causal discovery performance in Figure 5.
4. The empirical results demonstrate superior performance over SDI in structural Hamming distance, and better asymptotic performance and convergence speed compared to other HRL baselines.

**Weaknesses**

1. Additional causal RL baselines, such as the ones in [5, 6, 7, 8] in the additional references, would be helpful. Comparison to other HRL approaches with implicit causal abstraction (e.g., causal abstraction based on controllability and/or reward relevance [9, 10] in the additional references) could also be interesting.
2. If the cost complexity bound can be compared with others in the literature, please provide such a comparison for better reference.
3. The authors do not include any limitation discussions or future works in the main text.
4. The assumption of AND, OR subgoal abstraction may not be expressive enough for all scalable applications in long-horizon real-world decision-making tasks, such as robot manipulation or autonomous driving tasks which require complex reasoning.

Other Comments Or Suggestions: The paper structure could be merged into a few major sections with subsections. Now it has 10 sections, which may be too many for the audience to keep up with the authors' story.

Questions For Authors: See the above sections for more details.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
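The review above mentions ranking rules such as shortest-path ranking for prioritizing subgoal interventions. As a rough illustration only (not the paper's algorithm; the node names, the graph encoding, and the choice to prioritize subgoals closer to the goal are all assumptions made here), one can rank subgoals by BFS distance to the final goal along the causal ancestor chain:

```python
from collections import deque

def shortest_path_ranking(parents, goal):
    """Rank subgoals by graph distance to the final goal.

    `parents[v]` lists the subgoals v causally depends on (edges run
    parent -> child); distance is measured over the ancestors of `goal`.
    """
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        v = queue.popleft()
        for p in parents.get(v, []):
            if p not in dist:
                dist[p] = dist[v] + 1
                queue.append(p)
    # Subgoals closer to the goal come first (the goal itself is excluded).
    return sorted((v for v in dist if v != goal), key=lambda v: dist[v])

# Toy crafting-style hierarchy (hypothetical names):
parents = {"table": ["plank"], "plank": ["wood"], "tool": ["table", "stone"]}
print(shortest_path_ranking(parents, "tool"))  # ['table', 'stone', 'plank', 'wood']
```

A causal-effect ranking would instead score each subgoal by how much intervening on it changes the probability of reaching the goal; the paper's Section 7 analyzes when such targeted orderings beat random exploration.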
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and address the concerns under the following headings.

___

## 1- Reviewer: The ablation study...

We already conducted several ablation studies (Figure 6) to evaluate our approach. To assess the impact of our targeted strategy, we compared three HRC algorithm variants: HRCh (SSD), HRCb (SSD), and HRC (optimal). As shown in Figure 6(a), HRCh (SSD) outperforms HRCb (SSD), highlighting the ranking rule's effectiveness. To examine the causal discovery component, we also compared two HRC versions: one using our SSD method and another using a standard method (SDI). Results show SSD significantly outperforms SDI, demonstrating the benefits of our tailored causal discovery. Lastly, a sensitivity analysis (Figure 12, Appendix G.4) shows HRCh (SSD) remains robust even with up to 20% of EVs missing.

___

## 2- Reviewer: However, in general deep RL applications...

***Regarding the comment "Generality to more practical applications", please see number "4" of our response to Reviewer 5Zcn.***

Regarding interventions, as noted in Remark 4.3, in environments where EVs correspond to specific resources or skills, we assume that once a resource or skill is acquired at some time step, it remains accessible in subsequent time steps (Assumption 4.2). Under this assumption, a subgoal being controllable by a policy $\pi$ (as defined in Definition 4.1) is equivalent to performing an intervention on the corresponding EV with policy $\pi$. This interpretation is applicable in both simulated and real-world environments.

___

## 3- Reviewer: I'm not sure whether OR, AND...

As noted in lines 161–164, our proposed solutions can be readily extended to settings with non-binary domains for EVs, albeit at the cost of heavier notation. In fact, our experiments already evaluated the proposed methods in non-binary settings.
The assumption of binary variables for resource EVs with AND/OR subgoals is made solely to facilitate the cost analysis in Section 7 and to provide theoretical guarantees for our causal discovery method in Section 8. We will clarify this further in the revised version. ___ ## 4- Other references Thank you for pointing out these references. In the Related Work section, we will add references [1, 2] regarding neuro-symbolic RL; [3–5] for the use of interventional data in learning causal structures; [6, 8] for subgoal/skill discovery; and [7, 9, 10] for causal representation learning/causal abstraction in RL. We will also discuss the connection with stochastic control methods. References [3–5] pertain to learning causal structures among random variables in structural causal models, while our causal discovery method is specifically tailored to the HRL setting, working with sequenced data. In our experiments (Figure 6(b)), we have already compared our method with SDI, a representative method from this class that has been used in prior HRL work as a subroutine to learn the causal structure. Regarding [9], this line of research on causal representation learning is more geared towards recovering EVs, which is not the focus of our paper. Similar to prior work, we assume that either the EVs are already available or that the environment provides state vector–based observations. As for prior work on skill discovery (e.g., [6, 8]), please refer to our response "Comparison with skill discovery" to reviewer T49j for more details. Regarding [1–2], ILP-based methods represent policies as logical rules to enhance interpretability. Our work considers a multi-level policy, with a focus on recovering the hierarchy among subgoals (with our explicit hierarchical structure definition) for training the multi-level policy more efficiently. Herein, we do not impose any structural limitations on our policy.
For our theoretical analysis, we assumed that the causal mechanism of each subgoal follows an AND/OR structure. This assumption pertains to the environment rather than to the representation of the policy as logical rules. We will clarify this in the related work. ___ ## 5- Reviewer: If the cost complexity bound... To the best of our knowledge, this is the first work to rigorously model the HRL problem in the SCM framework with a formally defined cost, and to provide theoretical guarantees on the performance of the proposed methods. Analytical comparison with other related HRL approaches is challenging, as most existing methods are experimental in nature and lack a rigorous formalism. Nevertheless, in our analysis, we compared our approach with a baseline that uses random exploration for subgoal discovery, and our method outperforms it in both considered subgoal structures (tree and Erdős–Rényi graphs). Regarding limitations, due to space constraints in the main text, we were unable to include a detailed discussion. However, in the revised version, we will add a section discussing limitations and future work.
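For concreteness, the AND/OR causal mechanism assumed in the analysis can be stated in a few lines: a subgoal's environment variable becomes achievable once all (AND) or at least one (OR) of its parent EVs hold. The sketch below is a toy illustration with hypothetical names, not the authors' environment code:

```python
# Toy illustration of the AND/OR subgoal mechanism: each binary
# environment variable (EV) becomes achievable once all (AND) or at
# least one (OR) of its parent EVs hold. Names here are hypothetical.

def achievable(ev, parents, values, mechanism):
    """Whether `ev` can be achieved given current parent values (0/1)."""
    parent_vals = [values[p] for p in parents.get(ev, [])]
    if not parent_vals:             # root EVs have no preconditions
        return True
    if mechanism == "AND":
        return all(parent_vals)
    return any(parent_vals)         # "OR"

# Hypothetical crafting chain: 'pickaxe' depends on 'wood' and 'stone'.
parents = {"pickaxe": ["wood", "stone"]}
values = {"wood": 1, "stone": 0}
print(achievable("pickaxe", parents, values, "AND"))  # False: stone missing
print(achievable("pickaxe", parents, values, "OR"))   # True: wood suffices
```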
Summary: The paper presents a novel approach to Hierarchical Reinforcement Learning by leveraging causal discovery to identify hierarchical structures among subgoals. The key contribution is a causal discovery algorithm that learns the subgoal structure, which is then used to guide interventions during exploration. This targeted intervention strategy improves training efficiency compared to random exploration. The paper provides a formal analysis of the method and demonstrates empirical improvements on synthetic datasets and a gridworld environment (2D-Minecraft). Results show that the proposed method outperforms existing HRL approaches. ## update after rebuttal I am happy with the rebuttal content. I'm raising my score to 4. Claims And Evidence: Most of the claims are well-supported. The paper provides a formal analysis for its proposed method, which strengthens its theoretical grounding. More ablation studies, particularly on the impact of different causal discovery techniques, would strengthen the claims regarding the proposed algorithm's effectiveness. In addition, the writing of this paper is not very clear, with complicated and sometimes unexplained notations scattered over a total of 9 different sections. It would be beneficial to organize the method sections with a clearer intuition of what each subsection is doing. Methods And Evaluation Criteria: The authors conduct experiments on both synthetic datasets and a real-world-inspired HRL environment (2D-Minecraft), demonstrating the effectiveness of the approach. While the method performs well on the relatively toy 2D-Minecraft domain, it is unclear how well it scales to more complex real-world tasks with high-dimensional state spaces. In particular, I’m worried about two assumptions. First, the proposed method relies on being able to detect whether an environment variable is controllable.
While this may be relatively easy when the variable is discrete (especially when binary), it seems more problematic when the variable is continuous. Second, the possible “preconditions” for achieving a subgoal can grow exponentially w.r.t. the total number of environment variables. It’s unclear whether this method will still work with a large number of environment variables. Theoretical Claims: The proofs look correct to me. Experimental Designs Or Analyses: Please see the "Evaluation Criteria" section above. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper fits nicely into the causality-inspired skill discovery literature, and introduces a principled way to incorporate causal discovery into HRL. In particular, the causally-guided ranking system seems quite novel. However, the addition of some missing references (as detailed in the next section) would help assess the novelty of this work. Essential References Not Discussed: The overall idea of this paper seems to resemble previous works [1][2], in terms of using causal dependency as a way to guide skill learning. Can the authors discuss the connection/difference between this work and the aforementioned ones, and possibly compare their performance? [1] Chuck, Caleb, et al. "Granger-causal hierarchical skill discovery." TMLR 2023 [2] Wang, Zizhao, et al. "SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions." NeurIPS 2024. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: See the questions above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and address the concerns under the following headings. ___ ## 1- Reviewer: In particular, I’m worried about two assumptions... Regarding controllability, our current framework assumes discretized subgoals, which is a reasonable assumption in domains where acquiring specific resources is required. For environments with continuous variables, one can apply disentangled representation learning methods to high-dimensional continuous observations in order to extract categorical latent variables, such as categorical VAEs. We view this as a natural extension of our current work. The assumption of binary variables for resource EVs is made solely to facilitate the cost analysis in Section 7 and to provide theoretical guarantees for our causal discovery method in Section 8. To the best of our knowledge, this is the first work to rigorously model the HRL problem in the SCM framework with a formally defined cost, and to provide theoretical guarantees on the performance of the proposed methods. We will clarify this further in the revised version. As for the exponential growth of preconditions, we agree that in the worst case, the number of potential combinations can be large. However, in practice, many environment variables (EVs) exhibit sparse dependencies, i.e., the parent set is small. As shown in Theorem 7.4, our method leverages the sparsity in the subgoal structure to guide exploration more efficiently toward the target goal. For example, in a tree structure, our method achieves a logarithmic cost with respect to the number of EVs, whereas a random strategy incurs a quadratic cost. This efficiency enables our approach to scale effectively to environments with a large number of EVs in such structures. ___ ## 2- Clarity and organization of the paper We will include a table in the appendix listing all notations along with references to their definitions in the text.
Additionally, we will add a brief paragraph at the end of Section 1 to clarify the overall structure of the paper, particularly the methodology section. It would be helpful if the reviewer could point out any notation that they believe was not explicitly or adequately defined in the text. ___ ## 3- Comparison with skill discovery The goal of skill discovery is to learn a diverse set of skills, which are later used to train a higher-level policy for a downstream task. In that context, a skill is conceptually similar to a subgoal in our work. However, there are key methodological differences in subgoal vs. skill discovery: 1- In skill discovery, the process of learning the skill set is often decoupled from the downstream task. In contrast, our work, framed within the context of HRL, conditions the learning process on a target goal. By leveraging the learned subgoal structure, our approach guides exploration toward only the relevant subgoals that contribute to achieving the target goal. Skill discovery methods, by comparison, often aim to learn as many diverse skills as possible, regardless of their relevance to the target goal in the downstream task. 2- In our framework, the lower-level policy is itself hierarchical and is trained according to the defined hierarchical structure (which we define formally based on the discovered subgoal structure). This results in a significant reduction in sample complexity compared to standard policy training, which typically does not utilize such a structure. Beyond these methodological distinctions in subgoal discovery and policy training, to the best of our knowledge, our work is the first to rigorously study HRL within a causal framework. We formally defined the cost formulation, proposed subgoal discovery strategies (with our key measure ECE) with performance guarantees (Theorem 7.4), and provided theoretical bounds on the extent to which the subgoal structure can be learned in an HRL setting (Prop. 8.3).
Furthermore, we introduce a causal discovery algorithm tailored to this setting, with provable guarantees on its correctness (Theorem 8.4). ### Empirical comparison For empirical comparison, we consider [2] as it showed superior performance to [1]. [2] conducted a similar experiment on a simplified Minecraft version with only 8 EVs and a 500-step horizon, achieving a 0.5 success rate after 2 million system probes. Our environment is much more complex with 21 EVs and a 100-step horizon, yet our method achieves a success rate approaching 1 (Figure 6). When testing [2]'s code on our Minecraft version, it consumed over 450 GB of memory before crashing. Therefore, we couldn't complete further testing of their approach, but we're currently evaluating our method on their Minecraft version for direct comparison. --- Rebuttal Comment 1.1: Comment: I thank the author for the rebuttal, which has addressed many of my concerns. I encourage the authors to add the aforementioned clarifications, the promised comparisons as well as the missing related works in the next version of this paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their supportive comment. We conducted experiments on the CraftWorld environment (non-binary) provided by [2] (SkiLD). In Figure 9 of their paper, they report achieving a maximum success rate of approximately 0.6 after training the downstream task (note that they need pretraining in their method, and environment steps for pretraining are not plotted in this Figure). However, when we attempted to reproduce their results using the provided code, we were unable to match their reported performance. Additionally, the code ran extremely slowly on our GPU server, requiring about 10 hours to complete just 100k environment steps. Due to time constraints, we contacted the authors to clarify the number of pretraining steps. We received two responses—10 million and 20 million steps—but without a definitive confirmation. 
Based on Figure 9 of their paper, it appears that at least 12 million environment steps are needed to reach a success rate of 0.5. In contrast, the **performance of our method on this environment is shown in the following plot: https://ibb.co/dsYvwWJR (alternative link: https://postimg.cc/tZgy9sBM)**. Each unit on the x-axis represents 10 million environment steps. As illustrated, our method surpasses a 0.5 success rate with just 5 million environment steps, whereas SkiLD requires at least 12 million environment steps to reach the same level. Furthermore, **our approach achieves a maximum success rate of approximately 0.8, compared to about 0.6 for SkiLD**. We will ensure that the clarifications, comparisons, and related works mentioned in the rebuttal are thoroughly incorporated into the next version of the paper to improve its quality. If you find our work convincing and aligned with your expectations, we would greatly appreciate your support in recommending it for acceptance.
Textural or Textual: How Vision-Language Models Read Text in Images
Accept (poster)
Summary: This paper systematically analyzes how encoder-only vision-language models (i.e., CLIP) perceive textual and semantic information. The paper uses Intrinsic Dimension (ID) and linear probes to measure the representation complexity and semantic perception ability of vision-language models. The analysis suggests that earlier layers capture texture information while later layers capture semantic information. Based on the above findings, the paper proposes a method for defending against typographic attacks by fine-tuning only the last layer, and the experimental results show improved performance. ## update after rebuttal Thanks for the effort; since the concerns are not well addressed, I have decided to keep my score. Claims And Evidence: The two main experiments designed in Section 4.2 cannot fully support the claims. 1. Semantic Constancy with Varying Font Size: The authors hypothesize that increasing font size enhances texture complexity, thereby influencing the model's judgment. However, changes in font size may simultaneously affect semantic readability (e.g., smaller fonts may be harder to recognize), introducing confounding variables. For instance, the ViT model's accuracy decreases with larger fonts (Table 1), which the authors attribute to texture interference. However, it is possible that the actual reason is that larger fonts obscure more image content, rather than solely changes in texture. 2. Linear Probe on Paronym-Synonym Pairs: The authors conduct experiments on only 10 pairs of words, which constitutes a relatively small sample size. Furthermore, is the selection of word pairs balanced in terms of factors such as word frequency and visual similarity? For instance, do the visual and semantic differences between "goose-moose" and "goose-gander" adequately represent the broader spectrum of such comparisons? If there is bias in the selection of word pairs, the conclusions drawn may lack generalizability.
The conclusions drawn from the observations are problematic: 1. Figure 5 shows a significant improvement in the classification accuracy of synonyms in deeper layers, which the authors attribute to the delayed formation of semantic understanding. However, the increase in accuracy might merely reflect that deeper features are more linearly separable, rather than indicating enhanced semantic comprehension. For instance, deeper features could become easier to classify due to dimensionality compression, but this does not necessarily imply semantic encoding. 2. The authors interpret the decrease in ID as semantic compression. However, the reduction in ID could also stem from decreased feature redundancy or noise suppression, which may not directly correspond to semantic abstraction. Additional evidence, such as feature visualization or intervention experiments, is needed to substantiate the causal relationship between changes in ID and semantic representation. Methods And Evaluation Criteria: See claims and evidence. Theoretical Claims: NA Experimental Designs Or Analyses: The experiment presented in Section 5.1 needs another baseline, i.e., fine-tuning all layers. Supplementary Material: Yes, all parts. Relation To Broader Scientific Literature: This paper engages with the existing literature in the following aspects: It proposes a defense strategy by solely fine-tuning the last layer, balancing efficiency and performance. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The paper is well-structured. Weaknesses: see claims and evidence. Other Comments Or Suggestions: See claims and evidence. Questions For Authors: No question Code Of Conduct: Affirmed. Overall Recommendation: 2
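For context on the Intrinsic Dimension measurements debated above: this excerpt does not name the estimator used, but TwoNN, which infers dimension from the ratio of second- to first-nearest-neighbor distances, is a common choice. The numpy sketch below is offered only as an assumption about the kind of estimator involved, not as the paper's implementation:

```python
import numpy as np

def two_nn_id(x):
    """TwoNN intrinsic-dimension estimate for data x of shape (n, dim).

    For each point, mu = r2 / r1 with r1, r2 the distances to its first
    and second nearest neighbors; the maximum-likelihood ID estimate is
    n / sum(log(mu)).
    """
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # ignore self-distances
    d.sort(axis=1)
    mu = d[:, 1] / d[:, 0]        # ratio of 2nd to 1st neighbor distance
    return len(x) / np.log(mu).sum()

# Sanity check: 2-D data embedded in a 10-D ambient space should yield
# an ID estimate close to 2 regardless of the ambient dimension.
rng = np.random.default_rng(0)
pts = np.zeros((1000, 10))
pts[:, :2] = rng.uniform(size=(1000, 2))
print(two_nn_id(pts))  # close to 2
```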
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments. We appreciate the focus on the underlying assumptions behind our interpretation, and we have carefully examined all concerns related to experimental validity and inference logic. We address each point below, and will incorporate clarifications, results, and new visualizations into the revised version. Additional materials can be found at: https://anonymous.4open.science/r/TOT-078C/ --- **Q1. Font size may introduce multiple confounds (texture vs. visibility vs. occlusion)** We agree that font size may introduce occlusion or legibility effects. To control for this, we included a **vision-only ViT as a baseline** in Table 1. Since ViT lacks language input, its accuracy drop reflects purely visual factors. CLIP, with the same visual backbone, shows a different trend, performance degrades more when the overlaid text **conflicts semantically** with the image. This contrast suggests that CLIP’s behavior cannot be explained by occlusion alone, and that semantic interference plays a key role. **Q2. Small sample size in paronym-synonym probing; representativeness of word pairs** The probe experiment was designed as a controlled interpretability analysis using **representative and well-matched word pairs**. Each pair was selected to reflect either semantic or orthographic similarity, while controlling for **word frequency, visual appearance, and word length**. All words used are **frequent and familiar**, ensuring they are easily interpretable by both humans and models. While the total number of pairs is small, they were chosen to **typify common semantic vs. paronym contrasts** (e.g., “goose–gander” vs. “goose–moose”). The key observation holds consistently across all pairs, supporting the robustness of the trend. We will clarify the selection criteria and framing in the revision, and agree that expanding to a larger lexical set is a valuable direction for future work. **Q3. 
Increased accuracy in deep layers may not imply semantic abstraction** We understand the concern that deeper layers may naturally yield higher separability. However, **if this were purely due to compression or feature refinement, both paronym and synonym distinctions should improve similarly**. Instead, we observe a clear asymmetry: orthographic separability is strong from early layers, while semantic separability emerges only in the final block. This pattern cannot be explained by general separability alone, and provides a strong structural signal of **delayed semantic abstraction**. **Q4. ID drop may reflect redundancy reduction, not semantic compression** We agree that a decrease in ID can result from various factors such as redundancy reduction or noise suppression, and does not directly prove semantic abstraction. In our work, we treat ID as a proxy for representational complexity, not for semantic content itself. To support our interpretation, we provide **multiple converging signals**: 1. ID Analysis shows a **consistent drop** in the final block, particularly under semantic perturbations (e.g., synonym substitutions), suggesting a shift in representational dynamics. 2. Linear Probing reveals that only the final block supports linear separation of semantic distractors (synonyms), whereas paronym separability appears **much earlier**. 3. **Grad-CAM visualizations** (Figure C1) show that in the final block, **attention re-centers on the object only when the overlaid text is semantically aligned with the image**. In contrast, for irrelevant or nonsensical text, the model continues to attend to the text region, even in the deepest layers. Together, these findings suggest that semantic abstraction is not merely a byproduct of general compression, but emerges **selectively and meaningfully** in the representational structure.
We do not claim causal proof, but this triangulation offers strong evidence of **a structured link between ID reduction and semantic resolution**. We will clarify this framing and add more explanation for Figure C1 in the revision. **Q5. Lack of full-model fine-tuning baseline in Section 5.1** Full-model fine-tuning (**Table A1**) underperforms our final-block-only strategy by **over 30%** across all splits, likely due to **overfitting and disruption of early representations**. Our analysis shows that semantic abstraction emerges in the final block, making targeted fine-tuning both more effective and efficient. --- Thank you for your thoughtful and precise feedback. We’ve carefully responded to the points raised to clarify assumptions, tighten the experimental framing, and better reflect the structure of our findings. We hope our explanations help strengthen the rigor and improve the clarity of the work.
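The final-block-only strategy discussed in Q5 amounts to freezing every parameter except those of the last block before training. The generic PyTorch sketch below uses a stand-in stack of linear layers rather than CLIP itself, and the module names are placeholders, not CLIP's actual attribute paths:

```python
import torch.nn as nn

# Generic sketch of final-block-only fine-tuning: freeze everything,
# then re-enable gradients for the last block only. The Sequential of
# Linear layers is a stand-in encoder, not CLIP's actual architecture.
blocks = nn.Sequential(*[nn.Linear(8, 8) for _ in range(4)])

for p in blocks.parameters():
    p.requires_grad = False      # freeze the whole encoder
for p in blocks[-1].parameters():
    p.requires_grad = True       # unfreeze only the final block

trainable = sum(p.numel() for p in blocks.parameters() if p.requires_grad)
total = sum(p.numel() for p in blocks.parameters())
print(trainable, total)  # 72 288: one block's parameters out of four
```

An optimizer built from `filter(lambda p: p.requires_grad, blocks.parameters())` would then update only the final block.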
Summary: This paper investigates images with overlaid text, and how vision-language models process them. The paper analyzes representations throughout model layers using Intrinsic Dimension as a measure of complexity. It finds that early layers primarily encode textures while the last one encodes semantics. Through these insights, they are able to achieve significant improvements against typographic attacks by only fine-tuning the final layer. Claims And Evidence: - Claim 1. In early layers texture and semantics compete, while in late layers semantic accuracy improves. - Sec 3 discusses the design of the evaluation set, which contains paronyms (which are texturally related) and synonyms (which are semantically related). - Figure 4 shows that any typography increases ID in middle layers. In the last layer, there is a clear ordering where the most to least consistent typography has the smallest to biggest ID. - Table 1 shows that a pure-vision ViT is less sensitive to text semantics and more to size, whereas multimodal CLIP is heavily sensitive to semantics. - I don’t understand the result in Figure 5. It seems to imply that the model achieves higher accuracy on the paronym pairs, for example “moose” overlaid on a picture of a goose, which could also be considered a conflicting or “irrelevant” case. This result conflicts with Table 1, where “Cons_80” significantly outperforms “Irr_80” for CLIP. The setup of this experiment is also unclear: is every image duplicated four times, as shown in Figure 2, with different text overlaid, and is the classification accuracy computed as an average across all these duplicated images? - Claim 2. Building on these insights, fine-tuning only the final block of the model is sufficient to achieve state-of-the-art performance for ignoring typography. - This result is supported by Tables 3 and 4.
The prior works used as baselines focus on subspace discovery, weight interpolation, or prefix tuning, rather than direct fine-tuning as investigated by this work. Methods And Evaluation Criteria: See “Claims and Evidence” above. Theoretical Claims: N/A Experimental Designs Or Analyses: See “Claims and Evidence” above. Supplementary Material: N/A Relation To Broader Scientific Literature: This work conducts a controlled evaluation of texture versus semantics in typographic attacks (Figure 4, Table 1), and shows that direct fine-tuning of the final block is sufficient to achieve state-of-the-art results (Table 3), compared with more hand-crafted methods in prior work. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - I liked how the insight from Figure 4, where it is identified that most of the semantic separation occurs in the last block, is used to motivate the experiments in Sec. 5. Weaknesses - I don’t understand the experiment in Figure 5 / L313, where the result seems to conflict with the result in Table 1 (see “Claims and Evidence” above). Overall, I liked the premise of investigating texture versus semantics in typographic attacks, and the experiments reasonably supported this investigation. The result in Sec. 5 is also strong. However, the presentation is hard to parse, in particular keeping track of all the evaluation sets described in Sec 3.1. Other Comments Or Suggestions: Below are minor clarity and writing comments that do not affect my score, but are intended as constructive feedback. - I don’t understand L044-045 of the Introduction; is there a prior work that shows that “textual elements […] are often encoded similarly”? - Typo, in L063 “We” should be lowercased as “we” - It took me a long time to understand that Figure 6 was a combination of Figures 4 and 5; you should also reference these figures in the figure caption.
It would be much easier to read if Figures 4 and 5 were stacked on the same page or in a single figure, rather than overlaid in Figure 6. - Typo, in L064 the citation for Cao et al. should use \citet - In L157, I would recommend renaming the categories [“Consistent”, “Irrelevant”] → [“Matched”, “Mismatched”] or some more symmetric naming. - In Figure 5 I would recommend renaming [”Orthographic”, “Semantic”] → [”Paronym”, “Synonym”] so it’s easier to understand what is being evaluated. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful and encouraging review. We're glad the main idea came through clearly: we aim to understand how vision-language models handle overlaid text, and how representational insights can guide robustness improvements. Your suggestions on Figure 5 and the evaluation setup are especially helpful. Below we respond point by point. Additional materials can be found at: https://anonymous.4open.science/r/TOT-078C/ --- **Q1. Figure 5 appears to contradict Table 1, and the setup is unclear** These two experiments address different questions and use different evaluation metrics: - Figure 5 uses a **probing classifier** to test whether representations at **each layer** contain enough information to distinguish orthographic (paronym) vs. semantic (synonym) distractors. - Table 1 evaluates end-to-end **image-text matching accuracy (by the last layer)** under typographic attacks, focusing on task-level performance. The confusion may stem from the similar word pairs appearing in both contexts. In Table 1, they’re assessed for semantic alignment; in Figure 5, they are diagnostic probes for how semantic and visual cues are encoded across layers. The lower probe accuracy on synonym distractors (e.g., goose-gander) indicates that semantic information is harder to linearly extract at intermediate layers, whereas orthographic differences remain easier to capture. We also provide a detailed explanation of both the evaluation and training setups in **Figures B1 and B2** in the appendix. We hope this helps clarify how the probe task (Figure 5) differs from the classification task (Table 1). Rather than contradicting, this result **reinforces our central finding**: semantic abstraction emerges later than visual/textural cues in VLMs. We will revise the text to clarify the goals and setup of the probing experiment. **Q2. L044–045 of the Introduction: Is there prior work showing that "textual elements [...] 
are often encoded similarly"?** Thank you for pointing this out. That sentence was intended as a motivating question rather than a factual claim. We agree that the phrase “are often encoded similarly” may overstate the point. In the final version, we will cite prior work that offers perspectives on both sides of this question, and present it with an objective and open stance. **Q3. Minor typos and naming / layout suggestions** Thank you for these valuable presentation suggestions. We will revise the manuscript accordingly: - Clarify that **Figure 6 combines Figures 4 and 5**, and update the caption; - Stack Figures 4 and 5 on the same page for easier visual comparison; - Rename categories from **"Consistent / Irrelevant" → "Matched / Mismatched"**, and **"Orthographic / Semantic" → "Paronym / Synonym"**, to improve interpretability. --- We appreciate your engagement with both the technical and presentation aspects of this work. Your suggestions helped us clarify our framing and improve the accessibility of the paper. We believe the revised version reflects these improvements and presents a clearer, more complete contribution. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying the experiments in Figure 5 and Table 1, and providing additional visualization of the setup in Figures B1 and B2. The differences make sense given that Figure 5 is measuring the classifier's ability to detect orthographic vs semantic differences (where it makes sense that it is overall easier to detect orthographic differences) whereas Table 1 is measuring image-text matching accuracy. The results are indeed consistent and provide a comprehensive picture of the paper's claims. The experiments are thorough and well-motivated. I appreciate the authors' efforts to incorporate my presentation suggestions; however, the clarity of the original submission still impacts my score. I maintain my positive score and recommend the paper for acceptance. 
--- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and kind recommendation. We truly appreciate your recognition of our experimental clarifications and your helpful suggestions. We will make sure to improve the clarity of our writing in the final version.
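The probing protocol clarified in Q1 above (fit a linear classifier on frozen per-layer features, report held-out accuracy) can be sketched with synthetic stand-in features. Everything below is illustrative, not the paper's pipeline:

```python
import numpy as np

# Schematic of a layer-wise linear probe: fit a linear classifier on
# frozen features from one layer, report held-out accuracy. Features
# below are synthetic stand-ins, not actual model activations.
rng = np.random.default_rng(0)

def probe_accuracy(feats, labels):
    """Least-squares linear probe (with bias); accuracy on a held-out half."""
    X = np.hstack([feats, np.ones((len(feats), 1))])  # add bias column
    n = len(labels) // 2
    w, *_ = np.linalg.lstsq(X[:n], labels[:n] * 2.0 - 1.0, rcond=None)
    pred = (X[n:] @ w > 0).astype(int)
    return (pred == labels[n:]).mean()

labels = rng.integers(0, 2, size=400)
# A "layer" whose features linearly encode the class...
separable = rng.normal(size=(400, 16)) + 3.0 * labels[:, None]
# ...and one carrying no class signal at all.
uninformative = rng.normal(size=(400, 16))

print(probe_accuracy(separable, labels))      # near 1.0
print(probe_accuracy(uninformative, labels))  # near chance (~0.5)
```

Run per layer on real features, the gap between these two regimes is exactly what Figure 5 traces for paronym vs. synonym pairs.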
Summary: This paper investigates typographic attacks in vision-language models. They investigate whether these models encode textual semantics through representation complexity, and identify the mechanisms by which text disrupts visual understanding. To decouple orthography from semantics, they introduce the ToT dataset, containing minimal pairs of words that either share semantics with distinct visual forms (synonyms) or match visual forms with conflicting semantics (paronyms). By analyzing layer-wise Intrinsic Dimension (ID), they found that early layers exhibit competing dynamics between orthographic features and semantics, while later layers improve semantic accuracy primarily through orthographic disambiguation. Crucially, semantically driven representations emerge only in the final block, challenging the assumption of progressive semantic understanding. ## update after rebuttal Due to the limited scope of the experiments, and because the writing of this paper still needs improvement, I'd like to maintain my original score. Claims And Evidence: First of all, I'm not familiar with this area of typographic attacks in vision-language models. After reading the paper, I think most claims are supported with evidence. But there are some confusing parts. 1. It seems that all the conclusions can only apply to CLIP, instead of general vision-language models. Could you extend to other models, e.g., DINOv2, SigLIP, MetaCLIP...? 2. In Table 1, how to read the numbers in the first row? It's hard to understand what the authors want to say about the Irr_s. 3. Also, Sec 3.2 mentions PARONYMS VS. SYNONYMS CONFUSION but their performance comparison is not presented in Table 1? Where can we read their experiment results? 4. In Table 3, the row names and column names are the same; it's not clear what the "SOTA methods" are exactly. 5. In Tables 2 and 4, under "Hard", why are the Irr and Nons scores much higher than the Cons score, opposite to the other columns?
Shouldn't Cons always be easier than Irr and Nons? Methods And Evaluation Criteria: There are some confusing points. See above. Theoretical Claims: N/A Experimental Designs Or Analyses: There are some confusing points. See above. Supplementary Material: N/A Relation To Broader Scientific Literature: It's related to safety issues for vision-language models in general. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I think the presentation of the paper can be further improved. Also, the fine-tuning defense method seems to lack novelty. More models and more explanations should be added. Other Comments Or Suggestions: N/A Questions For Authors: See above. Happy to raise my score if questions are addressed. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful and open review. This paper is primarily an **interpretability-driven study**: our goal is to understand how VLMs represent and process text in images across layers. Rather than proposing a new defense method, our core contribution lies in using Intrinsic Dimension (ID) to **trace where semantic abstraction emerges, and how it breaks** under typographic attacks. The fine-tuning experiment serves to validate our explanation in a practical setting. We address your concerns point by point below. Additional experimental results can be found at: https://anonymous.4open.science/r/TOT-078C/ --- **Q1. Could you extend to other models (DINOv2, SigLIP, MetaCLIP...)?** Yes, while our main experiments focus on CLIP (a widely-used baseline), we have extended our analysis to **ViT-L/14, ViT-H/14 DINOv2, SigLIP, and MetaCLIP**. Please see Figures A1–A5 in the supplementary material. We observe a **consistent semantic abstraction pattern across these models**: semantic distinctions between consistent, irrelevant, and nonsense text inputs remain indistinguishable in early layers, but diverge only in the final block (reflected by their ID separation). This suggests that delayed semantic resolution is a **shared property of VLMs, not limited to CLIP**. Notably, **DINOv2, a pure vision model** without language pretraining (similar to the ViT in Table 1), does not exhibit such ID bifurcation in the final block, also supporting our interpretation. **Q2. In Table 1, how should we read the “Irr_s” numbers in the first row?** The first-row entries like “Irr_80” refer to **irrelevant text overlaid at a font size of 80**. Appendix Figure 8 shows examples. Font size modulates the visual salience of the text: larger fonts induce stronger texture-level interference, allowing us to evaluate robustness across texture intensity. We will make this explicit in the table caption. **Q3. Section 3.2 mentions paronym vs. 
synonym confusion, but their comparison is not in Table 1. Where is it?** This analysis is in **Figure 5**, via a linear probing experiment across residual blocks. We trained logistic regression classifiers to distinguish **paronym pairs** (orthographically similar but semantically different) from **synonym pairs** (semantically similar but orthographically distinct). Each line represents a word pair; darker lines are averages over 10 pairs. This reveals **how orthographic and semantic information evolve across depth**, and aligns with our main claim: semantic abstraction is delayed and fragile. We will revise Section 3.2 to better connect with this figure and clarify the setup. **Q4. In Table 3, row and column names are the same. What exactly are the “SOTA methods”?** Thank you for pointing this out, we see how it could be confusing. Table 3 is a **cross-dataset evaluation**: each row represents a defense method, and each column is the dataset it was trained on (each paper proposes a new dataset). The diagonal shows **in-domain robustness**, while off-diagonal cells reveal **cross-domain generalization**. We deliberately adopted this structure to **decouple the effects of methods and datasets**, ensuring a more rigorous and fair comparison. Our method achieves the **best or second-best performance across all datasets**, demonstrating its broader applicability. We will clarify this in the table caption. **Q5. In Tables 2 and 4, under “Hard,” why are Irr and Nons scores higher than Cons? Shouldn't Cons be easier?** This is a subtle but important point. In the “Hard” setting, the training set contains more Irr and Nons samples than Cons, introducing both a distributional shift and higher conflict complexity (see Fig. 7). As a result, models trained in this setting tend to perform better on the more frequent categories, even if Cons is conceptually easier. 
While Cons is generally easier for the **original CLIP**, once we fine-tune the model, it begins to **balance trade-offs across tasks** such as consistent vs. irrelevant recognition. This can lead to non-monotonic changes in relative difficulty, where Cons is no longer always the easiest class. **Q6. The writing can be improved. The fine-tuning method seems to lack novelty.** We will revise the paper for clarity, especially in table structure, dataset presentation, and experimental details. As for the fine-tuning method, it is not positioned as a core novelty, but rather as a **minimal but effective validation** of our main claim regarding layer-wise semantic abstraction. The method is **intentionally simple**: our goal is to show that **understanding** where semantic features emerge allows even lightweight interventions to yield significant robustness gains. This underscores the **practical value** of our interpretability-driven analysis. --- We appreciate your thoughtful engagement. We hope our responses have addressed your concerns and clarified our motivation. Thank you for considering a re-evaluation. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! Many points become much clearer with the explanations. (Please correct me if I'm wrong.) However, I still did not find any training-related experiments extended to models other than CLIP and ViT (such as in Table 1 and Table 2). Given the limited scope of the experiments, and that the writing of this paper still needs improvement, I'd like to maintain my original score. --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up and for acknowledging parts of the clarification. Regarding your remaining concern about training-related experiments beyond CLIP and ViT: **our main claim centers on representation dynamics**, specifically, the emergence of semantic abstraction as captured by ID.
Fine-tuning in our study serves only to **support the interpretability-based explanation grounded in ID**, rather than being a primary focus of investigation. Accordingly, our additional experiments (**Figures A1–A5** in the appendix) focused on ID trends across a range of models, including ViT/L, ViT/H, MetaCLIP, SigLIP, and DINOv2, where consistent dynamics were observed. **Given your interest in training-related validation,** we have also conducted fine-tuning experiments on **MetaCLIP**, following the same setup as in Table 1 and Table 2. These results are now available in **Table A2 and A3** at https://anonymous.4open.science/r/TOT-078C/, and demonstrate **a similar robustness pattern**, further confirming the generality of our findings. We would also like to clarify that the experiments in Table 1 **do not involve fine-tuning**. They are inference-time evaluations designed to probe robustness under typographic perturbations. We mention this as the nature of the experiment appears to have been somewhat misinterpreted. We hope this additional information offers clarity. While we understand different reviewers may weigh emphasis differently, we believe the key contribution—**tracing the delayed and fragile emergence of semantics using ID**—is now supported by both theoretical insight and extended empirical results across architectures. We appreciate your time and feedback.
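The layer-wise Intrinsic Dimension (ID) analysis at the center of this thread relies on an ID estimator applied to each layer's representations. A minimal sketch of the commonly used TwoNN estimator (Facco et al., 2017) is below; the thread does not state which estimator the paper uses, so TwoNN is an assumption here, purely for illustration:

```python
import numpy as np

def two_nn_id(x: np.ndarray) -> float:
    """TwoNN intrinsic-dimension estimate (Facco et al., 2017).

    For each point, mu = r2 / r1 is the ratio of the distances to its
    second and first nearest neighbors; under the TwoNN model mu follows
    a Pareto law with shape equal to the intrinsic dimension, giving the
    maximum-likelihood estimate n / sum(log(mu)).
    """
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)   # ignore self-distances
    dists.sort(axis=1)                # each row: ascending neighbor distances
    mu = dists[:, 1] / dists[:, 0]
    return len(x) / np.sum(np.log(mu))

# Sanity check: a 2-D Gaussian cloud embedded linearly in 20-D lies on a
# 2-D plane, so the estimate should land near 2 despite the ambient size.
rng = np.random.default_rng(0)
points = rng.normal(size=(300, 2)) @ rng.normal(size=(2, 20))
print(two_nn_id(points))
```

In a layer-wise study, the same estimator would be applied to the token representations extracted at each residual block, yielding the ID curves compared across text conditions.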
Summary: This paper explores how vision-language models (e.g., CLIP) process text in images via the ToT (Textural or Textual) dataset, showing that early layers rely on visual texture while semantic understanding emerges in the final blocks. Using Intrinsic Dimension (ID) analysis, the paper reveals changing representational complexity across layers and proposes fine-tuning the last layer to counter typographic attacks, yielding substantial performance gains in various defense scenarios. Claims And Evidence: The paper's claim is clear and well-supported by experiments. Through the ToT dataset and ID analysis, the paper shows how the model processes text and visuals across layers, culminating in stronger semantic understanding in the final layer. **Weakness:** 1. The experiments were conducted only on ViT/B-16 and have not been verified on larger-scale ViT models (e.g., ViT/L-14, ViT/H-16) or other image-encoder architectures (e.g., ResNet). 2. The paper classifies stages solely by changes in ID and accuracy. Could metrics commonly used in information bottleneck theory, such as mutual information, be adopted to further support the four-phase explanation of VLM image processing? Methods And Evaluation Criteria: The analysis and fine-tuning strategies fit the problem well. The ToT dataset carefully balances text semantics and visual forms, and metrics (accuracy, ID) effectively measure complexity and defense performance. Theoretical Claims: The paper lacks theoretical explanations, focusing on experimental results but not exploring why semantic understanding emerges late or how ID changes relate to it. Experimental Designs Or Analyses: The experimental design is solid and the analysis methods are effective. Through ID analysis and t-SNE, the authors show how representations evolve across layers and validate the defense strategy with fine-tuning. **Weaknesses:** 1. The experiments could be expanded by including more model architectures to better validate the conclusions. 2.
It is unclear whether all comparison baselines were fine-tuned on the ToT dataset, potentially affecting the reliability of the results. Specifically, it remains unclear whether performance improvements stem from higher-quality data or from the new fine-tuning strategy. Supplementary Material: I have read all the supplementary material. Relation To Broader Scientific Literature: This paper uses ID analysis to reveal changes in representational complexity across different layers, and proposes a defense strategy against typographic attacks. However, the defense strategy is relatively simple to implement and shares core ideas with some existing work, suggesting limited novelty. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** 1. The paper is well-written, with a clear motivation. 2. The design of the ToT dataset contributes a valuable resource for subsequent research in this area. **Weaknesses:** 1. The conclusion that semantic understanding is delayed until the final layer lacks deeper theoretical explanation, and the available experiments only cover ViT-B/16. It remains uncertain whether the same pattern holds for larger ViT models. 2. The training loss used in the fine-tuning is not described in detail. Other Comments Or Suggestions: N/A Questions For Authors: See above weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their close reading and thoughtful comments. We would like to clarify that this paper is primarily an **interpretability study** that analyzes how vision-language models process text in images, using Intrinsic Dimension (ID) as a lens to reveal the transition from textural perception to semantic abstraction across layers. Fine-tuning experiments are not our core contribution but serve to validate our representational insights. We intend our findings to serve not as a conclusion, but as a **foundation for deeper inquiries** into the representational dynamics of multimodal models. We address the reviewer’s concerns point-by-point below. Additional experimental results are available at: https://anonymous.4open.science/r/TOT-078C/ --- **Q1. Generalizability beyond ViT-B/16** We appreciate the concern. Results on **ResNet** (Figure 9) are included in the appendix. Additionally, we evaluate **ViT-L/14, ViT-H/14, SigLIP, and MetaCLIP** in Figures A2–A5. Across all these architectures, we observe that significant shifts in ID values **consistently emerge in the final block**, suggesting that the phenomenon is not specific to ViT-B/16 but generalizes across a broad range of visual encoders. **Q2. Why not use mutual information or other theoretical metrics to support the stage transitions?** We understand the reviewer’s suggestion to explore more theoretical metrics such as mutual information for supporting the stage transitions. While MI is theoretically relevant, its estimation in high-dimensional, structured representations is **fundamentally limited by the bias-variance tradeoff of current estimators**. Methods like MINE or kNN often yield unstable or uninterpretable results across layers due to distributional shifts and the absence of ground-truth joint distributions. Given our focus on semantic emergence, ID offers a more stable and interpretable alternative. 
Our segmentation into representational "four stages" is not intended as a strict theoretical taxonomy, but rather as a **descriptive scaffold** that highlights where semantic abstraction emerges most distinctly. ID is used as a tractable and interpretable proxy for representational complexity, and it is especially well-suited to our goal of diagnosing typographic vulnerability. We view this as a methodological decision, **balancing interpretability with empirical rigor**. **Q3. Could the observed gains stem from data quality rather than the fine-tuning strategy?** To isolate the effect of fine-tuning, we introduced several controls: 1. In Appendix A.2, we replicate our ID analysis on a **distinct dataset** with matched structure. Results remain consistent, reinforcing that the observed gains are not tied to specific image content. 2. In Table 3, we conduct **cross-evaluation**, where each model is tested on all datasets (not just the one it was trained on). 3. All baselines in Table 4 were **fine-tuned on our datasets** to ensure fair comparisons under the same conditions. We believe these controls confirm that performance improvements arise from the **fine-tuning strategy**, not data artifacts. **Q4. The conclusion that semantic understanding is delayed until the final layer lacks deeper theoretical explanation** We believe the lack of a settled theory on when and how semantic understanding emerges in vision-language models reflects the field’s ongoing search for foundational insight, rather than a weakness. Our work contributes to this effort by introducing Intrinsic Dimension (ID) as a **structured, interpretable proxy** for representational complexity. We find that semantic fragility under typographic attacks coincides with sharp ID shifts in the final layers, suggesting a transition toward higher-order abstraction beyond surface features. 
While our interpretation remains exploratory, we argue that **such empirical signals are essential for building theoretical understanding**. By combining observations (Sec. 4), cross-model evidence (Sec. 5), and actionable metrics, we aim **not to close the question, but to make it visible and tractable**. **Q5. Clarify the fine-tuning loss** We use the **standard CLIP contrastive loss** for all fine-tuning experiments. The only modification lies in the construction of **positive pairs**, which we adapt to reflect semantic similarity across different attack settings. We do not introduce new loss functions. Figure B2 provides explicit training details. --- Thank you again for helping improve this work. We hope these clarifications address your concerns and highlight both the rigor and potential impact of our contributions. --- Rebuttal Comment 1.1: Comment: Thank you for providing the additional experiments and clarifications. Your response helps clarify the paper's contributions and addresses my earlier concerns. Hence, I will raise my assessment score to 3. --- Reply to Comment 1.1.1: Comment: Thanks again for revisiting the score; we really appreciate your thoughtful engagement. We're glad the responses were helpful and will clarify these points in the revision. Thanks for helping strengthen the paper.
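The "standard CLIP contrastive loss" referenced in Q5 above is a symmetric InfoNCE objective over a batch of paired image/text embeddings. A minimal NumPy sketch follows, purely as an illustration of the objective, not the authors' implementation:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss used in standard CLIP pretraining.

    Matching image/text pairs sit on the diagonal of the cosine-similarity
    matrix; the loss averages the image->text and text->image
    cross-entropies against that diagonal.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # (batch, batch)

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))       # diagonal = matched pairs

    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Perfectly aligned pairs give a near-zero loss; shuffled pairs do not.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 32))
print(clip_contrastive_loss(emb, emb), clip_contrastive_loss(emb, emb[::-1]))
```

Modifying only the construction of positive pairs, as the authors describe, amounts to changing which entries of the similarity matrix are treated as the "diagonal" targets while leaving this objective unchanged.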
From Token to Rhythm: A Multi-Scale Approach for ECG-Language Pretraining
Accept (poster)
Summary: This paper introduces MELP, a novel multi-modal ECG foundation model that leverages hierarchical supervision at the token, beat, and rhythm levels from clinical text to improve ECG representation learning. Experimental results on multiple public ECG datasets demonstrate that MELP outperforms existing self-supervised and multi-modal models in tasks such as zero-shot classification, linear probing, and transfer learning. Claims And Evidence: The authors assert that ECG signals have an inherent hierarchical structure with three distinct levels. They argue that clinical text naturally encodes meaningful information corresponding to each of these levels. This is motivated by the clinical practice where cardiologists first examine fine-grained waveform details (token level), then group these details into individual heartbeats (beat level), and finally assess the overall rhythm (rhythm level). Although the authors argue that ECG signals possess a hierarchical structure at the token, beat, and rhythm levels, it is difficult to fully embrace token-level learning of ECG and clinical text given that beat-level or lead-level analysis is the standard in ECG interpretation. However, I agree that this approach may reveal novel insights that conventional clinical interpretations might overlook. Methods And Evaluation Criteria: The paper introduces a multi-scale cross-modal pretraining framework that aligns ECG signals with clinical text at three hierarchical levels. At the token level, an encoder-decoder generates report tokens to capture fine-grained waveform details. At the beat level, token embeddings are aggregated into beat representations and aligned with clinical sentences using contrastive learning. At the rhythm level, global ECG representations are created by averaging beat embeddings and aligning them with overall text using a global contrastive loss. 
Overall, the approach is presented in a way that is largely consistent with the intuitive understanding of ECG analysis. Theoretical Claims: There isn't a particularly strong theoretical claim. Experimental Designs Or Analyses: The authors present a thorough evaluation of their proposed model on multiple publicly available ECG datasets (e.g., PTB-XL, CSN, and CPSC2018) across various downstream tasks. They conduct comprehensive evaluations and extensive ablation studies comparing different configurations. The ablation studies involve removing one or more components, with the observed performance drops indirectly confirming the contributions of each level. However, the paper does not offer a fully isolated evaluation of each hierarchical level (token, beat, rhythm). A more granular, independent analysis of each level could further strengthen the empirical support for the model's design. Supplementary Material: The authors have made a commendable effort to support reproducibility by providing the source code, which is a significant strength. Relation To Broader Scientific Literature: This paper extends previous multimodal studies that utilize ECG signals and clinical text. Compared to earlier works, it captures the intuitive perspective of ECG analysis more effectively, and its attempt to evaluate the model fairly using a standardized protocol is particularly noteworthy. Essential References Not Discussed: They referred to recent papers. Other Strengths And Weaknesses: One strength of the paper is its clear and well-written presentation. However, while the authors acknowledge certain limitations, it would have been interesting to see experiments that vary the number of beat quantizations, as this could potentially yield more intriguing results. Other Comments Or Suggestions: - Questions For Authors: As mentioned earlier, it would be beneficial if the paper provided further explanation or experimental results on the following points: 1.
Although token-level learning is an innovative approach, it contrasts with the conventional beat-level or lead-level analysis typically used in ECG interpretation. It would be useful to see additional experiments or detailed explanations on how token-level learning compares with standard methods. 2. The current evaluation assesses the integrated contribution of token, beat, and rhythm levels via ablation studies, where one or more components are removed. However, a more granular, independent evaluation of each level would provide clearer insights into how each contributes to the overall performance. Experiments that isolate token-level, beat-level, and rhythm-level supervision could help verify the specific benefits and potential limitations of each component. 3. Many recent ECG representation learning papers include a reconstruction loss alongside contrastive loss to capture both the generative and discriminative features of the data. The absence of a reconstruction loss in the proposed method may be a deliberate design choice. While reconstruction loss could help retain fine-grained signal details, it might also add complexity and may not be necessary if the contrastive loss sufficiently captures the critical features. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful evaluation and for recognizing the strength of our comprehensive experiments and the novelty of our multi-level supervision design. **We also updated the manuscript accordingly.** **[D.1] Clarification of Token-level Pretraining (Q1)** Thanks for acknowledging our approach may offer novel insights! From our understanding, token-level embeddings may denote fine-grained ECG features, such as P wave shape, QRS duration, and ST segment changes, by modeling local temporal patterns. Compared with the beat level, this provides a more low-level understanding of ECG signals. Although cardiologists may not explicitly mention token-level features, we think incorporating this level of analysis still encourages deep learning models to better capture the fine-grained characteristics of ECG signals, ultimately improving the model's generalizability. **[D.2] Ablation on Isolated Variant (Q2)** Thanks for the valuable comment! As the global rhythm level plays a central role in zero-shot ECG classification, we did not initially report isolated module results. To address this, we conducted additional ablation studies evaluating each level independently. Results are shown in Table D.1. These results show that our full model consistently outperforms each isolated variant, confirming the effectiveness of multi-level supervision.
*Table D.1*

| Loss | PTBXL-Form 1% | PTBXL-Form 10% | PTBXL-Form 100% | CPSC2018 1% | CPSC2018 10% | CPSC2018 100% | CSN 1% | CSN 10% | CSN 100% | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| $\mathcal{L}_{\mathrm{LM}}$ | 52.95 | 63.80 | 76.91 | 64.19 | 73.05 | 85.26 | 69.81 | 79.37 | 84.41 | 72.19 |
| $\mathcal{L}_{\mathrm{Local}}$ | 49.81 | 67.82 | 81.41 | 64.18 | 84.08 | 93.17 | 55.89 | 79.77 | *88.79* | 73.88 |
| $\mathcal{L}_{\mathrm{g}}$ | 57.93 | 72.14 | 82.07 | 78.52 | 87.07 | 92.57 | 75.94 | 82.04 | 86.66 | 79.44 |
| MELP | **63.41** | **76.71** | **83.30** | **88.54** | **91.75** | **94.32** | **78.25** | **84.83** | **90.17** | **83.48** |

**[D.3] Analysis on Number of Beat Tokens** Thanks for your insightful suggestion! Our initial choice of 10 heartbeats was based on the assumption that most ECG recordings in MIMIC-IV-ECG, with a 10-second duration, would contain roughly one beat per second. However, after analyzing the dataset, we found that the median number of heartbeats per recording is approximately 12–13, as shown in Table D.2. To investigate the impact of this hyperparameter, we conducted an ablation study with varying numbers of heartbeat tokens. The results, presented in Table D.3, show that our model is relatively robust to this variation. Notably, using 14 heartbeat embeddings yields slightly better performance than our initial setting.
*Table D.2*

| Beat Count | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | Others |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Frequency | 18357 | 47635 | 93075 | 112010 | 112424 | 93830 | 74027 | 62150 | 47997 | 24509 | 15987 | 11606 | 8347 | 23493 |
| Percentage | 2.5% | 6.4% | 12.5% | 15.0% | 15.1% | 12.6% | 9.9% | 8.3% | 6.4% | 3.3% | 2.1% | 1.6% | 1.1% | 3.2% |

*Table D.3*

| #.Beats | PTBXL-Form 1% | PTBXL-Form 10% | PTBXL-Form 100% | CPSC2018 1% | CPSC2018 10% | CPSC2018 100% | CSN 1% | CSN 10% | CSN 100% | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 63.41 | 76.71 | 83.30 | 88.54 | 91.75 | **94.32** | 78.25 | 84.83 | 90.17 | 83.48 |
| 12 | 62.33 | 76.94 | 84.35 | 88.58 | 92.70 | 93.76 | **79.89** | 87.22 | 90.29 | 84.01 |
| 14 | 64.11 | **78.92** | **84.80** | 87.58 | 92.84 | 94.14 | 79.11 | **87.87** | **91.50** | **84.54** |
| 16 | **64.74** | 76.91 | 83.21 | **89.18** | **93.15** | 94.07 | 78.91 | 87.18 | 90.23 | 84.18 |

**[D.4] Further Discussion (Q3)** Thanks for your comment! From our perspective, contrastive learning excels at learning global representations from ECG. However, its ability to capture fine-grained details is relatively limited. To address this, we have incorporated token-level pretraining based on generation to further enhance the model's detailed understanding of ECG representations. Its benefit can be seen in Tables 5 and 6 of our original manuscript, which show better performance on various downstream datasets. While this added complexity may seem significant, we think it is manageable. Our entire model can be pretrained on 4 RTX 3090 GPUs, with a batch size of 64 per device. We have discussed this point in our revised manuscript.
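The token-to-beat-to-rhythm hierarchy discussed in this thread can be illustrated with a minimal pooling sketch. Uniform chunking with mean pooling is an assumption made here for illustration only; MELP's actual token-to-beat aggregation may differ:

```python
import numpy as np

def multiscale_ecg_embeddings(token_emb: np.ndarray, n_beats: int = 10):
    """Sketch of a token -> beat -> rhythm representation hierarchy.

    `token_emb` has shape (n_tokens, dim). Tokens are grouped into
    `n_beats` contiguous chunks that are mean-pooled into beat embeddings,
    and the beat embeddings are averaged into one global rhythm embedding,
    mirroring the averaging step described for the rhythm level.
    """
    chunks = np.array_split(token_emb, n_beats, axis=0)
    beat_emb = np.stack([c.mean(axis=0) for c in chunks])   # (n_beats, dim)
    rhythm_emb = beat_emb.mean(axis=0)                      # (dim,)
    return beat_emb, rhythm_emb

# A 10-second ECG tokenized into 250 steps of 256-dim features.
tokens = np.random.default_rng(0).normal(size=(250, 256))
beats, rhythm = multiscale_ecg_embeddings(tokens, n_beats=10)
print(beats.shape, rhythm.shape)
```

Each level of such a hierarchy can then be aligned with text at the matching granularity: beat embeddings with sentences via a local contrastive loss, and the rhythm embedding with the full report via a global one.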
Summary: This paper proposes a multimodal self-supervised pretraining method for paired electrocardiograms (ECGs) and text. This method, MELP, is unique in its use of multi-scale representation learning and supervision by breaking down an ECG signal into hierarchical levels of the full rhythm view, the smaller beat view, and the smallest token view. Comprehensive experiments show gains over state-of-the-art ECG-language models on a variety of interpretation tasks under varying amounts of labeled data, ranging from zero-shot to fully supervised settings. ## Update after rebuttal I will maintain my original recommendation of acceptance. Claims And Evidence: Claims appear to be sound, and results are both thorough and convincing at first glance. My only hesitation surrounds the implementation details of baseline methods: were these methods pretrained from scratch on MIMIC-IV-ECG (like MELP, for fair comparison), or were they taken as is (potentially pre-trained on different data) and fine-tuned? Methods And Evaluation Criteria: Yes. This paper leverages large-scale, well-known ECG datasets and evaluation metrics/settings that are consistent with prior literature. Theoretical Claims: N/A Experimental Designs Or Analyses: Experimental design appears sound. Source code has also been provided to aid reproducibility. As mentioned above, my only hesitation concerns whether baseline methods were pretrained from scratch on the same data as MELP or whether they were used “as is”. Supplementary Material: Yes – all of it. Relation To Broader Scientific Literature: This study falls as one of a few recent vision-language models released for multimodal ECG-text representation learning. However, it is unique in its multi-scale treatment of ECG representation learning and superior performance compared to relevant state-of-the-art models. 
Essential References Not Discussed: There are a few areas where more examples of relevant literature could be mentioned, but these are not critical omissions that change the interpretation of results. E.g., there are additional examples of contrastive methods for ECG [1], reconstruction-based methods for ECG [2], hybrid approaches for ECG [3], and other relevant vision-language foundation models for ECG [4-6]. Jin et al. [6] is the most relevant, particularly because it conceptualizes beats as “words”, but I am aware that this represents a concurrent work. [1] Sangha, Veer, et al. "Biometric contrastive learning for data-efficient deep learning from electrocardiographic images." Journal of the American Medical Informatics Association 31.4 (2024): 855-865. [2] Yu, Han, Huiyuan Yang, and Akane Sano. "ECG-SL: Electrocardiogram (ECG) Segment Learning, a deep learning method for ECG signal." arXiv preprint arXiv:2310.00818 (2023). [3] Song, Junho, et al. "Foundation Models for ECG: Leveraging Hybrid Self-Supervised Learning for Advanced Cardiac Diagnostics." arXiv preprint arXiv:2407.07110 (2024). [4] Han, Yu, et al. "Foundation Models in Electrocardiogram: A Review." arXiv preprint arXiv:2410.19877 (2024). [5] Tian, Yuanyuan, et al. "Foundation model of ECG diagnosis: Diagnostics and explanations of any form and rhythm on ECG." Cell Reports Medicine 5.12 (2024). [6] Jin, Jiarui, et al. "Reading your heart: Learning ecg words and sentences via pre-training ecg language model." arXiv preprint arXiv:2502.10707 (2025). Other Strengths And Weaknesses: *Strengths*: - Writing, organization, and presentation are very high-quality and clear to the reader. - The multi-scale treatment of ECG signals is unique and appears to be beneficial for downstream performance. - Experiments are thorough, with large-scale pretraining and validation on a variety of datasets and tasks to many relevant competitive baselines. Ablation studies help identify which components are most useful. 
*Weaknesses*: - A few methodological details can be clarified – no obvious weaknesses! Release of source code and model weights will be important to facilitate reproducibility. Other Comments Or Suggestions: - Sec 2.1: Title should probably read “ECG Representation Learning” (rather than “Presentation”)? - Line 107 RHS: “Yu et al.” is repeated – change this to a parenthetical in-text citation - Equation 7: Comma should go inside equation - Fig. 1: In the caption, a space is needed before “Token Level” Questions For Authors: 1. How specifically were baseline models treated with respect to pretraining? Were they pretrained from scratch on the same data as MELP, or were their weights used “as is” for eventual fine-tuning or zero-shot evaluation? Alternatively, were results in tables ever taken directly from the paper (without the authors running analyses themselves)? Please clarify these details. 2. Do the authors plan to publicly release the code and model weights? 3. In Sec 3.2, how specifically is “cardiology-related data” extracted or filtered from PubMed and Wikipedia? Include these details in the Supplement. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful evaluation and for recognizing the novelty of our multi-scale ECG-language pretraining approach. We also appreciate your positive remarks on the clarity of our writing and the thoroughness of our ablation studies. Below, we address your concerns regarding the experimental section in detail. **For each point, we have updated our manuscript accordingly.** **[C.1] Source of Baseline Results (Q1)** Thank you for your helpful question. The baseline results reported in Tables 2 and 3 are cited from MERL [1], which established a standardized benchmark by pretraining its model along with 10 existing self-supervised approaches on the MIMIC-IV-ECG dataset. We strictly followed the same experimental setup to ensure a fair comparison. Specifically, we adopted the same pretraining dataset, dataset splits, preprocessing pipeline, and fine-tuning hyperparameters as provided in the official MERL GitHub repository. To verify the reported results, we reproduced the MERL model using their released code and pretrained weights (as seen in Table C.1). Our reproduced results were consistent with those in the original paper, supporting their reliability. In some cases, our reproduced performance was slightly lower, likely due to differences in hardware or software environments. To ensure fairness, we report the original (higher) results from the MERL paper in our comparisons. We will clarify this point in the revised manuscript and include the updated explanation accordingly.
*Table C.1*

| | PTBXL-Rhythm | | | PTBXL-Form | | | PTBXL-Sub | | | PTBXL-Super | | | CPSC2018 | | | CSN | | | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | 1% | 10% | 100% | 1% | 10% | 100% | 1% | 10% | 100% | 1% | 10% | 100% | 1% | 10% | 100% | 1% | 10% | 100% | |
| Reproduced results | 45.33 | 83.92 | 86.13 | 56.62 | 66.03 | 76.57 | 71.41 | 79.05 | 83.30 | 81.19 | 84.66 | 86.77 | 61.59 | 80.07 | 88.83 | 62.33 | 80.22 | 83.44 | 75.41 |
| MERL paper | 52.33 | 82.88 | 88.34 | 58.26 | 72.43 | 79.65 | 64.90 | 80.56 | 84.72 | 82.39 | 86.27 | 88.67 | 70.33 | 85.32 | 90.57 | 66.60 | 82.74 | 87.95 | 78.05 |
| Difference | 7.00 | -1.04 | 2.21 | 1.64 | 6.40 | 3.08 | -6.51 | 1.51 | 1.42 | 1.20 | 1.61 | 1.90 | 8.74 | 5.25 | 1.74 | 4.27 | 2.52 | 4.51 | 2.64 |

**[C.2] Release of Source Code and Weights (Q2)** Thanks for your great suggestion! To support reproducibility, we will release the full codebase and pretrained model weights to facilitate further research in the community. **[C.3] Curation of Cardiology-related Corpus (Q3)** Thank you for the comment. We followed the HeartBERT [2] procedure to collect cardiology-related data from PubMed and Wikipedia. For PubMed, we used cardiology journal names and glossaries from SJR, NIH, Aiken, and the Texas Heart Institute to query abstracts via the API. For Wikipedia, we extracted articles under the "Cardiology" category and its subcategories, supplemented with glossary-based queries. This resulted in a curated dataset of ~5.6 GB (912.5M corpus). We have added more details in the Appendix of the revised manuscript.
**[C.4] Missing References and Typos** Thanks for pointing these out! We have corrected the typos and added the previously undiscussed references in our revised manuscript, following your suggestions! **[C.5] Clarification about Methodology Details** Thank you for the positive feedback and helpful suggestions. We have revised the methodology section to improve clarity. **References** [1]. Liu et al. Zero-shot ECG classification with multimodal learning and test-time clinical knowledge enhancement. ICML, 2024. [2]. Gwon et al. Medical language model specialized in extracting cardiac knowledge. Scientific Reports, 2024.
Summary: This study proposes MELP (Multi-scale ECG-Language Pretraining), which introduces an innovative multi-scale supervision mechanism in the field of ECG pretraining. By integrating cross-modal alignment at the token, beat, and rhythm levels, MELP effectively enhances the feature learning capability of ECG signals. Compared to existing methods, MELP achieves significant performance improvements in zero-shot ECG classification, linear probing, and transfer learning tasks, demonstrating exceptional generalization ability, especially in low-data scenarios. Claims And Evidence: The overall argumentation of the paper is relatively clear, and MELP's performance is systematically validated through standardized benchmarks. However, the implementation of token-level pretraining in the paper may not be entirely convincing. The study employs token-level embeddings to predict the masked portion of the corresponding text, yet the text itself provides an overall description of the ECG signal (e.g., "sinus rhythm"). As I understand it, if the goal is to learn fine-grained representations of ECG signals, the corresponding text should also contain fine-grained descriptions. The authors need to further clarify the motivation and experimental design for this aspect. Methods And Evaluation Criteria: Yes Theoretical Claims: I have checked the theoretical section in the main text and found no issues. Experimental Designs Or Analyses: The experimental setup follows the configuration recommended by MERL and is generally reasonable. Supplementary Material: Yes. I have specifically reviewed the appendix, which primarily provides detailed information on the dataset, training and testing procedures, as well as the uni-modal pretraining process. Relation To Broader Scientific Literature: This paper can be associated with the field of ECG-Text multimodal learning, offering new perspectives for accurate disease diagnosis in the future. 
Essential References Not Discussed: The paper does not cite METS (Frozen Language Model Helps ECG Zero-Shot Learning, published in MIDL 2023), which is a pioneering work in the field of ECG-Text multimodal learning. METS first proposed an ECG-Text multimodal zero-shot learning approach, making it a crucial reference for understanding this study. Other Strengths And Weaknesses: The originality of this paper lies primarily in its introduction of the multi-scale concept into multimodal pretraining and its subsequent validation of its effectiveness. However, I believe the main weakness of this paper stems from certain motivational issues in token- and beat-level training. The textual reports used in MELP describe the overall ECG signal rather than providing descriptions at the heartbeat or waveform level. This mismatch in granularity may limit the alignment effectiveness of MELP. Other Comments Or Suggestions: - In the Related Work section, ST-MEM is incorrectly spelled as ST-EME. - Incorrect citation of HeartLang. On page 3, Beat view: Heart beat-sentence Alignment, and Table 1, HeartLang is cited inconsistently in two different ways. Please use the latest citation: https://openreview.net/forum?id=6Hz1Ko087B. Questions For Authors: - The token-level alignment seems to have a granularity mismatch issue. Could you further clarify the motivation behind this design choice? - How are positive and negative samples selected for the rhythm-level contrastive loss? - If there is indeed a text granularity misalignment issue, could this limitation be explicitly acknowledged in the paper’s discussion of current constraints? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and for acknowledging the novelty of our multi-level supervision and the strength of our experiments. We also appreciate your constructive feedback on the token-level design, which we address below. **All responses are updated in the revised manuscript**. **[B.1] Clarification of Token-level and Beat-level Supervision (Q1)** Thanks for your insightful feedback. While some ECG reports summarize rhythm-level findings (e.g., “sinus rhythm”), many contain detailed waveform-level observations. Below are examples from MIMIC-IV-ECG, with fine-grained observations in bold: - Sinus rhythm. **Poor R wave progression** – probable normal variant. **Anterolateral T wave changes** may be due to myocardial ischemia. Abnormal ECG. - Sinus tachycardia. **Short PR interval**. Borderline ECG. - Atrial fibrillation. **Extensive ST-T changes are nonspecific**. Abnormal ECG. - Probable accelerated junctional rhythm. **Low QRS voltages in limb leads**. Abnormal ECG. High-level diagnoses, like "sinus rhythm", still depend on low-level indicators such as P wave consistency and PR interval regularity. Table B.1 illustrates more examples of how clinical interpretations often depend on both global and local ECG features. Our token-level pretraining uses a GPT-style objective, rather than masked token prediction, to generate full diagnostic reports from token-level ECG embeddings. By providing full waveform features to the decoder, we allow the model to learn these relationships and generate reports with varying levels of granularity. It may encourage the model to analyze local features and implicitly learn these indicators. Thus, our token-level pretraining design could enhance the model’s ability to learn more generalized, fine-grained ECG representations. As for beat-level pretraining, it aims to align beat embeddings with corresponding sentences.
While general descriptions in sentences may hinder this process, we think that detailed descriptions encourage the model to understand the ECG at a beat-level. As shown in the ablation studies in Tables 5 and 6 of the original manuscript, both token-level and beat-level pretraining improve the learned ECG representations, which further supports our claims. We also agree that including more detailed descriptions in the ECG report would further enhance token-level and beat-level pretraining. Please refer to Reply [B.3] for more details. *Table B.1* | Clinical Diagnosis | ECG Criteria | Local Feature Mentioned | |------------------------------------------|--------------------------------------------------------------------------------------------------|------------------------------------| | **Atrial Fibrillation** | Irregularly irregular rhythm | False | | | No P waves | True | | | QRS complexes usually < 120ms | True | | **Sinus Rhythm** | Regular rhythm at a rate of 60–100 bpm | False | | | Each QRS complex is preceded by a normal P wave | True | | | The PR interval remains constant | True | **[B.2] Details of Positive and Negative Pairs (Q2)** Thank you for the helpful question. We train with a batch size of 64 per device across 4 GPUs. Each ECG and its paired report form a positive pair, while all other ECG-report combinations within the mini-batch (i.e., 255 pairs) are treated as negatives for contrastive learning. **[B.3] Further Discussion of Limitation (Q3)** Thank you for the suggestion. We further clarified motivation of token-level pretraining in Reply [B.1]. Additionally, we agree that incorporating more explicit fine-grained knowledge, such as detailed waveform descriptions for rhythm diagnosis, could provide stronger supervision and better alignment with clinical criteria. For instance, breaking down terms like "sinus rhythm" into their underlying features (e.g., P waves before each QRS, consistent PR intervals) may enhance learning. 
We have explicitly added this discussion to the revised Limitations section. **[B.4] Missing References and Typos** Thanks for the careful feedback. We have corrected the typos and added the previously undiscussed references in the revised manuscript.
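The in-batch contrastive setup described in [B.2], where each ECG-report pair is a positive and the remaining in-batch combinations act as negatives, corresponds to a standard symmetric InfoNCE loss. The sketch below is a generic illustration with hypothetical names, not the authors' implementation:

```python
import numpy as np

def info_nce(ecg_emb, txt_emb, temperature=0.07):
    """Symmetric in-batch contrastive loss: row i of each matrix is one
    ECG-report pair (the positive); all other rows act as negatives."""
    e = ecg_emb / np.linalg.norm(ecg_emb, axis=1, keepdims=True)
    t = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = e @ t.T / temperature            # (B, B) cosine similarities
    diag = np.arange(len(logits))             # positives sit on the diagonal

    def xent(l):                              # row-wise cross-entropy
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[diag, diag].mean()

    # average the ECG-to-text and text-to-ECG directions
    return 0.5 * (xent(logits) + xent(logits.T))

# a global batch of 256 (64 per device x 4 GPUs) gives 255 negatives per pair
rng = np.random.default_rng(0)
ecg, txt = rng.normal(size=(256, 32)), rng.normal(size=(256, 32))
loss = info_nce(ecg, txt)
```

With perfectly aligned embeddings the diagonal dominates and the loss approaches zero, which is the training signal pulling paired ECGs and reports together.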
Summary: The authors propose Multi-scale ECG-Language Pretraining (MELP), which is a two-step process: First is the cardiology language pretraining step, which pretrains a text encoder using a cardiology-focused corpus to maximize the language model’s utility for cardiology. The second step is the multimodal pretraining step, which integrates three levels of cross-modal supervision (token, beat, rhythm) for ECG-language pretraining. Claims And Evidence: In Section 3.3 Motivation, the authors state “Cardiologists interpret ECG signals in a hierarchical manner, analyzing features at multiple scales – from individual waveform components (tokens) to heartbeats (beats) and overall rhythm”. This claim must be supported with appropriate citation from reputable sources, as it is a crucial statement that inspired the proposed model MELP. Methods And Evaluation Criteria: - The proposed method makes sense, but the evaluation criteria lack depth. Specifically, in Section 3.3 Token-view: Learning to Generate Captions, the authors state “Flexibility in adapting to downstream tasks, such as ECG report generation and ECG-based question-answering” as a key advantage of using a generative pretraining approach. However, no experiments were conducted to utilize this advantage, and therefore it is unclear why this “advantage” is necessary in this current SSL setting that only conducts ECG classification experiments. - The authors mention in Section 2.1 ECG Presentation Learning “Recent efforts (Oh et al., 2022; McKeen et al., 2024) combine contrastive and generative objectives to develop ECG foundation models”. However, the authors state that these methods overlook rich semantic correlations between ECG signals and clinical text reports, because the methods are pre-trained on only ECG datasets. In the Experiments section, these recent efforts are not included in the baseline models used for comparison against MELP.
Therefore, the author's statement is speculative and not substantiated with quantitative evaluations. - In Section 4.1 Implementation Details the authors mention using Wav2Vec 2.0 architecture for the ECG encoder. However, there are no explanations on why the specific architecture is chosen, or comparison between other architectures such as the CMSC architecture from CLOCS (Kiyasseh et al., 2021) or Wav2Vec 2.0 + CMSC + Random Lead Masking from (Oh et al., 2022, McKeen et al., 2024). - The authors conduct evaluations on multiple datasets and various evaluation settings (zero-shot, linear probing, cross-domain adaptation), but only on the ECG classification task. This does not truly test the model’s generalizability beyond ECG classification, which is a limitation given the strong generalizability claims of hierarchical representation learning. Theoretical Claims: No. The paper does not present theoretical claims requiring proof verification. The contributions are empirical rather than theoretical, focusing on the effectiveness of MELP compared to other baseline models in ECG classification. Experimental Designs Or Analyses: Yes I checked the validity of experimental designs and analyses, specifically all classification performance results shown in Tables 2-7. There are no issues. Supplementary Material: I checked the dataset details in Table 9 and training details in Appendix B. Relation To Broader Scientific Literature: Research on Self-supervised learning (SSL) methods for ECG-related tasks has gained significant attention in recent years due to its potential to enable various downstream tasks without relying on extensive annotated ECG datasets. The contributions of this paper align well with the broader scientific literature. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - The paper proposes a novel multi-scale approach for ECG-Language Pretraining - The paper is well-written and easy to understand - The Ablation Study is conducted thoroughly to answer some questions regarding the effectiveness of multi-scale approach and cardiology language pretraining Weaknesses - As mentioned in Evaluation Criteria, the experimental section lacks depth - Some questions remain unanswered such as why only 10 learnable queries are used in the Heart beat-sentence Alignment section Other Comments Or Suggestions: There are minor typos - Line 148 pretraing -> pretraining - Line 427 Table 4.3 -> Table 8 Questions For Authors: - Why were works such as Oh et al., 2022; McKeen et al., 2024 not included in the baseline models for evaluation? - Why was the Wav2Vec 2.0 architecture chosen for the ECG encoder? Why were architectures such as CMSC (Kiyasseh et al., 2021) or Wav2Vec 2.0 + CMSC + RLM (Oh et al., 2022) not chosen / compared to the Wav2Vec 2.0 encoder? - Were any additional downstream tasks conducted beyond ECG classification to show the strong generalizability of hierarchical representation learning? (e.g., Patient Identification) - Why were 10 learnable tokens used in Section 3.3 Beat view: Heart beat-sentence Alignment? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the thoughtful and detailed review. **All responses have been updated in our revised manuscript**. **[A.1] Support for Multi-level Motivation** Thanks for your suggestion. From prior work, we have found several sources of support for diagnosing ECGs through multi-level observations: - **Token-level**: Atrial Fibrillation is identified by the absence of P waves and QRS duration <120ms [1]. - **Beat-level**: Sinus rhythm requires each QRS complex to be preceded by a normal P wave [2]. - **Rhythm-level**: Left Bundle Branch Block diagnosis requires a variable ventricular rate [2]. As such, we think our proposed multi-scale ECG pretraining is reasonable. We have also incorporated more support in our manuscript. **[A.2] Additional Compared Baselines (Q1)** Thanks for your comment. As you suggested, we have compared with Wav2Vec 2.0 + CMSC + RLM (Oh et al., 2022) and ECGFM (McKeen et al., 2024) in Table A.1 below. As shown, MELP outperforms these two approaches in nearly all experimental settings.

*Table A.1*

| Method | PTBXL-Form | | | CPSC2018 | | | CSN | | | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| | 1% | 10% | 100% | 1% | 10% | 100% | 1% | 10% | 100% | |
| Wav2Vec 2.0 + CMSC + RLM | 52.72 | 67.81 | 80.72 | 75.70 | 88.16 | 92.61 | 65.65 | 78.82 | 87.87 | 76.67 |
| ECGFM | 60.95 | 74.99 | **85.54** | 82.18 | 89.52 | 93.26 | 71.51 | 83.17 | 88.89 | 81.11 |
| MELP | **63.41** | **76.71** | 83.30 | **88.54** | **91.75** | **94.32** | **78.25** | **84.83** | **90.17** | **83.46** |

**[A.3] Justification of ECG Encoder Architecture (Q2)** Thanks for your suggestion. Our understanding is that CMSC is a patient-specific contrastive training approach, and Random Lead Masking is a data augmentation strategy. Neither serves as a network backbone. We have used Random Lead Masking by default in our implementation (see code).
We also tested CMSC by pretraining the ECG encoder before multimodal training (as it cannot be directly applied within our framework). As shown in Table A.2, the CMSC variant consistently underperforms, with an average drop of -3.53%. Therefore, we did not adopt it in the final model.

*Table A.2*

| ECG Encoder | PTBXL-Form | | | CPSC2018 | | | CSN | | | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| | 1% | 10% | 100% | 1% | 10% | 100% | 1% | 10% | 100% | |
| Wav2Vec 2.0 | **63.41** | **76.71** | **83.30** | **88.54** | **91.75** | **94.32** | **78.25** | **84.83** | **90.17** | **83.81** |
| Wav2Vec 2.0 + CMSC | 62.07 | 75.55 | 82.57 | 80.69 | 88.40 | 92.91 | 71.89 | 81.00 | 87.42 | 80.28 |

**[A.4] Additional Downstream Tasks (Q3)** To further show the generalizability of MELP, we evaluated it on two tasks: **ECG report generation** and **patient identification**. We evaluated report generation using the ECGBench dataset (500 samples), comparing it to the 7B PULSE [3] model. As shown in Table A.3, MELP significantly outperforms PULSE, highlighting its strong fine-grained ECG understanding. Moreover, we evaluated MELP on patient identification in PTB-XL (Table A.4), where it achieved the highest Top-K recall. These results further demonstrate MELP's superior generalizability. We agree that ECG question answering is a promising direction; however, this task requires text encoder finetuning, so we will include it in future work.
*Table A.3*

| Models | Size | BLEU-1 | BLEU-4 | METEOR | ROUGE-L | BERTScore F1 |
|----------|------|--------|--------|--------|----------|---------------|
| PULSE | 7B | 5.12 | 0.83 | **13.76** | 8.15 | 10.96 |
| MELP | 284M | **13.02** | **1.87** | 11.28 | **18.50** | **44.08** |

*Table A.4*

| Method | R@1 | R@5 | R@10 |
|------------------------------|-------|-------|-------|
| Wav2Vec 2.0 + CMSC + RLM | 39.8 | 52.14 | 59.21 |
| ECGFM | 49.18 | 60.70 | 67.76 |
| MERL | 16.12 | 26.32 | 31.74 |
| MELP | **49.67** | **66.12** | **70.89** |

**[A.5] Analysis of Number of Beat Tokens (Q4)** Thanks for your valuable question. We initially chose 10 heartbeats based on the assumption of one beat per second in 10-second ECGs. Please see Response [D.3] under Reviewer 5Gq6 for more analysis. **References** [1]. Mitchell et al. Canadian Cardiovascular Society atrial fibrillation guidelines 2010: Prevention and treatment of atrial fibrillation following cardiac surgery. Can J Cardiol, 2011. [2]. Mattu et al. Electrocardiography in emergency, acute, and critical care. American College of Emergency Physicians, 2019. [3]. Liu et al. Teach Multimodal LLMs to Comprehend Electrocardiographic Images. arXiv preprint, 2024.
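For context on the patient identification metric in Table A.4, Top-K recall is typically computed from embedding similarities along the following lines (a generic sketch assuming one gallery recording per query patient; the variable names are hypothetical and this is not the authors' code):

```python
import numpy as np

def recall_at_k(query_emb, gallery_emb, gallery_ids, query_ids, ks=(1, 5, 10)):
    """Fraction of queries whose true patient appears among the top-K
    most similar gallery embeddings (cosine similarity)."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sim = q @ g.T                          # (n_query, n_gallery)
    ranked = np.argsort(-sim, axis=1)      # best match first
    hits = np.asarray(gallery_ids)[ranked] == np.asarray(query_ids)[:, None]
    return {k: float(hits[:, :k].any(axis=1).mean()) for k in ks}

# toy example: 3 patients, gallery embeddings close to their queries
q = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
g = np.array([[0.9, 0.1], [0.1, 0.9], [1.0, 0.9]])
print(recall_at_k(q, g, gallery_ids=[0, 1, 2], query_ids=[0, 1, 2], ks=(1,)))
# -> {1: 1.0}
```

R@1, R@5, and R@10 in Table A.4 then follow by setting `ks=(1, 5, 10)` over the full query/gallery split.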
To Steer or Not to Steer? Mechanistic Error Reduction with Abstention for Language Models
Accept (poster)
Summary: This paper introduces Mechanistic Error Reduction with Abstention (MERA), a framework for conditional LM activation steering that addresses a fundamental challenge in the steering literature: that interventions can often hurt overall performance and are often applied unnecessarily. Unlike traditional steering methods that apply fixed intervention strengths, MERA trains linear error estimators to predict model errors from activations and makes calibrated steering decisions - intervening only when the predicted error exceeds a threshold and with strength proportional to the estimated error magnitude. The work expands upon existing steering literature by focusing on continuous error mitigation for improving classification performance rather than binary alignment goals like reducing toxicity prevalent in the steering literature. The authors include experiments with multiple classification tasks and compare against popular baseline methods. Claims And Evidence: This work claims that MERA is an improvement over baseline steering techniques. While MERA does seem to generally outperform baseline steering techniques, this claim can benefit from additional clarification. **Inconsistent Gains**: Line 361 notes that MMLU-HS does not meaningfully benefit from MERA. The authors note that the increased class cardinality over the other binary classification tasks may make it an especially difficult setting to steer. That MERA struggles to generalize beyond binary classification tasks suggests that the technique may suffer from challenges in generalization faced by baseline techniques (line 81). **MERA May Still Be Expensive**: The paper implies that MERA is more practical than baseline techniques since baselines often require expensive hyperparameter searches (Lines 199 & 416).
However, MERA's multi-step approach, including training an auxiliary model as well as searching for optimal hyperparameters (Line 300), suggests that MERA is not an obvious efficiency improvement over baseline techniques when it comes to hyperparameter optimization. Methods And Evaluation Criteria: **Benchmarks**: The studied benchmarks overall make sense for studying simple classification tasks. However, it is unclear why only the High School splits of MMLU were included. Including all MMLU splits would give readers a sense as to how well MERA generalizes to more difficult topics. **No Steering Results in Main Paper**: The main paper does not include the absolute baseline results. The main results instead include a bespoke metric with relative improvements in performance. While this bespoke metric can tell readers whether accuracy improves or regresses, it is unclear to what absolute degree MERA improves benchmark performance. Theoretical Claims: NA Experimental Designs Or Analyses: See previous section Supplementary Material: Line 662 states that code will be provided. However, no codebases appear to be provided with the submission. Relation To Broader Scientific Literature: Activation steering has emerged as an alternative method to prompting for dynamically controlling LM behavior at inference time. While great progress has been made in eliciting behavior of interest via steering, adverse effects on overall performance remain one of the primary open problems in the field. This paper introduces a conditional steering technique for sidestepping performance issues by estimating the LM's error ahead of time and only steering if the situation calls for it. This is a timely and relevant contribution to the activation steering literature.
Essential References Not Discussed: There are no essential references which wouldn't be considered concurrent work as far as I can tell. Other Strengths And Weaknesses: Safety-related behavior is a common focus of the steering literature. This paper focuses on steering for error reduction on classification tasks, which is a differentiator and a strength. The paper also raises the interesting point that this setting, where there is a verifiable correct answer, is a practical test-bed for steering techniques. Other Comments Or Suggestions: The citation on line 671 appears broken. Questions For Authors: Can the authors please clarify how their approach to conditional steering differs and or improves upon [1]? Lee, B.W., Padhi, I., Ramamurthy, K.N., Miehling, E., Dognin, P.L., Nagireddy, M., & Dhurandhar, A. (2024). Programming Refusal with Conditional Activation Steering. ArXiv, abs/2409.05907. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for all these important remarks. We’re encouraged that you found our work a timely and relevant contribution to the steering literature and that our focus on steering for error reduction is a differentiator/strength! We’ve addressed all of your remarks below. Please let us know of any remaining questions. **1) Inconsistent gains.** We also note that MMLU-HS is more difficult to steer than the other tasks evaluated (except Llama-IT!). This corroborates [3] and aligns with a broader limitation shared by all additive linear steering methods — assuming that the activations and the target quantity (here, the model error) are linearly related. One of MERA’s key contributions is to explicitly detect and gracefully handle these “unsteerable” instances or datasets. Our selective steering mechanism (see Eq. 7), together with the abstention criterion (see Sec. 3.2), ensures that we intervene only when confident (which is controlled by the parameter $\delta$). Rather than steering on all inputs and datasets, MERA steers conditionally. On reflection, we see that attributing MMLU's limited steerability only to class cardinality is premature. Other confounding factors such as label ambiguity, semantic class overlap, or general task complexity may be equally or more relevant. It may be that steering toward correct answers might be easier when classes lie along a single semantic axis (e.g., yes/no, positive/negative) than in a general MCQA format. This needs to be empirically verified — characterising steering techniques’ limits wrt distinct dataset properties deserves careful consideration and is an exciting direction for future work (see e.g., [1]). We've now removed the prior formulation citing cardinality as the primary cause for MMLU-HS’s limited steerability. We appreciate your attention to this detail and the opportunity to improve our paper in this regard!
**2) MERA may still be expensive.** You're right that MERA introduces its own components, such as probe training and calibration. However, these steps are principled and, crucially, MERA avoids per-model*, per-layer, per-token and per-multiplier hyperparameter sweeps, which is the common sweep strategy in activation steering**. This creates an enormous search space and substantial computational cost, typically forcing the practitioner to prioritise one hyperparameter over another (we refer to Appendix A.1 for a discussion, which we’re also currently expanding). MERA replaces this process with a search of ~10 $\alpha$ values per task (dataset and model combination). This seriously reduces computational costs and, as we see it, establishes MERA as the more efficient method in comparison. *Even with significant compute, the best outcome of steering papers is often model-specific advice like “steer on layer 13 for Llama-7B”, which may not generalise well. **An Anthropic blog post [8] summarises current steering practice well: "For all evaluations, we varied the 'steering factor' between -20 and 20. This decision was arbitrary." **3) Benchmarks.** We used the MMLU-HS subset simply because the questions would be consistent in difficulty and thus make per-sample comparisons cleaner. Practically, this subset also matched the sizes of the other datasets in the paper. But we agree that it would be interesting to evaluate MERA on more difficult subsets of MMLU. Consequently, we're currently running the MMLU "professional" subset as well. **4) No steering results in main paper.** As steering is a post-training intervention, we found it more important to focus on reporting the _relative_ steering performance in the main manuscript (over the absolute results), but we appreciate that absolute metrics provide complementary information that can be helpful for understanding true steering effects!
In Table 4 in the Appendix we included all the unsteered accuracies for each model and dataset combination (from which steering gains could be indirectly judged) but to make it simpler for the reader, we’ll complement this table with an additional table containing steered accuracies as well as the raw deltas. We will discuss these results in the main manuscript as well. Also, to understand why a bespoke metric was introduced, see answer (3) of Reviewer u9yC. **5) Code.** Please see answer (3) of Reviewer 7dxY. **6) Broken citation.** Line 671 is fixed. **7) Difference to CAST.** Thank you for sharing this relevant work! We have read the paper (and cite it in Appendix A.1). Our method differs from CAST in several ways (i) conditioning logic: MERA conditions on predicted error and CAST on cosine similarity between “condition vectors” and the model’s activations exceeding $\theta$, (ii) steering direction: MERA learns probe weights per layer and CAST uses PCA on contrastive pairs, with multiple directions, (iii) strength selection: MERA solves for $\lambda^{\star}$ with closed form and CAST uses a grid search. [8] https://www.anthropic.com/research/evaluating-feature-steering --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response to my suggestions. My primary open concern is missing MMLU data. Do the authors expect to have the remaining MMLU experiment results available before the end of the discussion period? Update (April 7th): I have raised my score in response to the additional MMLU experiments --- Reply to Comment 1.1.1: Comment: Thank you for responding and sharing your remaining concern! That's really helpful. We appreciate the opportunity to provide these additional MMLU results. Below we report both the raw unsteered and steered accuracies on the professional MMLU subset across the six language models. 
We also include four common baselines: steer with prompt (adding a suffix: "Think before you answer"), with additive probe and with contrastive (50, 100) pairs. The table reports accuracy in "last" and "exact" prediction modes. **Table: MMLU-Professional – Accuracy (Delta)** | Model | Mode | Unsteered | With Prompting | With Additive Probe | With Contrastive-50 | With Contrastive-100 | With MERA | |------------------|-------|-----------|---------------------|------------------------|------------------------|-------------------------|-------------------| | Llama-3.2-1B | Last | 0.252 | 0.257 (+0.01) | 0.238 (-0.01) | 0.233 (-0.02) | 0.224 (-0.03) | 0.262 (+0.01) | | | Exact | 0.233 | 0.086 (-0.15) | 0.219 (-0.01) | 0.229 (-0.00) | 0.210 (-0.02) | 0.262 (+0.03) | | Llama-3.2-1B-IT | Last | 0.295 | 0.248 (-0.05) | 0.300 (+0.01) | 0.257 (-0.04) | 0.252 (-0.04) | 0.295 (+0.00) | | | Exact | 0.262 | 0.152 (-0.11) | 0.052 (-0.21) | 0.248 (-0.01) | 0.271 (+0.01) | 0.262 (+0.00) | | Gemma-2-2B | Last | 0.252 | 0.252 (+0.00) | 0.252 (+0.00) | 0.252 (+0.00) | 0.252 (+0.00) | 0.252 (+0.00) | | | Exact | 0.000 | 0.010 (+0.01) | 0.000 (+0.00) | 0.019 (+0.02) | 0.005 (+0.01) | 0.067 (+0.07) | | Gemma-2-2B-IT | Last | 0.248 | 0.271 (+0.02) | 0.252 (+0.00) | 0.305 (+0.06) | 0.262 (+0.01) | 0.248 (+0.00) | | | Exact | 0.190 | 0.195 (+0.01) | 0.205 (+0.02) | 0.224 (+0.03) | 0.167 (-0.02) | 0.195 (+0.01) | | Qwen2.5-3B | Last | 0.333 | 0.338 (+0.01) | 0.252 (-0.08) | 0.319 (-0.01) | 0.310 (-0.02) | 0.333 (+0.00) | | | Exact | 0.310 | 0.262 (-0.05) | 0.233 (-0.08) | 0.290 (-0.02) | 0.271 (-0.04) | 0.310 (+0.00) | | Qwen2.5-3B-IT | Last | 0.190 | 0.195 (+0.01) | 0.190 (+0.00) | 0.271 (+0.08) | 0.271 (+0.08) | 0.271 (+0.08) | | | Exact | 0.190 | 0.195 (+0.01) | 0.190 (+0.00) | 0.271 (+0.08) | 0.271 (+0.08) | 0.195 (+0.01) | --- Some key observations: - MERA is safe — matches or improve accuracy in all cases (at worst no improvement, but at best 8% improvement in accuracy in the exact 
match mode, which measures accuracy on the model’s actual generated answer).
- Baselines like the contrastive methods, additive probes and prompting are more variable — sometimes helpful, sometimes sharply degrading performance (see e.g., Llama-3.2-1B and Llama-3.2-1B-IT with sharp drops).

These results support the findings already reported in the main paper. Our takeaway is, of course, that **steering should not always be applied**. The lower the value of $\delta$, the more MERA abstains and, consequently, the safer it is. This is unlike baselines, which apply the same fixed-strength intervention to all inputs.

**Changes to the paper.** We highlighted the absolute accuracies here, but we will of course integrate all the additional results into the paper: adding raw accuracy/error tables (like the one above) with deltas, updating SPI in Table 1, and, importantly, expanding our discussion with an additional paragraph that more directly discusses the limitations of additive linear steering, echoing our previous response to your review. Lastly, we’re also exploring **nonlinear extensions** of MERA, as discussed in Reviewer H7hE answer (6). We believe non-linear probe-based steering can offer more expressive yet still safe (non-degrading) interventions. We’d be happy to clarify anything further. We hope this additional evidence resolves your concern and justifies raising the score.
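To make the contrast with fixed-strength baselines concrete, here is a simplified sketch of the conditional, minimum-norm update; it assumes a single linear error probe $e(z) = w^\top z + b$ and a scalar threshold, with illustrative names, and is not the paper's exact per-token, per-layer formulation:

```python
import numpy as np

def conditional_steer(z, w, b, alpha):
    """Minimum-norm update that lowers the probe-predicted error to the
    calibrated threshold alpha; abstains if the error is already low.
    Simplified sketch, not the paper's exact equation."""
    predicted_error = float(w @ z + b)
    if predicted_error <= alpha:      # abstain: no intervention needed
        return z
    # Solve min ||d||^2 s.t. w @ (z + d) + b = alpha  =>  d lies along w
    lam = (alpha - predicted_error) / float(w @ w)
    return z + lam * w

# Toy check: steering brings the predicted error exactly down to alpha
w, b = np.array([1.0, 2.0]), 0.0
z = np.array([1.0, 1.0])              # probe-predicted error = 3.0
z_steered = conditional_steer(z, w, b, alpha=0.2)
```

A fixed-strength baseline, by contrast, would add the same scaled direction to every input regardless of the current predicted error.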
Summary: The authors propose a new steering technique (MERA). MERA formulates steering as an error reduction problem. It first trains a linear probe to determine the linear direction which is most effective for reducing the error. It then adaptively selects a steering multiplier alpha based on how far the prediction is from the desired threshold. Unlike previous methods, which use a global fixed alpha, MERA chooses alpha adaptively on a sample-wise and token-wise basis. Compared to previous work, MERA solves the problem of under-steering or over-steering specific examples. As a result, if the prediction was already correct, MERA avoids steering. The authors compare to prior work (contrastive activation addition and probe-based steering). They also ablate different parts of their methodology (MERA baselines). They evaluate on four different binary tasks, and claim MERA generally outperforms baselines.

## Update after rebuttal

The authors have improved the paper substantially with additional experiments, as well as improved the quality of writing. I have updated my score to a 4.

Claims And Evidence: The authors claim that existing methods may 'over-steer' or 'under-steer', and use this as a motivating factor for MERA. It is not clear to what extent this happens with existing methods. To demonstrate this more clearly, the authors should consider doing UMAP visualizations of positive and negative activations before and after steering. The authors compare to previous methods like CAA. However, CAA requires a hyperparameter sweep to determine the optimal layer for steering. It is also apparent that the authors did perform this layer sweep for MERA and related baselines. Hence I am concerned that the baseline is weak. I think the authors should publish their layer sweep curves for CAA. The authors base all their evaluation on SPI. It is difficult to understand what this metric measures and I would appreciate a more in-depth explanation with figures.
I would also appreciate an explanation of why other, simpler, metrics are flawed. The authors also claim that MERA outperforms baselines. I agree with this, but would add some caveats - the models considered are mostly quite small and the tasks considered seem relatively simple. The authors claim MERA can trade off safety and efficiency (fig 5). I did not understand this; what is delta and where did it come from? Why is (1-delta) interpreted as confidence? This claim could be explained much better. The authors mention in Discussion that "LMs are frustratingly error prone". It is unclear what evidence was provided to support this claim - did the authors evaluate the models with no steering on the tasks provided? Methods And Evaluation Criteria: The proposed steering method makes sense. The evaluation benchmarks and metrics also broadly make sense. Theoretical Claims: The authors claim their steering method precisely solves the error reduction problem. I did not rigorously check the proof, but the broad argument makes sense. Experimental Designs Or Analyses: Yes, I checked the experimental design and analysis. Please refer to 'claims and evidence' above. Supplementary Material: No. There was no supplementary material provided at time of writing Relation To Broader Scientific Literature: Better methods for steering are valuable as they facilitate few-shot adaptation; this has benefits for transfer learning and personalization. The method seems directly applicable to small LLMs run on local, consumer-grade hardware for specific tasks. More generally, the mathematical formulation provided may become a building block in future work. Essential References Not Discussed: Not as far as I am aware. Other Strengths And Weaknesses: It is currently unclear to what extent the authors' findings generalise beyond the setting considered in this paper. Currently, the authors primarily consider rather small LLMs and rather simple classification tasks. 
Conceptually, the method seems hard to generalise to free-form generation, since it involves training linear probes. Other Comments Or Suggestions: I think Fig 3 (overview of MERA) should be prominently displayed on page 2. There are many mathematical quantities defined throughout the paper. It would help to have a summary table of definitions and interpretations (preferably co-located with Fig 3). Generally the writing contains many dense mathematical equations. It would help the reader to unpack these equations and make them easier to understand. Plausibly, it is not important to have these equations in the main paper at all - the interpretations seem more important. Questions For Authors: Did you do any experiments on larger models? Did you try any free-form generations? Do the answers make sense when you do that? How did you convince yourself that current methods 'over-steer' or 'under-steer'? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the very detail-oriented and helpful review! We’re excited that you believe our work has benefits for transfer learning and personalisation and that our mathematical framework may become a building block in future work. We have addressed all of your points below:

**1) Evidence for over-/under-steering.** Thank you for the suggestion — visualising activation space geometry (as in [4, 6]) can indeed help interpretation. However, instead of using UMAP, we now provide a more direct analysis of over- and under-steering effects by reporting transition counts on a sample-by-sample basis: how many examples move from incorrect to correct (0→1), correct to incorrect (1→0), or remain incorrect (0→0) or correct (1→1). This analysis will be included in the Appendix together with the SPI results and referenced in the main manuscript.

**2) Baseline layer sweep.** Please see the answer (2) in Reviewer H7hE.

**3) Evaluation with SPI.** We defined a new metric (SPI) because we could not find an existing metric that expresses not only improvement but, critically, also _degradation_ in a bounded, interpretable way. Existing metrics (e.g. from OpenAI [7]) only track positive gains. By normalising relative to the performance ceiling (or floor), the reader gets a sense of how _good_ the steering method is relative to its best or worst case. How much (in percent) does steering improve or degrade LM performance? If SPI=1, steering makes the LM a perfect classifier. If SPI=-1, steering made the LM incorrect on all test samples. SPI is easy to interpret and particularly helpful when comparing LMs side-by-side (which often differ vastly in initial unsteered performance!). We’ll make sure to add an expanded, clearer motivation of the metric in the paper.

**4) Caveats on scope.** Totally fair — our experimental scope includes language models up to 3B parameters and multiple-choice datasets.
Given our compute constraints, we prioritised coverage across model types (base vs. instruction-tuned, across three distinct families), and chose supervised tasks to allow controlled evaluation where error can be perfectly recovered at distinct token positions (rather than approximated, e.g., using external oracle LMs). We'll emphasise our scope better in Section 7.

**5) Clarifying $\delta$.** In our method, $\delta$ is user-defined and sets the confidence level for steering. For instance, setting $\delta$ = 0.05 corresponds to a 95% confidence threshold: steering is applied only when we're statistically confident that it will improve performance. To make room for new clarifications from this rebuttal, we’ve decided to remove this section in the revised manuscript! Thank you again!

**6) LM error proneness.** Thank you for flagging this — we’ve adjusted the wording to “[can be] frustratingly…” instead of “[are] frustratingly…” to better reflect variability. As shown in Table 4 (Appendix), even capable base/IT models can sometimes yield low accuracy (e.g., 5.6%) and high error rates (e.g., 0.89) on certain tasks.

**7) Clarifying generalisability.** While we focus on classification tasks for clarity and control, MERA is a general framework. It only requires a real-valued or binary signal to train the probe — so it can, in principle, be applied to other supervision types like truthfulness, toxicity, or helpfulness! Our optimisation and calibration steps are agnostic to task type and independent of model size. To apply MERA to free-form generation, each output should be paired with a target label. This can be done, for instance, using external oracles or LM-based rating systems. We’ll clarify this in Section 7 to better highlight MERA’s broader applicability.

**8) Formatting.** We have revised Figure 3 for improved clarity! We have also streamlined the writing and notation in Sections 2 and 3 to make the mathematical content accessible.
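To make point 7 above concrete, here is a minimal sketch of the probe-fitting step: fit one linear probe per layer against any real-valued supervision signal, then rank layers by RMSE (the criterion we also use for baseline layer selection). Names are illustrative, and the paper's probes may differ in training details:

```python
import numpy as np

def fit_error_probes(acts_per_layer, targets):
    """Fit a linear probe (least squares with a bias column) per layer to
    predict a real-valued target from activations.
    Returns a list of (weights, rmse) pairs, one per layer."""
    probes = []
    for X in acts_per_layer:                        # X: (n_samples, d_layer)
        Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
        w, *_ = np.linalg.lstsq(Xb, targets, rcond=None)
        rmse = float(np.sqrt(np.mean((Xb @ w - targets) ** 2)))
        probes.append((w, rmse))
    return probes

def best_layer(probes):
    """Index of the layer whose probe has the lowest RMSE."""
    return min(range(len(probes)), key=lambda i: probes[i][1])
```

Any bounded real-valued signal (error probability, a toxicity score, an oracle rating) can stand in for `targets`, which is what makes the framework agnostic to the supervision type.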
**9) Questions.** a) We have not yet evaluated models _larger_ than 3 billion parameters, but plan to do so once we have access to more compute. b) Yes — in all our experiments, the steering methods are evaluated in two complementary modes: "last" and "exact". (i) In "last", we evaluate the model’s logits at the final prompt token. (ii) In "exact", we check whether the model generates the correct answer in its free-form output; this second mode accommodates the open-ended settings! c) Fixed-strength baselines apply the same intervention regardless of the model’s current activation state, making over- or under-steering inevitable by design. Empirically, we observed this in two ways: (i) baselines often produce negative SPIs (i.e., steering inadvertently degrades the model), and (ii) instance-level transitions where steering flips correct predictions to incorrect ones. We’ll now include the transition analysis in the paper!

[6] https://arxiv.org/pdf/2312.01037
[7] https://openai.com/index/weak-to-strong-generalization/

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I am willing to upgrade my score to a 4 conditioned on the stated improvements being made.

---

Reply to Comment 1.1.1: Comment: Thanks again for your comments + for being open to raising the score. Just to confirm — we're currently rerunning the full benchmarking to expand scope with transition analysis (0→1, 1→0, 0→0, 1→1) across all 6 language models and 5 datasets (we also added an additional MMLU subset, see our recent rebuttal response to Reviewer MXKB). Results so far align well with our reported SPI trends. We'll also add absolute steered accuracies and their deltas relative to unsteered accuracies in the Appendix. All the other points you flagged (improving the description and motivation of SPI, being clearer about scope and MERA's generalisability, and improving wording and formatting) are also addressed.
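For concreteness, the transition analysis (and the SPI interpretation it accompanies) can be sketched in a few lines. The SPI formula below is our reading of the normalisation described in this thread — gains scaled by the remaining headroom, losses by the accuracy at stake — so treat it as illustrative rather than the paper's exact definition:

```python
def transition_counts(correct_before, correct_after):
    """Count per-sample correctness transitions under steering:
    0->1 (fixed), 1->0 (broken), 0->0 and 1->1 (unchanged)."""
    counts = {"0->1": 0, "1->0": 0, "0->0": 0, "1->1": 0}
    for b, a in zip(correct_before, correct_after):
        counts[f"{int(b)}->{int(a)}"] += 1
    return counts

def spi(acc_unsteered, acc_steered):
    """Steering Performance Improvement, normalised to the ceiling/floor:
    SPI = 1 iff steering yields a perfect classifier, SPI = -1 iff it
    makes the model wrong on every sample (illustrative reconstruction)."""
    delta = acc_steered - acc_unsteered
    if delta >= 0:
        return 0.0 if acc_unsteered == 1 else delta / (1 - acc_unsteered)
    return delta / acc_unsteered
```

Note how the two views complement each other: SPI summarises a model/dataset pair in one bounded number, while the transition counts expose exactly which samples steering helped or harmed.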
Summary: The paper proposes a new latent space steering methodology for LLMs. The basic setup is that we prompt an LLM with a question from a finite-class classification task, and we let it generate an open-ended response. We can decode the model's prediction from its open-ended generation by either - **last token**: using the argmax over discrete labels of the logits at the last token of the prompt; "last" is a bit of a misnomer, since really it's the last token of the prompt. - **exact token**: find the first match to one of the discrete labels in the generation, and use that as the prediction (together with its associated next-token probability given by the model) The **error** of the model, denoted $E(z)$ in the paper, is a soft measure of how far the generation is from the true answer, and is measured as 1 minus the probability of the true answer (under either "last token" or "exact token"). The goal with steering is to make the model better at the classification task, i.e. increase accuracy. $1-E(z)$ is a soft proxy for the accuracy. The main innovations of the steering method are: - it frames the steering problem as a constrained optimization, minimizing the magnitude of the latent space perturbation subject to achieving a certain degree of desired effect on $E(z)$, measured via a proxy, namely a linear probe on the activations that is intended to predict the $E(z)$. - In particular, when the unperturbed activation already has a satisfactorily low error, it is unchanged. Thus the steering is conditional - it also additionally frames the degree of desired effect to seek in steering as another optimization problem, rooted in a more "human-interpretable" measure, e.g. the 0-1 based, "hard" metric of accuracy. The method is evaluated on several classification datasets and several LLMs in the parameter range 1B-3B, both base and instruction-tuned. The choice of steering direction (probe vs other options) is ablated. 
Baselines such as contrastive steering via difference in means are considered. Results show mostly improvement over baselines. Some problems with the methodology, correctness and analysis are discussed in subsequent sections of this review. ## Update after rebuttal The rebuttal has not meaningfully changed my recommendation; I am still in favor of accepting the paper if possible. Claims And Evidence: The claims are largely supported by the given evidence. The most informative artifact is Table 1, which shows performance of all methods by both LLM and dataset. The claims are quite straightforward, clear & verifiable by looking at the metrics reported. Methods And Evaluation Criteria: Some issues: - the linear probe to estimate error from activations has as its targets the error **probability**, a quantity in the range $[0, 1]$ with quite nonlinear behavior (going from 0.90 to 0.95 reflects a much more significant change in model internals than from e.g. 0.45 to 0.5). The regression loss is mean squared error. This doesn't quite typecheck: it is advisable to use the pre-softmax logits as the targets of regression instead. These quantities are "linear-ish" functions of internal activations with unbounded range where relative changes at different levels are more comparable. - I am confused by the description of the baseline BASE-$\mu_k$. - Why do we use for this baseline the layer "identified as most effective by a probe"? Shouldn't we use the layer in which this baseline itself is most effective, regardless of what other methods might tell us? It feels like mixing multiple methods to bias the baseline instead of just getting the true performance we can squeeze from the baseline (of course, these might happen to be the same. But this is a methodological issue). - Second, it is unclear how the choice of $k$ for the number of contrast pairs to use was made. This is important as there are cases in Table 1 where 50 vs 100 leads to dramatic differences. 
I get that we want extreme correct/incorrect examples to get a contrastive direction here, but still, the choice seems arbitrary.

Theoretical Claims: The derivation of the correction term $\sqrt{\log(2/\delta) / (2N)}$ for the calibration threshold (3.2. Calibrating for Safety) is wrong. This is because in the sweep over values of $\alpha\in(0,1)$, we use the same random sample $D_{cal}$ multiple times. We should either sample a new i.i.d. calibration dataset for each value of $\alpha$ we try, or account for looking at the same data multiple times, which is easiest via a union bound. As a result, a reported level of confidence, e.g. 95%, should be treated as a lower level of confidence; with 10 values of $\alpha$, it could be as low as $100 - (10*5) = 50$ percent via a union bound.

Experimental Designs Or Analyses:
- The designs are clear and straightforward.
- The plots in Figure 4 combine across datasets and/or models in a way quite prone to noisy results and unclear conclusions. Table 1 shows significant variation across these conditions. E.g., MMLU barely shows any improvement at all, while the SMS SPAM dataset shows huge improvement. This is further complicated by the fact that SPI, the metric being averaged, is itself difficult to interpret in the absence of the accuracy of a given model on a given dataset. Averaging these wildly different values likely reduces these "aggregates" to metrics dominated by noise and/or the conditions exhibiting the strongest effects.

Supplementary Material: N/A

Relation To Broader Scientific Literature: I think the paper did a good job at summarizing the state of steering and situating the findings within it.

Essential References Not Discussed: Given that framing steering as an optimization problem is central to the message of this paper, it would benefit from a brief comparison with the below paper, which takes a very different optimization approach:

Cao, Y., Zhang, T., Cao, B., Yin, Z., Lin, L., Ma, F. and Chen, J., 2024.
Personalized steering of large language models: Versatile steering vectors through bi-directional preference optimization. _Advances in Neural Information Processing Systems_, _37_, pp.49519-49551. Other Strengths And Weaknesses: Strengths: - I think that framing steering as an optimization problem is a great way to move the fields towards more principled foundations, and to be clear and precise about what we are trying to achieve and why. Weaknesses: - While the method is mostly agnostic to the steering target, the motivation for the particular choice of steering objective is quite questionable. Why would we hope that steering - a blunt and extremely low-expressiveness instrument - would be able to make a language model "smarter"? In a strong sense, steering can only "work with what is already there", surfacing and amplifying existing representations. Intuitively, the only way for this to work is if the model has a **systematic** failure mode on some dataset, and furthermore, this systematicity is somehow represented internally in a crisp way. For instance, consider some previous work cited below that showed that you can do steering interventions on an LLM to make it more truthful on the TruthfulQA dataset. This dataset contains many questions that have common misconceptions associated with them. It makes sense that an LLM would represent internally both the correct answer (which is also consistently represented in the pretraining data), as well as the widely repeated incorrect one (being a next-token predictor). Thus it makes sense that we can steer the model towards saying the less common answer. However, for a dataset like MMLU, it would be extremely surprising if we can get strong gains from just steering. This begs the question: what do steering experiments on MMLU really teach us? Li, K., Patel, O., Viégas, F., Pfister, H. and Wattenberg, M., 2023. Inference-time intervention: Eliciting truthful answers from a language model. 
_Advances in Neural Information Processing Systems_, _36_, pp.41451-41530. Other Comments Or Suggestions: - writing is not very clear in 3.2, 3.3 - there are many questions left, such as: - what is meant by "This direction is then scaled using the closed-form solution at both the token and layer levels"? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to provide a very detail-oriented and helpful review! We’re glad to hear that you found our claims straightforward, clear + verifiable, that our optimisation framing is a great way to move the field towards more principled foundations, and that our experimental designs are clear. We have addressed all of your points below:

**1) Regression target for probes.** Regarding learning the linear model not directly on the error but on the inverse of the sigmoid (i.e., the logits): we agree that this is a promising direction that could potentially enhance the performance of MERA. We are currently re-running this experiment on all evaluations to assess its impact. Preliminary results steering Llama and Gemma models on the SMS spam dataset suggest a slight improvement over the original formulation. We’ll reassess when the complete results are in and, if supported, update the methodology accordingly.

**2) Describing BASE-$\mu_{k}$.** Thank you for pointing this out! We agree that mixing strategies across methods can undermine fair comparisons, and we appreciate the opportunity to clarify! In our experiments, we do not use any MERA-specific best-performing layer for the BASE-$\mu_{k}$ baseline. Instead, we choose the layer with the lowest RMSE under a trained linear probe — a shared external signal reflecting where the error is most linearly related to model activations — which we think enables the fairest comparison on balance. As we see it, probe performance is a good heuristic for selecting contrastive steering layers, since it neutrally depends on how linearly our target is related to the activations (and does not favour MERA, which steers on _all_ layers). It is worth mentioning that the community hasn’t converged on a single criterion for layer selection for contrastive difference-in-means steering (see distinct approaches in [2-5]); rather, many methodologically valid strategies exist.
We’ll revise the section to make our approach clearer! On the choice of $k$ — we agree that results vary significantly between $k = 50$ and $k = 100$. To us, this reflects a general instability in contrastive baselines. To construct these sets, we sorted the error scores of the 3000 training samples and selected the top-k and bottom-k examples. We’ll add these details to the manuscript as well!

**3) Theoretical claim derivation.** We acknowledge that there is indeed a selection bias in our original derivation, since the same calibration dataset was used both to construct the confidence intervals and to select the best-performing $\alpha$. We have now addressed this issue by adding a Bonferroni correction to the confidence bound. As a result, the reported confidence levels have been adjusted: the results previously shown at 99% confidence are now updated to 90%. Full details are provided in our response to Reviewer 7dxY. We also outline a less conservative alternative and will clarify in the next paper version that our updated method has formal guarantees under the i.i.d. assumption.

**4) SPI aggregation.** We appreciate this observation. Figure 4 was intended to offer a practitioner-oriented overview (for a quick comparison across settings to help identify overall trends) and Table 1 to provide complete details on model- and dataset-specific effects. As we see it, these provide _complementary perspectives_ which are helpful to the reader in distinct ways. We now emphasise this dual purpose more clearly in the manuscript and explicitly caution readers about over-interpreting the aggregated view.

**5) Personalised steering citation.** Thank you for pointing us to this interesting read! We find the work highly relevant as a steering reference and have thus cited it in our paper (+ added a discussion in Appendix A.1).
This work differs from ours in several ways, most notably in the specific problem it targets: they do not directly solve for the steering strength ($\lambda$) but find it via hyperparameter sweeps (see Table 5 on p.8). This contrasts with MERA, which gives the solution in closed form.

**6) Steerability and MMLU.** What MMLU can (and cannot) teach us is a fascinating question! We refer to answer (1) in Reviewer MXKB for a discussion on this topic. Naturally, linear steering has its limitations. In this paper, we suggest possible extensions in Appendix A (Non-linear Case). A simpler alternative, however, is to use a first-order approximation of the non-linear model. This approach would essentially replace the fixed linear weight used across all instances with the gradient of the non-linear function for each specific input.

**7) Improved writing.** Thank you for taking the time to write out the example where our writing could be improved! We’re revising Sections 3.2 and 3.3, and the rest of the paper, for clarity.

[2] https://openreview.net/pdf?id=HuNoNfiQqH
[3] https://arxiv.org/pdf/2312.06681
[4] https://arxiv.org/pdf/2310.06824v3
[5] https://arxiv.org/pdf/2402.14433

---

Rebuttal Comment 1.1: Comment: Thank you for engaging with my review. Re: BASE-$\mu_k$, I'm still unconvinced this is a fair baseline. Many results in the literature show that it is possible to very successfully probe for certain concepts even in layers where altering the concept has no causal effect on model behavior. Furthermore, the choice of $k$ remains arbitrary from my point of view.

---

Reply to Comment 1.1.1: Comment: Thank you. We agree that your suggestion makes sense in principle. Selecting the best-performing layer per baseline and $k$ is ideal if one has the resources to do it. But, as you're likely aware, doing this properly would require a massive number of runs, as all hyperparameters are entangled (see Reviewer MXKB answer (2) for a comment on this).
To choose the empirically best layer and $k$ combination, we’d need to run 3120 steering evaluations _per baseline_. And if we opened up token-position choices (like in [5]), the combinatorics would explode. For context, if we assume we try 4 choices for $k$, then:

* LLaMA-1B: 16 layers × 4 × 5 tasks = 320
* Gemma-2B: 26 layers × 4 × 5 tasks = 520
* Qwen-3B: 36 layers × 4 × 5 tasks = 720

→ Total = 1560 runs × 2 (base + instruction-tuned models) = 3120 runs per baseline

This scale of sweeps isn’t feasible for us. What can be added here is that we did perform an ablation experiment in the Appendix where we intervened on all layers as well (not just the layer selected by the probe) for the contrastive baselines. We did not find any significant difference in scores between the two approaches. As you know, no single heuristic for choosing the layer or $k$ is perfect: prior work has done a variety of things, like using values as low as 20 or 50 ([6], [7]), pruning contrastive pairs for more signal post hoc ([8]), and different layer-selection strategies ([1–4]). Our aim here was to use a methodology that is cheap + consistent across the baselines. Our probe-based approach does not directly imply causal steerability (as you point out!) but it’s simple and, importantly, doesn’t privilege any method. Incidentally, this is exactly the problem MERA is designed to solve. Rather than tuning over $k$, the layers, and other steering hyperparameters, MERA steers selectively, only when the predicted error exceeds a calibrated error threshold ($\alpha$). It sidesteps the need for exhaustive tuning altogether. To support a broader contrastive comparison, we will also include a $k = 200$ baseline variant in the main paper.

[5] https://arxiv.org/pdf/2308.10248
[6] https://openreview.net/pdf?id=HuNoNfiQqH
[7] https://arxiv.org/pdf/2312.06681
[8] https://arxiv.org/pdf/2410.01174
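The sweep combinatorics above are easy to verify (assuming 4 candidate values of $k$ and 5 tasks, as stated):

```python
# Per-baseline runs for an exhaustive (layer, k, task) sweep,
# doubled for the base + instruction-tuned variant of each family.
layers = {"LLaMA-1B": 16, "Gemma-2B": 26, "Qwen-3B": 36}
k_choices, tasks, variants = 4, 5, 2

per_model = {name: n * k_choices * tasks for name, n in layers.items()}
total = sum(per_model.values()) * variants   # 320 + 520 + 720 = 1560; x2 = 3120
```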
Summary: Current steering methods for LM error mitigation use fixed intervention strengths, which risks under-/over-steering. The paper introduces MERA, which performs adaptive activation steering guided by linear error probes; the intervention threshold (α) is calibrated via Hoeffding’s inequality. The framework further optimizes steering strength per token and layer and abstains when unnecessary, ensuring minimal intervention. Evaluations on three LLMs show gains on binary/ternary tasks while avoiding the degradation seen in fixed-strength approaches.

Claims And Evidence: Yes, except the Hoeffding's inequality part, as I comment in "Theoretical Claims".

Methods And Evaluation Criteria: Yes, the proposed methods make sense for the problem.

Theoretical Claims: I understand Hoeffding’s inequality and high-dimensional statistics in general, but I do not get the rationale for using it in line 195 (left). It would be great if one could start with Hoeffding’s inequality, explain what the random variables are, which assumptions of Hoeffding’s they (approximately) satisfy or violate, and how this leads to the bound in line 195 (left).

Experimental Designs Or Analyses: The authors observed that datasets with high cardinality (e.g., MMLU-HS with 4 classes) are overall difficult to steer. I do not know if four choices per question is a high number, since it is common for high-school-level questions, and there are tests with 6 options if not more. Even in high school, one could face open questions without given choices, meaning infinitely many choices. Could the authors provide some insights and comments on this?

Supplementary Material: The authors wrote "***NOTE: We will provide code for our experiments in the Supplementary Material during the review phase.***”. I could not find either code files or anonymized repositories on OpenReview - could someone please enlighten me where the code is?
Relation To Broader Scientific Literature: Please find my comments in "Essential References Not Discussed".

Essential References Not Discussed: The proposed work seems highly related to an existing paper [1], and therefore it is important to clarify the similarities and differences between the two. As far as the reviewer can see:

1. Motivation-wise, both the work of [1] and the current paper mention that existing methods rely on fixed steering strength, which leads to under-/over-steering.
2. The two works use different existing ways of extracting a steering *direction* (independent of the strength): [1] uses PCA on the normalized contrastive vectors, while this paper uses linear probes. Nevertheless, [2] points out that the two ways of extracting steering directions are equivalent under certain conditions.
3. Given one steering direction, both works can estimate the strength of steering. [1] does so by decomposing the activation along the steering directions, while this paper does so by finding the smallest $\ell^2$-normed vector that sufficiently reduces the probing probability. Interestingly, equation (6) of this paper shares some features with proposition 2 of [1]: both depend on the inner product of the steering direction and the activation, which can be viewed as achieving adaptive scaling.
4. To achieve a steering task, [1] in its general form uses multiple directions, each corresponding to a semantic concept, and finds the steering strengths for all of them via one sparse decomposition problem, while this paper focuses on one direction.
5. This paper investigated the choice of the representation to steer in Section 4, while [1] followed a prior work.

Anyhow, I understand that papers can have similarities and whatnot, but it is important to place the proposed work relative to the literature to attribute what is existing and clarify what the contribution is.
[1] PaCE: Parsimonious Concept Engineering for Large Language Models, NeurIPS 2024. [2] The linear representation hypothesis and the geometry of large language models, ICML 2024. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the constructive and informative review! We have addressed your four key points below: **1) Hoeffding's inequality — Clarifying Our Selection Procedure.** Our selection procedure ensures we do **not** choose any $\alpha \in (0,1)$ unless it **statistically significantly** improves performance. Specifically, we first identify a set of $\alpha$ values with provably positive performance gain (with high probability), then select the empirically best among them. Let $f(\alpha, X_i) \in [0,1]$ be a performance function and $X_i$ a random input. Given a calibration dataset $D_n = \{ X_1, \dots, X_n \}$, the empirical performance is: $$ P(\alpha, D_n) = \frac{1}{n} \sum_{i=1}^n f(\alpha, X_i), $$ with $P(\alpha, D)$ denoting the true (population) performance. #### Procedure Overview 1. **Discretize** $[0,1]$ into $M$ values: $\alpha_{set} = \{\alpha_1, \ldots, \alpha_M\}$. 2. **Confidence Bands**: Apply Hoeffding's inequality to each $\alpha_j$, with a failure probability $\rho$ split evenly over the grid: $$ \Pr\left( \left|P(\alpha_j, D_n) - P(\alpha_j, D)\right| \le \delta_n \right) \ge 1 - \frac{\rho}{M}, $$ where $\delta_n = \sqrt{\frac{\ln(2M/\rho)}{2n}}$. 3. **Union Bound**: Ensures all bounds hold simultaneously with high probability: $$ \left|P(\alpha_j, D_n) - P(\alpha_j, D)\right| \le \delta_n \quad \forall j=1,\ldots,M. $$ This yields a uniform guarantee: $$ \Pr\left(\sup_{\alpha \in \alpha_{set}} \left|P(\alpha, D_n) - P(\alpha, D)\right| \le \delta_n\right) \ge 1 - \rho. $$ We define the **valid set**: $$ \alpha_{\text{valid}} = \{\alpha : P(\alpha, D_n) - \delta_n > 0\}, $$ and select: $$ \alpha^* = \arg\max_{\alpha \in \alpha_{\text{valid}}} P(\alpha, D_n). $$ Since the confidence bands hold uniformly with high probability, we guarantee $P(\alpha, D) > 0$ for all $\alpha \in \alpha_{\text{valid}}$. Hence, $\alpha^*$ corresponds to a **true performance improvement**; if the valid set is empty, we abstain.
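The selection procedure above can be sketched in a few lines (a minimal sketch under the stated setup; `select_alpha`, `perf_matrix`, and `fail_prob` are illustrative names, not our actual API):

```python
import numpy as np

def select_alpha(perf_matrix, alphas, fail_prob=0.05):
    """Select a steering strength with a provably positive gain.

    perf_matrix: shape (n, M), perf_matrix[i, j] = f(alphas[j], X_i) in [0, 1],
    measured on n calibration inputs for M candidate strengths.
    Returns the empirically best alpha among those whose Hoeffding lower
    confidence bound exceeds 0, or None (abstain).
    """
    n, M = perf_matrix.shape
    emp = perf_matrix.mean(axis=0)                          # P(alpha_j, D_n)
    half_width = np.sqrt(np.log(2 * M / fail_prob) / (2 * n))  # uniform band
    valid = emp - half_width > 0                            # provably positive
    if not valid.any():
        return None                                         # abstain
    return alphas[np.argmax(np.where(valid, emp, -np.inf))]
```

With, say, n = 1000 calibration inputs and M = 3 candidates, the uniform half-width is about 0.05, so only candidates with a clearly positive empirical gain survive.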
> **Note**: We will make this guarantee more explicit in the next version of the paper. --- ### **Alternative Calibration Perspective** Instead of using a Bonferroni-style correction (which can be conservative), one could: - **Split the data** into two parts: - Use one part to find the empirically optimal $\alpha^\star$. - Use the other to estimate a confidence interval for $P(\alpha^\star, D)$. We accept $\alpha^\star$ only if its confidence interval lower bound exceeds a desired threshold. This avoids overly conservative bounds and allows for tighter, adaptive inference at the cost of splitting the data. **2) Cardinality versus steerability.** Thank you for raising this important point. We agree that four classes is not especially high, and in retrospect, attributing the difficulty of steering MMLU-HS to cardinality alone is premature (dataset characteristics like semantic overlap, label ambiguity, and task complexity could also play a role in a task's steerability). See further discussion in answer (1) at Reviewer MXKB. As we see it, a dedicated, systematic study (e.g., along the lines of [1]) that carefully varies such characteristics while analysing steering performance would be necessary to understand the limits of additive methods like MERA (+ its baselines). We have revised the formulation in the manuscript. **3) Code availability.** Apologies — we missed attaching the code at submission. We had it ready for the rebuttal, but we just learned that ICML guidelines don't allow updates to the original submission during the discussion phase. If we're allowed to share an anonymous link (could the AC please let us know if this is acceptable), we can upload the source code, notebooks, and installation guides directly there! **4) Relation to PaCE.** Thank you for pointing out the connection to PaCE! Your analysis is very insightful. While there are mathematical similarities, as you point out (i.e.
adaptive intervention strengths based on the inner product of activations and steering directions), the overarching goals of PaCE and MERA are very different. The PaCE model is trained on data demonstrating ‘benign’ and ‘harmful’ concepts from some predefined dictionary, and performs interventions to suppress the harmful ones, while MERA is fundamentally concerned with error mitigation on well-defined prediction tasks. We also can’t see a direct analogue of our calibration step, which uses calibration data to identify the optimal intervention strength for error mitigation on a per-layer basis. That said, we consider the work relevant enough for inclusion in Section 3 and have also added it to the Related Works in Appendix A.1. [1] https://arxiv.org/pdf/2407.12404
LEMoN: Label Error Detection using Multimodal Neighbors
Accept (poster)
Summary: LEMoN is a method designed to identify mislabeled image-caption pairs in large vision-language datasets, which often contain noisy data scraped from the web. Unlike previous approaches that rely solely on image-caption embedding similarity for filtering, LEMoN leverages multimodal neighborhood information in the latent space of contrastively pretrained models to detect label errors. The authors theoretically justify and empirically validate LEMoN across eight datasets and ten baselines, demonstrating that it improves label error detection by over 3% and enhances downstream captioning performance by 2 BLEU points. Claims And Evidence: YES Methods And Evaluation Criteria: YES Theoretical Claims: YES Experimental Designs Or Analyses: YES Supplementary Material: YES Relation To Broader Scientific Literature: YES Essential References Not Discussed: YES Other Strengths And Weaknesses: Strengths: - The authors utilize multimodal neighbor detection to identify mislabeled data, a simple and effective method that is easy to follow. - The authors employ theoretical analysis to prove the effectiveness of the LEMoN method. - Extensive experiments demonstrate that mislabeled data can degrade model performance, providing significant insights for future research. Weaknesses: - I noticed that the most recent dataset used was published in 2020. Is the method still effective on datasets published in the last two years? Do mislabeled data still exist in these recent datasets? - The authors compare the method with LLaVA but do not provide fine-tuned results for LLaVA. Since it is known that mislabeled data degrade model performance, and LLaVA does not address mislabeled data through fine-tuning, the comparison with large models lacks persuasiveness. - The authors only consider image and text modalities. However, could the method generalize to more widely used modalities such as video and audio? Can relevant experiments be provided to support this?
- While the paper claims novelty in using multimodal scoring for label noise detection, a similar approach has recently been explored in "VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models," ICLR 2024. Other Comments Or Suggestions: The study does not provide results on fine-tuning large models such as LLaVA. Since mislabeled data can degrade model performance, it is important to examine whether filtering with LEMoN improves the performance of fine-tuned large models. A comparison with fine-tuned LLaVA would strengthen the persuasiveness of the results. Additionally, the current work focuses on image and text modalities. Expanding the study to include other widely used modalities, such as video and audio, would help demonstrate the generalizability of the method. Conducting experiments on multimodal datasets beyond image-caption pairs could further validate LEMoN's effectiveness in broader applications. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful review and constructive feedback! > I noticed that the most recent dataset used was published in 2020. Is the method still effective on datasets published in the last two years? Do mislabeled data still exist in these recent datasets? We clarify that we evaluated our method on CC3M [1] (from 2021) in Appendix I.9 and the DataComp benchmark [2] (from 2023) in Appendix I.10. As we have motivated in the introduction, and also has been highlighted in many prior works [2-4], the issue of mislabeled data is only growing with time due to the use of billion-sample scale datasets collected from scraping the web. Finally, we note that the published work which the reviewer later references [5] uses no datasets from later than 2015. > The author compares the method with LLaVA but does not provide fine-tuned results for LLaVA. Since it is known that mislabeled data degrade model performance, and LLaVA does not address mislabeled data through fine-tuning, the comparison with large models lacks persuasiveness. > The study does not provide results on fine-tuning large models such as LLaVA. Since mislabeled data can degrade model performance, it is important to examine whether filtering with LEMON improves the performance of fine-tuned large models. A comparison with fine-tuned LLaVA would strengthen the persuasiveness of the results. We have already conducted several experiments showing that filtering with LEMoN improves the performance of downstream large models. In particular, we have finetuned GenerativeImage2Text models in Section 6.2, pretrained CLIP models on MIMIC-CXR in Section 6.4, pretrained CLIP models on CC3M in Appendix I.9, and pretrained CLIP models on DataComp in Appendix I.10. We believe this sufficiently addresses the reviewer's concern. > The author only considers image and text modalities. However, could the method generalize to more widely used modalities such as video and audio? 
Can relevant experiments be provided to support this? > Additionally, the current work focuses on image and text modalities. Expanding the study to include other widely used modalities, such as video and audio, would help demonstrate the generalizability of the method. Conducting experiments on multimodal datasets beyond image-caption pairs could further validate LEMON’s effectiveness in broader applications We believe that demonstrating LEMoN's effectiveness on image-text pairs is a sufficient contribution, and extending it to other modalities is out of scope for this paper. We will note this as an area of future work in the revision. > While the paper claims novelty in using multimodal scoring for label noise detection, a similar approach has recently been explored in "VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models," ICLR 2024. We have already compared our method to this baseline in Table 2. We strongly dispute the claim that VDC is a "similar approach" to LEMoN. VDC entirely relies on prompting LLMs and VLLMs. In contrast, our method does not utilize any prompt engineering, and instead utilizes the neighborhood information in image and text representations of contrastively pretrained models. As a result, not only does VDC perform worse empirically (Table 2), it also has much higher runtime (Table I.11). We emphasize that VDC does not utilize multimodal neighbors, or even embeddings at all, in any form. [1] Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts. CVPR 2021. [2] DataComp: In search of the next generation of multimodal datasets. NeurIPS 2023. [3] Multimodal datasets: misogyny, pornography, and malignant stereotypes. arXiv:2110.01963. [4] What's In My Big Data? ICLR 2024. [5] VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models. ICLR 2024.
Summary: This paper applies a neighbor-based noisy-sample detection method to a multimodal dataset (an image-text pair dataset) with the help of a pre-trained vision-language model. The authors also provide theoretical proof that their method has better noise detection capability than random detection. Experiments on various datasets have shown the proposed method's efficacy and robustness to hyper-parameters. Claims And Evidence: - Assumption 2: in Appendix A.2, the authors only present visualization results for classification tasks, which is a much easier case compared with tasks involving image captioning, where the text can be natural language and span a more diverse space on $\mathcal{J}(x)$. - For Assumption 2, though Appendix A.2 shows the distribution of the whole dataset, it would be much better to show sample-wise visualization results, since Assumption 2 is a sample-wise claim. Say, a clean sample $X$ can belong to a Gaussian distribution with a lower mean value compared with its noisy version or noisy neighbors; however, does this claim still hold when comparing $X_1$ and $X_2$ with $X_1 \neq X_2$ and even different $Y_1$ and $Y_2$? - The conclusion that "our proposed multimodal neighborhood score, provides a better than random signal at detecting mislabeled samples" with $\mathbb{P}\left(S_m\left(X^{\prime}, Y^{\prime}\right)>S_m(X, Y)\right)>0.5$ is tricky. As long as there are slight differences between the two distributions (say, different $\mu$) in Appendix A.2, this conclusion still holds, but it does not provide significant insight into how good the detection method is. - In the theoretical part, the authors set $\gamma_1=\gamma_2=0$ for the proof, and the final AUROC result is still higher than the random-signal case. Could the authors please show experiment results on directly setting $\gamma_1=\gamma_2=0$ in the algorithm? - For the different-noise-ratio experiments.
For the neighbor-based intuition of the proposed method, and given that the neighbors come from the original dataset, my concern is: when the noise level increases, the neighbor pairs can also contain many noisy pairs, which makes the algorithm less reliable, since it heavily relies on the quality of the neighbor pairs. However, the impact of the noise level is not presented in the theoretical part, and the authors directly claim that the proposed method achieves better noise detection performance than the random-signal case. Thus, the impact of the noise level in the neighbor pairs should also be considered in the theoretical part, along with how this noise level impacts the final AUROC. - Regarding the experiment in Figure I.1 in the Appendix for different noise levels, please explain the phenomenon and tendency of F1/AUROC on the mother datasets. It seems that when the noise level increases, the F1 drops, and the AUROC becomes very unstable (large variance). Could you please connect these observations to the theoretical part, or explain how to understand this phenomenon based on the theory? Methods And Evaluation Criteria: Yes. The authors follow the classic noise-annotation setting from the image classification task for vision-language datasets, and also follow the experiment settings from previous work on noisy vision-language data. Theoretical Claims: Yes. I have checked the theoretical part. Please check the feedback in __Claims And Evidence__. Experimental Designs Or Analyses: I have checked the experiment design. Related concerns: - In Appendix A.2, the authors only present visualization results for classification tasks, which is a much easier case than tasks involving image captioning, where the text can be natural language and span a more diverse space on $\mathcal{J}(x)$. For experiments on different noise levels, please explain the phenomenon and tendency of F1/AUROC on the mother datasets.
It seems that when the noise level increases, F1 drops, and AUROC becomes very unstable (large variance). Could you please combine these observations into the theoretical part, or explain how to understand this phenomenon based on the theoretical part? Supplementary Material: Yes. All appendices except the complete detailed proof part. Relation To Broader Scientific Literature: - This paper applies a neighbor-based noisy-sample detection method to a multimodal dataset (image-text pairs) with the help of a pre-trained vision-language model. - Though previous work uses neighbor-based methods for unimodal datasets with noise, this paper is the first to apply neighbor-based methods in multimodal settings. - The authors also provide theoretical proof that their method has better noise detection capability than random detection. Essential References Not Discussed: - Line 065, left column, "While prior techniques utilize unimodal neighbors for label error detection": please add a reference for this sentence. - Line 131, right column, "Prior works have alternatively aimed to maximize the F1 score": please add a reference. Other Strengths And Weaknesses: __Strengths:__ - Clearly written; easy to follow and understand. - Originality: Though I see insights from many related previous works, the originality of this paper is sufficient for publication. __Weaknesses:__ please check __Experimental Designs Or Analyses__ and __Claims And Evidence__. Other Comments Or Suggestions: N/A Questions For Authors: Please check __Experimental Designs Or Analyses__ and __Claims And Evidence__. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful review and constructive feedback! > In appendix A.2, authors only present the visualization results for classification tasks, which is a much easier case than tasks involving image captioning, where the text can be natural language and have more diverse space on J(x). > For assumption 2, though appendix A.2 shows the distribution of the whole dataset, it is much better to show the sample-wise visualization results, since assumption 2 is a sample-wise claim. Thank you for pointing this out! To address these two concerns simultaneously, we conduct an experiment on captions from the flickr30k dataset. We select 20 random captions, then use Llama 3.1-8B-instruct to generate 50 paraphrasings of each caption (via sampling with temperature), corresponding to 50 samples from $\mathcal{H}(Y)$ for each caption. For the samples $Y' \not\in \mathcal{H}(Y)$, we randomly select 50 other captions from the dataset. To match the support of the Gaussian, we take the distance function to be the log cosine distance (note that this does not change the ordering of the score across samples). We compute this distance using the text encoder from OpenAI CLIP ViT-B/32, and plot histograms for each caption. [The results are shown here](https://postimg.cc/HVpKsCp0). Running the same Shapiro-Wilk test from Appendix A.2, we find that of the positive samples, 8/20 are Gaussian, and of the negative samples, 16/20 are Gaussian. Thus, there is some evidence the Gaussianity assumption holds for natural language and complex paraphrase functions. > the conclusion on "our proposed multimodal neighborhood score, provides a better than random signal at detecting mislabeled samples" is tricky. As long as there are slight distribution differences over two distributions (say, with different $\mu$) in appendix A.2, this conclusion still holds, but does not provide significant insight into how the detection method is good enough. 
We emphasize that Lemma 4.2 is a specialization of Theorem 4.1 meant to demonstrate that only loose conditions are necessary to obtain non-random signal. Our Theorem 4.1 provides the exact expression for the AUROC of the score as a function of the distribution parameters. > Could authors please show the experiment results on directly setting γ1=γ2=0 in the algorithm? We have provided results for setting $\tau_1 = \tau_2 = 0$ in Table I.9. > However, it seems that the impact of noise level is not presented in the theoretical part, and authors directly claim that the proposed method can achieve better noise detection performance than the random signal case. The impact of noise level in the neighbors is accounted for in Theorem 4.1 via the $p$ term. > Notice the experiment in Figure I.1 in Appendix for different noise levels, please explain the phenomenon and tendency on F1/AUROC on mother datasets. To better match the theoretical setting, we [examine the performance of individual scores](https://postimg.cc/ct7mG6gq) $s_n$ and $s_m$, without the $d_{mm}$ term and with $\tau_1 = \tau_2 = 0$. Looking at the influence of $p$ in Theorem 4.1, we find that as $p \rightarrow 1$, the AUROC approaches 0.5. As $p \rightarrow 0$, taking distribution parameters to be fixed (i.e. $\mu_1, \sigma_1$, etc), the AUROC approaches a fixed constant, which, under the assumptions of Lemma 4.2, is greater than 0.5. For $p \in (0, 1)$, the function is strictly decreasing in $p$. Thus, from the theory, we would expect the AUROC to be strictly decreasing with higher noise rate, going down to 0.5 for $p = 1$. Empirically, we do observe the decrease in AUROC, with a faster decrease for mscoco than mmimdb (which according to the theory is due to dataset specific parameters like $\mu$'s, $\sigma$'s, and the moments of $\zeta$). Regarding variance: we would like to note that the result of our Theorem 4.1 is for the "population" AUROC without finite sample considerations. 
In practice, as in our experiments, AUROC is estimated using finite samples from a fixed dataset. The variance that is observed is due to the variance of this statistical estimator. The variance of the empirical AUROC is related to the variance of a Mann-Whitney U statistic, and has been characterized in [1]. This statistical variance is independent of our theorem. Finally, our theory does not provide an explanation for the F1 score. This is trickier to characterize theoretically, as F1 is computed given a particular threshold on the score. This threshold is selected to be the one that maximizes the F1, and the F1 is a non-concave function of this threshold. > Essential References Not Discussed Thank you for pointing these out. We have added [2] and [3] respectively to address these. [1] Confidence Intervals for the Area under the ROC Curve. NeurIPS 2004. [2] Deep k-NN for Noisy Labels. ICML 2020. [3] Detecting Corrupted Labels Without Training a Model to Predict. ICML 2022.
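As a concrete illustration of the finite-sample point above: the empirical AUROC is exactly the Mann-Whitney U statistic divided by $n_1 n_2$, which is why its sampling variance is that of a U statistic. A minimal sketch (illustrative only, not our implementation):

```python
import numpy as np

def empirical_auroc(scores_mislabeled, scores_clean):
    # Fraction of (mislabeled, clean) pairs ranked correctly, with ties
    # counted as half: this equals the Mann-Whitney U statistic divided
    # by n1 * n2, so its variance is that of a U statistic.
    a = np.asarray(scores_mislabeled, float)[:, None]
    b = np.asarray(scores_clean, float)[None, :]
    wins = (a > b).sum() + 0.5 * (a == b).sum()
    return wins / (a.size * b.size)
```

A score that always ranks mislabeled samples above clean ones gives 1.0; identical score distributions give 0.5 in expectation.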
Summary: The paper presents LEMoN, a method to detect label errors in paired image-text data by using a pretrained CLIP model. Given a dataset of image-text pairs, LEMoN constructs a score $f(x,y)$ which is a weighted combination of the CLIP score of $(x,y)$ and two nearest-neighbor-based intra-modal scores. The intuition is that if $(x,y)$ is mislabeled, then i) captions corresponding to images similar to $x$ will be mismatched with $y$, and ii) the corresponding images of captions similar to $y$ are far away from $x$. The paper provides a theoretical justification for this scoring function. Experiments are performed on 4 classification datasets and 4 image captioning datasets (with artificial noise added), where the proposed scoring outperforms relevant baselines in detecting noisy samples. The paper also reports downstream classification and captioning performance following filtering. Experiments are also performed on the real-world datasets CC3M and DataComp. ### Update after rebuttal: I have gone through the rebuttal and the comments of other reviewers, and have raised my score to 3. The rebuttal addressed a few of my concerns regarding design choices and comparison with LNL algorithms, which I hope will be incorporated in the revision. However, echoing reviewer qofm, I have concerns over the downstream applicability of LEMoN (based on performance on CC3M and DataComp). Claims And Evidence: The paper theoretically justifies the score by deriving an expression for the detection AUROC. Extensive empirical evidence on noise-simulated datasets is also provided for the same. However, it is not clear if this performance translates to datasets with realistic noise, as evidenced by the CC3M experiment. Methods And Evaluation Criteria: Experiments are performed on 8 datasets covering both classification and captioning. The paper evaluates the efficacy of the proposed score on label error detection (AUROC & F1 score) as well as the downstream impact of filtering noisy samples.
Theoretical Claims: Theorem 4.1 (AUROC of k-NN score) is not specific to the proposed score, and may be valid for other kinds of scoring (one such alternative is proposed below). I have not checked correctness of Thm A.1 in the appendix. Experimental Designs Or Analyses: Yes, the experimental design is sound Supplementary Material: I reviewed parts of the appendix referenced in the main paper Relation To Broader Scientific Literature: The paper extends existing work on filtering noisy labels to incorporate nearest-neighbor consistency in a multimodal fashion. This is a novel contribution. The paper also performs thorough empirical analysis. However there is no empirical comparison with a related body of work on learning with noisy correspondences [1,2]. [1] Radenovic, Filip et al. “Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training.” 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023): 6967-6977. [2] Huang, R., Long, Y., Han, J., Xu, H., Liang, X., Xu, C.,and Liang, X. Nlip: Noise-robust language-image pretraining. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 926–934, 2023 Essential References Not Discussed: Some discussion on methods that learn from noisy correspondences is missing [1,3, 4] [1] Radenovic, Filip et al. “Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training.” 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023): 6967-6977. [3] Andonian, Alex et al. “Robust Cross-Modal Representation Learning with Progressive Self-Distillation.” 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022): 16409-16420. [4] Chen, Hao et al. “Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks.” ArXiv abs/2309.17002 (2023): n. pag. 
Other Strengths And Weaknesses: Strengths: + Preprocessing large-scale web-crawled datasets is an essential step in training/fine-tuning large foundation models. The paper explores a novel neighborhood-consistency-based approach to filter out noisy correspondences from datasets, which can help improve downstream performance. + The proposed approach is motivated well, and the paper is well written. However, some further discussion is needed to understand why simpler cross-modal alternatives are not explored (see weaknesses below). + Empirical evaluation is thorough, and the proposed scoring rule outperforms baselines in detecting noisy samples (improved F1 and AUROC metrics) for both classification and captioning datasets synthetically augmented with noise. Weaknesses: - Empirical performance on realistic noise. Downstream performance on CC3M and DataComp is almost the same as the CLIP Similarity baseline. Although a human study on noise detection is performed in Section 6.5, it is not clear if the proposed score translates to downstream performance. - Simpler ways to incorporate k-NN information. For example, the average CLIP score between the caption $y$ and neighboring images of $x$ (and vice versa). This avoids the $\tau_2$ hyperparameter but ignores the paired information of neighbors. It is not clear to me if the proposed approach is the optimal way of adding multimodal consistency. A clarification of the same would be appreciated. - It is not clear if filtering noisy data is preferred to techniques that learn in the presence of label noise. Empirical comparison with [2] on captioning datasets would strengthen the paper. Other Comments Or Suggestions: - L1731 should refer to Table I.14. - In Table I.14, unfiltered outperforms both kinds of filtering on average. Questions For Authors: 1. As described in the weaknesses, a discussion on simpler ways to incorporate k-NN information would be informative. 2.
When are filtration-based approaches preferred over techniques that learn with label noise? 3. Empirical performance on realistic downstream tasks is unsatisfactory. Perhaps fine-tuning under limited data (few-shot training samples) is better suited to evaluate these methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful review and constructive feedback! > Empirical performance on realistic noise. Downstream performance on CC3M and Datacomp is almost the same as Clip Similarity baseline. First, we would like to emphasize that we evaluate our method against the baselines on several other datasets with human noise, including CIFAR-10N, CIFAR-100N [1], and StanfordCars and MiniImageNet [2]. These datasets contain noise from human annotations collected on Amazon Mechanical Turk and the Google Cloud Data Labeling Service respectively. Second, we would like to note that, in addition to CC3M and Datacomp, we also observe increased downstream performance of training on LEMoN filtered datasets in CIFAR-10N and CIFAR-100N (Figure 3) and mscoco (Table 3). > Empirical performance on realistic downstream tasks is unsatisfactory. Perhaps fine-tuning under limited data (few-show training samples) is better suited to evaluate these methods? Thank you for this suggestion! We have conducted additional experiments for these CC3M-pretrained models by linear probing on the VTAB benchmark, first in the few-shot setting where we select 5 random samples per class, and next where we finetune on the standard training split of each dataset. [Our results can be found here](https://postimg.cc/VdpjZjHS). Overall, we find similar trends as the zero-shot setting, where LEMoN marginally outperforms the baseline, with both underperforming the model that has been pretrained on the whole corpus. > Simpler ways to incorporate k-NN information. For ex. average CLIP score between the caption $y$ and neighboring images of $x$ (and vice versa). This avoids the $\tau_2$ hyperparam, but ignores paired information of neighbors. > As described in weaknesses, a discussion on simpler ways to incorporate k-NN information would be informative. Thank you for this suggestion! 
We have added this alternate way of integrating neighbor information as suggested by the reviewer: $s(x, y) = d_{mm}(x, y) + \beta s_n (x, y, \mathcal{D}) + \gamma s_m(x, y, \mathcal{D})$ with $s_n(x, y, \mathcal{D}) = \frac{1}{k} \displaystyle\sum_{j=1}^k d_{mm}(x_{n_j}, y)e^{-\tau_{1, n} d_{\mathcal{X}}(x, x_{n_j})}$ and $s_m(x, y, \mathcal{D}) = \frac{1}{k} \displaystyle\sum_{j=1}^k d_{mm} (x, y_{m_j})e^{-\tau_{1, m}d_{\mathcal{Y}} (y, y_{m_j})}$ This drops the $\tau_2$ hyperparameter, as the reviewer suggested. To maintain fairness, we use the same model selection strategy and hyperparameter grid for the remaining hyperparameters as $\text{LEMoN}_{\text{opt}}$. We evaluate this alternate neighborhood score against LEMoN, and [these results can be found here](https://postimg.cc/BXnj5XSj). We find that LEMoN outperforms this alternate neighbor method on the majority of datasets. Finally, we note that we have also discussed some alternate ways of integrating neighbor information in Appendix B, where we compare LEMoN conceptually and empirically with a baseline which uses semantic neighborhood information for a different purpose. We will add more exposition to this discussion in the revision. > It is not clear if filtering noisy data is preferred to techniques that learn in the presence of label noise. Empirical comparison with [2] on captioning datasets would strengthen the paper. > When are filtration based approaches preferred over techniques that learn with label noise? In our view, noise-robust training algorithms are a disjoint field of work from noisy label identification. In particular, identifying noisy labels is a more flexible approach, with applications beyond just removing these samples for downstream model training. By identifying mislabeled samples, we can also characterize systematic errors or biases in datasets (such as in Figures I.4 and I.5), which can then be fixed, both by repairing existing data and improving future data collection practices.
This is especially important for practitioners looking to release high-fidelity datasets for others to train (and especially evaluate) on, as mislabeled samples in test sets have been shown to destabilize ML benchmarks [3]. We will add some discussion on this to the revised version of the paper. Finally, per-sample mislabel identification methods such as LEMoN can also be deployed to flag incorrect human inputs in an online setting. One particular example might be flagging simple mistakes made by radiologists when writing notes from chest X-rays (as motivated by our MIMIC-CXR setting). [1] Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations. ICLR 2022. [2] Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels. ICML 2020. [3] Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. NeurIPS 2021. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments and for conducting additional experiments. I am satisfied with the explanations and have thus increased my score.
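As an illustration of the alternate neighborhood score discussed in the rebuttal above: a minimal sketch, assuming cosine distance in a shared CLIP embedding space stands in for $d_{mm}$, $d_{\mathcal{X}}$, and $d_{\mathcal{Y}}$, with placeholder hyperparameter values; the helper names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def alternate_score(x_emb, y_emb, X_nbr, Y_nbr, beta=1.0, gamma=1.0,
                    tau1_n=1.0, tau1_m=1.0):
    """s(x, y) = d_mm(x, y) + beta * s_n + gamma * s_m, where the neighbor
    terms average the multimodal distance from each neighbor to the paired
    item, down-weighted by how far the neighbor is from the query.
    """
    def cos_dist(a, b):
        # Cosine distance used as a stand-in for all distances (assumption).
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    d_mm = cos_dist(x_emb, y_emb)
    # s_n: distance from neighbor images x_{n_j} to this caption y,
    # weighted by exp(-tau * d(x, x_{n_j})).
    s_n = np.mean([cos_dist(xn, y_emb) * np.exp(-tau1_n * cos_dist(x_emb, xn))
                   for xn in X_nbr])
    # s_m: distance from this image x to neighbor captions y_{m_j},
    # weighted by exp(-tau * d(y, y_{m_j})).
    s_m = np.mean([cos_dist(x_emb, yn) * np.exp(-tau1_m * cos_dist(y_emb, yn))
                   for yn in Y_nbr])
    return d_mm + beta * s_n + gamma * s_m

# Toy example with 4-d "embeddings": a perfectly matched pair whose
# neighbors also match gets score 0 (lower = more likely a correct pair).
x = np.array([1.0, 0.0, 0.0, 0.0])
score = alternate_score(x, x, [x, x], [x, x])
```

A higher score flags the pair as more likely mislabeled; in practice the neighbor sets would come from a k-NN index over the CLIP embeddings.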
Summary: This paper presents LEMoN, a method for detecting label errors in image-text pair datasets. The authors define a scoring function in the CLIP embedding space that combines the pairwise image-text distance with distances to nearest neighbors in both the image and text modalities. Specifically, the score integrates multimodal distances and distance scores with neighbors in each modality (text, image). The proposed method is theoretically justified and empirically validated across eight classification and captioning datasets (including a healthcare dataset), consistently outperforming existing baselines. The authors conduct comprehensive ablation studies, with a strong focus on demonstrating the method’s robustness to variations in hyperparameters. Claims And Evidence: 1. I believe the proposed method to be well-motivated and theoretically sound, supported by Theorem 4.1 and Lemma 4.2. The empirical results are strong across several datasets, and the experiments are particularly thorough in evaluating the robustness to hyperparameter choices—a critical aspect given the number of hyperparameters involved. 2. While the proposed method demonstrates strong performance compared to existing baselines, its practical advantages remain somewhat unclear. - For instance, BLIP-based filtering methods may not be prohibitively expensive, as they can be fine-tuned on a domain-specific dataset (e.g., movie or biomedical domains) with relatively modest computational costs. Such fine-tuning could potentially outperform the proposed method—as suggested by results on the MS-COCO dataset. A direct comparison of computational overhead between LEMoN and these BLIP-based approaches would strengthen the paper’s claims. Furthermore, CapFilt appears to perform reasonably well even in a zero-shot setting (i.e., without fine-tuning on MS-COCO), and I am curious about this simple zero-shot CapFilt result.
Clarifying the necessity of LEMoN in contrast to these simpler alternatives would help reinforce its practical relevance. - At first glance, LEMoN's strong performance on mimiccxr seems to highlight its generalizability and ease of adaptation to out-of-domain data—a potential advantage over BLIP-based methods. However, since the proposed method also relies on a domain-specific encoder (BiomedCLIP), the comparison may not be entirely fair. I believe the advantage over the BLIP variant should be more clearly explained, particularly through comparisons with zero-shot CapFilt and fine-tuned CapFilt (e.g., on domain-specific datasets), along with an analysis of their relative computational costs. In addition, exploring whether the proposed scoring function could be integrated into or combined with existing methods like CapFilt might potentially enhance its practical utility. - Finally, many recent vision-language pipelines rely on synthetically generated high-quality captions (e.g., post-BLIP processing). In such scenarios, the role and added benefit of LEMoN seem less obvious. 3. Limited impact on CC3M. I believe one of the most practical use cases for label error correction lies in large-scale web-crawled image-text datasets. However, the marginal performance gain of the proposed method over the baseline—especially when it underperforms the default (unfiltered) setting—raises concerns about its practical effectiveness. I believe that points 2 and 3 must be thoroughly addressed in the rebuttal. Methods And Evaluation Criteria: I wrote them above. Theoretical Claims: I checked them. Experimental Designs Or Analyses: I wrote them above. Supplementary Material: I reviewed all the parts. Relation To Broader Scientific Literature: I believe the paper includes relevant citations overall. Essential References Not Discussed: However, the main topic is also closely related to the issue of false negatives in vision-language models.
The authors may consider including references such as [Chun et al., ECCV 2022] and [Byun et al., MAFA, CVPR 2024]. Other Strengths And Weaknesses: Wrote them above Other Comments Or Suggestions: Wrote them above Questions For Authors: Wrote them above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful review and constructive feedback! > For instance, BLIP-based filtering methods may not be prohibitively expensive, as they can be fine-tuned on a domain-specific dataset (e.g., movie or biomedical domains) with relatively modest computational costs. Such fine-tuning could potentially outperform the proposed method—as suggested by results on the MS-COCO dataset. A direct comparison of computational overhead between LEMoN and these BLIP-based approaches would strengthen the paper’s claims. Furthermore, CapFilt appears to perform reasonably well even in a zero-shot setting (i.e., without fine-tuning on MS-COCO), and I am curious about this simple zero-shot CapFilt result. Clarifying the necessity of LEMoN in contrast to these simpler alternatives would help reinforce its practical relevance. *CapFilt Inference Runtime*: We compare the inference time (per-sample runtime, milliseconds) of LEMoN with CapFilt using the same setup as Table I.11. We find that the two methods have generally comparable inference runtimes.

| | mscoco | flickr30k | mimiccxr | mmimdb |
| :-------- | ----------: | ----------: | -----------: | -----------: |
| LEMoN | 18.8 (1.8) | 35.9 (1.2) | 52.2 (2.7) | 21.1 (1.4) |
| CapFilt | 21.4 (9.9) | 28.7 (23.8) | 31.6 (0.2) | 33.8 (3.0) |

*Advantages of LEMoN over CapFilt*: We clarify that for the results reported in the paper, we utilized the pretrained “Salesforce/blip-itm-base-coco” checkpoint. This model was trained on the **clean** training split of MSCOCO, which is why we refer to it as the "oracle". This training includes minimizing image-text matching loss, which is a binary classification objective designed to predict whether a caption matches a given image. LEMoN never needs access to the clean dataset (except optionally for hyperparameter tuning), only the noisy data. As such, we clarify that CapFilt is not applied in a “zero shot” setting, especially not for MSCOCO.
Further, we note that CapFilt appears to be more domain specific than LEMoN. As CapFilt has been trained on clean MSCOCO, it does well on MSCOCO and Flickr30k (both contain COCO‐style captions), but does worse than LEMoN on mmimdb. > At first glance, LEMoN's strong performance on mimiccxr seems to highlight its generalizability and ease of adaptation to out-of-domain data—a potential advantage over BLIP-based methods. However, since the proposed method also relies on a domain-specific encoder (BiomedCLIP), the comparison may not be entirely fair. I believe the advantage over the BLIP variant should be more clearly explained —particularly through comparisons with zero-shot CapFilt and fine-tuned CapFilt (e.g., on domain-specific datasets), along with an analysis of their relative computational costs. In addition, exploring whether the proposed scoring function could be integrated into or combined with existing methods like CapFilt might potentially enhance its practical utility. We clarify that all other baselines on MIMIC-CXR except CapFilt do also utilize BiomedCLIP (where applicable), and LEMoN outperforms all such baselines. We highlight that we have also explored label error detection without an external domain-specific encoder for MIMIC-CXR in Table 4. Finally, as the reviewer points out, one can leverage representations from BLIP for label error detection with LEMoN, and LEMoN's score could also be combined with other mislabel scores (e.g. through ensembling). We highlight this as one area of future work. > Finally, many recent vision-language pipelines rely on synthetically generated high-quality captions (e.g., post-BLIP processing). In such scenarios, the role and added benefit of LEMoN seems less obvious. We highlight that such vision-language pipelines assume that high quality synthetic caption generators already exist. LEMoN is designed to improve these synthetic caption generators by providing them with better real training data to start with (e.g. 
Section 6.2). Additionally, LEMoN may be used to detect errors in synthetic captions as well, thus potentially improving the filtering component of such vision-language pipelines. > I believe the paper includes relevant citations overall; however, the main topic is also closely related to the issue of false negatives in vision-language models. The authors may consider including references such as [Chun et al., ECCV 2022] and [Byun et al., MAFA, CVPR 2024]. Thank you for these references – we will add them in the revised version of the paper! --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I will maintain my score (though I lean more toward a borderline recommendation), as I still have concerns regarding the practical use case of the proposed method and the marginal results on the CC3M benchmark. While I don't consider these to be paper-killing issues, I strongly recommend that the authors clarify these points in the final version if the paper is accepted. --- Reply to Comment 1.1.1: Comment: Thank you again for the constructive feedback! Regarding the CC3M results, we believe that the improvement is only marginal as CC3M is already filtered to some extent --- it has gone through four filtering steps as described in [1, Section 3]. We note that we do also conduct experiments on another, even larger-scale dataset (DataComp) in Table I.13, where LEMoN outperforms the CLIP similarity baseline as well as unfiltered training. Additionally, we also emphasize that identifying incorrectly labeled data points has utility beyond just removing these samples for downstream model training. For example, we can detect systematic errors or biases in datasets (such as in Figures I.4 and I.5), and improve data collection strategies. This is especially important for practitioners looking to release high-fidelity datasets for others to train and evaluate on. We will clarify this in the revision if the paper is accepted.
Thank you again for engaging with us during the rebuttal period! [1] Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning. ACL 2018.
Sharp Generalization for Nonparametric Regression by Over-Parameterized Neural Networks: A Distribution-Free Analysis in Spherical Covariate
Accept (spotlight poster)
Summary: This paper studies the generalization bounds of two-layer neural networks under the neural tangent kernel (NTK) regime. Using the critical radius as an error measure, this paper establishes distribution-free generalization bounds for the network, which recover the optimal bounds derived in the previous literature in certain distributional cases. Moreover, this paper also reduces the requirement on the neural network width in the literature using a new error decomposition. Claims And Evidence: Yes. Methods And Evaluation Criteria: NA Theoretical Claims: The proof seems to be solid, with a clear proof roadmap and justified technical improvements. Experimental Designs Or Analyses: NA Supplementary Material: I have reviewed the proofs in the appendix roughly. Relation To Broader Scientific Literature: This paper improves the results in the literature, establishing distribution-free generalization bounds for two-layer neural networks under the NTK regime, while previous literature mainly focuses on specific distributions. Also, this paper reduces the requirement on the neural network width by using a new error decomposition. The contributions are mainly technical improvements. Essential References Not Discussed: I recommend adding this paper, which, as far as I know, is the first to study early stopping under the RKHS framework. > Yao, Yuan, Lorenzo Rosasco, and Andrea Caponnetto. “On Early Stopping in Gradient Descent Learning.” Constructive Approximation 26 (August 1, 2007): 289–315. https://doi.org/10.1007/s00365-006-0663-2. Other Strengths And Weaknesses: ### Weaknesses The setting of the problem seems quite simplified in this paper. For example, the neural network is a two-layer network with only trainable inner weights. Also, while the distribution of the data can be arbitrary, the distribution is required to be supported on the sphere. These could impact the generality of the results.
Other Comments Or Suggestions: Minor: The choice of "early stopping" or "early-stopping" should be consistent throughout the paper. Questions For Authors: 1. What does "preconditioned" in Section 4 heading "Preconditioned Gradient Descent" refer to? 2. It seems that in the literature the EDR remains $d/(d-1)$ for various data distributions. Can you provide concrete examples of other EDRs of the NTK under some distribution? 3. Regarding the simulation, what is the performance of the neural network under the theoretical stopping time $\hat{T}$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the review and its suggestions. The raised issues are addressed below. In the following text, the line numbers are for the revised paper. Regarding the weakness, we emphasize that the two-layer neural network studied in this paper still achieves the sharp risk bound of $O(\epsilon^2)$, supporting the claim in [Bietti & Bach, 2021] that shallow over-parameterized neural networks with ReLU activations exhibit the same approximation properties as their deeper counterparts. Regarding the suggested reference and the wording suggestion, we will discuss [Yao et al., 2007], and use “early stopping” consistently in the final version of this paper. Below is our response to the questions. **(1) "What does "preconditioned" in Section 4 heading "Preconditioned Gradient Descent" refer to?"** The "preconditioned" in the Section 4 heading is a typo, and we have fixed it; the fix will be reflected in the final version of this paper. **(2) "It seems that in the literature the EDR remains $d/(d-1)$ for various data distributions. Can you provide concrete examples of other EDRs of the NTK under some distribution?"** Theorem 10 of [Li et al., 2024] suggests that the polynomial eigenvalue decay rate (EDR) of $\lambda_j \asymp j^{-(d+1)/d} $ is achieved by learning the bias in a neural network, which is different from the EDR of $\lambda_j \asymp j^{-d/(d-1)} $ discussed in this paper. In this case, the main results (Theorem 5.1 and Corollary 5.2) can be applied to obtain the minimax optimal nonparametric regression risk of the order $O(n^{-(d+1)/(2d+1)})$.
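For reference, both rates quoted here follow from the same fixed-point calculation. Below is a sketch assuming the standard local-Rademacher kernel complexity $R_K(\epsilon) \asymp \sqrt{\frac{1}{n}\sum_j \min\{\lambda_j, \epsilon^2\}}$ and the fixed-point condition $R_K(\epsilon) \asymp \epsilon^2$ (constants and the noise scale $\sigma_0$ suppressed; this is the textbook computation, not taken from the paper under review):

```latex
% Critical radius under polynomial eigenvalue decay \lambda_j \asymp j^{-\alpha}.
% Splitting the sum at j^* \asymp \epsilon^{-2/\alpha}, head and tail are of
% the same order:
\begin{align*}
R_K(\epsilon)^2 &\asymp \frac{1}{n}\sum_{j \ge 1} \min\{\lambda_j, \epsilon^2\}
                 \asymp \frac{\epsilon^{2 - 2/\alpha}}{n}, \\
R_K(\epsilon) \asymp \epsilon^2
  &\;\Longleftrightarrow\; \frac{\epsilon^{2 - 2/\alpha}}{n} \asymp \epsilon^4
  \;\Longleftrightarrow\; \epsilon_n^2 \asymp n^{-\frac{\alpha}{\alpha + 1}}.
\end{align*}
% \alpha = d/(d-1):  \epsilon_n^2 \asymp n^{-d/(2d-1)}
% \alpha = (d+1)/d:  \epsilon_n^2 \asymp n^{-(d+1)/(2d+1)}
```

Plugging in $\alpha = d/(d-1)$ and $\alpha = (d+1)/d$ recovers the two rates above.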
**(3) "Regarding the simulation, what is the performance of the neural network under the theoretical stopping time $\hat T$?"** In the following table we report the minimum test loss and the test loss at the theoretical stopping time $\Theta(\hat T) = 9 n^{d/(2d-1)}$ of the neural network trained in our simulation study (in Section D of the appendix), with the training data size $n$ ranging over $[100,1000]$ in increments of $100$. It can be observed that the test loss at the theoretical stopping time $\Theta(\hat T)$ is only marginally higher than the minimum test loss for each training data size $n$, justifying the good generalization of the model at the stopping time $\Theta(\hat T)$.

| Training Data Size ($n$) | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 |
|:----------------------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| Test Loss at Step $\Theta(\hat T)$ | 0.3269 | 0.1901 | 0.1735 | 0.1425 | 0.1127 | 0.1058 | 0.0853 | 0.0771 | 0.0769 | 0.0665 |
| Minimum Test Loss | 0.3265 | 0.1889 | 0.1732 | 0.1419 | 0.1098 | 0.1039 | 0.0850 | 0.0771 | 0.0767 | 0.0663 |

**References** [Li et al., 2024] Li, Y., Yu, Z., Chen, G., and Lin, Q. On the eigenvalue decay rates of a class of neural-network related kernel functions defined on general domains. JMLR 2024.
Summary: This paper addresses the generalization capabilities of over-parameterized two-layer neural networks (NNs) trained by gradient descent (GD) with early stopping for nonparametric regression. The authors establish a sharp generalization bound for the nonparametric regression risk. This result is distribution-free, meaning it does not rely on specific distributional assumptions about the covariate, as long as the covariate lies on the unit sphere. The authors also provide minimax optimal rates for specific cases, such as when the eigenvalues of the NTK decay polynomially. The paper contributes to the theoretical understanding of over-parameterized NNs by bridging the gap between classical kernel regression and finite-width NNs trained by GD. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: No Supplementary Material: No Relation To Broader Scientific Literature: The contributions of this paper are related the community of deep learning theory. Essential References Not Discussed: Some related papers. 1. Dinghao Cao, Zheng-Chu Guo, and Lei Shi. Stochastic gradient descent for two-layer neural networks. arXiv preprint arXiv:2407.07670, 2024. 2. Mike Nguyen and Nicole Mücke. How many neurons do we need? A refined analysis for shallow networks trained with gradient descent. Journal of Statistical Planning and Inference, page 106169, 2024. Other Strengths And Weaknesses: Strengths: 1. The authors establish sharp generalization bounds for over-parameterized neural networks based on weaker assumptions, including distributional assumptions and the eigenvalues of the empirical NTK matrix. 2. The proof techniques, particularly the use of local Rademacher complexity and uniform convergence to the NTK, are novel and provide a fresh perspective on analyzing the generalization of over-parameterized NNs. Weaknesses: 1. The analysis is restricted to covariates lying on the unit sphere. 2.
The results assume that the target function lies in the RKHS associated with the NTK. This assumption may not hold in practice. 3. The paper focuses exclusively on two-layer NNs. Other Comments Or Suggestions: 1. Page 1, line 34, right, "learning learning" should be "learning" 2. Page 7, line 348, "limit the number of steps T in for Algorithm" delete for 3. Page 8, line 409, r or r'? Questions For Authors: 1. Can the results be extended to deeper neural networks? If so, what additional challenges arise in the analysis? 2. The assumption that $f^*\in\mathcal{H}_K$ may not hold in practice. How robust are the results if this assumption is relaxed? Are there any results for target functions outside the RKHS? 3. The paper suggests that a constant learning rate can be used. Are there any conditions under which a varying learning rate might be beneficial, or is a constant rate always sufficient? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the review and its suggestions. The raised issues are addressed below. In the following text, the line numbers are for the revised paper. (1) “Can the results be extended to deeper neural networks? If so, what additional challenges arise in the analysis?” Yes, this work can be extended to deeper neural networks. The key roadmap is as follows. (i) We can establish the uniform convergence to the NTK of the deeper neural network using the existing uniform convergence results in [Li et al., 2024], and still apply the proof strategy in Theorem C.8 of this paper to decompose the neural network function $f$ by $f = h+e$, where $h \in \mathcal H_K$ is the “kernel part” of $f$ and $e$ is the error function. (ii) Lemma C.9 of this paper can then still be used to bound the local Rademacher complexity of all such neural network functions, and we can use Theorem C.10 of this paper together with the convergence results for the training loss of deeper neural networks widely studied in the literature, such as [Li et al., 2024; Allen-Zhu et al., 2019], to prove the sharp nonparametric risk bound of $O(\epsilon_n^2)$, where $\epsilon_n$ is the critical population rate of the NTK of the deeper neural network. We would like to emphasize that the two-layer neural network studied in this paper still achieves the sharp risk bound of $O(\epsilon_n^2)$, supporting the claim in [Bietti & Bach, 2021] that shallow over-parameterized neural networks with ReLU activations exhibit the same approximation properties as their deeper counterparts. (2) "The assumption that $f^* \in \mathcal H_K$ may not hold in practice. How robust are the results if this assumption is relaxed? Are there any results for target functions outside the RKHS?"
We would like to point out that our generalization results can be extended to the relaxed case where the target function $f^* \in \mathcal H_K(\mu_n)$ with $\mu_n$ diverging, that is, $\mu_n \to \infty$ as $n \to \infty$. In this relaxed case, $f^*$ is not in an RKHS ball of constant radius, that is, $f^* \notin \mathcal H_K(\mu_0)$ with $\mu_0$ being the positive constant considered in this paper. We remark that in this relaxed case, the RKHS norm of $f^*$ goes to $\infty$ as $n \to \infty$, which is close to the setup considered in [Bordelon et al., 2025] where the RKHS norm of $f^*$ goes to $\infty$ (with $\beta < 1$ in their work). We note that Theorem C.10 still holds with $B_h = \mu_n+1+\sqrt 2$ and $B_0 = (B_h+\mu_n)/{\sqrt 2} + 1$, and Theorem C.10 with this new $B_0$ can be used to prove the main result, a new version of Theorem 5.1, for this relaxed case. The new version of Theorem 5.1 would have a nonparametric regression risk bound, $Q(B_0) \epsilon_n^2$, which has a multiplicative factor $Q(B_0)$ depending on $B_0$ before $\epsilon_n^2$, and this risk bound still converges to $0$ if $Q(B_0) \epsilon_n^2 \to 0$ as $n \to \infty$. For example, when $P$ is the spherical uniform distribution, we have $\epsilon_n^2 \asymp n^{-d/(2d-1)}$, and such a risk bound converges to $0$ if $Q(B_0) n^{-d/(2d-1)} \to 0$. (3) "The paper suggests that a constant learning rate can be used. Are there any conditions under which a varying learning rate might be beneficial, or is a constant rate always sufficient?" A varying learning rate could be beneficial compared to a constant learning rate in the nonconvex optimization literature, such as the optimization of DNNs, which will be discussed in the final version of this paper. In this paper, we state that the constant learning rate $\eta = \Theta(1)$ leads to an empirically faster convergence speed for training the neural network than the literature where an infinitesimal learning rate $\eta \to 0$ is used.
Furthermore, we will fix the typos, and $r'$ should be $r$ in line 409. We will also discuss the suggested works in “Essential References Not Discussed” in the final version of this paper. **References** [Li et al., 2024] Li, Y., Yu, Z., Chen, G., and Lin, Q. On the eigenvalue decay rates of a class of neural-network related kernel functions defined on general domains. JMLR 2024. [Allen-Zhu et al., 2019] Allen-Zhu, Z., Li, Y., and Song, Z. A convergence theory for deep learning via over-parameterization. ICML 2019. [Bietti & Bach, 2021] Bietti, A. and Bach, F. R. Deep equals shallow for ReLU networks in kernel regimes. ICLR 2021. [Bordelon et al., 2025] B. Bordelon, A. Atanasov, and C. Pehlevan. How Feature Learning Can Improve Neural Scaling Laws. ICLR 2025.
Summary: This manuscript contributes to the generalization analysis of overparameterized ReLU neural networks (with one hidden layer) in the context of nonparametric regression tasks. The training data $\{(\overrightarrow{x_i}, y_i)_{i=1}^n\}$ is assumed to be such that $\overrightarrow{x_i}$ are drawn from a unit sphere $\mathbb{S}^{d-1}$ in $\mathbb{R}^d$, while the response $y_i$ is generated by a target function $f^*$ belonging to an RKHS ball $\mathcal{H}_K$, perturbed with random sub-Gaussian noise. Specifically, consider $\widehat{f}$ to be such a neural network trained by gradient descent with constant learning rate and early stopping, it is proved that the expected risk $\mathbb{E}[(\widehat{f} - f^*)^2]$ converges to $0$ at a rate $\mathcal{O}(\varepsilon_n^2)$, where $\varepsilon_n$ is the critical population rate of the NTK associated with the network, and $n$ is the number of the training data. Notably, this result holds without imposing any additional distributional assumptions on the input $\overrightarrow{x_i}$. As a corollary of the above-mentioned main result, consider the special case when the eigenvalues of the integral operator associated with the reproducing kernel $K$ decays polynomially, the minimax optimal rate $\mathcal{O}(n^{-\frac{d}{2d-1}})$ in nonparametric regression can be obtained. Simulation studies are presented at the end of the Appendix to illustrate that the ratio between the "empirical early stopping time" and the "theoretically predicted early stopping time" remains stable across different values of $n$. This stability suggests a proportional relationship between empirical observations and the theoretical predictions of early stopping time. Claims And Evidence: Yes, the claims made in the submission are supported by clear evidence. All theoretical results are backed by proofs that appear correct based on my assessment. I did not identify any problematic claims. 
Methods And Evaluation Criteria: The paper primarily focuses on shallow yet overparameterized ReLU neural networks with a single hidden layer, a well-studied setting in theoretical analysis. The hidden layer is trained using gradient descent with a constant learning rate, while the second-layer weights remain fixed. This network architecture and training approach are reasonable choices. While considering deeper neural networks (i.e., those with multiple hidden layers) would better align with practical applications, the use of a shallow network is understandable given the theoretical focus of the study. Theoretical Claims: Yes, I have reviewed the proofs presented in the paper. Based on my assessment, they appear to be correct. Experimental Designs Or Analyses: This paper is a theoretical work and include only a simple simulation study. It does not include any real-data experiments. Supplementary Material: I confirm that I have read the majority of the supplementary material. Relation To Broader Scientific Literature: This paper contributes to a deeper understanding of the generalization analysis of overparameterized neural networks with algorithmic guarantees. Essential References Not Discussed: I am not aware of any essential reference that is missing so far. Other Strengths And Weaknesses: Strengths: 1. I appreciate the presentation of this work. The theoretical proofs are easy to follow. The authors did a good job emphasizing the novelty of their results, comparing to previous works such as (Hu et al., 2021; Suh et al., 2022), which is achieving the same rate of convergence in nonparametric regression without imposing distribution assumption on the input covariate as long as it lies in a $d$-dimensional sphere. 2. 
As a follow-up to the previous point, I believe the novelties (from a technical viewpoint) of this work include: (i) Adopting a new error decomposition $f_t = h_t + e_t$ such that $e_t$ is an error function bounded by the width $w$, while $h_t$ lies in a bounded RKHS ball. With this decomposition, the authors no longer require approximating the kernel regressor $\widehat{f}_t^{(NTK)}$ and no longer need the uniform distributional assumption on the input covariate. (ii) Establishing a lower bound on the network width that depends on $d$ and $\epsilon_n$, and does not depend on $(\widehat{\lambda}_i)$. (iii) Using local Rademacher complexity to, in turn, bound the Rademacher complexity of the hypothesis space consisting of all the neural network functions trained by GD. 3. Admittedly, studying the theoretical guarantees of training neural networks via gradient descent in the NTK regime is not entirely new. However, overall, I believe this work is technically solid. For other weaknesses, please refer to the questions below. Other Comments Or Suggestions: See questions below. Questions For Authors: 1. In the paper, the authors presented a special example: when the eigenvalues decay polynomially as $\lambda_j \asymp j^{-\frac{d}{d-1}}$, a convergence rate of the expected risk of $\mathcal{O}(n^{-\frac{d}{2d-1}})$ can be obtained, which is considered minimax optimal as claimed by previous papers like (Yang and Barron, 1999; Yuan and Zhou, 2016). Can the authors provide any other examples as extensions of Theorem 5.1? For example, if the target function $f^*$ belongs to an RKHS ball induced by the Gaussian kernel, what should be the convergence rate of the expected risk? 2. I have some concerns about the practicality of implementing the early stopping rule for the algorithm. The early stopping time $\widehat{T}$ defined at (8) relies on the empirical kernel complexity $\widehat{R}_K$.
If one does not know which RKHS the target function $f^*$ lies in (or more specifically, if one has no information on the behaviour of eigenvalues $\lambda_i$), how should one estimate $\widehat{R}_K$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the review and its suggestions. The raised issues are addressed below. (1) "Can the authors provide any other examples as extensions of Theorem 5.1? For example, if the target function belongs to an RKHS ball induced by the Gaussian kernel, what should be the convergence rate of the expected risk?" It is known that the eigenvalue decay rate (EDR) of the Gaussian kernel on the unit sphere in $\mathbb R^d$ is $\lambda_j \asymp j^{-C j^{2/d}}$ (the super-geometric decay in [Scetbon et al., 2021]), where $C$ is a positive constant depending on $d$, and kernel regression with the Gaussian kernel has a regression risk of the order $O(\log n/n)$ by Theorem 3.1 of [Scetbon et al., 2021]. Furthermore, Theorem 10 of [Li et al., 2024] suggests that the polynomial eigenvalue decay rate (EDR) of $\lambda_j \asymp j^{-(d+1)/d}$ is achieved by learning the bias in a neural network, which is different from the EDR of $\lambda_j \asymp j^{-d/(d-1)}$ discussed in this paper. In this case, the main results (Theorem 5.1 and Corollary 5.2) can be applied to obtain the minimax optimal nonparametric regression risk of $O(n^{-(d+1)/(2d+1)})$. (2) "If one does not know which RKHS the target function $f^*$ lies in (or more specifically, if one has no information on the behaviour of the eigenvalues $\lambda_i$), how should one estimate $\hat R_K$?" We respectfully point out that $\hat R_K$ in Eq. (7) is defined in terms of the (empirical) eigenvalues $\hat \lambda_i$ ($i \in [n]$) of the kernel gram matrix $\mathbf K \in \mathbb R^{n \times n}$ defined on the training data. In the case that one does not know the behavior of the population eigenvalues $\lambda_i$ for $i \ge 1$, one can still estimate the stopping time $\hat T$ by the fixed point of $\hat R_K$ with the eigenvalues $(\hat \lambda_i )_{i=1}^n$ of the gram matrix $\mathbf K$.
To do so, we can estimate the fixed point by numerically solving the equation $\hat R_K(\hat \epsilon) = \hat \epsilon^2/\sigma_0$. **References** [Scetbon et al., 2021] M. Scetbon, Z. Harchaoui. A Spectral Analysis of Dot-product Kernels. AISTATS 2021.
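For concreteness, the numerical fixed-point estimation described in the rebuttal can be sketched as follows. This is an illustration, not the paper's code: the specific form $\hat R_K(\epsilon) = \sqrt{\tfrac{2}{n}\sum_{i=1}^n \min(\hat\lambda_i, \epsilon^2)}$ is the standard empirical kernel complexity and is assumed here, as are $\sigma_0 = 1$ and the toy eigenvalue decay.

```python
import numpy as np

def empirical_complexity(eps, eigvals):
    # Assumed form of R_hat_K(eps): the standard empirical kernel
    # complexity built from the Gram-matrix eigenvalues lambda_hat_i.
    n = len(eigvals)
    return np.sqrt(2.0 / n * np.sum(np.minimum(eigvals, eps ** 2)))

def critical_radius(eigvals, sigma0=1.0, lo=1e-8, hi=10.0, iters=200):
    """Bisection for the fixed point R_hat_K(eps) = eps^2 / sigma0.

    R_hat_K grows linearly in eps near zero and is bounded above, while
    eps^2 / sigma0 starts below it and eventually dominates, so the sign
    of the difference changes exactly once on (0, inf).
    """
    g = lambda e: empirical_complexity(e, eigvals) - e ** 2 / sigma0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy empirical spectrum with polynomial decay, lambda_hat_j ~ j^{-2}.
n = 500
eigvals = np.arange(1, n + 1, dtype=float) ** -2.0
eps_hat = critical_radius(eigvals, sigma0=1.0)
```

The stopping time $\hat T$ would then be read off from $\hat\epsilon$ (e.g., $\hat T \propto 1/\hat\epsilon^2$ in typical early-stopping analyses).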
Summary: The authors study risk rate convergence of the infinite-width NTK for two-layer neural networks with ReLU activations trained with gradient descent and early stopping. The authors present theoretical results that relax some assumptions made in prior work: specifically, it is common for results to be derived on data sampled uniformly from the hypersphere, and the authors recover these results for data on the sphere without requiring uniform sampling. Claims And Evidence: The claims are well supported by proofs which do not rely on uniform spherical data. The scope of the paper itself seems fairly incremental and the results seem to be largely a verification of existing theoretical results in this slightly relaxed setting. It looks like the proofs offer some novelty in proof techniques. Methods And Evaluation Criteria: N/A Theoretical Claims: Did not check proofs in detail but skimmed them & the proof overview and it looks reasonable Experimental Designs Or Analyses: N/A Supplementary Material: Skimmed some proofs, did not look in detail Relation To Broader Scientific Literature: This work contextualizes its results within much related work Essential References Not Discussed: N/A Other Strengths And Weaknesses: It seems the main results of this paper are moving from uniform spherical data to spherical data (not necessarily uniform), and showing that this setting recovers optimal rates from existing work. Further, this paper works in the setting where only the first-layer weights are trained and the second-layer weights are initialized randomly and fixed, whereas it seems the prior work trains all layers of the network. It seems like the authors here are trading off one somewhat restrictive assumption (uniform spherical data) for another (fixing the second weight layer to random +1/-1 weights). I'm not sure if I see one assumption as more restrictive than the other. 
The authors also reduce the lower bound on the width of the network required to achieve optimal rates, which seems to be a new contribution (I am not familiar enough with the literature to know). Overall the contributions seem incremental but altogether can provide a useful basis for future work studying generalization bounds, hence my score. Other Comments Or Suggestions: N/A Questions For Authors: The paper Ko & Huo (2024) seems to have different relaxations of the data distribution that do not even require the data to be on the hypersphere; are those results comparable to the results in Table 1? If so, would the authors consider explaining the relationship between the two in more detail? As far as I can see, they also give some rates. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the review and its suggestions. The raised issues are addressed below. In the following text, the line numbers are for the revised paper. **(1) "... It seems like the authors here are trading off one somewhat restrictive assumption (uniform spherical data) for another (fix the second weight layer to random +1/-1 weights)."** We respectfully point out that the training setup (fixing the weights of the second layer to random +1/-1 weights) is not an assumption; such a training setup reduces the training cost by not training the second layer while still obtaining a neural network with a sharp risk bound. This training setup has also been used in prior analyses of the generalization of two-layer neural networks, such as [Du et al., 2019]. **(2) "The paper Ko & Huo (2024)…"** In this paper, we focus on the generalization analysis of neural networks with algorithmic guarantees, that is, generalization bounds for neural networks trained by gradient descent or its variants. Such analysis is of particular practical importance since the neural networks used in practice are mostly trained by GD or its variants. Although the work mentioned in this review, Ko & Huo (2024), provides certain risk rates, as mentioned in lines 181-182, it does not provide algorithmic guarantees, and the neural networks achieving such rates are not trained by GD or its variants. **Novelty of This Paper** We would like to emphasize the novelty of this paper, which gives the first sharp nonparametric regression risk bound without distributional assumptions on the unit sphere. Our analysis introduces new proof techniques, including new uniform convergence results for the NTK and a novel local Rademacher complexity based analysis (acknowledged by Reviewers EyzF and JB4U), which lead to distribution-free generalization bounds with spherical covariates. Our results also give a better lower bound on the network width compared to the existing literature. 
Furthermore, in our response to Reviewer JB4U (points (1) and (2)), we provide detailed discussions about (1) how to extend our work from a two-layer neural network to a deeper neural network, and (2) how to relax the assumption that the target function $f^*$ is in an RKHS ball of constant radius. The current novelty and contributions, together with such extensions and relaxations, would benefit the theoretical deep learning literature with new insights and proof strategies. **References** [Du et al., 2019] Du, S. S., Zhai, X., Poczos, B., and Singh, A. Gradient descent provably optimizes over-parameterized neural networks. ICLR 2019.
Screener: Self-supervised Pathology Segmentation Model for 3D Medical Images
Reject
Summary: In this paper, an unsupervised visual anomaly detection algorithm is proposed, which the authors describe as a segmentation algorithm, although I disagree with this characterization. The method exploits the inherent rarity of pathological patterns compared to healthy ones. Two different self-supervised learning strategies are employed to train a descriptor and a condition model. The outputs of these models are then used to train a density model that generates voxel-wise anomaly scores. The model, trained on over 30,000 unlabeled 3D CT volumes, appears to outperform existing methods on four test datasets, comprising 1,820 scans with diverse pathologies. Claims And Evidence: The authors claim to introduce a pathology segmentation algorithm designed for accurate segmentation of all pathological findings in 3D medical images, with the ability to handle pathology classes beyond those in the training datasets. They also claim to reframe pathology segmentation as an unsupervised visual anomaly segmentation problem. However, I believe it is inaccurate to describe the algorithm as a segmentation model, and there is no actual reframing: the algorithm is designed for anomaly detection. Additionally, while I noticed that disjoint data are used for training and testing, this does not by itself mean that novel pathology classes exist in the testing data, so it is unclear that the first claim is well justified. Do testing images include novel pathology classes not present in the training data? Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: My concerns are: 1) Whether the claim that the proposed method can be used for novel anomaly detection is justified, and 2) that the algorithm is for anomaly detection rather than segmentation. For an anomaly detection algorithm evaluation, the experimental design and evaluation metric seem okay. Supplementary Material: Yes. I reviewed all four sections in SM. 
Relation To Broader Scientific Literature: This paper is closely related to anomaly detection algorithms like DRAEM in both the computer vision and medical imaging domains. Essential References Not Discussed: Some recent related works, such as [1], are not mentioned or compared. It remains unclear whether the proposed method outperforms them and to what extent improvement is achieved. [1] Huang, Chaoqin, et al. "Adapting visual-language models for generalizable anomaly detection in medical images." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Other Strengths And Weaknesses: The idea is interesting and the results seem promising. However, I have the following two concerns that need to be addressed: 1. It is unclear what assumptions are made about the training data but not explicitly stated. Does the training data need to be dominated by images without pathology? 2. It is unclear how different the features extracted by the descriptor and condition models are and, if different, whether one can be inferred from the other, which is crucial for training the density model. My concern is that if the input feature pairs are significantly different, how can one be reliably inferred from the other? Conversely, if they are too similar, the output of the density model would be less meaningful. Other Comments Or Suggestions: 1. I believe the proposed method can only be considered a segmentation method if its final output is a segmentation mask. I encourage reconsidering the use of terms like ‘segment’ and ‘segmentation’ throughout the paper. 2. I suggest analyzing the differences between the outputs of the descriptor and condition models and providing more insights and visualizations to help readers better understand the rationale. Questions For Authors: 1. Are there any assumptions about the training data? Would the method still work with a dataset consisting entirely of images with pathology? 2. 
It is mentioned that "the descriptor model must generate descriptors that effectively differentiate between pathological and normal positions." However, how is this guaranteed by the adopted training strategy? 3. How different are the features extracted by the descriptor and condition models? How reliably can the feature from the condition model be used to infer the feature from the descriptor model? Have you compared the inferred features with the extracted ones? My concern is that if the input feature pairs are significantly different, how can one be reliably inferred from the other? Conversely, if they are too similar, the density model’s output may be less meaningful, calling into question the rationale of the proposed method. 4. How did you determine the patch size for training? Would a smaller or larger patch size work as well? Specifically, I am interested in how the patch size affects the differences between the features extracted by the descriptor and condition models. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer *pFWf*, thank you for your thorough review and valuable feedback. We appreciate your thoughtful questions and have carefully addressed each point below. **Terminology: anomaly detection vs. segmentation** We agree that terminology in this field can vary. In our work we use **"anomaly segmentation"** to refer specifically to pixel-level anomaly detection, **consistent with established literature [1-3]**. While supervised segmentation models produce probability maps, our method generates **anomaly score maps** that **can be thresholded to obtain segmentation masks**. To better align with medical image segmentation evaluation standards, **we have added Dice scores to Table 2 (https://pdfhost.io/v/n7Bt5c7zJE_table_2) and Tables 3-4 (https://pdfhost.io/v/2RLGCYvMgy_tables_3_4)**. Initially, we omitted Dice scores due to a mismatch between our problem statement (segmenting all pathologies) and the available ground truth masks (limited to specific pathologies — lung cancer in LIDC, pneumonia in MIDRC, liver tumors in LiTS, kidney tumors in KiTS). Note that this discrepancy leads to an **underestimation** of unsupervised models’ Dice scores, as many test images indeed contain additional anomalies detected by our model but not annotated in the ground truth masks (see Figure 2 and more examples at https://pdfhost.io/v/PNygYvbsYn_mismatch which we will add in the Appendix). **Training data assumptions** **Our key assumption:** *each pathological pattern is rarer (has lower density in an embedding space) than any normal pattern*. This holds true when abnormal patterns are diverse (few similar cases), while normal patterns recur frequently across patients. In practice, our training data distribution presents a mixture of 25K chest CTs of screening patients (NLST) and 7K abdominal CTs from hospitals (AMOS, AbdomenAtlas). 
Our empirical results show that **our model is capable of detecting both chest and abdominal pathologies, despite being trained on such an imbalanced and uncurated dataset**. **Is Screener capable of detecting novel pathologies?** **Theoretically, Screener is capable of detecting novel pathologies** which are not present in the training dataset, because in this case their density is almost zero (not exactly zero due to the addition of Gaussian noise during training), and Screener will assign large negative log-density scores to them. Our **current evaluation uses pathologies (lung cancer, pneumonia, liver/kidney tumors) that likely exist in our training data**. While this doesn't demonstrate novel-class detection, it shows generalization across diverse manifestations of these pathologies. A compelling test would involve training on pre-pandemic data and evaluating on COVID-19 lesions. We will propose this as important future work in Section 6. **Are descriptor model features inferable from condition model features?** The descriptor model and condition model are trained separately and produce different pixel-level feature vectors, denoted as $y$ (descriptors) and $c$ (conditions). Theoretically, there is no functional relation between descriptors and conditions: two different pixels may have different descriptors and the same condition. However, our density model learns the conditional density $q(y \mid c)$ of descriptors for every given condition. Intuitively, if $-\log q(y \mid c)$ is large, the observed $y$ has low probability according to the conditional distribution and is treated as an anomaly. This intuition is well supported by both our quantitative and qualitative empirical results. **How do we ensure the descriptor model differentiates between pathological and normal regions?** We pre-train our descriptor model using the dense VICReg loss (Section 3, Appendix A). 
VICReg’s regularization encourages the descriptors' covariance matrix to be near identity, ensuring **feature maps are non-trivial** (unit variance along spatial dimensions) and **uncorrelated** (distinct channels capture different features). This helps descriptors distinguish between different pixels, particularly pathological and normal ones. **Patch size selection** We selected patch size and voxel spacing based on our previous experience with supervised pathology segmentation models. We hope that we have answered your questions and addressed your concerns. Please let us know if further clarifications are needed. **References** [1] Bergmann, Paul, et al. "MVTec AD--A comprehensive real-world dataset for unsupervised anomaly detection." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019. [2] Ghorbel, Ahmed, et al. "Transformer based models for unsupervised anomaly segmentation in brain MR images." International MICCAI Brainlesion Workshop. Cham: Springer Nature Switzerland, 2022. [3] Zou, Yang, et al. "Spot-the-difference self-supervised pre-training for anomaly detection and segmentation." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
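Returning to the conditional-density scoring described in the rebuttal above, here is a minimal sketch of the Gaussian variant of the density model on synthetic features. The dimensions, the linear relation between conditions $c$ and descriptors $y$, and all data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for pixel-level features: conditions c (pathology-ignorant
# context) and descriptors y (local appearance), correlated on "normal" data.
n, dc, dy = 5000, 4, 3
C = rng.normal(size=(n, dc))
W = rng.normal(size=(dc, dy))
Y = C @ W + 0.1 * rng.normal(size=(n, dy))

# Fit a joint Gaussian over z = (c, y) and read off the conditional q(y | c).
Z = np.hstack([C, Y])
mu = Z.mean(axis=0)
S = np.cov(Z, rowvar=False)
Scc, Scy = S[:dc, :dc], S[:dc, dc:]
Syc, Syy = S[dc:, :dc], S[dc:, dc:]
A = Syc @ np.linalg.inv(Scc)          # regression of y on c
cond_cov = Syy - A @ Scy              # covariance of y given c
cond_prec = np.linalg.inv(cond_cov)
_, logdet = np.linalg.slogdet(cond_cov)

def anomaly_score(c, y):
    """-log q(y | c): large when y is unlikely under its condition."""
    r = y - (mu[dc:] + A @ (c - mu[:dc]))
    return 0.5 * (r @ cond_prec @ r + logdet + dy * np.log(2 * np.pi))

c0 = rng.normal(size=dc)
normal_score = anomaly_score(c0, c0 @ W)          # descriptor fits condition
abnormal_score = anomaly_score(c0, c0 @ W + 5.0)  # inconsistent descriptor
```

A descriptor inconsistent with its condition receives a much larger score, which is exactly the behavior the rebuttal describes for anomalous pixels.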
Summary: The authors introduce Screener, a self-supervised 3D pathology segmentation model that formulates the task as an unsupervised visual anomaly segmentation (UVAS) problem. It utilizes self-supervised feature learning and a masking-invariant condition model within a density-based UVAS framework. Trained on 30,000+ unlabeled CT scans, Screener is evaluated on 1,820 test scans across four datasets, achieving AUROC up to 0.96. The study conducts a large-scale evaluation of UVAS for 3D CT images and explores self-supervised learning for medical image pathology segmentation. Claims And Evidence: The paper provides quantitative and qualitative evidence to support its claims through experiments, ablation studies, and comparisons with multiple baseline methods. Methods And Evaluation Criteria: The evaluation criteria are generally appropriate but warrant some scrutiny: - AUROC and AUPRO are standard for anomaly detection tasks, making them fitting choices given the UVAS framing. They effectively evaluate the model’s ability to detect rare pathological pixels, which aligns with the problem’s focus on identifying deviations from normal tissue. However, traditional segmentation metrics like the Dice coefficient or Jaccard index, which measure overlap between predicted and ground truth segments, are more common in clinical segmentation tasks. These metrics provide direct interpretability for clinicians (e.g., “How much of the tumor was correctly segmented?”), which AUROC and AUPRO do not. Including both anomaly detection and segmentation metrics would offer a more holistic assessment. - Outperforming existing UVAS methods demonstrates the effectiveness of Screener’s innovations. However, a comparison with supervised methods (where labeled data is available) would contextualize the performance gap, providing insight into how close the self-supervised approach comes to fully supervised benchmarks—a relevant consideration for clinical adoption. 
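To make the reviewer's distinction concrete, here is a short sketch contrasting the threshold-free AUROC with the threshold-dependent Dice coefficient on synthetic voxel-wise anomaly scores; the data, prevalence, and threshold are illustrative assumptions.

```python
import numpy as np

def dice(pred_mask, gt_mask, eps=1e-8):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + gt_mask.sum() + eps)

def auroc(scores, labels):
    """Voxel-level AUROC via the rank statistic (Mann-Whitney U)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic voxel-wise scores: rare anomalous voxels score higher on average.
rng = np.random.default_rng(0)
labels = rng.random(10_000) < 0.05            # rare pathology voxels
scores = rng.normal(size=10_000) + 3.0 * labels

a = auroc(scores, labels)                     # threshold-free ranking metric
d = dice(scores > 1.5, labels)                # overlap at a chosen threshold
```

AUROC summarizes ranking quality over all thresholds, while Dice reports overlap at one operating point, which is why the two can tell different stories about the same anomaly map.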
Theoretical Claims: NA Experimental Designs Or Analyses: It is better to provide some supervised baseline for reference—comparison with a fully supervised segmentation model could help contextualize how much performance is lost by using UVAS. Supplementary Material: Yes. Most of the parts. Relation To Broader Scientific Literature: The key contributions of the paper build on and extend several existing approaches in self-supervised learning, anomaly detection, and medical image segmentation. - Traditional supervised segmentation models rely on large labeled datasets (e.g., UNet, Ronneberger et al., 2015), which are scarce for medical imaging. - Self-supervised learning (SSL) has been successfully applied to natural images (e.g., SimCLR, VICReg), but its application to 3D CT medical images is less explored. This work uses dense self-supervised learning for 3D CT volumes. Essential References Not Discussed: The authors may refer more to self-supervised medical image segmentation, e.g.,[1] and [2]. [1] Tang, Yucheng, et al. "Self-supervised pre-training of swin transformers for 3d medical image analysis." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. [2] Zhou, Hong-Yu, et al. "A unified visual information preservation framework for self-supervised pre-training in medical image analysis." IEEE Transactions on Pattern Analysis and Machine Intelligence 45.7 (2023): 8020-8035. Other Strengths And Weaknesses: Strengths: - The paper extends density-based unsupervised visual anomaly segmentation (UVAS) to 3D CT pathology segmentation, an area with limited prior work. - Trains on 30,000+ unlabeled CT scans and evaluates on 1,820 labeled scans from four pathology datasets, providing strong empirical validation. Weaknesses: - While the paper compares against unsupervised baselines, it does not include a fully supervised segmentation model (e.g., UNet trained on labeled data). 
This makes it difficult to quantify how much performance loss occurs due to using UVAS instead of a supervised approach. - The model is trained on high-resolution 3D CT scans, but the paper does not discuss computational cost or inference speed, which are critical for clinical deployment. - The authors may try to improve the writing of this paper, especially for the method sections (e.g., notations and clarity of descriptions of concepts). Other Comments Or Suggestions: NA Questions For Authors: Please try to address the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer *hWnx*, thank you for taking the time to review our submission and for providing thoughtful and valuable feedback. Your suggestions regarding our evaluation design were especially valuable, and we have done our best to accomplish them, as well as address your other concerns. **Inclusion of Dice scores** To provide more interpretable metrics **we have updated Table 2 (https://pdfhost.io/v/n7Bt5c7zJE_table_2) and Tables 3, 4 (https://pdfhost.io/v/2RLGCYvMgy_tables_3_4) to include Dice scores and voxel-level AUROCs**. In order to improve tables’ readability, we decided to move AUROC / AUPRO up to 0.3 FPR metrics to the Appendix. Initially, we omitted Dice scores due to a mismatch between our problem statement (segmenting all pathologies) and the available ground truth masks (limited to specific pathologies — lung cancer in LIDC, pneumonia in MIDRC, liver tumors in LiTS, kidney tumors in KiTS). Note that this discrepancy leads to an **underestimation** of unsupervised models’ Dice scores, as many test images indeed contain additional anomalies detected by our model but not annotated in the ground truth masks (see Figure 2 and more examples at https://pdfhost.io/v/PNygYvbsYn_mismatch which we will add in the Appendix). **Comparison with supervised baseline** **We have added comparison with a Supervised UNet (Table 2, https://pdfhost.io/v/n7Bt5c7zJE_table_2)**, trained via cross validation on each dataset (LIDC, MIDRC, LiTS and KiTS). Since the supervised model segments only the target pathology, its metrics are naturally higher than those of unsupervised models. For a fair comparison, **we distilled Screener into UNet and fine-tuned it in a supervised manner (Fine-tuned Screener)**. Namely, at the distillation step, we pre-train UNet (without final sigmoid activation) to predict Screener’s anomaly score maps (using simple MSE loss). 
Then, at the supervised fine-tuning (SFT) stage, we randomly re-initialize the pre-trained UNet’s last conv layer and train it on each dataset similarly to Supervised UNet. **Table 2 (https://pdfhost.io/v/n7Bt5c7zJE_table_2) shows that Fine-tuned Screener consistently outperforms Supervised UNet**, especially on lung cancer segmentation. To demonstrate the significance of the latter result we also have drawn plots (https://pdfhost.io/v/yeejsw47bF_train_sizes) comparing Supervised UNet and Fine-tuned screener trained on datasets’ subsamples of different sizes (10, 20 or 40 images per train fold). Note that **when training on 20 images with annotated lung cancer, Fine-tuned Screener achieves 2 times higher Dice scores than Supervised UNet**. **Inference speed & computational cost** Our original Screener model has 133M parameters, patch-based inference for a whole CT volume (described in Section 3.3) on NVIDIA RTX H100 GPU requires 4 Gb of GPU memory and takes about 5-10 seconds depending on the number of slices. Also, as discussed above, Screener can be distilled into the standard UNet model (we did not observe significant changes in quality metrics for the distilled model). Thus, its inference costs can be the same as those of UNet. We use nnUNet with 350M parameters, its patch-based inference requires **5 Gb of GPU memory** and takes **0.5-1.0 seconds**. We will include this information in the Implementation details section. **Referring to existing SSL models for CT** We will add the suggested references in Section 5. **Improving paper writing** We will improve paper writing, for example, use consistent methods naming in Sections 3 and 4, use consistent math notation, improve writing in Section 5, etc. We sincerely hope these revisions address most of your concerns. Please let us know if further clarifications are needed to reconsider the score.
Summary: This paper proposed Screener, a self-supervised anomaly segmentation framework for volumetric CT images. Screener is built upon dense self-supervised learning and a density-based anomaly segmentation framework. Specifically, it utilizes dense pixel-wise self-supervised learning (i.e., VICReg) to pretrain two encoders serving as descriptor and condition models. A density model (Gaussian or normalizing flow) takes the joint embedding to estimate and assign pixel-wise anomaly scores. The model was pretrained on a large-scale set of 30k CT scans, was evaluated on four different CT datasets, and outperformed other baseline methods. Claims And Evidence: The claims of contribution are mostly supported by convincing evidence. However, the value of the conditioning variables is arguable, as it does not affect the performance when using a normalizing flow as the density model. Methods And Evaluation Criteria: The methods and evaluation criteria make sense. Theoretical Claims: There is no proof of any theoretical claims. Experimental Designs Or Analyses: The reviewer found the experimental designs not comprehensive enough. Although the authors listed a few representative methods from different perspectives (i.e., synthetic anomalies, recon-based, density-based for natural images, and domain-specific medical unsupervised anomaly localization), the current manuscript misses a few of the most recent studies [1-5]. For example, f-AnoGAN, a 2019 baseline, is the only one specifically designed for medical images in the experiments. This incomplete baseline comparison weakens the convincingness of the results. [1]: Pinaya, Walter HL, et al. "Unsupervised brain imaging 3D anomaly detection and segmentation with transformers." Medical Image Analysis 79 (2022): 102475. [2]: Liu, Zhikang, et al. "Simplenet: A simple network for image anomaly detection and localization." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. [3]: Iqbal, Hasan, et al. 
"Unsupervised anomaly detection in medical images using masked diffusion model." International Workshop on Machine Learning in Medical Imaging. Cham: Springer Nature Switzerland, 2023. [4]: Zhao, Yuzhong, Qiaoqiao Ding, and Xiaoqun Zhang. "AE-FLOW: Autoencoders with normalizing flows for medical images anomaly detection." The Eleventh International Conference on Learning Representations. 2023. [5]: Zou, Yang, et al. "Spot-the-difference self-supervised pre-training for anomaly detection and segmentation." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. Supplementary Material: Yes, the reviewer has gone through the supplementary material. Relation To Broader Scientific Literature: This paper explores using self-supervised learning to enhance the density-based anomaly segmentation framework, focusing on medical images. It is related to previous studies focusing on 3D medical anomaly segmentation, density-based anomaly segmentation, and self-supervised learning for anomaly segmentation. Essential References Not Discussed: Recent medical anomaly segmentation studies [1,3, 4] focus on the same problem (medical anomaly segmentation), and are more recent developments compared to f-AnoGAN discussed in the manuscript. [4] also utilizes normalizing flow, the same as the proposed method. [2, 5] are for the natural images but represent the most recent development. [5] also explore utilizing self-supervised pretraining for anomaly segmentation. [1]: Pinaya, Walter HL, et al. "Unsupervised brain imaging 3D anomaly detection and segmentation with transformers." Medical Image Analysis 79 (2022): 102475. [2]: Liu, Zhikang, et al. "Simplenet: A simple network for image anomaly detection and localization." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. [3]: Iqbal, Hasan, et al. "Unsupervised anomaly detection in medical images using masked diffusion model." 
International Workshop on Machine Learning in Medical Imaging. Cham: Springer Nature Switzerland, 2023. [4]: Zhao, Yuzhong, Qiaoqiao Ding, and Xiaoqun Zhang. "AE-FLOW: Autoencoders with normalizing flows for medical images anomaly detection." The Eleventh International Conference on Learning Representations. 2023. [5]: Zou, Yang, et al. "Spot-the-difference self-supervised pre-training for anomaly detection and segmentation." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. Other Strengths And Weaknesses: The overall results look superior to the listed baseline methods, although a few more recent and relevant studies are not included in the experiments. The presentation could be improved and the logical flow can be optimized (e.g., move related work to an earlier position). Other Comments Or Suggestions: 1. The visualization of Figure 3 should be corrected. The color bar for other methods looks very strange; for example, MSFlow has a color bar from 0 to 8000. Questions For Authors: Why not consider other self-supervised pretext tasks besides SimCLR and VICReg, such as masked autoencoding? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer *X8PP*, thank you for your careful review and constructive feedback. We appreciate your acknowledgment of our contributions and have carefully addressed your suggestions below. **Inclusion of recent baselines** We recognize the value of benchmarking our approach against recent state-of-the-art methods. Below, we address each of the provided references: - **Masked Diffusion Model [3]**: We are now re-implementing this method for CT images and **will include it in Table 2 during the discussion period**. - **Simplenet [2]**: We are also implementing Simplenet (inspired by noise-contrastive methods for unnormalized density estimation) on top of our descriptor model and **will add it to Table 3** for comparison with our flow-based and gaussian density models. - **Transformer-based method [1]**: While we recognize the importance of [1], its implementation (requiring VQ-VAE pretraining, transformer-based autoregressive modeling of VQ-VAE latent codes, and likelihood threshold tuning) is too complex to complete within the rebuttal period. We will, however, discuss it in Section 5. The other two methods are not directly applicable to our setup: - **AE-FLOW [4]**: We note that [4] focuses on *image-level* anomaly detection, whereas our work targets *pixel-level* anomaly segmentation. - **Spot-the-difference [5]**: Though this self-supervised strategy (penalizing features’ sensitivity to synthetic anomalies) is relevant, [5] evaluates it on *supervised* anomaly detection. We will discuss its potential applicability to our dense SSL framework and unsupervised anomaly segmentation in Section 6 as future work. **MSFlow colorbar in Figure 3** Thank you for catching this issue. Following your remark, we have fixed our MSFlow implementation and updated its presentation in Table 2 (https://pdfhost.io/v/n7Bt5c7zJE_table_2) and Figure 3 (https://pdfhost.io/v/NHDraEfxhJ_main_results). 
**Alternative SSL strategies for descriptor and condition models** As described in Section 3, we employed dense joint embedding SSL methods because they offer a unified framework for training both descriptor and condition models and allow us to control the information content of the learned features by changing the augmentations. Namely, augmentations preserving local content ensure that the descriptor model captures pathology-aware features, while random masking results in the pathology-ignorant condition model. We agree that exploring other SSL strategies (e.g., masked autoencoding) is promising and will mention this direction in Section 6. **Positioning of Related Work section** Our current structure aims to balance clarity and emphasis on novelty: - **Section 2 (Background):** Discusses related works directly inspiring our method. - **Section 5 (Related Work):** Provides a broader review of UVAS families. We believe this flow better highlights our methodological contributions. We hope that our revisions will address your main concerns. If there are any other areas where you feel further improvements can be made, we would be grateful for your additional feedback. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for providing the detailed rebuttal and additional experiments. After reading everyone's comments and the corresponding rebuttal, I think the paper has been improved. Therefore, I will raise the score to 4: Accept. --- Reply to Comment 1.1.1: Comment: Thank you for your patience and for the opportunity to expand our experiments. Following your suggestion, we have now included two additional recent baselines. **Patched Diffusion Model [1]** **We have included Patched Diffusion Model [1] in Table 2 (https://pdfhost.io/v/5Gh3jLth8G_table_2, third row).** Patched Diffusion Model is a reconstruction-based method (see https://github.com/FinnBehrendt/patched-Diffusion-Models-UAD for illustration). 
During training, it cuts out patches from images and trains a diffusion model to reconstruct them based on the surrounding context. During inference, it splits an input image into a grid of patches. The diffusion model reconstructs each patch from its noised version based on the remaining clean patches. The reconstructed patches are aggregated into a full image reconstruction, and anomaly scores are obtained as pixel-wise reconstruction errors. Note that, if the training dataset contains pathologies, the diffusion model can learn to reconstruct them as well as healthy regions, resulting in false negative errors. Indeed, we empirically observe this behaviour (**see qualitative results at https://pdfhost.io/v/gGueNFpgUv_patched_diffusion_model**). *Why did we not implement Masked Diffusion Model [2]?* Initially, we planned to include [2] (https://arxiv.org/abs/2305.19867). However, its official implementation (https://github.com/hasan1292/mDDPM) includes critical pipeline components not described in the paper. Upon closer inspection, we observed that [2] heavily relies on [1] — its Masking Block is applied alongside [1]’s Cut Out during training, and it adopts the same patch-wise pipeline as [1] during inference. Given that these key aspects are not described in [2], we prioritized experiments with [1], as it offers a clearer alignment between the paper and the code. **Simplenet [3]** **We include experiments with Simplenet [3] in Table 3 (https://pdfhost.io/v/GBHyByNHS4_table_3)**, as it can be used as an alternative to the Gaussian and flow-based density models in our framework. The main idea of Simplenet is to train a discriminator $d$ (an MLP) to distinguish between descriptors $y$ and their noisy counterparts $y^{\mathrm{noisy}} = y + \varepsilon$, $\varepsilon \sim \mathcal{N}(0, I)$. [3] also uses a so-called adaptor $a$ — a fully-connected layer applied to the descriptors as a trainable pre-processing step before adding noise and applying the discriminator.
Both the adaptor and the discriminator are trained to minimize the following objective: $$ \mathbb{E}_{y, \varepsilon}[\max(\alpha + d(a(y)), 0) + \max(\alpha - d(a(y) + \varepsilon), 0)] \to \min, \quad (1) $$ i.e., it enforces $d(a(y))$ to be less than $-\alpha$ and $d(a(y) + \varepsilon)$ to be greater than $\alpha$ for some margin $\alpha > 0$. At the inference stage, the discriminator’s pixel-wise predictions $d(a(y))$ are used as anomaly scores. The original Simplenet yielded poor results in our experiments: the training loss quickly decreased almost to zero, while validation AUROC remained about 0.5 (https://pdfhost.io/v/xh488GHMZA_simplenet). The reason was that the adaptor simplified the task for the discriminator, and the latter did not learn to differentiate between normal and abnormal descriptors. Therefore, we omitted the adaptor, and validation AUROC increased up to 0.85 (https://pdfhost.io/v/xh488GHMZA_simplenet). However, anomaly score maps looked overconfident (https://pdfhost.io/v/cRWURz5RXd_simplenet_anomaly_maps). We thought that the original training objective $(1)$ was too restrictive and decided to replace it with a standard binary cross-entropy loss (BCE): $$ \mathbb{E}_{y, \varepsilon}[-\log \frac{\exp(d(y + \varepsilon))}{\exp(d(y + \varepsilon)) + \exp(d(y))}] \to \min $$ With this objective, Simplenet achieves validation AUROC 0.9 (https://pdfhost.io/v/xh488GHMZA_simplenet) and produces continuous anomaly maps (https://pdfhost.io/v/cRWURz5RXd_simplenet_anomaly_maps). We also trained conditional Simplenets by feeding different conditioning variables to the discriminator as additional input. We provide results for both unconditional and conditional Simplenet models with BCE objectives in Table 3 (https://pdfhost.io/v/GBHyByNHS4_table_3). Their results are inferior to those of the flow-based models, probably because the latter explicitly estimate the descriptors’ density, which can be more appropriate for anomaly scoring. **References** [1] Behrendt, Finn, et al. 
"Patched diffusion models for unsupervised anomaly detection in brain MRI." Medical Imaging with Deep Learning. PMLR, 2024. [2] Iqbal, Hasan, et al. "Unsupervised anomaly detection in medical images using masked diffusion model." International Workshop on Machine Learning in Medical Imaging. Cham: Springer Nature Switzerland, 2023. [3] Liu, Zhikang, et al. "Simplenet: A simple network for image anomaly detection and localization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
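The pairwise BCE objective used for Simplenet above (a negative log-softmax over the pair of logits for a noisy and a clean descriptor) can be sketched with a toy linear discriminator. This is a minimal numpy sketch under assumed toy shapes and a hypothetical weight vector `w`, not the experimental code.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_bce_loss(d, y, eps):
    """-log( exp(d(y+eps)) / (exp(d(y+eps)) + exp(d(y))) ), averaged over samples.

    d: discriminator mapping descriptors (N, D) -> logits (N,);
    y: clean descriptors; eps: N(0, I) noise of the same shape.
    """
    clean, noisy = d(y), d(y + eps)
    # Subtract the max of each logit pair for numerical stability.
    m = np.maximum(clean, noisy)
    return np.mean((m - noisy) + np.log(np.exp(noisy - m) + np.exp(clean - m)))

# Toy linear "discriminator" with a hypothetical weight vector w.
w = rng.normal(size=16)
d = lambda feats: feats @ w

y = rng.normal(size=(128, 16))    # stand-in descriptors
eps = rng.normal(size=y.shape)    # N(0, I) noise, as in Simplenet
loss = pairwise_bce_loss(d, y, eps)
print(f"pairwise BCE: {loss:.4f}")
```

Minimizing this loss pushes the noisy logit above the clean one for each pair, so at inference the logit itself can serve as an anomaly score.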
Summary: The paper presents Screener, a framework based on unsupervised visual anomaly segmentation (UVAS) for 3D medical scans. The proposed model aims to reduce the dependency on ground truth (GT) annotations. It is trained on a large dataset of 30K CT scans and evaluated on 1.8K scans, covering a variety of pathological segmentations. The authors claim that the model effectively addresses the challenges of anomaly segmentation without requiring manual annotations. Claims And Evidence: The authors claim that their method is designed for 3D medical image segmentation. However, this claim appears overstated based on the current evaluation. The framework has only been tested on CT scans, without validation on other key modalities such as MRI. Furthermore, while the authors repeatedly reference segmentation, the model has not been explicitly tested on segmentation tasks, nor are segmentation-specific metrics reported. This omission weakens the claim that the method is applicable to segmentation. Methods And Evaluation Criteria: The proposed evaluation methodology is incomplete. To substantiate the generalizability of the approach, the method should be evaluated on additional imaging modalities, such as MRI, to ensure its robustness across different medical datasets. Furthermore, the segmentation task itself is not explicitly evaluated, which is necessary given the claims in the title and introduction. Additionally, Table 2 lacks a sufficient number of comparative baselines, limiting the ability to assess the proposed method's relative performance. Theoretical Claims: I reviewed the theoretical claims presented in the paper, and they appear to be correct. 
Experimental Designs Or Analyses: While the experiments are generally well-executed, there are critical gaps in evaluation: 1- Lack of Segmentation Task Evaluation: Given the frequent references to segmentation, the paper should explicitly evaluate segmentation performance and report relevant metrics (e.g., Dice score, IoU). 2- Need for More Comprehensive Analysis: Additional metrics and statistical tests should be included to validate the significance of the results. 3- Lack of Robustness Testing: The method is tested on high-quality CT scans, but its performance on noisy or degraded scans remains unclear. Evaluating robustness to noise, artifacts, and variations in acquisition settings would strengthen the study. Supplementary Material: I reviewed the supplementary material. Relation To Broader Scientific Literature: The paper builds upon prior work in unsupervised visual anomaly segmentation (UVAS) for 3D CT scans. The authors review density-based approaches and propose leveraging dense self-supervised learning (SSL) techniques to pre-train feature maps, which are then used in a density-based UVAS framework. This approach is well motivated and aligns with recent trends in self-supervised representation learning for medical imaging. Essential References Not Discussed: The paper is missing references to key related works that are essential for contextualizing its contributions. For instance, the following papers could be discussed: VISA-FSS: A volume-informed self-supervised approach for few-shot 3D segmentation, MICCAI 2023. Transformer-based models for unsupervised anomaly segmentation in brain MR images, MICCAI Workshop 2022. These studies provide valuable insights into self-supervised learning for medical image segmentation and anomaly detection, which are directly relevant to the proposed method. Other Strengths And Weaknesses: The paper tackles an important problem and introduces an interesting approach. 
However, several issues need to be addressed: 1- Limited Scope of Evaluation: The model is tested exclusively on CT scans, and there is no exploration of other medical imaging modalities (e.g., MRI). 2- Potential Bias in Dataset: The qualitative results (Fig. 1) suggest that the scans used are high-quality, but robustness to noisy or low-quality scans is not evaluated. 3- Lack of Comparison with Fully Supervised Methods: The method should be compared with fully supervised segmentation models (e.g., UNet) to assess its performance in a more practical clinical setting. Other Comments Or Suggestions: - Expand the Literature Review: The authors should discuss existing methods that incorporate registration-based approaches for segmentation, as well as the strengths and weaknesses of UVAS compared to other self-supervised pretraining techniques. - Clarify Key Claims: The paper should explicitly differentiate between anomaly detection and segmentation to avoid overstating its contributions. Questions For Authors: 1- How does the proposed method generalize to other medical imaging modalities, such as MRI? 2- Since segmentation is frequently mentioned in the paper, why is segmentation performance not explicitly evaluated with standard metrics? 3- How does the proposed model compare to fully supervised segmentation methods, such as UNet or nnU-Net? 4- What is the method’s robustness to low-quality scans, noise, and artifacts? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer *pobP*, thank you for your thoughtful review of our submission and for the time you dedicated to providing such detailed and relevant feedback. Your critical comments on our experimental design were particularly valuable, and we have done our best to address them thoroughly in the responses below. **Inclusion of segmentation metrics** To provide more relevant and interpretable metrics **we have updated Table 2 (https://pdfhost.io/v/n7Bt5c7zJE_table_2) and Tables 3, 4 (https://pdfhost.io/v/2RLGCYvMgy_tables_3_4) to include Dice scores** and voxel-level AUROCs. AUROC / AUPRO up to 0.3 FPR metrics have been moved to the Appendix. Initially, we omitted Dice scores due to a mismatch between our problem statement (segmenting all pathologies) and the available ground truth masks (limited to specific pathologies — lung cancer in LIDC, pneumonia in MIDRC, liver tumors in LiTS, kidney tumors in KiTS). Note that this discrepancy leads to an **underestimation** of unsupervised models’ Dice scores, as many test images indeed contain additional anomalies detected by our model but not annotated in the ground truth masks (see Figure 2 and more examples at https://pdfhost.io/v/PNygYvbsYn_mismatch which we will add in the Appendix). **Comparison with supervised segmentation** **We have added comparison with a Supervised UNet (Table 2, https://pdfhost.io/v/n7Bt5c7zJE_table_2)**, trained via cross validation on each dataset (LIDC, MIDRC, LiTS and KiTS). Since the supervised model segments only the target pathology, its metrics are naturally higher than those of unsupervised models. For a fair comparison, **we distilled Screener into UNet and fine-tuned it in a supervised manner (Fine-tuned Screener)**. Namely, at the distillation step, we pre-train UNet (without final sigmoid activation) to predict Screener’s anomaly score maps (using simple MSE loss). 
Then, at the supervised fine-tuning stage, we randomly re-initialize the pre-trained UNet’s last conv layer and train it on each dataset similarly to Supervised UNet. **Table 2 (https://pdfhost.io/v/n7Bt5c7zJE_table_2) shows that Fine-tuned Screener consistently outperforms Supervised UNet**. To further demonstrate the significance of the latter result, we have also drawn plots (https://pdfhost.io/v/yeejsw47bF_train_sizes) comparing Supervised UNet and Fine-tuned Screener trained on datasets’ subsamples of different sizes (10, 20, or 40 images). Note that **when training on 20 images with annotated lung cancer, Fine-tuned Screener achieves 2 times higher Dice scores than Supervised UNet**. **Robustness testing** We evaluated Screener on LIDC subsets with varying acquisition settings: - **Low-dose (<200 mA) vs. high-dose CTs:** Dice score 0.04±0.12 (low-dose) vs. 0.06±0.13 (high-dose), AUROC 0.94 (low-dose) vs. 0.97 (high-dose) (see ROC curves at https://pdfhost.io/v/8AzVC4JVTU_high_vs_low) - **With vs. without contrast agent:** Dice score 0.05±0.13 (with contrast) vs. 0.04±0.13 (without contrast), AUROC 0.96 (with contrast) vs. 0.96 (without contrast) (see ROC curves at https://pdfhost.io/v/8AW5PJr5kQ_contrast_vs_noncontrast) These results suggest **slightly better performance on high-dose CTs** and **robustness to contrast agents**. However, scatter plots (https://pdfhost.io/v/HLNkq7pd7W_doses) reveal minimal dependence on dose levels. Additionally, we provide examples of Screener’s anomaly maps for both a low-dose scan and an **image with artifacts** (https://pdfhost.io/v/3fBJ42LqH4_robustness). This analysis will be included in the Appendix. **Experiments on MRI** While our methodology is theoretically applicable to MRI images, empirical validation would require obtaining official access to MRI datasets and time-consuming experiments with our model and all the baselines. Unfortunately, we will not manage to accomplish this during the rebuttal period. 
Given our focus on CT images (one of the project goals was to retrieve images with different abnormalities from our large-scale in-house CT database), we propose renaming the paper to: *“Screener: Self-supervised Pathology Segmentation Model for Medical CT Images”* — pending your approval. **Missing references** We will discuss the suggested related works (VISA-FSS, Transformer-based models for unsupervised anomaly segmentation) in Section 5. We hope these revisions address your main concerns. Please let us know if further clarifications are needed to reconsider the score.
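For reference, the Dice score added to the updated tables can be computed from binary masks as below. This is a generic sketch of the standard metric, not the authors' exact evaluation code; the toy 1D masks are illustrative.

```python
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Toy 1D "masks": the prediction overlaps half of the ground-truth region.
gt = np.array([0, 1, 1, 1, 1, 0, 0, 0])
pred = np.array([0, 0, 1, 1, 0, 0, 1, 0])
print(dice_score(pred, gt))  # 2*2 / (3 + 4) ≈ 0.571
```

Note that, as the rebuttal points out, true positives outside the annotated target pathology count against this metric, which underestimates unsupervised models that flag unannotated anomalies.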
Revisiting Neural Networks for Few-Shot Learning: A Zero-Cost NAS Perspective
Accept (poster)
Summary: This paper proposes an entropy-based expressivity metric within a framework named IBFS (Information Bottleneck-driven Few-shot Neural Architecture Search) for training-free neural architecture search (training-free NAS), which uses Jacobian eigenvalues at initialization to estimate model performance. The authors claim that the motivation for such a formulation is inspired by the theorem of global convergence of model-agnostic meta-learning (MAML) and information bottleneck (IB) theories. The proposed method IBFS is evaluated on NAS-Bench-201, mini-ImageNet 5-way, tiered-ImageNet 5-way, and ImageNet1k (i.e., DARTS search space). Ablation studies about the influence of $\theta$ are also reported. Claims And Evidence: This paper has **severe issues regarding clarity**, which significantly impact readability and make it unclear whether the proposed claims have been fully justified. Specifically: ### 1. Claims about global convergence of MAML (i.e., Theorem 4.1). The paper states > ... we derive that the global convergence of MAML can be guaranteed by only considering the first-order approximation of loss landscape, which transfers the problem of crafting a specialized architecture for FSL to find a suitable proxy ... However, it is **surprising** that no proof or reference (if it is a restatement) is given for Theorem 4.1, making it almost impossible to verify whether this claim is valid. Furthermore, it is unclear how it is connected to the flatness of the loss landscape, as nothing in Theorem 4.1 actually discusses loss landscape curvature. In fact, MAML is a first-order $\ell_{inner}$ convergence problem, but Theorem 4.1 does not explicitly distinguish between first-order and second-order effects. In addition, the formulation of Theorem 4.1 itself is problematic. For example, many terms are used without definition (e.g., $\eta_0, \sigma_{\text{min}}(\Phi), \xi_{\text{max}}(\ell_{\infty})$). 
Even with defined terms like $\ell_{\text{inner}} = \nabla_{\theta} F_t(\hat{X}^t, X^t, Y^t)$, it is unclear what $\nabla_{\theta}$ applies to. Is it the gradient of a loss function or some feature transformation? And there is no explicit definition of how $\ell_{inner}$ relates to the meta-learning optimization problem in MAML. Overall, it is **amazing** to see such a poor-quality theorem existing in a manuscript submitted to ICML. ### 2. Problematic derivation between IB theory and the proposed expressivity proxy While the authors claim their expressivity proxy is IB-driven and do provide a derivation in Sec 4.1, the derivation seems problematic. Specifically: - **Equation (8) does not follow directly from Eq. (7).** In standard IB, stationarity gives $p(r \mid x) \propto p(r) \exp \left[ -\beta \mathbb{E}_{p(y|x)} (-\log p(y \mid r)) \right]$. In the paper, the derivation inexplicably substitutes an _input entropy_ $H(X)$ in place of the typical “relevance to $y$.” - **Label dependence disappears.** The “$\sum_y p(y \mid x) \log \dots$” part in Eq. (7) is crucial to measuring how well $r$ captures label information. In the next step, it is replaced by $\exp(-\beta H(X))$. That is not standard and is never shown in detail. - **Definition of NN expressivity is an inequality**. It is confusing how a definition can contain an inequality in Eq. (10). I suspect the authors try to switch $x$ in Eq. (8) to $\varsigma$ in Eq. (10) so that the formulation of their NN expressivity is "IB theory-driven". They might want to say > "Because $p(r \mid x)$ is bounded by an exponential factor, therefore the resulting entropy over the eigenvalues of the network’s Jacobian is also bounded by a similar exponential factor." But that chain of logic is never spelled out (and usually, one would need to connect “$r$” to “Jacobians” to “label information” in detail). Even so, this logic itself is problematic. 
It just shows the authors are using the *same bounding pattern* for two different objects, without providing a rigorous link that ensures we are applying the same bound to the right distributions. In addition, $p$ in Eq. (10) is also not defined. - **Similar to Theorem 4.1, key constants and notations** $(p(\tilde{r}), \lambda(x), H, \text{etc.})$ **are either not defined or simply introduced as a black box.** Overall, the entire Section 4.1 plus Theorem 4.1 require a major revision for clarity, and the current version is *far away* from the acceptance bar of ICML. Methods And Evaluation Criteria: In addition to the problems mentioned in Claims And Evidence, I have three more concerns about the methods. - Utilizing a Jacobian-based metric is not new in training-free NAS, as the authors themselves mention NASWOT, which uses the Jacobian covariance of activation layers to score the architectures. However, the authors didn't discuss the relationship between their proposed NN expressivity and NASWOT, nor why they think the eigenvalues would outperform. - The paper mentions Algorithm 1, which contains IBFS’s implementation details. However, I didn't find any reference to it in the main text. I would like to request the authors to point it out. Also, it seems to have no relationship with the NN expressivity. - The empirical performance seems heavily influenced by a hyperparameter $\theta$. However, I didn't find a definition of it in the entire paper. Theoretical Claims: Yes, as stated in Claims And Evidence, the entire theoretical derivation is problematic and requires significant revision. Experimental Designs Or Analyses: Due to the clarity issues, I am unable to assess the validity of the experimental designs, as I don't know the definition of $\theta$. 
However, given that it seems to significantly influence the results given in Table 3, I wonder how to determine a suitable $\theta$ if we have no prior information about the tasks/benchmarks we are testing on. If determining $\theta$ requires trying specific values and measuring the corresponding architectures' performance, this trial and error will compromise IBFS's low-search-cost benefit. In addition, the baselines used in this paper are somewhat outdated. Training-free NAS has developed a lot since NASWOT, and there are many other proxies/metrics available for comparison. I would suggest the authors refer to more recent results. A non-exhaustive list of training-free metrics until 2022 can be found in NAS-Bench-Suite-Zero [1]. [1] NAS-Bench-Suite-Zero: Accelerating Research on Zero Cost Proxies, NeurIPS Datasets and Benchmarks Track 2022. Supplementary Material: I did review all the appendices. I didn't review the code. Relation To Broader Scientific Literature: The paper does not cite or compare against newer zero-cost NAS methods, making its benchmark evaluation incomplete. See Experimental Designs Or Analyses. Essential References Not Discussed: As mentioned in Methods And Evaluation Criteria, NASWOT is cited, but the paper does not carefully discuss its relationship with the proposed NN expressivity, given they are both extracted from Jacobian metrics. Other Strengths And Weaknesses: Strengths: - Introduces a new training-free metric, which is valuable and relevant to the training-free NAS community - Empirical performance is strong, especially in NAS-Bench-201 and the DARTS search space (although the choice of $\theta$ might give the method an unfair advantage) Weaknesses: - Severe clarity issues **[major issue that suggests a clear rejection for this paper]**: Undefined notation, missing links between theory and experiments, and inconsistent references (e.g., Algorithm 1 is never cited in the main text) make the paper difficult to follow. 
- Theorem 4.1 is unverifiable: The paper states but does not prove its key theoretical result, raising concerns about its validity. - Key hyperparameter ($\theta$) is undefined: The ablation study shows that $\theta$ significantly impacts performance, but the paper never explains its theoretical role. - Experimental evaluation is outdated: More recent baselines can be included. - Weak justification of novelty: The paper does not sufficiently explain why entropy-based Jacobian measures are better than existing alternatives like Jacobian covariance (NASWOT). Other Comments Or Suggestions: I strongly encourage the authors to carefully revise Theorem 4.1 and Section 4.1. While some level of logical gap in derivations may be understandable, the omission of key term definitions is completely unacceptable for a submission to ICML. Such omissions make it impossible for readers to properly assess the claims and significantly hinder the paper’s clarity and rigor. Questions For Authors: Please refer to my previous comments. I do not have additional questions at this time. Unless the authors can demonstrate that I have overlooked key parts of the derivation, my evaluation of the current version of the paper is unlikely to change. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for the helpful and insightful review, which is very helpful for us to further improve this paper. Next, we answer your questions one by one, and we hope our responses will improve your assessment of the paper. **Q1**: Concern about Theorem 4.1. **A1**: Many thanks for your comments! We regret the lack of clarity in Theorem 4.1 and will provide a detailed proof and definitions for terms such as $\ell_{\text{inner}}$, $\nabla_{\theta}$, and $\Phi$. However, we must respectfully correct a misunderstanding: MAML is not a "first-order $\ell_{\text{inner}}$ convergence problem" as stated. We will provide detailed definitions for $\theta$, $H$, etc., in the final version. $\theta$ is a hyperparameter, and $IB_{proxy} = -\theta \sum_{k=1}^{N} p \log p$. Proof. - **MAML Update Rule**: The inner-loop update in MAML is given by $W_t' = W^t - \alpha \nabla_{W^t} \mathcal{L}(D_t^{\mathrm{train}}, W^t)$, where $\alpha$ is the learning rate for the inner loop. - **Meta-Update**: The meta-update step updates the parameters by $W^{t+1} = W^t - \eta \sum_t \nabla_{W^t} \mathcal{L}(D_t^{\mathrm{test}}, W_t')$, where $\eta$ is the learning rate for the outer loop. - **Hessian Approximation**: The second-order term is approximated by the Hessian matrix $H = \nabla^2_{W^t} \mathcal{L}(D_t^{\mathrm{train}}, W^t)$, and the meta-update is simplified using the transformation $\Phi = I - \alpha H$. - **Loss Function**: The loss function is defined as $\ell(W^t) = \frac{1}{2} \|\hat{Y}^t - Y^t\|_2^2$. The gradient-based updates for both the inner and outer loops are correctly formulated. The inner loop updates the weights based on the training set $D_t^{\mathrm{train}}$, and the outer-loop meta-update step minimizes the loss over the test set $D_t^{\mathrm{test}}$. 
The use of the Hessian matrix approximation for second-order terms is standard in MAML, and the transformation $\Phi = I - \alpha H$ allows for easier analysis of the convergence behavior. The proof shows how the loss function $\ell(W^t)$ evolves through the meta-learning process. After applying gradient descent, the relationship for $\ell(W^{t+1})$ is given by $\ell(W^{t+1}) \leq \ell(W^t) - \eta \|\nabla_W \ell(W^t)\|^2$. The gradient is bounded as $\|\nabla_W \ell(W^t)\|^2 \geq \sigma_{\min}(\Phi)\, \ell(W^t)$, which implies that the loss function decreases at a rate determined by the smallest eigenvalue $\sigma_{\min}(\Phi)$ of the matrix $\Phi$. This yields the recurrence relation $\ell(W^{t+1}) \leq \left(1 - \eta\, \sigma_{\min}(\Phi)\right) \ell(W^t)$, from which we obtain $\ell(W^t) \leq \left(1 - \frac{\eta_0\, \sigma_{\min}(\Phi)}{3}\right)^{2t} R$. **Q2**: Concern about Experiments. **A2**: Many thanks for your comments! We provide newer works for comparison below, demonstrating that our method still achieves the best performance. 
| Method | Year | Cost (s) | C10 (val) | C10 (test) | C100 (val) | C100 (test) | Img (val) | Img (test) |
| --------------------------- | -------- | ------- | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |
| GradSign [6] | ICLR 2022 | 30.38 | - | 93.52 ± 0.19 | - | 70.57 ± 0.31 | - | 41.89 ± 0.69 |
| ZiCo [7] | ICLR 2023 | 6.2 | 93.50 ± 0.18 | - | 70.62 ± 0.26 | - | 42.04 ± 0.82 | - |
| IS-DARTS [8] | AAAI 2024 | 7200 | 91.55 ± 0.00 | 94.36 ± 0.00 | 73.49 ± 0.00 | 73.51 ± 0.00 | 46.37 ± 0.00 | 46.34 ± 0.00 |
| AZ-NAS [9] | CVPR 2024 | 0.71 | - | 93.53 ± 0.15 | - | 70.75 ± 0.48 | - | 45.43 ± 0.29 |
| SWAP [10] | ICLR 2024 | 4.7 | 87.31 ± 2.36 | 90.48 ± 0.94 | 65.92 ± 4.32 | 67.13 ± 1.83 | 33.85 ± 4.98 | 35.40 ± 3.96 |
| IBFS (ours, $\theta$=0) | | 3.36 | 89.58 ± 0.57 | 92.96 ± 0.81 | 69.17 ± 1.81 | 68.94 ± 1.41 | 41.30 ± 1.79 | 41.11 ± 1.51 |
| IBFS (ours, $\theta$=0.75) | | 3.82 | 91.55 ± 0.76 | 94.37 ± 0.34 | 73.31 ± 2.12 | 73.09 ± 2.08 | 45.59 ± 0.32 | 46.33 ± 1.27 |

**Q3**: Concern about novelty. **A3**: Many thanks! Our paper is not incremental: Reviewers DRyn (**Rating:** 3), xKah (**Rating:** 3), and masR (**Rating:** 3) all appreciate the novelty. While NASWOT uses Jacobian covariance, our entropy-based Jacobian metric introduces a novel perspective by leveraging entropy to capture expressivity, which we will justify with a detailed comparison in the revision. We believe this distinction offers a meaningful contribution to training-free NAS. **Q4**: Concern about Algorithm 1. **A4**: We will reference Algorithm 1 in the main text in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal, including the new experiments and the proof. However, my **Q2 under "Claims and Evidence" remains unresolved**. Even assuming Theorem 4.1 is valid, the connection between the convergence result and the proposed **NNexpressivity** proxy remains unclear. 
The proxy is actually **heuristically justified** using the “first-order sufficiency” theme, but there's **no formal derivation** showing that higher entropy actually implies better $\sigma_{min}(\Phi)$, faster convergence, or better generalization. Additionally, **many critical clarifications and definitions are deferred to the revision**. These include: 1. Definitions of key terms (a lot are missing; the authors are expected to proofread themselves instead of waiting for reviewers to point it out). 2. More comprehensive comparisons to NASWOT and other baselines. While I understand the rebuttal has character limits, this "to be seen in revisions" style of response makes me unable to re-evaluate this paper. These are not minor omissions. **Minor comments:** - The paper claims “MAML is a first-order ($\ell_{inner}$) convergence problem” (lines 252-253 on page 5), yet the rebuttal contradicts this. - The proxy is redefined as $-\theta\sum p \log p$, but if $\theta$ is applied uniformly, it should not affect architecture ranking — yet empirical results vary with $\theta$. The above adds further confusion. I appreciate the effort in the rebuttal and admit the idea is promising, but I found the rebuttal unsatisfactory: the **clarity issue was not solved**. Anyway, I believe a major revision for this paper is required. I would like to keep my recommendation for this paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 3Pd4, Thank you very much for your feedback and recognition, which is very helpful for us to further improve this paper. We are glad that our earlier response has addressed your main concern regarding the proof. Next, we further answer your questions one by one, and we hope this will improve your assessment of the paper. **Q1**: Concern about the connection between the convergence result and the proposed NNexpressivity proxy. **A1**: Many thanks for your comments! First, we need to clarify the goal of Theorem 4.1. 
As declared in Section 3, we have claimed that model-agnostic meta-learning (MAML) is a **second-order problem**. This is supported by several previous papers, e.g., "Model-agnostic meta-learning for fast adaptation of deep networks" and "Global Convergence of MAML and Theory-Inspired Neural Architecture Search for Few-Shot Learning". We cite and compare those papers in our paper. In addition, we have claimed that second-order MAML suffers from **extremely high computational costs**. As stated in AutoMeta and MetaNAS, if we aim to design a high-performance architecture for the FSL task using gradient-based NAS, previous methods (AutoMeta, MetaNAS) require over 100 GPU days for searching because MAML is a second-order problem (as reported by AutoMeta and MetaNAS). Second, previous training-free methods (e.g., NWOT) show remarkable performance on first-order problems (e.g., supervised learning); therefore, if we aim to design a new zero-cost method tailored for FSL that involves no training and eliminates a significant portion of the search cost for new tasks, we must derive that the global convergence of MAML can be guaranteed by considering only first-order optimization. In summary, the goal of Theorem 4.1 is to prove that MAML convergence can be guaranteed by considering only first-order optimization; this is orthogonal to our NNexpressivity proxy. If MAML can be guaranteed by considering only first-order optimization, one can design any proxy for FSL. This is a basic concept in the NAS and FSL fields. **Q2**: Concern about more comprehensive comparisons to NASWOT and other baselines. **A2**: Many thanks for your comments! First, we want to clarify the comparison to NASWOT. NWOT is designed for NAS in classification tasks, not for few-shot learning. While both approaches utilize the Jacobian matrix, the Jacobian serves as a fundamental representation of a neural network — using it is as generic as using 224×224 images as input. This is basic knowledge. 
Our method is orthogonal to NWOT; it starts from the second-order convergence challenges in MAML and derives a proxy for neural network expressivity based on IB theory. More importantly, our method achieves better performance in terms of accuracy and search costs than NWOT on NAS-Bench-201 and the mini-ImageNet and tiered-ImageNet datasets. Second, compared with other baselines on NAS-Bench-201 (as provided in our rebuttal), we can clearly see that our method achieves better performance than the latest methods, i.e., ZiCo (ICLR2023), IS-DARTS (AAAI2024), AZ-NAS (CVPR2024), SWAP (ICLR2024). We thank Reviewer 3Pd4 for the suggestion; in the final version, we will revise our paper. **Q3**: first-order \( $\ell_{\text{inner}}$ \) convergence. **A3**: Many thanks for your comments! In lines 252-253 on page 5, we indeed claim that MAML is a first-order \( $\ell_{\text{inner}}$ \) convergence problem; however, this is the result obtained from Theorem 4.1. In fact, MAML is a second-order \( $\ell_{\text{inner}}$ \) convergence problem. In our paper, we have claimed that Model-agnostic meta-learning (MAML) is a second-order problem in Section 3. This can be proved by several previous papers, i.e., "Model-agnostic meta-learning for fast adaptation of deep networks" and "Global Convergence of MAML and Theory-Inspired Neural Architecture Search for Few-Shot Learning". We cite and compare those papers in our paper.
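The reviewer's minor comment above (that a uniformly applied $\theta$ in $-\theta\sum p \log p$ should not affect architecture ranking) can be made concrete with a short sketch; the `entropy_proxy` helper and the two output distributions below are hypothetical illustrations, not from the paper:

```python
import math

# A uniform theta rescales every architecture's score by the same factor,
# so it cannot change the relative ranking of architectures.
def entropy_proxy(probs, theta):
    return -theta * sum(p * math.log(p) for p in probs if p > 0)

arch_a = [0.7, 0.2, 0.1]   # hypothetical output distribution, architecture A
arch_b = [0.4, 0.3, 0.3]   # hypothetical output distribution, architecture B

for theta in (0.25, 0.75):
    scores = {"A": entropy_proxy(arch_a, theta), "B": entropy_proxy(arch_b, theta)}
    ranking = sorted(scores, key=scores.get)
    print(theta, ranking)   # the ranking ['A', 'B'] is identical for every theta > 0
```

If empirical results nonetheless vary with $\theta$, that suggests $\theta$ enters the method non-uniformly (e.g., interacting with another term), which is exactly the ambiguity the reviewer flags.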
Summary: The paper proposes a novel framework called IBFS (Information Bottleneck-driven Few-shot Neural Architecture Search) for few-shot learning (FSL) tasks. IBFS leverages the Information Bottleneck (IB) theory to rank and select neural architectures without requiring any training, significantly reducing search costs. The authors demonstrate that the global convergence of Model-Agnostic Meta-Learning (MAML) can be guaranteed by considering only the first-order loss. Extensive experiments on NAS-Bench-201 and few-shot learning benchmarks show that IBFS achieves state-of-the-art performance with minimal search costs. ## Update After Rebuttal Thanks for the authors' response! I will keep the rating. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I did not check the correctness Experimental Designs Or Analyses: Yes. Supplementary Material: I went over it, though did not go through it carefully. Relation To Broader Scientific Literature: The key contribution of the paper is very helpful, especially for reducing the cost of meta-learning. Essential References Not Discussed: I am not in the field of NAS, so I am not sure about this. Other Strengths And Weaknesses: Strengths: 1. The motivation is clear and strong. 2. IBFS eliminates the need for training during the architecture search phase, reducing computational costs and making it highly efficient compared to traditional NAS methods. 3. Extensive experiments demonstrate the effectiveness of IBFS. Weaknesses: 1. Sensitivity: The paper mentions that IBFS is slightly sensitive to hyperparameters (e.g., $\theta$), which could affect its robustness and ease of use in different settings. 2. Scalability: Since IBFS reduces cost and works well on small datasets, it would be great to also demonstrate good performance on large datasets. 3. Model-agnosticism: It looks like most experiments are conducted with traditional CNNs.
How about the performance on Transformers or other tasks beyond image classification? Other Comments Or Suggestions: Line 665: should it be App. B? Questions For Authors: see the weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the helpful and insightful review, which is very helpful for us to further improve this paper. Next, we will answer your questions one by one, and we hope this will improve your acceptance of the paper. **Q1**: Sensitivity. **A1**: Many thanks for your comments! We will provide some evidence to prove the robustness of our method. In Table 3, we present the ablation study to evaluate the influence of $\theta$ on NAS-Bench-201. As shown, Kendall's Tau consistently increases with $\theta$ until $\theta=0.75$; at $\theta=0.75$, our method obtains the highest Kendall's Tau of 0.752 and the maximum accuracy on NAS-Bench-201. When $\theta \in [0.75, 1]$, Kendall's Tau slightly decreases; for example, at $\theta=0.9$, Kendall's Tau is 0.739. We can clearly observe that the impact of $\theta$ is extremely slight. More importantly, our method depends only on the **expressivity** of the neural networks. The search space consists of neural networks that are stable across different devices and scenarios; therefore, even when we deploy our method on different devices and scenarios, it still achieves optimal performance at $\theta=0.75$. **Q2**: Scalability. **A2**: Many thanks for your comments! Due to the page limit of the main text, we provide additional results on a **larger dataset (ImageNet1k)** in the **Appendix**. To be specific, the detailed experimental results on **ImageNet1k** are presented in **App. B**, which can be found in **line 715 - line 763** in our paper. As shown in **Table 4**, our IBFS method consistently outperforms the compared SOTA methods: it achieves the highest Top-1 accuracy of 76.6\%, with the fewest search cost of 0.0042 GPU-days. Compared with its peer competitors, our method achieves Top-1 accuracy 1.6\% higher than SWAP, 1.6\% higher than NASI-ADA, and 1.1\% higher than TENAS.
Notably, our method only searches on the small CIFAR-10 dataset and then generalizes well to the large ImageNet1k, which largely reduces search costs. **Q3**: Model-agnostic. **A3**: Many thanks for your comments! The goal of this paper is to design neural networks for few-shot learning tasks. Our paper follows MetaDiff (AAAI24) and MetaNTK-NAS (CVPR22), which only provide 5way-1shot and 5way-5shot settings of few-shot learning tasks on the mini-ImageNet and tiered-ImageNet datasets. As shown in Table 1, we can clearly see that all methods (e.g., MetaDiff) in FSL use CNNs as the backbone; therefore, we also use CNNs for a fair comparison. If we used transformers as the backbone in FSL, it would make the comparison unfair. In addition, to evaluate the effectiveness of the proposed method, we provide additional experiments on the NAS-Bench-201 search space on three datasets (i.e., CIFAR-10, CIFAR-100, and ImageNet-16-120). Therefore, we believe our empirical evaluation is sufficient and not beyond the scope of ICML 2025. To further scrutinize the generalizability of our method for **transformers**, we devoted substantial effort to IBFS, conducting additional experiments on AutoFormer [2] on ImageNet. The experimental setting is the same as TF-TAS-T [3]. We find that IBFS achieves the highest Top-1 accuracy. These empirical results demonstrate the strong generalizability of our method for transformer design.
| **NAS method** | Year | **Top-1 (%)** | **Search Cost** (GPU Days) | Model Type | Search Method |
| :--------------: | :------: | :-----------: | :-----------------------: | :---------: | :-----------: |
| ViT-Ti [1] | ICLR2021 | 74.5 | - | Transformer | Manual |
| AutoFormer-T [2] | ICCV2021 | 74.9 | 24 | Transformer | Evolution |
| TF-TAS-T [3] | CVPR2022 | 75.3 | 0.5 | Transformer | Training-free |
| ViTAS-C [4] | ECCV2022 | 74.7 | 32 | Transformer | Evolution |
| Auto-Prox [5] | AAAI2024 | 75.6 | 0.1 | Transformer | Training-free |
| **IBFS** | | 76.5 | 0.03 | CNNs | Training-free |

**Q4**: Line 665: should it be App. B? **A4**: Many thanks for your comments! This is a typo; thanks for pointing it out. Line 665 should read App. B, and we will correct it in the final revision.

[1] An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR 2021
[2] AutoFormer: Searching transformers for visual recognition. In ICCV 2021
[3] Training-free transformer architecture search. In CVPR 2022
[4] Vision transformer architecture search. In ECCV 2022
[5] Auto-Prox: Training-free vision transformer architecture search via automatic proxy discovery. In AAAI 2024
Summary: This paper mainly considers the case that NAS is applied in few-shot learning scenarios, where previous works mainly search for the optimal architecture from scratch or borrow the architecture from other tasks. The paper presents a novel framework called IBFS (Information Bottleneck-driven Few-shot Neural Architecture Search) that addresses two key limitations in conventional Neural Architecture Search (NAS) for few-shot learning scenarios. Claims And Evidence: In this paper, the claims mainly include the following several aspects: 1. **Zero-Cost Architecture Selection**: The claim that architectures can be selected without training. Such claim is supported by: (1) theoretical analysis of MAML's convergence properties; (2) empirical validation showing correlation between zero-cost proxies and actual test accuracy; (3) Experimental results demonstrating state-of-the-art performance 2. **Information Bottleneck Theory Application**: The claim that IB theory provides a unified view for understanding machine learning models. Such claim is supported by: (1) analysis of information entropy variations across different architectures; (2) consistent Kendall's Tau correlation between accuracy and information entropy; (3) theoretical framework connecting IB principles to architecture selection Methods And Evaluation Criteria: The proposed method and evaluation criteria are sound and well-structured: 1. **Theoretical Framework**: - Clear derivation of MAML convergence properties; - Well-motivated connection to Information Bottleneck theory; - Logical progression from theoretical insights to practical implementation 2. **Evaluation Approach**: - Comprehensive comparison with existing methods; - Multiple evaluation metrics (costs, accuracy, generalization); - Validation across different architectures and datasets Theoretical Claims: The paper's theoretical contributions are sound: 1. 
**MAML Convergence Analysis**: - Theorem 4.1 provides a rigorous foundation for the approach; - The connection to the first-order loss landscape is well-established; - The theoretical framework supports the practical implementation 2. **Information Bottleneck Integration**: - Clear connection between IB theory and architecture selection; - Well-motivated use of information entropy as a proxy; - Theoretical justification for the proposed metrics Experimental Designs Or Analyses: The experimental design is comprehensive and well-executed: 1. **Results Analysis**: - Thorough analysis of zero-cost proxies vs. accuracy; - Clear visualization of results (Figures 1-4); - Statistical validation of findings 2. **Ablation Studies**: - Analysis of different proxy metrics; - Investigation of architecture variations; - Validation of key components Supplementary Material: Yes, I checked all supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Pros: - The paper is well written. - The paper is based on theoretical results, which makes the paper solid. - The empirical results are good. Cons: - The motivation of the paper is not convincing enough. Other Comments Or Suggestions: N/A Questions For Authors: - As mentioned in the introduction, a concern raised in FSL is whether the model is ideal for those tasks. However, is it necessary to search for an optimal specific architecture for few-shot learning tasks? To some extent, the obtained architecture consumes computation cost and lacks generality. Can you compare NAS and model parameter adaptation for few-shot classification? - Fig. 1 is quite confusing. Could you please provide some detailed information? I mean, how do you describe the cost/generalization and accuracy simultaneously in the same figure with a continuous curve? - I am a little bit confused by Eq. (6-7). Could you please provide some explanations?
- How does the Information Bottleneck theory specifically guide the architecture selection process? - What is the optimal way to balance between expressivity and computational efficiency in the architecture search? - How does the framework handle different types of few-shot learning tasks beyond the current scope? For example, in cross-domain settings (meta-dataset), the vary-way vary-shot tasks are more complicated and challenging. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the helpful and insightful review, which is very helpful for us to further improve this paper. Next, we will answer your questions one by one, and we hope this will improve your acceptance of the paper. **Q1**: Concern about the optimal architecture for few-shot learning tasks. **A1**: Many thanks for your comments! As mentioned in Section 1, we raise the question of whether the frameworks of existing DNNs are ideal for these tasks. This is because popular networks (e.g., ResNet) were developed for supervised learning and overfit to those tasks. This evidence indicates that popular networks may not be optimal for tasks beyond supervised learning, e.g., the few-shot learning (FSL) task. This statement is supported by MetaNTK-NAS (CVPR2022). Therefore, it is worthwhile and necessary to explore DNN frameworks tailored for FSL. Regarding computation cost, traditional methods (e.g., AutoMeta, MetaNAS) are expensive since each architecture requires full training. In contrast, our training-free proxy achieves top performance in ≤0.1hr on miniImageNet and tieredImageNet, demonstrating exceptional efficiency. For generality, our searched architecture not only excels in FSL but also achieves optimal performance on NAS-Bench-201 (CIFAR-10, CIFAR-100, ImageNet-16-120). Additional results on ImageNet1k (Appendix B) further confirm strong generalization, reaching 76.6% Top-1 accuracy with the lowest search cost of 0.0042 GPU-days. Compared with model parameter adaptation (i.e., MAML), NAS-guided FSL possesses the following advantages: (1) higher accuracy; (2) reduced human labor; (3) breaking the limitations of parameter optimization of a fixed network (ResNet-12); (4) NAS can design a network for any setting of FSL. **Q2**: Concern about Fig 1. **A2**: Many thanks for your comments! Fig. 1 is a schematic diagram, visualizing the comparison of different methods in terms of cost, generalization, and accuracy. The data is derived from Table 2.
For example, in the cost dimension, AutoMeta is at the top, indicating the highest computational cost. The continuous curves are used to better visualize performance trends across different algorithms, not real data. **Q3**: Explanations about Eq. (6-7). **A3**: Many thanks for your comments! Eq. 6 introduces the Lagrange multiplier $\lambda(x)$ to ensure normalization and utilizes the upper bound on mutual information to simplify the optimization problem. Eq. 7 shows the process of obtaining the optimal $p(r|x)$ by taking the derivative and setting it to zero. **Q4**: The relationship between IB and architecture selection. **A4**: Many thanks for your comments! In this work, we use the Information Bottleneck (IB) theory to measure the expressivity of neural networks. A stronger neural network maintains more feature information from the input $x$, resulting in a higher $NN_{expressivity}$. During the NAS search process, architectures that preserve more input feature information are more likely to be selected. **Q5**: The optimal way to balance expressivity and computational efficiency. **A5**: Many thanks for your comments! To clearly answer this question, let's first assume a neural network **NetA**. When fully trained in supervised learning, **NetA** achieves maximum expressivity but also incurs the highest computational cost. To reduce computational cost, one approach is to decrease the number of training iterations, but this does not guarantee an accurate measure of expressivity. Instead, our method takes a different approach: rather than training NetA, we design a proxy that accurately measures its expressivity based solely on its architecture. This allows us to maximize the balance between expressivity and computational efficiency. While this approach may not be strictly optimal, it is the closest to the optimal solution, as demonstrated by our experimental results. **Q6**: Discussion about cross-domain settings. **A6**: Many thanks for your comments!
First, our paper follows MetaDiff (AAAI24) and MetaNTK-NAS (CVPR22), which only provide 5way-1shot and 5way-5shot settings of few-shot learning tasks. Therefore, we believe our empirical evaluation is sufficient and not beyond the scope of ICML 2025. Second, we greatly appreciate **masR**'s insights on cross-domain learning, which reinforce our belief that they are an outstanding expert in this field. Their perspective provides valuable inspiration for our future research on cross-domain FSL. Cross-domain FSL (e.g., Meta-Dataset) presents several challenges, such as differences between training and testing domains and limited data availability. The key challenge in cross-domain learning is extracting **domain-invariant** features. For example, **fo-Proto-MAML** leverages second-order optimization in MAML to enhance cross-domain FSL. Since our method is only related to MAML's second-order optimization and is independent of the image domain, we believe our approach can generalize well to cross-domain FSL.
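A3 above alludes to the standard Information Bottleneck derivation. For readers following along, the canonical textbook form of that argument (Tishby et al.) is the following; note this is the generic derivation and may differ in detail from the paper's exact Eqs. (6-7):

$$\mathcal{L} = I(X;R) - \beta\, I(R;Y) - \sum_{x}\lambda(x)\left(\sum_{r} p(r|x) - 1\right)$$

Setting $\partial \mathcal{L}/\partial p(r|x) = 0$ and solving yields the self-consistent optimal encoder

$$p(r|x) = \frac{p(r)}{Z(x,\beta)}\, \exp\!\big(-\beta\, D_{\mathrm{KL}}\big[\,p(y|x)\,\|\,p(y|r)\,\big]\big),$$

where $Z(x,\beta)$ is the normalizer enforced by the Lagrange multiplier $\lambda(x)$.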
Summary: The paper introduces IBFS (Information Bottleneck-driven Few-shot Neural Architecture Search), a novel framework designed to efficiently select neural architectures for few-shot learning (FSL) without requiring any training. Traditional NAS approaches either search architectures from scratch—resulting in high computational costs—or transfer architectures from other tasks—potentially leading to suboptimal performance. IBFS addresses these limitations by leveraging Information Bottleneck (IB) theory and a Zero-Cost evaluation method. Main Contributions: 1. The paper derives that the global convergence of Model-Agnostic Meta-Learning (MAML) can be ensured by considering only the first-order loss landscape. 2. Information bottleneck provides a unified perspective on understanding machine learning models. The proposed Zero-Cost expressivity ranking method estimates an architecture's effectiveness without training, significantly reducing search costs. 3. IBFS achieves state-of-the-art results in FSL without requiring training, validating its effectiveness. Overall, the paper presents a theoretically grounded and empirically validated approach to optimizing neural architectures for FSL, offering a cost-effective alternative to conventional NAS techniques. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to follow, making complex concepts accessible to the reader. 2. It introduces a novel zero-cost NAS framework specifically designed for few-shot learning (FSL), which effectively selects optimal architectures without training, significantly reducing computational overhead. 3. 
The proposed IBFS framework achieves state-of-the-art performance in FSL tasks without requiring any training, demonstrating both the efficiency and effectiveness of its architecture selection strategy. Weaknesses: 1. The experimental evaluation is somewhat limited, as it is conducted solely on MAML, an approach that is now considered outdated. Additionally, the FSL evaluation is restricted to 5-1 and 5-5 settings, whereas comparisons with 5-20 and 5-50 settings are typically necessary for a more comprehensive assessment. 2. The choice of baselines is not sufficiently up-to-date, as it lacks comparisons with recent NAS methods from the past two years, which may impact the fairness and relevance of the evaluation. 3. There is a discrepancy in Fig. 4—according to the annotation and the "Remark" section, the figure should contain multiple curves, but in its current form, only one curve is presented, which may lead to confusion or misinterpretation of the results. Other Comments Or Suggestions: After rebuttal, the authors have addressed all concerns. I will keep my rating. Questions For Authors: See weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the helpful and insightful review, which is very helpful for us to further improve this paper. Next, we will answer your questions one by one, and we hope this will improve your acceptance of the paper. **Q1**: Concern about MAML. **A1**: Many thanks for your comments! Our paper follows MetaDiff (AAAI24) and MetaNTK-NAS (CVPR22), which only provide 5-1 and 5-5 settings. The 5-20 and 5-50 settings are indeed promising; however, previous methods did not conduct experiments on the 5-20 and 5-50 settings, and we worry that the comparison would be unfair. **Q2**: Lack of newer works. **A2**: Many thanks for your comments! We devoted substantial effort to IBFS, conducting experiments compared with newer works. The results demonstrate that our method still achieves the best performance. "-" indicates that the value is not found in the original paper or the training code is not provided.

| | Year | Cost(s) | CIFAR-10 (val) | CIFAR-10 (test) | CIFAR-100 (val) | CIFAR-100 (test) | ImageNet (val) | ImageNet (test) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SNAS [1] | ICLR2019 | - | 90.10±1.04 | 92.77±0.83 | 69.69±2.39 | 69.34±1.98 | 42.84±1.79 | 43.16±2.64 |
| DSNAS [2] | CVPR2020 | - | 89.66±0.29 | 93.08±0.13 | 30.87±16.40 | 31.01±16.38 | 40.61±0.09 | 41.07±0.09 |
| DARTS- [3] | ICLR2021 | 192 | 91.03±0.44 | 93.80±0.40 | 71.36±1.51 | 71.53±1.51 | 44.87±1.46 | 45.12±0.82 |
| PC-DARTS [4] | ICLR2020 | 8.70 | 89.96±0.15 | 93.41±0.30 | 67.12±0.39 | 67.48±0.89 | 40.83±0.08 | 41.31±0.22 |
| iDARTS [5] | ICML2021 | - | 89.86±0.60 | 93.58±0.32 | 70.57±0.24 | 70.83±0.48 | 40.38±0.59 | 40.89±0.68 |
| GradSign [6] | ICLR2022 | 30.38 | - | 93.52±0.19 | - | 70.57±0.31 | - | 41.89±0.69 |
| ZiCo [7] | ICLR2023 | 6.2 | 93.50±0.18 | - | 70.62±0.26 | - | 42.04±0.82 | - |
| IS-DARTS [8] | AAAI2024 | 7200 | 91.55±0.00 | 94.36±0.00 | 73.49±0.00 | 73.51±0.00 | 46.37±0.00 | 46.34±0.00 |
| AZ-NAS [9] | CVPR2024 | 0.71 | - | 93.53±0.15 | - | 70.75±0.48 | - | 45.43±0.29 |
| SWAP [10] | ICLR2024 | 4.7 | 87.31±2.36 | 90.48±0.94 | 65.92±4.32 | 67.13±1.83 | 33.85±4.98 | 35.40±3.96 |
| IBFS (ours) ($\theta$=0) | | 3.36 | 89.58±0.57 | 92.96±0.81 | 69.17±1.81 | 68.94±1.41 | 41.30±1.79 | 41.11±1.51 |
| IBFS (ours) ($\theta$=0.75) | | 3.82 | 91.55±0.76 | 94.37±0.34 | 73.31±2.12 | 73.09±2.08 | 45.59±0.32 | 46.33±1.27 |

**Q3**: Concern about Figure 4. **A3**: Many thanks for your comments! Kendall's correlation coefficient requires two lists, each containing multiple elements, to be computed. For a given epoch, we first calculate the proxy scores for the models DenseNet-40, SE-ResNet-20, ResNet-56, PyramidNet-110, and WRN-16. These scores form one list, while the corresponding true accuracy values at the same epoch form another. We then use Kendall's coefficient to measure the correlation between these two lists. As our analysis focuses on the correlation between proxy scores and true accuracy at each epoch, it is naturally represented by a single line segment rather than multiple curves.

[1] SNAS: stochastic neural architecture search. In ICLR 2019
[2] DSNAS: Direct neural architecture search without parameter retraining. In CVPR 2020
[3] DARTS-: robustly stepping out of performance collapse without indicators. arXiv preprint arXiv:2009.01027, 2020
[4] PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search. In ICLR 2020
[5] iDARTS: Differentiable architecture search with stochastic implicit gradients. arXiv preprint arXiv:2106.10784, 2021
[6] GradSign: Model performance inference with theoretical insights. In ICLR 2022
[7] ZiCo: Zero-shot NAS via inverse coefficient of variation on gradients. In ICLR 2023
[8] IS-DARTS: stabilizing DARTS through precise measurement on candidate importance.
In AAAI 2024.
[9] AZ-NAS: Assembling zero-cost proxies for network architecture search. In CVPR 2024
[10] SWAP-NAS: Sample-wise activation patterns for ultra-fast NAS. arXiv preprint arXiv:2403.04161, 2024
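The per-epoch Kendall computation described in A3 above can be sketched as follows; the `kendall_tau` helper and the five proxy/accuracy values are hypothetical placeholders for the five named models, not numbers from the paper:

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs."""
    n = len(xs)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# At one epoch: proxy scores for DenseNet-40, SE-ResNet-20, ResNet-56,
# PyramidNet-110, WRN-16 form one list, true accuracies form the other.
proxy_scores = [0.61, 0.48, 0.72, 0.80, 0.55]
accuracies   = [92.1, 90.3, 93.0, 94.2, 91.5]

print(kendall_tau(proxy_scores, accuracies))  # identical orderings -> 1.0
```

Repeating this for every epoch yields one correlation value per epoch, which is why Figure 4 shows a single curve rather than one curve per model.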
Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE
Reject
Summary: This paper proposes Jakiro, a method that boosts the performance of speculative decoding for Large Language Model (LLM) inference acceleration. Speculative decoding employs a smaller, faster "draft" model to predict upcoming tokens, which a larger "target" model then verifies. Jakiro introduces two primary innovations: a dynamic decoupling mechanism using a Mixture of Experts (MoE) approach to enhance prediction diversity and a hybrid inference strategy combining autoregressive and parallel decoding. This allows Jakiro to achieve state-of-the-art performance in speculative decoding. Claims And Evidence: The paper presents compelling ideas and generally provides evidence for its claims, but there are certain areas where the support could be more robust and transparent: Strengths: Clear comparisons: The paper provides a detailed comparison with existing speculative decoding methods, including Medusa and Eagle, clearly outlining the limitations of these approaches and how Jakiro addresses them. Experimental setup: The experimental setup is comprehensive, covering various models (Vicuna, LLaMA2-chat, LLaMA3-Instruct) and benchmark datasets (MT-bench, HumanEval, GSM8K, etc.). Ablation studies: Ablation studies are conducted to analyze the impact of different components of Jakiro, such as the MoE settings and the contrastive mechanism, providing insights into their individual contributions. Area for improvement: Claim: "This suggests that Jakiro benefits from a more efficient drafting process that allows for longer and more stable sequences of tokens to be accepted, reducing the need for frequent re-sampling and minimizing the risk of errors during the inference process." Issue: This claim attributes the speedup to "minimizing the risk of errors during the inference process". What errors are the authors talking about?
Issue: the work does not discuss the inference framework used, or whether chunked prefill (a standard in modern production runtimes) was used. Overall, the paper presents a promising approach to speculative decoding with supporting evidence. Methods And Evaluation Criteria: Yes. As noted in the strengths above. Theoretical Claims: This paper does not present any formal theoretical proofs that would require checking for correctness. The claims made in the paper are primarily supported through empirical evidence obtained from experiments and ablation studies. Experimental Designs Or Analyses: Yes. As noted in the strengths in the "Claims And Evidence" section. Supplementary Material: I skimmed the appendix. Results presented in the appendix look correct. Relation To Broader Scientific Literature: This work pushes the boundaries of speculative decoding by building on the latest in the space and integrating the MoE structure into it. Essential References Not Discussed: None that I noticed Other Strengths And Weaknesses: Strength: They are using the latest in architecture design (MoE architecture) Weakness: The result seems very incremental. They used an MoE (which most people should be doing now anyway) and got slightly better results at the cost of needing to store more draft model parameters. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > C1: This claim attributes speedup to "minimizing the risk of errors during the inference process". What errors are the authors talking about? Thank you for pointing out the ambiguity in our phrasing. The "risk of errors" refers to the probability that a token generated by the draft model is rejected by the target model during the validation phase of Speculative Decoding (SD). This rejection risk directly corresponds to the inverse of the acceptance rate of the draft model. When a draft token is rejected at a particular step: - All subsequent tokens in the speculative sequence are discarded. - The system must re-sample a new token from the target model, which resets the speculative decoding process for subsequent steps. A low acceptance rate (i.e., high rejection risk) leads to frequent re-sampling events, which disrupts the efficiency of SD. This is because: - **Computational Overhead:** Re-sampling from the target model incurs additional latency, compromising the speed gains from speculative decoding. - **Loss of Speculative Progress:** Discarded speculative tokens waste the computational effort invested in generating them. Jakiro’s design mitigates this risk by optimizing the draft model’s alignment with the target model, thereby increasing the acceptance rate. This results in fewer re-sampling events and more stable, longer sequences of accepted tokens, ultimately enhancing the overall speedup of the SD process. > Q1: The work does not talk about the inference framework they used or if they did or did not use chunked prefill. Overall, the paper presents a promising approach to speculative decoding with supporting evidence. Thanks for your valuable comment. Similar to mainstream SD methods (e.g., Medusa, EAGLE, Hydra), our Jakiro implementation relies solely on the PyTorch framework without additional acceleration architectures. 
Regarding "chunked prefill" (Agrawal et al., 2023), we didn't employ this technique because: - Our experiments used batch_size=1 by default, may refer to our response to Reviewer aZSm's comment W2. - The models could be fully loaded onto GPUs without memory constraints. We acknowledge that integrating "chunked prefill" could benefit larger models or batch_size>1 scenarios. We plan to explore such optimizations (e.g., vLLM/SGLang integration) in future work to further enhance Jakiro's performance. > W1: The result seems very incremental. They used an MoE (which most people should be doing now anyway) and got slightly better results at the cost of needing to store more draft model parameters. We appreciate this critical perspective but wish to clarify that characterizing our results as merely incremental may not be entirely accurate. While we do employ MoE, simply increasing model parameters does not guarantee improved speedup—expanding the draft model might improve acceptance rates but also introduces additional overhead, potentially reducing the speedup gain (as demonstrated in our ablation studies in Table 3). Furthermore, we optimize the MoE architecture by using slimmer MLP dimensions (detailed in the lower part of Figure 3 in this paper). To verify that Jakiro introduces minimal additional parameters and almost no extra memory overhead during inference, please refer to our response to Reviewer LwQj’s comment W2. We maintain that integrating MoE into speculative decoding remains an innovative and non-trivial contribution. --- Rebuttal Comment 1.1: Comment: i have read the response and will keep my rating
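The rejection/re-sampling dynamic described in the reply to C1 above can be sketched with a toy greedy-verification loop; `verify_draft` and the token IDs are illustrative assumptions, not the paper's implementation:

```python
# Once a draft token disagrees with the target model's token, all later
# draft tokens are discarded (speculative progress is lost) and the
# target's token is used as the correction before drafting resumes.
def verify_draft(draft_tokens, target_tokens):
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:                # rejection event
            return accepted, t    # re-sampled token from the target model
        accepted.append(d)
    return accepted, None         # whole draft accepted

prefix, correction = verify_draft([5, 9, 2, 7], [5, 9, 4, 7])
print(prefix, correction)  # [5, 9] 4 -- the trailing draft token 7 is discarded
```

A higher acceptance rate shifts the rejection point later (or avoids it), which is exactly the "minimizing the risk of errors" the rebuttal refers to.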
Summary: This paper presents Jakiro, which utilizes the MoE technique to do two-token-ahead parallel decoding to enhance the diversity of draft model predictions. Upon the framework of EAGLE, Jakiro replaces the MLP layer of the EAGLE drafter with an MoE layer consisting of a router and several experts. The authors also integrated popular contrastive decoding techniques at the feature level (i.e., hidden states before the LM head), where the authors claimed to achieve further improvement in greedy decoding scenarios. Claims And Evidence: - Diversity of draft tokens increased: per my understanding, the authors make a strong claim that the Jakiro-style method significantly increases the "diversity" of draft tokens. However, other than the fact that drafted tokens now come from two sets of features from selected experts, I did not see any signs of diversity in the drafted tokens. I would ask the authors to give a clear definition of the diversity of draft tokens and then quantify it under that definition. - If the authors only refer to the high performance in high-temperature decoding, I wouldn't choose the term diversity if I were in their shoes. - The motivation for contrastively decoding the token with the top-2 selected experts is strange. The authors have not explained the reason, and it is not obvious. I would consider using the features from the most activated and least activated experts to do contrastive decoding, as they could serve the roles of the strong and weak models in the original contrastive decoding setup. Methods And Evaluation Criteria: The datasets and metrics used in this paper are quite established and standard. Theoretical Claims: No proofs for theoretical claims are provided in the paper. Experimental Designs Or Analyses: The experiments are sound and solid. Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The parallel decoding usage of SPD is a crucial contribution of Jakiro; however, the authors missed some of the recent works in parallel SPD, including: BiTA: Bi-Directional Tuning for Lossless Acceleration in Large Language Models, Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration, ParallelSpec: Parallel Drafter for Efficient Speculative Decoding Other Strengths And Weaknesses: Strengths: - Jakiro achieved SOTA results with both greedy and non-greedy decoding modes, which is impressive as decoding with temperature has always been challenging for speculative decoding methods. - Appendix A.2 illustrates the speedup ratio on different devices, confirming the universal applicability of Jakiro. Weaknesses: - Table 3 seems to reveal the regrettable fact that although introducing the MoE mechanism helps increase the average acceptance length, the speed-up ratio suffers as the number of experts increases. Per my understanding, N=K=2 means a simple EAGLE-style implementation with learnable ratios and contrastive decoding between two heads. Other Comments Or Suggestions: Typos: L311: 77B -> 70B Questions For Authors: See Claims And Evidence. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > C1: The authors should give a clear definition of the diversity of draft tokens and then quantify it with such a definition.

Thanks for your thoughtful feedback. Building upon our previous response to Reviewer LwQj (W1), as illustrated in the figure, our Jakiro model, even with just two MoE heads, can generate richer tokens. To rigorously address your query on defining and quantifying diversity, we propose a composite metric that evaluates three dimensions:
- **Generation Richness (G):** The number of unique draft tokens, used to evaluate the diversity of the drafting phase.
- **Selection Effectiveness (S):** The number of unique accepted tokens, used to evaluate the diversity of the verification phase.
- **Exploration Depth (E):** The total number of tokens generated in the final output, prioritizing deeper exploration.

The diversity metric is formalized as:

$$ \text{Diversity} = \frac{1}{N} \sum_{i=1}^{N} \left( \alpha \cdot \ln{g_i} + \beta \cdot \ln{s_i} + \gamma \cdot \ln{e_i} \right) $$

where $g_i$, $s_i$, and $e_i$ represent the Generation Richness, Selection Effectiveness, and Exploration Depth for each round of dialogue, respectively, and $N$ denotes the total number of dialogue rounds. The empirically validated weights are: $\alpha=0.4,\ \beta=0.4,\ \gamma=0.2$.

Benchmark results on mt_bench (Vicuna-7B, T=1, top-k=10):

Method | Avg. G ↑ | Avg. S ↑ | Avg. E ↑ | Diversity ↑
:-------------:|:--------:|:--------:|:-----------:|:-----------:
Eagle2 (Dense) | 1234 | 206 | 789 | 6.3
Jakiro (MoE) | 4830 | 246 | 865 | **7.0**

Jakiro achieves 11% higher diversity than Eagle2, driven by its ability to generate more unique tokens, maintain high acceptance rates, and explore longer sequences.

> C2: The motivation for contrastively decoding with top-2 selected experts. Consider using the features from the most activated expert and least activated experts to do contrastive decoding.

Thank you for raising this important point.
The rationale for using top-2 activated experts in contrastive decoding:
- **Baseline Configuration:** Our initial implementation followed the mainstream MoE framework, where the top-2 experts (ranked by routing weights) are selected for token generation.
- **Empirical Optimization:** As shown in Table 3 of the paper, using 2 experts achieved the optimal speedup.

To rigorously evaluate our design, we conduct additional Vicuna 7B experiments comparing two strategies on A40:

| Strategy | #Expert | MT-bench Speedup | MT-bench τ | HumanEval Speedup | HumanEval τ |
|:----------:|:-------:|:----------------:|:----------:|:-----------------:|:-----------:|
| top-2 | 5 | 2.59x | 5.13 | 2.95x | 5.60 |
| top-bottom | 5 | 2.45x (-5.4%) | 4.82 | 2.80x (-5.1%) | 5.20 |
| top-2 | 4 | 2.65x | 5.09 | 3.02x | 5.54 |
| top-bottom | 4 | 2.50x (-5.7%) | 4.75 | 2.85x (-5.6%) | 5.12 |

We fully agree with the reviewer on the theoretical significance of exploring "strong-weak expert contrastive decoding." However, our experiments reveal that low-confidence experts (e.g., bottom-1) may introduce noise, leading to degraded output quality. We believe this discrepancy from the original contrastive decoding conclusions (which compare logits from strong/weak models) stems from the fact that our contrastive decoding mechanism compares the **hidden states** of experts rather than their logits. This architectural difference could explain why the "strong-weak expert" paradigm behaves differently in our framework.

> R1: Missed some of the recent works in parallel SPD.

We will discuss recent parallel SPD works like BiTA, Parallel Decoding via Hidden Transfer, and ParallelSpec in the revised manuscript.

> W1: The speed-up ratio suffers as the number of experts increases in Table 3. N=K=2 means a simple EAGLE-style implementation with two heads.

We would like to clarify two key points: 1.
It fundamentally differs from EAGLE in that our Jakiro uses dynamic **router-based** expert selection for *autoregressive phases* (Stages 1-4) and employs contrastive decoding only in the final *parallel phase* (Stages 5-6). 2. In smaller models (e.g., Vicuna-7B), increasing N introduces **additional computational costs** (e.g., router computation), which slows down the autoregressive phase.

To address these trade-offs, we propose the following directions:
- **Efficiency Optimization for Small Models:** Explore lightweight routing mechanisms or parameter-sharing techniques to reduce N’s overhead.
- **Dynamic N/K Adjustment:** Adaptively set N and K based on task complexity (e.g., higher N for complex tasks, lower N for simpler ones).

Our MoE design inherently supports scalable improvements for large models (e.g., DeepSeek-V3-671B), where the draft model’s overhead becomes negligible relative to total computation (for expert counts N>2).

---

Rebuttal Comment 1.1:

Comment: Thanks for the detailed rebuttal. I keep my `accept` rating. I believe Jakiro is a good piece of work and good luck!
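The composite diversity metric defined in this rebuttal can be sketched in a few lines. This is our own illustrative implementation of the weighted-log formula, with the per-round counts (g, s, e) supplied as inputs; plugging in the per-method averages from the rebuttal's table approximately reproduces the reported scores (6.3 for Eagle2, 7.0 for Jakiro).

```python
import math

def diversity(rounds, alpha=0.4, beta=0.4, gamma=0.2):
    """Average weighted log-diversity over N dialogue rounds.

    Each round contributes alpha*ln(g) + beta*ln(s) + gamma*ln(e), where
    g = unique draft tokens (Generation Richness), s = unique accepted
    tokens (Selection Effectiveness), and e = total output tokens
    (Exploration Depth); the sum is averaged over the N rounds.
    """
    return sum(alpha * math.log(g) + beta * math.log(s) + gamma * math.log(e)
               for g, s, e in rounds) / len(rounds)

# Per-method averages from the rebuttal's mt_bench table:
eagle2 = diversity([(1234, 206, 789)])  # roughly 6.3
jakiro = diversity([(4830, 246, 865)])  # roughly 7.0
```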
Summary: This paper proposes Jakiro, which leverages Mixture of Experts (MoE), where independent experts generate diverse predictions, effectively decoupling correlations among candidates. It demonstrates universal improvements across multiple different benchmarks. Claims And Evidence: Yes, LLM acceleration is a very important topic for current applications. However, the authors should test over more advanced settings like flash decoding and bsz > 1. Methods And Evaluation Criteria: The authors should also report throughput results. Theoretical Claims: I did not see any proof. Experimental Designs Or Analyses: The authors should test over more advanced settings like flash decoding. The authors should test over throughput with different bsz. The authors should also add models like Qwen which are better at math and coding. Supplementary Material: YES. Relation To Broader Scientific Literature: NA. The idea of this paper is specifically designed for speculative decoding. Essential References Not Discussed: All the essential references are clearly discussed. Other Strengths And Weaknesses: Advantages: 1. The paper is well-written. 2. I think the idea is neat. Intuitively, both MoE and semi-autoregressive should work. 3. The experimental results strongly support the effectiveness of this method. Disadvantages: 1. The paper lacks a direct comparison with a strong baseline using flashdecoding. 2. According to the ablation study, the contrastive loss seems not to work. If the improvement is marginal, in my opinion, the authors can omit this part to keep the method simple. 3. The authors should test over more advanced settings like flash decoding. 4. The authors should test over throughput with different bsz. If the authors add more experiments over 3 & 4, I will increase my score from 2 to 3. Other Comments Or Suggestions: see cons. Questions For Authors: see cons. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > W1: The authors should test over more advanced settings like flash decoding.

Thank you for the feedback. We first clarify that Flash Decoding (FD) and speculative decoding (SD) operate at different optimization levels but can be effectively combined for greater efficiency.

**Flash Decoding (Dao et al., 2023):** A system-level optimization that accelerates attention computation for long-context processing through:
- Memory-efficient tiling for KV caching
- Parallel reduction across sequence chunks

**Speculative Decoding (Leviathan et al., 2023):** An algorithmic innovation leveraging the computation-memory gap in LLM inference:
- Draft model proposes candidate tokens (reducing generation steps)
- Target model verifies in parallel (exploiting GPU underutilization)

We conducted Vicuna 7B experiments on A100 under T=0 (following FlashDecoding++, Hong et al., 2023):

Method | mt_bench | humaneval | gsm8k | Avg
---------------|----------|-----------|-------|-------
Baseline + FD | 2.05x | 2.06x | 2.10x | 2.07x
Jakiro | 3.34x | 3.81x | 3.22x | **3.46x**

> W2: The authors should test over throughput with different bsz.

Similarly, we conducted experiments on "Batch Sizes > 1" (T=0) with Vicuna 7B on A40, and the results are shown in the table below. These results show that our method retains a clear speedup as the batch size grows, although the gains diminish gradually.

BS | mt_bench | humaneval | gsm8k | Avg
----|----------|-----------|-------|-------
1 | 3.02x | 3.40x | 3.08x | 3.17x
2 | 2.98x | 3.35x | 3.05x | 3.13x
4 | 2.85x | 3.25x | 2.95x | 3.02x
8 | 2.72x | 3.10x | 2.82x | 2.88x

Additional Clarification on Applying SD to **Batch Sizes > 1**:
- **Performance Degradation with Larger Batches**: Simply increasing the batch size during SD inference shifts the problem nature from memory-bound to compute-bound, leading to diminishing returns or even negative impacts as batch sizes grow (MagicDec, Sadhukhan et al., 2025).
- **Practical Focus on Batch Size = 1**: Current SD optimizations prioritize batch size = 1 due to:
  - *Sequence Length Variability*: Divergent acceptance rates across sequences in a batch result in varying candidate sequence lengths after the drafting stage.
  - *Verification Overhead*: This inconsistency increases computational costs (or latency) during the parallel verification phase of the target model.

**Research Status & Outlook:** Existing SD implementations (e.g., the seminal SD work, Medusa, EAGLE, Hydra) predominantly target batch size = 1. Though optimizing for batch sizes > 1 remains an open challenge, it represents a promising direction for future research.

> W3: If the improvement of 'contrastive loss' is marginal, the authors can omit this part to keep the method simple.

We appreciate the reviewer's observation regarding the contrastive loss. While the absolute improvement may appear modest, the contrastive loss is architecturally indispensable for maintaining the system's end-to-end performance. Specifically, on GSM8K in Table 4 of this paper, the contrastive loss contributes a measurable speedup enhancement (3.05x → 3.11x). Its critical role lies in:
- Optimizing later-stage parallel decoding efficiency
- Preserving the acceptance rate during initial auto-regressive phases

Of course, this part can be omitted to maintain method simplicity if a certain performance loss is acceptable.

> S1: The authors should also add models like Qwen that are better at math and coding.

Thanks for your valuable suggestion. We have added comprehensive experiments on Qwen2-7B-Instruct on A40 (Speedup: λ, Average accepted length: τ):

Method | T | MT-bench λ | MT-bench τ | HumanEval λ | HumanEval τ | GSM8K λ | GSM8K τ | Alpaca λ | Alpaca τ | CNN/DM λ | CNN/DM τ | Natural Ques. λ | Natural Ques. τ | Mean λ | Mean τ
--------|---|------------|------------|-------------|-------------|---------|---------|----------|----------|----------|----------|-----------------|-----------------|--------|-------
Eagle2 | 0 | 2.13 | 4.16 | 2.23 | 4.18 | 2.05 | 3.93 | 1.70 | 3.30 | 1.75 | 3.43 | 1.44 | 2.73 | 1.88x | 3.62
Jakiro | 0 | 2.28 | 4.20 | 2.36 | 4.20 | 2.16 | 3.98 | 1.82 | 3.35 | 1.88 | 3.46 | 1.53 | 2.75 | **2.01x** | 3.64
Eagle2 | 1 | 1.61 | 3.18 | 1.69 | 3.28 | 1.75 | 3.41 | 1.30 | 2.56 | 1.18 | 2.36 | 1.13 | 2.19 | 1.44x | 2.83
Jakiro | 1 | 1.72 | 3.20 | 1.81 | 3.30 | 1.85 | 3.45 | 1.38 | 2.60 | 1.25 | 2.40 | 1.20 | 2.25 | **1.54x** | 2.85

These additions highlight Jakiro’s adaptability to specialized LLMs.

---

Rebuttal Comment 1.1:

Comment: Thank you for the detailed response and the extensive experiments. I appreciate the effort you’ve put in. I would like to raise a few additional points and clarify my expectations for the final round:

***1. Flash Decoding Compatibility with Tree Attention:*** In W1, I asked about integrating Flash Decoding (FD) into your method. I would like to re-emphasize that FD is now a default system-level optimization with strong and stable acceleration benefits, and any method aiming for practical deployment must demonstrate compatibility with it. Your Tree Attention appears promising, but if it cannot be combined with FD, that significantly limits its real-world applicability. In fact, I have raised this exact concern in all speculative decoding papers I’ve reviewed. I do not consider the argument that FD and speculative decoding are "orthogonal" to be sufficient: FD is the default, and speculative decoding must work on top of it to be practically useful. If Tree Attention cannot be made compatible with FD, **I will not increase the score**. I encourage you to include substantial experiments showing this compatibility, especially with the **group query attention models**.
Additionally, speculative decoding without tree attention is compatible with FD; I have implemented one, which is faster and needs fewer FLOPs than the tree attention one. **You should also include this setting as a baseline.**

***2. Throughput Under Batch Sizes > 1:*** Thank you for providing batch size > 1 results. However, the maximum batch size is too small to demonstrate practical throughput performance, and the A40 is not ideal for this purpose. I strongly recommend testing on an A100 (80GB), where throughput is commonly measured in thousands of tokens per second. In addition, I’m now requesting you to include GQA models like Qwen in your FD + batch size > 1 experiments to better validate Jakiro’s generality under modern architectures. (Note: this requirement is new and was not part of my initial comments.)

***3. Contrastive Loss Impact:*** The results confirm my intuition.

***Final Note:*** Please note that this is the **final** opportunity to respond, so I recommend reporting as many models' results as possible. You can focus on the single dataset SpecBench to simplify evaluation. That said, I will be monitoring your updates closely; as soon as you upload new results, I will review them promptly and revise my score if the key issues (particularly FD compatibility and max throughput) are convincingly addressed. Thank you again for your work, and I look forward to your final response.

---

Reply to Comment 1.1.1:

Comment: Thanks for your responsible review. (1) To be honest, the validation of applying Flash Decoding (FD) to speculative decoding (SD) with a Tree Attention structure is indeed a challenging issue. While you emphasized that experimental validation of FD combined with SD is necessary to prove Jakiro's general applicability, to our knowledge, there appears to be no open-source Tree Attention-based speculative decoding method that has implemented this integration. If any exists, we sincerely hope you could inform us for future study.
We attempted to integrate vLLM's technology (which claims to use FD) during the rebuttal but found they haven't applied it to Tree Attention. We think this is more of an engineering optimization issue than this paper's core focus. Although our paper initially received modest scores with slim acceptance chances, we have still attempted to address your concerns responsibly.

(2) Regarding your recommendation to conduct experiments with larger batch sizes on an A100-80GB: our lab and surrounding facilities lack this GPU model. We rented two instances on the AutoDL platform but observed unstable test results. The following data shows average results from 5 runs on Qwen2-7B-Instruct with GQA (for reference only):

BS | mt_bench(T=0) | mt_bench(T=1)
----|----------------|---------------
1 | 2.75x | 2.11x
2 | 2.93x | 2.28x
4 | 2.78x | 2.08x
8 | 2.71x | 2.03x
16 | 2.64x | 1.96x
32 | 2.48x | 1.89x

Finally, we sincerely appreciate your feedback, though the requirements are indeed challenging. Even if this paper is not accepted, we will continue to explore the efficient integration of Jakiro with FlashDecoding technology to make it a more practical speculative decoding solution.
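The memory-bound-to-compute-bound shift discussed in this thread (why speculative decoding gains shrink at larger batch sizes) can be made concrete with a tiny arithmetic-intensity calculation. This is our own illustration with made-up numbers, not an analysis from the paper or from MagicDec.

```python
def arithmetic_intensity(weight_bytes, flops_per_token, batch_size):
    """FLOPs per byte of weights moved during one decoding step.

    In autoregressive decoding, every step streams the full model
    weights from memory once regardless of batch size, while compute
    scales linearly with the batch. Larger batches therefore raise
    arithmetic intensity, pushing the step from memory-bound toward
    compute-bound, where the draft model's extra FLOPs start to hurt.
    """
    return (flops_per_token * batch_size) / weight_bytes

# Hypothetical 7B-scale numbers: ~14e9 bytes of fp16 weights and
# ~14e9 FLOPs per generated token (2 FLOPs per parameter).
small_batch = arithmetic_intensity(14e9, 14e9, batch_size=1)
large_batch = arithmetic_intensity(14e9, 14e9, batch_size=32)
```

A 32x batch gives 32x the FLOPs per byte of weight traffic, so the idle compute that speculative decoding exploits at batch size 1 is progressively consumed by the batch itself.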
Summary: The paper claims that Jakiro improves speculative decoding by leveraging the Mixture of Experts (MoE) for dynamic decoupling and introduces a hybrid inference strategy that combines autoregressive decoding with parallel decoding in the last steps. The authors also claim that Jakiro achieves state-of-the-art performance in speculative decoding by significantly improving prediction accuracy and inference speed. Claims And Evidence: The claims are validated by experimental results. Methods And Evaluation Criteria: The proposed method can tackle the challenges identified by the authors. Theoretical Claims: No theoretical claims or proofs were provided. Experimental Designs Or Analyses: The experimental design is solid but lacks specific parts that could further improve its integrity. See weaknesses. Supplementary Material: Appendix Relation To Broader Scientific Literature: This work meaningfully advances speculative decoding research Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: Novelty: This work effectively decouples token dependencies within the draft tree, improving token diversity and verification accuracy. Experiments: The authors considered multiple model scales (7B to 70B) and diverse task benchmarks to provide strong empirical validation. Weaknesses: The authors claim that their work has improved diversity. However, the corresponding analysis is missing in the experiments. While MoE increases speedup, there is no analysis of efficiency or memory overhead compared to non-MoE speculative methods. The proposed method seems to be incorporating the architecture of MoE into speculative decoding. This may seem to lack novelty, and the authors should provide more justification for this design regarding its novelty and meaningfulness. Other Comments Or Suggestions: The paper lacks an overview figure for the proposed framework. Questions For Authors: 1. In Figure 1 ("Comparison of different speculative decoding methods"), the caption suggests that multiple speculative decoding methods are shown. However, part (a) of the figure is labeled simply "speculative decoding". How is this particular method related to the other three methods in the figure, given that all of them are considered speculative decoding in the caption? 2. What does the name mean? The method name Jakiro does not seem to refer to any particular machine learning algorithm or framework. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > W1: The analysis of diversity is missing in the experiments.

To validate that our Jakiro method enhances the diversity of speculative sampling, we conduct a comparative analysis against Eagle2 (with temperature=1), using a prompt from mt_bench: "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions." We statistically evaluate the responses averaged over 3 trials. The figure (https://anonymous.4open.science/r/Jakiro_rebuttal-C4C2/D.png) visualizes the frequency heatmap of the top-10 drafted tokens (default setting) sampled during the entire response generation. Tokens with frequencies below 10 were filtered out for clearer visualization. The results show:
- **Higher Token Quantity**: Under identical experimental conditions, Jakiro generates significantly more draft tokens (vertical axis of the figure).
- **Broader Semantic Coverage**: The generated tokens occupy a wider span in the semantic embedding space (horizontal axis), reflecting richer topical diversity.

Additionally, we also provide the statistical results of accepted tokens for Jakiro* and Eagle2 on the MT-bench dataset (see https://anonymous.4open.science/r/Jakiro_rebuttal-C4C2/C2.png), which further highlights the diversity of our Jakiro.

> W2: No efficiency or memory overhead analysis compared to non-MoE speculative methods.

We sincerely appreciate this insightful question.
Here is our memory and latency overhead analysis:

*Table: Hardware Metrics on A40-45GB of Vicuna 7B* (measured by `nvidia-smi` as shown in https://anonymous.4open.science/r/Jakiro_rebuttal-C4C2/M.png)

Metric | Dense | MoE-2 | Δ
:------------------:|:-----:|:-----:|:------:
Mem (GB) | 15.08 | 15.26 | +0.6%
Latency (ms/token) | 13.13 | 11.74 | -10.6%

Since Jakiro employs a lighter-weight MLP than Eagle, with merely 0.6% additional memory usage, the MoE-2 variant delivers a 10.6% speedup in token generation latency, showing highly efficient computation-memory scaling.

> W3: The justification of Jakiro's design regarding its novelty and meaningfulness.

Jakiro's novelty lies not in simply applying MoE to speculative decoding but in developing a *dynamic decoupling framework* that fundamentally addresses a newly identified bottleneck. Here are key justifications:
- **Problem Innovation:** Prior works focus on temporal decoupling (Eagle) or multi-head prediction (Medusa) but overlook *in-step candidate correlation*. This intrinsic limitation motivates our MoE-based decoupling at the *intra-step level*.
- **Architectural Novelty:** Compared to standard MoE applications:
  - *Dynamic Routing*: Experts specialize in two-branch token speculative decoding.
  - *Semi-autoregressive*: Combines autoregressive decoding for early tokens and parallel decoding for later stages.
  - *Contrastive MoE*: The first to apply a contrastive mechanism between activated experts.

> S1: The paper lacks an overview figure for the proposed framework.

Thanks for your suggestion. We will include the framework diagram (https://anonymous.4open.science/r/Jakiro_rebuttal-C4C2/F.png) in the revised version.

> Q1: The relationship between the particular method in (a) part of Figure 1 and the other three methods.
Figure 1 (a) presents the baseline method of classical speculative decoding (i.e., SpS in Table 1 of this paper), which serves as the comparative reference for the improved approaches in (b) Medusa, (c) Eagle, and (d) Jakiro. We will explicitly annotate this relationship in the revised version. > Q2: What does Jakiro's name mean? The name 'Jakiro' is inspired by the twin-headed dragon character from the DOTA game, symbolizing our method's dual-expert architecture where two specialized activated heads collaboratively generate diverse token predictions.
Primphormer: Efficient Graph Transformers with Primal Representations
Accept (poster)
Summary: This paper proposes a novel Graph Transformer model, named Primphormer, which models the self-attention mechanism in the primal space, avoiding costly pair-wise computations and enabling an efficient variant of Graph Transformers. By introducing an additional primal objective loss, Primphormer achieves high efficiency in terms of both runtime and memory usage, allowing for larger and deeper neural networks and enabling larger batch sizes, thereby enhancing the model's capacity and generalization ability. Furthermore, Primphormer preserves expressive power equivalent to that of traditional Transformers, effectively distinguishing non-isomorphic graphs. Experimental results on various graph benchmarks demonstrate the effectiveness and efficiency of the proposed Primphormer. Claims And Evidence: The paper provides convincing experimental results and theoretical evidence to support its claims. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are suitable for the given problem and application context. Theoretical Claims: I have reviewed the theoretical content of this paper, and there are no significant issues. For example, Theorem 2.2's duality derivation and the introduction of the KKT conditions are sound and consistent with related optimization theory. Lemma 2.3 effectively shows that the solutions in the dual space lead to a zero-valued objective in the primal space, which aligns with common results in duality theory. Furthermore, Theorem 3.2 provides a rigorous proof of the approximation ability of Primphormer for any continuous function, confirming its strong performance. Overall, the theoretical derivations in the paper are correct. Experimental Designs Or Analyses: I have reviewed the experimental design of this paper and found some issues. Specifically, the paper does not compare the latest baselines. 
For instance, it does not include comparisons with the GRIT [2] model and Graph ViT [1] model from ICML 2023, nor the GEAET [3] model from ICML 2024. These models have significant impact in the field of graph representation learning, and the lack of comparison could affect the comprehensive evaluation of Primphormer's performance. [1] X He, et al. A generalization of vit/mlp-mixer to graphs. ICML, 2023. [2] L Ma, et al. Graph inductive biases in transformers without message passing. ICML, 2023. [3] J Liang, M Chen and J Liang. Graph External Attention Enhanced Transformer. ICML, 2024. Supplementary Material: I have reviewed the appendix, and the dataset introduction in the experimental details is correct. In hyperparameter settings, the total number of parameters used for each dataset meets the requirements. The theoretical proof is also reasonable. Relation To Broader Scientific Literature: The key contribution of the paper is the introduction of Primphormer, based on primal space, which improves efficiency by avoiding pairwise computations. Unlike existing methods, Primphormer uses a dual approach to reduce computational complexity while maintaining expressiveness. Essential References Not Discussed: The paper overlooks Graph ViT [1] and GRIT [2] from ICML 2023, as well as GEAET [3] from ICML 2024, which could serve as meaningful comparison baselines. [1] X He, et al. A generalization of vit/mlp-mixer to graphs. ICML, 2023. [2] L Ma, et al. Graph inductive biases in transformers without message passing. ICML, 2023. [3] J Liang, M Chen and J Liang. Graph External Attention Enhanced Transformer. ICML, 2024. Other Strengths And Weaknesses: Strengths: -The proposed method is innovative. -The theoretical analysis is thorough and detailed. Weaknesses: -The performance of the proposed method is not particularly outstanding, and several models from 2023/2024 are not compared, such as GRIT, Graph ViT from ICML 2023, and GEAET from ICML 2024. 
-The paper claims to be an Efficient Graph Transformer, but it lacks testing on large graph datasets and only evaluates on graph-level tasks. Other Comments Or Suggestions: None. Questions For Authors: 1. The proposed method in this paper seems similar to linear attention methods. Could the authors explain the differences between the two approaches? Additionally, could the authors highlight the advantages of the proposed method compared to Graph Transformer methods with linear attention, such as Nodeformer from NeurIPS 2022 and SGFormer from NeurIPS 2023? 2. Could the authors provide performance results on large graph datasets? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your appreciation of the innovation of Primphormer and insightful comments. We address your concerns below:

- R4.1: Broader experiments.

> Thank you for mentioning the latest baselines, Graph ViT/MLP-mixer [1], GRIT [2], and GEAET [3]. Following your suggestion, we have included comparisons with these methods, as shown in the tables below.
>
> Table 1: Comparison on the CIFAR10 dataset.
>
> | | Acc$\uparrow$ | Time (s/epoch)$\downarrow$ | Memory (GB)$\downarrow$ |
> | :--- | :---: | :---: | :---: |
> | Graph MLP-Mixer | 73.96±0.33 | 47.5 | 3.11 |
> | Graph ViT | 72.11±0.55 | 49.7 | 3.05 |
> | GRIT | $\small{\textbf{76.46}}$±$\small\textbf{0.88}$ | 158.8 | 22.8 |
> | GEAET | 76.33±0.43 | 45.8 | 4.9 |
> | Primphormer | 74.13±0.24 | $\small{\textbf{32.6}}$ | $\small{\textbf{2.7}}$ |
>
> Table 2: Comparison on the MNIST dataset.
>
> | | Acc$\uparrow$ | Time (s/epoch)$\downarrow$ | Memory (GB)$\downarrow$ |
> | :--- | :---: | :---: | :---: |
> | Graph MLP-Mixer | 97.42±0.11 | 57.2 | 2.26 |
> | Graph ViT | 97.25±0.23 | 53.6 | 2.25 |
> | GRIT | 98.11±0.11 | 70.1 | 7.69 |
> | GEAET | 98.41±0.09 | 49.2 | 2.11 |
> | Primphormer | $\small{\textbf{98.56}}$±$\small{\textbf{0.04}}$ | $\small{\textbf{43.7}}$ | $\small{\textbf{1.71}}$ |
>
> It can be observed that our method achieves the lowest time and memory costs while maintaining competitive performance, which aligns with the motivation of this paper.
>
> We have also conducted experiments on large graph datasets for node-level tasks, including ogbn-arxiv, ogbn-proteins, and Amazon2m.
>
> | | ogbn-arxiv | ogbn-proteins | Amazon2m |
> |:--------|:--------|:-------|:------|
> | \#Nodes | 169,343 | 132,534 | 2,449,029 |
> | \#Edges | 1,166,243 | 39,561,252 | 61,589,140 |
> | NodeFormer | 59.90±0.42 | 77.45±1.15 | 87.85±0.24 |
> | SGFormer | 72.63±0.13 | $\small{\textbf{79.53}}$±$\small{\textbf{0.38}}$ | 89.09±0.10 |
> | Primphormer | $\small{\textbf{73.10}}$±$\small{\textbf{0.24}}$ | 78.93±0.31 | $\small{\textbf{90.33}}$±$\small{\textbf{0.32}}$ |
>
> We hope these results contribute to a comprehensive evaluation of Primphormer's performance.

- R4.2: Discussion with other linear attention methods.

> Linear attention mechanisms, such as NodeFormer [4] and SGFormer [5], aim to reduce computational complexity by decomposing or approximating the kernel matrix, operating in the dual space. For example, NodeFormer uses a random feature-based approach, while SGFormer drops the softmax activation to approximate the kernel matrix. In contrast, our method adopts a technically different approach by leveraging the asymmetric kernel trick. Instead of operating in the dual space, we directly model the representation of attention outputs in the primal space. We will include the experiments and discussion in the final version of the manuscript.

[1] X He, et al. A generalization of vit/mlp-mixer to graphs. ICML, 2023. [2] L Ma, et al. Graph inductive biases in transformers without message passing. ICML, 2023. [3] J Liang, M Chen and J Liang. Graph External Attention Enhanced Transformer. ICML, 2024. [4] Wu Q, et al. Nodeformer: A scalable graph structure learning transformer for node classification. NeurIPS, 2022. [5] Wu Q, et al. SGFormer: Simplifying and empowering transformers for large-graph representations. NeurIPS, 2023.

---

Rebuttal Comment 1.1:

Comment: Thank you for your response. I have carefully reviewed your response and intend to increase my rating to 2.

---

Reply to Comment 1.1.1:

Comment: We sincerely appreciate your constructive feedback and the elevated score.
Your suggestions regarding experiments with the latest baselines and large graphs are very insightful. As our method aims to improve efficiency, the reported results and the additional experiments you suggested support this efficiency improvement and have greatly enhanced our paper. Our method takes a different approach to reducing the cost of the self-attention mechanism by leveraging the primal-dual relationship: unlike other methods, the proposed approach approximates the output of the self-attention rather than the attention scores. A discussion comparing this method with other linear attention methods will also help readers better understand our contributions.
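As background to the linear-attention discussion in this thread, the dual-space factorization that methods like NodeFormer and SGFormer rely on can be sketched as follows. This is a generic kernel-feature-map illustration of ours, not Primphormer's primal construction and not code from the paper; the positive feature map is a hypothetical stand-in for the random-feature or softmax-free maps those methods use.

```python
import numpy as np

def feat(x):
    # Elementwise positive feature map (shifted ReLU keeps denominators > 0).
    return np.maximum(x, 0.0) + 1e-6

def quadratic_attention(Q, K, V):
    # Dual-space view: materialize the full N x N kernel matrix, O(N^2 d).
    A = feat(Q) @ feat(K).T
    A = A / A.sum(axis=1, keepdims=True)   # row-normalize like softmax
    return A @ V

def linear_attention(Q, K, V):
    # Reassociate the product: phi(Q) (phi(K)^T V), O(N d^2) time,
    # never forming the N x N matrix.
    phi_q, phi_k = feat(Q), feat(K)
    kv = phi_k.T @ V                       # d x d_v summary of keys/values
    norm = phi_q @ phi_k.sum(axis=0)       # length-N normalizer
    return (phi_q @ kv) / norm[:, None]

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((6, 4)) for _ in range(3))
out_quad = quadratic_attention(Q, K, V)
out_lin = linear_attention(Q, K, V)
# The two orderings agree up to floating-point error.
```

The two functions compute the same normalized kernel average; only the order of matrix multiplications (and hence the complexity) differs, which is the efficiency lever the dual-space methods above exploit.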
Summary: This paper introduces an efficient graph transformer, called Primphormer, that addresses the quadratic complexity issue of traditional graph transformers by using a primal representation. The authors showed that Primphormer serves as a universal approximator for functions on both sequences and graphs, retaining its expressive power for distinguishing non-isomorphic graphs. Experiments on various graph benchmarks demonstrate that Primphormer achieves competitive empirical results.

## update after rebuttal
By considering the additional experiments, I keep my positive score.

Claims And Evidence: 1. Reduced computational complexity: The paper provides analysis showing that the primal representation used in Primphormer has linear complexity ($O(Nps)$ time and $O(2N_ss + 2Np)$ memory), which is significantly more efficient than the quadratic complexity ($O(N^2s)$ time and $O(N^2 + Ns)$ memory) of traditional graph transformers. 2. Theoretical properties: The paper demonstrates through Theorems 3.2 and 3.3 that Primphormer can approximate any continuous function on sequences and graphs arbitrarily well. Methods And Evaluation Criteria: This paper evaluates Primphormer on diverse benchmark datasets, including LRGB, standard GNN benchmarks, molecular datasets, large-scale graphs, and the graph isomorphism benchmark (BREC). These datasets cover a broad range of graph types and tasks, making them appropriate for comprehensive evaluation. Theoretical Claims: I checked the proofs in Appendix C. Experimental Designs Or Analyses: Primphormer is tested on a wide range of graph benchmarks and compared against multiple relevant baselines. The experimental design also includes both running time and memory usage comparisons, which is crucial for validating the claim of improved computational efficiency. One concern is that it may need to be compared with some more efficient graph transformer approaches, such as [1], [2].
[1] Polynormer: Polynomial-Expressive Graph Transformer in Linear Time, ICLR 2024 [2] SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations, NeurIPS 2023 Supplementary Material: I reviewed the Appendix A data descriptions and the Appendix C proofs of theoretical results. Relation To Broader Scientific Literature: This paper proposes a new approach to efficient graph transformers. The proposed Primphormer builds on established concepts from kernel machines and universal approximation theory, while specifically addressing the unique challenges posed by graph data. Essential References Not Discussed: Please see Experimental Designs Or Analyses; some efficient graph transformers should be cited and compared. Other Strengths And Weaknesses: Strengths: 1. The paper introduces a novel primal representation for graph transformers that addresses the quadratic complexity issue inherent in traditional self-attention mechanisms. 2. Primphormer is proved to be as powerful as the Transformer in terms of distinguishing non-isomorphic graphs. 3. Primphormer is tested on a wide range of graph benchmarks and shows competitive performance. Weaknesses: 1. The main contribution of this paper is to design an efficient graph transformer, reducing the quadratic complexity to linear. However, it lacks comparisons with other complexity-reduction approaches, e.g., some recent advances in efficient attention mechanisms, to name a few [1], [2]. 2. The presentation of this paper could be improved to make it easier to follow, e.g., in the theoretical results section, explain how the theorems relate to the main claims of this paper. [1] Polynormer: Polynomial-Expressive Graph Transformer in Linear Time, ICLR 2024 [2] SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations, NeurIPS 2023 Other Comments Or Suggestions: 1.
Maybe it is better to organize the experimental results according to different tasks, e.g., for node-level, link-level, and graph-level. Now they are mixed. 2. What is the KSVD on Line 770? Questions For Authors: I'd like to see more discussions/evaluations with other efficient graph transformer approaches. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for your appreciation of the novelty of Primphormer. We address your concerns below:

- R3.1: Broader experiments.
> Following your suggestion, we have compared our method with other efficient graph Transformers such as Polynormer and SGFormer, as illustrated in the following table.
> | Acc$\uparrow$ | Computer | Photo | CS | Physics | WikiCS |
> |:--------|:--------|:-------|:------|:-------|:-------|
> | \#Nodes | 13,752 | 7,650 | 18,333 | 34,493 | 11,701 |
> | \#Edges | 245,861 | 119,081 | 81,894 | 247,962 | 216,123 |
> | SGFormer | 92.32±1.66 | 94.28±1.36 | 95.21±1.14 | 96.65±1.26 | 79.48±0.96 |
> | Exphormer | 91.59±0.31 | 95.27±0.42 | 95.03±0.09 | $\small{\textbf{97.16}}$±$\small\textbf{0.48}$ | 78.54±0.49 |
> | Polynormer | $\small{\textbf{93.38}}$±$\small\textbf{0.13}$ | 96.01±0.12 | 95.50±0.18 | 97.12±0.12 | 79.64±0.67 |
> | Primphormer | 92.47±0.55 | $\small{\textbf{96.22}}$±$\small\textbf{0.29}$ | $\small{\textbf{95.66}}$±$\small\textbf{0.22}$ | 97.02±0.17 | $\small{\textbf{80.11}}$±$\small\textbf{0.84}$ |
>
> We observed that Primphormer outperforms on three of the five datasets. We hope these results contribute to a comprehensive evaluation of Primphormer's performance.

- R3.2: Presentation, organization, and KSVD.
> Thank you for your insightful suggestions and careful reading. Following your suggestions, we will continue to improve the presentation, particularly in the theoretical section. We believe your feedback will enhance the readability of the manuscript.
> In this paper, we followed a commonly used experimental organization approach in the graph learning community [1,2]. As we are unable to update the PDF during the author-reviewer discussion period, we will revisit the organization based on your suggestion and select one approach for the final version of the manuscript.
> Thank you for your detailed comment. KSVD stands for Kernel Singular Value Decomposition.
The KSVD optimization problem mentioned on Line 770 corresponds to our primal optimization problem outlined in Equation 2.5. We will revise this part in the manuscript to improve clarity. We will include the experiments and discussion in the final version of the manuscript.

[1] Shirzad H, et al. Exphormer: Sparse transformers for graphs. ICML, 2023.
[2] L Ma, et al. Graph inductive biases in transformers without message passing. ICML, 2023.

---

Rebuttal Comment 1.1:
Comment: Thank the authors for the additional experiments, and I'll keep my positive rating, though it is another paper addressing the scalability in graph transformers.

---

Reply to Comment 1.1.1:
Comment: Thank you for maintaining your positive rating. We sincerely appreciate your recognition of our work, and your valuable suggestions have greatly contributed to improving it.
Summary: This work presents Primphormer, a graph transformer architecture that leverages a primal-dual framework to reformulate the self-attention mechanism for graphs, as has previously been done for self-attention on sequences. Unlike previous graph transformers (such as GraphGPS with a global vanilla Transformer) that compute pairwise attention, leading to quadratic complexity, Primphormer derives a primal representation by integrating a virtual node that aggregates global graph information with learned projection weights. This design reduces computational and memory costs as expected, but also retains expressiveness and shows competitive performance against prior baselines of efficient graph transformers.

Claims And Evidence:
- The paper claims that Primphormer can universally approximate continuous functions on graphs while significantly reducing runtime complexity compared to standard GTs.
- This is supported by theoretical analysis of the primal representation, derived via an asymmetric kernel trick, which avoids quadratic pairwise computations by integrating global information through a virtual node.
- Empirical results across a diverse set of benchmarks (e.g., CIFAR10, MalNet-Tiny, ogbn-products) show competitive performance with improved memory and runtime efficiency over baselines such as Exphormer and traditional Transformer-based approaches.
- While the use of the virtual node through the f_X transformation is claimed to be essential, how much does performance drop when it is removed?
- In addition, a single virtual node may introduce a bottleneck in aggregating global information because it is a single aggregation point. How can this be resolved in principle while keeping the overall benefits of Primphormer?
- Also, the evidence for efficiency gains is not clear in Table 4, and the corresponding claim does not seem fully justified in Section 4, page 6 (except when the graph size is large, as in MalNet or ogbn-products).
Methods And Evaluation Criteria: The method is implemented by having feature maps for queries and keys with learned projection weights (W_e and W_r) and a virtual node via a data-dependent aggregation function (f_X) that captures global graph information and ensures permutation equivariance. Evaluation is done by replacing the global attention component in GraphGPS with the proposed method, on public graph benchmarks that were used in GraphGPS and subsequent efficient graph transformers. Theoretical Claims: A key theoretical claim is that Primphormer is a universal approximator for functions on both sequences and graphs despite using a sparse, efficient attention mechanism. This is justified using the tools employed in previous methods. Experimental Designs Or Analyses: The experiment design replaces the global attention component in the GraphGPS framework with the proposed method, which makes it a fair design. Supplementary Material: Dataset and experiment details, pseudocode, and an additional experiment with GRIT, which has better performance but higher memory/time consumption, on the 2 datasets used for evaluation. Relation To Broader Scientific Literature: The paper is well positioned within the current literature on scalable/efficient graph transformers and the corresponding primal attention works in the general transformer literature (e.g., Chen et al., 2023). Essential References Not Discussed: NA Other Strengths And Weaknesses: In terms of understanding the contributions of the paper, there may be a concern: the efficiency gains are not seen clearly in the experiments, and the overall contribution is to extend primal attention from general sequential input to graph input while addressing the challenges that arise in that generalization. Other Comments Or Suggestions: NA Questions For Authors: Already included in the sections above, e.g., in the Claims And Evidence section. Code Of Conduct: Affirmed. Overall Recommendation: 4
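The permutation-equivariance requirement discussed in this review comes down to the global aggregation being order-independent. The sketch below is purely illustrative (a plain mean readout, not the paper's learned $f_X$) and the function name is hypothetical:

```python
def virtual_node_readout(node_feats):
    """Toy permutation-invariant global readout: a single virtual node that
    averages all node features, so any reordering of nodes gives the same state."""
    n = len(node_feats)
    dim = len(node_feats[0])
    return [sum(x[d] for x in node_feats) / n for d in range(dim)]

X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
X_perm = [X[2], X[0], X[1]]
# Reordering the nodes leaves the virtual node's state unchanged.
assert virtual_node_readout(X) == virtual_node_readout(X_perm)
```

A single such aggregation point is exactly what raises the bottleneck concern: all global information must pass through one vector per readout.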
Rebuttal 1:
Rebuttal: Thank you for your insightful suggestions. We address your concerns below:

- R2.1: The performance impact of removing $f_X$.
> Following your insightful suggestion, we report the performance drop from removing $f_X$ in the following table.
> | | PascalVOC | COCO | Peptides-Func | Peptides-Struct | PCQM |
> |:------:|:------------:|:------------:|:-------------:|:---------------:|:-------------:|
> | Metric | F1$\uparrow$ | F1$\uparrow$ | AP$\uparrow$ | MAE$\downarrow$ | MRR$\uparrow$ |
> | Primphormer | 0.4602 ± 0.0077 | 0.3903 ± 0.0061 | 0.6612 ± 0.0065 | 0.2495 ± 0.0008 | 0.3757 ± 0.0079 |
> | No $f_X$ | 0.4513 ± 0.0089 | 0.3758 ± 0.0082 | 0.6509 ± 0.0072 | 0.2576 ± 0.0011 | 0.3516 ± 0.0126 |
>
> Removing $f_X$ results in lower performance and a higher standard deviation, highlighting its importance in contributing to the model's stability and effectiveness.

- R2.2: Virtual nodes.
> To address the bottleneck in aggregating global information, we can explore the use of multiple or hierarchical virtual nodes:
> (1) Multiple virtual nodes: Introducing multiple virtual nodes instead of a single one can help distribute the burden of global aggregation. Each virtual node could focus on aggregating information from a subset of nodes within the graph. These subsets could be obtained by applying graph partitioning methods [1] before training, thereby reducing the bottleneck effect.
>
> (2) Hierarchical virtual nodes: Employing hierarchical virtual nodes enables a multi-level aggregation process. For example, virtual nodes at lower layers could aggregate local information, while higher-level virtual nodes could focus on capturing global information [2].
> Overall, your comment is highly insightful, and we consider this topic a valuable direction for future research.

- R2.3: Clearer efficiency gains.
> Thank you for your question regarding the gains. Indeed, the previous table may not have been very clear, as it included three metrics across several tasks.
To better illustrate the improvements, we now directly report the differences rather than the absolute values. Additionally, we provide the average performance across all tasks for better clarity. The relative gain is defined as $R:=\frac{\delta}{L}$, where $\delta$ represents the difference between the compared method and GPS+Transformer, and $L$ is the value of GPS+Transformer.
> We present the triplet $(R_A, R_T, R_M)$ where $R_A, R_T$, and $R_M$ denote the relative gains in accuracy, running time, and peak memory usage, respectively. The desired outcome is to achieve a higher $R_A\uparrow$, alongside lower $R_T\downarrow$ and $R_M\downarrow$, reflecting reduced running time and memory usage.
> | $(R_A\uparrow, R_T\downarrow, R_M\downarrow)$ | Cifar10 | MalNet. | PascalVOC | Peptides-Func | Average |
> | :--- | :--- | :--- | :--- | :--- | :--- |
> | GPS+BigBird | (-2.53\%, 0.971, -0.262) | (-1.24\%, 0.401, -0.923) | (-26.07\%, 0.469, -0.362) | (-10.42\%, 3.054, -0.410) | (-10.07\%, 1.224, -0.489) |
> | GPS+Performer | (-2.26\%, 0.814, 1.756) | (-0.92\%, -0.684, -0.672) | (-0.32\%, 0.396, -0.215) | (-0.92\%, 0.695, -0.089) | (-1.11\%, 0.305, 0.195) |
> | GPS+Exphormer | (3.29\%, 0.589, 0.454) | (0.56\%, -0.732, -0.706) | (6.40\%, -0.011, -0.060) | (-0.12\%, -0.406, -0.431) | (2.53\%, -0.140, -0.186) |
> | GPS+Prim-Atten | (-1.02\%, 0.146, -0.281) | (-0.57\%, -0.731, -0.927) | (-16.38\%, -0.278, -0.394) | (-1.35\%, -0.383, -0.601) | (-4.83\%, -0.311, $\small\textbf{-0.550}$) |
> | GPS+Primphormer | (2.52\%, 0.164, -0.281) | (0.13\%, -0.733, -0.919) | (6.53\%, -0.289, -0.396) | (1.18\%, -0.398, -0.597) | ($\small\textbf{2.59}$\%, $\small\textbf{-0.314}$, -0.548) |
>
> The table above shows that Primphormer achieves the highest relative gains in accuracy and running time on average, as well as the second-highest (and very close to the best) gain in memory usage. We hope this helps to demonstrate the efficiency gains more clearly.
We will include the discussion and additional experiments in the final version of the manuscript.

[1] Çatalyürek Ü, et al. More recent advances in (hyper)graph partitioning. ACM Computing Surveys, 2023.
[2] Vonessen C, et al. Next Level Message-Passing with Hierarchical Support Graphs. ArXiv:2406.15852, 2024.

---

Rebuttal Comment 1.1:
Comment: I thank the authors for their time in clarifying my questions on virtual nodes, its bottlenecks and the efficiency gains, which overall shows the benefits of the final Primphormer architecture. I increase the score thereby.

---

Reply to Comment 1.1.1:
Comment: Thank you for taking the time to carefully review our manuscript and for recognizing the benefits of the final Primphormer architecture. We truly appreciate your thoughtful feedback and the increased score, which motivates us to continue refining our work.
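The relative-gain metric used in R2.3 above, $R := \delta / L$ with $L$ the GPS+Transformer reference value, is straightforward to reproduce. The sketch below is a helper matching that definition; the sample numbers are illustrative inputs, not values from the paper.

```python
def relative_gain(value, baseline):
    """R := delta / L, where delta = value - baseline and L = baseline,
    matching the rebuttal's definition against the GPS+Transformer reference."""
    return (value - baseline) / baseline

# Illustrative only: accuracy 0.66 vs. a 0.60 baseline is a +10% gain,
# and a runtime of 45s vs. 60s is a -25% (i.e. faster) relative change.
assert abs(relative_gain(0.66, 0.60) - 0.10) < 1e-9
assert abs(relative_gain(45.0, 60.0) + 0.25) < 1e-9
```

Reporting the triplet $(R_A, R_T, R_M)$ then amounts to applying this function to the accuracy, running-time, and peak-memory columns separately.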
Summary: The paper aims to bypass the scale-restricting quadratic complexity of graph transformers with a primal representation of self-attention. This is accomplished by extending the linear-complexity primal representation of self-attention on sequences presented in [1]. The authors identify the lack of ordering in graph data (necessary for permutation equivariance) and the lack of flexibility of the data-adaptive weight as issues in [1]. These are addressed in optimization problem (2.5) with a virtual node allowing for global information collection under permutation equivariance, which is then incorporated into projection matrices enabling a data-driven dual basis. The duality of (2.5) is established under KKT conditions, yielding a primal representation in which pairwise computation can be avoided by relying on an asymmetric kernel trick. The paper continues with an analysis of Primphormer's universal approximation property on sequences and graphs, followed by its expressivity with regard to the 1-Weisfeiler-Leman test for graph isomorphism. Experiments follow to demonstrate Primphormer's performance, efficiency, and expressivity compared to baselines for a variety of common datasets and their corresponding tasks. [1] Chen, Y., Tao, Q., Tonin, F., and Suykens, J. A. K. Primal-attention: Self-attention through asymmetric kernel SVD in primal representation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, Primphormer is compared against several related architectures for a variety of benchmark datasets and their corresponding tasks with the convolutional layer remaining fixed, varying only the self-attention mechanism. Theoretical Claims: I found no notable issues with the theoretical claims. However, I was curious as to why Sumformer was used for the proof of Theorem 3.2. I believe that Corollary 3.5 is slightly incorrect and also should have a stronger lower bound.
From the proof of Theorem 3.4 it should require positional/structural features, and also that the Transformer with such features is strictly more powerful than 1-WL. Experimental Designs Or Analyses: Overall the experimental designs and analysis are fine. However, the results for PascalVOC-SP for Primphormer and GraphGPS+Transformer do not match between Table 1 and Table 3. I was not sure if this was simply an oversight. Otherwise, I was curious as to why Primphormer would perform consistently better than GraphGPS+Transformer given the analysis on performance and expressivity. Originally, I thought it odd that Table 5 did not include 1-WL given Corollary 3.5. However, if I remember correctly, GraphGPS+Transformer and Graphormer, etc. with position information are more powerful than 1-WL, while at least Graphormer is upper-bounded by 3-WL per [2]. I believe this is the reason for its exclusion. I would still be interested in seeing the results of 1-WL just for the sake of comparison. [2] Jiarui Feng, Yixin Chen, Fuhai Li, Anindya Sarkar, and Muhan Zhang. 2022. How powerful are K-hop message passing graph neural networks. In Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS '22). Curran Associates Inc., Red Hook, NY, USA, Article 345, 4776–4790. Supplementary Material: Yes, but I only gave Experiment_part1 and Experiment_part2, which contain the experiments, a cursory review. Relation To Broader Scientific Literature: The paper advances the state of the art in graph transformers with linear-complexity self-attention, without requiring sparseness in the graph. Though the main concept of a primal-dual formulation is primarily an extension of a previous work on sequence data to graph data, it does have some novel improvements.
The analysis and experiments are convincing, and Primphormer is something that I would find personally useful. Essential References Not Discussed: No Other Strengths And Weaknesses: The other weaknesses of the paper are addressed in other relevant sections. My only additional concern is with the overall novelty, as it is primarily an extension of [1] to graphs, but with a data-adaptive basis rather than a data-adaptive weight. However, I find the theoretical and empirical analysis to be quite convincing, which helps to offset this. Other Comments Or Suggestions: Nothing notable Questions For Authors: Tables 1, 2, and 3 appear to report average and standard deviation, but the text does not specify how many runs of each architecture were used. I was unable to find this from a cursory examination of your supplemental materials. Is it the same average over 5 runs as in Table 5? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thanks for your insightful comments and appreciation of this work. We address your concerns below:

- R1.1: The reason for using Sumformer.
> We use Sumformer as a bridge to analyze the approximation. Specifically, we decompose the approximation into two parts: (1) Primphormer to Sumformer and (2) Sumformer to the target function. Leveraging Sumformer allows us to avoid the need for exponentially many attention layers, requiring only one attention layer to represent the Sumformer.

- R1.2: Revised Corollary 3.5.
> Thank you for your insightful comment. We agree with your suggestion that Corollary 3.5 should be revised. Since both the Transformer and Primphormer are capable of simulating the 1-WL test, we claim that there exist parameterizations of the Transformer and Primphormer such that the node features (or colors) produced by these models are identical.
> (Revised Corollary 3.5.) Let $G=(V,E,\ell)$ with $N$ nodes and feature matrix $X^{(0)}\in\mathbb{R}^{d\times N}$ consistent with the label $\ell$. Then for all iterations $t\geq 0$, there exist parameterizations of Transformer and Primphormer and a positional encoding such that $X^{(t)}\_\mathcal{T}(v)=X^{(t)}\_\mathcal{T}(u)\iff X^{(t)}\_{\rm Pri}(v)=X^{(t)}\_{\rm Pri}(u)$ for all nodes $v,u\in V$, where $X^{(t)}\_\mathcal{T}$ and $X^{(t)}\_{\rm Pri}$ are node features of Transformer and Primphormer, respectively.

- R1.3: Results in Tables 1 and 3.
> Thanks for your careful reading. Table 1 is correct, but the results of Primphormer and GraphGPS+Transformer in Table 3 were not updated (fortunately, the rankings of the results remain unchanged). We sincerely apologize for this oversight and will ensure that it is corrected in the final version of the manuscript.

- R1.4: Results of the 1-WL test.
> Table 5 presents the expressive results on the BREC benchmark [1].
The key challenge of this benchmark lies in its distinguishing difficulty, as it includes graphs that are indistinguishable by the 1-WL to 4-WL tests. Consequently, the 1-WL test fails to distinguish graphs in the BREC benchmark, as demonstrated in the following table.
>
> | Model | Basic | Regular | Extension | CFI |
> | :---: | :---: | :---: | :---: | :---: |
> | 1-WL | 0 | 0 | 0 | 0 |

- R1.5: The experimental setting regarding runs.
> In our experiments (except Table 5), we reported the average and standard deviation over 10 runs. We will include this description in the final version of the manuscript.

[1] Wang Y, Zhang M. An Empirical Study of Realized GNN Expressiveness. ICML, 2024.
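Since the 1-WL test recurs throughout this thread (Corollary 3.5 and the BREC results above), a minimal color-refinement sketch may be useful as background; it is the standard textbook algorithm, not the paper's implementation. The first pair below consists of two 2-regular graphs on 6 nodes, a classic case 1-WL cannot separate — the same failure mode as on BREC.

```python
def wl_colors(adj, iters=None):
    """1-WL color refinement: iteratively relabel each node by its current
    color plus the multiset of its neighbors' colors, until stable."""
    n = len(adj)
    colors = [0] * n
    for _ in range(iters or n):
        sig = [(colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in range(n)]
        relabel = {s: i for i, s in enumerate(sorted(set(sig)))}
        new = [relabel[s] for s in sig]
        if new == colors:
            break
        colors = new
    return sorted(colors)  # final color multiset (graph-level summary)

def wl_distinguishes(adj1, adj2):
    return wl_colors(adj1) != wl_colors(adj2)

# C6 vs. two disjoint triangles: both 2-regular on 6 nodes, so 1-WL fails.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
assert not wl_distinguishes(c6, two_triangles)

# A path and a star on 4 nodes have different degree multisets: 1-WL succeeds.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
assert wl_distinguishes(path4, star4)
```

Models strictly bounded by this refinement inherit its blind spots, which is why a benchmark built from 1-WL-to-4-WL-indistinguishable pairs yields the all-zero row for 1-WL above.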
Machines and Mathematical Mutations: Using GNNs to Characterize Quiver Mutation Classes
Accept (poster)
Summary: In this paper, the authors study the problem of quiver mutation using GNNs and a GNN explanation tool. The authors identified that GNNs trained on the naive classification task of predicting quiver mutation type are able to learn causal information related to quiver mutation. In particular, they can recover some existing theorems without any prior knowledge.

## Update after rebuttal
I would like to thank the authors for the detailed response to my questions and concerns. Most of my concerns are appropriately addressed by the rebuttal. I would like to keep my original score.

Claims And Evidence: The overall procedure makes sense to me, and using the algorithmic alignment ability of GNNs to solve mathematical problems is interesting. Methods And Evaluation Criteria: 1. I am not really familiar with quiver mutation, but I am wondering if there is any additional concern that the dataset only contains graphs of size 6-10 in training and 11 in testing. Is it possible to evaluate on graphs much larger than those seen in training (like 20 or 30 nodes), and will the conclusions still hold? 2. The analysis only focuses on the types $D$ and $\widetilde{D}$. Is it possible to also extend the analysis to other types and discover other patterns? 3. The paper only examines several particular theorems. I am wondering how we can generalize the results to other problems, especially problems where we do not know whether a GNN is useful. As the final goal is to use ML models to solve mathematical problems or identify theorems unknown to us, I think it would be more interesting and impactful to systematically map a given problem to a corresponding class of ML models that is algorithmically alignable, train the ML model, and leverage explanation tools to discover new knowledge. Theoretical Claims: As I am not familiar with quiver mutation, I cannot assess most of the theoretical claims made in the paper. Experimental Designs Or Analyses: See above.
Supplementary Material: not applicable. Relation To Broader Scientific Literature: The method and pipeline used in the paper may be applicable to other mathematic problems. Essential References Not Discussed: No. Other Strengths And Weaknesses: See above Other Comments Or Suggestions: See above Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 3
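As background for the rebuttals that follow: quiver mutation is commonly stated on the skew-symmetric exchange matrix $B$ ($B_{ij}$ = number of arrows $i \to j$ minus arrows $j \to i$). The sketch below implements the standard Fomin–Zelevinsky mutation rule; it is textbook background for the problem under review, not code from the paper.

```python
def mutate(B, k):
    """Fomin-Zelevinsky mutation of an exchange matrix B at vertex k:
    entries in row/column k flip sign; all others pick up the correction
    term (|b_ik| b_kj + b_ik |b_kj|) / 2 (always an even integer sum)."""
    n = len(B)
    Bp = [row[:] for row in B]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
    return Bp

# Type A_2 quiver with one arrow 1 -> 2; mutation reverses the arrow,
# and mutating twice at the same vertex is the identity (an involution).
B = [[0, 1], [-1, 0]]
assert mutate(B, 0) == [[0, -1], [1, 0]]
assert mutate(mutate(B, 0), 0) == B
```

Two quivers are mutation equivalent when some sequence of such moves transforms one exchange matrix into the other (up to vertex relabeling), which is the classification target the GNN is trained on.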
Rebuttal 1:
Rebuttal: Thank you for your feedback and insightful questions. We're glad to hear you found the topic interesting, and we hope to answer your questions below:

> Is it possible to evaluate on a graph with a size much larger than training (like 20 or 30), and will the conclusions still hold?

The properties of each mutation equivalence class remain valid *regardless of the size*, so from the domain perspective the exact size of the graphs in the test set is not particularly important. From the machine learning perspective, we can certainly evaluate the model on larger graphs. Below we test the same model checkpoint on quivers of up to 20 nodes. Because the number of distinct quivers grows very quickly with size, we can only check on a subsample (we use a mutation depth of 6 and, if necessary, randomly sample 100,000 quivers to avoid out-of-memory errors).

| Nodes | Accuracy |
|:-----:|:--------:|
| 12 | 99.6% |
| 13 | 98.7% |
| 14 | 97.7% |
| 15 | 95.5% |
| 16 | 94.3% |
| 17 | 92.0% |
| 18 | 91.1% |
| 19 | 89.4% |
| 20 | 89.1% |

The model continues to perform well, though performance does begin to degrade. We believe this is because the GNN has a fixed depth of 4, and hence may struggle to capture certain larger non-local substructures that appear in quivers with more nodes (e.g. long cycles). We discuss the theoretical implications of using a message-passing GNN for large quivers in the Response to Reviewer ivH2.

> The analysis only focuses on the types $D$ and $\widetilde{D}$. Is it possible to also extend the analysis to other types and discover other patterns?

It should be possible to extract the characterizations of other finite mutation classes in a similar manner, recovering other results from Henrich [1]. Certain mutation-infinite classes may also admit characterization by certain patterns, though it is not known in general which other mutation classes, if any, admit such a characterization.
If such a characterization does exist, it may be possible to extract these from a machine learning model as well. However, we note that mutation equivalence is in general a very hard problem [2], so discovering patterns for an arbitrary mutation class is likely also difficult in general. > I think how to systematically classify a certain problem to corresponding ML class that is algorithmic alienable, how to train the ML model and how to leverage explanation tool to discover new knowledge would be more interesting and impactful. We agree, and we think this is an interesting direction for future work. GNNs are a natural choice for many problems, but there is emerging evidence for reasoning capabilities in transformers, for example. A more general systematic pipeline, however, will require a more mature suite of interpretability tools and a deeper understanding of algorithmic alignment in different architectures. [1] Henrich, Mutation-classes of diagrams via infinite graphs, Math Nachr., 2011 [2] Soukup, Complexity of quiver mutation equivalence, preprint, 2023
Summary: This work uses GNNs to solve the quiver mutation equivalence problem: whether one quiver can be transformed into another through a sequence of mutations. With explainability techniques, the authors discover criteria for quivers of type $\tilde D$. Moreover, the GNN need not be trained. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. The proposed GNN is a simple adaptation of GIN to the directed multigraph setting. Theoretical Claims: No. All proofs concern properties of quiver mutations, which are primarily part of the background problem setting rather than the method. I do not have the domain knowledge, so I skip them. Experimental Designs Or Analyses: Yes. The training set contains quivers with 6-10 nodes, while the test set contains quivers with 11 nodes only, which avoids the risk of data leakage. Supplementary Material: Yes, I read Appendix B for the design of the GNN and PGExplainer, and Appendix C for the dataset. Relation To Broader Scientific Literature: As the authors claim, it is related to the quiver mutation problem. Essential References Not Discussed: The GNN and explainability methods used in the paper are discussed in the method or Appendix sections, but not introduced in the related work section. Other Strengths And Weaknesses: This work reads more like an experiment report than a paper, as no novel algorithm or task is proposed, and the results are used to verify existing results rather than establish new ones. Other Comments Or Suggestions: In line 157, please refer to Figure 3 for a clearer illustration of quiver types A, D, E, etc. Questions For Authors: An equivalence relation is more similar to a contrastive learning task than a classification task. Have you tried formulating the task as predicting whether two graphs are equivalent rather than predicting the type of one graph? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1:
Rebuttal: We appreciate your feedback and your suggestions regarding presentation. Below we hope to address your concerns and questions.

> This work is more similar to an experiment report rather than a paper, as no novel algorithm or tasks are proposed, and the result is used to verify existing results rather than new results.

Although our work is an application-driven case study rather than a specific novel method, we believe the application of existing machine learning techniques to the quiver mutation equivalence problem is novel and has potential impact in AI for mathematics. While our mutation equivalence results were known, we reiterate that *we discovered Theorem 5.1 independently of prior work*. As we have also discussed in our response to Reviewer ivH2, the use of ML to accelerate scientific discovery is of great interest to the machine learning community. Indeed, it is often pointed to as one of the primary positive impacts that modern machine learning can provide. Understanding how ML tools should be used to enable scientific discovery must be a conversation that involves both domain and ML experts. As such, we believe that works that illuminate this process are very much in line with the role of ICML in the ML community.

> In line 157, please refer to Figure 3 for a clearer illustration of quiver types A, D, E, etc.

Thank you for the suggestion. We will move the reference to Figure 3 from line 142 to line 157.

> An equivalence relation is more similar to a contrastive learning task than a classification task. Have you tried formulating the task as predicting whether two graphs are equivalent rather than predicting the type of one graph?

The contrastive learning task is an interesting question in its own right. However, our goal of extracting structural characterization results from the model motivated our choice to formulate the task as classification.
From an explainability perspective, we are interested in the question "Why does the GNN classify this as Type $D$?" rather than the question "Why does the GNN predict these two quivers are mutation equivalent?" Thus we chose to formulate the machine learning task as a classification problem rather than a contrastive problem.
Summary: This paper shows that a GNN learns the same substructure to classify the quiver mutation class of a quiver as proposed by a classification theorem in quiver theory in mathematics. The authors train a GNN on quivers of different types $A,D,E,\tilde{A},\tilde{D},\tilde{E}$ and use PGExplainer on this trained model to detect the substructure that the model is focusing on in order to classify the mutation class of quivers. Further, in Section 5 they conjecture, and then prove, a theorem about the mutation class of quivers of type $\tilde{D}$. Claims And Evidence: The claims made in the paper are supported by sufficient evidence. Methods And Evaluation Criteria: A holistic viewpoint of the methods seems to make sense for the problem at hand. However, some finer details, such as the choice of experimental design, do not seem to be motivated enough, as pointed out later in the review. Theoretical Claims: Yes, I checked the proof of Theorem 5.1, and as a consequence Lemmas D.1, D.5-D.7 and Corollaries D.2-D.4 in Appendix D. Experimental Designs Or Analyses: Why do you choose to train on A,D,E,$\tilde{A}$,$\tilde{D}$,$\tilde{E}$ when the test set is not going to contain any samples from $\tilde{E}$? Moreover, what is the reason behind training on quivers with lesser nodes and testing on quivers of a fixed node size which is larger than what the model has been trained on? Supplementary Material: Yes. I read all the appendices. Relation To Broader Scientific Literature: I think this work shows that machine-guided research is an avenue that needs to be explored more by the AI for science community. Though the specific task that the authors choose to demonstrate this might seem niche, the idea that a machine recognizes the same substructures in a problem that a classification theorem provides is really powerful. Essential References Not Discussed: I don't think so. Other Strengths And Weaknesses: Strengths: Paper is well-written and well-organized.
Weaknesses: As pointed out earlier, the choice of the design of experiments does not seem to be motivated enough. Other Comments Or Suggestions: You might want to add the fact that the message passing is being done over graphs with constant node features to highlight the fact that the GNN model is truly learning only the structure with little to no help from the node features. Questions For Authors: Did you try generating the graphs with different features? E.g. Node degrees or similar features? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your insightful review. We are encouraged you think our work demonstrates the promise of machine-guided research, and we are glad you found the paper well-organized. Since multiple reviewers had questions about the experimental design, we have also expanded the discussion in the paper to address the most common questions and emphasize that our experimental design was driven by our ultimate goal of generating domain-related insight from the model. We aim to address your questions more specifically below: > Why do you choose to train on A,D,E, $\widetilde{A}$, $\widetilde{D}$, $\widetilde{E}$, when the test set is not going to contain any samples from $\widetilde{E}$? We train the model on entire mutation classes of smaller sizes to ensure the model sees a comprehensive view of each mutation class during training. Our inclusion of types $E$ and $\widetilde{E}$ in the train set primarily serves to increase the difficulty of the training task, since ultimately our analysis focuses on types $D$ and $\widetilde{D}$. (From the domain perspective, there are only finitely many $\widetilde{E}$ of any size, so general classification can be achieved through exhaustive computation, and the class $E$ is no longer mutation-finite for larger sizes, making a combinatorial characterization substantially more difficult.) > Moreover, what is the reason behind training on quivers with lesser nodes and testing on quivers of a fixed node size which is larger than what the model has been trained on? The difference in sizes from train to test is in part a consequence of our desire to train on entire mutation equivalence classes, as we mentioned previously. In addition, from the application perspective any classification result for a mutation class should generalize across sizes. In this manner the difference in graph sizes follows prior work studying size generalization of GNNs (e.g. [1]). 
> You might want to add the fact that the message passing is being done over graphs with constant node features to highlight the fact that the GNN model is truly learning only the structure with little to no help from the node features. Thank you, this is an insightful suggestion! We have added this to the paper. > Did you try generating the graphs with different features? E.g. Node degrees or similar features? We did not generate graphs with node features. As you point out, we used constant node features so that the GNN learned entirely from the graph structure. In particular, our ultimate goal was to provide structural characterizations of mutation classes, so we wished to ensure that the GNN was truly relying on the graph structure to perform classification. [1] Yehudai et al., From Local Structures to Size Generalization in Graph Neural Networks, ICML 2021 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I would like to maintain my score.
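The constant-node-feature point discussed in this exchange can be made concrete with a minimal, hypothetical sketch (our own illustration, not the paper's actual GIN adaptation) of direction-aware colour refinement on a directed multigraph whose nodes all share the same constant feature; all function names and toy graphs below are invented. Because the initial features are constant, the refinement can only exploit arc structure, and separating in- from out-neighbours lets it tell an oriented 3-cycle apart from a triangle with arcs in both directions.

```python
# Hypothetical sketch: direction-aware colour refinement (GIN-style
# aggregation without learned weights) on a directed multigraph whose
# nodes all start with the same constant feature. All names and toy
# graphs are illustrative, not taken from the paper.

def refine(nodes, arcs, h):
    # One message-passing round: each node's new colour combines its old
    # colour with the sorted multisets of in- and out-neighbour colours.
    new_h = {}
    for v in nodes:
        ins = tuple(sorted(h[u] for (u, w) in arcs if w == v))
        outs = tuple(sorted(h[w] for (u, w) in arcs if u == v))
        new_h[v] = (h[v], ins, outs)
    return new_h

def colour_multiset(nodes, arcs, rounds=2):
    h = {v: 0 for v in nodes}  # constant node features: structure only
    for _ in range(rounds):
        h = refine(nodes, arcs, h)
    return sorted(h.values())

oriented = [(0, 1), (1, 2), (2, 0)]              # oriented 3-cycle
doubled = oriented + [(1, 0), (2, 1), (0, 2)]    # arcs in both directions
print(colour_multiset([0, 1, 2], oriented) != colour_multiset([0, 1, 2], doubled))
```

Sorting the neighbour colours before combining them keeps the update permutation-invariant, mirroring the multiset aggregation of GIN while still distinguishing edge directions.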
Summary: The paper uses GNNs to learn quiver mutation equivalence. The results show that GNNs can not only classify these quiver types, but can also characterize particular mutation classes through their latent representations. Claims And Evidence: Most claims are supported by evidence, including theoretical results, experiments, or case analyses. Methods And Evaluation Criteria: As admitted in the paper, the gap between the train and test sets (including the distribution of the number of nodes, and the absence of $\tilde E$ in the test set) does not follow common machine learning settings. All the experiments are carried out on one dataset; a more comprehensive evaluation would be more convincing. Theoretical Claims: I checked the correctness and did not find major flaws. However, a large proportion of the definitions (those in Section 3) and theoretical results (e.g., Theorem 4.3) are known results or trivial extensions, which hinders reading and may diminish the contribution of the paper. While there is a great deal of theoretical work on the expressivity of graph neural networks, I strongly recommend the authors look deeper into the theoretical expressive power of (directed) GNNs to provide theoretical guarantees for GNNs to characterize quiver mutation classes. Experimental Designs Or Analyses: The paper only uses one type of classical GNN architecture, without investigating more expressive, state-of-the-art GNN variants. Likewise, the explanation method only involves PGExplainer, while there are many other GNN explainers. Moreover, the small model size and limited training do not match usual deep learning practice, which makes the results less convincing. Supplementary Material: I reviewed the whole supplementary material. Relation To Broader Scientific Literature: The paper is related to literature in mathematics, topology and graph neural networks.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is an interesting application of GNNs to the mutation equivalence problem. However, the machine learning techniques in this paper are elementary, and the analytical results lack statistical guarantees at larger scales. Other Comments Or Suggestions: This paper is essentially a naïve application of simple machine learning methods to one specific problem, without strong theoretical guarantees or large-scale experimental verification. My opinion is that it is not yet qualified for a top-tier AI/machine learning conference like ICML, and I would suggest submitting the paper to another conference or journal for data analysis or applied mathematics. Questions For Authors: * Can you provide an analysis of the theoretical expressive power of directed GNNs, at least for distinguishing quiver mutation classes? * Can you elaborate on more large-scale/real-world datasets and more state-of-the-art models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review, and we are glad you found the application to quiver mutation interesting. We hope to address your comments below: > As admitted in the paper, the gap between train and test set (including the distribution of the number of nodes, and the absence of $\widetilde{E}$ in the test set) does not follow common machine learning settings. All the experiments are carried on one dataset, while more comprehensive will be more convincing. Regarding the experimental methods, we recognize that the difference in the train and test sets does not follow common machine learning practice. Rather, *our experimental design is motivated by our ultimate goal of recovering classification theorems from the model*. As we discuss in our Response to Reviewer KQ27, the presence of $\widetilde{E}$ in the training set serves to modulate the difficulty of the problem to ensure that the model learns discriminative features for the other classes, particularly $D$ and $\widetilde{D}$, on which we focus our explainability study. Similarly, the difference in the distribution in the number of nodes between training and testing ensures that the GNN learns structural traits which are size-generalizable, as well as allowing us to provide the GNN with the full mutation classes for smaller sizes without contaminating the test data. > The paper only uses one type of classical GNN architecture, without investigating more expressive and more state-of-the-art GNN variants. Simultaneously, the explanation method only involves PGExplainer, while there are also many other GNN explainers. Our intention was not to investigate which GNN architecture would be the most effective for classifying quiver mutation classes, nor was it to compare between different explainability methods. Rather, *we provide a case study that illustrates how a domain expert using off-the-shelf components can guide their own research with machine learning*. 
This application-driven approach differs from works whose goal is to introduce novel architectures or algorithms to be adopted by others. In such settings we agree that it is important to show that the architecture will perform well beyond the initial dataset it was trained for. In our work, however, once the model offers sufficient insight to conjecture and prove a statement about the quiver mutation classes of interest, its performance on additional data ceases to be important. Ultimately, while we admit the individual methods are not novel in and of themselves, we believe our work presents a novel application of existing methods to an interesting problem in combinatorics. More broadly, we believe the central idea that machine learning can guide domain research is absolutely of interest to the ICML audience. > Can you provide analysis on theoretical expressive power of directed GNNs, or at least for distinguishing quiver mutation classes? The question of GNN expressive power (equivalently, the WL test) and quiver mutation is interesting, and we have expanded this discussion in the paper. Directed GNNs are known to be strictly more expressive than undirected 1-WL [1], and we thank you for pointing out this oversight in our discussion and references. In the context of quiver mutation, there are some relevant local substructures which an undirected GNN clearly cannot distinguish. For example, a directed triangle should be of Type $A_3$ while an undirected triangle is of Type $\tilde{A}_2$. In general, the directed WL test is sufficient to recognize type $A$ quivers, as well as certain subtypes of $D$ and $\widetilde{D}$ quivers (namely Types I, II and the corresponding paired types). However, one of the WL test's limitations is its inability to count larger cycles, such as those that appear in Type $D$-IV and $\widetilde{D}$-VI. > Can you elaborate on more large-scale/real-world datasets and more state-of-the-art models? 
GNNs' ability to generalize across sizes and their inductive bias towards local structures also motivates our choice of architecture. While e.g. graph transformers also provide promising reasoning capabilities on graph tasks, message-passing remains the state of the art for recognizing many of the local substructures which are relevant to this problem [2]. However, as we note above, message-passing fails to recognize larger-scale structures in graphs. Our response to Reviewer E1Ro contains an empirical investigation on larger graphs (up to $n = 20$ nodes), revealing that our GNN generalizes well but not perfectly on larger graphs. [1] Beddar-Wiesing et al., On the Extension of the Weisfeiler-Lehman Hierarchy by WL Tests for Arbitrary Graphs, MLG@ECMLPKDD 2022 [2] Sanford et al., Understanding Transformer Reasoning Capabilities via Graph Algorithms, NeurIPS 2024 --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. My concerns are basically addressed. I strongly encourage the authors to include these new results, and acknowledge the limitations as above in their revisions. I raised my scores accordingly.
A Variational Perspective on Generative Protein Fitness Optimization
Accept (poster)
Summary: The paper focuses on the problem of protein fitness optimization: finding new variants with enhanced fitness, in the face of challenges such as a vast search space and discrete protein sequences. The introduced Variational Latent Generative Protein Optimization (VLGPO) is a variational framework that enables posterior sampling of protein sequences conditioned on desired fitness levels by combining a learned prior with a fitness predictor. The authors first train a VAE on the sequence space to compress sequences into continuous latent representations, and then train a flow matching model to learn a generative prior for the latent space. The fitness predictor is used for classifier guidance of the flow matching generation. VLGPO demonstrates strong performance on the AAV and GFP benchmarks in medium- and high-difficulty tasks with limited data, achieving clear fitness improvements over baselines. The method has limitations in hyperparameter tuning, especially for challenging tasks like GFP (hard), and relies on in-silico evaluation with a trained oracle as ground truth, where experimental validation could provide more insights into its applicability. Claims And Evidence: All contribution points are supported in the paper, except for some concerns regarding the evaluation. See below. Methods And Evaluation Criteria: The proposed method suffers from a lack of clarity and fails to demonstrate sufficient novelty. 1. The predictors $g_{\phi}$, $g_{\tilde{\phi}}$, and the oracle $g_{\psi}$ are sourced directly from previous works. However, the paper offers no insights into their design, their training on the benchmark, or their accuracy. This absence of information leaves readers uncertain about the reliability of these predictors. 2. When training a flow matching model, the parameterization of the neural network and the choice of the time-step sampling distribution significantly impact the final performance. These factors should be carefully considered as hyperparameters to be tuned.
Unfortunately, the paper neglects to address this, leaving a gap in understanding how the model was tuned for best performance. 3. Most of the models used in the evaluation are based on CNNs. Nowadays, transformer-based models, such as ESM, have shown superiority over CNNs. This raises doubts about the reliability of the evaluation. Using outdated models may lead to an inaccurate assessment of the proposed method's performance compared to the latest advancements in the field. 4. Regarding classifier guidance, it remains unclear why the objective is to optimize towards a specific fitness value rather than to maximize the fitness value. In real-world scenarios, it is entirely possible to achieve fitness values higher than the highest ones present in the dataset. 5. The idea of applying flow matching in a latent space and guiding generation with a classifier has been proposed many times in different domains. This work seems to directly apply this method to the protein sequence design task. Theoretical Claims: N/A Experimental Designs Or Analyses: I have serious doubts about the soundness of the experimental design and the validity of the results presented. 1. All evaluations are conducted on relatively small benchmarks, containing fewer than 3,500 mutants. This narrow scope casts significant doubt on the reliability of the evaluation. Given that it only represents a small fraction of the vast possible mutation landscape, the generalizability of the findings is severely compromised. Moreover, the predictors and oracles, trained and evaluated on these limited datasets, are also suspect. In contrast, many recent datasets, such as those in [1] and [2], have been released with 10 to 100 times more mutants, highlighting the inadequacy of the current benchmark size. 2. The dataset is split according to fitness values, with lower-value data used for training the fitness predictor and higher-value data as the generation objective.
However, in real-world protein engineering, the typical approach is to design single-mutants first, followed by double-mutants and then triple-mutants. Therefore, a more appropriate split should be based on mutation depth. Additionally, accurately modeling the effects of multiple mutations remains a challenging problem, which poses a significant challenge to the accurate training of oracles and predictors. 3. There is a lack of clarity regarding the dataset used to train the flow matching and VAE models. It is not specified whether they are trained on $S^*$ or if the training data contains all mutants. 4. The achievement of high fitness values comes at a substantial cost of sacrificing diversity and novelty. This trade-off is highly undesirable. 5. How the authors perform the grid search for hyperparameters such as $\alpha_t$ and $J$ is unclear. It appears that the search is based on the final reported metrics. This raises concerns about the practical applicability of the method in real-world scenarios, as the hyperparameter tuning strategy may not be robust enough for different contexts. 6. The authors should incorporate more realistic settings and additional baselines, similar to those presented in [1] and [2]. [1] Notin, Pascal, et al. "Proteingym: Large-scale benchmarks for protein fitness prediction and design." Advances in Neural Information Processing Systems 36 (2023): 64331 - 64379. [2] Ouyang-Zhang, Jeffrey, et al. "Predicting a protein's stability under a million mutations." Advances in Neural Information Processing Systems 36 (2023): 76229 - 76247. Supplementary Material: I have read all the supplementary material sections. Relation To Broader Scientific Literature: See the summary, method and experiment parts. Essential References Not Discussed: [1] Notin, Pascal, et al. "Proteingym: Large-scale benchmarks for protein fitness prediction and design." Advances in Neural Information Processing Systems 36 (2023): 64331 - 64379. 
[2] Ouyang-Zhang, Jeffrey, et al. "Predicting a protein's stability under a million mutations." Advances in Neural Information Processing Systems 36 (2023): 76229 - 76247. Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 2
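The mutation-depth split suggested in point 2 of the experimental-design comments could be sketched as follows; the wild-type string, variants, and function names are toy illustrations, not real GFP or AAV data:

```python
# Toy sketch of a mutation-depth split: bucket variants by Hamming
# distance to the wild type, so a model is trained on shallow mutants
# (e.g. single/double) and evaluated on deeper ones. All data invented.
def mutation_depth(seq, wild_type):
    assert len(seq) == len(wild_type)
    return sum(a != b for a, b in zip(seq, wild_type))

def split_by_depth(variants, wild_type, max_train_depth=2):
    train = [s for s in variants if mutation_depth(s, wild_type) <= max_train_depth]
    test = [s for s in variants if mutation_depth(s, wild_type) > max_train_depth]
    return train, test

wt = "MKTAYIA"
variants = ["MKTAYIA", "MKTAYIV", "MKTGYIV", "AKTGYIV", "AKAGYIV"]
train, test = split_by_depth(variants, wt)
print(len(train), len(test))  # → 3 2
```

Such a split stresses the predictor's ability to extrapolate to higher-order mutation effects, which the review argues is the realistic engineering regime.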
Rebuttal 1: Rebuttal: We thank the reviewer for providing feedback on our manuscript and appreciate the comments. We will also discuss the related work that you mentioned. - **The predictors and the oracle are sourced directly from previous works.** Indeed, we take these from recent work from ICLR'24 [1]. As mentioned to reviewer TQyg, we will further elaborate on this. - **Parameterization of the NN and time step sampling choices for flow matching.** Our experiments have shown the method to be very robust to the choice of time steps used for inference, as we use a flow-based approach that is much less sensitive to hyperparameter settings than standard time-discrete diffusion models. The architecture at hand is rather generic and is used as such for many other related tasks. The time steps are sampled uniformly between 0 and 1 in training, which is the standard for flow-based approaches, so we saw no need to change it. - **Nowadays, transformer-based models, such as ESM, have shown superiority over CNNs.** Transformer-based models are standard in protein design; however, we show that guided flow matching performs well even with simpler architectures (although our network does include attention layers). More expressive models might yield further improvements. Note that previous work [1,2] has discussed that CNN-based models are competitive in limited-data settings. As noted in Sec. 5, ESM embeddings are highly expressive, and we plan to explore their potential in future work. - **Regarding classifier guidance, it remains unclear why the objective is to optimize towards a specific fitness value rather than maximizing the fitness value.** Maximizing the fitness value is our ultimate goal; however, we intended to demonstrate the expressiveness of our approach by steering sequences toward specific fitness values. In practice, we compared both formulations, which performed similarly.
Given that our variational framework allows for fitness maximization in the likelihood term, we will explicitly incorporate this into Eq (3). If of interest, we can provide the results in the supplementary. - **The idea of applying flow matching on the latent space and guiding generation with a classifier has been proposed many times in different domains.** We agree that classifier guidance is widely used in generative AI. However, it cannot be trivially applied to discrete sequences, and its application in latent spaces for protein fitness optimization has not been explored before, to the best of our knowledge. - **All evaluations are conducted on relatively small benchmarks which casts significant doubt on the reliability of the evaluation.** We respectfully disagree with this concern. Our evaluation protocol and benchmarks align directly with established standards from previously published work [1]. Such dataset sizes reflect realistic scenarios. Moreover, the in-silico oracle was trained on the complete DMS datasets with 56,086 mutants for GFP and 44,156 for AAV. - **The dataset is split according to fitness values... a more appropriate split should be based on mutation depth**. Both tasks *medium* and *hard* indeed are based on fitness values as well as mutation gaps. This is discussed in [1] (independent original work) and in our manuscript in Sec. 4.1 and in Tab. 6. - **There is a lack of clarity regarding the dataset used to train the flow matching and VAE models.** For each of the four tasks, the flow matching and VAE models are trained exclusively on their respective limited datasets to emulate a realistic scenario. We will explicitly clarify this in the revised manuscript. - **The achievement of high fitness values comes at a substantial cost of sacrificing diversity and novelty.** We agree that there is an inherent trade-off between achieving high fitness values and maintaining diversity, which is also reflected in our approach. 
Given that our method achieves a substantial improvement in fitness over alternative approaches, we believe this benefit outweighs the marginal drop in diversity relative to competing methods. - **How the authors perform the grid search for hyperparameters is unclear.** The plots shown in the manuscript are generated by the oracle; they serve as an ablation and are not the source of our hyperparameters, similar to [1]. We agree that this was not explained well enough. For our parameter estimation we combine grid searches over the diversity as well as the fitness given only by the predictor. We will add the diversity and predictor fitness plots (which resemble Fig. 5) in the appendix. Ultimately it is a heuristic tradeoff between fitness and diversity, which we will further elaborate upon. - **The authors should incorporate more realistic settings and additional baselines.** See response to reviewer jm6F, last point. [1] Kirjner, Andrew, et al. "Improving protein optimization with smoothed fitness landscapes." [2] Dallago, Christian, et al. "FLIP: Benchmark tasks in fitness landscape inference for proteins." --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response. My major concern is around the evaluation settings. I totally understand why the authors chose a well-established benchmark for the evaluation (AAV and GFP). This is mostly because the two proteins are well studied, have abundant data, and people have already developed very accurate function predictors, as the authors said. However, there are many more interesting and challenging targets for protein engineering in the real world, for which developing accurate predictors is challenging due to the scarcity of data or the difficulty of modeling the proteins, as shown in ProteinGym. Therefore, I'm really concerned about whether the method still works under these realistic and challenging settings.
That said, please note that the reviewer's critique is directed at the evaluation protocol of the specific task, not at this specific paper. As I mentioned in the review - "I'm actively working on the protein domains, but am not familiar with the specific task" - I'm open to hearing more thoughts about how to make the evaluation more robust and make real use of the methods in this domain. --- Reply to Comment 1.1.1: Comment: We want to thank the reviewer for clarifying their response and for raising the concern about the evaluation protocol in the context of protein fitness optimization. Since our focus was on exploring a method that is novel within this context, we believe it is important to first rely on established benchmarks to compare our approach with competing methods. Our goal in this work is to establish a strong foundation using existing benchmarks before extending the approach to more complex or under-explored proteins. The fact that VLGPO achieves robust performance even on the more difficult GFP (hard) task with only a few sequences - where competing methods often fail to generate mutants with better fitness - is a strong indicator that it is worth exploring further. This highlights that the idea of using classifier guidance in a continuous latent space is very promising. We acknowledge the sentiment and the point raised by the reviewer, and we agree that there is a need to extend the evaluation to other, potentially more challenging or diverse proteins from FLIP or ProteinGym. We also believe that in-silico evaluations such as the work in [1], which assesses the utility of computational filters, together with metrics like folding confidence and structural stability (to name a few), could offer a more complete perspective beyond fitness evaluation. Additionally, we think that future research could adopt iterative optimization protocols that better simulate realistic settings and incorporate structural or experimental validation.
However, we believe that developing and implementing such expansions is essential, but goes beyond the scope of our current work, as it constitutes a significant research effort on its own. We hope this clarifies our reasoning and demonstrates the potential of our approach, especially as a step toward more realistic protein engineering scenarios. [1] Johnson, Sean R., et al. "Computational scoring and experimental evaluation of enzymes generated by neural networks." *Nature biotechnology* (2024): 1-10.
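As a self-contained illustration of the classifier-guidance idea discussed in this thread, the toy sketch below runs guided Euler integration in a continuous latent space, $z \leftarrow z + dt \,(v_\theta(z, t) + \alpha \,\nabla_z \log p(y^* \mid z))$. The learned velocity field and fitness predictor are replaced by closed-form stand-ins: `v_theta`, `grad_log_likelihood`, `z_star`, and `alpha` are all our inventions, not VLGPO's actual components.

```python
# Toy sketch of classifier-guided Euler integration in a latent space.
# The learned flow-matching velocity field and the fitness predictor
# are replaced by simple closed-form stand-ins (invented for this demo).
import numpy as np

rng = np.random.default_rng(0)
z_star = np.array([2.0, -1.0])  # latent region the (toy) predictor favours

def v_theta(z, t):
    # Stand-in for the learned flow-matching velocity field.
    return np.zeros_like(z)

def grad_log_likelihood(z):
    # Gaussian likelihood centred at z_star: log p(y*|z) ~ -||z - z_star||^2,
    # so the guidance gradient points towards z_star.
    return -2.0 * (z - z_star)

def guided_sample(z0, steps=100, alpha=0.5):
    z, dt = z0.copy(), 1.0 / steps
    for k in range(steps):
        t = k * dt
        z = z + dt * (v_theta(z, t) + alpha * grad_log_likelihood(z))
    return z  # in VLGPO this latent would be decoded back to a sequence

z0 = rng.standard_normal(2)
zT = guided_sample(z0)
print(np.linalg.norm(zT - z_star) < np.linalg.norm(z0 - z_star))  # → True
```

The guidance strength `alpha` plays the role of the tunable guidance weight: the larger it is, the more the trajectory is pulled towards high predicted fitness at the cost of staying close to the unconditional prior flow.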
Summary: The paper proposes Variational Latent Generative Protein Optimization (VLGPO), a method for protein sequence optimization by training a VAE combined with a learned flow matching prior over mutations. A fitness predictor is used for guidance, and the method is evaluated on commonly used database lookups including the AAV and GFP datasets, as well as diversity and novelty metrics. Hyperparameter optimization is identified as a challenge for the approach, which is sensitive to these choices. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: The paper is well-situated in terms of the broader literature, and detailed comparisons are made to prior work Essential References Not Discussed: The authors may be interested in this very recent in silico benchmark, which introduces synthetic test functions that can be used for benchmarking black-box optimization methods like VLGPO. Stanton, S., Alberstein, R., Frey, N., Watkins, A., & Cho, K. (2024). Closed-form test functions for biophysical sequence optimization algorithms. arXiv preprint arXiv:2407.00236. Other Strengths And Weaknesses: The paper is clearly written, addresses an important problem in biology, contains relevant benchmarks and baselines, and introduces a novel approach. The biggest weakness is the reliance on GFP and AAV datasets as the benchmarks, which is a general limitation that affects the field. Have the authors considered other FLIP tasks, or other related tasks? Other Comments Or Suggestions: See above Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their careful evaluation of our manuscript and for the positive assessment of our work. Below, we address each point individually: - **The authors may be interested in this very recent in silico benchmark, which introduces synthetic test functions that can be used for benchmarking black-box optimization methods like VLGPO.** Thank you for directing us toward this interesting recent work. We will examine it in more detail and consider its suitability for future applications, as synthetic test functions appear very promising for further assessing the generated sequences. - **The biggest weakness is the reliance on GFP and AAV datasets as the benchmarks, which is a general limitation that affects the field. Have the authors considered other FLIP tasks, or other related tasks?** This is indeed an important point. While our method could, in principle, extend to other FLIP or ProteinGym tasks, we chose the GFP and AAV datasets as they currently are well-established benchmarks to allow for fair comparisons against our proposed approach. The main focus here was to investigate performance in limited initial datasets, reflecting common practical scenarios. Nevertheless, exploring additional tasks beyond GFP and AAV is an exciting avenue for future research, and we fully agree on the importance of designing and adopting broader benchmarks in this context. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and their reply that extending benchmarking beyond the considered tasks is outside the scope of the current work. I will raise my score and recommend acceptance. --- Reply to Comment 1.1.1: Comment: We thank the reviewer and appreciate the raised score; we will revise the manuscript to incorporate all the initial feedback.
Summary: This paper presents a novel protein fitness optimization model called Variational Latent Generative Protein Optimization (VLGPO). VLGPO uses flow matching to perform fitness optimization in the continuous latent space of the generative model, allowing efficient exploration of the fitness landscape. Guided by fitness predictors, VLGPO can effectively optimize high-fitness proteins. Claims And Evidence: The authors demonstrate the performance of VLGPO on two protein datasets, GFP and AAV, at medium and hard difficulty. VLGPO showed superior fitness optimization ability compared to previous methods. Methods And Evaluation Criteria: The paper clearly presents the algorithmic and training details of flow matching and classifier guidance. However, since the evaluation of the methods relies on the predictor $g_{\phi}$, it would be helpful to provide details on the accuracy of $g_{\phi}$ as an in-silico oracle. Theoretical Claims: NA Experimental Designs Or Analyses: The authors follow the experimental design in [Kirjner et al., 24], demonstrating that VLGPO can optimize sequences for high fitness as well as high diversity and novelty. In Tables 2 and 3, the variability in diversity and novelty among different methods is significantly greater in GFP than in AAV, which is another observation worth discussing. Supplementary Material: The supplementary material provides useful definitions and results for understanding the paper. Relation To Broader Scientific Literature: Previous works mainly focus on directly optimizing protein sequences. This paper optimizes protein sequences in a continuous latent space by using flow matching, providing novel insights into exploring the protein fitness landscape. Essential References Not Discussed: NA Other Strengths And Weaknesses: It would be helpful if the authors could also provide more analysis and statistics, such as diversity at different fitness levels, for the GFP and AAV datasets.
Other Comments Or Suggestions: NA Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their valuable comments and feedback. Below, we address each point individually: - **Since the evaluation of the methods relies on the predictor, it would be helpful to provide details on the accuracy of $g_{\phi}$ as in-silico oracle.** Thank you for highlighting this important aspect. We reused the oracle and predictor from [1] entirely out-of-the-box, without any modifications, aiming to present a method in which alternative predictors or oracles could easily be used as drop-in replacements. We will comment on this in more detail in the revised version. However, since [1] does not explicitly report the final performance of the in-silico oracle, we have now computed the Mean Squared Error (MSE) on a subset of 512 randomly selected samples from the ground truth dataset $\mathcal{S}^*$. The oracle’s predictions closely follow the target fitness values, resulting in MSE values of 0.012240 for GFP and 0.002758 for AAV. - **In Tables 2 and 3, the variability in diversity and novelty among different methods is significantly greater in GFP than in AAV, which is another observation worth discussing.** We appreciate the reviewer’s insightful observation. This aligns well with our findings, indicating that the GFP setting indeed appears more challenging compared to AAV. This discrepancy may arise due to GFP sequences being longer and thus representing sparser and higher-dimensional search spaces. We will discuss this observation in the revised version of the manuscript. - **It would be helpful if the author could also provide more analysis and statistics, such as diversity at different fitness, of the GFP and AAV dataset.** Thank you for this valuable suggestion. The diversity within the top-performing (99th percentile) sequences of the entire dataset $\mathcal{S}^*$ is 4.73 for GFP and 5.23 for AAV, as briefly discussed in Section 4.3. 
In contrast, the diversity of sequences in the four considered tasks is notably higher: for GFP (medium) it is 14.5, for GFP (hard) 16.3, for AAV (medium) 15.9, and for AAV (hard) 18.4. The training datasets naturally contain more diverse sequences, as they include those with lower fitness levels. The top-performing sequences, in turn, exhibit lower diversity, aligning closely with the diversity observed in Tables 2 and 3. We will highlight this observation in the revised manuscript. [1] Kirjner, Andrew, et al. "Improving protein optimization with smoothed fitness landscapes." *ICLR,* 2024.
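For concreteness, here is a minimal sketch of how such a diversity score could be computed, assuming diversity is measured as the mean pairwise Hamming distance between equal-length sequences; the rebuttal does not spell out the exact definition, so this metric and the function names are illustrative assumptions:

```python
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def diversity(seqs: list[str]) -> float:
    """Mean pairwise Hamming distance over all pairs of sequences."""
    pairs = list(combinations(seqs, 2))
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)

# Toy example with three length-5 "sequences":
print(diversity(["AAAAA", "AAAAB", "AABBB"]))  # pairwise distances {1, 3, 2}, mean 2.0
```

Under this definition, a more diverse sample set (e.g. a full training dataset including low-fitness variants) would yield a larger score than a narrow set of top-performing sequences.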
Summary: This paper proposes a new in-silico method for generating novel high-fitness protein sequences. It first embeds sequences in a lower-dimensional space via a VAE, then fits a generative model to the embeddings by flow-matching. The sampling is guided by a pre-trained fitness predictor and manifold-constrained gradients. Empirical results show improved sampling of high-fitness variants, even in data-scarce settings. Claims And Evidence: * This paper argues that, for protein fitness optimisation, embedding sequences in a continuous latent space is more effective than token-based sequence representation. This claim is partially supported by the comparison with GWG, GGS, and gg-dWJS, which rely on one-hot encoding. * This paper argues that using a fitness predictor is an effective approach to guide the optimisation towards high-fitness regions, particularly in limited data regimes. This claim is supported by the experimental study, notably by the comparison with a conditional diffusion model trained from scratch (on the labeled sequence-fitness pairs) without the fitness predictor guidance. The supplementary material reports the different evaluation metrics calculated on the samples obtained from the unconditional diffusion model, for reference. Methods And Evaluation Criteria: * The method is sound, well-motivated and each component is clearly described in Section 3. * The evaluation is robust. Employing two protein optimisation benchmarks of varying complexity, 9 baseline approaches, and 3 different metrics, it clearly demonstrates the advantages of the proposed approach. * However, as the authors note among the potential limitations, the assessment of the fitness relies on a neural fitness predictor which is probably less reliable than wet-lab validation. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design and analyses are solid: * Results are averaged over five seeds. 
* A fair comparison with other baselines is achieved through grid search over hyperparameters and using the same networks when possible. * The ablation study (comparing guidance with and without manifold-constrained gradients, unconditional, and conditioning from scratch) is conducted for both cases, with and without graph-based smoothing. Supplementary Material: I've reviewed the supplementary material. Relation To Broader Scientific Literature: The method is a combination of ideas from the latest generative modelling literature (flow-matching in VAE latent space and guidance with manifold constrained gradient) applied to the real-world use case of protein fitness optimisation. Essential References Not Discussed: The related work is comprehensive. Other Strengths And Weaknesses: **Strengths** - The clarity of the paper. - The robust experimental design. - The novel combination of ideas applied to the real-world use case of protein fitness optimisation. **Weaknesses** - The use of the fitness predictor may overestimate the performance gap with some baseline approaches. Other Comments Or Suggestions: * Can you clarify the connections between the proposed approach and variational inference? Questions For Authors: * Can you comment on the poor performance of GFN-AL? * Can you contrast your approach with classifier-free guidance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive feedback and the insightful comments, which help to improve the quality of the manuscript. We fully agree with the reviewer that assessing fitness with a computational model is less reliable than direct wet-lab validation. While experimental validation is the final goal, in-silico evaluation is an important step along the way. We also appreciate the comment regarding the potential overestimation of the performance gap compared to baseline methods. This is indeed a challenge for the community, but hard to quantify precisely. Below, we address the reviewer's more specific questions directly: - **Can you clarify the connections between the proposed approach and variational inference?** Thank you for this point. VLGPO uses variational inference through the initial embedding step to obtain continuous latent embeddings of the discrete protein sequence variants - the autoencoder is trained with variational inference. However, our approach extends this by introducing additional generative modeling components, namely flow matching and classifier-guided sampling, to further refine and control the generation process. We will clarify this connection explicitly in the revised manuscript. - **Can you comment on the poor performance of GFN-AL?** To ensure a fair evaluation, the results for all methods are taken from a comprehensive comparison done in recently published work [1]. Hence, we can only speculate about the poor performance. GFN-AL might struggle in limited-data scenarios due to sparse reward signals from the few observed mutants, which could potentially limit effective exploration. - **Can you contrast your approach with classifier-free guidance?** Thank you for this question — we find it very interesting, as it highlights the advantages of using a separate classifier for guidance, which we have addressed in Fig. 3. 
There, we compare our method to a conditional model that directly learns the posterior $p(x|y)$. In that setup, we additionally tested classifier-free guidance, but it did not make a noticeable difference. As shown in both the figure and the ablation tables, classifier guidance allows for more effective steering of the generation process: in challenging scenarios like GFP (hard), classifier-free guidance struggles to produce higher-fitness sequences, while classifier guidance successfully targets high-reward regions. [1] Kirjner, Andrew, et al. "Improving protein optimization with smoothed fitness landscapes." *ICLR,* 2024.
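The distinction between learning $p(x|y)$ directly and steering an unconditional model with a separate predictor can be illustrated with a toy Euler step of a guided flow ODE in latent space. This is only an illustrative sketch, not the paper's implementation; `velocity`, `fitness_grad`, and `scale` are hypothetical names, and the analytic gradient stands in for a learned, differentiable fitness predictor:

```python
import numpy as np

def guided_euler_step(z, t, dt, velocity, fitness_grad, scale=1.0):
    """One Euler step of a latent flow ODE, nudged along the gradient of a
    fitness predictor (classifier guidance); scale sets guidance strength."""
    return z + dt * (velocity(z, t) + scale * fitness_grad(z))

# Toy check: a zero velocity field plus a fitness peak at the origin
# (fitness(z) = -||z||^2, so grad = -2z) pulls samples toward zero.
velocity = lambda z, t: np.zeros_like(z)
fitness_grad = lambda z: -2.0 * z
z = np.ones((4, 8))
z_next = guided_euler_step(z, t=0.0, dt=0.1, velocity=velocity, fitness_grad=fitness_grad)
# each entry moves from 1.0 to 0.8, i.e. toward the fitness optimum
```

Classifier-free guidance would instead bake the conditioning into the velocity field itself; the separate-predictor form above is what allows alternative fitness models to be swapped in without retraining the generative model, as the rebuttal notes.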
Maximum Update Parametrization and Zero-Shot Hyperparameter Transfer for Fourier Neural Operators
Accept (poster)
Summary: This paper applies the Maximum Update Parametrization (µP) framework to Fourier Neural Operators (FNO), demonstrating that a single set of hyperparameters can effectively work for both large-scale and small-scale FNO models. Claims And Evidence: While I understand the authors' claims, the significance of this research question remains unclear. Scaling up transformer-like models appears to be a more straightforward alternative. Methods And Evaluation Criteria: The authors apply a framework for transferring hyperparameters from small to large FNO models. Theoretical Claims: I have not verified the details of the theoretical proofs. Experimental Designs Or Analyses: The datasets used in this study appear overly simplistic. The necessity for FNO models with billions of parameters in these tasks is questionable. More challenging datasets, such as those involving multiple mixed equations for training PDE foundation models [1,2], would provide more meaningful scenarios for investigating hyperparameter tuning in large-scale models. [1] Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [2] PDEformer: Towards a Foundation Model for One-Dimensional Partial Differential Equations Supplementary Material: I have reviewed the theorems in the appendix but have not verified their proofs. Relation To Broader Scientific Literature: This work may contribute to hyperparameter tuning research. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The rationale for scaling up FNO is unclear, as FNO itself does not seem particularly suitable for scaling. Current transformer-based models demonstrate superior performance and stronger scaling capabilities compared to FNO, raising questions about the significance of scaling up FNO. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments and constructive suggestions. Let us respond to your concerns one-by-one below. **Regarding the significance of our work.** Our main contributions can be summarized as follows: * **On the theory side:** We are the first to derive the Maximum Update Parametrization ($\mu$P) for FNOs, identifying the unique scaling rate for kernel integral parameters that leads to $\mu$P. The result is novel in two aspects: 1. We introduce new technical tools for analyzing neural network parametrization, going beyond the LLN and CLT arguments commonly used in the literature. 2. The $\Theta\left(\frac{1}{\sqrt{d\log K}}\right)$ scaling rate is drastically different from existing results on all the other model components (e.g., $\Theta(m^{-1})$ for width scaling and $\Theta(L^{-1/2})$ for depth scaling). Directly applying existing results does not work for FNOs. * **On the algorithm side:** Based on our derived scaling rates in $\mu$P, we introduce $\mu$Transfer-FNO for zero-shot hyper-parameter transfer in FNO. * **On the experiment side:** We validate our theoretical claims on various PDEs, different training algorithms, and different hyper-parameters. The experiments consistently show the robustness of the theory. We also demonstrate that $\mu$Transfer-FNO can significantly reduce computational costs while maintaining or improving accuracy in practice. In particular, we believe that scaling up FNO is meaningful. Conceptually, FNO is designed to model continuous functions and enjoys the resolution-invariant property which standard Transformers do not have, making it a strong candidate for modeling PDE data. Practically, FNO is popularly used in recent research on pretraining and foundation models [1-4]. Therefore, we expect that our findings are relevant and interesting to the community. We also fully agree with the reviewer that Transformers are a powerful backbone for building foundation models. 
Developing techniques to scale up Transformers is also an important research direction, but it is orthogonal to the focus of this work. We appreciate your references and will add discussions on these related works in the revision. [1] Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior, NeurIPS 2023 [2] Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning, NeurIPS 2024 [3] Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs, NeurIPS 2024 [4] UPS: Efficiently Building Foundation Models for PDE Solving via Cross-Modal Adaptation, TMLR 2024/11 **Regarding the dataset choice of our work.** We first point out that the primary focus of our experiments is to validate the robustness and generality of our theoretical claims, rather than benchmarking on the most challenging PDE datasets. The datasets we used are representative and commonly used in existing research. We also agree with the reviewer that mixed equation training is a common setting in PDE foundation model training. Following your suggestion, we conducted additional experiments on a mixed equation dataset involving Burgers' Equation, Advection Equation, and Reaction-Diffusion Equation using the data curated in [5]. Similar to the experimental setup in our submission, we first train FNOs with $K=6$ using different learning rates, and then train FNOs with $K=24$ under Standard Parametrization/$\mu$Transfer-FNO. 
We present the loss for each run below:

| $\log_{10}$ (learning rate) | -2.0 | -2.2 | -2.4 | -2.6 | -2.8 | -3.0 | -3.2 | -3.4 | -3.6 |
| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| $K=6$ | 0.08314 | 0.04933 | 0.04665 | 0.03985 | 0.03822 | **0.03576** | 0.05366 | 0.05672 | 0.06318 |
| $K=24$ (Standard Parametrization) | 0.98508 | 0.03785 | 0.03904 | 0.03200 | 0.03041 | 0.02852 | **0.02616** | 0.02989 | 0.03191 |
| $K=24$ ($\mu$Transfer-FNO, ours) | 0.03836 | 0.03517 | 0.03391 | 0.03192 | 0.02842 | **0.02599** | 0.02728 | 0.02925 | 0.03406 |

The results are consistent with our original findings: On the mixed equation dataset, the optimal hyper-parameter shifts when the model size scales up under standard parametrization. In contrast, $\mu$Transfer-FNO stabilizes the optimal configuration, with the lowest loss consistently obtained at a learning rate of $10^{-3.0}$, enabling zero-shot optimal hyper-parameter transfer from small models to large ones. We hope these additional results strengthen our work. Thank you for the constructive comment! [5] PDEBench: An Extensive Benchmark for Scientific Machine Learning, NeurIPS 2022 We sincerely hope that our responses address your concerns and that you reevaluate our work based on the additional information. Thank you again for your time!
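To make the transfer rule implied by these rates concrete, here is a minimal sketch of how tuned hyper-parameters could be rescaled when moving from a proxy to a target number of Fourier modes. The function names are illustrative, and the absolute constants in the $\Theta(\cdot)$ rates are not fixed by the theory, so only ratios across $K$ are meaningful:

```python
import math

def init_var_scale(d, K):
    """Theta(1 / (d * log K)) rate for kernel-integral initialization variance."""
    return 1.0 / (d * math.log(K))

def lr_scale(d, K):
    """Theta(1 / sqrt(d * log K)) rate for the kernel-integral learning rate."""
    return 1.0 / math.sqrt(d * math.log(K))

def transfer(base_value, scale_fn, d, k_proxy, k_target):
    """Rescale a hyperparameter tuned at K = k_proxy to K = k_target by the
    ratio of scale factors; note that d cancels when only K changes."""
    return base_value * scale_fn(d, k_target) / scale_fn(d, k_proxy)

# A learning rate tuned on a K = 6 proxy, transferred to K = 24 for a 2-D PDE
# (the result is slightly smaller than the base rate of 1e-3):
lr_24 = transfer(1e-3, lr_scale, d=2, k_proxy=6, k_target=24)
```

All other hyper-parameters (batch size, Adam's $\beta_2$, etc.) are transferred unchanged; only the kernel-integral initialization variance and learning rate are rescaled.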
Summary: This paper introduces μTransfer-FNO, a zero-shot hyperparameter transfer method for Fourier Neural Operators (FNOs). The core idea is to derive a Maximum Update Parametrization (μP) for FNOs that allows hyperparameters tuned on small FNOs to be directly transferred to larger FNOs without additional tuning, even for models with billions of parameters. The paper theoretically derives the μP for FNOs under the scaling of Fourier modes $K$, finding that initialization variances of kernel integral parameters should be scaled by $O\Big(\dfrac{1}{d \cdot \log K}\Big)$ and learning rates by $O\Big(\dfrac{1}{\sqrt{d \cdot \log K}}\Big)$, where $d$ is the PDE dimensionality. Experiments on Burgers' Equation, Darcy Flow, and the Navier-Stokes Equation demonstrate that μTransfer-FNO maintains optimal learning rates, batch sizes, and optimizer configurations across model scales, reducing computational tuning costs while preserving or improving accuracy. The method is also shown to be applicable to Physics-Informed Neural Operators (PINOs). Claims And Evidence: The paper supports its claims with evidence, but some areas could benefit from further clarification or stronger support: **Strengths:** * **µP Derivation:** The mathematical derivation of the µP for FNOs appears sound, although verifying the full proof requires careful examination of the appendix. * **Learning Rate Stability:** The experiments clearly demonstrate that µTransfer-FNO stabilizes the optimal learning rate across different K values, supporting the central claim. Figures 1 and 2 provide visual evidence of this stability. * **Generalizability to PINO:** The extension to Physics-Informed Neural Operators (PINOs) is well-supported by the experiments on Darcy flow (Figure 2c). * **Transfer of Other Hyperparameters:** The experiments on batch size and Adam's $\beta_2$ (Figure 3) provide evidence that µTransfer-FNO can transfer hyperparameters beyond the learning rate, strengthening the overall argument. 
**Weaknesses:** * **Instability at Small $K$:** The paper acknowledges that the optimal learning rate can deviate at very small $K$ values, attributing it to training randomness. While plausible, further investigation or discussion of this instability would be beneficial. Perhaps providing error bars or statistics across multiple runs would strengthen this point. * **Computational Cost Savings:** While Table 1 shows computational savings for the Navier-Stokes equation, the Darcy flow experiment shows increased training cost. The paper argues this is due to the smaller gap between small and large models in this setting. However, a more thorough analysis of the computational cost trade-offs, perhaps considering a wider range of model sizes, would be helpful. Quantifying the computational cost of the hyperparameter search itself would also be valuable. * **Test Error Improvement:** The paper claims lower test error with µTransfer-FNO, attributing it to the ability to explore a larger hyperparameter search space with smaller models. While this is a reasonable explanation, it would be stronger to directly compare against a baseline that uses the same larger search space but tunes the large model directly (albeit at higher computational cost). This would isolate the benefit of µTransfer-FNO from simply using a larger search space. * **Limited Scope of PDEs:** While the chosen PDEs are representative, exploring a wider range of PDE problems (PDEBench) would further strengthen the claim of generality. Including more complex or higher-dimensional PDEs would be particularly valuable. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense for the problem of hyperparameter transfer in FNOs for PDE solving. * **µP as a Method:** The use of µP as a foundation for hyperparameter transfer is well-motivated. The core idea of maintaining consistent training dynamics across model sizes is directly relevant to the goal of transferring hyperparameters. 
The theoretical derivation provides a principled basis for the proposed scaling factors. * **Algorithm 1:** The µTransfer-FNO algorithm is a straightforward and logical application of the µP theory. It clearly outlines the steps involved in transferring hyperparameters from a small proxy model to a larger target model. * **Choice of PDEs:** The selected PDEs (Burgers' Equation, Darcy Flow, and Navier-Stokes Equation) represent a reasonable range of complexity and dimensionality. They are commonly used as benchmarks in the FNO literature, allowing for comparison with existing work. While a broader set of PDEs (PDEBench) would be even better, the chosen set provides a good starting point. * **Evaluation Metrics:** Using the L2 relative test error is a standard and appropriate metric for evaluating the performance of PDE solvers. It directly measures the accuracy of the solution obtained by the FNO. * **Comparison with Baseline:** Comparing µTransfer-FNO against directly tuning the large model provides a relevant baseline. This allows for assessing the computational cost savings and potential performance gains of the proposed method. However, as mentioned in the previous response, a stronger baseline would involve using the same expanded hyperparameter search space for both methods. Theoretical Claims: I've reviewed the provided proof of Theorem 3.5, which establishes the µP for FNOs. While the overall structure of the proof seems reasonable, there are some specific points and potential issues: * **Spectral Norm Calculation:** The proof relies heavily on calculating the spectral norm of the kernel integral operator $\mathcal{K}$. The argument that this norm is equivalent to the maximum absolute value of the parameters $r$ seems plausible given the diagonal structure of $\mathcal{R}$ after the Fourier transform. 
However, the interaction between the multidimensional Fourier transform, the truncation operator $\mathcal{T}_K$, and the parameter tensor $\mathcal{R}$ could be complex, and a more detailed justification of this step would be beneficial. * **High-Probability Bound:** The proof invokes a standard result from high-dimensional probability to bound the maximum of $K^d$ sub-Gaussian random variables. While this is a common technique, the specific constants and assumptions underlying this result need careful checking against the properties of the parameters $r$. The paper assumes these are sub-Gaussian, which needs to be verified in practice. * **Simplification to $m=1$:** The proof simplifies the analysis by assuming a hidden dimension $m=1$. While extending to general $m$ is mentioned as straightforward, explicitly showing this extension, or at least providing a sketch, would strengthen the proof. The interaction between $m$ and $K$ in the scaling factors could be non-trivial. * **Discretization Effects:** The proof works with the discretized version of the FNO. While this is necessary for practical implementation, the impact of discretization on the theoretical results is not explicitly discussed. Ideally, the proof should connect back to the continuous formulation of FNOs. Experimental Designs Or Analyses: I've reviewed the experimental designs and analyses presented in the paper. While they provide support for the claims, there are some areas where the soundness and validity could be improved: * **Hyperparameter Search Space:** The paper doesn't explicitly define the hyperparameter search space $\Xi$ used in Algorithm 1. Knowing the range and granularity of the search for each hyperparameter (learning rate, batch size, $\beta_2$) is crucial for interpreting the results. A larger search space could lead to better results regardless of µTransfer-FNO, so specifying the search space is essential for a fair comparison. 
* **Baseline Comparison:** As mentioned previously, the comparison with the baseline of directly tuning the large model is not entirely fair. µTransfer-FNO benefits from exploring a potentially larger search space with the smaller proxy model. A stronger baseline would involve using the same expanded search space for tuning the large model directly, even if it's computationally more expensive. This would isolate the benefit of the transfer method itself. * **Limited Number of Runs:** The paper presents results for single runs of each experiment. Given the stochastic nature of neural network training, reporting results averaged over multiple runs (e.g., $10-20$) with standard deviations or error bars would provide a more robust evaluation and account for potential variability. * **Computational Cost Analysis:** The analysis of computational cost is somewhat limited. Table 1 only provides relative training costs ($1\times$, $1.38\times$, $0.30\times$). Reporting absolute training times or FLOPs would be more informative. Furthermore, the cost of the hyperparameter search itself is not factored into the comparison. A more comprehensive analysis should include the total cost of both the search and the final training. * **Lack of Ablation Study:** An ablation study investigating the individual contributions of the initialization variance scaling and the learning rate scaling would provide further insights into the effectiveness of the proposed µP. This would help determine the relative importance of each component. * **Details of Data Generation:** While the paper describes the PDEs used, more details about the data generation process would improve reproducibility. * **Code Availability:** Providing the code used for the experiments would significantly enhance reproducibility and allow for independent verification of the results. 
Supplementary Material: Yes, I reviewed the supplementary material, specifically Appendix A, which contains the omitted theoretical results and the proof of Theorem 3.5. * **Technical Lemmas (A.1):** I examined the lemmas presented, particularly Lemma A.3, which establishes the connection between the assumptions made in the main theorem and the feature learning condition. * **Proof of Theorem 3.5 (A.2):** I carefully reviewed the steps involved in the proof, paying attention to the spectral norm calculations, the application of high-probability bounds, and the handling of the independence assumption. I also looked for the justification of the specific scaling factors derived for the initialization variance and learning rate. As detailed in my earlier response on theoretical claims, this is where the majority of my concerns regarding the rigor and completeness of the proof lie. Relation To Broader Scientific Literature: This paper's main contributions relate to various areas within the broader scientific literature: * **Fourier Neural Operators (FNOs) and Operator Learning:** The paper directly builds upon the existing literature on FNOs for solving PDEs (Li et al., 2021). * **Physics-Informed Neural Networks (PINN):** The paper extends its method to Physics-Informed Neural Operators (PINOs) (Li et al., 2024), demonstrating its applicability beyond standard supervised learning for PDEs. * **Maximal Update Parametrization (µP) and µTransfer:** The core theoretical contribution relies heavily on the µP framework introduced by Yang & Hu (2021) and the concept of µTransfer (Yang et al., 2022). * **Hyperparameter Transfer Learning:** The overall goal of the paper is to enable efficient hyperparameter tuning for large models by transferring knowledge from smaller models. Essential References Not Discussed: The paper covers the most relevant literature regarding FNOs, PINNs, and µP/µTransfer. 
However, a few potential areas could benefit from additional discussion or referencing: * **Theoretical Analysis of Hyperparameter Transfer:** While the paper provides a theoretical justification for µP in FNOs, it could benefit from discussing any existing theoretical work on hyperparameter transfer in general. Are there any theoretical guarantees or bounds on the effectiveness of transferring hyperparameters across different model sizes or architectures? Connecting to this broader theoretical literature would strengthen the paper's contribution. * **Alternative Parametrizations:** The paper focuses on µP, but other parametrization schemes might exist for FNOs. Discussing potential alternatives and their potential advantages or disadvantages compared to µP would provide a more complete picture. For example, are there parametrizations specifically designed for different optimization algorithms or different types of PDEs? Other Strengths And Weaknesses: In summary, the paper is a valuable contribution, excelling in originality and practical use. Strengths and weaknesses were detailed in the preceding sections. Other Comments Or Suggestions: * Page 5: "descritized" needs to be replaced by "discretized" * Page 7: "serveral" needs to be replaced by "several" Questions For Authors: Questions were depicted in the preceding sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for supporting our paper! We respond to your main questions and concerns below. **Regarding the scope of PDEs.** Following your and other reviewers' suggestions, we conduct additional experiments on a mixed equation dataset involving Burgers' Equation, Advection Equation, and Reaction-Diffusion Equation from PDEBench. This mixed equation dataset setting is relevant for PDE foundation model training as noted by other reviewers. We first train FNOs with $K=6$ using different learning rates, then train FNOs with $K=24$ under both Standard Parametrization and $\mu$Transfer-FNO. We present the loss for each run below:

| $\log_{10}$ (learning rate) | -2.0 | -2.2 | -2.4 | -2.6 | -2.8 | -3.0 | -3.2 | -3.4 | -3.6 |
| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| $K=6$ | 0.08314 | 0.04933 | 0.04665 | 0.03985 | 0.03822 | **0.03576** | 0.05366 | 0.05672 | 0.06318 |
| $K=24$ (Standard Parametrization) | 0.98508 | 0.03785 | 0.03904 | 0.03200 | 0.03041 | 0.02852 | **0.02616** | 0.02989 | 0.03191 |
| $K=24$ ($\mu$Transfer-FNO) | 0.03836 | 0.03517 | 0.03391 | 0.03192 | 0.02842 | **0.02599** | 0.02728 | 0.02925 | 0.03406 |

The results are consistent with our original findings: On the mixed equation dataset, the optimal hyper-parameter shifts when the model size scales up under standard parametrization. In contrast, $\mu$Transfer-FNO stabilizes the optimal configuration. We believe these additional results strengthen our work. Thank you for the constructive comment! **Regarding our theoretical results.** - **Spectral Norm Calculation:** Our proof is applicable to a general dimension $d$. We only use the fact that the Fourier transform is orthogonal, and that the Fourier transform, the truncation operator, the multiplication with the parameter tensor, and the inverse Fourier transform are all linear operators. 
All these properties hold regardless of the dimensionality. **The sub-Gaussian assumption:** We note that our analysis focuses on the Adam optimizer which uses entry-wise normalized gradient momentum for updates. Furthermore, one can optionally apply value-based gradient clipping in practice, which strictly enforces all entries of the update to be bounded and hence sub-Gaussian. We have empirically verified this assumption in our preliminary experiments and will add a remark on this in the paper revision. **Extending to the case with a general $m$:** In this setting, the analysis can be broken down by dealing with each entry of the matrix-vector product separately and writing the product as the summation of entry-wise multiplication. Then one can treat $\widetilde{\boldsymbol R}_{\ell}\in\mathbb{R}^{N_1 \cdots N_d\times N_1 \cdots N_d\times m\times m}$ as $m\times m$ instantiations of $N_1 \cdots N_d\times N_1 \cdots N_d$ matrices and apply the analysis in the $m=1$ case to arrive at the same result. Based on this argument, a general $m$ would not incur complicated dependency on the scaling rate of $K$. **Discretization Effects:** Our proof does not rely on any property of a specific resolution. We mainly leverage the fact that the Fourier transform matrix is an orthogonal matrix in the discrete formulation, and that the main building blocks in the kernel integral operators are all linear. Connecting to the continuous formulation of FNOs, these facts can be understood as consequences of orthogonality and linearity of the corresponding operators over functional spaces. **Regarding theoretical analysis on hyper-parameter transfer.** To the best of our knowledge, no theoretical bounds exist for hyper-parameter transfer across different sizes or architectures. Most existing work, including our results, focuses on asymptotic scaling rates, building on or adjacent to the $\mu$Transfer framework as discussed in Section 5.3. 
Another relevant recent result is [1], which we will include in our discussion. [1] Super Consistency of Neural Network Landscapes and Learning Rate Transfer, NeurIPS 2024 **Regarding alternative parametrizations.** This is an interesting question! Our analysis is not tied to any specific PDE, similar to how other existing analyses are not tied to any specific task or dataset. However, $\mu$Parametrization does vary for different optimization algorithms. For example, [2] derives $\mu$P for K-FAC and Shampoo optimizers. Our analysis focuses on Adam because of its popularity. [2] On the parameterization of second-order optimization effective towards the infinite width. Due to the word limit, we cannot respond to other comments individually. But we would like to assure you that we value your comments and will modify our paper accordingly based on your comments. We sincerely hope that our responses address your concerns!
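To make the transfer rule discussed in this rebuttal concrete, the sketch below applies the $\Theta(1/\sqrt{d\log K})$ rate from Theorem 1.1 as a relative rescaling when growing the number of modes. This is an illustrative sketch only: the function names and the exact placement of the multiplier (initialization scale and/or per-layer step size) are our assumptions, not the authors' released implementation.

```python
import math

def spectral_scale(K: int, d: int) -> float:
    """Illustrative mu-P-style multiplier for the kernel integral operator,
    following the Theta(1/sqrt(d * log K)) rate discussed above.
    (A sketch; constants and where the factor is applied are assumptions.)"""
    return 1.0 / math.sqrt(d * math.log(K))

def transfer_factor(K_small: int, K_large: int, d: int) -> float:
    """Relative rescaling when growing a proxy model with K_small modes
    to a target model with K_large modes at fixed dimension d."""
    return spectral_scale(K_large, d) / spectral_scale(K_small, d)

# Growing K=6 -> K=24 in d=1, as in the mixed-equation experiment above:
print(transfer_factor(6, 24, 1))
```

Under this rule, growing a $K=6$ proxy to $K=24$ in 1D shrinks the spectral scale by roughly a quarter, while the other hyper-parameters (learning rate, batch size) are kept at the values tuned on the proxy.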
Summary: The authors discuss how Fourier Neural Operators (FNOs), a state-of-the-art SciML method, have been used to solve complex PDEs. However, they identify issues with scaling FNOs to more intricate PDEs that require increasing the number of Fourier modes. Increasing the number of Fourier modes increases the number of model parameters and makes HPO very expensive. The authors propose $\mu$Transfer-FNO, a zero-shot hyperparameter transfer method in which optimal hyperparameters tuned on smaller FNOs are applied zero-shot to large billion-parameter FNOs. The method is based upon the Maximum Update Parametrization ($\mu$P) framework. The authors show that $\mu$Transfer-FNO reduces the cost of tuning parameters on large FNOs and maintains accuracy. ## Update after rebuttal I appreciate the authors' detailed rebuttal, motivating example and additional experiments. In light of that, I raised my score. Claims And Evidence: - The proposed transfer method makes sense to help scale HPO tuning for large FNOs. I would like to see a concrete example where larger Fourier modes are required to solve the PDE for better problem motivation. - The authors correctly motivate that FNO scales as $\mathcal{O}(K)^d$, where $K$ is the number of Fourier modes and $d$ is the dimension of the problem, showing that the complexity grows exponentially with the dimension, so for practical 3D space-and-time problems this can be very expensive. - Rather than showing the loss in Figure 1, it may be more informative to show the validation accuracy as a more meaningful metric. It is interesting that the optimal learning rate with the proposed method remains approximately the same regardless of the model size. To better show this, it may be good to put a vertical line through the star points. Methods And Evaluation Criteria: - It makes sense to try to increase the number of Fourier modes, and the proposed transfer HPO method seems effective.
To do so, the proposed method scales the kernel integral operator parameters as the number of modes $K$ is increased so that the optimal hyperparameters stay approximately the same across model sizes. This motivates HPO on the small model and then directly transferring to larger FNOs. - The results cover a good range of PDEs, i.e., Burgers, Darcy Flow and the challenging Navier-Stokes equations. I would like to see a larger number of equations tested. See the comprehensive Neural Operator benchmark in Saad et al., "Guiding continuous operator learning through Physics-based boundary constraints", ICLR 2023. - The authors show that the proposed transfer learning preserves the same optimal learning rate, training batch size and optimizer configurations across increasing model sizes. - With the proposed approach, the method is able to scale to large FNOs with nearly 1B parameters with better accuracy and only `0.3x` training compute. - In Figure 2, even with the proposed approach the optimal learning rates are not exactly the same and seem slightly shifted, though less so than the baseline, especially in Figure 2a-b. Is there a quantitative metric to better measure this difference? Theoretical Claims: - The main theoretical result is presented in Theorem 3.5 on calculating the scaling rate for scaling up the FNO kernel integral operator. The theorem shows the abc-parametrization is a $\mu$P of FNO with the Adam optimizer and scales according to $\Theta(1/\sqrt{d \log K})$, where $d$ is the dimensionality and $K$ the number of modes. The proof is detailed and provided in the appendix with a proof outline of the main concepts in the main body. - The authors highlight a difference between their proof and standard $\mu$P proofs: past proofs rely on the CLT to analyze the average of random variables, whereas a technical difference in the structure of FNO and the kernel integral operator leads to the analysis of the maximum of the random variables. It is good that the authors clarify this difference.
- It is also good that the proposed scaling function in the proof is not resolution dependent so that the desired resolution invariance property of NOs is not broken with this method. Do the authors have a proof of this, i.e., that resolution invariance still holds? Experimental Designs Or Analyses: - Nice that the experiment section is organized according to various questions to show that the aforementioned theory also holds empirically. - Please clarify that Burgers' in 4.1 is actually viscous Burgers' with $\nu \ne 0$, which is simpler to solve than the sharp shock solution case where $\nu = 0$ and there is no artificial diffusion from the viscosity term. - Good that the authors consider 1D and 2D problems and also vary mapping the initial condition to the solution in Burgers' and the PDE coefficient to the solution in Darcy Flow. - Since one of the main motivations of the proposed method is large-scale problems, I think 3D spatial cases should also be used, because $d=3$ is when the FNO becomes very computationally expensive. It is good that 3D Navier-Stokes is tested. Is this FNO-3D because it is the standard 2D space + time test case? I think 3D space should also be considered. A practical, real-world 3D Car Surface Pressure Prediction test case is provided in Ma et al., "Calibrated uncertainty quantification for operator learning via conformal prediction", 2024 and Li et al., "Geometry-informed neural operator for large-scale 3d pdes", 2023. Both of these works should be cited as well. Supplementary Material: I read the supplementary material but did not check the proof in detail. Relation To Broader Scientific Literature: - FNOs have shown large impact on solving PDE problems with ML. This paper focuses on more challenging PDEs where larger-parameter FNOs with more Fourier modes are needed. The authors in the introduction go directly into discussing the specifics of FNO and the effect of its modes.
I think the authors should first motivate solving PDE problems, explain why ML methods have been developed to solve these problems, mention the three main method classes, e.g., PINNs, Neural Operators and MeshGraphNets, and then state that this paper will focus on FNOs. The authors can then clearly identify the more challenging PDEs on which regular-sized FNOs struggle, to motivate these larger-parameter FNO models which require HPO. - The authors mention the $\mu$P (Yang & Hu (2021)) and $\mu$Transfer methods (Yang et al. (2022)). Please clearly differentiate the novelty of the proposed approach. Does $\mu$Transfer-FNO just directly apply these methods to FNO? If so, please explain the technical challenges of applying them to the FNO architecture. - My main concern is that the proposed method is too similar to the past Yang & Hu (2021) and Yang et al. (2022) literature and hence the novelty needs to be clarified. - The authors state that the number of hidden dimensions $m$ and model depth $L$ are fixed in this work and have been studied in Yang & Hu (2021) and Yang et al. (2022). Please briefly summarize their effect here or in an appendix so that this work can be fully understood end-to-end without relying on these prior works for key concepts. Essential References Not Discussed: - It is good that the authors include the known boundary conditions in the problem definition in Eqn. 1. There is a missing reference to Saad et al., "Guiding continuous operator learning through Physics-based boundary constraints", ICLR 2023, which enforces boundary conditions as an exact constraint. - The authors mention that FNO is the most commonly used NO. They should state that the reason is that the FFT makes computing the kernel-vector products $\mathcal{K}x$ very efficient. Having said that, the paper is missing references to other neural operators with different bases, e.g., Gupta et al., "Multiwavelet-based Operator Learning for Differential Equations".
In: Advances in Neural Information Processing Systems. Vol. 34. PMLR, pp. 24048–24062, 2021; Li, Zongyi, Nikola Kovachki, et al. (2020b). "Multipole Graph Neural Operator for Parametric Partial Differential Equations". In: arXiv preprint arXiv:2006.09535. - It is good that the authors also compare to PINO. I think the authors should be a bit careful referring to the PINNs loss as "more advanced training techniques". Both the PINO paper and Saad et al., ICLR 2023 show that in several cases the unconstrained FNO outperforms PINO. Also, several instabilities with PINNs training have been reported in the literature but are not cited, e.g., Krishnapriyan et al., "Characterizing possible failure modes in physics-informed neural networks", Advances in Neural Information Processing Systems 34, 26548-26560, 2021. This also appears to be observed in the authors' own training of PINO in Figure 2(c), with the spike and oscillations in the loss function. - References to transfer learning methods other than $\mu$P (Yang & Hu (2021)) and $\mu$Transfer (Yang et al. (2022)) are missing. Other Strengths And Weaknesses: ## Weaknesses - The authors should better define what they mean by "intricate PDEs" in the abstract. - The introduction is missing a contribution section. I would also move Theorem 1.1 to the method section and clearly state the contributions in the Introduction. - The authors should define $a_i$ in the training pairs, e.g., initial conditions, PDE parameters. Using $a_i$ for the notation is confusing since later in 4.1 $a$ is used as the Darcy coefficient as well, and sometimes for Burgers the initial condition $u_0$ is used in the mapping. ## Strengths - The authors address a common problem with FNOs in scaling to higher dimensions, i.e., the curse of dimensionality, and propose a simple transfer learning approach for the hyperparameters to scale to larger FNOs with more modes while avoiding costly HPO on these larger architectures.
Other Comments Or Suggestions: Typos - Line 221 left column, \citet should be used instead of \citep for the reference. - Capital W on line 374 right column to start the sentence. - "serveral" on line 413 left. Questions For Authors: 1. How would this method apply to more general NOs than just FNO, e.g., the Multi-wavelet NO in Gupta et al., 2021? 2. Do any of the NO examples in this work include time or extrapolation in time like the Markov Neural Operator, Li et al., "Learning Dissipative Dynamics in Chaotic Systems", 2021? 3. Why is the total training cost increased from tuning the full model for Darcy Flow in FNO-2D but decreased for FNO-3D in Table 1? Also, the table caption states that three PDEs are shown in the table but it only shows two. Code Of Conduct: Affirmed. Overall Recommendation: 4
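The reviewer's $\mathcal{O}(K)^d$ scaling concern can be made concrete with a back-of-the-envelope parameter count. The sketch below assumes the standard FNO spectral layer with a complex weight tensor of shape $(m, m, K, \ldots, K)$, i.e., $m^2 K^d$ entries per layer; this is an illustrative assumption, not the paper's exact architecture.

```python
# Back-of-the-envelope count of spectral parameters in one FNO layer.
# Assumes a complex weight tensor of shape (m, m, K, ..., K) with d mode
# axes, i.e. m^2 * K^d complex entries per kernel integral operator.
# (Illustrative sketch, not the paper's exact architecture.)

def spectral_params(m: int, K: int, d: int) -> int:
    """Complex parameters in one kernel integral operator."""
    return m * m * K ** d

# Growth with dimension d for fixed width m=64 and K=24 modes per axis:
for d in (1, 2, 3):
    print(d, spectral_params(64, 24, d))
```

At width $m=64$ and $K=24$, one spectral layer already holds roughly $5.7\times 10^7$ complex parameters in 3D versus about $9.8\times 10^4$ in 1D, which is why HPO directly on large-$d$ models is so costly.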
Rebuttal 1: Rebuttal: Thank you for supporting our paper! We respond to your main questions and concerns below. **Regarding motivating the need for larger Fourier modes.** A concrete example is based on Kolmogorov microscales in fluid dynamics: When simulating turbulent flows governed by the Navier-Stokes equations, the Kolmogorov microscales represent the smallest scales at which energy dissipation occurs, which are proportional to $\mathrm{Re}^{-3/4}$ where $\mathrm{Re}$ is the Reynolds number. For realistic engineering applications with high Reynolds numbers, these microscales become extremely small and require careful modeling of high-frequency features. In such cases, larger numbers of Fourier modes are preferred. **Regarding more PDEs.** Following your and other reviewers' suggestions, we conduct additional experiments on a mixed equation dataset involving Burgers' Equation, Advection Equation, and Reaction-Diffusion Equation from PDEBench. This mixed equation dataset setting is relevant for PDE foundation model training as noted by other reviewers. We first train FNOs with $K=6$ using different learning rates, then train FNOs with $K=24$ under both Standard Parametrization and $\mu$Transfer-FNO. 
We present the loss for each run below: | $\log_{10}$(learning rate) | -2.0 | -2.2 | -2.4 | -2.6 | -2.8 | -3.0 | -3.2 | -3.4 | -3.6 | | --------------------------------- | :-----: | :-----: | :-----: | :-----: | :-----: | :---------: | :---------: | :-----: | :-----: | | $K=6$ | 0.08314 | 0.04933 | 0.04665 | 0.03985 | 0.03822 | **0.03576** | 0.05366 | 0.05672 | 0.06318 | | $K=24$ (Standard Parametrization) | 0.98508 | 0.03785 | 0.03904 | 0.03200 | 0.03041 | 0.02852 | **0.02616** | 0.02989 | 0.03191 | | $K=24$ ($\mu$Transfer-FNO) | 0.03836 | 0.03517 | 0.03391 | 0.03192 | 0.02842 | **0.02599** | 0.02728 | 0.02925 | 0.03406 | The results are consistent with our original findings: On the mixed equation dataset, the optimal hyper-parameter shifts when the model size scales up under standard parametrization. In contrast, $\mu$Transfer-FNO stabilizes the optimal configuration. We believe these additional results strengthen our work. Thank you for the constructive comment! **Regarding the novelty compared to existing works on $\mu$P and $\mu$Transfer.** While the high-level algorithmic idea of $\mu$Transfer-FNO is based on existing research, we are the first to derive the Maximum Update Parametrization ($\mu$P) for FNOs. The result is novel in two aspects: 1. We introduce new technical tools for analyzing neural network parametrization, going beyond LLN and CLT commonly used in literature. 2. The $\Theta\left(\frac{1}{\sqrt{d\log K}}\right)$ scaling rate is drastically different from existing results on all the other model components (e.g., $\Theta(m^{-1})$ for width scaling and $\Theta(L^{-1/2})$ for depth scaling). We point out that directly applying existing results does not work for FNOs because the design of the kernel integral operator is significantly different from other standard neural network modules such as embedding layers and linear transforms. 
**Regarding more general NOs than just FNO.** Unfortunately, our current theoretical analysis is specific to the kernel integral operator and FNO. We will mention this as a limitation of our work in the paper revision and include references to other NO variants including Multi-wavelet NO, Multipole Graph Neural Operator, etc. **Regarding extrapolation in time.** Our experiments focus on more standard settings for FNO and do not consider extrapolation in time like the Markov Neural Operator. We believe that our technique is still applicable to this setting since the design of FNO is unchanged. **Regarding Table 1.** Thank you for catching the typo! As for the total training cost, this difference occurs because the size of FNO scales as $\mathcal{O}(K)^d$. Thus, when $d$ increases, the efficiency gain from tuning models with small $K$ and applying $\mu$Transfer-FNO becomes much more significant. Specifically, the Darcy Flow problem with FNO-2D is relatively simple. Direct tuning on the full-sized model with a smaller hyper-parameter search space can still lead to decent results with reasonable computational cost. Hence, this setting is not particularly favorable to $\mu$Transfer-FNO. However, for the Navier-Stokes Equation with FNO-3D, tuning the full model becomes prohibitively expensive. In this more complex setting, $\mu$Transfer-FNO offers substantial efficiency gains by allowing us to tune models with small $K$ and then transfer the optimal hyper-parameters to larger models. Due to the word limit, we cannot respond to other comments individually. But we would like to assure you that we value your comments and will modify our paper accordingly based on your writing suggestions and references. We sincerely hope that our responses address your concerns!
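The Kolmogorov-microscale argument above can be turned into a rough estimate of how fast the required mode count grows with Reynolds number. Since the microscale satisfies $\eta \propto \mathrm{Re}^{-3/4}$ and the finest resolvable wavelength scales with $\eta$, the number of modes per axis must grow like $\mathrm{Re}^{3/4}$; the function below is an illustrative back-of-the-envelope sketch with all proportionality constants dropped.

```python
def mode_ratio(re_low: float, re_high: float) -> float:
    """Relative growth in Fourier modes per axis needed to resolve the
    Kolmogorov microscale eta ~ Re^(-3/4): K must grow like Re^(3/4).
    (Rough scaling estimate; proportionality constants are dropped.)"""
    return (re_high / re_low) ** 0.75

# Going from Re = 1e2 to Re = 1e4 multiplies the required modes per axis:
print(mode_ratio(1e2, 1e4))
```

Raising $\mathrm{Re}$ by two orders of magnitude already multiplies the required modes per axis by about 32, and hence the spectral parameter count by that factor raised to the power $d$.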
Summary: This paper introduces $\mu$Transfer-FNO, a zero-shot hyperparameter transfer technique for Fourier Neural Operators (FNOs). Based on the Maximum Update Parametrization ($\mu$P) framework, the authors propose a parametrization scheme that enables hyperparameters tuned on smaller FNOs to be transferred directly to much larger models without retraining. The authors derive a novel scaling law for the parameters of FNOs with respect to the number of Fourier modes. Extensive experiments on multiple PDEs, including the Burgers' equation, the Darcy Flow equation and the Navier-Stokes equations, demonstrate that $\mu$Transfer-FNO enables scaling FNOs to nearly a billion parameters while reducing computational cost and maintaining accuracy. ## Update After Rebuttal The reviewer is positive about the results presented in the paper and satisfied with the authors' responses to the questions raised in all reviews. Hence, the reviewer would like to maintain the score. Claims And Evidence: Here is the main claim made in this paper: $\mu$Transfer-FNO allows zero-shot hyperparameter transfer from small-scale FNOs to large-scale FNOs. Sufficiently many experiments are included to compare $\mu$Transfer-FNO with standard parametrization, which supports the claim in a clear and convincing way. Methods And Evaluation Criteria: Yes. The experiments are mainly based on the Burgers' Equation (1D), the Darcy Flow Equation (2D), and the Navier-Stokes Equations (3D), which are standard examples used in the original FNO paper. The authors study the performance of $\mu$Transfer-FNO when tuning multiple hyperparameters, such as the learning rate and batch size, with respect to different numbers of Fourier modes. Experiments on other variants of FNO like PINO (Physics-Informed Neural Operator) are also included to justify the efficacy of the proposed methodology. Theoretical Claims: The only theoretical claim in the paper is Theorem 1.1 (informal version of Theorem 3.5).
Its proof in the supplement has been verified to be correct. Experimental Designs Or Analyses: Yes, please refer to the "Methods And Evaluation Criteria" section above. Supplementary Material: Yes, I have reviewed all the theoretical results presented in the appendix. Relation To Broader Scientific Literature: This work is based on the Maximal Update Parametrization ($\mu$P) framework proposed in the series of work on tensor programs, which was originally used in the study of MLPs and Transformers. However, this paper should be mainly positioned within the literature on operator learning and neural PDE solvers (FNOs, DeepONets and their variants). Essential References Not Discussed: Below is one preprint that probably needs to be discussed in the paper: [1]. Specifically, the main result in [1] discusses the connection between neural operators and transformers (the attention architecture), which is also partially addressed in the cited paper [2]. The authors should discuss how the $\mu$Transfer-FNO methodology proposed in this paper differs from the $\mu$Transfer framework originally proposed for transformers, since neural operators can be linked with transformers under certain circumstances. References: [1] Calvello, Edoardo, Nikola B. Kovachki, Matthew E. Levine, and Andrew M. Stuart. "Continuum attention for neural operators." arXiv preprint arXiv:2406.06486 (2024). [2] Kovachki, Nikola, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. "Neural operator: Learning maps between function spaces with applications to pdes." Journal of Machine Learning Research 24, no. 89 (2023): 1-97. Other Strengths And Weaknesses: To the best of the reviewer's knowledge, this is one of the first works that study how to tune parameters for large-scale FNOs based on good parameters found on small-scale FNOs, which is of both practical significance and theoretical interest.
For future work, it might be meaningful to study the proposed methodology in the setting of infinitely wide neural networks developed in the series of papers on tensor programs. One potential weakness of this work, however, is that the proposed methodology has not been tested on other architectures within the operator learning framework, such as DeepONets. Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for supporting our paper! We respond to your questions and concerns below. **Regarding the connection to Transformers.** This is an interesting point! The Fourier Integral Operator and Continuum Attention are both nonlocal operator classes, but they have different parametrizations: - For Continuum Attention, the parametrization closely resembles vanilla attention, and its size is controlled by the hidden dimensionality (the model "width"). Therefore, existing results on $\mu$Transfer for scaling up vanilla attention are directly applicable. - In contrast, the Fourier Integral Operator is parametrized in a uniquely different way—by modeling the Fourier Transform of a kernel function with a pre-defined number of Fourier modes (i.e., $K$ in our paper's notation). This design differs from all modules studied in existing Maximum Update Parametrization ($\mu$P) and $\mu$Transfer literature, and our findings show that its scaling rate is also drastically different from that of other model components. That being said, in Fourier Integral Operators, the notion of model "width" still exists (corresponding to $m$ in our paper's notation). The size of Fourier Integral Operators is controlled by both $m$ and $K$. Existing width scaling results apply to $m$, while our analysis applies to $K$, which is the unique aspect of Fourier Integral Operators. **Regarding other architectures within the operator learning framework.** We acknowledge that our current findings are limited to FNOs. We believe it would be a meaningful future direction to study $\mu$P and $\mu$Transfer for other operator learning models, e.g., DeepONets. We sincerely hope that our responses address your concerns. We are also happy to discuss further if you have any additional questions. Thank you again for your time! --- Rebuttal Comment 1.1: Comment: The reviewer would like to thank the authors for their clarification.
The authors are encouraged to include a discussion on the relation between transformers (or the continuum attention architecture) and FNO, which might inspire future studies on how the $\mu$Transfer methodology differs for these two models. Overall, the reviewer remains positive about the results of this paper and would like to keep the score. --- Reply to Comment 1.1.1: Comment: We appreciate your continued support of our paper! We will follow your suggestion to include a discussion on the connections and differences between Transformers (particularly the continuum attention) and FNOs under the $\mu$P and $\mu$Transfer framework in Section 3. Thank you again for your thoughtful feedback that has helped strengthen our work.
Censor Dependent Variational Inference
Accept (poster)
Summary: The paper proposes a censor-dependent conditional VAE (CD-VAE), where two variational posteriors—for censored and non-censored events—are inferred given covariates and observed times, instead of the typical single posterior assumed in baseline methods. Further, the paper provides theoretical results to support the decomposed posterior. Experimental results on synthetic datasets show (i) smaller KL divergence between the prior and posterior; (ii) competitive performance in terms of the C-Index and Brier score. Claims And Evidence: - The paper claims significant performance improvements over baselines: 1) In general, the experimental results demonstrate marginal or comparable performance compared to baselines. It's unclear whether the marginal gains are statistically significant, as confidence intervals are not provided. 2) It's unclear why C-Index and Brier Score results are not reported on the synthetic datasets, where only KL divergence is provided. This could be misleading, as a small KL divergence between the prior and posterior could also imply posterior collapse and is not necessarily indicative of survival model performance. 3) It's unclear why the paper does not benchmark against VSI (Xiu et al., 2020), which proposes a principled, single shared posterior for censored and non-censored events, on real-world datasets (Table 4). 4) It's unclear why the real-world experiments (Table 4) do not include the importance sampling variations of the proposed CD-VAE approach. Methods And Evaluation Criteria: - The decomposed posterior seems problematic, as only the observed time depends on the censoring indicator, which is already accounted for in the log-likelihood. This also reduces the number of samples used to learn the inference, where the quality of the learned posterior becomes a function of the censoring rate.
I encourage the authors to provide complete results (C-Index, Brier score, KL divergence, and calibration) on both synthetic and real-world datasets to comprehensively evaluate the effect of censoring on these metrics. Additionally, including the censoring rate in Table 3 would be more informative than simply reporting the number of censored samples. - The paper should also consider evaluating the approach on larger datasets with more covariates, such as the SEER dataset, to provide insights into how the proposed method scales with the number of covariates and sample size. - The paper states that the Brier score is a measure of calibration, which is not necessarily true. See Haider et al. (2020) for D-calibration and Chapfuwa et al. (2020) for KM-calibration as metrics for survival calibration. Theoretical Claims: - Lemma 3.1: Justifying the need to decompose the posterior in Theorem 3.2.1 could be problematic: 1) The paper begins by establishing conditions of equality for equations (4) and (5) assuming *non-censored* events. However, it is unclear why it is necessary for these equations to be equal, as the likelihood for observed events is already decomposed in equation (6) according to the censoring indicator. Moreover, note the relationship $f(t|x) = h(t|x) \cdot S(t|x)$, which implies that (4) and (5) are equal only if the hazard function is constant at 1. 2) Other theoretical claims appear to be straightforward extensions of previously proposed theorems. Experimental Designs Or Analyses: - See methods and evaluation section. Supplementary Material: - Yes theoretical proofs for Theorem 3.2.1 Relation To Broader Scientific Literature: - Variational inference is an important generative modeling approach, where most methodological contributions focus on the posterior/prior distribution. This paper focuses on justifying the proposed decomposed censor-dependent posterior for variational survival analysis. 
Essential References Not Discussed: - The paper should discuss other variational approaches outside survival analysis focusing on *conditional* posterior distributions, including Siddharth et al. (2017), Kingma et al. (2014), and Joy et al. (2021). Other Strengths And Weaknesses: - A major strength of this paper lies in its connections to previously proposed variational survival analysis methods and variational inference. However, the justification of the proposed decomposed censor-dependent variational distribution and its practical implications—particularly the sensitivity to the censoring rate, which has not been thoroughly explored in the paper—represent a major weakness. Other Comments Or Suggestions: - Given that Theorems 4.3.1–4.3.3 are straightforward extensions of previously proposed theorems and do not appear to be directly related to the key contribution on the decomposed censor-dependent posterior, it is unclear whether they should be included in the main paper. The writing could be improved to clarify the connections; otherwise, it is not clear what purpose the proposed theorems serve in strengthening the key contributions. **Minor** - Remove empty [] from Brier score equation Questions For Authors: - Could you provide comprehensive results on synthetic and real-world datasets, including the impact of the censoring rate on the learned posteriors? - Could you clarify why, in Lemma 3.1, the equivalence between equations (4) and (5) is necessary for *non-censored* events, given that only equation (4) is used in the likelihood estimation in equation (6) when events are *non-censored*? Code Of Conduct: Affirmed. Overall Recommendation: 3
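For readers checking the hazard identity $f(t|x) = h(t|x)\cdot S(t|x)$ invoked in the theoretical comments above, it is easy to verify numerically for a simple parametric family; the exponential example below is illustrative and not tied to the paper's model.

```python
import math

# For an exponential survival model with rate lam:
#   density  f(t) = lam * exp(-lam * t)
#   survival S(t) = exp(-lam * t)
#   hazard   h(t) = f(t) / S(t) = lam (constant)
# so f(t) = h(t) * S(t) holds identically.

def f(t, lam): return lam * math.exp(-lam * t)
def S(t, lam): return math.exp(-lam * t)
def h(t, lam): return f(t, lam) / S(t, lam)

for t in (0.1, 1.0, 5.0):
    assert abs(f(t, 0.7) - h(t, 0.7) * S(t, 0.7)) < 1e-12
```

Since the identity always holds with $h$ multiplying $S$, requiring the density (4) and survival (5) terms themselves to coincide is a strong restriction: per the review's point above, it forces the hazard to be constant at 1 rather than holding automatically.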
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their time and effort in engaging with our work. Your recognition of the importance of variational survival analysis is encouraging. We've added further experimental results and endeavor to clarify a few points that may have been misinterpreted. ## Response to Experimental Concerns See extended experimental results in response to Reviewer acdJ (#1). >Evidence[2]: reporting C-index on the synthetic datasets Our experiments aim to evaluate both the inference optimality of our method and its effectiveness in survival modeling. Our simulation datasets are essential for assessing the former. Nonetheless, we've included additional results for completeness. >Evidence[1,3,4]: significance metrics; comparison with VSI; reporting variants on benchmarks >Methods[2]: SEER datasets Statistical significance testing is not a common practice. VSI lacks a training script. SEER as used in Nagpal et al. (2021a) is a dataset for competing risks analysis, which is against our modeling assumptions. In response, we added the Apellaniz et al. (2024) baseline, reported average metrics with uncertainties, and included full performance results for all variants. >Evidence[2]: misleading KLD >Weaknesses: lack of sensitivity exploration We would like to clarify a misunderstanding: the KL divergence reported in Table 2 is with respect to the true posterior, not the prior, as noted in the caption. Simulated datasets were used to explore the optimality of CDVI under varying censoring rates, while the seven benchmark datasets naturally cover a wide range of censoring scenarios. ## Addressing Theoretical Questions 1. Methods[1] & Weakness[1]: A potential issue with the proposed variational distribution is the assumption that the observed time Y depends on the censoring indicator δ, which lacks theoretical justification. 2.
Theoretical Claims[1] & Questions[2]: Justifying the need to decompose the posterior in Theorem 3.2.1 could be problematic: why is it necessary for Eq. 4 and Eq. 5 to be equal? 3. Methods[1] & Other Weaknesses[1]: Separating network parameters reduces the number of samples used to learn the inference and can introduce sensitivity to the censoring rate. We appreciate the opportunity to clarify all these points regarding our theory: 1. The dependency between Y and δ is not assumed in the posterior. See also Reviewer oybt (#2)'s first question. 2. Eq. 4 and Eq. 5 are not required to be equal. They are inequalities that can be required to hold with equality on both sides simultaneously on the same data point (x,y). See Claims [1] in Reviewer acdJ (#1) for a complete reasoning. 3. Variational parameter separation is neither forced nor the default — see our reply to Reviewer acdJ (#1). We use a joint encoder and our proposed variational posterior approximates the true posterior $p(z|x,y,\delta)$ as a whole; thus, sample imbalance is a general issue in (variational) conditional density estimation, not unique to our method; we address it following standard strategies (Nagpal et al., 2021a, Line 1042). We hope these clarifications resolve the concerns and misunderstandings that you mentioned as a major weakness. Nonetheless, we remain happy to provide further detail if helpful! >Suggestions[1]: Importance of Theorems 4.3.1–4.3.3 Thank you for the comments. While these results build on existing work, our analyses are concise—occupying less than half a page—and are carefully presented to support, rather than distract from, the central idea of censor-dependent variational inference (CDVI). They offer additional theoretical justification for our proposed CDVI by demonstrating its alignment with the broader augmented VI framework. They also provide rigorous foundations for our proposed variants, grounding design choices that had not been established in prior work.
Importantly, extending these results to the survival setting required nontrivial technical work. As mentioned in line 300, Theorem 4.3.2 corrects a key issue in the original framework by identifying a notational inconsistency between the variational posterior and the augmented variational posterior. To our knowledge, this inconsistency has not been previously addressed in the literature. As a result, Theorem 2 from Domke & Sheldon (2018) could not be directly extended to our setting. For reference, we invite the reviewer to compare our proof in Appendix B.6 with the proof of Theorem 1 in Domke & Sheldon (2018).

> Brier score is not necessarily a calibration metric; adding references.

Thank you for highlighting this extremely helpful detail. We have revised the interpretation in line with Haider et al. and added the mentioned references along with [Sohn et al., 2015] for conditional variational posteriors/VAEs.

We thank the reviewer again for the valuable feedback. Given the clarifications and efforts made, we would appreciate your consideration in reassessing the score. Please don't hesitate to reach out with further questions.

---

Rebuttal Comment 1.1: Comment: Thank you for addressing some of my concerns. I have read the reviews and responses, as well as the justifications for the proposed CDVI. However, it is still unclear how the proposed CDVI ELBO in Eq. (12) is better than the vanilla VSI ELBO.

- **The Vanilla VSI ELBO:** $\text{ELBO}(x|t) = \mathbb{E}_{q(z|x,t)} [\log p(t|z)] - \mathrm{KL}( q(z|x, t) \,\|\, p(z|x) )$
- **Proposed CDVI ELBO, Eq. (12):**

By decoupling the learned posteriors for censored and observed events, CDVI reduces the number of samples required for learning the inference process. Consequently, the quality of the learned posteriors becomes dependent on the censoring rate. It is unclear how to assess this key limitation without comprehensive analysis and direct comparison with VSI. The VSI GitHub repository is available here: https://github.com/ZidiXiu/VSI/.
---

Reply to Comment 1.1.1: Comment: We truly appreciate your time and effort in reading our rebuttal. We are grateful for your acknowledgement and for expressing the remaining concerns.

> [Q1] Still unclear how the proposed ELBO is better than the vanilla ELBO (citing Eq. 4 in [Xiu, 2020]).

Thank you for the additional question. However, we would like to kindly clarify a misunderstanding: Eq. 4 cited from [Xiu, 2020](https://arxiv.org/pdf/2003.04430) is **not** their proposed ELBO. As noted in the title of Section 3.1 in Xiu, 2020, Eq. 4 corresponds to the variational bound for *observed events* only. Their proposed ELBO is obtained by substituting Eq. 4 and the bound for *censored events* in Section 3.2 into Eq. 7; these correspond to the right-hand sides of our Eq. 4 and Eq. 5 (Lines 99–100). For correctness, we continue to refer to our **Eq. 6** as the vanilla ELBO, which is consistent with the derivation in Xiu, 2020 and the derivation in [Nagpal 2021a, page 4](https://arxiv.org/pdf/2003.01176).

Our analysis in Section 3 demonstrates why and how Eq. 12 is preferred for better inference optimality.

- As noted in Section 2.4, VI optimality is achieved when the variational posterior (encoder) closes the inference gap between the log-likelihood in Eq. 2 and the ELBO in Eq. 6 or Eq. 12.
- The line of argument in Lemma 3.1 -> Prop. 3.1 -> Remark 3.1 proves that this inference gap cannot be properly tightened using a vanilla $q_\phi(z|x,y)$ and the derived vanilla ELBO in Eq. 6: an optimal vanilla VI solution $q_\phi$ fails to exist for non-degenerate θ (claims 1-2) or may be a trivially learned solution exhibiting laziness or collapse (claims 3-4). These issues are triggered by the presence of both (x, y, 1) and (x, y, 0) data triplets, which is unavoidable when their corresponding supports overlap.
- The line of argument in Thm 3.2.1 -> Remark 3.2 -> Thm 3.2.2 proves that the optimal encoder for the log-likelihood in Eq. 2 that avoids the above issues should be designed to approximate the true posterior density $p(z|x,y,\delta)$ *as a whole*, thereby avoiding the separate variational bounds in Eq. 4 and Eq. 5, as done in [Xiu, 2020] or [Nagpal 2021a]. Our proof of Thm 3.2.1 in Appendix B.2 shows that this requires properly incorporating the censoring indicator $\delta$. As a result, Thm 3.2.2 proves how CDVI benefits from such a $q_\phi(z|x,y,\delta)$, as it avoids the hard constraint $\phi_1=\phi_2$ imposed by the vanilla variational posterior $q(z|x,y)$.

In practice, as shown in Table 2, using the vanilla ELBO in Eq. 6 within a v-structured CVAE results in a large inference gap on simulation datasets SD2-5, which is substantially reduced by the censor-dependent ELBO in Eq. 12 (and its sampling variants). Extensive validation on benchmark datasets also shows better estimation of individual survival distributions than multiple vanilla-ELBO-based methods (DSM, SAVAE).

> Key Limitation: By decoupling the learned posteriors for censored and observed events, CDVI reduces the number of samples required for learning the inference process. Consequently, the quality of the learned posteriors becomes dependent on the censoring rate.

Thank you for the comments. However, we would like to clarify that this is a misunderstanding of our methodology, one which we already addressed in our first rebuttal and in our response to the question from Reviewer acdJ (#1). Specifically, the encoder in Eq. 10 is implemented as a dense, joint network to approximate the true posterior density $p(z|x,y,\delta)$ *as a whole*, as shown in Fig. 3(a). An awareness of the parameter sharing and absence of network partitioning clarifies why $\phi_1$ is not trained separately on the event observations alone. Additionally, we clarify that the implication introduced by "consequently, ..." is misleading.
In general, performance degradation as the censoring rate increases is a well-known challenge and not specific to our approach (or, more broadly, to variational methods); see also Table 1 from Xiu, 2020. Similarly, as evident in Table 2, the accuracy of the learned posteriors in CVAEs varies with the censoring rate under both the vanilla ELBO and our proposed ELBO. However, our proposed method consistently improves the accuracy of the learned posteriors on the SD2-5 datasets, with censoring rates ranging from 5%-50%, compared to a vanilla CVAE that uses a vanilla ELBO.

**Edited on 4/7: We respectfully follow up to inquire if any feedback is available on this point. As Reviewer #1 suggested, we have included an illustrative figure (https://imgur.com/a/1BtArNl) to show why censor dependence is essential for modeling the true posterior under censoring.**

> A direct comparison with VSI.

Given that the vanilla ELBO has been extensively compared in VAE-based DSM, SAVAE (Apellániz et al., 2024), and our CVAE implementations, we believe the current evaluations sufficiently highlight the strengths of our method, particularly as VSI's training script is not yet publicly available (the syntax to run VSI is still listed as a pending contribution).
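To make the contrast between the two encoders concrete, here is a schematic of the censor-dependent bound in generic notation (a sketch under the standard right-censoring likelihood, up to a θ-free censoring factor; it is not a verbatim copy of the paper's Eq. 6 and Eq. 12):

```latex
% Right-censored log-likelihood of one observation (x, y, \delta):
\log p_\theta(y,\delta \mid x)
  = \delta \log f_\theta(y \mid x) + (1-\delta)\log S_\theta(y \mid x).

% Vanilla VI bounds this with a censor-blind encoder q_\phi(z \mid x, y);
% censor-dependent VI conditions the encoder on \delta as well:
\log p_\theta(y,\delta \mid x)
  \ge \mathbb{E}_{q_\phi(z \mid x, y, \delta)}
      \big[\delta \log f_\theta(y \mid x, z) + (1-\delta)\log S_\theta(y \mid x, z)\big]
  - \mathrm{KL}\big(q_\phi(z \mid x, y, \delta)\,\|\,p(z)\big),

% with equality iff q_\phi(z \mid x, y, \delta) = p_\theta(z \mid x, y, \delta).
% A single censor-blind q_\phi(z \mid x, y) cannot achieve this simultaneously
% for \delta = 0 and \delta = 1 when the censored and observed supports overlap.
```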
Summary: This paper builds upon prior work (Nagpal et al. 2021a, Apellaniz et al. 2024) that uses the variational distribution $q_\phi (z \mid x, y)$ as a posterior approximation for $p_\theta (z\mid x)$, without accounting for censoring (i.e., $y$ and $\delta$). The authors propose a variational inference method that explicitly incorporates censoring, providing a tighter bound on the log-likelihood. The proposed approach is validated through extensive experiments on six simulation studies and seven real-world datasets, demonstrating empirical effectiveness.

## Update after rebuttal

Thank you for the detailed response and for conducting the additional experiments; they address most of my concerns. Additionally, I realize my earlier statement, "using the predicted probability as score is problematic," may have been unclear. What I intended to convey is that referring to the current version of the concordance index as "Harrell's C-index" can be misleading, as it differs from the formulation originally described in Harrell's paper.

Claims And Evidence: The theoretical contributions and experimental results strongly support the paper's claims.

Methods And Evaluation Criteria: The overall methodological approach is reasonable: optimizing two distinct variational distributions for censored and event groups. However, I have two concerns:
* This approach seems to suggest a dependence between censoring times and event times, which contradicts the assumed DAG. The authors should clarify this apparent inconsistency.
* The paper only compares against one variational inference method (Nagpal et al. 2021a). Why are other relevant methods, such as Ranganath et al. (2016) and Apellaniz et al. (2024), excluded? A broader comparison would strengthen the evaluation.

Theoretical Claims: I did not verify the correctness of the proofs in the appendix.

Experimental Designs Or Analyses: Yes. I've checked the experimental design.
However, there are two issues:
* The Harrell's concordance index this paper claims to use is problematic. The correct way of calculating Harrell's C-index is by comparing the predicted times, not the predicted probability at a single time. Harrell's paper ([link](https://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0258(19960229)15:4%3C361::AID-SIM168%3E3.0.CO;2-4), Section 5.5) clearly states that the predicted survival time is the default choice and that the predicted probability can serve as a substitute only under **conditions**. This substitution is allowed only when the predicted probabilities and times have a one-to-one mapping (e.g., when proportional hazards is satisfied). However, in the experiments, some of the baselines (RSF, DSM) as well as the proposed method do not make the proportional hazards assumption, so using the predicted probability as the risk score does not yield the correct Harrell's C-index.
* The reported performance is based on the highest value across five random seeds, which raises concerns about fairness. Variational inference methods are known to be challenging to optimize, as they heavily depend on good parameter initialization -- a well-documented issue in the literature. In my previous experiments, different random seeds significantly affected performance, leading to high variance. Selecting the best-performing result rather than reporting the average (or another robust statistic) gives an unfair advantage over more stable and robust methods.

Supplementary Material: No.
Relation To Broader Scientific Literature: This paper extends previous work (Nagpal et al. 2021a, Apellaniz et al. 2024) by explicitly incorporating censoring into the variational inference framework.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper is difficult to follow, particularly regarding the motivation and limitations of prior work.
The introduction uses vague terms such as "remain unclear" without specifying concrete shortcomings.

Other Comments Or Suggestions:
* Line 90: "surjective" → "subjective"
* Lines 99, 126: "partial log-likelihood" → "log-likelihood" (The term partial log-likelihood in survival analysis specifically refers to Cox's proportional hazards model.)
* Figure 2: The meaning of solid vs. dashed lines is unclear. Suggestions: (1) make the caption self-contained, explicitly defining these lines; and (2) use dashed lines for the encoder in Figure 3(a) to maintain consistency.
* Lines 352-353: "Deep Survival Forest (DSF)" → "Random Survival Forest (RSF)"
* The term *Censor-dependent* Conditional VAE suggests a dependent censoring assumption, which is misleading. A name like Censor-Aware VAE might be clearer.

Questions For Authors:
* In the general generative DAG of $U$ (Figure 1(a), Figure 2(b), and Figure 3(b)), why do you represent the sequential graph as $x \rightarrow z \rightarrow u$? Specifically, in your decoder, why is $x$ necessary? Additionally, how reasonable is the structure of the graph you are using?
* You mentioned that the event distribution should belong to a location-scale family. However, in your experiments, you did not explicitly specify which type of distribution you used for optimal modeling.

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your thoughtful feedback and valuable suggestions. We truly appreciate the recognition of the contribution and your overall support of our work. We have improved our manuscript based on your recommendations. In what follows, we respond to your comments point by point, with deeper discussions where needed.

> **A dependence between censoring times and event times contradicts the assumed DAG**

This misconception was also raised by Reviewer KuLt (#3). The assumed DAG aligns with conditional independence between the uncensored time-to-event $U$ and the censoring time $C$, as noted in line 93. A fact of probability theory is that independence between $U$ and $C$ *does not* imply independence between the **observed** censored times $\{Y \mid \delta=0\}$ and the **observed** event times $\{Y \mid \delta=1\}$, due to Eq. 1.

> VI relies on good parameter initialization; more experiments

Thank you for sharing your insights. We found that tuning hyperparameters effectively reduces seed variability on datasets like NWTCO. We are happy to share our logsumexp and Xavier-initialization tricks for stability, or the training script, if needed. Ranganath et al. (2016) do not have an open script. The link to the 4 additional experiments is in our response to Reviewer acdJ (#1).

> **Computing Harrell's C-index via predicted probabilities for non-Cox models violates the one-to-one mapping condition**

We appreciate the opportunity to have a deeper discussion regarding Harrell's C-index. The issue you raised was carefully discussed during our research. We invite your additional feedback on our step-by-step reasoning below:

1. We view the C-index, when computed using a particular risk score, as a generalized ranking metric/statistic that applies across all models considered. While Harrell's C-index is defined using predicted survival times, the notion of concordance naturally extends to ranking via risk scores such as survival probabilities [Uno et al. 2011].
2. The proportional hazards (PH) assumption brings ordering guarantees. In particular, it ensures time-consistent ordering, i.e., $S(u \mid x_i) \le S(u \mid x_j) \Leftrightarrow \forall u',\ S(u' \mid x_i) \le S(u' \mid x_j)$, and risk-consistent ordering across different scores, $S(u \mid x_i) \le S(u \mid x_j) \Leftrightarrow E(u \mid x_i) \le E(u \mid x_j) \Leftrightarrow z(x_i) \ge z(x_j)$, which is referred to as the one-to-one mapping [Harrell et al. 1996]. The latter allows efficient concordance computation via hazard ratios for Cox models, which is an advantage over models like RSF, DeepSurv, DSM, and our own. From this perspective, the averaged C-index offers a practical and robust measure of overall ranking performance.
3. Without the PH assumption, inconsistency across *risk scores* is expected, and a C-index based on a meaningful score remains valid and is not a "misuse"; the inconsistency across *test times* motivates the adoption of Antolini et al.'s [2005] time-dependent C-index in our work. Nevertheless, outperforming Cox models under both average and time-dependent C-indices demonstrates effective ranking without relying on the PH assumption.

In conclusion, we disagree with the claim that "using the predicted probability as score is problematic" for non-Cox models.

> Clarification on the motivation and limitations of prior work

Thank you for the feedback. As discussed in Lines 142–150, our motivation stems from the lack of variational inference (VI) optimality analysis [Cremer et al., 2018] in prior work. The limitations in achieving VI optimality are shown in Prop. 3.1, while Remarks 3.1 and 3.2 point out issues in existing assumptions and model designs.

> Why the sequential graph x → z → u? Why is x necessary? How reasonable is the v-structure?

This is an insightful question, and one we have also considered. To clarify, only Fig. 1(a) represents a general generative DAG; Fig. 2(b) and 3(b) depict the v-structured latent process used in our method. The sequential graph x → z → u corresponds to a D-separated latent structure, as in Ranganath et al. (2016) and Apellaniz et al.
(2024), where $u$ depends solely on $z$. In contrast, the v-structure, assuming $z \perp x$, requires $x$ to generate $u$, thereby enabling individual survival modeling. While both latent structures are equivalent [Zheng et al. 2022, Remark 1], the v-structure is the default setup of the CVAE [Sohn et al. 2015, Section 3]. For survival analysis tasks, we conjecture that the absence of the v-structure stems from the prevailing use of vanilla VI (since Ranganath et al. 2016) and its inferior VI optimality compared to D-separation, as noted in Lines 175-180.

> Specify which type of location-scale family was used for optimal modeling.

As noted in Line 1023, it is treated as a hyperparameter with no consistent advantage observed; results depend on the settings of other hyperparameters.

> Suggestions: Line 90: "surjective" → "subjective"

It is not a typo. In mathematics, a function $f: A \to B$ is surjective if for every $b \in B$ there exists at least one $a \in A$ such that $f(a) = b$.

> Suggestions [2-4]

Thank you for the valuable suggestions. We have revised the relevant sections accordingly.
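To make the risk-score-based concordance from our reply on Harrell's C-index concrete, here is a minimal sketch (the function name and the toy data are illustrative; any monotone risk summary, e.g. $1 - S(t^* \mid x)$ or a negative predicted survival time, can serve as `risk_scores`):

```python
def harrell_c_index(times, events, risk_scores):
    """Concordance over comparable pairs: a pair (i, j) with times[i] < times[j]
    and events[i] == 1 is concordant when the earlier-failing subject i has the
    higher risk score; ties in score count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfect ranking: earlier event times receive higher risk scores.
print(harrell_c_index([1, 2, 3, 4], [1, 1, 0, 1], [4, 3, 2, 1]))  # -> 1.0
```

The same routine accepts predicted survival times (negated) or predicted probabilities as the score, which is exactly why we treat the C-index as a generalized ranking statistic rather than a quantity tied to one particular score.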
Summary: This paper analyzes the current practices of applying variational inference to latent variable models for survival analysis, provides insights into why the naive application of VI may be insufficient, and presents a new VI formulation that can potentially sidestep some of those challenges. The authors also include experimental results that corroborate the improved performance from using the new framework.

Claims And Evidence: Overall, the claims made at the start of the paper are well supported. I do have specific concerns and raise them in the subsequent sections.

Methods And Evaluation Criteria: While the benchmarks and the metrics might be enough, I am confused about the lack of uncertainty estimates in the numbers in Table 4 and Table 5. Can the authors comment on why they used the maximum and the minimum numbers across the trials without any uncertainty quantification?

Theoretical Claims: Overall, I had trouble understanding the final take-away from the analysis. Here is what I understand right now, and I would appreciate it if the authors could help me understand the rest of it. Moreover, I would strongly encourage the authors to provide a detailed technical summary of their contributions in the introduction (more suggestions in a later section).

- Section 3.1: Lemma 3.1 provides the conditions that need to be met for the optimal solution of Eq. 4 and Eq. 5. Proposition 3.1 provides what follows if those conditions are met by a solution under the assumption on the model's functional form. Then, I do not fully understand the claims around Remark 3.1. Does the problem of no good vanilla VI solution only happen if there is a data point that has the same $x$ and $y$ but different $\delta$? The whole discussion around the overlapping sample spaces was very confusing for me. Overall, what is the main takeaway from this?
- Section 3.2: Overall, things moved fast in Section 3.2 without much commentary on what is happening and why.
Several variables are introduced with no explanation of how to take in this information. In particular, how do I interpret the results of Theorem 3.2.1? Does this imply that we need to keep separate variational parameters for the censored and the uncensored data? The formulation of Remark 3.2 is odd. Why are we using $2-i$ here?
- Section 4.2: How is $S$ implemented? Do we assume the use of simple distributions where $S$ is available in closed form? What is $\zeta$ below Figure 3? What purpose does Proposition 4.2 serve in the training procedure?

Experimental Designs Or Analyses: Please see the section above about the methods and evaluation.
Supplementary Material: I went over the proofs to verify some of the major claims.
Relation To Broader Scientific Literature: Survival analysis models are crucial in various scientific fields. As their applications increase, it becomes increasingly important to learn not only the models themselves but also the uncertainty associated with unobserved variables. Variational inference (VI) plays a pivotal role in addressing this challenge. The primary contribution of this work lies in the theoretical analysis of what happens when naive VI is applied and how a more cautious approach can enhance its effectiveness. By providing essential foundations, this work paves the way for future research and development in mechanistic models for survival analysis.
Essential References Not Discussed: I think references are missing from some important places in this paper. A few instances:
- The initial paragraphs of the introduction make claims about the applications of survival analysis.
- In lines 100-102, the authors are talking about a well-established way to define the likelihood in survival analysis. It would be great to see some references here.
- In lines 175-180, there is a discussion of posterior collapse without any referencing.
- In lines 195-205, the authors talk about different types of censoring without describing or referencing them.
In general, a pass over the paper may reveal even more places that would benefit from citations to provide support and additional context for readers not well read in the survival analysis literature.

Other Strengths And Weaknesses: Overall, I want to accept this paper. However, I cannot recommend a clear accept in its current state. I think the paper is well written for the most part. But I also had trouble understanding the main take-aways from the analysis. Here are a few things that I think can help this paper a lot.
- Add references where needed to provide appropriate context.
- Summarize the main results and take-aways from the analysis in the introduction, with forward referencing.
- Add more context around the technical contributions in Section 4. Also, please add forward references to proofs in the main body.
- Close the loop on the final algorithm for the proposed CD-CVAE approach with details about $S$.
- Motivate the main benefits of the proposed approach using a working example. I think a clear, simple demonstration of where vanilla VI fails and where the proposed approach will succeed would go a long way toward understanding the scope of the contributions.
- Add proper uncertainty estimates to the empirical results.

Other Comments Or Suggestions: Some typos and minor errors:
- $S_\theta$ in Eq. 2 started as $S_{U, \theta}$ and is then never used like that again.
- $h$ is not defined before usage in 1) of Proposition 3.1.
- $\pi$ in 4) of Proposition 3.1.
- Lines 155-157, second column: the wording is strange. I think the authors want to claim to be the first to identify the problem of latent non-identifiability in survival analysis. The wording can be changed to make that more precise. Similarly, a more precise claim about correcting the result of Domke and Sheldon can be used in lines 301-303.
- Trailing bracket in the equation of Proposition 4.2.
- What is DVI in the header of Section 5?
Questions For Authors: Please see the previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We truly appreciate your thoughtful and constructive feedback and your "want to accept" recommendation. Your careful distinction between what is well explained and what is unclear is greatly appreciated. The following responses address the points you raised.

# Evaluation: best metrics, average metrics, more experiments

### This is a general response.

We value all reviewers' thoughtful feedback on experimental design. While reporting best performance is a standard practice for VAEs (Burda et al. 2015; Tomczak & Welling 2018; Apellaniz et al. 2024), we agree that additional experiments are helpful. 4 additional experiments can be found via this [anonymous link](https://imgur.com/a/71JKyHg), with Apellaniz et al. 2024 included. Strong results across several benchmarks show our method consistently outperforms baselines on both best and average metrics. Meaningful exploration on simulated datasets is provided too.

> Claims[1]: Question on the vanilla VI solution, the discussion around Remark 3.1, and its main takeaway

Your interpretation of the vanilla VI failure case is spot on. Assuming the survival dataset contains two data points (x, y, 1) and (x, y, 0), an optimal vanilla VI solution requires both inequalities (4) and (5) to hold as equalities at the same point (x, y) to perfectly bound the log-likelihood in Eq. 2. This leads to the problems in Prop. 3.1. Remark 3.1 extends the VI optimality analysis to the population level by considering overlapping sample spaces. Specifically, it shows that, under non-informative censoring, a globally optimal vanilla VI solution may fail to exist for non-degenerate θ (claims 1-2) or may be trivial (claims 3-4).

> Claims[2]: Section 3.2 is too fast. Interpreting Thm 3.2.1.

Thank you for the comments. We conduct a standard VI optimality analysis (Domke & Sheldon, 2018) for the log-likelihood in Eq. 2 in Section 3.2.
Thm 3.2.1 formalizes the structure of the optimal variational posterior $q_\phi$ that achieves a zero inference gap under general censoring assumptions. It shows that the optimal parameter $\phi$ inherently depends on both the event and censoring time distributions ($U$ and $C$). In addition, under certain censoring assumptions, its dependence on the censoring time distribution $C$ can be *eliminated*, offering practical insights into model design.

> Does this imply separating variational parameters for the censored/uncensored data?

In general, no. A similar misconception was raised by Reviewer KuLt (#3). The encoder in Eq. 10 is a dense/joint network, as shown in Fig. 3(a). Conditioning on $\delta = 0, 1$, it yields two branches within the same network, producing variational parameters $\phi_1$ and $\phi_2$, which are used for notational consistency, as noted in Line 203, with the vanilla VI setup and Thms 4.3.1 to 4.3.3. That said, our codebase includes a *de facto* split encoder (`Delta_encoder`) that disables parameter sharing, though it should be used with discretion. For clarity, we have added an appendix section discussing why a joint encoder is theoretically preferred, as the two branches are inherently coupled under the VI optimality conditions.

> Why is $2-i$ used in Remark 3.2?

$2-i$ is used to unify the indices in the results. For $i=2$, $q(z|x,y) = q_{\phi_2}$ if and only if there is no event observation, i.e., $p(\delta = 2-2 = 0 \mid y) = 1$.

> Claims[3]: How is $S$ implemented? Is $S$ a simple distribution?

This likely refers to $S(u \mid x)$ in Eq. 12. We compute $S(u \mid x)$ via the standard reparameterization trick by sampling $z$ from the prior distribution and then computing the closed-form $S(u \mid x, z)$ using the learned decoder (see `.predict()` in `model/cd-cvae.py`). While the decoder $f(u \mid x, z)$ follows simple parametric forms (and so does $S(u \mid x, z)$), $f(u \mid x)$ and $S(u \mid x)$ in Eq. 2 are generally intractable, as they correspond to an infinite mixture over $z$ [Rasmussen, 1999].

> What is $\zeta$ below Fig. 3?
For notational clarity, we decompose the decoder parameter θ into a location parameter $\mu_\zeta$ and a scale parameter $\sigma$. Specifically, $\zeta$ parameterizes the neural network that outputs the decoder mean $\mu(x, z)$.

> What purpose does Proposition 4.2 (no closed-form update of $\sigma$) serve?

As noted by Reviewer oybt (#2), stable training of VAEs is non-trivial. While robust training strategies for VAEs are well established in general settings (see Related Work; Liu & Wang, 2025), Proposition 4.2 identifies a fundamental pitfall: censored likelihoods in survival analysis make the dual-step optimization (Rybkin et al., 2021) *inapplicable*, justifying the use of standard $\sigma$ training in our proposed CDVI.

> A motivating example / a detailed technical summary / a main take-away / add multiple references

We appreciate the valuable suggestions. Due to rebuttal length constraints, we will provide a detailed summary, supporting references, and an illustrative figure during the discussion phase.

> Suggestions [1-6]

We appreciate the valuable comments and have addressed the noted minor issues. For clarification, DVI is our proposed delta-method variant.
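As a toy illustration of the Monte Carlo computation of $S(u \mid x)$ described above (sample $z$ from the prior, average the closed-form conditional survival), here is a minimal sketch; the standard normal prior, the Gaussian decoder, and the `decoder_mu` toy function are illustrative assumptions, not the paper's exact parameterization:

```python
import math
import random

def normal_survival(u, mu, sigma):
    # Closed-form S(u | x, z) = 1 - Phi((u - mu) / sigma) for a Gaussian decoder.
    return 0.5 * (1.0 - math.erf((u - mu) / (sigma * math.sqrt(2.0))))

def survival_mc(u, x, decoder_mu, sigma=1.0, n_samples=2000, seed=0):
    """Estimate S(u | x) = E_{z ~ p(z)}[S(u | x, z)] by sampling z from the prior."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        z = rng.gauss(0.0, 1.0)  # prior p(z) = N(0, 1)
        total += normal_survival(u, decoder_mu(x, z), sigma)
    return total / n_samples

# Toy decoder mean, linear in x and z. The marginal S(u | x) is an infinite
# mixture over z even though each S(u | x, z) is a simple Gaussian tail.
s = survival_mc(u=0.0, x=1.0, decoder_mu=lambda x, z: x + 0.5 * z)
print(0.0 <= s <= 1.0)  # a valid survival probability
```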
Universal Approximation Theorem of Networks Activated by Normalization
Reject
Summary: The authors study the approximation power of MLPs with no traditional activation functions, using only *layer norm* between affine layers. They show that this, too, is a universal approximator. Personally, I find this result interesting, as it is something I've wondered about showing myself but never got around to investigating; nice job! :) Their analysis is rigorous, cleanly explained, and accompanied by some numerical illustrations (the numerics are only loosely supportive but welcome).

---

That said, I have some questions/concerns-ish (below).

Claims And Evidence: Rigorous proofs.
Methods And Evaluation Criteria: NA
Theoretical Claims: Correct, and rigorously proven.
Experimental Designs Or Analyses: Largely irrelevant.
Supplementary Material: Detailed and complete proofs (I focused on those and less so on the numerical analogies).
Relation To Broader Scientific Literature: Very nice!
Essential References Not Discussed: The authors make no connection to classical non-linear approximation theory, specifically non-linear (manifold) widths; e.g. [1]. Recent results such as [2] would be relevant.

[1] DeVore, Ronald A. "Nonlinear approximation." Acta Numerica 7 (1998): 51-150.
[2] Cohen, A., and R. DeVore. "Nonlinear Approximation and (Deep) ReLU Networks." Constructive Approximation (2021).

Other Strengths And Weaknesses: Quantitative estimates would have been nice, but that's for a future time and place perhaps...
Other Comments Or Suggestions: Definition 3 - In the *approximation theory* literature, which you are aiming for, this is a *width* of $\mathcal{F}$. For instance, if $\mathcal{G}$ were the set of all $N$-dimensional linear subspaces of a Banach space containing $\mathcal{F}$, then this quantity is the Kolmogorov linear width.
Questions For Authors:
1. Can the authors clarify what is rigorously meant by "optimization capacity" in this sentence: "LN-2 may have more representation capacity than PLN-4 in theory, but the optimization capacity is less"? (Also bad grammar.)
2. I don't really see the point of Section 4, personally. The authors consider the *width* of the class of L-Lipschitz functions on the unit interval; there is no effect of dimension explored, so I don't really get the point (clearly any such function can easily be approximated by a piecewise linear interpolation with no piece having slope more than L). What am I missing?
3. Together, Propositions 1 and 3 don't really conclude anything. There is an upper bound for NNs with LN which does not beat the lower bound for NNs with ReLU non-linearity. So there is some evidence but no mathematical conclusion. Why include this, as it proves little?

Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### Reply to the reviewer G246 Thanks for your valuable comments and suggestions. We are pleased with your support for our paper. --- #### **Response to Question 1** Thanks for figuring out this typo. We indeed want to express "better optimization property" with the words "optimization capacity". A better optimization property of a module means that, adding the module to a network may improve conditioning, numerical stability and training efficiency in the optimization process. The optimization property of normalization is the main reason why normalization can be widely applied in various deep neural networks, as we described in introduction of this paper. Here we take PLN-2 as an example. When LN acts on $R^2$, LN constrains the mean and variance both and it outputs only two values. This constrain is compact for only two neurons, ensuring that the output is nearly constant---assuming the input vector is $(x_1,x_2)$ and $x_1>x_2$, we can identify that the output must be a constant vector $(1,-1)$ as long as $x_1>x_2$. Consequently, the derivative of the output with respect to $x_1$ is zero, which can lead to stagnation in parameter updates during optimization. This observation explains why PLN-2 has " worse optimization property", despite our theoretical proof of its strong approximation capabilities. Therefore, we believe that there is a trade-off between optimization and optimization, as shown in Section 4. --- #### **Response to Question 2** In Section 4, we aim to provide a comprehensive understanding of the theoretical and practical aspects of activation functions. Specifically: 1. **Theoretical Comparison**: In Section 4.1, we analyze the approximation bounds of different activation functions, including PLN and ReLU. Our theory aims to provide quantitative results of different activation functions, which can be regarded as a theoretical reference when we explore further in experiment. 2. 
**Practical Insights**: In practical scenarios, the parameters of the network are obtained by training, so the network may perform much worse than in its theoretical case. Figure 5(c) is a supporting example of this. Therefore, it is necessary to consider the optimization property of activation functions.

Our ultimate goal is to demonstrate that a network with PLN is **not only a universal approximator** but **also inherits the optimization benefits** of normalization. This dual property of PLN—combining strong approximation capabilities with improved optimization dynamics—makes it a promising candidate for simplifying deep neural network architectures and advancing their theoretical understanding.

We also add approximation experiments with 512 random inputs and labels on $R^8 \times R$ to explore the high-dimensional case, following the same experimental settings as in Section 4.1. Here we give the MSE loss using a network with depth 1 and width 256.

| PLN-4 | PLS-4 | Sigmoid | Tanh | ReLU |
| -------- | -------- | ------- | ------- | ------- |
| 3.78e-12 | 2.01e-10 | 2.14e-3 | 5.17e-8 | 2.97e-6 |

We find that PLN-4 and PLS-4 perform much better than the other activation functions, going beyond the results of the $R \to R$ approximation experiments in this paper. Additionally, we have included experiments on high-dimensional data in Section 5. While these experiments are not focused on approximation tasks, they further highlight the practical advantages of PLN in deep learning contexts.

---

#### **Response to Question 3**

In Section 4.1, our theoretical analysis provides quantitative insights into the approximation capabilities of different activation functions. However, as demonstrated in Section 4.2, the practical performance of a network is also heavily influenced by its optimization properties. As you can see, our experiment is conducted on a simple network.
Although we applied various training techniques (lines 252-262) in an attempt to find better parameters, it is still a hard task---see the poor performance in Figure 5(c). We find that the optimization problem exists even in such a tiny network (depth 1, width 16). Therefore, we further explore the relationship between optimization and approximation in deep neural networks, namely the experiments in Section 5. The propositions in Section 4.1 give us a good reference for the approximation capacity of a network. Based on Section 4.1 alone, we can **confidently** attribute some poor performances of a network to the optimization problem rather than to its approximation capacity---since we know the theoretical performance of this network is outstanding, even if a particular training process has not exploited it.

---

#### **About Essential References**

Upon careful review, we find the two references provided are supportive of this paper. They will be added to the citations in the revision. We sincerely appreciate the reviewer for mentioning them.

---

Much thanks for your support again.
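The constant-output behavior of LN on $R^2$ described in the Response to Question 1 is easy to check numerically. The sketch below is our own illustration (function names and inputs are assumptions, not the paper's code):

```python
import numpy as np

def layer_norm(v, eps=0.0):
    # LN without affine parameters: center to zero mean, scale to unit variance
    return (v - v.mean()) / np.sqrt(v.var() + eps)

# For any input (x1, x2) with x1 > x2, LN on R^2 outputs (1, -1)
for x in [np.array([3.0, 1.0]), np.array([0.2, -5.0]), np.array([10.0, 9.9])]:
    assert np.allclose(layer_norm(x), [1.0, -1.0])

# Finite-difference check: the output is locally constant in x1, so the
# derivative of the output with respect to x1 is (numerically) zero
x, h = np.array([3.0, 1.0]), 1e-6
grad = (layer_norm(x + np.array([h, 0.0])) - layer_norm(x)) / h
assert np.allclose(grad, 0.0)
```

The finite-difference check mirrors the stagnation argument: since the output does not change under small perturbations of $x_1$, gradient-based updates receive no signal through this unit.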
Summary: The authors study universal approximation for networks, where the activation function is replaced by a layer normalization. This result is traced back to the classical universal approximation theorem by Cybenko. However, this step contains an error, as the sigmoid function derived from LN does not act element-wise on the output of the affine transformation of the layer. Therefore the utilization of Cybenko's result is not legitimate, as the pre-conditions are not met. I therefore find that at the present stage of preparation, the paper is not yet suitable for publication. Also, the 1d input dimension results do not really change this finding, as their (practical) scope is very limited. This said, I do not intend to claim that the statement of the theorem the authors give is wrong. I can well imagine that it is correct as a mathematical statement, as there are many activation functions known that do not act element-wise. Nevertheless, it is not yet proven in the present version of the article. This finding can not be compensated by the numerical experiments the authors provide. Though the 1-d experiments are convincing, the further experiments on the VGG architecture and CIFAR10 and the time-series task are much less so. In particular, the results the authors have obtained up to now do not really give a sound experimental basis to support the authors' claims.

## Update after discussion

Thanks for the discussion, but my opinion stays unchanged, as the way the authors define their normalization layers does not really match what is generally understood as layer normalization. The paper seems formally correct, but the result is too limited.

Claims And Evidence: As indicated above, the application of Theorem 1 in the proof of Theorem 2 contains an error, as \sigma(x) in the last line of the proof does not act element-wise, as required in the original version of Theorem 1. Therefore, the proof of Theorem 2 requires adjustment.
This might be feasible, but as this is somewhat the core of the paper, it should lead to a rejection this time. Methods And Evaluation Criteria: The experiments on the 1d example, CIFAR10, and the time-series task are inconclusive and not yet suited to promote the proposed architecture. Theoretical Claims: Checked proof of main Thm 2, which contains an error. Experimental Designs Or Analyses: Weak, see above under Methods and Evaluation Criteria. Supplementary Material: Contains proofs. Relation To Broader Scientific Literature: I'm not an expert on UAP with 'exotic' activations. Essential References Not Discussed: None Other Strengths And Weaknesses: The status of preparation of the paper is very preliminary, and there are too many typos and small errors to list them here. The authors should conduct a careful re-editing of the paper before submitting it elsewhere. Other Comments Or Suggestions: The subject per se is not uninteresting. I encourage the authors to correct their proof and improve the paper throughout, including creating a reasonably solid experimental basis, and then resubmit. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal:

### Reply to the reviewer 1Q2E

Thanks for your valuable comments and suggestions. The main concern raised by the reviewer is the correctness of our proof. After re-examining our proof, we are confident that there are **no errors** in our reasoning. Here, we attempt to clarify why the reviewer might have misunderstood our proof.

---

#### **Clarification on the Misunderstanding of the Proof**

The reviewer pointed out that the operation $\sigma$ does not act element-wise. We trace back to lines 132–155 and clarify the following: In line 150, $\sigma(\boldsymbol{w^\top_j x} + b_j)$ acts element-wise. Here, both $\boldsymbol{w}$ and $\boldsymbol{x}$ are vectors, and $b_j$ is a scalar. Therefore, $\boldsymbol{w^\top_j x} + b_j$ is a real number rather than a vector. Consequently, $\sigma(\boldsymbol{w^\top_j x} + b_j)$ is applied element-wise. As we mentioned in line 153, $\sigma(x) = (x / |x| + 1) / 2$ is a function on $R$. Besides, the form in our proof (line 150) aligns with Cybenko's work (line 65, Eqn. 1), ensuring the correctness of our proof.

If this is not the source of confusion, another possible reason is the discrepancy between the input and output dimensions of the Layer Normalization (LN) operation. In our proof, we construct $G(x) = \sum\limits_{j=1}^{N+1} \boldsymbol{\alpha}_j^\top LN(\boldsymbol{W_j x} + b_j)$, where $\boldsymbol{W}_j$ and $\boldsymbol{b}_j$ belong to the first linear layer, and $\boldsymbol{\alpha}_j$ belongs to the second linear layer. We set $d = 2$ in this proof, as mentioned in the paper. The input to the $N+1$ LNs has $2(N+1)$ neurons, and the output of the LNs also has $2(N+1)$ neurons. However, the final output of Eqn. 5 is the linear combination of $N$ neurons. This is because: 1. We merge the $(N+1)$-th term into the previous $N$ terms. 2. We set $\boldsymbol{\alpha}_j = [\hat{\alpha}_j, 0]^\top$, as shown in line 136.
While the previous $N$ terms output $2N$ neurons, only $N$ of them are allowed to pass—because the other $N$ neurons are multiplied by zero in $\boldsymbol{\alpha}_j$. This is why the reviewer might have misunderstood our proof. If the reviewer still has trouble understanding our proof, please feel free to point it out, and we will gladly clarify it.

---

#### **Clarification on the Experimental Design**

The experiments are not solely designed to support the theory; they also lead to a discussion of approximation and optimization. In practical scenarios, the parameters of the network are obtained by training, so the network may perform much worse than in its theoretical case. Figure 5(c) is a supporting example of this. Therefore, it is necessary to consider the optimization property of activation functions. In this paper, we find that PLN can be seen as a combination of normalization and activation—possessing both good approximation properties (as we propose) and optimization properties (inherited from normalization). As we concluded in Section 5, deep neural networks with only linear modules and PLN can perform well. These experiments identified both the approximation and optimization properties of PLN.

---

Rebuttal Comment 1.1: Comment: Thank you for this explanation. I acknowledge that in Lemma line 150 you reach a shape that is consistent with applying Cybenko's theorem. But I still don't understand how you are getting there. You define LN(x) in equation (3), which intermingles all x_j in the denominator. You then apply Layer-Normalization on sub-streams, which is made possible by the convention (4); this is not what people would normally understand as layer normalization, as it 'normalizes' just two values produced in a very special way with the w_j and -w_j in the matrices. From that you obtain the element-wise non-polynomial activation required by Pinkus' theorem. But this is not Layer-Normalization. Layer normalization would be to apply LN to Wx+b.
Therefore my misunderstanding was somewhat induced by the notation you chose and the title of your paper. And I find the result, which I now understand better, much more limited, because the freedom that is utilized for the 'tiny sub-layer normalization' is so wide that not much can be learned from your observation. So I'm not yet ready to adjust the score.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for the timely feedback on our rebuttal. We provide further clarification for your concerns.

---

#### **Clarification of the misunderstanding of the proof**

Thanks for your detailed reply to our rebuttal. We infer that your misunderstanding of the proof mainly stems from the assumption that the proof is based on only one Layer Normalization (LN, formulated by Eqn. 3), whereas our proof is based on multiple Layer Normalizations in a layer (i.e., Parallel Layer Normalization (PLN)). We use "PLN" to simplify the description. We spend many words (pages) describing and highlighting PLN. For example, in our introduction, `Lines 061-066`, we described that---**We focus on parallel layer normalizations (PLN) rather than serial LN-Net, as shown in Figure 1. We theoretically prove an infinitely wide network—with a "linear-PLN-linear" structure—has universal approximation ability on $[0, 1]^n$**. Besides, we also introduced what PLN is, in `Lines 085-093 (Right)`, just **after the description of Layer Normalization (Eqn. 3)** and **before Section 3 (obviously before Theorem 2)**. Note that PLN is a more general formulation of Group Normalization (GN) [1], which is widely used in CNNs for object detection and segmentation. GN is also described as a more general formulation of LN in paper [1]. Besides, [2] further extended GN in CNNs to LN-G in MLPs, and also pointed out that LN-G has more nonlinearity than LN. [2] also provided an experimental basis showing that LN-G obtains better performance than LN.
We highlight that PLN-d has the same structure as LN-G in `Lines 144-155 (Right)` (please refer to Section 3.2). As for "normalizes just two values produced in a special way with the $w_j$ and $-w_j$", note that we state "**we give the proof in the case d=2**" in Line 132. We also provided the case that "normalizes $d$ values" in **Appendix A.1**, which is also mentioned in `Line 157`. As for the misunderstanding induced by the notation we chose and the title of the paper, does the reviewer mean the title of our Theorem 2 rather than our paper? There is no obvious "LN" in the title of our paper. As for the title of Theorem 2---"LN for UAT"---here LN can be seen as a module, and it can also be seen as an operation. Consider the similar title "ReLU for UAT": a "ReLU module" has many "ReLU operations". An "LN module" has only one "LN operation", but a "PLN module" has many "LN operations". LN in the title of Theorem 2 means LN operations rather than a single LN module. **Figure 2** also shows the differences between PLN and other activation functions. This may be why the reviewer misunderstood our theorem.

---

#### **Clarification on what can be learned given the same width**

As for the concern that our PLN-Net seems too wide to train conveniently, please refer to Sections 4 and 5 for our experiments. In Section 4, although ReLU has stronger nonlinearity under the same width, **ReLU is the one that is hard to train, rather than PLN**. In Section 5.1.1, we show the advantage of PLN in deep neural networks---it possesses **both a good approximation property (as we propose) and a good optimization property (inherited from normalization)**. Therefore, the network with PLN is easy to train.

---

#### **References**

[1] Wu Y, He K. Group normalization[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 3-19.

[2] Ni Y, Guo Y, Jia J, et al. On the nonlinearity of layer normalization[J]. arXiv preprint arXiv:2406.01255, 2024.
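The d=2 sub-stream construction debated above (rows $w_j$ and $-w_j$ in the linear layer, with $\boldsymbol{\alpha}_j = [\hat{\alpha}_j, 0]^\top$ passing only the first neuron) can be checked numerically. The sketch below is our own illustration of that construction, not the paper's code; the weights are arbitrary:

```python
import numpy as np

def layer_norm(v):
    # LN without affine parameters
    return (v - v.mean()) / np.sqrt(v.var())

def pln2_unit(w, b, x):
    # One d=2 sub-stream: the linear layer emits the pair (w.x + b, -(w.x + b));
    # LN on this pair outputs (sign(t), -sign(t))
    t = w @ x + b
    return layer_norm(np.array([t, -t]))

# alpha_j = [alpha_hat_j, 0] keeps only the first neuron, which equals
# 2*sigma(t) - 1 for the step function sigma(t) = (t/|t| + 1)/2
w, b = np.array([0.8, -0.3]), 0.1
for x in [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([-1.0, -1.0])]:
    t = w @ x + b
    sigma = (pln2_unit(w, b, x)[0] + 1) / 2
    assert np.isclose(sigma, 1.0 if t > 0 else 0.0)
```

Summing many such sub-streams with coefficients $\hat{\alpha}_j$ reproduces the linear combination of step functions used in the Cybenko-style argument.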
Summary: This paper explores the possibility of replacing activation functions with layer normalization, offering a new perspective on the foundational logic of neural networks. It provides corresponding approximation theory, width estimates, and experimental results, including a theoretical proof of the universal approximation theorem (UAT) for linear layers equipped with layer normalization. Additionally, the paper numerically demonstrates the impact of different normalization designs on network performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Null. Relation To Broader Scientific Literature: Null. Essential References Not Discussed: Null. Other Strengths And Weaknesses: Strengths: 1. The idea of replacing the activation functions with layer normalization is interesting, especially considering the inter-neuron nonlinearity. It would be even better to derive some novel activation based on the existing normalization methods. 2. The paper presents a unique perspective by raising an important and thought-provoking question. It explores, from a theoretical standpoint, whether normalization operations can replace activation functions. Weaknesses: 1. The theory is not particularly novel. The so-called PLS or PLN can be considered as special activation functions, and the conclusion for infinite width (Thm 2) can be directly derived from Cybenko (1989). Although the author mentions in Corollary 2 that the result can be extended to PLS, I did not see any relevant description. Moreover, LS in Corollary 2 is not the same as PLS, so I don't fully understand what conclusion Corollary 2 is trying to convey. 2. The discussion in Section 4 is not convincing to me. The theoretical results estimate the minimum width for UAT based on 1D target functions, but they fail to reveal intrinsic differences between the proposed LN and conventional activations.
The authors also conducted some experiments to test the approximation capacity, but optimization issues strongly influence the performance. The adopted experimental setting is too straightforward to distinguish the approximation and optimization properties of different activations. I think this section may need major revision, especially the experimental setting part. 3. The experiments conducted in Section 5 need further discussion. The current setting includes CV and NLP models with different architectures. In some settings, the proposed PLN-8 outperformed BN (or other activations), and in others it did not. I think it will be interesting to discuss further the source of these preferences. The authors attempt to address this issue by assuming there is a difference in the level of nonlinearity that each task requires, but the influence of model architecture or activations (when used together) cannot be disentangled. I think this topic could even be a paper in itself. Other Comments Or Suggestions: 1. Footnote 1 on Page 2 is incomplete. 2. The row header of Table 3 is wrong. 3. There is a typo in Sec 3: 'Finally, we further disocuss the approximation on LN without centering' -- where 'disocuss' should be 'discuss'. Questions For Authors: 1. The estimation of the approximation bound in Propositions 1-3 holds only for 1D target functions and shares a similar form of the upper bound. However, in high dimensions, will the dependence on the dimension $n$ differ when we choose different activations? 2. The performance of PLN seems sensitive to the hyperparameter $d$. Is there any criterion for the choice of $d$? 3. In Figure 10, do the results mean that, compared with the identity case, adding PLN-8 can hurt the performance? 4. Figure 11 clearly shows the advantages of PLN-8, but Figure 10 does not. Is this because of the different tasks or different architectures?
A possible way is to consider ViT, a transformer-based CV model, to get rid of the influence of model architecture. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal:

### Reply to the reviewer Guee

Thanks for your valuable comments and suggestions.

---

#### **Response to Weakness 1**

To begin with, we list our contributions below to clarify the novelty of this paper. 1. We are the first to **consider LN (and LS) as activation functions** and provide a **mathematical proof** of their universal approximation property **by constructing proper parameters** in the linear layers. 2. We are the first to propose **the concepts of PLN and PLS**. Although similar concepts like GN or LN-G have been proposed in previous work, we discussed their differences from our PLN in the paper. 3. This paper also discusses the **universal approximation property of normalization** for the first time.

Besides, we point out that the proof is **not direct**, as the reviewer commented. As we can see in the proof of Lemma 1, LN is **not equivalent to** sigmoidal functions in the network. We **construct proper weights and biases** in the linear layer and then obtain a result similar to the linear combination of sigmoidal functions. We also **construct piecewise step functions** to prove the universal approximation capacity of PLN, as shown in Appendix A.3. We initially tried to present our theory of PLN in this way, but that method is more complex than the current version. Considering the readability of this paper, we decided to give the proof based on Cybenko's work.

As for the reviewer's concern about Corollary 2, we provided details in the supplementary material, as mentioned in line 197. The relationship between LS and PLS is similar to that between LN and PLN, as shown in Figure 1. When we place LS units in parallel, we obtain PLS. Compared with PLN, norm size 1 is suitable for PLS, while PLN requires a norm size of at least 2. Besides, note that LS is RMSNorm, which is also widely used in LLMs (e.g., LLaMA, Qwen2).
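The remark that norm size 1 suffices for LS/PLS but not for LN/PLN follows directly from the formulas; below is a small numerical illustration (our own sketch, not the paper's code):

```python
import numpy as np

def rms_norm(v):
    # LS / RMSNorm: rescale only, no centering
    return v / np.sqrt(np.mean(v ** 2))

def layer_norm(v):
    # LN: center first, then rescale
    c = v - v.mean()
    return c / np.sqrt(np.mean(c ** 2))

# Norm size 1: RMSNorm of a single value t gives t/|t| = sign(t),
# already a usable (step-like) nonlinearity
assert np.isclose(rms_norm(np.array([3.0]))[0], 1.0)
assert np.isclose(rms_norm(np.array([-0.5]))[0], -1.0)

# LN of a single value first centers it to exactly 0, so the output is
# the degenerate 0/0 (NaN without an epsilon): LN needs norm size >= 2
with np.errstate(invalid="ignore"):
    assert np.isnan(layer_norm(np.array([3.0]))[0])
```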
---

#### **Response to Weakness 2**

Due to the limited word count for the reply, could you please refer to the **Response to Question 2** in the **Reply to the reviewer G246** for a better understanding of Section 4? We admit it is hard to distinguish the approximation and optimization properties of different activations. Although we applied various training techniques (lines 252-262) in an attempt to find better parameters, it is still a hard task---see the poor performance in Figure 5(c). We find that the optimization problem exists even in such a tiny network (depth 1, width 16). Therefore, we further explore the relationship between optimization and approximation in deep neural networks, namely the experiments in Section 5. We will adjust our descriptions and expressions in the revised version for better readability.

---

#### **Response to Weakness 3**

This paper does not aim to show that PLN can outperform other normalizations or activation functions. We aim to show that PLN can be seen as a combination of normalization and activation---it has both a good approximation property (as we propose) and a good optimization property (inherited from normalization). Furthermore, it may help simplify the structure of DNNs and thereby provide a convenient platform to study them.

---

#### **Response to Question 1**

There are also results for high-dimensional input in $O(\cdot)$ form, as shown in the references mentioned by the reviewer G246. But it is hard to give precise bounds like those in Section 4.1. Therefore, we add experiments to explore the high-dimensional case. Could you please refer to the **Response to Question 2** in the **Reply to the reviewer G246** for the detailed results?

---

#### **Response to Question 2**

We further discussed PLN-2 in the **Response to Question 1** in the **Reply to the reviewer G246**; could you please refer there for our response? There we can see that better approximation capacity may come with optimization problems.
As for practical scenarios, following our experiments in Sections 4 and 5, we recommend $d=4$ for shallow networks and $d=8$ or larger for deep networks.

---

#### **Response to Questions 3 and 4**

In Figure 10, the "Identity" term means that we use PLN-8 as normalization and Identity as activation. The "PLN-8" term does not introduce additional nonlinearity over the "Identity" term---it essentially adds PLN-8 (as activation) after another PLN-8 (as normalization). Here we provide the results on ViT for classification with PLN-8 as normalization.

| Activation | Acc(%) |
| ---------- | ------ |
| Identity | 82.77 |
| PLN-8 | 82.51 |
| ReLU | 87.55 |

From the classification task, we find that "PLN+ReLU" is much better than "PLN+Identity", which indicates that the architecture affects the conclusion. On the other hand, in the time-series tasks (Figure 11), we find the performance of "Identity-Identity" is not so bad, indicating that the task also affects the conclusion.
Active Learning for Efficient Discovery of Optimal Combinatorial Perturbations
Accept (poster)
Summary: This work proposes a new active learning framework for optimizing desirable properties in CRISPR screening, which is a combinatorial problem. The main algorithmic contribution comes down to an adaptive training method that scales the embedding dimensionality of the predictive model with the size of the training set, which allows the representation to learn the necessary characteristics of the genes as the active learning loop progresses. The authors conducted experiments to compare the performance of the proposed algorithm against various baselines.

## Update after rebuttal

The authors have sufficiently responded to my questions, and I will increase my score.

Claims And Evidence: Yes, the claims are appropriately backed by the experiments. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation make sense. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Yes, the experimental designs are appropriate. The analyses could be extended to get a better insight into what makes the active learning policy work, as well as into the scalability of the method. For example, as I understand it, MPE is a version of UCB that does not explore (setting the tradeoff parameter $\beta = 0$, simply optimizing the predicted mean). If so, I find it interesting that this version outperforms UCB with a non-zero $\beta$. Do the authors have any comments on whether other values for $\beta$ have been tried and what the trend is? In terms of scaling the dimensionality of the embedding with the training data size, do we do this at every step of the active learning loop? This seems to be quite an expensive endeavor, as we need to train multiple deep neural networks (since an ensemble is used) every time new data comes in. On this note, have the authors considered how performance changes as a function of the size of the ensemble?
Finally, could the authors provide some discussion of how the framework scales when the number of perturbations is larger than 2? Does it take exponentially more effort to extend this framework? Supplementary Material: I read through the appendices. Relation To Broader Scientific Literature: The proposed method is a new active learning framework for optimizing over gene combinations which outperforms baselines from previous works. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Is there a reason why the different curves start off at different points in the plots? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer PDeb,

We greatly appreciate your thoughtful feedback and questions. Your comments have helped us identify key areas where we could improve the presentation and clarity of our work. We will incorporate the additional results from the experiments in the revised manuscript. Below, we address your questions point by point. Due to space constraints, we have the full tables with standard errors here: https://bit.ly/42bZt32.

---

> [Q1] Do the authors have any comments on whether other values for β have been tried and what the trend is?

[A1] You raised an important point regarding the role of β in the acquisition function. This prompted us to explore a more generalized formulation of our acquisition function. We now express the acquisition score as a weighted combination: score = α × mean + β × std. This unified view captures multiple strategies:

- Uniform sampling: α = β = 0
- Uncertainty-only (pure exploration): α = 0
- MPE (pure exploitation): β = 0
- UCB: α = β = 1 (equal trade-off)

To better understand how this trade-off impacts performance, we added a new experiment that varies β while holding α = 1, testing β values from 0 to 25 on four genetic perturbation datasets. We find that the True Positive Rate (TPR) of the top 200 gene combinations remains relatively stable for small β (β ≤ 1). This may be because, in early rounds, the effect magnitude dominates the predictive uncertainty. As β increases, uncertainty begins to dominate and leads to a decrease in TPR.

| β | Norman | Simpson | Horlbeck K562 | Horlbeck Jurkat |
|-|-|-|-|-|
| 0 | 138 | 146 | 99 | 152 |
| 0.5 | 138 | 148 | 98 | 152 |
| 1 | 139 | 145 | 98 | 154 |
| 5 | 137 | 144 | 81 | 143 |
| 10 | 133 | 142 | 72 | 132 |
| 25 | 120 | 124 | 73 | 108 |
| Uniform | 93 | 69 | 39 | 80 |

We also want to clarify that UCB in RECOVER is defined differently: it uses residuals + β × std, since the model's objective is to predict residuals from a linear baseline (synergy prediction). We will revise the manuscript to distinguish these UCB formulations more clearly.
Finally, we expanded our experiments to compare the effectiveness of MPE and residual-UCB strategies across both NAIAD and RECOVER. The results are in our response to Reviewer kwgr [see A4]. Consistently, MPE outperforms residual-based UCB methods, further supporting our conclusion.

> [Q2] Is embedding dimensionality scaled at every active learning step? Given the expensive cost of retraining multiple networks, have the authors evaluated performance versus ensemble size?

[A2] You're correct that we increase the embedding size during the active learning iterations. Retraining the ensemble from scratch each round is expensive (typically hours). However, this remains fast compared to the time and cost of biological experiments (weeks per cycle). Your suggestion to test the impact of ensemble size was valuable. We performed an experiment varying the number of ensemble models. The results indicate that increasing the ensemble size can improve performance under uniform sampling. In contrast, the MPE acquisition function demonstrates robust performance regardless of ensemble size.

| Ensemble Size | Norman (MPE) | Norman (Uniform) | Simpson (MPE) | Simpson (Uniform) | Horlbeck K562 (MPE) | Horlbeck K562 (Uniform) | Horlbeck Jurkat (MPE) | Horlbeck Jurkat (Uniform) |
|-|-|-|-|-|-|-|-|-|
| 1 | 141 | 89 | 150 | 67 | 106 | 34 | 153 | 80 |
| 5 | 141 | 91 | 143 | 68 | 104 | 37 | 151 | 81 |
| 7 | 143 | 92 | 143 | 71 | 102 | 39 | 152 | 81 |

> [Q3] How scalable is the framework when the number of perturbations is larger than 2? Does it take exponentially more effort to extend this framework?

[A3] Thank you for highlighting this valuable point. Our framework extends efficiently to combinations with more than two gene perturbations. We will revise the manuscript to provide a more detailed explanation. For a dataset with a higher-order combination involving p perturbations, we define the set $S=\\{i_1,i_2,\dots,i_p\\}$.
We model the effect of jointly perturbing the genes in $S$ as:

$$Y_{S} = f_1([Y_{i_1}, Y_{i_2}, \dots, Y_{i_p}]W_1)A^T + \sum_{i \in S} f_2(W_2X^i_{gene})A^T_2$$

Here, $Y_{i_1}, Y_{i_2}, \dots, Y_{i_p}$ represent the individual gene effects, while the interaction term $\sum_{i \in S} f_2(W_2X^i_{gene})A^T_2$ captures the high-order genetic interaction effect. We leverage a permutation-invariant mechanism for combining individual gene embeddings through summation, allowing the model to efficiently generalize across different lengths of perturbation sets.

> [Q4] Is there a reason why the different curves start off at different points in the plots?

[A4] Thanks for pointing this out. In Figure 4, the x-axis should start at "5," the minimum TPR for the top 5 gene combos. Since strategies choose different samples after round one, the curves diverge from there. We'll update the figure by removing "0" and marking the start as "5."

---

Thank you for your thoughtful and constructive feedback. It has helped strengthen and refine our work. If you have any further questions, we'd be happy to address them. If you feel all concerns have been resolved, we kindly invite you to consider re-rating your evaluation. Thank you.
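The unified acquisition score from [A1] (score = α × mean + β × std) can be sketched as follows. The function names, toy numbers, and tie-free selection below are our own illustration, not the NAIAD implementation:

```python
import numpy as np

def acquisition_scores(pred_mean, pred_std, alpha=1.0, beta=0.0):
    # Generalized acquisition: alpha * mean + beta * std.
    #   alpha = beta = 0 -> uniform (random) sampling
    #   alpha = 0        -> uncertainty-only (pure exploration)
    #   beta  = 0        -> MPE (pure exploitation)
    #   alpha = beta = 1 -> UCB (equal trade-off)
    return alpha * pred_mean + beta * pred_std

def select_batch(pred_mean, pred_std, batch_size, alpha=1.0, beta=0.0, seed=0):
    if alpha == 0.0 and beta == 0.0:  # uniform sampling
        rng = np.random.default_rng(seed)
        return rng.choice(len(pred_mean), size=batch_size, replace=False)
    scores = acquisition_scores(pred_mean, pred_std, alpha, beta)
    return np.argsort(scores)[::-1][:batch_size]  # top-scoring candidates

# Ensemble mean/std over four candidate gene pairs (toy numbers)
mean = np.array([0.9, 0.1, 0.5, 0.3])
std = np.array([0.0, 1.0, 0.2, 0.9])
mpe = select_batch(mean, std, 2)                       # picks by mean only
ucb = select_batch(mean, std, 2, alpha=1.0, beta=1.0)  # mean + std
```

With these toy numbers, MPE and UCB select disjoint batches, illustrating how a large β shifts the budget toward high-uncertainty candidates.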
Summary: We can perturb the expression of various genes to achieve a desirable phenotype such as enhanced cell viability. However, given that there are close to 20k known human genes, it is not possible to test every gene combination to identify the optimal combination that leads to the most desirable phenotype. This paper presents an active learning method called NAIAD for discovering optimal gene pairs using existing perturbation data. NAIAD builds a model to predict the effect of jointly perturbing a pair of genes that not only accounts for additive effects but also for interactions between the genes. This model is coupled with an acquisition strategy, the most effective being Maximum Predicted Effects or MPE, to choose gene pairs over active learning iterations. Crucially, NAIAD also explicitly accounts for the need to efficiently model datasets with different sizes over active learning iterations by progressively increasing the dimensionality of gene embeddings as more data is obtained. In their experiments that use gene perturbation and drug combination data, the authors convincingly demonstrate the following:

- Adaptive gene embedding sizes allow NAIAD to learn more effective models across training sets of different sizes when compared to using fixed embedding sizes.
- NAIAD outperforms existing baselines in modelling combinatorial gene perturbation datasets, especially when training data is limited.
- In the active learning setting, the MPE acquisition function outperforms all other functions and identifies the greatest number of gene pairs that produce the greatest changes in the measured phenotype.
- NAIAD + MPE outperforms the baseline RECOVER + UCB method in identifying optimal drug combinations.

## Update after rebuttal

The additional results provided in the rebuttal further highlight the effectiveness of NAIAD and help understand the usefulness of MPE alone. All of my questions have been satisfactorily addressed and I will keep my original score.
Claims And Evidence: The paper is very well-written and claims made throughout the paper are supported by clear and convincing evidence. Additionally, shortcomings and ways to improve on NAIAD are transparently described in the discussion section. Methods And Evaluation Criteria: The method is well-motivated and the benchmarks used are quite comprehensive and meaningful. I have a few suggestions here to help improve the benchmarking: - Although the drug combination benchmark shows that NAIAD + MPE outperforms RECOVER + UCB, it would be useful to see active learning results from RECOVER and GEARS in the gene combination benchmarks to more convincingly demonstrate that NAIAD outperforms these methods in the active learning setting. - Figure 5 seems to indicate that using MPE leads to NAIAD outperforming RECOVER. To isolate the benefits of using MPE vs. using the NAIAD model, having results from RECOVER + MPE would be useful. Theoretical Claims: The paper does not make theoretical claims. Experimental Designs Or Analyses: Yes, I checked the soundness of all the experiments presented and have mentioned my suggestions above. Supplementary Material: Yes, I reviewed most of the supplementary material to better understand the various acquisition strategies and benchmarks. Relation To Broader Scientific Literature: The main contribution of this paper is the NAIAD modelling framework that can accurately model effects of perturbations in a pair of genes from limited training data by incorporating both additive and interaction effects. The use of adaptive gene embedding sizes to account for differences in training data sizes across active learning iterations is novel to the best of my knowledge. RECOVER (Bertin et al. (2023)) is the closest prior work but their model does not explicitly have an additive effects term to the best of my knowledge (they only seem to have an interaction effects term). RECOVER was also only used for identifying drug combinations in the original paper.
Although the MPE acquisition function is very simple, its use in this setting also appears to be novel and the authors demonstrate its effectiveness. Essential References Not Discussed: To the best of my knowledge, relevant prior work has been adequately discussed. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. In an active learning setting, how is it handled when the acquisition strategy queries a gene combination that is not present in the dataset? Do you only consider pairs for which data is available during acquisition? 2. How do RECOVER and GEARS perform on the gene combination-based active learning task? 3. How much improvement comes from using MPE alone? Have the authors tried using MPE along with RECOVER in their benchmarking? 4. In lines 129-130, shouldn't $k$ be the total number of genes for the row vector $X^{i}_{gene}$ to represent the $i$th gene's embedding? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer kwgr, We appreciate your supportive feedback and your valuable comments, which have helped us to improve our work further. We will incorporate the results from your suggested experiments in the revised manuscript. We address your questions below. Due to space constraints, we have the full tables with standard errors here: https://bit.ly/42bZt32. --- > [Q1] how is it handled when the acquisition strategy queries a gene combination that is not present in the dataset? Do you only consider pairs for which data is available during acquisition? [A1] We appreciate your point. The datasets used in our work were originally designed to be comprehensive, ensuring that all pairwise gene perturbation combinations were covered. In our simulated active learning framework, the full dataset includes all possible gene combinations. In contrast, during real active learning, each round selects gene pairs that have not been previously measured. These could then be measured through additional biological experiments. We will revise the manuscript to more clearly explain this. > [Q2] How do RECOVER and GEARS perform on the gene combination-based active learning task? [A2] Thank you for suggesting a comparison among RECOVER, GEARS, and NAIAD in active learning tasks to enhance the benchmark's comprehensiveness. We've added an experiment evaluating RECOVER, GEARS, and NAIAD using MPE on the Norman dataset. The results below, showing the true positive rate (TPR) of the top 200 hits, demonstrate that NAIAD continues to outperform GEARS and RECOVER in active learning rounds. The GEARS framework does not have active learning iterations and is not efficient for large combinatorial perturbation sets. Our active learning framework includes ensembles and multiple replicates, further increasing runtime. Thus, we focused the GEARS benchmark on the Norman dataset. A more comprehensive comparison with RECOVER is provided in [A3] and [A4].
|Model|Method|Round0|Round1|Round2|Round3|Round4|
|-|-|-|-|-|-|-|
|NAIAD|MPE|86|100|112|128|143|
|RECOVER|MPE|82|101|111|125|139|
|GEARS|MPE|42|80|92|100|109|
|NAIAD|Uniform|86|88|88|90|94|

> [Q3] Figure 5 seems to indicate that using MPE leads to NAIAD outperforming RECOVER. To isolate the benefits of using MPE vs. using the NAIAD model, having results from RECOVER + MPE would be useful.

[A3] We appreciate your thoughtful suggestion to disentangle the contributions of the MPE acquisition function and the NAIAD model architecture. In response, we added an experiment using the drug combination dataset, evaluating NAIAD and RECOVER paired with different acquisition strategies: uniform sampling, MPE, and residual-UCB (originally used in RECOVER). The results, summarized in the table showing the TPR of the top 200 hits at round 4, indicate that MPE has a stronger impact on performance than the choice of model architecture. Both RECOVER + MPE and NAIAD + MPE achieve comparable results. However, in the genetic perturbation data, we do see the advantages of both the NAIAD architecture and the MPE acquisition function (see comments in [A4]).

|Model|Method|Drug Combination|
|-|-|-|
|NAIAD|MPE|131|
|RECOVER|MPE|133|
|NAIAD|residual_UCB|121|
|RECOVER|residual_UCB|120|
|NAIAD|Uniform|120|
|RECOVER|Uniform|117|

> [Q4] How much improvement comes from using MPE alone? Have the authors tried using MPE along with RECOVER in their benchmarking?

[A4] Thank you for your insightful comments on further exploring the roles of NAIAD and MPE in active learning. We included an experiment comparing the performance of RECOVER and NAIAD across all genetic perturbation datasets by calculating the TPR of the top 200 hits at round 4. These results highlight the advantages of both the NAIAD architecture and the MPE acquisition function.
The discrepancy between the drug combination and genetic perturbation results is likely due to the relatively small size of the drug combination dataset, which includes only 1,800+ combinations, a substantially smaller number than in the genetic perturbation datasets.

|Model|Method|Norman|Simpson|Horlbeck K562|Horlbeck Jurkat|
|-|-|-|-|-|-|
|NAIAD|Uniform|93|70|39|81|
|RECOVER|Uniform|81|44|37|75|
|NAIAD|MPE|143|142|100|150|
|RECOVER|MPE|139|88|54|66|
|NAIAD|res_UCB|110|103|60|96|
|RECOVER|res_UCB|85|62|26|49|

> [Q5] In lines 129-130, shouldn't $k$ be the total number of genes for the row vector $X_{\text{gene}}$ to represent the $i$th gene's embedding?

[A5] Yes. You are correct that $k$ should be the total number of genes within our dataset. Thank you for noticing this typo. We’ve fixed the statement to read: "Let $X_{\text{gene}} \in \mathbb{R}^{k \times p} $ be the learnable gene embedding matrix, where $k$ is the number of genes perturbed …." --- Many thanks to Reviewer kwgr for your professional, detailed, and valuable reviews! We have done our best to address each of your concerns and hope our response can resolve them. We will actively join the discussion until the end of the rebuttal period. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for the additional analyses, I think these results further highlight the effectiveness of your method and help understand the usefulness of MPE alone. All of my questions have been satisfactorily addressed and I will keep my current score.
Summary: This paper presents an active learning framework that efficiently discovers optimal gene pairs by leveraging single-gene perturbation effects and adaptive gene embeddings. The experiments show that the proposed method achieves better performance than baseline methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: Yes. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. The proposed method is straightforward with interesting applications. However, I have to admit that I have a shallow understanding of this field, so I cannot provide a fair evaluation of the novelty. Weaknesses 1. Considering that this is an ML conference, I think the authors should provide a preliminary section in the main content so the general reader can understand the task. Other Comments Or Suggestions: No Questions For Authors: See Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer vQva, Thank you sincerely for taking the time to review our work. We’re encouraged that you found the applications of our method interesting. We appreciate your thoughtful feedback about accessibility for a broader ML audience. We will incorporate a concise, ML-focused framing in the introduction that situates our work within well-known machine learning paradigms such as active learning and surrogate modeling. Here is the content: --- We frame the discovery of optimal gene or drug combinations as a machine learning problem of active search over a high-dimensional combinatorial space, where evaluating each combination (via experiment) is costly. Our method trains a neural surrogate model that predicts the effects of unseen perturbation pairs by combining overparameterized encodings of single-gene outcomes with a learned gene embedding space that models interaction effects. Crucially, the model’s capacity is dynamically scaled with the amount of training data. The surrogate guides new experiment selection via acquisition strategies inspired by Bayesian optimization, with the ability to leverage both exploitation (via maximum predicted effect) and exploration (via ensemble-based uncertainty). While this work is motivated by biological discovery, similar challenges arise across machine learning domains—including data augmentation policy search in vision [1], cold-start item selection in recommender systems [2], and sample-efficient policy learning in robotics [3]—all of which involve large discrete spaces, costly evaluations, and the need for adaptive modeling and decision making. In such domains, discrete components (e.g., transformations, items, actions) play a role analogous to gene or drug perturbations in our framework: each is embedded in a latent space, and combinations of these embeddings are used to represent and evaluate complex configurations. 
Our framework shows how these components can be actively selected via a data-adaptive surrogate to enable efficient, scalable discovery. [1] Cubuk, Ekin D., et al. "Autoaugment: Learning augmentation policies from data." arXiv (2018). [2] De Pessemier, et al. "Batch versus sequential active learning for recommender systems." arXiv (2022). [3] Anwar, Abrar, et al. "Efficient Evaluation of Multi-Task Robot Policies With Active Experiment Selection." arXiv (2025). --- Additionally, we note that Reviewer kwgr regarded our methodology and framing as well-motivated, and our evidence as convincing. We have carefully considered the suggestions from both Reviewers kwgr and PDeb, and have expanded our experiments accordingly. These results are included in our rebuttal and will be incorporated into the final version of the paper. Specifically, we have: (1) further isolated the contributions of the surrogate model and the MPE acquisition strategy, (2) performed a more comprehensive benchmark analysis across various models over active learning iterations, (3) performed hyperparameter tuning for the UCB acquisition function, and (4) evaluated the impact of the ensemble size on the performance of NAIAD. We hope this added context addresses your concerns and helps clarify how our contributions relate to mainstream machine learning. Please let us know if you have any further questions. We will be actively available until the end of the rebuttal period. If you feel your concerns are addressed, please consider reevaluating our work. Looking forward to hearing from you!
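As a concrete, self-contained illustration of the active-search loop framed above: a toy linear surrogate with a bootstrap ensemble and MPE acquisition. All sizes, the feature construction, and the ridge model are assumptions made for the sketch, not the NAIAD architecture.

```python
import numpy as np

# Toy active-search loop: a bootstrap ridge-regression ensemble acts as the
# surrogate, and MPE queries the unmeasured pairs with the largest predicted
# effect. Everything here is illustrative, not the paper's model.
rng = np.random.default_rng(0)
n_genes, d, rounds, batch = 25, 6, 3, 15
G = rng.normal(size=(n_genes, d))                  # fixed per-gene features
pairs = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)]
X = np.array([G[i] * G[j] for i, j in pairs])      # order-invariant pair features
w_true = rng.normal(size=d)
y_true = X @ w_true + 0.1 * rng.normal(size=len(pairs))

def ensemble_predict(train_idx, n_models=5):
    """Mean prediction of a bootstrap ridge ensemble over all pairs."""
    preds = []
    for _ in range(n_models):
        boot = rng.choice(train_idx, size=len(train_idx))
        Xb, yb = X[boot], y_true[boot]
        w = np.linalg.solve(Xb.T @ Xb + 0.1 * np.eye(d), Xb.T @ yb)
        preds.append(X @ w)
    return np.mean(preds, axis=0)

measured = list(rng.choice(len(pairs), size=batch, replace=False))  # round 0
for _ in range(rounds):
    mu = ensemble_predict(measured)
    mu[measured] = -np.inf                          # never re-query a measured pair
    measured.extend(np.argsort(mu)[-batch:].tolist())  # MPE acquisition

top20 = set(np.argsort(y_true)[-20:].tolist())      # true top-20 hits
hits = len(top20 & set(measured))
print("true positives among acquired pairs:", hits, "of 20")
```

The same loop becomes exploratory rather than exploitative if the acquisition score adds the ensemble's prediction spread to the mean, which is the UCB-style alternative the rebuttal compares against.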
An Optimistic Algorithm for online CMDPS with Anytime Adversarial Constraints
Accept (poster)
Summary: The paper considers the online learning problem in constrained MDPs (CMDPs). The objective is to maximize the reward while satisfying the constraints, where the system state transits according to an underlying MDP. The problem is formulated as a finite-horizon episodic setting with $H$ periods in each episode. The main goal of the paper is to develop an on-policy learning algorithm to achieve a $O(\sqrt{K})$ regret upper bound on the objective and the constraint violation, where $K$ denotes the number of episodes. The main algorithm of the paper is based on a primal-dual framework. The optimal policy for the CMDP problems can be formulated as an LP. However, the LP cannot be directly solved since the problem parameters are unknown. Therefore, the paper considers the Lagrangian dual formulation of the LP and updates the primal variable, which is the occupancy measure, and the dual variable in an online manner. To be specific, the paper employs an optimistic mirror descent algorithm to update the primal variable (the occupancy measure), while using the online gradient descent algorithm to update the dual variable. At each step, the newly observed data is again collected to refine the estimation of the problem parameters, which will be used in the update of the next step. The paper shows that by properly picking the parameters in the mirror descent algorithms that update the primal and dual, the main algorithm of the paper is able to achieve a regret bound of $O(\sqrt{K})$ over the total reward and the constraint violations. The paper further shows that when the reward and cost functions are deterministic, the algorithm achieves an $O(1)$ bound on the total reward, since the primal variable is updated using optimistic mirror descent; however, the constraint violation bound turns out to still be of order $O(\sqrt{K})$.
One theoretical contribution claimed by the paper is that their result does not require Slater's condition to be satisfied, which existing algorithms usually adopt to bound the range of the dual variable. Therefore, even if there is no guarantee of the existence of a strictly feasible solution, the algorithm in the paper still works. Claims And Evidence: The claim looks convincing to me. Methods And Evaluation Criteria: The proposed algorithm comes with theoretical guarantees. However, there is no numerical validation of the algorithm. Theoretical Claims: The proof looks correct to me. Experimental Designs Or Analyses: There is no numerical experiment. Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: The paper develops primal-dual algorithms to study the constrained MDP problems. The algorithm is based on a LP formulation of the CMDP problems. The new part of the algorithm is to utilize the optimistic mirror descent algorithm to update the primal decision variable. The paper shows that such a design could achieve a better bound over the objective value when the reward and cost functions are deterministic. Essential References Not Discussed: The algorithm developed in the paper is based on the LP formulation, where the primal-dual algorithmic framework is widely developed to solve it. However, primal-based algorithms have also been developed to solve the LP formulation or the CMDP problems in general. For example, primal-based algorithms have been developed in [1], [2], [3], [4], and [5]. In particular, the paper [6] develops resolving primal LP methods to solve the CMDP and achieves the first instance-dependent $O(1/\epsilon)$ sample complexity. I think these papers are worth mentioning in the literature. References: [1]. Yongshuai Liu, Jiaxin Ding, and Xin Liu. Ipo: Interior-point policy optimization under constraints.
In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 4940–4947, 2020. [2]. Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, and Marco Pavone. Risk-constrained reinforcement learning with percentile risk criteria. Journal of Machine Learning Research, 18(167):1–51, 2018. [3]. Yinlam Chow, Ofir Nachum, Aleksandra Faust, Edgar Duenez-Guzman, and Mohammad Ghavamzadeh. Lyapunov-based safe policy optimization for continuous control. arXiv preprint arXiv:1901.10031, 2019. [4]. Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin Paduraru, and Yuval Tassa. Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757, 2018. [5]. Tengyu Xu, Yingbin Liang, and Guanghui Lan. Crpo: A new approach for safe reinforcement learning with convergence guarantee. In International Conference on Machine Learning, pages 11480–11491. PMLR, 2021. [6]. Jiang, Jiashuo, and Yinyu Ye. "Achieving $\tilde {O}(1/\epsilon) $ Sample Complexity for Constrained Markov Decision Process." NeurIPS, 2024. Other Strengths And Weaknesses: Strengths: The paper develops a primal-dual algorithm to solve the CMDP problems. Using the optimistic mirror descent to update the primal variable is an interesting idea and the benefit is that when the reward and cost functions are deterministic, the bound over the gap of objective values can be improved to $O(1)$ (though the constraint violation gap is still $O(\sqrt{K})$). In the general case, the algorithm enjoys an $O(\sqrt{K})$ bound over the gap of objective values and constraint violations. Weaknesses: 1. The $O(\sqrt{K})$ bound (or the equivalent $O(1/\epsilon^2)$ sample complexity) is now quite standard in the literature and has been developed in many previous works. Though the result can be improved to $O(1)$ when the reward and cost functions are deterministic, the bound over the constraint violation is still $O(\sqrt{K})$. 2.
It is claimed that the paper considers a stronger formulation of the constraint, as discussed in line 60. However, it turns out that the problem can still be formulated as an LP, which is the same as the formulation induced by the weaker formulation of the constraint. The algorithm is also developed to solve the LP in an online primal-dual manner. Therefore, I am not sure whether the stronger anytime constraint formulation would introduce any difference here. I have asked a question for further clarification from the authors. 3. The primal-dual framework has been widely developed; see, for example, Efroni et al. (2020). The main difference of the algorithm in the paper seems to be that optimistic mirror descent has been adopted to update the primal variables. However, the only benefit seems to be that when the reward and cost functions are deterministic, the bound over the gap of objective values can be improved to $O(1)$. But the bound on the constraint violation is still $O(\sqrt{K})$. So the benefit of using optimistic mirror descent is not immediately clear to me. Other Comments Or Suggestions: Please refer to my questions below. Questions For Authors: 1. It is claimed that the paper considers a stronger formulation of the constraint, as discussed in line 60. However, it turns out that the problem can still be formulated as an LP with decision variables being the occupancy measure, which is the same as the formulation induced by the weaker formulation of the constraint. The algorithm is also developed to solve the LP in an online primal-dual manner. Therefore, I am not sure whether the stronger anytime constraint formulation would introduce any difference here. Could you please provide further clarification on this? 2. Could you provide more explanation on the benefits of using optimistic mirror descent to update the primal variable?
I understand that one potential benefit is that when the reward and cost functions are deterministic, the bound over the gap of objective values can be improved to $O(1)$. But the bound on the constraint violation is still $O(\sqrt{K})$. 3. One of the theoretical contributions is that the algorithm does not require Slater's condition to be satisfied. However, previous work, for example, Efroni et al. (2020), requires this condition to develop an upper bound on the dual variable, which is useful for scaling the range of the dual variable to a unit box. However, I think even if Slater's condition does not hold, there must be some other way to upper bound the dual variable, or we just assume a given upper bound over the dual variable. So it seems to me that removing Slater's condition is not a major contribution. Please correct me if I am wrong about this. 4. Could you please comment on the practical performance of your algorithm, since there is no numerical experiment in the paper? Code Of Conduct: Affirmed. Overall Recommendation: 4
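The occupancy-measure LP discussed in this review can be made concrete on a toy instance. The sketch below (with illustrative numbers, not an instance from the paper) builds the flow-conservation constraints and a single expected-cost constraint for a tiny finite-horizon CMDP, then solves it with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Toy finite-horizon CMDP solved via its occupancy-measure LP.
# All numbers are illustrative; this is not the paper's setting.
S, A, H = 2, 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # P[s, a, s'] transition kernel
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.5], [0.2, 0.8]])       # reward r(s, a)
c = np.array([[0.9, 0.1], [0.4, 0.2]])       # cost c(s, a)
mu0 = np.array([1.0, 0.0])                   # initial state distribution
tau = 0.5                                    # total-cost budget

idx = lambda h, s, a: (h * S + s) * A + a    # flatten (h, s, a) -> LP column
n = H * S * A
A_eq, b_eq = [], []
for s in range(S):                           # step 0: sum_a q(0,s,a) = mu0(s)
    row = np.zeros(n)
    row[[idx(0, s, a) for a in range(A)]] = 1.0
    A_eq.append(row); b_eq.append(mu0[s])
for h in range(1, H):                        # flow conservation at later steps
    for s2 in range(S):
        row = np.zeros(n)
        row[[idx(h, s2, a) for a in range(A)]] = 1.0
        for s in range(S):
            for a in range(A):
                row[idx(h - 1, s, a)] -= P[s, a, s2]
        A_eq.append(row); b_eq.append(0.0)

flat = lambda M: np.tile(M.reshape(-1), H)   # broadcast the (s, a) table over steps
res = linprog(-flat(r), A_ub=[flat(c)], b_ub=[tau],
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
print("optimal reward:", -res.fun, "| cost:", flat(c) @ res.x, "<=", tau)
```

In the paper's setting the transition kernel, rewards, and costs are unknown, so this LP cannot be solved directly; the primal-dual algorithm replaces it with incremental optimistic updates of the occupancy measure and the dual variable.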
Rebuttal 1: Rebuttal: We appreciate the valuable comments provided by the reviewer. We address the reviewer's questions as follows. >**Response to Essential References Not Discussed** We will explicitly cite and discuss these references in our final revised manuscript, clearly positioning our primal-dual contributions against this closely related primal-only approach. >**Response to Weakness and Question** >**W1: Regarding Standard Result** We would like to mention that although the $\tilde{O}(\sqrt{K})$ regret for reward is standard, it is the optimal order under unknown rewards; other approaches cannot achieve this result even under a deterministic reward. For the constraint violation, achieving a $\tilde{O}(\sqrt{K})$ bound in the adversarial constraint setting is not standard and is extremely difficult and nontrivial. We are the first to establish such a result in the anytime constraints setting. >**W2: Difference of Stronger and Weaker constraint** LP only serves as a general framework for solving optimization problems; it is well-known that the CMDP problem is equivalent to an LP problem. So LP is used as a standard tool for optimizing the inner problem at each step; however, the primal-dual formulation and the Lyapunov function may be different. In other words, the theoretical results established in our paper are due to the carefully constructed Lyapunov function, while the LP formulation is due to the nature of the CMDP problem. Please let us know if further explanation is needed. We will also revise the manuscript to make these points clearer. >**W3: Benefits of using OMD** It is indeed true that the primal-dual method is a central tool for solving CMDP problems, primarily because CMDPs can be formulated as linear programs and are essentially constrained optimization problems. However, the primal-dual framework is just one part of the overall solution strategy. There remain many open challenges in CMDPs due to their numerous variants, such as soft vs.
hard constraints, adversarial constraints, and both model-based and model-free formulations. In our work, the achieved regret bound of $\tilde{\mathcal{O}}(\sqrt{K})$ matches the known lower bound and is optimal in the case of unknown rewards. In contrast, other approaches cannot achieve the $\mathcal{O}(1)$ reward bound even under deterministic rewards. As for constraint violation, obtaining a $\tilde{\mathcal{O}}(\sqrt{K})$ bound under adversarial constraints is highly non-trivial and, to our knowledge, has not been shown before in the anytime constraint setting. Our work is the first to establish such a result. The use of optimistic mirror descent (OMD) in our algorithm enables us to bound the regret through a sequence of one-step gradients of the surrogate objective function $f$ across $K$ episodes, as shown in Lemma 5.7. This approach, combined with a carefully designed Lyapunov function, ensures both low regret and strong constraint satisfaction. In summary, OMD is critical not only for improving convergence in the deterministic setting, but also for controlling the gap between the observed and optimal performance in each episode. Together with our Lyapunov-based dual control mechanism, it enables a unified and near-optimal treatment of both stochastic and adversarial CMDPs. We will revise the manuscript to better emphasize this connection and clarify the contributions beyond the use of OMD. >**Q1: Regarding Standard Result** Besides the response in Weakness-(2), algorithms developed under weaker constraint formulations are not directly applicable to stronger formulations. This is because weaker constraints allow for cancellation of constraint violations during the learning process (see lines 56–65 in the main paper), a property that does not hold in the stronger constraint setting. >**Q2: Benefits of using OMD** Please see the response in Weakness-(3).
Please let us know if further explanation is needed. >**Q3: Removing Slater’s condition** Due to response length limitations, we have addressed this question under Reviewer 7EBa. Kindly refer to the response to Weakness-(1) in that section. >**Q4: Experiment Result** Due to response length limitations, we have addressed this question under Reviewer 7EBa. Kindly refer to the response to Weakness-(3) in that section. We sincerely thank the reviewer for the helpful feedback. If our response satisfactorily addresses your concerns, we would greatly appreciate your consideration in raising the evaluation score. We're happy to clarify any further questions during the discussion.
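To make the OMD component discussed in this thread concrete, here is a minimal sketch of optimistic mirror descent with the entropic mirror map on a probability simplex. It strips away the occupancy-measure and dual structure of the paper's algorithm; the losses, step size, and gradient hint are illustrative assumptions.

```python
import numpy as np

# Optimistic mirror descent on the simplex: play against the predicted
# gradient (the last one observed), then update with the true gradient.
rng = np.random.default_rng(0)
d, T, eta = 5, 2000, 0.05

def md_step(y, g):
    """Entropic mirror step: multiplicative weights plus normalization."""
    z = y * np.exp(-eta * g)
    return z / z.sum()

y = np.ones(d) / d           # secondary iterate of optimistic MD
g_prev = np.zeros(d)         # optimistic hint: the last observed gradient
base = rng.normal(size=d)    # linear losses vary slowly around `base`
total_loss = 0.0
for t in range(T):
    x = md_step(y, g_prev)                   # play using the predicted gradient
    g = base + 0.01 * rng.normal(size=d)     # observe the actual gradient
    total_loss += x @ g
    y = md_step(y, g)                        # update with the true gradient
    g_prev = g

regret_per_round = (total_loss - base.min() * T) / T
print("average regret per round:", regret_per_round)
```

When consecutive gradients are close, as here, the hint is accurate and the average regret collapses quickly, which is the mechanism the rebuttal invokes for the improved bounds under deterministic rewards.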
Summary: This paper introduces the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, a novel approach for online constrained Markov decision processes (CMDPs) with anytime adversarial constraints. Unlike prior methods that assume known safe policies or rely on Slater’s condition, OMDPD achieves optimal regret $\tilde{O}(\sqrt{K})$ and strong constraint violation bounds $\tilde{O}(\sqrt{K})$ even in dynamic and adversarial settings. The algorithm leverages optimistic estimates, online mirror descent, and adaptive dual updates to balance exploration and constraint satisfaction. If accurate reward and transition estimates are available (via a generative model), the regret can further improve to O(1). OMDPD surpasses existing CMDP approaches by handling both stochastic and adversarial constraints, making it highly relevant for applications in autonomous systems, robotics, and cybersecurity. ## update after rebuttal Claims And Evidence: The paper provides a thorough theoretical analysis and compares its results against various existing CMDP algorithms. The findings demonstrate that the proposed method achieves sublinear regret and sublinear constraint violation under adversarial settings, effectively validating its theoretical claims. Methods And Evaluation Criteria: The paper introduces the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, which combines optimistic online mirror descent (OMD) with a primal-dual framework to handle adversarial constraints in CMDPs. The method leverages optimistic estimates for transition kernels, rewards, and costs, using confidence sets and Lyapunov-based updates to regulate constraint violations dynamically. The evaluation is based on two key theoretical criteria: regret minimization and strong constraint violation bounds. Theoretical Claims: The proof of Theorem 5.1 is well-structured, and Figure 1 effectively illustrates the relationships between Lemmas 5.7–5.11. 
The detailed proof steps in the appendix are logically sound, clearly presented, and follow a rigorous progression. Experimental Designs Or Analyses: No experiment results. While the paper provides strong theoretical guarantees, it lacks empirical experiments to validate the practical performance of OMDPD in real-world CMDP scenarios. Benchmarking against existing algorithms on simulated or real datasets would strengthen its applicability. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: This paper contributes to the broader literature on constrained reinforcement learning (RL), online convex optimization, and adversarial learning by introducing a unified framework for CMDPs with both stochastic and adversarial constraints. By leveraging optimistic mirror descent and primal-dual methods, the proposed OMDPD algorithm achieves optimal regret and strong constraint violation bounds, advancing research on safe decision-making in dynamic and adversarial environments with applications in autonomous systems, robotics, and cybersecurity. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: + The paper establishes that OMDPD achieves optimal regret $\tilde{O}(\sqrt{K})$ and strong constraint violation bounds $\tilde{O}(\sqrt{K})$, ensuring both efficiency and safety in stochastic and adversarial settings. These bounds improve upon prior approaches that only handle stochastic constraints or allow weaker forms of violation. + OMDPD provides a generalized approach that works under both stochastic and adversarial rewards/costs, bridging the gap between these two settings. + The theoretical results show that if accurate estimates of rewards and transitions are available ( through a generative model), the regret bound can be further improved to O(1). This highlights the algorithm’s adaptability and potential for achieving near-optimal learning in CMDPs with additional information. 
+ This paper uses refined error bounds (Lemma 5.9) for estimating transition kernels, rewards, and costs in CMDPs. By leveraging a Bellman-type law of total variance, it improves upon Lemma 29 of Efroni et al. (2020) [1] by a factor of $\tilde{O}(\sqrt{H})$, enabling more precise value estimation and better learning efficiency. Weaknesses: + The paper claims that removing Slater’s condition is an advancement, as it eliminates the need for a strictly feasible policy. However, in many practical scenarios, a near-feasible solution is often available, making this contribution less impactful than suggested. + The proposed algorithm is developed under the assumption that reward and cost functions are linear, which may limit its applicability. It remains unclear whether this approach can generalize to a broader class of convex CMDP settings where rewards and costs exhibit nonlinear dependencies on state-action pairs. Many real-world problems, such as risk-sensitive decision-making or energy management, involve complex, non-convex constraints that may not align with the assumptions made in the theoretical analysis. A discussion on potential extensions to general convex or non-convex CMDPs would strengthen the paper’s contributions and applicability. + While the paper provides strong theoretical guarantees, it lacks empirical experiments to validate the practical performance of OMDPD in real-world CMDP scenarios. Benchmarking against existing algorithms on simulated or real datasets would strengthen its applicability. [1] Efroni, Yonathan, Shie Mannor, and Matteo Pirotta. "Exploration-exploitation in constrained mdps." arXiv preprint arXiv:2003.02189 (2020) Other Comments Or Suggestions: None. Questions For Authors: Please refer to the Other Strengths And Weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the valuable comments provided by the reviewer. We address the reviewer's questions as follows. >**Response to Weakness** >**W1: Removing Slater’s condition** While it is true that in some practical scenarios a near-feasible solution may exist, this does not change our results. A relaxed assumption is always desired for theoretical analysis. We would like to emphasize that, from a theoretical perspective, we achieve the best result without the need for Slater's condition, an important and nontrivial contribution. The reason is that some existing methods [1, 2, 3, 4, 5] rely on Slater’s condition with a slackness parameter $\rho$, which quantifies how well the condition is satisfied. Their final regret and constraint violation bounds depend directly on this parameter, and when $\rho$ is small, these bounds can degrade significantly. Moreover, some algorithms ([1, 3]) need to know the slackness $\rho$, which is not practical. [1] "Optimal Strong Regret and Violation in Constrained MDPs via Policy Optimization." [2] "Online learning in CMDPs: Handling stochastic and adversarial constraints." [3] "Cancellation-free regret bounds for lagrangian approaches in constrained markov decision processes." [4] "Learning adversarial mdps with stochastic hard constraints." [5] "Best-of-Both-Worlds Policy Optimization for CMDPs with Bandit Feedback." >**W2: Convex CMDP applicability** We would like to clarify that our algorithm is not inherently restricted to linear reward and cost functions. In fact, it can be naturally extended to a broader class of convex CMDP settings where the reward and cost functions are general convex functions of the occupancy measure $q$. Specifically, if we define the reward and cost functions as $h_k(q)$ and $g_k(q)$, respectively, the general optimization problem becomes: $$ \max_{q \in \mathcal{Q}} \ \frac{1}{K} \sum_{k=1}^K h_k(q) \quad \text{s.t.} \quad g_k(q) \leq 0.
$$ To ensure theoretical guarantees, we require the following basic conditions: (1) The reward is generated stochastically; the cost can be generated either stochastically or adversarially; (2) The potential function $f$ used in the updates is 1-strongly convex; (3) The occupancy measure lies in a compact convex set (Fact 5.3); (4) The reward $h_k$ and cost $g_k$ functions are convex and Lipschitz continuous with respect to the occupancy measure $q$: $|h_k(q_1) - h_k(q_2)| \leq C_1 \|q_1 - q_2\|_2, \quad |g_k(q_1) - g_k(q_2)| \leq C_2 \|q_1 - q_2\|_2.$ Under these conditions, the core structure of our algorithm remains applicable. We will provide the proof in our final version. >**W3: Experiment Results** We implemented our algorithm on a synthetic CMDP with the following settings: $|\mathcal{S}| = 5$, $|\mathcal{A}| = 3$, and $H = 5$. For the adversarial constraint, the cost function was randomly selected from a fixed cost set with values in the range $[-1,1]$. After running the algorithm for $K = 3{,}000$ episodes, we report the cumulative constraint violation in the table below: | Episode | K=1 | K=500 | K=1000 | K=1500 | K=2000 | K=2500 | K=3000 | |-----------------------|-------|---------|---------|---------|---------|---------|---------| | Constraint Violation | 12.75 | 757.07 | 882.14 | 957.41 | 1012.13 | 1055.41 | 1091.21 | It is clear that sublinear growth of the constraint violation is achieved. We sincerely thank the reviewer for the helpful feedback. If our response satisfactorily addresses your concerns, we would greatly appreciate your consideration in raising the evaluation score. We're happy to clarify any further questions during the discussion. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The authors' explanations have thoroughly addressed my previous questions, particularly regarding convex CMDP applicability, and the experiment results show strong performance under adversarial constraints. I am willing to raise my score. 
--- Reply to Comment 1.1.1: Comment: Thank you for your acknowledgment and for increasing the score of our paper, we truly appreciate it!
Summary: The paper studies the online learning problem for episodic constrained Markov decision processes, where the constraint functions are either stochastic or adversarial, and the transition function is unknown. A key technique the authors introduce is a surrogate objective function for the policy optimization step. The authors propose an optimistic mirror descent primal-dual algorithm: (1) the primal step updates the occupancy measure via the standard optimistic gradient step for the surrogate objective function; (2) the dual step performs the standard dual ascent step. In both stochastic and adversarial settings, the authors prove that the standard regret and the strong constraint violation (which counts only non-negative violations) are sublinear in the number of episodes, without assuming Slater's condition or knowledge of a strictly feasible policy. Claims And Evidence: (1) A key quantity of the main result is $\mathcal{C}$, which is used throughout the proof. Since it is not well-defined for the KL divergence, the main result can be vacuous. (2) For the adversarial cost case, the authors assume the feasibility of all constraints, which is not the standard adversarial loss setting. (3) In the regret analysis, the comparison policy is a nearly feasible policy instead of an optimal policy. In this sense, the regret bounds can be suboptimal. Methods And Evaluation Criteria: The proposed primal-dual method seems appropriate for the problem, but its conditions for correct performance characterization remain unclear. Theoretical Claims: I went through the proof sketch in Section 5, but I didn't check the proofs. The authors provide a proof sketch for the adversarial and stochastic settings. Their key difference is the violation analysis. It is not clear why both cases can use the same comparison policy in Lemma 5.6. Experimental Designs Or Analyses: The authors didn't provide experimental results. 
Supplementary Material: I briefly reviewed some proof steps but didn't check them in detail. Relation To Broader Scientific Literature: This work is of interest to the broader safe reinforcement learning community. Essential References Not Discussed: To the best of my knowledge, essential references have been discussed. Other Strengths And Weaknesses: Other weaknesses on clarity: (1) The violation analysis of adversarial constraints in Theorem 5.12 follows the same analysis. This suggests that the adversarial setting is not inherently more challenging than the stochastic setting. It would be helpful if the authors could clarify the technical challenges for analyzing adversarial constraints. (2) The authors apply optimistic online mirror descent to a surrogate objective function for the primal update, achieving strong constraint violation bounds. It would be helpful if the authors could clarify why this yields strong constraint violation bounds but not strong regret bounds. The optimistic gradient method was used in the following papers to improve convergence. It would be helpful if the authors could clarify the challenges of obtaining strong regret and constraint violation bounds. - ReLOAD: Reinforcement Learning with Optimistic Ascent-Descent for Last-Iterate Convergence in Constrained MDPs - Last-Iterate Convergent Policy Gradient Primal-Dual Methods for Constrained MDPs (3) A key step to bound the violation of adversarial constraints is to use the drift-plus-penalty framework for (23). It would be helpful if the authors could provide a formal proof of it for the CMDP setting. Other Comments Or Suggestions: It would be helpful if the authors could compare different techniques in the literature that lead to varying regret guarantees and highlight the technical challenges in achieving strong regret or constraint violation bounds in the adversarial/stochastic settings. 
Questions For Authors: (1) What is the definition of anytime adversarial constraints? (2) What is the definition of $\mathcal{N}$ in Theorem 5.1? (3) Is the optimal policy in Theorem 5.6 episode-dependent? (4) How do accurate estimates of rewards and transitions improve regret bounds? Code Of Conduct: Affirmed. Overall Recommendation: 4
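For concreteness on the metric this review centers on: the strong constraint violation counts only the non-negative per-episode violations, whereas the standard (weak) cumulative notion allows negative slack in one episode to cancel positive violation in another. A minimal sketch of the distinction (the per-episode cost values are purely illustrative):

```python
def weak_violation(episode_costs):
    # standard cumulative violation: cancellation between episodes is allowed
    return sum(episode_costs)

def strong_violation(episode_costs):
    # strong violation: only non-negative per-episode violations are counted
    return sum(max(c, 0.0) for c in episode_costs)

# illustrative per-episode constraint values g_k(q_k); positive means violated
costs = [0.5, -0.3, 0.2, -0.1]
assert abs(weak_violation(costs) - 0.3) < 1e-9    # slack cancels violation
assert abs(strong_violation(costs) - 0.7) < 1e-9  # no cancellation
```

A policy can thus have small (even negative) weak violation while violating the constraint in many episodes, which is why the strong notion is the stricter guarantee.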
Rebuttal 1: Rebuttal: We appreciate the valuable comments provided by the reviewer. We address the reviewer's questions as follows. >**C1: Bounded $\mathcal{C}$** In our paper, we assume that $U$ is a 1-strongly convex function. When $U(q)$ is the entropy function, the resulting Bregman divergence is the KL divergence, which lacks a guaranteed upper bound. A common remedy is to assume a bound on the KL divergence [1] or to introduce a probability mixing step to avoid boundary issues [1][2]. In both cases, the constant $\mathcal{C}$ is measure-dependent but independent of the time horizon $K$, so our algorithm's performance bound remains valid. [1] arxiv.org/abs/1908.00305 [2] arxiv.org/abs/1311.1869 >**C2: Adversarial setting** We clarify that the adversarial loss setting in our paper follows the standard formulation widely studied in constrained online convex optimization ([1,2]). The reviewer may be referring to a different setting, where constraints are satisfied only in expectation over time ([3]). In contrast, our work focuses on the stricter anytime constraint setting, requiring constraint satisfaction in every episode. To our knowledge, ours is the first to achieve a sublinear $\tilde{O}(\sqrt{K})$ constraint violation under this setting, which we hope will inspire further research. [1] arxiv.org/abs/2310.18955 [2] "Online convex optimization with hard constraints: Towards the best of two worlds and beyond." [3] "Online learning in CMDPs: Handling stochastic and adversarial constraints." >**C3: Regarding the near-optimal policy** The comparison policy is the optimal policy for the CMDP problem (3); we decompose the regret analysis into two error terms in Eq.(24). This is the standard definition of regret. 
>**Methods Criteria** We clarify that our theoretical guarantees rely on the following key assumptions: (1) no Slater condition is required (a main contribution); (2) stochastic rewards, stochastic/adversarial costs; (3) the potential function is 1-strongly convex (due to our Lyapunov choice); (4) the occupancy measure is a compact convex set; and (5) reward/cost functions are convex and Lipschitz in the occupancy measure. >**Theoretical Claims and Experiment** We would like to clarify that we use a single notation $\pi^*$ to denote the optimal policy under both stochastic and adversarial constraint settings for simplicity; although the optimal policies in these two cases are generally different, the theoretical analysis applies to both scenarios. Kindly refer to the response to Weakness-(3) in Reviewer 7EBa to check the experiment results. >**W1: Challenge of the adversarial setting** While our algorithm is designed for the adversarial constraint, it also applies to the stochastic constraint setting. This is one of the main contributions of our work: the proposed algorithm provides a unified framework that achieves near-optimal performance in both settings, without requiring prior knowledge of the environment type. Compared with existing approaches, reaching this level of generality and optimality requires careful design of the surrogate objective function, which ensures that unsafe actions are avoided while maintaining regret bounds through the use of the OMD algorithm. Our carefully constructed Lyapunov function plays a key role in guaranteeing these theoretical properties. >**W2:** We emphasize that minimizing regret and enforcing strong constraint satisfaction are different objectives. Our paper primarily addresses adversarial constraint violations. While both [1][2] adopt OMD, the key novelty of our work lies in Eq.(23), which provides an upper bound on the sum of the exponential dual variable and the cumulative estimated regret. 
This result is enabled by the Lyapunov function (Eq.(20)), which incorporates both the reward and cost via a rectified transformation. The theoretical contribution does not arise solely from using the OMD update. Its advantage is that it allows us to bound the episode-wise difference $\sum_{k=1}^K \left(f_k(q_k) - f_k(q^*)\right)$ in a clean way. Since [1, 2] do not adopt a similar Lyapunov-based design, they are unable to provide strong bounds on constraint violations under adversarial costs. Moreover, from the drift analysis, we show that the dual variable $\lambda_K$ can be controlled by bounding its exponential transformation. >**W3:** The proof of Eq.(23) is shown in the proof of Lemma 5.8 (Appendix C.3, lines 816-851). >**Q1:** The anytime adversarial constraint assumes that the cost functions are chosen adversarially at each episode, and the agent is required to satisfy the constraints at all times. >**Q2:** $\mathcal{N}$ is the maximum number of non-zero transition probabilities. >**Q3:** The optimal policy is not episode-dependent; it is defined in Eq.(3). >**Q4: $O(1)$ bound** The improvement can be achieved due to our design of the Lyapunov function and the OMD component. Following Lemma 5.8, the terms (I) and (III) will disappear when the reward is deterministic, which improves the bound.
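For intuition on the primal step discussed in this exchange: with the entropic mirror map, an optimistic mirror descent update on the probability simplex keeps a secondary iterate and plays against a gradient prediction (here simply the last observed gradient). The following is a generic illustrative sketch of that update on a toy problem, not the authors' exact occupancy-measure update:

```python
import numpy as np

def omd_step(z, grad, pred, eta):
    """One optimistic mirror descent step on the probability simplex with the
    entropic mirror map (multiplicative-weights form).
    z: secondary iterate; grad: observed gradient; pred: gradient prediction
    for the next round (e.g., the last observed gradient)."""
    z_new = z * np.exp(-eta * grad)
    z_new /= z_new.sum()
    w_new = z_new * np.exp(-eta * pred)  # play against the prediction
    w_new /= w_new.sum()
    return z_new, w_new

# toy run: coordinate 0 always has the smallest loss, so the played point w
# should concentrate on it
z = np.ones(3) / 3
for _ in range(200):
    g = np.array([0.1, 0.9, 0.9])
    z, w = omd_step(z, g, g, eta=0.1)
assert abs(w.sum() - 1.0) < 1e-9 and w[0] > 0.99
```

In the paper's setting the iterate would be an occupancy measure and the gradient would come from the surrogate objective; the sketch only illustrates the optimistic two-step structure.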
Breaking the Curse of Multiagency in Robust Multi-Agent Reinforcement Learning
Accept (poster)
Summary: This paper studies the problem of multi-player general-sum robust Markov games (RMGs). By proposing a new robustness measure, called a fictitious uncertainty set, that centers at $(s,a_i)$ instead of $(s,\mathbf a)$, the authors break the curse of multiagency (i.e., the dependency is $\sum_{i=1}^n \lvert A_i\rvert$ rather than $\prod_{i=1}^n \lvert A_i\rvert$) for the first time in RMGs. Claims And Evidence: There is no proof sketch in the main text, making it pretty hard to justify the main insight in Theorem 4. I trust the authors out of good faith. Methods And Evaluation Criteria: The models and metrics are standard and fair. Theoretical Claims: Didn't check the proof as there is no concise sketch in the main text or the appendices. Experimental Designs Or Analyses: N/A Supplementary Material: No Relation To Broader Scientific Literature: The idea of the fictitious uncertainty set looks interesting, but the authors didn't discuss it a lot (and whether it can be useful elsewhere). Also, see the Questions for a series of recent similar ideas in (non-robust) linear MGs; I'm not sure whether they're relevant enough, but they look a bit similar. Essential References Not Discussed: I feel the discussion is pretty complete. See the Questions part for several recent papers in (non-robust) linear MGs that I'm not sure are relevant enough. Other Strengths And Weaknesses: As mentioned earlier, the presentation can be improved: There is too much background information, making the room for algorithms, discussions of technical contributions, and a proof sketch very limited. It looks like the main text could be polished to make slightly more room (e.g., the math equations in Theorem 4.1 are spaced in a luxurious way), but I still suggest the authors move more background information into the appendices for ease of reading. The convergence rate $O(\epsilon^{-4})$ is slow, but it's fine as it's the first to break the curse of multiagency. 
The assumption of a generative model is also undesirable, but it's fine since it also appears in previous works on RMGs. Other Comments Or Suggestions: See below. Questions For Authors: In the recent literature on breaking the curse of multiagency in (non-robust) linear Markov games, a similar idea of "independent linear function approximation" has emerged, which essentially assumes that Q-functions are linear as $Q(s,a_i)=\theta^T \phi(s,a_i)$, instead of the previous "global linear approximation" scheme in which $Q(s,\mathbf a)=\theta^T \phi(s,\mathbf a)$. See below for a series of such papers; the first two are concurrent and the last two are concurrent: 1. Qiwen Cui, Kaiqing Zhang, and Simon Du. "Breaking the curse of multiagents in a large state space: RL in Markov games with independent linear function approximation." COLT 2023. 2. Yuanhao Wang, Qinghua Liu, Yu Bai, and Chi Jin. "Breaking the curse of multiagency: Provably efficient decentralized multi-agent RL with function approximation." COLT 2023. 3. Junyi Fan, Yuxuan Han, Jialin Zeng, Jian-Feng Cai, Yang Wang, Yang Xiang, and Jiheng Zhang. "RL in Markov games with independent function approximation: Improved sample complexity bound under the local access model." AISTATS 2024. 4. Yan Dai, Qiwen Cui, and Simon Du. "Refined sample complexity for Markov games with independent linear function approximation." COLT 2024. **Question**: 1. Is your idea of assuming structures around $(s,a_i)$ instead of $(s,\mathbf a)$ similar to the independent linear function approximation above? 2. Could you sketch how your model (compared to the previous $(s,\mathbf a)$ one) helps you break the curse of multiagency in a more technical way? **Remark.** The current rating assumes all the proofs 1) are correct, and 2) do not contain significant technical innovations that can be of independent interest. Feel free to correct me if I am missing something. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Q1. The idea of the fictitious uncertainty set looks interesting; can it be useful elsewhere? Fictitious uncertainty sets, inspired by behavioral economics, hold significant value in game theory and behavioral economics and have several meaningful applications: * **Understanding human preferences regarding risk and robustness under uncertainty**: Humans are generally risk-averse and prefer robustness under uncertainty [1](https://dl.acm.org/doi/10.1145/3490486.3538351) [2](https://arxiv.org/html/2502.11243v1) [3](https://link.springer.com/article/10.1007/s10683-005-5374-7), behaviors that correspond to our fictitious uncertainty set rather than the $(s,\mathbf{a})$-rectangular uncertainty set used in prior works. Applying our framework to real-world human data could help predict individual risk preferences and facilitate the design of personalized decision-making strategies for users. * **Improving robustness in safety-critical applications.** As mentioned in the introduction, numerous safety-critical scenarios can benefit from our proposed framework to identify optimal solutions for multi-agent interactions, such as financial markets, social dilemmas, autonomous driving, and human-robot interactions. ### 2. Further improve the presentation. We sincerely thank you for your valuable suggestion regarding the writing. In the revised version of our work, we have moved part of the background information to the Appendix and included the algorithm description in the main text. ### 3. The convergence rate We acknowledge that the $O(\epsilon^{-4})$ convergence rate may seem suboptimal, but note that rates of $O(\epsilon^{-4})$ or $O(\epsilon^{-3})$ are common in the MARL literature, as seen in [4](https://arxiv.org/pdf/2302.03673) and [5](https://arxiv.org/pdf/2204.03991). As the first to break the curse of dimensionality in RMGs, we see great value in further improvements and plan to explore ways to accelerate the algorithm in future work. ### 4. 
The relationship of this work to linear MG works * Is the idea of assuming structures around $(s,a_i)$ instead of $(s, \textbf{a})$ similar to the independent linear function approximation in linear MGs? **The proposed $(s,a_i)$ structure plays a fundamentally different role in robust MGs compared to the methods used in non-robust linear MGs.** 1. **Different problems and challenges: Nonlinearity in Robust MGs.** Robust MGs pose additional challenges compared to non-robust MGs due to the nonlinearity of the robust value function, unlike the linear payoff functions in non-robust MARL, which fundamentally alters the problem structure and solution methods. 2. **Inspired by game theory and behavioral economics.** The $(s,a_i)$ structure is considered for two key reasons: 1) Capturing human behavior in reality, inspired by behavioral economics: **both $(s,a_i)$-rectangular structures and $s$-rectangular structures ($\otimes_{s\in \mathcal{S}} \mathcal{U}_\rho^{\sigma_i}(\mathbb{E}_{\mathbf{a} \sim \pi_h} P_{h, s, \mathbf{a}})$) can predict human behavior**, but $(s, \textbf{a})$ fails to do so; 2) We ultimately choose to consider the $(s,a_i)$ structure instead of the $s$-structure because the $(s,a_i)$-rectangular set guarantees that Nash (as well as CCE and CE) equilibria exist, whereas this is not the case for the $s$-rectangular set. 3. **Distinct techniques for breaking the curse of dimensionality in $(s,a_i)$ structures** A direct indication is that our techniques developed for $(s,a_i)$ structures also apply to the $(s)$-uncertainty set, whereas no corresponding result exists for non-robust linear MGs. * How does your model technically break the curse of multiagency compared to prior work? A concise answer is that it allows for the decomposition of the non-linear payoff function in robust MGs. 
The key reason why the $(s, a_i)$ structure helps break the curse of dimensionality lies in its role within the Bellman equation: $$ V^{\pi}_{i,h}(s) = \mathbb{E}_{\mathbf{a} \sim \pi_h(s)}[r_{i,h}(s, \mathbf{a})] + \mathbb{E}_{a_i \sim \pi_{i,h}(s)} \left[ \inf_{\mathcal{U}_{\rho}^{\sigma_i} \left(P^{\pi_{-i}}_{h,s,a_i} \right)} P V^{\pi}_{i,h+1} \right], $$ where $V^{\pi}_{i,h}$ becomes a linear function with respect to the $i$-th agent's policy $\pi_{i,h}$. This property also holds for the $(s)$-uncertainty set but not for the $(s, \mathbf{a})$ uncertainty set used in prior works. Consequently, we can leverage concentration inequalities to control the gap between the estimated and true value functions, effectively breaking the curse of dimensionality for $(s, a_i)$ and $(s)$ uncertainty sets, but not for $(s, \mathbf{a})$. --- Rebuttal Comment 1.1: Comment: Okay, I agree that while in both cases $V_{i,h}^\pi(s)$ becomes linear in $\pi_{i,h}$, there aren't many similarities. Thank you for your clarification. I feel this paper is pretty interesting and recommend an accept. --- Reply to Comment 1.1.1: Comment: Thank you so much to the reviewer for recognizing our insights and contribution and raising the score to support this work!
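For intuition on the inner infimum in the rebuttal's Bellman equation: under one common instantiation of a $\sigma$-sized uncertainty set, a total-variation ball around a nominal kernel, the worst case is attained by moving up to $\sigma$ of probability mass from the highest-value next states onto the lowest-value one. A minimal sketch under that assumed TV instantiation (not necessarily the divergence used in the paper):

```python
import numpy as np

def tv_worst_case_value(p0, v, sigma):
    """inf of p . v over distributions p with (1/2) * ||p - p0||_1 <= sigma:
    strip up to sigma of mass from the highest-value states and place it on
    the lowest-value state."""
    p = np.asarray(p0, dtype=float).copy()
    v = np.asarray(v, dtype=float)
    j = int(np.argmin(v))          # destination: lowest-value next state
    budget = float(sigma)
    for i in np.argsort(-v):       # sources: highest-value states first
        if budget <= 0:
            break
        if i == j:
            continue
        move = min(p[i], budget)
        p[i] -= move
        p[j] += move
        budget -= move
    return float(p @ v)

# example: uniform nominal kernel over next-state values [0, 1]
p0, v = [0.5, 0.5], [0.0, 1.0]
assert tv_worst_case_value(p0, v, 0.2) <= float(np.dot(p0, v))
```

Because this infimum is taken per $(s, a_i)$ (or per $s$), it stays linear in each agent's own policy, which is the property the rebuttal relies on.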
Summary: The paper considers the problem of strategic interactions in uncertain environments; namely, robust Markov games (MGs). Robust Markov games are the multi-agent extension of robust Markov decision processes. The authors consider MGs where the transition kernel, i.e., the dynamics governing state transitions, drifts from some nominal value within a given uncertainty set; i.e., players might be trained on a game with a particular transition kernel, but when they deploy their policies, they do so on a game with a slightly shifted transition kernel. The goal of each agent is to compute policies that unilaterally perform well under the worst-case shift of the game's parameters (rewards and transitions). This objective gives rise to the notion of robust equilibria (robust Nash equilibrium, robust coarse-correlated equilibrium). The authors contribute: 1) a new assumption on the uncertainty sets, *fictitious* uncertainty sets that depend on the state and each agent's own action; they do not depend on the joint action of all players; 2) an algorithm with provable guarantees that converges to a robust coarse-correlated equilibrium in games that satisfy the latter assumption. The sample and iteration complexities are polynomial in the natural parameters of the game and break the curse of multiagency, i.e., the dependence is on the sum of the sizes of the individual action spaces and not the product. Claims And Evidence: In general, the authors clearly support their claims with formal proofs. One of the parts that is slightly confusing is when they note *"To the best of our knowledge, Robust-Q-FTRL with the above sample complexity is the first algorithm for RMGs breaking the curse of multiagency, regardless of the types of uncertainty sets."* Yes, their algorithm breaks the curse of multiagency, but this is due to the additional assumption. 
They convincingly argue that this is a reasonable assumption; nevertheless, the claim that the algorithm breaks the curse of multiagency in RMGs is imprecise. Methods And Evaluation Criteria: The methods used to support the claims were formal mathematical reasoning. Theoretical Claims: I went over the proof of Theorem 3.1 in the appendix, which proves the existence of robust NEs and thereby implies the existence of robust CCEs. I also went over the main parts of the proof of Theorem 4.1 (the main contribution of the paper) and believe that it is correct. Experimental Designs Or Analyses: I checked the validity of the mathematical arguments. Supplementary Material: I went over the proof of Theorem 3.1 in the appendix, which proves the existence of robust NEs and thereby implies the existence of robust CCEs. I also went over the main parts of the proof of Theorem 4.1 (the main contribution of the paper) and believe that it is correct. Relation To Broader Scientific Literature: The authors seem to cite most relevant work in MARL, robust MDPs, and robust MGs. They even connect their assumptions to economic theory, which I really appreciate, as the assumption seems well-founded. Essential References Not Discussed: I do not know the area of robust MGs and MDPs very well, so I do not know whether they are missing some crucial reference. Other Strengths And Weaknesses: * One weakness is the writing. I struggled to understand their definitions and how their fictitious uncertainty sets compare with previous assumptions; it required opening the cited papers to understand that. Since the authors are introducing a new assumption, in my opinion, they should take the time to carry out an explicit comparison between their assumption and the state-joint-action rectangular uncertainty set assumption (i.e., the most similar assumption to theirs). * There seems to be a gap in the understanding of the complexity of solving RMGs. 
Is it perhaps impossible to break it for state-joint-action rectangular uncertainty sets? Answering in the positive would strengthen the necessity of this paper's assumption. * The claim that the curse of multiagency is surpassed is true, but only thanks to this assumption. Other Comments Or Suggestions: Equation 9: There is no description of what the Cartesian product is over; I am guessing all $h,s,a$. 289 to 305: left col. notation $V_{i,h}^{\pi, P}(a_i, \bm{a}_{-i})$ is confusing. Why does the value function take actions as arguments? 306 to 317: does the $(s,\bm{a})$-rectangularity condition not degrade to the $(s,a)$-rectangularity condition in case the other agents remain fixed? Questions For Authors: * Is it impossible to break the curse of multiagency for state-joint-action rectangular uncertainty sets? * What are other assumptions that can lead to polynomial sample complexity results and make sense? * Would you be able to get better convergence rates if you used optimistic FTRL [Syrgkanis et al. 2015]? Syrgkanis, V., Agarwal, A., Luo, H. and Schapire, R.E., 2015. Fast convergence of regularized learning in games. Advances in Neural Information Processing Systems. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### 1. "The proposed algorithm is the first to overcome the curse of multiagency in RMGs, irrespective of the uncertainty set types." Is it due to an additional assumption? Our work breaks the curse of multiagency **through two key innovations: the introduction of a new class of fictitious RMGs and a new algorithm**. Both elements are essential. While the algorithm addresses a specific type of RMGs—the proposed fictitious RMGs—it is also the first to break the curse within the broader class of RMGs, as no prior work has achieved this, even for other subclasses of RMGs. Specifically, * **Novel and Realistic Assumption Inspired by Behavioral Economics** General RMGs are computationally intractable to solve, which leads to the widely used rectangularity assumptions. We did not add an additional assumption, but rather replaced the state-joint-action rectangularity assumption used in prior work with a realistic rectangularity assumption inspired by human behavior in behavioral economics. While this fictitious uncertainty set helps break the curse of multiagency, its primary purpose is to realistically model human behavior. * **Tailored Algorithm for Breaking the Curse of Multiagency.** Breaking the curse of multiagency in our proposed RMGs is more challenging than in standard MGs due to the nonlinearity of the robust value function, unlike the linear payoff functions in standard MARL. To address this, Algorithm 1 uses tailored sampling methods to handle the nonlinearity and integrates them with a customized online learning algorithm. ### 2. Is it "impossible" to break the curse for state-joint-action rectangular sets? * **Possibly impossible, but a decisive conclusion is hard to reach.** We agree with the reviewer's intuition that it may be impossible. Prior work shows an exponential sample complexity lower bound for computing Nash in general-sum games [2](http://arxiv.org/abs/1606.04550). 
However, no work has established a lower bound for computing CCE/CE in any type of game. This remains an open question in game theory and a promising direction for future research. * We emphasize that the proposed fictitious uncertainty set is inspired by how human behavior is observed in reality through behavioral economics; it is not a simpler alternative prompted by the infeasibility of the state-joint-action approach from prior works. ### 3. Can other assumptions lead to polynomial sample complexity results and make sense? A "reasonable" uncertainty set should ensure the well-posedness of the problem, that is, guarantee the existence of equilibria (e.g., Nash). This presents challenges in problem formulation (assumptions), requiring both a well-posed uncertainty set and feasible algorithms with polynomial sample complexity to compute the corresponding equilibria, which may involve trade-offs between the two. For example, $s$-rectangular uncertainty sets may ensure polynomial sample complexity but fail to guarantee a Nash equilibrium, making them less desirable. Exploring other uncertainty sets that ensure well-posedness and allow tractable algorithms presents exciting opportunities for both the game theory and MARL communities. ### 4. Better convergence rates with optimistic FTRL [Syrgkanis et al. 2015]? A brief answer, based on the authors' intuition, is no. The non-linear payoff functions in RMGs create a dilemma between the statistical complexity of estimating the transition kernel and the regret from online adversarial learning algorithms (e.g., FTRL and optimistic FTRL). The current bottleneck lies in the statistical complexity of estimating the transition kernel. Since we already use a state-of-the-art online learning algorithm with optimal sample complexity in standard MARL [1](http://arxiv.org/abs/2208.10458), switching to optimistic FTRL is unlikely to improve the results further. ### 5. 
Others: * Adding a detailed comparison between the proposed set and prior works. * As the reviewer suggested, we will certainly include a more detailed comparison in the appendix of the revised version. * The reviewer is correct that the Cartesian product in Equation (9) is over all $h,s,a_i$ for any $i$-th agent. * 289 to 305: Why does the value function take actions as arguments? * Here, $V$ represents a general payoff function that naturally depends on all agents' actions, not the value function. We will revise $V$ to a different notation, $u$, to avoid ambiguity. * 306 to 317: Will $(s,\textbf{a})$-rectangularity degrade to the single-agent $(s,a)$-rectangular RMDP case if the other agents remain fixed? * No, it does not degrade to $(s,a)$-rectangular RMDPs. Even if other agents' policies $\pi_{-i}$ are fixed but **stochastic**, their selected joint actions $\textbf{a}_{-i}$ will vary, leading to corresponding uncertainty sets around each joint action $\textbf{a}_{-i}$. As a result, other agents cannot be treated as a fixed part of the environment, preventing the model from simplifying to the single-agent $(s,a)$-rectangular case.
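Regarding the FTRL component of Robust-Q-FTRL and the optimistic-FTRL question discussed in this rebuttal: with an entropic regularizer, the FTRL policy update reduces to exponential weights over cumulative Q-estimates. A minimal illustrative sketch (the robust Q-estimates are taken as given, and the function name is ours, not the paper's):

```python
import numpy as np

def ftrl_policy(cum_q, eta):
    """FTRL with an entropic regularizer over cumulative Q-estimates:
    pi proportional to exp(eta * cum_q), computed with max-subtraction
    for numerical stability."""
    q = np.asarray(cum_q, dtype=float)
    w = np.exp(eta * (q - q.max()))
    return w / w.sum()

# the policy concentrates on the action with the largest cumulative Q-estimate
pi = ftrl_policy([3.0, 1.0, 1.0], eta=2.0)
assert abs(pi.sum() - 1.0) < 1e-9 and pi[0] > pi[1]
```

In the robust setting, the `cum_q` entries would be the running sums of robust Q-value estimates per $(s, a_i)$; the update itself is the standard exponential-weights form.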
Summary: The paper proposes a robust multi-agent reinforcement learning framework based on a new fictitious uncertainty set. It proves the existence of robust Nash equilibria and coarse correlated equilibria, and then introduces a novel algorithm, Robust-Q-FTRL, which adaptively samples from a nominal generative model and solves a dual optimization problem to estimate robust Q-values. Robust-Q-FTRL breaks the curse of multiagency and thus improves scalability compared to prior methods. Claims And Evidence: The paper's claim that it breaks the curse of multiagency is supported by sample complexity guarantees. Methods And Evaluation Criteria: The paper is mainly theoretical, and it makes sense to compare its complexity with that of the existing uncertainty-set-based baselines. Theoretical Claims: I did not do a detailed verification of the proofs for the theoretical claims. However, the arguments presented appear logically sound based on the provided explanations. Experimental Designs Or Analyses: There are no experiments. Supplementary Material: There is no supplementary material submitted. Relation To Broader Scientific Literature: Breaking the curse of multiagency is a longstanding challenge in multi-agent RL, where the exponential blow-up of joint actions severely hinders scalability. This paper aligns with recent theoretical efforts to provide polynomial sample complexity for robust MARL. It contributes to ongoing work on designing algorithms that offer strong theoretical guarantees of efficiency. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: - Breaking the curse of multiagency is an important problem in MARL. - The sample complexity bounds surpass those of existing baselines. Weakness: - The paper lacks empirical experiments, so the method's practical performance remains uncertain. Other Comments Or Suggestions: It would be better if the authors could empirically compare the convergence rate of Robust-Q-FTRL with the baselines. 
Questions For Authors: - Can some of the assumptions in Robust-Q-FTRL be relaxed to make it applicable to deep MARL? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the careful reading of the paper and the insightful and valuable feedback.

### 1. Additional experiments verifying the effectiveness of Robust-Q-FTRL against baseline methods could be beneficial.

Thank you very much for this valuable suggestion! As the reviewer rightly observed, in this work we focus on taking an initial step toward developing a clear and realistic formulation and framework for players in multi-agent systems under uncertainty. Towards this, we introduce a fictitious uncertainty set inspired by behavioral economics, establish the existence of Nash equilibria (as well as CCE and CE), and propose an algorithm with theoretical guarantees. As the reviewer suggested, further experimental validation of both the proposed formulation and the proposed algorithm, with comparisons to baselines across diverse application scenarios, would be highly valuable. In the future, we are considering several such scenarios, including autonomous driving simulations in CARLA and real-world experiments for human-robot interaction with the humanoid robot Unitree G1.

### 2. Can some of the assumptions in Robust-Q-FTRL be relaxed to make it applicable to deep MARL?

Thank you for raising this valuable point! Both assumptions underlying Robust-Q-FTRL can be relaxed to better align with practical deep MARL settings, which are truly interesting future directions. Specifically:

* **Relaxation of data collection through a generative model:** Our theoretical findings have the potential to extend to more practical data collection scenarios commonly used in deep MARL, such as online or offline settings. Exploring these extensions represents an interesting direction, which introduces additional challenges—particularly in online settings, which inherently pose difficulties in terms of statistical estimation and sample efficiency [2].
* **Relaxation of the tabular Markov game formulation for MARL problems:** We believe our current results in tabular cases provide a strong foundation for exploring more general scenarios involving function approximation. This aligns closely with deep MARL, where neural networks are typically employed to approximate policies and Q/V-value functions. Nevertheless, adapting our approach to general robust MARL problems (e.g., robust linear MARL) will require distinct problem formulations—an open research area with no existing solutions to the best of our knowledge. Moreover, the algorithm design and theoretical analysis frameworks would necessitate different assumptions, such as linearization conditions for linear function approximation [1] and realizability or low-rank structural assumptions for general function approximation.

> [1] Yuanhao Wang, Qinghua Liu, Yu Bai, and Chi Jin. "Breaking the curse of multiagency: Provably efficient decentralized multi-agent RL with function approximation." COLT 2023. \
[2] Lu, Miao, et al. "Distributionally robust reinforcement learning with interactive data collection: Fundamental hardness and near-optimal algorithm." arXiv preprint arXiv:2404.03578 (2024).
Summary: This paper addresses the robustness issue in MARL by proposing a novel approach based on fictitious uncertainty sets. The main contributions are as follows:
1. The authors define a new type of uncertainty set, which incorporates both environmental uncertainty and the behavior of other agents. They then prove the existence of robust NE and CCE under it.
2. They also design a sample-efficient algorithm, Robust-Q-FTRL. The algorithm leverages a tailored adaptive sampling strategy to find an approximate robust CCE with only polynomial sample complexity.

Claims And Evidence: All claims made in the submission are well-supported by clear and convincing evidence.

Methods And Evaluation Criteria: This paper uses sample complexity as its evaluation criterion to describe the effectiveness of the algorithm, and it's meaningful.

Theoretical Claims: No obvious errors were found, but further validation is still required.

Experimental Designs Or Analyses: There is no experiment, but there are comprehensive theoretical proofs, including the definition of the fictitious uncertainty set, the existence of robust NE and CCE, and the sample complexity analysis of Robust-Q-FTRL.

Supplementary Material: Appendices A and B. The former gives details on Robust-Q-FTRL; the latter additionally describes related works.

Relation To Broader Scientific Literature: It can open up new research directions in MARL, such as uncertainty set selection and construction.

Essential References Not Discussed: No other related works.

Other Strengths And Weaknesses:
Strengths:
1. Inspired by behavioral economics, the proposed fictitious uncertainty set integrates environmental uncertainty and the behavior of other agents, better reflecting real-world human decision-making and demonstrating significant innovation.
2. The authors provide rigorous proofs for the existence of robust NE and CCE, along with a detailed sample complexity analysis, offering solid theoretical support.
3. Robust-Q-FTRL is the first algorithm with sample complexity scaling polynomially with all key parameters in RMGs, significantly improving scalability. This may open up new research directions in related areas.
4. The paper is well-organized, with a clear presentation of the research background and preliminary knowledge. The logical flow is rigorous, making it friendly to readers and easy to follow.
Weaknesses:
1. The paper lacks experiments or simulations to verify the effectiveness of the algorithm. While the theoretical contributions are notable, the absence of empirical evidence makes it difficult to fully assess its practical applicability.
2. There is no detailed discussion of computational complexity. Although the sample complexity is optimized, the algorithm's computational complexity may still be high, especially in large state and action spaces.

Other Comments Or Suggestions: In Section 1.2, the sentence "... through rational deviations.." (page 2, line 97) appears to have an extra period at the end, which should be removed for consistency.

Questions For Authors: I believe this is an excellent paper and I have only one question:
Q1: How does this work perform in real-world MARL tasks? Are there any practical applications that have already been implemented? This question does not affect my overall evaluation of the paper.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for recognizing and appreciating our contributions, both in terms of the problem formulation and the technical results. This acknowledgment is extremely rewarding!

### 1. How does this work perform in real-world MARL tasks, and are there existing practical applications?

Thanks for pointing out this essential point! Practical usefulness is indeed the main motivation behind our problem formulation and the proposed fictitious uncertainty set. Currently, two promising classes of applications emerge from this work, providing exciting future research directions:

* **Understanding human preferences regarding risk and robustness under uncertainty:** A classical finding in behavioral economics is that humans are typically risk-averse and prefer robustness when facing uncertainty stemming from other players or the environment [1]. Applying our algorithms to real-world human data could help predict individual risk preferences and facilitate the design of personalized decision-making strategies for users.
* **Improving robustness in safety-critical applications:** As mentioned in the introduction, numerous safety-critical scenarios can benefit from our proposed framework to identify optimal solutions for multi-agent interactions, such as financial markets, social dilemmas, autonomous driving, and human-robot interaction. Possible experimental scenarios include autonomous driving simulations in CARLA, financial analysis using Kaggle datasets and other datasets on Hugging Face, as well as real-world experiments for human-robot interaction with the humanoid robot Unitree G1.

In this work, we focus on taking an initial step toward developing a clear and realistic formulation and framework for players in multi-agent systems. As the reviewer suggested and recognized, the next step is to apply this framework to diverse practical scenarios.

### 2. Further experiments to verify the effectiveness of the algorithm.
Thank you very much for this valuable suggestion! In this work, we focus on taking an initial step toward developing a clear and realistic formulation and framework for players in multi-agent systems. As the reviewer suggested, further experimental validation of both the proposed problem formulation and the corresponding algorithms would be highly valuable across diverse application scenarios. In the future, we are considering several such scenarios, including autonomous driving simulations in CARLA and real-world experiments for human-robot interaction with the humanoid robot Unitree G1.

### 3. Discussion of the computational complexity of Robust-Q-FTRL

We sincerely thank the reviewer for this valuable suggestion. In summary, the computational complexity of our proposed Robust-Q-FTRL algorithm is similar to that of the current state-of-the-art algorithm for standard MARL presented in [2]. Specifically, Robust-Q-FTRL converges within $K=O(\frac{H^3}{\epsilon^2})$ iterations, with each iteration requiring computational complexity $O(HS\log(S)\sum_{i}A_i)$, which is nearly linear with respect to the size of the state and action spaces. As the reviewer suggested, we will incorporate this detailed discussion following the introduction of our main result, Theorem 4.1.

### 4. An extra period at the end of page 2, line 97

Thank you for pointing this out. We have removed the extra period and will thoroughly polish the entire manuscript again in the revised version.

> [1] Goeree, Jacob K., Charles A. Holt, and Thomas R. Palfrey. "Risk averse behavior in generalized matching pennies games." Games and Economic Behavior 45.1 (2003): 97-113. \
[2] Li, Gen, et al. "Minimax-optimal multi-agent RL in Markov games with a generative model." Advances in Neural Information Processing Systems 35 (2022): 15353-15367.
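To make the complexity claim in the rebuttal above concrete, here is a small, hedged back-of-the-envelope sketch (our illustration, not code from the paper; the helper names and all constants, taken as 1, are our assumptions) that plugs example problem sizes into $K = O(H^3/\epsilon^2)$ and the per-iteration cost $O(HS\log(S)\sum_i A_i)$:

```python
import math

# Hedged illustration only: read the O(.) bounds with unit constants and
# evaluate them for example problem sizes. `iterations` and
# `per_iteration_cost` are hypothetical helpers, not the authors' API.

def iterations(H, eps):
    """Iteration count K = H^3 / eps^2 (constants dropped)."""
    return math.ceil(H ** 3 / eps ** 2)

def per_iteration_cost(H, S, action_sizes):
    """Per-iteration cost H * S * log(S) * sum_i A_i (constants dropped)."""
    return H * S * math.log(S) * sum(action_sizes)

H, S, eps = 10, 100, 0.5          # horizon, states, accuracy target
A = [5, 5, 5]                     # three agents, five actions each

K = iterations(H, eps)            # 10^3 / 0.25 = 4000 iterations
cost = per_iteration_cost(H, S, A)

# Note the per-iteration cost grows with sum_i A_i = 15, not with the
# joint-action count prod_i A_i = 125 -- the sense in which the
# "curse of multiagency" is avoided.
```

Under this unit-constant reading the bound is nearly linear in $S$ and in each $A_i$, matching the rebuttal's statement.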
A Bayesian Model Selection Criterion for Selecting Pretraining Checkpoints
Accept (poster)
Summary: This paper studies the problem of neural network model selection under the pretrain-then-adapt paradigm. Based on the pretraining data, multiple neural network checkpoints can be obtained, roughly corresponding to different local minima of the network parameters. To select a choice that adapts well to downstream tasks, pretraining and downstream free energies are introduced as Bayesian model selection criteria. To deal with cases where downstream data are not available during model selection, relations between the two energies are explored so that an approximation of the downstream free energy can be designed based on pretraining data, which the authors further refer to as the pretraining WBIC. Several numerical experiments are conducted.

Claims And Evidence:
1. This paper is not self-contained. For Eq. (4), one of the most important equations, only Watanabe's book is referenced and no derivation is given. If Eq. (4) is taken directly from the book, a detailed reference to sections and pages would help. Similarly, the complexity term $\lambda^1(\omega^{*1})$ also lacks a definition.
2. The intuitive argument "Intuitively, lower downstream free energy indicates a higher concentration of parameters in parameter space for which the model is more adaptable and capable of generalizing well on downstream tasks", along with the definition of downstream free energy in Eq. (1) using the parameter ball $B_\gamma(\omega^*)$, is not very compelling. It seems that the scale of the parameters would strongly affect the energy, which does not necessarily affect model performance. For non-neural-network examples, this energy definition may not be ideal for a local-scale mixture model. Similarly, I also wonder how the use of batch/layer normalization in the neural network affects the effectiveness of this downstream free energy.

Methods And Evaluation Criteria: See Claims And Evidence.

Theoretical Claims: See Claims And Evidence. Proposition 5.1 seems correct.
Experimental Designs Or Analyses: Seems fine.

Supplementary Material: I went over the theoretical components of the supplementary material. In line 235, column 2, it says "under mild assumptions", and in line 56, it says "Let \omega^* and \gamma satisfy assumption ??". The assumption is missing from both the main paper and the supplementary material.

Relation To Broader Scientific Literature: In Section 5.2, the authors estimate the pretraining free energy using Eq. (14), which is called the pretraining WBIC. How does this differ from the commonly used WBIC criterion in Bayesian model selection? Is this free energy idea a new interpretation of WBIC, or is the proposed method actually different from WBIC? If it's the latter, perhaps more discussion of their distinctions could be included.

Essential References Not Discussed: Here are some key relevant references not cited/discussed:
WAIC: Watanabe, Sumio. "Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory." The Journal of Machine Learning Research 11 (2010): 3571-3594.
WBIC: Watanabe, Sumio. "A widely applicable Bayesian information criterion." The Journal of Machine Learning Research 14.1 (2013): 867-897.
More on WAIC: Gelman, Andrew, Jessica Hwang, and Aki Vehtari. "Understanding predictive information criteria for Bayesian models." Statistics and Computing 24 (2014): 997-1016.
WAIC for latent variable models (potentially related to neural networks): Merkle, Edgar C., Daniel Furr, and Sophia Rabe-Hesketh. "Bayesian comparison of latent variable models: Conditional versus marginal likelihoods." Psychometrika 84.3 (2019): 802-829.

Other Strengths And Weaknesses: The idea is interesting and worth further exploration, but the paper is not well written or self-contained at this point.

Other Comments Or Suggestions: N/A

Questions For Authors: See Claims And Evidence & Relation To Broader Scientific Literature.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for taking the time to read our work. We are concerned that the major criticisms do not fully justify a "reject" recommendation. Below, we show that the issues raised can be readily resolved with minor clarifications or references, rather than indicating any fundamental flaw.

> Claims and Evidence: This paper is not self-contained... for Eq. (4) no derivation is given... also, the complexity term $\lambda^1(\omega^{*1})$ lacks its definition.

As we state in the paper, it is possible to derive Eq. (4) using techniques set out in Watanabe's book, the details of which can be found in [Lau, 2023]. We will add a precise reference to the relevant sections/pages in [Lau, 2023]. Because this expansion is well established in singular learning theory, many works reference it rather than re-deriving it fully. We hope you will agree that adding explicit sections/pages to our reference of [Lau, 2023] is a minor fix that does not warrant rejection.

Regarding the definition of $\lambda^1(\omega^{\ast 1})$, we believe you mean $\lambda^1(\omega^{\ast})$, since the former does not appear in the paper. With respect to $\lambda^1(\omega^{\ast})$, recall that this quantity (which represents the complexity measure of $\omega^{\ast}$) is defined implicitly as the coefficient of $\log m$ in Eq. (4). We feel this is sufficient since we never have to reckon with this term on its own, only as it appears in the asymptotic expansion in Eq. (4).

> Claims and Evidence: The intuitive argument and... the definition of downstream free energy in Eq. (1)... is not very compelling. It seems that the scale of parameters would strongly affect the energy... I also wonder how the use of batch/layer normalization affects the effectiveness of this downstream free energy.

Regarding scale, here is what we think the reviewer is expressing; please correct us if we've misinterpreted.
The reviewer is hinting that in neural network architectures with some form of scale invariance, such as ReLU networks, multiplying the entire parameter set by some constant might not change the function's outputs—and thus wouldn't degrade or improve downstream accuracy—while the free energy quantity we define could shift in a non-trivial way. We think this is a valid point. However, our experiments and our intention revolve around realistic neural networks that are deployed in practice, which rarely exhibit strict parameter scaling invariance. We will add a brief discussion in the final version to clarify this point. Thank you for raising this.

Regarding batch/layer norm, we note that for our experiments we trained models with batch norm (e.g., ResNet) and without it (e.g., VGG) and see the same effect. We do not expect this factor would affect the effectiveness of our approach.

> Supplementary: ... In line 235, column 2, it says "under mild assumptions"... The assumption is missing from both the main paper and the supplementary material.

In the main text at line 235, column 2, we wrote a parenthetical "under mild assumptions, below" to refer to the assumptions in Proposition 5.1. We will ensure the final version states these assumptions explicitly. The broken reference in the supplementary material will also be corrected.

> Relation to Broader...: How does this differ from the commonly used WBIC criterion in Bayesian model selection? Is this free energy idea a new interpretation of WBIC, or is the proposed method actually different from WBIC?

Our localized WBIC can be viewed as the classical WBIC with a Gaussian prior centred on a pretraining checkpoint. While we have taken care to rigorously define our localized WBIC, we welcome the suggestion to make the distinction with the classical WBIC more explicit and will add a dedicated paragraph discussing how our "pretraining WBIC" compares with the classical WBIC. We see this as a straightforward clarification that in no way invalidates our approach.
> Essential References Not Discussed

We cited Lau et al. (2023) for our local WBIC approach, but we are happy to cite the original WBIC paper as you suggest. Thank you. However, we do not consider the references on the classic WAIC that you mention here to be relevant for our work. Can you please articulate in which sense the WAIC references are essential, or give some more detail as to how you see WAIC directly intersecting with our methodology?

> Other Strengths and Weaknesses: The idea is interesting... but the paper is not well written or self-contained.

We hope that our planned clarifications around Equation (4) and any added references will address your concern about self-containment. Regarding the paper being "not well written", can you please provide more precise feedback on any sections which remain confusing or unclear? We note that Reviewers ek2K and UWMc explicitly praised our writing, but we will certainly incorporate any further suggestions to improve readability. Could you please indicate which specific aspects of the writing need improvement so we can address them directly?
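For context on the exchange above about Eq. (4) and the implicit definition of the complexity term, the standard local free-energy expansion from singular learning theory has, schematically, the following shape (our hedged reconstruction of what such an expansion looks like, following Watanabe; the paper's exact Eq. (4) may differ in notation):

```latex
% Schematic only: standard asymptotic expansion of the (local) Bayesian
% free energy at sample size m, in which the complexity term appears as
% the coefficient of \log m -- the implicit definition referred to in
% the rebuttal above.
F_m(\omega^{*}) \;=\; m\,L_m(\omega^{*}) \;+\; \lambda^{1}(\omega^{*})\,\log m \;+\; O_P(\log\log m)
```

Here $m$ is the sample size and $L_m$ the empirical loss; under this reading, a checkpoint with a smaller coefficient of $\log m$ has a lower free energy at comparable loss.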
Summary: This paper introduces a Bayesian model selection criterion called the downstream free energy, which quantifies the adaptability of pretraining checkpoints for downstream tasks. By measuring the concentration of favorable parameters for the task, this criterion helps predict fine-tuning performance without requiring access to downstream data or prior task knowledge. Empirical evidence validates that the criterion reliably correlates with improved fine-tuning performance.

Claims And Evidence: The claims made in the submission are supported by evidence.

Methods And Evaluation Criteria: Yes, the proposed methods make sense for the problem at hand.

Theoretical Claims: I have generally checked the proofs, but some details have not been thoroughly verified.

Experimental Designs Or Analyses: I have generally reviewed the experimental design, which seems reasonable.

Supplementary Material: I have roughly checked the proofs in the supplementary material.

Relation To Broader Scientific Literature: The paper contributes to the study of Bayesian model selection.

Essential References Not Discussed: There are several works [1-3] focusing on assessing the reusability or transferability of pre-trained models. However, this paper does not discuss these works.
[1] Tran et al. Transferability and Hardness of Supervised Classification Tasks. ICCV 2019.
[2] Nguyen et al. LEEP: A New Measure to Evaluate Transferability of Learned Representations. ICML 2020.
[3] You et al. LogME: Practical Assessment of Pre-trained Models for Transfer Learning. ICML 2021.

Other Strengths And Weaknesses:
1. Selecting pre-trained models for downstream tasks is a field with many existing works [1-5], but this paper neither discusses the differences from these works nor compares against them in the experiments.
2. The experiments only involve two datasets, CIFAR-100 and mini-ImageNet, which are relatively few in number, and each has a small dataset size.
[1] Tran et al. Transferability and Hardness of Supervised Classification Tasks. ICCV 2019.
[2] Nguyen et al. LEEP: A New Measure to Evaluate Transferability of Learned Representations. ICML 2020.
[3] You et al. LogME: Practical Assessment of Pre-trained Models for Transfer Learning. ICML 2021.
[4] Guo et al. Identifying Useful Learnwares for Heterogeneous Label Spaces. ICML 2023.
[5] Zhang et al. Model Spider: Learning to Rank Pre-Trained Models Efficiently. NeurIPS 2023.

Other Comments Or Suggestions: I have no other suggestions.

Questions For Authors: I have no other questions beyond those already mentioned.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Thank you for your time in reviewing our paper. We note that the reviewer raised two concerns—(1) the absence of certain references ([1-5]) and (2) the limited dataset scope (CIFAR-100 and mini-ImageNet)—and offered no further questions or objections. You'll find below our best efforts to address these concerns. Please consider raising your score if you are satisfied with them.

> Essential References Not Discussed: There are several works [1-3] focusing on assessing the reusability or transferability of pre-trained models. However, this paper does not discuss these works.

Indeed, there are several studies (including those [1-3] mentioned here) which examine how to quantify the transferability of pre-trained models. Below, in "Other Strengths and Weaknesses," you also reference [4] and [5], but do not categorize them as "essential." Can you please specify why you feel these particular works [1-3] are essential to the scope of our current paper? In particular, can you please clarify how these works directly inform or extend our results? Provided this clarification, we are happy to include these or any other references we may have accidentally missed.

> Other Strengths and Weaknesses: Selecting pre-trained models for downstream tasks is a field with many existing works [1-5], but this paper does not discuss the differences from these works, nor does it compare them with these methods in the experiments.

(Related to the above) Our paper includes comparisons with established measures such as geometric complexity and neural collapse. These are equally heuristic or empirical in nature and comparable to [1-5]. Since our focus is on a Bayesian model selection approach, not on exhaustively benchmarking all transferability metrics, we believe our chosen references are sufficient to position this work in the broader literature. Can you please specify how [1-5] directly inform or critique the Bayesian framework we adopt in our paper?
If so, we are happy to include these or any other references we may have accidentally missed.

> Other Strengths and Weaknesses: The experiments only involve two datasets, CIFAR-100 and mini-ImageNet, which are relatively few in number and have a small dataset size for each.

Regarding our experiments, we used CIFAR-100 and mini-ImageNet because they are well-established benchmarks that allow rapid, reproducible testing of our approach. We view exploring larger datasets as an orthogonal direction that would not alter our main theoretical contributions. We appreciate your feedback and remain open to expanding our experiments to additional datasets in future work.
Summary: This paper proposes a new metric, the pretraining free energy, which can be used to find the pretraining model checkpoint that is most adaptable for downstream finetuning tasks. The paper is largely theoretical, justifying this metric, although there are two experiments (one in the appendix) showing that the WBIC, which is used to approximate the pretraining free energy, correlates with downstream finetuning performance.

Claims And Evidence: The paper makes a number of claims:
- That downstream free energy is a good proxy for the downstream performance of a model after finetuning. This seems to be generally justified via theory, and seems to hold based on the arguments presented in the paper.
- That pretraining free energy is a more measurable alternative to downstream free energy while upholding similar performance prediction characteristics. Again, I think this holds, although I admit (discussed below) that I found this discussion relatively confusing - possibly a function of my background not aligning with that of the paper.
- That the WBIC can be used to approximate the pretraining free energy without requiring an expensive (and quite possibly intractable) integration calculation. This is demonstrated empirically for two experiments, and seems to hold, though as an empirically minded researcher I would have liked to see this demonstrated in a couple of additional domains to ensure the finding holds generally (possibly in a task that was not image classification). That said, the key contribution of this paper is theoretical, and so I do not believe that this limits the correctness or validity of the work.

Methods And Evaluation Criteria: As stated above, this paper is principally theoretical (despite, admittedly, dealing with a very empirical topic).
As such, while I believe consideration of additional benchmarks would be good - possibly for larger models, such as LLMs, given this is where a large part of the pretrain-then-finetune gains have proven fruitful - I think the paper should be viewed with a more theory-focused lens. In this perspective, I believe the benchmarks used are valid and, while the empirical effectiveness of the work could be boosted, the experiments run in this paper provide enough support for it to stand. I appreciated the narrative, in which the method was built up - I felt this had a very logical flow, and took practicality into consideration (which can be rare for theoretical papers). As such, the transition from downstream free energy -> pretraining free energy -> WBIC was very logical and I think makes a lot of sense for the problem at hand.

Theoretical Claims: I attempted to follow the theoretical claims made throughout the paper, but admit that this work is beyond my background and thus I did not always follow 100%. One thing I was unsure about was why the terms in the pretraining free energy are stochastic, whereas they weren't for the downstream free energy, and I think a qualitative sentence explaining this would provide some needed clarity. I would suggest that other reviewers' assessments be given more emphasis in this regard.

Experimental Designs Or Analyses: The experiments seem reasonably designed, and there is a description of the pretraining process and fine-tuning details in the appendix (including hyperparameters). I think some more variety would be good, rather than just focusing on image classification, to truly verify whether the correlation between WBIC and downstream performance is legitimate or coincidental (though, attached to the theory, I think it should hold).
I particularly emphasise this as it also seems (by eye) that there is a close link between strong pretraining performance and strong downstream performance; ruling out an empirical link there, as is discussed in Observations 1 and 2, would help the experimental results, I think.

Supplementary Material: I spent some time considering the additional ImageNet results and examining the experimental design. I did not consider the proofs or examples in too much detail in the supplementary material.

Relation To Broader Scientific Literature: The paper seems to contextualise itself well against prior literature, although I am not an expert in this field. There are comparisons against certain metrics which have been proposed in prior literature for similar problems.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Overall, I found most of the paper clear as a reader (particularly one who works in a different area to this work). I thought the structure of the narrative was good in building up a more complete picture of the method being presented. That said, I found the paragraph starting on line 196 (left-hand side), about how the checkpoints considered are not actually checkpoints, a bit confusing. I also found Proposition 5.1 hard to follow. Besides that, I felt this was a good paper.

Other Comments Or Suggestions: On line 267, in the right-hand column, there is a missing full stop.

Questions For Authors:
- How computationally costly is it to compute the WBIC - is this something that drastically increased the overhead of the experiments in the paper?
- Do you think this method is both feasible, and will continue to scale, for much larger models - for example, 70B or 300B parameter LLMs?
- As a follow-up to the above - is this still useful if it doesn't, given that this is arguably the most significant field for pretraining and finetuning?
- How confident are you that this finding would apply in fields beyond image classification, as presented in the CIFAR and mini-ImageNet experiments?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you very much for your careful attention to our paper and thoughtful review. We are glad you think our paper is "clear" and that the "structure of the narrative was good". We will do our best to answer your concerns regarding potential weaknesses below.

> Experimental Designs or Analyses: I think some more variety would be good, rather than just focusing on image classification.

We fully agree that a more diverse suite of experiments beyond image classification would show that the correlation between WBIC and downstream performance is not coincidental. Thank you for the suggestion, and we look forward to addressing it in follow-up work.

> Experimental Designs or Analyses: it also seems (by eye) that there is a close link between strong pretraining performance and strong downstream performance; ruling out an empirical link there, as is discussed in Observations 1 and 2, would help the experimental results, I think.

The reviewer suspects a confound: maybe WBIC (and strong downstream performance) both correlate with strong pretraining performance, rather than with each other. We actually have some counterexamples to this, for instance the third row of Figure 2. Note that towards the end of training, all momentum values share the same pretraining loss, yet the downstream performance is quite different; the pretraining WBIC can pick this up.

> Other Strengths and Weaknesses: Overall, I found most of the paper clear [...] the structure of the narrative was good in building up a more complete picture of the method being presented. That said, I found the paragraph starting on line 196 (left-hand side), about how the checkpoints considered are not actually checkpoints, a bit confusing. I also found Proposition 5.1 hard to follow.

Regarding the question about "checkpoints considered are not checkpoints": we apologize for the confusion.
We will clarify that “pretraining checkpoints” in our theoretical discussion refers to local minima of the test loss, which may differ from the actual checkpoints saved during training. We will further clarify that in order for the theory to match the empirical analysis, we stipulate that the actual checkpoints saved during training are local minima of the training loss. Regarding Prop 5.1 being hard to follow, do you mean that the statement of the proposition itself is hard to follow, the proof, or the discussion of how Prop 5.1 is used to justify Eq (10), or something else? > Questions For Authors: > 1. How computationally costly is it to compute WBIC - is this something that drastically increased the overhead of the experiments in the paper? > 2. Do you think this method is both feasible, and will continue to scale, for much larger models - for example, 70B or 300B parameter LLMs? > 3. As a follow up to the above - is this still useful if it doesn't, given this is arguably the most significant field for pretraining and finetuning. > 4. How confident are you that this finding would apply in fields beyond image classification, as presented in the CIFAR and ImageNet-mini experiments? 1. The original WBIC is very costly to compute. The local WBIC computed in this paper is much cheaper because the localizing prior forces the exploration to stay close to some parameter $w^*$. We compute the local WBIC through SGLD sampling, which is computationally efficient for deep learning models. 2. Yes, there is no fundamental obstacle preventing the approach from scaling to much larger architectures—provided sufficient computational resources. In principle, this includes LLMs on the order of tens or hundreds of billions of parameters. 3. See above. 4. Our current theoretical results rely on conditions (e.g., mild distribution shifts) that are plausibly satisfied in the image classification tasks we considered. 
We are cautiously optimistic this approach could generalize to other tasks as well—potentially including text domains—but confirming that the same assumptions hold there would likely require additional theoretical and empirical investigation and is the focus of future work. Thank you again for your insights, which will help make our paper better! --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your rebuttal. I've responded to a couple of points below. Re: more diverse experiments, I think this might be good to include in the paper as a limitation/proposed future work to be upfront about this restriction of the analysis. Re: Counterexample, I agree with this point though it is worth noting that a more magnified scale may give a better idea of whether each run has converged to the same loss or whether the scale of loss is just smaller, if that makes sense? Re: Prop 5.1, I think my finding this difficult is more likely due to the fact that I am an empirical researcher in a different area, and thus this is likely my ignorance showing - reviewer ek2K has verified the correctness of this proposition and I am, therefore, content. Re: questions, thank you for answering these. I would love to see a sentence (possibly in the future work) suggesting that this should be able to scale to larger models. While this paper is beyond my area of expertise, and I do not plan on increasing my score since I don't believe this work has the very high impact expected of a paper rated 5, I believe this is a good paper worthy of acceptance to ICML. --- Reply to Comment 1.1.1: Comment: We appreciate your valuable feedback -- we’ll incorporate these suggestions into our revision.
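For readers unfamiliar with the estimator under discussion, the SGLD-based local WBIC computation mentioned in the rebuttal above can be sketched on a toy one-dimensional model. This is a minimal sketch under stated assumptions: the quadratic loss, step size `eps`, localizing strength `gamma`, and sample size are illustrative placeholders, not the paper's actual settings.

```python
import math
import random

def sgld_local_wbic(loss, grad, w_star, n, steps=5000, eps=1e-4, gamma=100.0):
    """Estimate a local WBIC around w_star via SGLD.

    Samples from the tempered, localized posterior
        p(w) ∝ exp(-n * beta * L(w) - gamma/2 * (w - w_star)^2),
    with inverse temperature beta = 1 / log(n), then returns the WBIC
    estimator n * beta * E[L(w)]. Toy 1-D version: a real run would use
    minibatch gradients of a neural network's loss instead.
    """
    beta = 1.0 / math.log(n)
    w, samples = w_star, []
    for _ in range(steps):
        # Drift combines the tempered loss gradient and the localizing prior.
        drift = n * beta * grad(w) + gamma * (w - w_star)
        # Standard SGLD update: half-step drift plus Gaussian noise of variance eps.
        w = w - 0.5 * eps * drift + random.gauss(0.0, math.sqrt(eps))
        samples.append(loss(w))
    return n * beta * sum(samples) / len(samples)

# Toy quadratic loss with its minimum at w* = 0.
loss = lambda w: 0.5 * w * w
grad = lambda w: w
print(sgld_local_wbic(loss, grad, w_star=0.0, n=10_000))  # small positive value
```

The localizing `gamma * (w - w_star)` term is what keeps the chain near the chosen checkpoint, which is why this local variant is far cheaper than the original WBIC.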
Summary: This paper introduces a Bayesian model selection criterion, called the downstream free energy, to improve fine-tuning performance. Both theoretical and empirical results are provided. Claims And Evidence: Yes. Section 5 is about theoretical results, and empirical results are in Section 6. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Proposition 5.1 is correct. Experimental Designs Or Analyses: Yes. Section 6 is about empirical results, and there are also some empirical details in the appendix. Supplementary Material: The appendix contains the proof of Proposition 5.1 and some experimental details. Relation To Broader Scientific Literature: This paper is mainly related to model generalization performance, a literature that mainly focuses on controlling the upper bound of the test error using information about the training error and the function class. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. This paper is well-written and provides a clear statement of the results. 2. There is both theoretical and empirical evidence to support the idea. Weaknesses 1. There is a lack of discussion about the relationship between model generalization performance and energy. Please provide more explanation of this point. 2. The models used in this paper are small. Are the same results guaranteed for larger models? Other Comments Or Suggestions: See strengths and weaknesses. Questions For Authors: See strengths and weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time in reviewing our paper. We are glad you think our paper is "well-written" and provides a "clear statement of results" supported by both "theoretical and empirical evidence". Below, we address the potential weaknesses you mentioned, and we hope these clarifications will encourage you to consider raising your score. > Weaknesses > 1. There is a lack of discussion about the relationship between model generalization performance and energy. Please provide more explanations about this question. > 2. The model used in this paper is some small models. Are there the same results guaranteed on larger models? In regards to Weakness 1: Thank you for emphasizing the importance of the relationship between model generalization performance and energy. We agree that explaining this connection is crucial. In fact, Section 5.1 of our paper already provides a detailed discussion of how the free energy (and its complexity term) interacts with test loss to shape model generalization, through a series of Observations tied to our main proposition. If these details were inadvertently overlooked, we kindly invite you to revisit that section and let us know if anything remains unclear or incomplete. In regards to Weakness 2: The theory we develop here does not make any assumptions on the model size or complexity. So, yes the same results should hold for larger models as well. However, as we state in the Section 7 'Conclusion and Future Work', the bottleneck is computation which can be challenging for very large models. This is an intriguing area of future work and we also suggest some alternative approaches to address this limitation there. --- Rebuttal Comment 1.1: Comment: Thanks for the author's reply. It has addressed part of my questions. I will keep the positive score.
Revealing Weaknesses in Text Watermarking Through Self-Information Rewrite Attacks
Accept (poster)
Summary: The paper proposes a new attack against model-watermarking algorithms that involves first identifying tokens in an LLM output that have high self-information, and then passing the output to a paraphraser that changes these tokens. The hypothesis is that these tokens are also the tokens that most likely contain the watermark signals, hence such targeted paraphrasing could improve the efficiency of paraphrasing attacks against model watermarking methods. The paper presented empirical results to support their claims, showing that the proposed method can achieve much higher attack success rates compared to benchmarks, at a relatively low per-token cost. Claims And Evidence: The claim that their method outperforms baselines empirically is partially supported by the existing experiments, though there are some additional results that would make them more convincing. - The trade-off between semantic preservation of the attack and the effectiveness of the attack. Currently this is analyzed only in aggregate in separate charts. A Pareto plot showing the trade-off between attack effectiveness and semantic preservation (e.g. measured by semantic similarity) across benchmarks (rather than just on the proposed method) would help. - Related point: the use of \epsilon is confusing, as it denotes three different things in the paper, i.e., in equation 4 for the semantic similarity threshold, in the unnumbered equation on line 269 for the percentile threshold, and on line 212 for the watermarking threshold. - Inclusion of error bars would also help confirm that the results are statistically significant. - If the claim is that self-information is the best metric to identify tokens with watermark signal, the best approach to analyze this is to directly compute how well the proposed metric predicts which are the tokens that contain the 'greenlist' signals, e.g. by computing some appropriate correlation metric. 
- Smaller point: it would be useful to see the naive baseline results of a paraphrasing attack using the same base paraphraser used in the SIRA methods, i.e. unconstrained paraphrasing (with similar instructions to not reuse the same words in the reference text), to see the direct impact of the identification of a subset of tokens for replacement. The bigger problem lies in the proposed explanation and theoretical analysis. - Entropy is expected self-information, and it is not true that "higher entropy is typically associated with high self-information". Choosing higher self-information tokens based on the proposed method essentially means choosing tokens that have lower absolute probabilities given preceding tokens. Possible explanations for the performance gains may be considered more from this perspective. - The proposed detailed discussion in Appendix G/H is unclear and needs to be a lot more detailed and careful with the assumptions and approximations made. Statements like line 1041 on "entropy is context-agnostic" need to be properly justified and defined. - The authors should be a lot clearer about what the experimental setting is and what they specifically did to compute the results for Table 9. - The problem formulation in Sec 3.1 does not directly relate to the crux of the proposed method, which is token-level based. The method broadly rests on identifying tokens to mask and replace, while the formulation does not relate watermarking methods to token-level perturbation (Definition 3). Methods And Evaluation Criteria: Please see the above point and question regarding direct analysis of the correlation and prediction capabilities of the self-information metric vs. the 'watermark signal' tokens. It would also be better if experiments were conducted on more than just one dataset, i.e. the C4 dataset, especially since the proposed method should be relatively easy to implement for other datasets. The other datasets should ideally cover a different type of text, e.g. 
not news-related but perhaps more scientific or literature-based, to confirm that the underlying token distribution of the dataset does not significantly affect the performance of the method. Theoretical Claims: Please see above regarding the issues on self-information vs entropy, and the problem-formulation gap in relating token-level watermarking to the identification of 'watermarked tokens' as a viable attack strategy. Conceptually, basic watermarking methods operating on the 'green-red' list approach also rely on both the green and red lists for signal detection and watermarking strength -- so it is unclear why identifying 'green tokens' is more important than 'red tokens', rather than the actual distortion from the underlying word distribution. Experimental Designs Or Analyses: Please see the issues above on metrics and datasets. Additionally, it would be additional validation for the authors to also consider recent robust watermarking methods such as [1], rather than just logits adjustment-based model watermarking methods that adopt 'green-red' list approaches (which the proposed method is designed directly to attack). For example, [1] applies logits perturbations to the entire token space but with a varying degree of perturbation to each token based on a hash, preceding tokens and chosen perturbation functions. Showing that the proposed method can also work for that and generalize beyond just 'green-red list' approaches will significantly strengthen the validity of the method. [1] Lau et al, Waterfall: Scalable Framework for Robust Text Watermarking and Provenance for LLMs, EMNLP 2024 Supplementary Material: Yes, the appendix. Relation To Broader Scientific Literature: Empirically, it seems like there is merit in considering this attack in future watermarking works, if the authors can more rigorously justify the proposed method's performance gains compared to existing paraphrasing attacks. 
It would be useful for the authors to also discuss, or better yet evaluate, other logit perturbation-based model watermarking approaches that go beyond the 'green-red list', such as [1] mentioned above and other similar types of approaches. Essential References Not Discussed: As mentioned, it would be useful to discuss logit perturbation-based watermarking approaches that go beyond just 'green-red list' approaches (e.g., [1] mentioned above and related works). As the proposed method is primarily designed based on the 'green-red list' watermarking model, discussing other approaches that spread the watermarking signal across all tokens would better illustrate the generalizability, or limitations, of the proposed method. Other Strengths And Weaknesses: Overall, I think this has potential to be a good contribution to the literature if the attack is more rigorously backed empirically. The theoretical underpinnings and explanation are problematic, but given that this is a proposed attack model and is primarily empirical in nature, it may be acceptable as long as these portions are clarified and de-emphasized in the paper. Other Comments Or Suggestions: Please see above Questions For Authors: Please see the points above, on empirical gaps and questions on explanations regarding why the proposed method works. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are thankful to Reviewer 7XjW for the thorough and detailed feedback. Due to the space limit, we address the main concerns below: > Q1: A Pareto plot, the use of \epsilon, Line 1041 A1: We greatly appreciate the reviewer’s suggestion. We will add the Pareto plot, correct the overloaded use of \epsilon, and revise Line 1041 to avoid confusion. >Q2: Directly computing how well the proposed metric predicts tokens would be helpful A2: Due to space limitations, please see the reply to zCjb A1. >Q3: It would be useful to see the naive baseline results of a paraphrasing attack. A3: We clarify that this is already included in our paper. As shown in Table 3, we compare our method to a naive baseline where the LLM paraphrases the input twice. This mirrors our approach, which involves two paraphrasing passes: one for the reference text and one for the attack text. Results show that our design outperforms the naive baseline. >Q4: Possible explanations for the performance gains A4: We clarify that prior works' claim of embedding watermarks in high-entropy tokens is a heuristic description; **they do not explicitly compute token-level entropy to build the greenlist**. Instead, greenlist tokens are selected via a hash-based pseudorandom process conditioned on past tokens and a key, implicitly simulating high-entropy selection. For example, in EWD [3], the high-entropy concept is represented as a renormalization step:
```python
# Soft reweighting: large probabilities are suppressed more strongly than small ones
renormed_probs = probs / (1 + z_value * probs)
sum_renormed_probs = renormed_probs.sum(dim=-1)
```
This renormalization acts as a soft reweighting that suppresses high-probability tokens and flattens the distribution, thereby emphasizing low-probability regions. As a result, low-probability tokens are further highlighted and tend to receive higher weight in the watermark signal. Similarly, EXP uses exponential sampling to favor low-probability tokens, which, once selected, have higher weight during detection. 
In summary, **low-probability tokens are either preferred when selecting a green token or assigned stronger watermark weights once selected**. Our self-information approach aligns more directly with the practical implementations, enabling more accurate identification of watermark-bearing tokens and improving attack effectiveness. > Q5: The formulation in Sec 3.1 doesn't connect watermarking to token-level perturbations. A5: We clarify that the perturbation function in Section 3.1 refers to the LLM used for paraphrasing. Definition 3 is **not** meant to describe token-level watermark perturbations. It corresponds to the second step of our method, where the LLM is instructed to paraphrase the input. >Q6: Why is the green token more important? A6: The watermark signal is embedded by biasing generation toward green tokens, while red tokens act as a control group. Detection relies on whether the green-token count significantly exceeds the non-watermarked baseline. Thus, the green tokens carry the actual signal. Regarding distortion, robust watermarking schemes are explicitly designed to minimize it while preserving detectability; otherwise they would degrade text quality and stealth. >Q7: Other dataset experiment A7: We follow the reviewer’s suggestion and add the OpenGen dataset [2], which samples from WikiText-103. We use 500 chunks as prompts, following the same evaluation protocol as in our main experiments. Our results are shown below; our methods achieve the highest ASR. 
| Attack | KGW | Uni | UPV | EWD | DIP | SIR | EXP |
|----------|------|------|------|------|------|------|------|
| Del | 21.8 | 0.8 | 9.6 | 17.6 | 64 | 36.8 | 6.6 |
| Syn | 77.4 | 16.8 | 67.8 | 72 | 98 | 71.4 | 47.4 |
| GPT | 69 | 58.2 | 57.4 | 73.4 | 98.2 | 58 | 74.2 |
| DIPPER-1 | 89.4 | 67.8 | 71.4 | 88.8 | 98.8 | 74.6 | 83.2 |
| DIPPER-2 | 89.2 | 71.2 | 78.8 | 92.2 | 99.0 | 72.8 | 85.6 |
| SIRA-T | 92 | 84 | 74.8 | 94.2 | 99.6 | 74.6 | 81.8 |
| SIRA-S | 93.8 | 91.2 | 80.6 | 94.8 | 99.6 | 80.2 | 86.2 |

>Q8: Waterfall watermark experiment A8: We followed the reviewer’s suggestion and conducted the experiment on the Waterfall watermark [1]. We use 500 samples from the C4 dataset and select 0.2 as our z-threshold; everything else follows the same evaluation protocol as in our main experiments. The results are shown below; our methods achieve the highest ASR.

| Waterfall | ASR |
|-----------|------|
| Del | 4.4 |
| Syn | 55.6 |
| GPT | 80 |
| DIPPER-1 | 73.8 |
| DIPPER-2 | 80 |
| SIRA-T | 88.4 |
| SIRA-S | 90.8 |

>Q9: Weaknesses regarding the theoretical analysis A9: We thank the reviewer for the constructive suggestion. We acknowledge that our work is primarily empirical, and while we attempted to provide theoretical insights, some parts may be limited or unclear. We will revise the manuscript accordingly and de-emphasize those sections as suggested. We appreciate any further feedback to improve the theoretical part. --- Anonymous Reference Link: https://docs.google.com/document/d/1-VCOEO5eJmrq-_44oGaHdRE7KOGXHumV75h9DDCTP8A/edit?usp=sharing --- Rebuttal Comment 1.1: Comment: Thanks for the response. While some of my concerns have been addressed, such as the experiments with an additional dataset and a different watermarking approach, there are still issues unaddressed. As mentioned before, the Pareto plot would help characterize the trade-off between semantic preservation of the attack and the effectiveness of the attack. Inclusion of error bars would also confirm that the results are statistically significant. Correlation between score and greenlist signal tokens. 
The naive baseline results of a paraphrasing attack using the same base paraphraser used in the SIRA methods, i.e. unconstrained paraphrasing (with similar instructions to not reuse the same words in the reference text), to see the direct impact of the identification of a subset of tokens for replacement — the current paraphrasing attacks seem not to use the same instructions as the method, to not use the same words in the reference text etc. Hence, I will maintain my score. --- Reply to Comment 1.1.1: Comment: We thank Reviewer 7XjW for the additional feedback. Due to character limits and the number of initial comments, we are unable to address every detail. We further respond to key concerns below: > **Q1: Pareto plot** **A1:** As stated in our previous response, we will add the Pareto plot in the revised version to help visualize trade-offs. This addition supports but does not affect our core contributions. > **Q2: Error bars** **A2:** Thanks for the new suggestion. We will include error bars in the revised manuscript to clarify variability. > **Q3: Correlation between detection score and greenlist tokens** **A3:** We would like to clarify that due to space constraints, we were unable to provide a detailed explanation of the correlation between the detection score and greenlist signal tokens in the initial rebuttal. The detection z-score is explicitly designed to reflect how many greenlist tokens remain in the generated text. These tokens are selected during generation through a hash-based pseudorandom process that conditions on the past tokens and a secret key, and are favored via a bias added to the logits. In the naive watermarking method [1], under the null hypothesis, the expected number of green tokens is $\gamma \cdot T$, with $\gamma$ as the greenlist ratio and $T$ the token count. 
The z-score is: $$ z = \frac{G - \gamma T}{\sqrt{T \cdot \gamma (1 - \gamma)}} $$ where $G$ is the number of tokens in the generated text that fall into the greenlist at their corresponding positions. This statistic measures how much the observed green-token count deviates from the expected value under randomness. Since watermarked generation increases the probability of sampling green tokens, $G$ tends to be significantly higher in watermarked texts, leading to a large positive z-score. Therefore, there is a direct and quantifiable correlation between the detection score and the number of greenlist tokens preserved in the text. Moreover, simply checking how many high self-information tokens fall into the greenlist is unreliable. Tokens that fall into the greenlist may carry different weights. For example, in EWD [2], high-information green tokens have a much larger impact than low-weight ones. Thus, the detection score reflects both the quantity and the quality of green tokens, making simple matching strategies suboptimal. > **Q4: Instructions for paraphrasing** **A4:** We respectfully argue that using different instructions is a common and accepted practice in prior work [3,4], and in our case, it is inherently part of the method to adapt the algorithm design. Different methods are designed with different assumptions and mechanisms, and their corresponding instructions are often not interchangeable. For example, DIPPER [3] **needs to preprocess the input text and inject customized prompts like** `"lexical = {lex}, order = {order}, {curr_sent_window}"` to give the model a hint and adapt the raw text to how it was trained. Similarly, in Sadasivan et al. [4], the authors **explicitly define a customized instruction (Appendix B.2)**, stating that the output should be as **diverse and different as possible from the input**, and **should not copy any part verbatim**. Our novel mask-and-paraphrase framework requires its own instruction design, making this component integral to the method. 
We kindly ask the reviewer to reconsider this concern, **given the methodological differences and the established practices in prior work**. In addition, **we believe we have addressed the core concerns raised in the initial review** —including an additional dataset (A7), a new watermarking baseline (A8), and the rationale behind low-probability tokens (A4)—which further supports the soundness of our approach. **We would sincerely appreciate it if you could consider increasing your score to reflect the revised manuscript.** **References** [1] Kirchenbauer, John, et al. "A watermark for large language models." ICML (2023). [2] Lu, Yijian, et al. "An entropy-based text watermarking detection method." arXiv preprint arXiv:2403.13485 (2024). [3] Krishna, Kalpesh, et al. "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense." NeurIPS (2023). [4] Sadasivan, Vinu Sankar, et al. "Can AI-generated text be reliably detected?" arXiv preprint (2023).
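For concreteness, the z-score detection statistic discussed in the reply above can be implemented in a few lines. This is a minimal sketch; the token counts and gamma in the example calls are hypothetical values, not numbers from the paper.

```python
import math

def kgw_z_score(green_count: int, total_tokens: int, gamma: float = 0.25) -> float:
    """Detection z-score for greenlist-based watermarks.

    Under the null hypothesis (unwatermarked text), the expected number of
    greenlist hits is gamma * T; the z-score measures the deviation from
    that expectation in units of the binomial standard deviation.
    """
    expected = gamma * total_tokens
    std = math.sqrt(total_tokens * gamma * (1 - gamma))
    return (green_count - expected) / std

# Watermarked text keeps far more green tokens than chance would allow.
print(kgw_z_score(green_count=90, total_tokens=200))  # large positive z
# A successful attack pushes the green count back toward gamma * T.
print(kgw_z_score(green_count=52, total_tokens=200))  # near zero
```

This makes the rebuttal's point concrete: the statistic depends only on how many greenlist hits survive, so an attack succeeds exactly when it drives `green_count` back toward the chance level `gamma * T`.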
Summary: The paper aims to erase text watermarks from LLMs by proposing a novel rewrite attack utilizing self-information. The proposed SIRA achieves an almost 100% success rate across various watermark algorithms. Specifically, SIRA calculates the self-information of every token in the watermarked sequence, and tokens with high values are masked in the output. To complete the masked sequence while retaining all information from the original sequence, a paraphrased version of the original sequence is used as a reference. Experiments show that the text quality is well preserved, at reasonable complexity and budget. Claims And Evidence: It seems the text quality, while superior to most baselines, is slightly lower than GPT paraphrasing. In the paper, you claimed that your method has a smaller impact on text quality than other baselines. Methods And Evaluation Criteria: Since the method is completely black-box, meaning the user prompt used to generate the watermarked sequence is not available as well, does this missing prompt affect the calculation of self-information? Also, if the given watermarked text is cropped from a longer output, will the missing context affect the calculation of self-information? I wonder if the paraphrased reference text is necessary, as during the masked sequence completion the LLaMA model could perhaps be instructed to avoid using exactly the same wording as the original text. If this step is unnecessary and you use the original text as the reference, the cost and time consumption could be further lowered. Theoretical Claims: No issues. Experimental Designs Or Analyses: No issues. Supplementary Material: No issues. Relation To Broader Scientific Literature: No issues. Essential References Not Discussed: No Other Strengths And Weaknesses: It is nice that the proposed method is a plug-and-play, black-box attack method, while many previous works make more assumptions about knowledge of the watermark generation process. 
Other Comments Or Suggestions: No issues. Questions For Authors: Please refer to previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are thankful to the reviewer **P6b2** for the appreciation of our work and the efforts spent reviewing our paper. We address the concerns and questions below: > **Q1: Text quality slightly lower than GPT paraphrase** **A1:** We thank the reviewer for the suggestion and will revise the description of our experimental conclusions accordingly. We fully agree with the reviewer and would like to clarify that text quality is largely influenced by the choice of **paraphrasing model**. In our case, we adopted GPT-4o; the inherent capability differences between LLMs cause this gap. With a stronger LLM like LLaMA3-70B, our method outperforms GPT on two watermarks. One advantage of our method is its transferability: unlike DIPPER, our approach is training-free. As more powerful LLMs become available in the future, our method can be seamlessly adapted at zero cost to take advantage of better text quality. > **Q2: Does this missing prompt affect the calculation of self-information?** **A2:** This is a great question. Our answer is no—the calculation of self-information does not need such a prompt. The prompts used to generate watermarked text in prior works such as DIPPER [1] and Random Walk [2] are typically referred to as “context.” However, our goal is to minimize assumptions in order to make the method a simple and broadly applicable tool. In the results we present, we do not use any such context prompts. In earlier studies, the original prompts were primarily designed to help paraphrasing models better preserve semantics through tailored instructions. We consider exploring the impact of prompt design on our method an interesting direction for future work. > **Q3: If the given watermarked text is cropped from a long output, will the missing context affect the calculation of self-information?** **A3:** Yes, but it depends on how long the cropped text is. 
The missing context theoretically affects how self-information is calculated, since it depends on the exact preceding context. However, our attack remains effective in practice because it primarily relies on the local context within each segment rather than requiring the entire text. Consequently, cropping a reasonable portion of the output does not significantly degrade our performance unless the given text is too short or the segment contexts are unrelated. In such an extreme case, our method degrades to a performance level similar to that of random masking. > **Q4: Using instructions instead of reference text** **A4:** We thank the reviewer for the suggestion. This was indeed one of the approaches we explored during the preliminary phase of our study. Adding explicit instructions to encourage the LLM to use different words proved ineffective; a large performance gap remains, especially for lightweight models such as those at the 3B scale. We believe this is because such small models cannot fully understand complex instructions. While this approach performs comparably on larger models (e.g., 70B), it still underperforms compared to using a reference text. This is likely because, as mentioned in lines 256–260, we instruct the model to perform a task similar to filling in blanks, and due to the high similarity between the masked text and the watermarked text, LLMs tend to take shortcuts by copying directly from the original text; instructions alone are not sufficient to prevent this behavior. We will continue to explore whether better prompt design can achieve this goal. --- **References** [1] Krishna, Kalpesh, et al. "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense." *Advances in Neural Information Processing Systems* 36 (2023): 27469–27500. [2] Zhang, Hanlin, et al. "Watermarks in the sand: Impossibility of strong watermarking for generative models." 
*arXiv preprint arXiv:2311.04378* (2023). --- Rebuttal Comment 1.1: Comment: Thank you for your response to my concerns. I will maintain my rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer P6b2, We are truly delighted by your recognition of our work and your interest in our paper. Your feedback is invaluable for enhancing the quality of our manuscript. Many thanks! Best regards, The Authors
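The self-information pipeline discussed in this thread (score each token, then mask the most surprising ones) can be sketched as follows. The token probabilities here are placeholder values; in practice they would come from an auxiliary causal LM conditioned on the preceding tokens, and the 0.8 percentile cutoff is illustrative.

```python
import math

def self_information(token_probs):
    """Self-information I(x_t) = -log2 p(x_t | x_<t) for each token.

    `token_probs` holds the conditional probability an auxiliary LM
    assigns to each observed token (placeholder values below; a real run
    would read them off a causal LM's softmax outputs).
    """
    return [-math.log2(p) for p in token_probs]

def mask_high_information(tokens, token_probs, percentile=0.8):
    """Mask tokens whose self-information reaches the given percentile."""
    info = self_information(token_probs)
    cutoff = sorted(info)[int(percentile * (len(info) - 1))]
    return [tok if i < cutoff else "[MASK]" for tok, i in zip(tokens, info)]

tokens = ["the", "cat", "perched", "on", "the", "ottoman"]
probs = [0.30, 0.10, 0.005, 0.25, 0.35, 0.002]  # rare words carry high self-information
print(mask_high_information(tokens, probs))
# → ['the', 'cat', '[MASK]', 'on', 'the', '[MASK]']
```

Note that each probability is conditional on the prefix, which is why the missing-prompt and cropping questions above matter only insofar as they change the local context the auxiliary model sees.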
Summary: This paper studies how to remove watermarks from text generated by LLMs. It assumes the watermark is injected through high-entropy (high self-information) words. It first uses an auxiliary model to compute the self-information of each token in the generated text. Then it masks out the tokens with high self-information and fills in the masks according to the paraphrased text. Experiments show it outperforms baselines. Claims And Evidence: 1. The authors assume existing watermark methods embed patterns in high-entropy tokens and thus base their method on this assumption. However, it's unclear if this is true because there is no theoretical analysis or direct experimental evidence. For example, the authors could add an experiment to directly verify the precision/recall of the masks generated in the first step against the ground-truth tokens. 2. Similarly, the authors claim it's the first targeted attack. This "targeted" feature should be clearly defined and validated. 3. There is still a gap between the motivation and the theory. From the beginning, the authors always emphasize that existing watermarks use high-entropy tokens. However, later on, the authors analyze the theory and empirically show that self-information is a better indicator. From the motivation, it seems high entropy should give better results than self-information. Methods And Evaluation Criteria: make sense. Theoretical Claims: Line 997, why can one assume the model predicts the next words with equal probability? Is it reasonable? Please justify it. Experimental Designs Or Analyses: 1. It's not reasonable that the attackers use much larger models, such as Llama3-3B-70B, to attack watermarked text generated by a small model (OPT-1.3B). In this case, the attackers can directly use the large models. So a more reasonable attack scenario is that the attackers only have limited resources, so they only have a small model, while they want to leverage the capability of a larger model. 
However, the larger model has watermarks. So they want to use the small model to remove the watermark from text generated by the large model. Therefore, it's suggested that similar rules be followed in the evaluation. 2. As mentioned earlier, it would be better to evaluate the accuracy of the detected watermark tokens, because this is one of the contributions claimed in this paper. 3. It's suggested that the authors evaluate or at least discuss adaptive watermarks, for example, watermarks that try to leverage other tokens in addition to the high-entropy loss. Supplementary Material: Yes. Most. Relation To Broader Scientific Literature: It helps people design stronger watermarking techniques. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. An interesting idea, and experiments show it has high ASRs. 2. Overall, it's easy to follow. Weaknesses: 1. It's unclear why the authors have several definitions in Section 3.1, especially equations 2-4, because most of them are not used in the following discussion or design. Why can the watermark defenders see the attack text $y_p$ when they design the detector at Line 192? 2. The figures in the evaluation section are not very visible. Other Comments Or Suggestions: Some symbols are used without introduction, such as $Y_w$ at Line 191. Typo: Line 1030: "the green tokenthe" Questions For Authors: Please refer to the above points. Code Of Conduct: Affirmed. Overall Recommendation: 3
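The pipeline this review summarizes (score each generated token's self-information under an auxiliary model, then mask the high-scoring ones for rewriting) can be sketched with synthetic next-token distributions. The function names and the 50th-percentile threshold below are our own illustration, not the paper's implementation:

```python
import numpy as np

def self_information(probs, token_ids):
    """Self-information of each realized token: I(x_t) = -log p(x_t | x_<t)."""
    p = np.array([probs[t][token_ids[t]] for t in range(len(token_ids))])
    return -np.log(p)

def high_info_mask(scores, percentile=50):
    """Mask tokens whose self-information is in the top half of the sequence."""
    return scores >= np.percentile(scores, percentile)

# Toy next-token distributions from a hypothetical auxiliary LM
# (4 generation steps over a 3-word vocabulary):
probs = np.array([
    [0.98, 0.01, 0.01],  # near-deterministic step -> low self-information
    [0.34, 0.33, 0.33],  # high-entropy step
    [0.90, 0.05, 0.05],
    [0.25, 0.40, 0.35],
])
token_ids = [0, 2, 0, 2]  # tokens actually generated at each step

scores = self_information(probs, token_ids)
mask = high_info_mask(scores)
# The two "flat" steps (indices 1 and 3) score highest and get masked.
```

The masked positions would then be filled in by a paraphrasing model; everything past the scoring step is outside this sketch.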
Rebuttal 1: Rebuttal: We are thankful to the reviewer zCjb for the time spent reviewing our paper. Due to the reply character limit, we address the main concerns below and put the references in an anonymous link:

> Q1: No theoretical analysis or direct experimental evidence on the high-entropy assumption

A1: The use of high-entropy embeddings is not an assumption but a well-established design choice in prior watermarking research [1–4, 6, 7]. Section 3 of the KGW paper [1] provides a detailed theoretical framework. Notably, we are the first to identify this design as a potential vulnerability exploitable by attackers. To further support our results, we extend the ablation study in Table 4 with mask text and report the z-score, which reflects the remnant of green tokens.

| Text | Attack Success Rate | Average *z*-score |
|--------------------|---------------------|-------------------|
| Human-written Text | N/A | 0.12 |
| Reference Text | 64% | 3.75 |
| Attack Text | 94% | 1.85 |
| Mask Text | 100% | 0.66 |

> Q2: Clarification regarding “targeted” attack

A2: We clarify that prior paraphrasing attacks, such as DIPPER [5] and GPT-Paraphraser, rely entirely on the LLM to rewrite text without control over which parts are modified, making them untargeted. In contrast, our method treats watermark removal as a targeted problem, selectively rewriting tokens likely to carry the watermark signal. Experiments show this targeted strategy is more effective.

> Q3: The motivation gap.

A3: We clarify that the use of “high entropy” in prior work is heuristic. Specifically, none of the existing watermarking methods explicitly compute entropy, making it reasonable to explore metrics under this heuristic. Entropy and self-information are mathematically related; we find that self-information provides a more direct, token-level signal that better aligns with the actual greenlist selection process.
Therefore, our motivation and implementation are consistent: we empirically identify a metric that captures the core vulnerability and reflects the intended intuition.

> Q4: Why can one assume the model predicts the next words with equal probability in Appendix G?

A4: We clarify that this is a deliberate simplification of high entropy, and our formal proof in Appendix H does not build on such an assumption. The uniform distribution is the theoretical upper bound of entropy for a given probability space and helps illustrate high-entropy scenarios where probability mass is thinly spread. In such cases, even small probability changes can significantly affect self-information. Thus, this approximation highlights an extreme, simplified case to clarify the scaling behavior under high entropy for readers not familiar with watermarking.

> Q5: It's not reasonable that the attackers use much larger models.

A5: Our work is aligned with established settings in prior related studies [5–9]. We emphasize that watermark robustness mainly depends on the algorithm design and hyperparameters, not the choice of generation LLM. The adaptive watermark mentioned in [4] uses the same evaluation setting. Notably, our method significantly reduces computational and time costs, as shown in Appendix B. We would appreciate references that follow the reviewer's proposed setting regarding LLM watermark robustness evaluation.

> Q6: Evaluate the accuracy of the detected watermark tokens

A6: Please see the A1 response.

> Q7: Adaptive watermarks experiments

A7: We appreciate the reviewer’s suggestion. We show the requested results below; the experiment follows our main experiment setting, and the threshold is 0.75. We report the attack success rate.
**As explicitly mentioned in the abstract of the work [4], this watermark is embedded in high-entropy text.** We would appreciate it if the reviewer could clarify what “high-entropy loss” is and provide a specific reference that does not adopt high-entropy embedding, and we will try our best to include the corresponding experiments during the discussion window.

| Adaptive Watermark | ASR |
|---------------------|--------|
| Word delete | 5.6% |
| Synonym | 92.4% |
| GPT-4o Paraphraser | 61.4% |
| DIPPER-1 | 60.6% |
| DIPPER-2 | 65.6% |
| SIRA-Tiny | 96.2% |
| SIRA-Small | 98.2% |

> Q9: Why can the watermark defenders see the attack text when they design the detector at Line 192?

A9: We thank the reviewer for pointing this out. This was a typo: the goal of the detector is to distinguish between watermarked and non-watermarked text. We will revise this part accordingly.

> Q10: Typo, figure and symbol unclear

A10: We sincerely thank the reviewer for pointing these out. We will correct them in our revised manuscript.

---

Anonymous Reference Link: https://docs.google.com/document/d/1t1HxJ5KkCydhf_AwUdqOC8R1c0bwY024Ryr_iTZo-2A/edit?usp=sharing

---

Rebuttal Comment 1.1: Comment: Thank the authors for the reply. I will maintain my score. I still think it's unreasonable for attackers to use much larger models. As for the adaptive watermarks, I was referring to cases where the watermark designers know this attack strategy and propose an adaptive defense against it.

---

Reply to Comment 1.1.1: Comment: We are thankful to the reviewer **zCjb** for the time spent reviewing our paper. We address the concerns and questions below:

> **Q1: Large model vs smaller attacker experiment**

**A1:** We would like to respectfully point out that the setting where attackers are restricted to models smaller than the generation model has not been adopted in any prior work, to the best of our knowledge.
Had we adopted this setting in our original paper, it would have been hard to find suitable baseline methods to compare against. We conduct an experiment using LLaMA2-7B as the generation model and SIRA-Tiny (3B) as the attack method. The experiment follows the same setting as our main experiments, and we use 200 samples from the C4 dataset. Notably, the **baselines DIPPER and ChatGPT are larger than LLaMA2-7B**. Our findings are consistent with those in [1]: using a larger generation model does not necessarily make the watermark more robust. The results are shown in the table below; this further demonstrates the **strength of our attack** and questions the **necessity of assuming a stronger attacker**.

| Attack | KGW | Uni | UPV | EWD | DIP | SIR | EXP |
|-------------|------|------|------|------|------|------|------|
| Del | 23.1 | 1.8 | 6.2 | 20.8 | 56.4 | 42.8 | 9.6 |
| Syn | 84.8 | 17.2 | 65.4 | 75.4 | 99.8 | 82.0 | 52.6 |
| GPT | 98.8 | 62.6 | 72.4 | 90.4 | 99.6 | 60.2 | 73.6 |
| DIPPER-2 | 94.4 | 45.8 | 60.2 | 89.0 | 99.6 | 62.6 | 81.2 |
| SIRA-TINY | 96.8 | 87.0 | 83.6 | 97.6 | 99.8 | 75.4 | 90.8 |

> **Q2: Defender knows the attack strategy**

**A2:** We would first like to emphasize that, to the best of our knowledge, **all existing text watermarking methods adopt high-entropy embedding strategies**. We would be very grateful if the reviewer could kindly provide a specific reference. **We will be more than happy to run the corresponding experiments and include the results in the revised version.**

We now consider the case where the **defender is aware of our attack strategy**. The answer is that enforcing such a watermark in **low-entropy regions** of text would **significantly degrade generation quality** and may even **lead to hallucinations**. **Watermarking methods work by modifying the model’s output probabilities.** However, in low-entropy contexts, such manipulation of the logits can result in **unnatural token choices that compromise output quality, and is thus not feasible**.
For example, given the prompt **"1 + 1 ="**, the token **"2"** has overwhelmingly high probability and low entropy. Forcing the model to deviate from this most probable continuation, e.g., generating **"3"** or **"one"**, would yield **semantically or factually incorrect outputs**, harming both **fluency and accuracy**. Moreover, **low-entropy contexts inherently constrain token choice**, making it difficult to encode meaningful watermark signals **without being detectable by users** or **significantly increasing perplexity**. This not only weakens the effectiveness of watermarking but also leads to a **high false positive rate during detection**. A **rigorous theoretical treatment** of these challenges is given in **KGW [2]**.

**We are thankful for the reviewer's further feedback and would sincerely appreciate it if you could consider increasing your score to reflect the revised manuscript**.

---

[1] Liu, Aiwei, et al. *"An unforgeable publicly verifiable watermark for large language models."* ICLR (2023). [2] Kirchenbauer, John, et al. *"A watermark for large language models."* ICML (2023)
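For readers unfamiliar with green-list detection, the z-scores quoted in A1 above presumably follow the standard KGW detector statistic, $z = (|s|_G - \gamma T)/\sqrt{T\gamma(1-\gamma)}$, where $\gamma$ is the green-list fraction and $T$ the number of scored tokens. A minimal sketch with illustrative counts (not numbers from the paper):

```python
import math

def kgw_z_score(green_count, total_tokens, gamma=0.5):
    """One-proportion z-test on the green-token count (KGW-style detector)."""
    expected = gamma * total_tokens
    std = math.sqrt(total_tokens * gamma * (1.0 - gamma))
    return (green_count - expected) / std

# Illustrative counts (not from the paper):
z_marked = kgw_z_score(127, 200)  # ~3.82 -> flagged as watermarked
z_clean = kgw_z_score(103, 200)   # ~0.42 -> consistent with human text
```

Under this statistic, driving the z-score of attacked text down toward that of human-written text (as in the A1 table) is exactly what defeats the detector.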
Summary: The paper introduces SIRA, a novel text watermark attack method that leverages the concept of self-information to efficiently and effectively remove watermarks from text generated by large language models. The authors conduct systematic experiments to demonstrate the effectiveness of their approach. Claims And Evidence: The paper highlights the vulnerability of watermarks placed on high self-information tokens, which forms the basis for SIRA as a targeted attack method. This claim is well supported by the results presented in Table 3. The authors also argue that self-information is better than entropy for identifying watermarked tokens, which is backed by the experiments in Appendix G, Table 9. Another claim is that even when the distribution of the attack model differs from the generation-time distribution, SIRA can still identify watermark locations, and larger attack models better estimate the generation distribution, leading to improved watermark removal. This claim is supported by the results in Figure 2. However, there are a few claims that lack strong support. For instance, the authors state that even their most lightweight SIRA-Tiny method outperforms all previous approaches, but this comparison is tricky due to the varying attack strengths and inconsistent rankings across different metrics (e.g., Appendix F shows that on the Unigram watermark, SIRA-Large achieves better rewrite quality than DIPPER2, while Figure 3b shows that SIRA-Large has slightly worse quality than DIPPER2 on the same watermark). Another claim that lacks clear evidence is the $0.88 per million tokens cost, which is highlighted in the abstract but without a clear calculation provided. There is also a claim in Appendix G, Table 9, that compares filtering potential green tokens using self-information, entropy, and probability. I fully understand the self-information vs entropy part. 
The authors also attempt to prove that self-information is more accurate than probability, but I didn't follow this part. Since self-information and probability have a monotonic relationship, using percentiles should yield the same results. Without the code to reproduce the experiments and find the details, it is difficult to understand this discrepancy. The authors should clarify this point and potentially consider the possibility of numerical instability in their experiments. Methods And Evaluation Criteria: The methods and evaluation criteria used in the paper seem to be sound and appropriate. Theoretical Claims: Although Appendix H contains some bounds, they are not strongly connected to the main text. The paper's main contribution lies in its practical aspects rather than its theoretical claims. Experimental Designs Or Analyses: Some of the claims in the paper are well-supported by the experimental designs and analyses, while others have weaker support. Please refer to the "Claims And Evidence" section for a detailed discussion. Supplementary Material: I reviewed Appendices B, D, E, F, and G. Relation To Broader Scientific Literature: The paper cites and compares its approach to many relevant baseline methods. Essential References Not Discussed: I am not aware of any. Other Strengths And Weaknesses: The paper presents a novel method and conducts systematic experiments to support its claims. Some claims are well-supported, while others have weaker support. Please refer to the "Claims And Evidence" section for a detailed discussion. Other Comments Or Suggestions: I have no additional comments. Questions For Authors: I noticed that the current s-BERT measurement compares the similarity to the non-watermarked text, which leads to the observation that the s-BERT score after paraphrasing attacks is even higher than the no-attack case. Also, in Table 2, the s-BERT score is higher with more attacks.
This is because strong paraphrasing models are used, making the score less informative about the degree of semantic preservation after the attack and more about the ability of the paraphrasing model. Why not measure the similarity with respect to the watermarked text instead? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are thankful to the reviewer **RVPR** for the valuable time and effort spent reviewing our paper. We elaborate on the questions raised by the reviewer below:

> **Q1: Clarification regarding claims: SIRA-Tiny method outperforms all previous approaches**

**A1:** We would like to clarify that this statement only refers to the fact that SIRA-Tiny outperforms previous methods in terms of ASR under the same watermark strength. We agree that a more precise statement would be better. We will revise the text to reflect this more precise phrasing to avoid overclaiming, and add a corresponding note in Appendix C, Table 6.

> **Q2: Lack of support: Claim SIRA-Large achieves better quality than DIPPER2, while Figure 3b shows that SIRA-Large has slightly worse quality than DIPPER2 on the same watermark**

**A2:** We would like to clarify that in watermarking, text quality typically refers to the perplexity metric, which is shown in Figure 3a. In contrast, Figure 3b, which the reviewer referenced, reports S-BERT scores that evaluate semantic preservation, a different aspect from text quality. Therefore, we respectfully argue that our claim that SIRA-Large achieves better quality than DIPPER2 remains valid. We appreciate the reviewer's feedback and will revise the related part to avoid potential misunderstandings.

> **Q3: Computation regarding the cost**

**A3:** We provide a more comprehensive cost analysis here and will include it in the relevant section of the manuscript. Specifically, we estimate the cost of processing 1M tokens of watermarked text using third-party services. According to OpenAI’s pricing, using the GPT Paraphraser (GPT-4o) would cost \$10 × 2 = \$20 (input + output). In contrast, our method (SIRA-Small) costs \$0.22 × 2 (input + output) × 2 (two iterations) = \$0.88, based on the AWS Bedrock LLaMA3-8B pricing. The cost could be further reduced by using SIRA-Tiny (LLaMA3-3B).
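The cost estimate in A3 reduces to simple per-token arithmetic; a quick check using the prices quoted in the rebuttal (prices are as stated there and subject to change):

```python
# Prices per 1M tokens, as quoted in A3 (subject to change).
GPT4O_PER_M_TOKENS = 10.00      # GPT-4o, per direction (input or output)
LLAMA3_8B_PER_M_TOKENS = 0.22   # AWS Bedrock LLaMA3-8B, per direction

gpt_paraphraser_cost = GPT4O_PER_M_TOKENS * 2      # input + output
sira_small_cost = LLAMA3_8B_PER_M_TOKENS * 2 * 2   # (input + output) x 2 iterations

assert gpt_paraphraser_cost == 20.00
assert round(sira_small_cost, 2) == 0.88
```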
> **Q4: Why is the self-information result different from probability?**

**A4:** We clarify that the self-information in our method is conditional and chunk-based (i.e., calculated and percentile-ranked per segment), whereas the raw token probabilities are derived from the LLM’s output logits. This means that the statistical basis for the two measures is not aligned: the conditioning context and granularity differ. As a result, percentile ranks based on self-information may diverge from those based on raw probability. This mismatch leads to divergent percentile rankings and explains why filtering using self-information yields different token selections than filtering using raw probability. We will release our code to ensure reproducibility and transparency, and we appreciate the suggestion regarding potential numerical differences; we will further increase the sample size to reduce numerical instability.

> **Q5: Why not measure the similarity with respect to the watermarked text instead?**

**A5:** **For the attack cases shown in Figure 3b and Table 2, our SBERT-based similarity is indeed computed between the attack text and the watermarked text, as proposed by the reviewer**. As SBERT needs a pair of texts, "No Attack" is a special case; comparing the watermarked text to itself would be meaningless. Therefore, for “No Attack” we compare the watermarked text with the non-watermarked text. This term aims to reflect how the watermark algorithm changed the semantics. We acknowledge that this was not clearly explained in the paper. We will clarify this to avoid potential misunderstanding.
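The point in A4, that chunk-wise percentile ranks of self-information need not match text-wide ranks of raw probability even though the two are monotonically related, can be seen with a toy example (synthetic probabilities; the 50% thresholds are ours for illustration):

```python
import numpy as np

# Toy token probabilities for two text chunks with different base rates:
chunk_a = np.array([0.60, 0.50, 0.40, 0.30])  # "easy" low-surprise segment
chunk_b = np.array([0.20, 0.15, 0.10, 0.05])  # "hard" high-surprise segment
probs = np.concatenate([chunk_a, chunk_b])
self_info = -np.log(probs)

# Global filter: tokens in the bottom 50% of raw probability, text-wide.
global_mask = probs <= np.percentile(probs, 50)

# Chunk-wise filter: top 50% of self-information within each chunk.
chunk_mask = np.concatenate(
    [s >= np.percentile(s, 50) for s in (self_info[:4], self_info[4:])]
)
# The selections differ: the chunk-wise filter also flags the relatively
# surprising tokens inside the "easy" chunk, while the global filter
# flags the entire "hard" chunk.
```

So even with a monotonic transform, conditioning the percentile on the segment changes which tokens pass the filter, which is the discrepancy the reviewer asked about.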
MATS: An Audio Language Model under Text-only Supervision
Accept (poster)
Summary: The authors propose to use pre-trained audio-text contrastive models such as CLAP to achieve text-only supervision, with a mechanism that pairs strongly related noisy text with audio to introduce robustness. Claims And Evidence: The authors compare the proposed method to several other audio large language models and show that it can perform similarly to models trained on audio data during the training phase. This provides evidence that the proposed method works to some degree. Methods And Evaluation Criteria: The authors leverage various and diverse audio tasks and datasets for evaluation, which yields more robust and generalizable results. Theoretical Claims: The authors provide a theoretical analysis of generalization and take the modality discrepancy into consideration; this provides a perspective on the problem beyond heuristics. Experimental Designs Or Analyses: The authors also provide MATS-Audio to show the modality gap that arises when approximating with CLAP models. This is a good additional contribution to the community. Supplementary Material: I went through all the datasets and benchmarks involved in this work and viewed some examples of interaction with the proposed systems. Relation To Broader Scientific Literature: Multimodal large language models are a popular current research topic; this work explores the modality gap in multimodal encoders and how it affects training with a large language model, which is a good contribution to the community. Essential References Not Discussed: NA Other Strengths And Weaknesses: The comparisons on various open-ended audio benchmarks such as AIR-Bench and MMAU provide a thorough understanding of the current landscape. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### We sincerely appreciate your time and effort in reviewing our manuscript. Your positive evaluation is highly encouraging. Thank you for your valuable feedback.
Summary: This paper proposes a text-only supervision method that closes the gap between the text embedding space and the audio embedding space via a mechanism called Santa. ## Update after rebuttal I deeply appreciate the authors providing additional results. It resolves my other concerns except for this one: "the connection between the bound derived and the proposed Santa method is not clear". I have read the rebuttal about this but still feel it is not that directly related, and the derivation feels a bit disjoint from the main theme of the paper. Therefore, I decided to maintain my score as it is. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. The proof of Theorem 3.1 is convincing to me, provided that the assumption of P_A(y) = P_T(y) is true. This might limit the application scenario for more general tasks (e.g. speech) but is a reasonable assumption for the experiments they conduct. Experimental Designs Or Analyses: Yes. The experiments are sound to me. Supplementary Material: No. Relation To Broader Scientific Literature: This paper has compared to a wide range of existing work that can perform audio understanding. Essential References Not Discussed: No Other Strengths And Weaknesses: Other Weaknesses: 1. I found the connection between the bound derived and the proposed "Santa" method not very clear. It reads to me as if the authors used a lot of maths to prove a bound, only to conclude that we need to bring the two spaces closer, and then propose a method that does not directly use the disc_L1 metric. Please clarify. Other Comments Or Suggestions: Lines 165-177 on the left column of page 4 seem to be a repetition of material on page 3. Line 264: identity -> identify Questions For Authors: 1. Why do you need 5M text samples? Is the description space that large? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **Q1: The proposed Santa does not directly use the disc_L1 metric.**

1. In our design, we only have access to text-only data during training, making it impractical to directly use the disc_L1 metric to reduce the modality gap. Instead, as shown in Figure 3 of the main paper, our Santa achieves a similar effect, effectively reducing the distance between audio and text embeddings.
2. Further, Table 1 presents the relevant statistics. Specifically, we randomly select 350 samples from the `AudioAIA` dataset and generate their language embeddings $Z_T$ and audio embeddings $Z_A$ using the CLAP encoder. Santa is then applied to the audio embeddings, denoted as $f_{\mathrm{Santa}}(Z_A)$. Next, we calculate the L1 distance between the prototype of the audio embeddings $Z_A/f_{\mathrm{Santa}}(Z_A)$ and the language embeddings $Z_T$. As shown in Table 1, Santa effectively reduces the distance between audio and text embeddings, validating its effectiveness in bringing the two spaces closer together.

| Method | L1 distance |
| ----------- | ----------- |
| Origin CLAP | 18.35 |
| Santa | 10.48 |

_Table 1: Statistics on the modality gap within CLAP. **Note:** We randomly select 350 samples to calculate the L1 distance._

### **Q2: Lines 165-177 on the left column of page 4 seem to be a repetition of material on page 3; line 264: identity -> identify**

Thanks for your reminder. We will update these in the next version.

### **Q3: Why do you need 5M text samples? Is the description space that large?**

1. Using 5M text samples aims to improve the generalization of MATS. With approximately 7B parameters, MATS-LLaMA requires a substantial amount of training data to effectively scale its capacity. As shown in Table 2, most existing LALMs of similar size (around 7B parameters) are trained on over 5M audio-text pairs.
2. Furthermore, we conducted an ablation study to assess the performance of MATS-LLaMA with different training dataset sizes.
As shown in Table 3, the model's performance improves as the dataset size increases, especially in open-ended scenarios. This result validates the necessity of using 5M text samples.

| Model | Capacity | #Samples |
| -------------- | -------- | ------- |
| Audio Flamingo | 2.2B | 5.9M |
| GAMA | 7B | 8.7M |
| LTU | 7B | 5.6M |
| LTU-AS | 7B | 9.6M |
| SALMONN | 7B | 5M |
| MATS-LLaMA | 7B | 5M |

_Table 2: Training dataset sizes of current LALMs_

| Ratio (%) | AudioCaps (CIDEr) | AIRBench-Sound (GPT-4) | MusicCaps (ROUGE-L) | ESC-50 (ACC) |
| --------- | ----------------- | ---------------------- | ------------------- | ------------ |
| 50% | 0.697 | 6.25 | 16.4 | 0.87 |
| 75% | 0.705 | 6.30 | 17.7 | 0.88 |
| 100% | **0.735** | **6.43** | **18.7** | **0.88** |

_Table 3: Performance of MATS-LLaMA under different training data ratios_
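The L1-distance statistic in the reply to Q1 above compares the prototype (mean) of one embedding set against the other set. A minimal sketch with synthetic embeddings; `santa_stub` is only a stand-in for the actual Santa mechanism, and the constant offset is an assumed toy model of the modality gap:

```python
import numpy as np

def prototype_l1(emb_a, emb_b):
    """L1 distance between the prototypes (means) of two embedding sets."""
    return np.abs(emb_a.mean(axis=0) - emb_b.mean(axis=0)).sum()

rng = np.random.default_rng(0)
dim = 1024  # CLAP embedding dimension mentioned in the thread
z_text = rng.normal(0.0, 0.02, size=(350, dim))  # 350 samples, as in Table 1
z_audio = z_text + 0.05  # constant offset as a toy model of the modality gap

def santa_stub(audio, text_bank, alpha=0.5):
    """Stand-in for Santa: blend each audio embedding with the text prototype."""
    return alpha * audio + (1 - alpha) * text_bank.mean(axis=0)

gap_before = prototype_l1(z_audio, z_text)                      # 1024 * 0.050 = 51.2
gap_after = prototype_l1(santa_stub(z_audio, z_text), z_text)   # 1024 * 0.025 = 25.6
```

The blending weight and offset are arbitrary; the sketch only shows how such a statistic shrinks when audio embeddings are pulled toward the text space, mirroring the 18.35 to 10.48 drop reported in Table 1.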
Summary: This paper proposes MATS, an audio-language multimodal large language model (LALM) that is trained solely on text data while achieving strong performance on various audio comprehension tasks. Unlike conventional LALMs, which require a large corpus of audio-language pairs, MATS leverages CLAP (Contrastive Language-Audio Pretraining) to align audio and language modalities without audio supervision. During training, MATS only uses textual data, where CLAP's language encoder extracts text embeddings, which are further processed using a Transformer-based mapper before being fed into the LLM. To mitigate the modality gap between CLAP’s audio and text embeddings, a Gaussian noise injection strategy is applied to text embeddings during training. At inference time, audio inputs are encoded using CLAP’s audio encoder, and the Santa mechanism is introduced to bridge the modality gap. Santa retrieves semantically related caption embeddings from a clustered database and balances them with the input audio embedding. The final input to the LLM consists of both the audio embedding and Santa's retrieved text embedding, effectively improving generalization. Extensive zero-shot evaluations demonstrate that MATS achieves performance comparable to state-of-the-art audio-supervised models on multiple benchmarks, including audio classification, captioning, and open-ended question answering. Notably, MATS surpasses SALMONN and Qwen-Audio-Chat on the MMAU benchmark while being trained only on text data, showcasing its ability to learn audio semantics without direct audio supervision. Claims And Evidence: The paper presents experimental results comparing MATS with audio-supervised models, demonstrating that the proposed text-only training method achieves comparable performance. Additionally, the authors claim that the Santa mechanism effectively mitigates the modality gap and outperforms previous text-only audio LLM approaches (as shown in Table 2 and Table 4). 
However, I have concerns regarding the justification of the latter claim. From Table 2, MATS appears to perform similarly to previous text-supervised models, and these models are not compared in tasks beyond audio captioning. While I acknowledge that previous text-supervised models primarily focus on captioning, a stronger justification for Santa's superiority is needed. Specifically, since Santa is the key architectural difference from previous text-supervised approaches, a more thorough comparison would be beneficial. This could be done by training an ablated system that replaces Santa with mechanisms proposed in prior works, evaluating it on the broader set of tasks used in this paper. Such an experiment would provide clearer evidence of Santa’s advantage over prior approaches. Additionally, I am unclear about the discrepancy between the first column of Table 4 and DRCap. What is the exact difference between the mechanism in the first column of Table 4 and DRCap? The performance gap between the first column of Table 4 and DRCap in Table 2 is quite large, and it would be helpful to clarify why this occurs. Methods And Evaluation Criteria: Please see "Claims And Evidence" section. Theoretical Claims: I read through the theoretical claims in 3.3. I think it is correct but I am not very certain. Experimental Designs Or Analyses: Please see "Claims And Evidence" section. Supplementary Material: I skimmed through the appendix. Relation To Broader Scientific Literature: The key contribution of this paper is proposing a method to train a text-only supervised audio LLM that generalizes to a broader range of audio-related tasks by constructing a more extensive dataset, as well as introducing an improved inference-time mechanism (Santa) to reduce the modality gap between audio and text embeddings. 
The text-only training framework leverages the recent CLAP model (Elizalde et al., 2023), building upon prior works such as Pengi (Deshmukh et al., 2023) and LTU (Gong et al., 2024). The Santa mechanism further enhances previous memory-based or noise-based methods (e.g., DRCap by Li et al., 2024; NoAudioCaptioning by Deshmukh et al., 2024) by integrating clustering and weighted embedding retrieval, explicitly addressing limitations in existing methods regarding the preservation of audio semantic information. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: I would recommend the authors improve Figure 2 to better highlight the correspondence between the upper-right "Modal-Transfer Method" block and the rest of the figure. Currently, there is no clear segmentation between the Santa mechanism and the noise injection component, making it difficult to distinguish these parts. Additionally, clarifying the connection between the label "modal-transfer method" and the Santa/noise injection block would improve the figure's readability. Questions For Authors: 1. In line 252: > "However, due to the limited representational power of individual language embedding, this strategy is prone to retrieving the texts with insufficient semantic relevance, thereby affecting the effectiveness of audio-language modality alignment." Could you further elaborate or provide evidence explaining **why** individual language embeddings from CLAP have limited representational power? What factors lead to insufficient semantic relevance in this context? 2. In line 405, the authors state that the variance is a hyperparameter and searched for the optimal value. However, in line 169, the author introduces variance as determined by calculating the infinity norm between audio and language embeddings over a set of 30 randomly selected samples. How exactly is noise variance determined? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **W1: Training an ablated system that replaces Santa with mechanisms of prior works (PromptAAC and DRCap)**

Following your suggestion, we replace Santa with the modality-gap reduction mechanisms of PromptAAC and DRCap, referred to as MATS-PromptAAC and MATS-DRCap. As shown in Table 1, Santa achieves the best performance on both closed-ended and open-ended tasks, which validates that Santa outperforms previous text-only methods.

- DRCap enhances audio captioning performance by leveraging a benchmark-specific memory bank, fully mapping the audio embedding to weighted language embeddings. But DRCap discards the original audio embedding, making its performance heavily dependent on the relevance between the memory bank and the test benchmark. In a **multi-task setting**, however, the memory bank is no longer tailored to a single benchmark but instead aggregates information from multiple benchmarks. As a result, the mapping process may introduce unintended noise, projecting the audio embedding into a less relevant textual embedding space and leading to a performance drop.
- PromptAAC adopts an augmentation-based approach that involves injecting noise and substituting similar language inputs. It retrieves audio events by matching audio embeddings with language embeddings derived from the 527 predefined audio labels in AudioSet. However, the limited variety of audio events restricts the diversity of the retrieved information, resulting in inferior performance compared to Santa.
| Benchmark | ESC-50 (ACC) | AudioCaps (CIDEr) | AIRBench-Sound (GPT-4) | AIRBench-Music (GPT-4) |
| - | - | - | - | - |
| MATS-PromptAAC | 0.77 | 0.593 | 6.07 | 5.28 |
| MATS-DRCap | 0.84 | 0.619 | 5.83 | 5.29 |
| **MATS-LLaMA (Ours)** | **0.88** | **0.735** | **6.43** | **5.76** |

_Table 1: Comparison results on CLS, CAP, and AQA benchmarks._

### **W2: What is the difference between the mechanism in the first column of Table 4 and DRCap?**

- DRCap introduces the Retrieval-Augmented Generation (RAG) and Projection-Based Decoding (PD) strategies. In Table 4 of the main paper, we only use the PD strategy (denoted as **Memory-based**).
- We further report the performance of DRCap (RAG+PD), where we replace Santa with DRCap in our framework. As shown in Table 2, it still underperforms DRCap (as reported in the original paper) in the single-task setting.
- This is because DRCap fully discards the original audio embedding during inference, making its performance heavily dependent on the relevance between the memory bank and the test benchmark. However, in a multi-task setting, the memory bank is no longer tailored to a specific benchmark but instead integrates information from multiple benchmarks. This broader integration can introduce unintended noise during the mapping process, projecting the audio embedding into a less relevant textual embedding space. As a result, MATS-DRCap, trained in a multi-task setting, experiences a performance drop compared to DRCap.

| Method | CIDEr | SPICE | SPIDEr |
|-|-|-|-|
| Memory-based (Only PD) | 0.234 | 0.094 | 0.164 |
| MATS-DRCap (PD+RAG) | 0.619 | 0.175 | 0.397 |
| DRCap (single-task setting, reported in original paper) | 0.718 | **0.186** | 0.452 |
| **MATS-LLaMA** | **0.735** | 0.171 | **0.453** |

_Table 2: Ablation Study on AudioCaps._

### **W3: Improve Figure 2.**

Thanks for your suggestion. We will update the figure in the next version to better illustrate the "Modal-Transfer module" and its connection to noise injection/Santa.
### **Q1: Could you elaborate or provide evidence explaining why individual CLAP text embeddings have limited representational power? What factors lead to insufficient semantic relevance?**

1. To validate this, we perform a retrieval task between CLAP audio embeddings and CLAP text embeddings on the Clotho test set. Specifically, we compare the error rates of two strategies: top-K retrieval, and K-means clustering followed by top-K retrieval. As shown in Table 3, the K-means method achieves a lower error rate in capturing semantically relevant captions, effectively mitigating the impact of irrelevant textual information caused by the limited representational capacity of individual language embeddings.
2. This may be attributed to the CLAP text encoder compressing textual information into a 1024-dimensional embedding space. Such aggressive dimensionality reduction leads to a loss of fine-grained semantic detail, resulting in insufficient representational capacity of an individual text embedding.

| Method | Error Rate@5 |
| - | - |
| K-means-based | 18.3% |
| TopK | 23.3% |

_Table 3: Retrieval Error Rate@5 on the Clotho test set._

### **Q2: How exactly is the noise variance determined?**

The variance is treated as a hyperparameter. As suggested by [1], the optimal value roughly aligns with the strategy we used (computing the infinity norm between audio and text embeddings over 30 randomly selected samples), as also shown in Figure 4 of the main paper.

[1] Training audio captioning models without audio.

---

Rebuttal Comment 1.1: Comment: Thank you for clarifying my questions and addressing my concerns. I have increased my score to 3 accordingly. It would be great to see those clarifications included in the updated paper.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful and constructive feedback. We are especially grateful for your recognition of our work. Your feedback was valuable in helping us improve the quality and clarity of the paper.
And we will incorporate the clarifications and improvements into the revised version of the paper. Thank you again for your time and efforts in reviewing our submission.
AutoGFM: Automated Graph Foundation Model with Adaptive Architecture Customization
Accept (oral)
Summary: This paper introduces an automated graph foundation model with adaptive graph neural architecture customization. The authors address the architecture inconsistency problem in graph foundation models. The proposed method consists of graph encoder, architecture customization, and curriculum training. The theoretical analysis and empirical results on multiple datasets seem to show the effectiveness of the proposed approach. Claims And Evidence: The claims regarding architecture inconsistency and the benefits of the proposed method are grounded with theoretical justifications and experimental comparisons. The authors demonstrate that fixed architectures underperform on diverse datasets and propose a method to adapt architectures dynamically. Methods And Evaluation Criteria: The proposed methodology is sound, leveraging contrastive learning and mutual information constraints. The evaluation benchmarks are relevant, with a reasonable selection of baselines. However, the choice of hyperparameters for different datasets is not well discussed, and sensitivity analysis should be enhanced. Theoretical Claims: The theoretical claims, particularly the propositions regarding architecture inconsistency, are mathematically sound. Experimental Designs Or Analyses: The experimental setup is comprehensive, covering multiple datasets and comparisons. The results indicate superior performance. Supplementary Material: I reviewed portions of the appendix, mainly on proofs and experimental setup details. Relation To Broader Scientific Literature: The paper builds on prior work in graph foundation models and GraphNAS, providing a novel adaptation mechanism. The references are mostly relevant. Essential References Not Discussed: The paper could benefit from citing more recent advances in self-supervised learning techniques for GNN adaptation. 
Other Strengths And Weaknesses: A key strength is the novel combination of graph foundation models and GraphNAS, which opens a new direction on this topic. Other Comments Or Suggestions: Clarify the hyperparameter tuning process and discuss more on ablation studies. Questions For Authors: How does the proposed method compare in terms of computational efficiency with other NAS-based approaches? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer for the detailed comments and insightful questions. We respond to each of the reviewer’s comments point by point as follows.

> 1. "Clarify the hyperparameter tuning process and discuss more on ablation studies."

Thank you for bringing this to our attention. Regarding the hyperparameter tuning process, we pretrain the GFA model with $\lambda,\beta \in \\{1e-1,1e-2,1e-3,1e-4\\}$. Subsequently, we fine-tune these pretrained models and evaluate their performance on the validation set to empirically determine the hyperparameters. Additionally, we provide a further analysis of the ablation studies below.

(i) The disentangled contrastive graph encoder module is designed to extract discriminative invariant and variant patterns from the data by pulling similar samples closer and pushing dissimilar samples apart in the latent space. Removing this module impairs the extraction of invariant patterns and reduces the distinguishability between patterns extracted from different datasets, ultimately harming the effectiveness of architecture prediction.

(ii) The invariant-guided architecture customization module serves to shield the architecture $A$ from the influence of the variant pattern $Z_V$ given the invariant pattern $Z_I$. The substantial performance decrease observed upon removing this module highlights the importance of effectively isolating architecture predictions from the influence of $Z_V$, reinforcing the critical role of this module in ensuring the invariance conditions of the captured patterns.

(iii) The curriculum architecture customization mechanism aims to reduce data dominance in the architecture search process. Removing this module causes certain operations, which perform well on specific datasets during early training stages, to dominate the search process. Consequently, other datasets may neglect potentially beneficial operations.

> 2.
"How does the proposed method compare in terms of computational efficiency with other NAS-based approaches?"

We are grateful for your feedback. In the original manuscript, we analyze the time complexity of GFA as ${O}(|E|d_e +|V|d_e^2 +|\\mathcal{O}|^2d_e +|\\mathcal{O}|(|E|d_a +|V|d_a^2))$. Considering the term with the largest complexity, this is approximately ${O}(|\\mathcal{O}|(|E|d +|V|d^2))$. The time complexity of most existing GNN methods is typically ${O}(|E|d +|V|d^2)$, and GNAS methods also exhibit an approximate complexity of ${O}(|\\mathcal{O}|(|E|d +|V|d^2))$. Thus, our method's complexity is comparable to existing GNAS approaches.

> 3. "However, the choice of hyperparameters for different datasets is not well discussed, and sensitivity analysis should be enhanced."

Thank you for your suggestion. Since GFA is trained jointly across all datasets, the hyperparameters $\lambda$ and $\beta$ are consistent for all datasets. Specifically, we set $\lambda$ to $1e-3$ and $\beta$ to $1e-1$. We provide a more detailed discussion of the hyperparameter sensitivity below. The hyperparameter $\lambda$ in Eq. 17 controls the trade-off between $L_{task}$ and $L_{dis}$. Specifically, $L_{task}$ aims to maximize the mutual information between the invariant pattern $Z_I$ and the architecture $A$, ensuring that $Z_I$ is sufficient to predict $A$. In contrast, $L_{dis}$ aims to minimize the mutual information between the invariant pattern $Z_I$ and the variant pattern $Z_V$, thereby enabling the extraction of two disjoint patterns from the data. We adjust its value within the set $\\{1e-1,1e-2,1e-3,1e-4\\}$. As shown in Figure 5, when $\lambda$ is set too low, the model's performance deteriorates, confirming that proper disentanglement of $Z_I$ and $Z_V$ is essential for effective architecture prediction.
Conversely, when $\lambda$ is set too high, performance also declines, indicating that while ensuring the separation between the two patterns, it is equally important that the $Z_I$ retains sufficient information to predict the architecture. Overall, $\lambda$ is an important hyperparameter for balancing the sufficiency and disentanglement. The hyperparameter $\beta$ in Eq.17 controls the trade-off between $L_{task}$ and $L_{inv}$. Specifically, $L_{inv}$ aims to shield architecture $A$ from the influence of $Z_V$ given the invariant pattern $Z_I$. As demonstrated in Figure 5, setting $\beta$ too low results in degraded model performance, underscoring the importance of effectively shielding $A$ from the influence of $Z_V$ given $Z_I$. Thus, $\beta$ is also a critical hyperparameter for balancing the sufficiency and invariance conditions of the patterns captured by the model. > 4. "more recent advances in self-supervised learning techniques for GNN adaptation." We sincerely appreciate your valuable suggestion. In response, we will include a paragraph in the Related Works section of our revised manuscript, discussing recent advances in self-supervised learning techniques for GNN adaptation. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. The authors’ responses have addressed my concerns. I would like to raise my overall assessment to this work.
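Piecing together the descriptions of Eq. 17 in this rebuttal, the overall objective has roughly the form below; this is our hedged reconstruction from the stated trade-offs, and the exact formulation is Eq. 17 of the paper.

```latex
\mathcal{L} = \mathcal{L}_{task} + \lambda \, \mathcal{L}_{dis} + \beta \, \mathcal{L}_{inv},
\qquad \lambda, \beta \in \{10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}\},
```

where $\mathcal{L}_{task}$ maximizes the mutual information between $Z_I$ and $A$, $\mathcal{L}_{dis}$ minimizes the mutual information between $Z_I$ and $Z_V$, and $\mathcal{L}_{inv}$ shields $A$ from $Z_V$ given $Z_I$.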
Summary: The paper introduces a framework for adapting GNN architectures dynamically to improve generalization in GFMs. Existing graph neural architecture search methods struggle to design architectures for GNN-based GFMs. This paper addresses the issue of architecture inconsistency by identifying an invariant relationship between graphs and architectures. The authors propose GFA, an automated approach that tailors GNN architectures to different graph datasets, tasks, and domains. The key contributions of the paper include: - Proposed a disentangled contrastive graph encoder to extract invariant and variant patterns from graph data. - Proposed an invariant-guided architecture customization strategy to tailor GNN architectures in a dynamic way. - Proposed a curriculum-based architecture customization mechanism to mitigate the effects of data domination during the search process. - Provided theoretical insights demonstrating the limitations of existing GNAS methods in handling architecture inconsistency. - Conducted extensive experiments on multiple datasets showing that GFA outperforms baseline methods. Claims And Evidence: The primary claim that adaptive architectures enhance performance across various graph settings is supported by theoretical analysis and empirical results. The paper provides mathematical proofs demonstrating why existing GNAS methods struggle with architecture inconsistency, emphasizing the need for dynamic customization. Additionally, experimental evaluations on diverse datasets show significant improvements over both manually designed GNNs and existing GNAS approaches. Ablation studies seem to validate the effectiveness of the proposed method. But the analyses on the results are a little limited. Methods And Evaluation Criteria: The methodology is clearly presented and builds upon solid foundations in NAS and GNN customization. The experimental setup/evaluation is comprehensive. 
Table 1 covered all common graph tasks, like node classification, link prediction and graph classification. Theoretical Claims: The theoretical contributions are valuable. The proofs are correct after my careful check. The authors provide mathematical formulations and proofs that highlight the architecture inconsistency problem in existing GNAS methods. Their theoretical analysis shows that under the assumption that different datasets require distinct architectures, differentiable GNAS methods (e.g., DARTS) fail due to optimization conflicts. My verification confirms the correctness of these proofs, and the proposed invariant-guided architecture customization is a theoretically sound solution to this problem. Experimental Designs Or Analyses: The evaluation is thorough, but the study would benefit from detailed ablation experiments to analyze the impact of various components of the proposed method. Supplementary Material: The supplementary material provides necessary details regarding the theoretical proofs, dataset descriptions, and experimental setups. The proofs appear correct, and the dataset descriptions are sufficiently detailed to ensure reproducibility. Relation To Broader Scientific Literature: The work follows the recent trends in adaptive neural architecture design and graph foundation model. Essential References Not Discussed: I did not locate essential references missing. Other Strengths And Weaknesses: Strengths: - Strong empirical validation with extensive experiments across multiple datasets. - Novel use of curriculum learning to mitigate data domination effects in GNAS. - Theoretical insights into architecture inconsistency and its impact on GNAS. - Comprehensive experimental design, including ablation studies and few-shot learning evaluation. Weaknesses: - The discussions on the results shown in the figures or tables could be enhanced. 
- The framework figure (Figure 2) is a little simple and does not clearly show the technical details of the proposed method.
- Descriptions of the inference stage are missing.

Other Comments Or Suggestions: It would be useful to expand the discussion of Figure 4 and explain what key insights can be drawn regarding how different architectures adapt to various datasets.

Questions For Authors:
1. Can you explain what claims you want to support with the showcase in Figure 4? What important points can be derived from the showcase? I am not very clear on that.
2. How does the computational complexity of GFA compare to classical GNNs and standard GNAS methods?
3. Could you show the pipeline of GFA during the inference stage? The current algorithm and method description only focus on the training stage.

Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We would like to express our sincere appreciation to the reviewer for providing us with detailed suggestions. We have carefully reviewed each comment and offer the following responses.

> 1. "Can you explain what claims you want to support with the showcase Figure 4?"

Thank you for highlighting this point. To clearly visualize the customized architectures tailored to different datasets, we presented a heatmap in Figure 4, illustrating the choice weights of each operation at each layer. First, we observe that different graph datasets prefer distinct architectures; for example, Cora mainly prefers GraphConv and GraphSAGE, whereas these two operations are rarely selected for PubMed. This observation further supports our earlier assumption that different datasets require different architectures, and some datasets exhibit inconsistent architectural preferences. Moreover, we find that many datasets prefer varying operations across different layers. For instance, the Arxiv dataset prefers GCN in the first layer and GAT in the second layer. Such fine-grained architectural preferences are challenging to meet through manual design, highlighting the advantage of automated, customized architectures.

> 2. "How does the computational complexity of GFA compare to classical GNNs and standard GNAS methods?"

We are grateful for your feedback. In the original manuscript, we analyzed the complexity of GFA as ${O}(|E|d_e +|V|d_e^2 +|\\mathcal{O}|^2d_e +|\\mathcal{O}|(|E|d_a +|V|d_a^2))$. Considering the term with the largest complexity, this is approximately ${O}(|\\mathcal{O}|(|E|d +|V|d^2))$. The complexity of most existing GNN methods is typically ${O}(|E|d +|V|d^2)$, and GNAS methods also exhibit an approximate complexity of ${O}(|\\mathcal{O}|(|E|d +|V|d^2))$. Thus, our method's complexity is comparable to existing GNAS approaches.

> 3. "Could you show the pipeline of GFA during the inference stage?"

Thank you for highlighting this point.
During the inference stage, given an input graph, we first utilize the Disentangled Contrastive Graph Encoder to obtain its invariant pattern representation, denoted as $Z_I$. Then, $Z_I$ is fed into the Invariant Predictor within the Invariant Guided Architecture Customization module to generate a customized architecture. This customized architecture is subsequently employed as the GNN component within the GFM to perform prediction. > 4. "Ablation studies seem to validate the effectiveness of the proposed method. But the analyses on the results are a little limited." Thank you for bringing this to our attention. We provide a further analysis of the ablation studies below. (i) The disentangled contrastive graph encoder module is designed to extract discriminative invariant and variant patterns from the data by pulling similar samples closer and pushing dissimilar samples apart in the latent space. Removing this module impairs the extraction of invariant patterns and reduces the distinguishability between patterns extracted from different datasets, ultimately harming the effectiveness of architecture prediction. (ii) The invariant-guided architecture customization module serves to shield architecture $A$ from the influence of variant patterns $Z_V$ given the invariant pattern $Z_I$. The substantial performance decrease observed upon removing this module highlights the importance of effectively isolating architecture predictions from $Z_V$ influences, reinforcing the critical role of this module in ensuring the invariance conditions of captured patterns. (iii) This curriculum architecture customization mechanism aims to reduce data dominance in the architecture search process. Removing this module causes certain operations, which perform well on specific datasets during early training stages, to dominate the search process. Consequently, other datasets may neglect potentially beneficial operations. > 5. 
"The discussions on the results shown in the figures or tables could be enhanced." Thank you for your suggestion. We will add more discussions on the results shown in the figures and tables in our revised manuscript to make our paper more readable. > 6. "The framework figure 2 is a little simple and did not clearly show the technical details of the proposed method." Thank you for highlighting this point. We have refined our framework by incorporating additional technical details to better align with the content and equations presented in the Method section, specifically in the following three aspects. (1) Enhancing the clarity of the pipeline by adding more numerical indexing and clear directional arrows to improve readability. (2) Including additional equations directly in the figures, enabling readers to easily identify corresponding equations from the text. (3) Reflecting the unique advantages of our method, such as the customization of different architectures tailored to specific datasets, and the progressive architecture search process driven by curriculum learning.
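To make the inference-stage pipeline in response 3 above concrete (input graph → invariant pattern $Z_I$ → customized architecture → GFM prediction), here is a minimal sketch. Every class, method, and the toy architecture-selection rule are placeholders we introduce for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DisentangledEncoder:
    """Placeholder for the disentangled contrastive graph encoder."""
    dim: int = 8
    def invariant_pattern(self, graph_feats):
        # stand-in computation: pool node features into an invariant pattern Z_I
        return np.tanh(graph_feats.mean(axis=0))[: self.dim]

@dataclass
class InvariantPredictor:
    """Placeholder for the invariant predictor in the customization module."""
    ops: tuple = ("GCN", "GAT", "GraphSAGE", "GIN")
    n_layers: int = 2
    def customize(self, z_i):
        # stand-in rule: pick one operation per layer from Z_I (deterministic toy mapping)
        idx = [int(abs(z_i[l]) * 10) % len(self.ops) for l in range(self.n_layers)]
        return [self.ops[i] for i in idx]

def gfa_inference(graph_feats):
    """Inference sketch: graph -> Z_I -> customized architecture -> GFM prediction."""
    z_i = DisentangledEncoder().invariant_pattern(graph_feats)
    arch = InvariantPredictor().customize(z_i)
    # the customized architecture would then serve as the GNN component of the GFM
    return arch

feats = np.ones((5, 8)) * 0.3   # toy node features: 5 nodes, 8 dimensions
architecture = gfa_inference(feats)
```

The point the sketch captures is that only the invariant pattern $Z_I$ (never the variant pattern $Z_V$) drives architecture selection at inference time.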
Summary: The authors introduce GFA, a framework for graph neural network architecture customization in graph foundation models. The paper addresses the architecture inconsistency problem, which arises when different graph domains and tasks require varying GNN architectures. To tackle this, the authors propose a disentangled contrastive graph encoder, an invariant-guided architecture customization strategy, and a curriculum-based optimization mechanism to improve architecture search for diverse datasets. They conduct extensive experiments across real-world datasets, showing that GFA outperforms state-of-the-art baselines. Furthermore, the paper provides theoretical analysis demonstrating the limitations of existing graph neural architecture search methods in handling architecture inconsistency. In summary, the paper presents a novel approach to automated GNN-based GFMs with architecture search. ## update after rebuttal I will keep my positive opinion towards the paper after rebuttal. Claims And Evidence: The main claim of the paper is that customizing GNN architectures according to different datasets improves GFM performance. This claim is supported by empirical evidence, including evaluations on eight diverse datasets. The paper effectively shows that fixed architectures used in prior GFMs lead to suboptimal performance, while GFA dynamically adapts architectures, achieving state-of-the-art results. A key contribution is the theoretical analysis that demonstrates why standard differentiable GNAS methods struggle under architecture inconsistency. This analysis, backed by proofs in Appendix, provides strong theoretical motivation for the proposed approach. However, while the theoretical insights are technically solid, they could be presented more intuitively to help reader understanding. Methods And Evaluation Criteria: The methodology is convincing. 
The three core modules, disentangled contrastive graph encoder, invariant-guided customization, and curriculum-based optimization, are integrated to address architecture inconsistency issue. The experimental design is comprehensive, covering datasets across node, edge, and graph classification tasks. The authors compare against multiple baselines, including vanilla GNNs, self-supervised learning methods, existing GFMs, and various GNAS techniques. Theoretical Claims: The theoretical aspect of this paper addresses why standard differentiable GNAS methods struggle to manage architectural inconsistencies across heterogeneous graph datasets. The authors provide a series of theorems (with proofs in Appendix) showing how prior methods might overlook essential dataset-specific structures when optimizing a universal, shared architecture parameter space. The derivations appear mathematically solid. Nonetheless, additional intuitive explanations could make the theoretical arguments more readable for a broader audience. Experimental Designs Or Analyses: The experimental results are strong, showing improvements over baselines. However, the few-shot experiments in table 2 seem to be confusing (I will explain it in weaknesses). Also, the paper does not discuss how hyperparameter sensitivity affects performance thoroughly. Supplementary Material: I read the appendix. No supplementary material. Relation To Broader Scientific Literature: The work aligns with ongoing research in graph LLMs and graph foundation models. Essential References Not Discussed: No more works should be referenced. Other Strengths And Weaknesses: I think the paper did a good job at these points: - It proposes a novel, end-to-end approach for GNN architecture customization in GFMs. - It shows theoretical analysis exposing limitations of existing GNAS methods. - It integrates theoretical, empirical, and algorithmic innovations into a unified framework. 
But I still have some concerns on the paper: - One concern relates to the “N-way K-shot” few-shot learning experiments. The authors explore different way settings (Cora-7 way, WN18RR-10 way, CHEMHIV-2 way), but it is unclear why these specific values of N were chosen for each dataset. A clearer justification would be beneficial. - Another concern is on the hyperparameter sensitivity. The paper does not discuss how hyperparameter sensitivity affects performance thoroughly. - And there are limited descriptions of the baseline methods. Other Comments Or Suggestions: - The experiment should consider additional way settings beyond the current specific configurations. - The authors should provide more detailed descriptions of the baseline methods. - The framework diagram (Figure 2) should be revised to clarify the technical details of the proposed GFA. Questions For Authors: How about the performance in additional way settings as in table 2? Does the method still perform best in these new few-shot settings? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the insightful comments provided by the reviewer. We have carefully considered each point raised and would like to respond as follows.

> 1. "The experiment should consider additional way settings beyond the current specific configurations."

Thank you for raising this important point. We have conducted additional experiments to validate our model’s performance under more N-way settings on the Cora and WN18RR datasets. Due to character limits, we only include a subset of the most competitive baselines below. The complete results will be provided in our second-round response and included in the revised manuscript.

| Cora | | 5-way | | | 2-way | |
|-|-|-|-|-|-|-|
| | 5-shot | 3-shot | 1-shot | 5-shot | 3-shot | 1-shot |
| GAT | 52.30±6.05 | 51.73±7.32 | 50.17±7.41 | 75.92±3.89 | 75.17±5.36 | 72.83±5.48 |
| GIN | 49.83±7.79 | 49.17±8.10 | 48.97±6.73 | 75.25±8.60 | **76.83±8.36** | 71.50±7.44 |
| GRACES | 50.17±7.74 | 49.30±6.12 | 49.40±6.20 | 74.81±5.82 | 74.42±5.47 | 72.58±4.90 |
| Ours | **53.93±6.95** | **52.50±6.84** | **50.87±5.55** | **76.43±5.45** | 76.55±4.48 | **73.92±6.64** |

| WN18RR | | 5-way | | | 3-way | |
|-|-|-|-|-|-|-|
| | 5-shot | 3-shot | 1-shot | 5-shot | 3-shot | 1-shot |
| GAT | 46.23±4.44 | 46.33±4.50 | 46.30±4.43 | 59.56±3.85 | 59.39±3.45 | 58.06±4.34 |
| GIN | 47.57±5.56 | 47.80±5.29 | 47.60±3.81 | 61.33±5.98 | 61.83±6.35 | 58.22±4.93 |
| GRACES | 48.37±3.76 | 47.67±4.04 | 47.00±3.15 | 61.50±3.31 | 60.00±4.01 | 59.17±6.35 |
| Ours | **49.93±3.63** | **49.10±3.31** | **48.47±4.38** | **63.11±5.80** | **61.94±2.61** | **59.72±4.26** |

Our method mostly outperforms the baselines across various N-way K-shot settings, further verifying the effectiveness of the customized architectures.

> 2. "Additional intuitive explanations could make the theoretical arguments more readable for a broader audience."

Thank you for your suggestion. We provide some additional intuitive explanations below.

**Assumption 3.1** assumes that the optimal architectures required by two different datasets may differ.
As shown in Figure 1 (the performance of each architecture on different datasets), GCN achieves optimal performance on the PubMed dataset, while GraphSAGE performs best on the Wikics dataset. **Assumption 3.1** serves as a prerequisite condition for **Proposition 3.2**.

**Proposition 3.2** demonstrates that when two datasets require different optimal architectures, current mainstream GNAS methods encounter optimization conflicts for GFMs. As previously illustrated, the optimal architectures for PubMed and Wikics differ. Consequently, when existing GNAS methods search simultaneously for an architecture optimal for both datasets, they fail to identify a single architecture that performs best on both and are forced to compromise.

**Assumption 3.3** defines what constitutes an invariant pattern for architecture prediction.
- **Condition 1** indicates that the data contains two types of patterns: an invariant pattern $Z_I$, which reliably predicts the architecture, and a variant pattern $Z_V$, which cannot stably predict the architecture.
- **Condition 2** highlights that the variant pattern $Z_V$ is not independent of the architecture $A$.
- **Condition 3** states that, given the invariant pattern $Z_I$, the architecture $A$ is independent of the variant pattern $Z_V$, and $Z_I$ is sufficient for predicting $A$.

**Proposition 4.1** aims to demonstrate that our method satisfies Condition 3 of an invariant pattern for architecture prediction by enforcing $P(A \\mid Z_{I}, Z_{V}) = P(A \\mid Z_{I})$.

> 3. "The paper does not discuss how hyperparameter sensitivity affects performance thoroughly."

Thank you for bringing this to our attention. Due to character limitations, we will provide a more detailed discussion of hyperparameter sensitivity in our revised manuscript. Alternatively, we kindly refer the reviewer to our response to **Reviewer YeCs’s Question 3**, where we address a similar question. We apologize for any inconvenience this may cause.

> 4.
"The authors should provide more detailed descriptions of the baseline methods." We appreciate your suggestion. We will include a new section in the appendix providing detailed descriptions of all the baselines. > 5. "The framework diagram (Figure 2) should be revised to clarify the technical details of the proposed GFA." Thank you for highlighting this point. We have refined our framework by incorporating additional technical details to better align with the content and equations presented in the Method section, specifically in the following three aspects. (1) Enhancing the clarity of the pipeline by adding more numerical indexing and clear directional arrows to improve readability. (2) Including additional equations directly in the figures, enabling readers to easily identify corresponding equations from the text. (3) Reflecting the unique advantages of our method, such as the customization of different architectures tailored to specific datasets, and the progressive architecture search process driven by curriculum learning. --- Rebuttal Comment 1.1: Comment: I have gone through all the reviews and rebuttals. I believe the paper is of high quality and have raised my score to 4.
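As background for the "N-way K-shot" settings in response 1 of this thread: each evaluation episode samples N classes and K labeled support examples per class (plus query examples for evaluation). A minimal sketch with hypothetical data, not the authors' evaluation code:

```python
import random

def sample_episode(labels_to_items, n_way, k_shot, q_query=1, seed=0):
    """Sample one N-way K-shot episode: choose N classes, then K support
    and q_query query items per class (illustrative only)."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(labels_to_items), n_way)
    support, query = {}, {}
    for c in classes:
        items = rng.sample(labels_to_items[c], k_shot + q_query)
        support[c], query[c] = items[:k_shot], items[k_shot:]
    return support, query

# hypothetical dataset: 7 classes (e.g., Cora's 7-way setting), 10 items each
data = {c: list(range(c * 10, c * 10 + 10)) for c in range(7)}
support, query = sample_episode(data, n_way=5, k_shot=3)
```

The reported settings (Cora 7-way/5-way/2-way, WN18RR 10-way/5-way/3-way, CHEMHIV 2-way) vary `n_way` while keeping this episode structure fixed.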
Summary: This paper explores automated graph neural architecture search (GNAS) for Graph Foundation Models (GFMs) to overcome the limitations of fixed, hand-designed GNN architectures, which result in suboptimal performance across diverse graph domains and tasks. The authors identify the architecture inconsistency problem, where the optimal GNN architectures vary across different domains and tasks. To tackle this, they propose an Automated Graph Foundation Model with Adaptive Graph Neural Architecture Customization (GFA), which incorporates: a disentangled contrastive graph encoder to learn both invariant and variant patterns from graph data, an invariant-guided architecture customization strategy to adapt GNN architectures to different domains and tasks, and a curriculum architecture customization mechanism to mitigate the dominance of particular data during the search process. Additionally, the paper provides theoretical insights into the limitations of existing GNAS methods under the architecture inconsistency problem. Extensive experiments demonstrate that GFA outperforms baseline models, achieving state-of-the-art performance. This work is the first to address the problem of GNAS for GFMs. Claims And Evidence: The main point that fixed GNN architectures lead to suboptimal performance in diverse settings is justified through both theoretical analysis and empirical validation. Methods And Evaluation Criteria: The proposed method is evaluated on multiple datasets and makes sense. Theoretical Claims: The theoretical analysis is rigorous. The authors present a formulated argument demonstrating the optimization conflicts caused by architecture inconsistency in existing methods. The proof of Proposition 3.2 convincingly shows that a one-size-fits-all architecture search approach is insufficient for diverse graph tasks. Proposition 4.1 provides a justification for the proposed invariant-guided architecture customization strategy. 
But I would say that the theoretical analysis is based on some assumptions. It should be discussed whether these assumptions can hold in the real world.

Experimental Designs Or Analyses: The experimental setup is comprehensive, covering multiple datasets and a set of baseline models, including manually designed GNNs, existing GNAS methods, and state-of-the-art Graph Foundation Models.

Supplementary Material: I reviewed all sections of the supplementary material.

Relation To Broader Scientific Literature: The work fits within the GNAS and GNN literature.

Essential References Not Discussed: The discussion of related work is comprehensive. More recent advances in GFMs [1] could be relevant references. [1] Graph Foundation Models: Concepts, Opportunities and Challenges. ArXiv:2310.11829.

Other Strengths And Weaknesses: Strengths:
- The paper presents an important and novel problem—GNAS for Graph Foundation Models (GFMs)—which has not been previously explored. The identified architecture inconsistency problem is a significant contribution to the field.
- The proposed GFA framework systematically addresses key challenges in GNAS for GFMs. The design of disentangled contrastive learning, invariant-guided customization, and curriculum-based customization is innovative.
- The paper provides a theoretical analysis of the limitations of existing GNAS methods under the architecture inconsistency problem.
- The empirical results demonstrate the effectiveness of GFA, showing state-of-the-art performance across multiple benchmarks.

Weaknesses:
- The authors do not explain whether the assumptions can hold in the real world.
- The writing can be improved. For example, more details in Sec. D.2 should be added for reproducing the results. The descriptions of the baselines are too simple, which makes it difficult for readers who are not very familiar with this topic.
Other Comments Or Suggestions: As shown above, the authors should talk about the validity of the assumptions used in the method. Why do the authors need the Assumption 3.1 and Assumption 3.3? Are these assumptions common in the literature? Minors: There are some typos in the manuscript. The corresponding symbols (periods or commas) after equations are missing in Eq. (4) (5) (14) (15) or wrong in Eq. (16). Initial letters should be capitalized in line 434. Questions For Authors: 1) Could you clarify whether the assumptions in the theories can be valid in the real world? 2) Could you add more details on the baselines? How do you implement the baselines? The current descriptions are too simple. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer for providing us with detailed comments and insightful questions. We have carefully considered the reviewer's feedback and would like to address each point as follows. > 1. "Could you clarify whether the assumptions in the theories can be valid in the real world?" Thank you for raising this important point. **Assumption 3.1** serves as a prerequisite condition for **Proposition 3.2**. Specifically, **Assumption 3.1** assumes that the optimal architectures required by two different datasets may differ. To validate this assumption, we evaluated various GNN architectures based on a GNN-based GFM (GFT [1]) across multiple real-world datasets. Figure 1 presents a heatmap visualization of each architecture’s performance on different datasets, showing that optimal architectures indeed vary according to the dataset. For instance, GCN achieves optimal performance on PubMed, while GraphSAGE performs best on WikiCS. **Assumption 3.3** defines what constitutes an invariant pattern for architecture prediction. The concept of the invariant pattern is well-defined, and methods based on this concept have been validated as effective in various real-world applications, such as academic citation networks [2] and molecular structures [3]. Unlike previous work, which has primarily focused on capturing stable relationships for accurate label prediction, we applied this concept to architecture search, aiming to define invariant patterns that support stable architecture prediction, and designed our method based on this concept. > 2. "Could you add more details on the baselines? How do you implement the baselines? The current descriptions are too simple." Thank you for bringing up this point. For **Vanilla GNNs**, **self-supervised methods**, and **GFMs**, we reproduce the results based on their original papers and publicly available code. 
To ensure a fair comparison between **manually designed GNNs** and **GNAS** baselines, we employ GFT[1] as the base model. Specifically, for **manually designed GNNs**, we replace the GNN in GFT with various manually designed GNNs and follow identical pretraining and finetuning procedures as GFT. For **GNAS** methods, we substitute the GNN component in GFT with different GNAS methods. Architecture search is performed during the pretraining stage, whereas in the finetuning stage, we further optimize only the parameters of the searched architectures without additional architecture searches. Furthermore, we use the same search space for both GNAS baselines and GFA, including operations such as GCN, GAT, GraphSAGE, GIN, and GraphConv within a super-network depth of 2 layers. We set the dimensionality of all methods to 768. > 3. "More recent advances in GFMs [1] could be relevant references. [1] Graph Foundation Models: Concepts, Opportunities and Challenges. ArXiv:2310.11829". Thank you for your suggestion. We will add this citation to the related work section of the revised manuscript. > 4. "The writing can be improved. For example, more details in Sec. D.2 should be added for reproducing the results. The descriptions on the baselines are too simple, which brings difficulty to the readers that do not very familiar with this topic." Thank you for your suggestion. We have expanded the description in Sec.D.2 as follows. We evaluate different GNN architectures and GNAS methods based on GFT[1], following the default hyperparameters of GFT to maintain consistency. To ensure a fair comparison, we set the dimensionality of all methods to 768, use the same search space and operations (GCN, GIN, GAT, GraphSAGE, GraphConv), and fix the number of layers to 2. For our method, we explore hyperparameter $\lambda,\beta \in \\{1e-1,1e-2,1e-3,1e-4\\}$ and empirically set $\lambda$ to $1e-3$ and $\beta$ to $1e-1$. 
The learning rate of the disentangled contrastive graph encoder is set to $5e-3$, and the learning rate of the architecture predictor is set to $3e-2$. The dimensionality of both the graph encoder and the supernet is 768. Each experiment is conducted 10 times, and we report the average performance along with standard deviations. We will also include a new section in the appendix with detailed descriptions of all the baselines. > 5. "Minors: There are some typos in the manuscript. The corresponding symbols (periods or commas) after equations are missing in Eq. (4) (5) (14) (15) or wrong in Eq. (16). Initial letters should be capitalized in line 434." We sincerely appreciate your thoughtful observation. We have corrected these typos and will carefully revise the manuscript to address any remaining errors. [1] GFT: Graph Foundation Model with Transferable Tree Vocabulary [2] Learning Invariant Representations of Graph Neural Networks via Cluster Generalization [3] Learning Invariant Molecular Representation in Latent Discrete Space --- Rebuttal Comment 1.1: Comment: Thanks for the response. My concerns have been addressed. This paper sounds good both in theory and in practice. I'd like to raise the score.
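To make the setup described in this rebuttal easier to reproduce, the shared search space and reported hyperparameters can be collected in one place. The structure below is purely illustrative (hypothetical names, not the authors' code); only the values come from the rebuttal.

```python
# Illustrative configuration (hypothetical names, not the authors' code)
# collecting the settings reported in the rebuttal above.
search_space = {
    "operations": ["GCN", "GAT", "GraphSAGE", "GIN", "GraphConv"],
    "supernet_layers": 2,      # super-network depth
    "hidden_dim": 768,         # dimensionality used for all methods
}

hyperparameters = {
    "lambda": 1e-3,            # chosen from {1e-1, 1e-2, 1e-3, 1e-4}
    "beta": 1e-1,              # chosen from {1e-1, 1e-2, 1e-3, 1e-4}
    "lr_encoder": 5e-3,        # disentangled contrastive graph encoder
    "lr_arch_predictor": 3e-2, # architecture predictor
    "num_runs": 10,            # mean and standard deviation over 10 runs
}
```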
Towards scientific discovery with dictionary learning: Extracting biological concepts from microscopy foundation models
Accept (poster)
Summary: This paper proposes a new algorithm for dictionary learning, namely Iterative Codebook Feature Learning (ICFL), which can be optionally augmented with the PCA whitening technique. This technique is then applied to interpret features learned in masked autoencoder models trained on microscopy images of cells. The authors present case studies on the extracted features. Claims And Evidence: The main claims are: - The proposed approach successfully retrieves biologically meaningful concepts, such as cell type and genetic perturbations. - The proposed approach can help better understand morphological changes in cells induced by genetic perturbations. The first claim is supported via experimental results. But some comparisons are not convincing. For example, in Figure 2B, the results of ICFL are so sparse that I do not really know how to compare them against the baseline (CP) performances. The second claim is based on a specific case study, which presents some evidence but does not adequately convince the readers on the generalizability of the approach. Methods And Evaluation Criteria: The evaluation criteria seem sound to me. Theoretical Claims: No theorem in this paper. Experimental Designs Or Analyses: The experiment **lacks comparisons** between the proposed methods and other existing approaches to interpretability. It is not clear how much practitioners can gain from using ICFL compared with other methods. Supplementary Material: No. Relation To Broader Scientific Literature: From the interpretability perspective, this paper introduces a new algorithm based on dictionary learning. Essential References Not Discussed: I'm not aware of any missing related works. Other Strengths And Weaknesses: ## Strengths - Making scientific discovery with machine learning is an intriguing research topic. ## Weaknesses - The presentation of the paper, the algorithm part in particular, looks sloppy to me. - Abuse of notations. 
Section 4 uses $W$ and $W_{\text{dec}}$ interchangeably without any additional explanations. I assume that $W_{\text{dec}}$ follows the notations in SAE, but this is inappropriate since there is no $W_{\text{enc}}$. In Equation (4), the use of $W$ and $W_{\text{dec}}$ is not even consistent in a single equation. - Missing details. The "computing the features $z$ using batched-OMP" paragraph assumes $W_{\text{dec}}$ is given. However, nowhere in the main paper is the initialization of $W_{\text{dec}}$ specified. - Inconsistent formats. The OMP algorithm is described using bullet points. The batched-OMP is described in (very informal) text and pseudocode. Making those descriptions consistent would help clarify the modifications in the new method. - The proposed Iterative Codebook Feature Learning (ICFL) seems a simple extension of existing methods to me. The PCA whitening is also a known technique. Thus, **the technical contribution of the paper is limited**. While I value simplicity in the design of ML algorithms, the authors need to present very solid comparisons against existing methods, show the generality on more foundation model families, and discuss insights on why the simple method is favorable. Other Comments Or Suggestions: Other minor presentation issues: - Please check the use of `\citep` and `\citet`. There are many mistakes in the paper, e.g., lines 133 (right), 134 (right), 151 (right), 156 (right), 249 (left), etc. - Line 125 (right): $\\{x\\}_{i=1}^N$ $\to$ $\\{x\_i\\}^N\_{i=1}$. - Line 184 (left): MP $\to$ OMP. - Line 205 (right): correspond $\to$ correspond to. Questions For Authors: - In line 163 (right), are $W_i$ and $x^{(t)}$ both column vectors? How is $|W_ix^{(t)}|$ defined? - Are the MAE models open-sourced? What are the training configurations of the models? Those details should have been documented in the paper. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive review. We would like to respond below to a few comments made by the reviewer. > The experiment lacks comparisons between the proposed methods and other existing approaches to interpretability. It is not clear how much practitioners can gain from using ICFL compared with other methods. While a more extensive benchmark evaluation would indeed be valuable, we would like to stress that the emphasis of this paper lies in the use of dictionary learning / SAEs in the context of biological representation learning. In particular, we aim to explore how ideas developed in mechanistic interpretability for LLMs could potentially be used in the future for scientific discovery. A benchmark comparison therefore goes beyond the scope of this paper. > show the generality on more foundation model families In this paper, we prioritize an interdisciplinary approach and an in-depth analysis of features extracted from one specific model rather than multiple models. This is a common approach in the existing literature (see, e.g., [1,2]) for two reasons. - First, assessing the quality of extracted features is a highly domain-specific task, hence our usage of a SOTA model for cell microscopy data, rather than a generic vision foundation model. An attempt to apply the method to more foundation models would have necessitated a less detailed study of the features that are learned. - Second, we note that the overwhelming majority of prior work [e.g. 1,2] on dictionary learning techniques has also only been applied to a single modality (typically text) with a single model. This approach is common for the reason outlined above (in any domain, careful analysis of the features requires extensive domain expertise), and access to large-scale foundation models, the corresponding datasets, and the necessary infrastructure to run these experiments is often limited. 
> The presentation of the paper, the algorithm part in particular, looks sloppy to me. We thank the reviewer for making us aware of the inconsistencies in the notation. We switched between W and Wdec to better highlight the similarity between ICFL and TopK SAEs, however, there are indeed some minor inconsistencies due to smaller edits before submitting the paper, which we will resolve. Moreover, Wdec is initialized using the standard pytorch initialization for a linear transform (nn.Linear). We will mention this in the paper. > In line 163 (right), are It should indeed be W_i^T, thank you for noticing this typo. > Are the MAE models open-sourced? What are the training configurations of the models? Those details should have been documented in the paper. Unfortunately, we are not able to open source the MAEs at the time of writing this response, but they are based on the standard implementation of the MAEs described in [3]. Moreover, the training details for the MAEs are as described in [4]. [1] Gao, Leo, et al. "Scaling and evaluating sparse autoencoders." arXiv preprint arXiv:2406.04093 (2024). [2] Bricken, Trenton, et al. "Towards monosemanticity: Decomposing language models with dictionary learning." Transformer Circuits Thread 2 (2023). [3] He, Kaiming, et al. "Masked autoencoders are scalable vision learners." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. [4] Kraus, Oren, et al. "Masked autoencoders for microscopy are scalable learners of cellular biology." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal and I appreciate the opportunity to review this interdisciplinary work. While I appreciate the authors' efforts, I find that the machine learning methodology lacks sufficient depth and novelty from a technical perspective. 
As my expertise lies primarily in machine learning rather than computational biology, I defer to other reviewers and AC with domain expertise for assessment of the biological findings. From my perspective as a machine learning researcher, the paper does not provide substantial new methodological insights that would advance the field. Therefore, I maintain my original rating. --- Reply to Comment 1.1.1: Comment: We are unlikely to reach agreement about depth or novelty - *we view the simplicity of the method as a key strength as simple methods tend to have higher impact; for example, the same “lacks sufficient [technical] depth and novelty” comment could be applied to Bricken et al. [2023] who “just” used a sparse autoencoder (a well-known technique), but have had a significant impact*. Instead, we ask the reviewer to reconsider their position in light of the official ICML reviewer guidelines for 2025 which explicitly encourage taking a broad view on originality in order to avoid inherently subjective debates about perceived novelty. In particular, the guidelines state, > We **encourage you to be open-minded** in terms of potential strengths. For example, **originality may arise from creative combinations of existing ideas**, removing restrictive assumptions from prior theoretical results, or **application to a real-world use case** (particularly for application-driven ML papers, indicated in the flag above and described in the Reviewer Instructions). [emphasis added] We present a method that: - combines **matching pursuit** with a **learned dictionary** that is optimized by gradient descent and a **PCA preprocessing step**. Each of these three ideas is known, but we provide extensive experiments to demonstrate that it is their combination that leads to strong performance in a setting not previously studied (dictionary learning for image-only models that receive no text supervision); and - we provide an in-depth evaluation on a real-world use case. 
Both of these points are supported with detailed experiments and ablations that show both the effectiveness of dictionary learning in general on the real-world use case, and the effectiveness of the novel combination of existing ideas in improving over the standard technique used in this area (Top-K SAEs).
Summary: This paper explores the application of dictionary learning (DL) to extract biologically meaningful concepts from large-scale masked autoencoders (MAEs) trained on microscopy images. The authors introduce Iterative Codebook Feature Learning (ICFL), a dictionary learning algorithm adapted from the Matching Pursuit (MP) algorithm, combined with PCA whitening on a control dataset. The approach aims to uncover sparse, interpretable features from cell imaging data that correspond to biological concepts such as cell types and genetic perturbations. The authors validate their method through classification tasks, interpretability analyses, and comparisons with Top-K sparse autoencoders (TopK SAEs) and handcrafted features from CellProfiler (CP). They show that ICFL improves selectivity, reduces “dead features,” and retains biologically relevant signals. The results suggest that dictionary learning can be effectively applied to bioimaging data, potentially enabling new insights in drug discovery and genetic perturbation analysis. Claims And Evidence: 1. Lack of ablation experiments for sparse representations -- The paper emphasizes that sparse dictionary learning is essential for interpretability, but does not conduct a sparse vs. non-sparse comparison experiment. The lack of such an ablation experiment affects the credibility of the conclusion. 2. Lack of hyper-parameter sensitivity analysis -- ICFL relies on several hyper-parameters (e.g., sparsity, number of iterations, PCA whitening strength), but the paper fails to conduct a sensitivity analysis on these parameters, which affects the assessment of model robustness. 3. Single way of assessing feature interpretation -- The paper mainly uses linear probing to assess whether the features are biologically relevant, but does not use feature clustering analysis (i.e., whether biologically meaningful clusters form spontaneously). These additional assessments would improve the persuasiveness of the paper. 
Methods And Evaluation Criteria: The paper uses PCA whitening + sparse dictionary learning to extract features, but does not compare it with broader explanatory methods (e.g., topological data analysis, decomposable representation learning), which limits a full assessment of its methodological uniqueness. Theoretical Claims: ICFL relies on sparse representations to improve interpretability, but the paper does not provide ablation experiments demonstrating the necessity of sparsity for biological feature extraction. Experimental Designs Or Analyses: The experiments in this paper are mainly conducted on microscopy data at the scale of roughly 100,000 images, and the lack of a computational complexity analysis for million-image-scale bio-image data leaves the method's applicability to large-scale datasets unclear. Supplementary Material: None Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: It is recommended that an ICFL computational complexity analysis be supplemented with running-time tests on large-scale data to enhance the utility of the method. Questions For Authors: 1. What is the computational complexity of ICFL on millions of microscopy images? Has its efficiency been tested on large-scale data? 2. Can the extracted dictionary features be matched with known biomarkers? 3. Does PCA whitening result in loss of certain biologically relevant information? Are there ablation experiments? 4. Have alternative methods of feature selection been considered (e.g., feature extraction based on topological data analysis)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and for raising helpful questions. We would like to respond below to a few comments made by the reviewer. > Single way of assessing feature interpretation While linear probing is one important way to evaluate the quality of the features, we disagree with the reviewer’s comment that the paper does not use feature clustering analysis. In Figure 4 we show examples of how well clusters are separated along selected feature directions. Moreover, the selectivity score (as displayed in Figure 2) essentially measures how well features can separate individual genetic perturbations from the average of all perturbations. May we ask the reviewer whether this answers the reviewer’s concern? > Lack of hyper-parameter sensitivity analysis… sparsity, number of iterations, PCA whitening strength We agree that careful ablation studies are an important part of a paper. While the computational cost of these experiments prevents covering the full space of hyperparameters (the dictionary learning methods are trained over 40M tokens), we did include the hyperparameters that you suggested (and many more) in the paper: - We ablate over the sparsity (Figures 3 and 8), model size of the MAE (Figure 17), type of representation (Figures 17 and 18) and learning rate (Figure 8). - We did not find the method to be sensitive to the number of iterations as we trained to approximate convergence. - There is no notion of “whitening strength” - the representations are whitened such that they have unit covariance and zero mean on the control cells. > Lack of ablation experiments for sparse representations > ICFL relies on sparse representations to improve interpretability, but does not provide ablation experiments to demonstrate the necessity of sparsity for biological feature extraction. The reviewer raises an interesting point. It is per se not clear why one should extract sparse representations instead of dense representations. 
First, why not dense representations? We recall that the dimension of z (i.e., 8192) is much higher than the dimension of x (i.e., 1664). Without the sparsity constraint, there are infinitely many ways to implement the identity function (that is, Wz = x), and thus the problem is inherently ill-posed. As a result, there is also no reason to believe that the learned features should capture biologically meaningful signals. Second, why sparse features? The motivation stems from the superposition hypothesis, which is described in Section 3. While this hypothesis clearly does not hold universally, the question remains - to what extent does it hold? With this paper, we provide at least some evidence that a weaker form of the superposition hypothesis may indeed hold. > but does not compare it with broader explanatory methods (e.g., topological data analysis, decomposable representation learning) While we agree that broader comparisons are generally useful, comparing across different classes of interpretability methods is very challenging because each class of methods has a different notion of interpretability. As a result, there are no clean quantitative comparisons across interpretability methods that can be made. That said, we believe that a well-designed user study that evaluated these methods for scientific discovery would be extremely valuable, but this goes well beyond the scope of this paper. > The experiments in this paper are mainly conducted on 100,000-level microscope data, and the lack of computational complexity analysis of million-level bio-image data affects the applicability of this method on large-scale datasets. We believe there is a misunderstanding. We use a subset of the data for evaluation when computing the linear probes. However, to train the dictionaries W, we use 40M tokens (that is, 40M crops). Moreover, we train for 300k steps with a batch size of 8192 tokens. > 1. 
What is the computational complexity of ICFL on millions of microscopy images? Has it been tested for efficiency on large-scale data? We did not conduct an extensive run-time analysis given that the cost of compute is negligible compared to the cost of training and fine-tuning the MAEs. > 2. Can the extracted dictionary features be matched with known biomarkers? The selectivity score analysis suggests that the extracted features align well with (some) genetic perturbations. May we ask the reviewer to be more precise about what sort of experiment they have in mind? > 3. Does PCA whitening result in loss of certain biologically relevant information? Are there ablation experiments? The contrary is true: PCA whitening helps to increase the biological signal in the extracted features. We would like to refer the reviewer to Figure 3. > 4. Have alternative methods of feature selection been considered... Not at this time, as such methods go beyond the scope of this paper, but we do regard such an analysis as interesting and important future work.
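The ill-posedness argument from this rebuttal (dim z = 8192 greatly exceeds dim x = 1664, so Wz = x has infinitely many dense solutions) can be illustrated with a toy sketch. This is our own minimal OMP, not the paper's batched implementation, and the dimensions are small stand-ins.

```python
# Toy sketch (ours, not the paper's batched-OMP) of why sparsity makes the
# overcomplete reconstruction problem W z = x well-posed: the least-norm
# solution also reconstructs x exactly but is dense and uninformative.
import numpy as np

rng = np.random.default_rng(0)
d, k, s = 64, 256, 3                      # stand-ins for dim(x)=1664, dim(z)=8192

W = rng.standard_normal((d, k))
W /= np.linalg.norm(W, axis=0)            # unit-norm dictionary atoms

z_true = np.zeros(k)
z_true[[5, 80, 200]] = [3.0, -2.0, 1.5]   # s-sparse ground truth
x = W @ z_true

# Dense least-norm solution: reconstructs x exactly, but reconstruction
# alone cannot single out meaningful features.
z_dense = np.linalg.pinv(W) @ x

def omp(W, x, s):
    """Greedy orthogonal matching pursuit: pick the atom most correlated
    with the residual, then re-fit least squares on the selected support."""
    residual, support = x.copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(W.T @ residual))))
        coef, *_ = np.linalg.lstsq(W[:, support], x, rcond=None)
        residual = x - W[:, support] @ coef
    z = np.zeros(W.shape[1])
    z[support] = coef
    return z

z_sparse = omp(W, x, s)                   # at most s nonzero coefficients
```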
Summary: This paper adapts techniques from the mechanistic interpretability literature to address the problem of discovering what concepts are learned by foundation models trained on microscopy data. Specifically, they develop a new dictionary learning method, which they call ICFL, based on orthogonal matching pursuit, to extract factors from a masked autoencoder trained on the Cell Painting dataset. A series of auxiliary probe tasks are used to evaluate the quality of the learned atoms. The features are compared with a manual feature extraction pipeline called CellProfiler. Finally, the usefulness of the learned concepts is illustrated through an extended case study of a genetic perturbation experiment. Claims And Evidence: I found the claims to be direct and well-supported. For example, it is easy to understand what dead features in SAEs are, and the results of Table 1 seem to resolve any doubts that their approach suffers from this problem less. Similarly, Figure 4 convinced me that the representations were of practical value in perturbation experiments. I was impressed by the depth of the case study. The identification of factors related to adherens junctions struck me as the sort of result that would be of genuine interest to computational biologists. Methods And Evaluation Criteria: The idea of applying SAEs to microscopy images from perturbation experiments is well-justified, and the proposed probes are appropriate for evaluation. The method of subtracting variation present in control samples is well-motivated by the experimental design. The only step in the method that I found unusual was how the feature directions are learned. Unless I am misunderstanding something, the objective (4) has a closed-form solution for W and b. Using gradient descent with random resets struck me as unnecessarily complicated in an otherwise elegant approach. Theoretical Claims: There are no theoretical claims. 
Experimental Designs Or Analyses: The ablation experiments in the appendix are helpful for understanding the general properties of ICFL. Despite the comparison with Top-K SAE, some readers might find the lack of more detailed quantitative comparisons (on other datasets or against other baselines) potentially concerning. However, I found that the novelty of the application and the qualitative findings from the case study outweighed any concern about precise quantitative benchmarking. Supplementary Material: I skimmed appendix C. Relation To Broader Scientific Literature: This work brings together ideas from two cutting-edge areas -- genetic perturbation experiments and mechanistic interpretability. Better methodology for analyzing perturbation experiments is an area of active research. Researchers have worked with both manual feature extraction and deep learning models for the associated microscopy images, but combining the interpretability/control of manual feature methods with the richness of self-supervised learning strikes me as potentially very impactful. In the interpretability literature, the ICFL algorithm is closely related to Top-K SAE and OMP, and the authors appropriately represent their contribution. From an interpretability perspective, much of the novelty is in the application domain. Many interpretability techniques have been understood on LLM embeddings, but few (none?) have successfully demonstrated utility on more advanced scientific imaging data. Essential References Not Discussed: These are not necessarily essential references, but the idea of identifying the major factors of variation in a control pool struck me as quite close to the ideas in: https://doi.org/10.1038/s41587-024-02463-1 https://doi.org/10.1093/biostatistics/kxv026 Other Strengths And Weaknesses: The paper is written in a direct and precise style. Other Comments Or Suggestions: I have no additional comments. Questions For Authors: I have no additional questions. 
Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and appreciate their positive feedback on the paper. We were especially pleased to read that the reviewer appreciated our extensive case study. We would like to respond below to a few comments made by the reviewer. > The only step in the method that I found unusual was how the feature directions are learned. Unless I am misunderstanding something, the objective (4) has a closed form solution for W and b. We thank the reviewer for raising this point and would like to clarify: Given z, there is indeed a closed-form solution for W and b. However, there are two key reasons why W should not simply be updated to minimize Equation (4): First, given that we use more than 40M training tokens (samples), it would be infeasible to optimize (4) directly over all samples. Instead, we rely on mini-batches, making local updates such as gradient descent a natural choice. Second, the objective in Equation (4) is convex in W and b only if z is fixed. However, z depends on W since it is computed as the solution to Algorithm 1. This raises the question: what is a practical way to update W? Directly solving Equation (4) may significantly alter W without accounting for its dependency on z. This issue is mitigated when we take only a gradient step with respect to the objective in (4) before updating z again. We refer the reader to [1] for a similar algorithm, where the decoder (or dictionary matrix) is also learned via gradient descent. > Using gradient descent with random resets struck me as unnecessarily complicated in an otherwise elegant approach. One could think of the random resets as playing a role analogous to the projection step in projected gradient descent, in order to enforce the constraint that cosim(w_i, w_j) < 1-\delta. 
However, unlike projected gradient descent, we are not deterministically projecting onto the boundary of the constraint set; instead, the random reset ensures that we are at some (random) point in the interior of the constraint set (with high probability). This trades off the compute costs and algorithmic complexity of a true projection step for a cheap alternative that we find works well in practice. We will add this discussion to the updated paper. > These are not necessarily essential references, but the idea of identifying the major factors of variation in a control pool struck me as quite close to the ideas in: [reference to a paper by Zhang et al., 2024, Nat Biotech]. Thank you for highlighting this interesting work; we agree that discussing this line of research would be a valuable addition to the paper. We will extend our related work section accordingly. [1] Arora, Sanjeev, et al. "Simple, efficient, and neural algorithms for sparse coding." Conference on learning theory. PMLR, 2015. --- Rebuttal Comment 1.1: Comment: The reference to Arora et al. (2015) and their "neurally plausible algorithm" resolves my concerns about the use of gradient updates in Equation (4). The random reset proposal still seems heuristic -- the projected gradient idea is suggestive but could perhaps benefit from formal study. Nonetheless, I remain enthusiastic in my support of the paper. I feel that the analysis deserves high visibility, because it exemplifies careful thinking about interpretability in a challenging application domain.
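Combining the two points made in this rebuttal (per-mini-batch sparse coding with W frozen, a single gradient step on the Equation (4) objective, and random resets in place of a true projection), the training loop might be sketched as follows. This is our reading of the described procedure, not the authors' implementation: `sparse_code` is a simple hard-thresholding stand-in for the paper's batched-OMP step, and all names are illustrative.

```python
import numpy as np

def sparse_code(W, X, s):
    """Hard-thresholding stand-in for the batched-OMP encoder: keep, per
    column of X, the s dictionary correlations with the largest magnitude."""
    Z = W.T @ X
    thresh = -np.sort(-np.abs(Z), axis=0)[s - 1]
    return np.where(np.abs(Z) >= thresh, Z, 0.0)

def random_reset(W, delta, rng):
    """Cheap alternative to a projection step: whenever two atoms violate
    cos(w_i, w_j) < 1 - delta, re-draw one of them at random."""
    G = W.T @ W                              # pairwise cosine similarities
    for _, j in zip(*np.where(np.triu(G, 1) >= 1 - delta)):
        w = rng.standard_normal(W.shape[0])
        W[:, j] = w / np.linalg.norm(w)
    return W

def train_step(W, b, X, s, lr=1e-2, delta=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    Z = sparse_code(W, X - b[:, None], s)    # encode with W frozen
    R = (W @ Z + b[:, None]) - X             # residual of the reconstruction loss
    W = W - lr * (R @ Z.T) / X.shape[1]      # one gradient step, not a full solve
    b = b - lr * R.mean(axis=1)
    W /= np.linalg.norm(W, axis=0)           # keep atoms unit-norm
    return random_reset(W, delta, rng), b
```

Note that near-duplicate atoms receive identical gradients and therefore stay duplicated under pure gradient descent, which is exactly the situation the random reset resolves.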
Rethinking the Temperature for Federated Heterogeneous Distillation
Accept (poster)
Summary: This paper highlights the suboptimal temperature calibration issue in existing federated distillation (FD) methods. To address this, the authors propose ReT-FHD, which introduces multi-level elastic temperature to dynamically adjust distillation intensities across different model layers and category-aware global temperature scaling for class-specific calibration. Additionally, they integrate a Z-Score Guard blockchain verification mechanism to defend against label-flipping and poisoning attacks. Extensive evaluations across multiple benchmarks validate the effectiveness of ReT-FHD, demonstrating its superiority over existing FD approaches. Claims And Evidence: The security-related content in this article suffers from an inadequately articulated rationale and demonstrates weak contextual integration with the remainder of the content, ultimately coming across as a superfluous addition that compromises the paper's structural coherence. Methods And Evaluation Criteria: 1. The proposed multi-level elastic temperature is based on layer-level distillation, which looks incremental but interesting. 2. The category-aware global temperature scaling component is difficult to understand, primarily due to the lack of annotations in the equations and an unclear description of the training strategy. 3. Section 3.3 is quite confusing, as it presents a large amount of information with weak contextual integration. Theoretical Claims: I am not an expert in theoretical analysis. From my point of view, the theoretical claims in the paper are solid. Experimental Designs Or Analyses: 1. The authors show abundant experiments to demonstrate the effectiveness of the proposed methods on heterogeneous data & models. 2. The experimental section lacks a comparative analysis between the proposed method and established attack-defense FL frameworks, which significantly undermines the empirical validation of ReT-FHD's effectiveness. 3.
While the authors present a comparative analysis of communication efficiency, the blockchain overhead inherent to ReT-FHD remains notably absent from their evaluation framework. Supplementary Material: Yes. I checked the Appendix provided by the authors. Relation To Broader Scientific Literature: The core idea of this paper builds on previous findings that layer-level distillation enhances performance. The authors investigate the impact of temperature at each layer and introduce elastic temperature scaling to further refine distillation quality. Essential References Not Discussed: Some key competitors are not discussed in the paper. For a prototype-based method: Rethinking Federated Learning with Domain Shift: A Prototype View. CVPR 2023, pp. 16312-16322. For attack defense: Self-Driven Entropy Aggregation for Byzantine-Robust Heterogeneous Federated Learning. ICML 2024. Other Strengths And Weaknesses: 1. The notations in the equations are not clearly described. 2. The axis titles are missing in Fig. 1. Other Comments Or Suggestions: None. Questions For Authors: 1. The whole structure of the paper is massive. There is no clear connection between elastic temperature scaling and the security components, and the rationale for incorporating security remains unclear. 2. The lack of annotations in the equations makes the paper hard to understand. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank reviewer e9EW for the constructive comments: "…dynamically adjust distillation intensities across different model layers...", "…validate the effectiveness of ReT-FHD...". To thoroughly address your concerns, we will answer the questions one by one:

**Questions:**

**Q1: Connection between elastic temperature scaling and the security components, and the rationale for incorporating security**

>Our motivation is to **explore whether logits can be applied to complex heterogeneous FL scenarios**. To this end, we adopt the Z-Score to normalize the knowledge distillation process and further introduce multi-level distillation, elastic temperature scaling, and a secure validation mechanism for optimizing cross-architecture knowledge distillation and enhancing model robustness, respectively. In addition, the verification mechanisms of model-based blockchain FL approaches are difficult to apply to the logits-sharing scenario. Therefore, we design a Z-Score-based verification mechanism customized for logits-sharing FL, which ensures efficient logits-based communication and effectively defends against malicious attacks.

**Q2: Lack of annotation in equations**

>- **Eq.3** describes the multi-level elastic temperature mechanism, where each stage of the model is assigned an independent temperature. The temperature of each stage is adjusted, through a log function, according to the logit difference with the next stage. The temperature is bounded by $\xi$ to maintain stability and prevent degradation. The scaling factor $\gamma$ controls sensitivity. $\Delta Z_l$ represents the difference between the current- and next-stage logits, guiding the temperature adjustment. $\Delta Z_{\max}$ normalizes the temperature across stages for consistency.
>
>- **Eq.4** describes the category-aware temperature scaling mechanism, which dynamically adjusts each category's temperature based on its cumulative loss relative to the average loss.
The weighting factor $\beta$ controls the sensitivity of temperature updates. $\mathcal{L}_{\mathrm{CE}}(f(\tilde{x}_c))$ represents the cumulative loss for category $c$, and the average loss across all categories serves as a global reference. The maximum absolute difference standardizes the temperature update across categories, ensuring consistency.
>
>- **Eq.5** defines the optimization objective of ReT-FHD. Here, $|\mathcal{D}|$ and $|\mathcal{D_i}|$ denote the total and local data sizes, respectively. $\mathcal{L}_{\mathrm{CE}}$ is the cross-entropy loss. The model applies KL divergence for knowledge distillation at each layer. $\sigma^t\left(\cdot\right)$ and $\sigma^s(\cdot)$ represent the softmax functions of the global and local logits, respectively. $\mathbf{z_t}$ denotes the global logits, $\bar{z_t}$ their mean, and $\operatorname{Var}\left(\mathbf{z}_t\right)$ their variance. $G(\Delta Z)$ and $\tilde{\tau}_c$ represent the temperature coefficients for model stages and categories, calculated using Eq.3 and 4, respectively. Unlike the global logits, the local logits are unaffected by category temperature adjustments.

**Experimental Designs Or Analyses:**

>Our validation mechanism, designed for logits-based scenarios, standardizes malicious-logits detection and is mainly applicable to logits sharing rather than model-parameter-sharing FL frameworks. In the security experiments, we demonstrate its effectiveness compared to other FL methods. On CIFAR-10, each validation (Eq.7) requires **~100-200 FLOPs**, while a ResNet18 forward pass takes **~1.8 GFLOPs**, making the blockchain overhead negligible. Moreover, since the blockchain mechanism only verifies logits without altering the FL communication process, ReT-FHD maintains the same communication efficiency as standard FL, as confirmed in Tab.5.

**Essential References Not Discussed:**

>Thanks for pointing out these references.
Due to time constraints, we add additional experiments on **CIFAR-10** to more fully assess the effectiveness of ReT-FHD in dealing with model heterogeneity and security. While **FPL [1]** mitigates domain shift, it may introduce information redundancy in our setting, whereas **SDEA [2]** is designed for homogeneous models and relies on high-quality public datasets.

|Hetero. & No attack|RethinkFL|ReT-FHD|
|---|---|---|
|accuracy|68.21%|72.76%|

|Attack & Homo.|SDEA|ReT-FHD|
|---|---|---|
|accuracy|50.89%|51.75%|

**Other Strengths And Weaknesses:**

>If you have any doubts about specific equations, please refer to our response to Q2. As for Fig.1, the y-axis represents accuracy, and the x-axis indicates the number of communication rounds.

**References:**

>[1] Rethinking Federated Learning with Domain Shift: A Prototype View
>
>[2] Self-Driven Entropy Aggregation for Byzantine-Robust Heterogeneous Federated Learning

Thank you for dedicating your time and effort to reviewing our work. Please let us know if we have addressed your concerns; if not, we welcome any further questions you might have.

--- Rebuttal Comment 1.1: Comment: Thanks for your detailed answer. I took some more time to think about the points you raised: 1. Based on your response, I understand the rationale for introducing security components. However, I still have some questions that need further clarification. What is the specific role of the Z-Score in the verification? How does it enhance the reliability? 2. Function G is computed in Eq. 3 and applied in Eq. 5. However, I still have doubts about its specific role, particularly how it operates in Eq. 5. The clarifications and additional details in the rebuttal addressed many of my concerns, and I would have been more inclined to raise my rating on the paper if the questions above had been explained.

--- Reply to Comment 1.1.1: Comment: Dear reviewer, thank you for your response! Please find our responses below.
**Q1: What is the specific role of the Z-Score in the verification mechanism? How does it enhance the reliability of verification?**

>Z-Score normalization effectively mitigates the inconsistency in numerical scales caused by data and model heterogeneity, while preserving the original information of client logits. By eliminating absolute numerical differences, it ensures the comparability of logits across clients, providing a fairer and more robust foundation for the validation mechanism.
>- After Z-Score normalization, the standard deviation of the logits becomes $1/\tau$, which normalizes the logits of each client to a zero-mean, Gaussian-like distribution while maintaining their relative relationships. This enhances the accuracy of the validation mechanism in detecting deviations in client logits and improves its sensitivity to malicious clients.
>
>- During the verification process, based on Z-Score normalization, we define the label set overlaps under heterogeneous data scenarios as $|\mathcal{V} \cap \mathcal{H}|=\eta$, $|\mathcal{E} \cap \mathcal{H}|=\phi$, $|\mathcal{V} \cap \mathcal{E}|=\psi$. These overlap metrics are used to weight the corresponding logits differences to further reduce the influence of heterogeneity during verification. Next, all clients' logits are scaled with a unified temperature $\tau$ to ensure consistency in distribution. The verification node then computes three types of differences: the difference between the verified node $i$ and the verifier, $\Delta Z_{\mathcal{E},\mathcal{V}}^i$; between the verified node and the global logits, $\Delta Z_{\mathcal{E},g}^i$; and between the verifier and the global logits, $\Delta Z_{\mathcal{V},g}^i$. Finally, a voting mechanism based on the weighted relationships among these differences is applied to enhance the reliability of the validation process.
**Q2: How does function G work in Equation 5?**

>The function $G$ in Equation (3) computes the temperature coefficients at different stages of the model and is used in Equation (5) as part of the softmax function $\sigma$ to adjust the logits' temperature:
>- For global logits, the stage-wise temperature computed by $G$ is multiplied by the class-specific temperature computed in Equation (4), jointly determining the final temperature coefficient. Accordingly, the general Z-Score formulation in Equation (2) is instantiated for global logits as:
>>
>>$$\mathcal{Z}\left(\boldsymbol{z_t} ; \tau_t\right)^{(c)} = \frac{\boldsymbol{z_t}^{(c)} - \bar{z_t}}{\operatorname{Var}\left({\boldsymbol{z_t}}\right)\cdot \tau_t}, \quad \tau_t=G(\Delta Z)\cdot\tilde{\tau}_c$$
>>
>where $\boldsymbol{z_t}$ represents the global logits, $\tau_t$ denotes the global logits' temperature, and other symbols follow the definitions in the previous explanation.
>
>- For local logits, since the class-specific temperature does not apply to local logits, the temperature computed by $G$ is used directly as the final temperature coefficient. This ensures flexibility and effectiveness in temperature adjustment during the knowledge distillation process. Thus, the general Z-Score formula in Equation (2) is instantiated for local logits as:
>>
>>$$\mathcal{Z}\left(\boldsymbol{z_s} ; \tau_s\right)^{(c)} = \frac{\boldsymbol{z_s}^{(c)} - \bar{z_s}}{\operatorname{Var}\left({\boldsymbol{z_s}}\right)\cdot \tau_s}, \quad \tau_s=G(\Delta Z)$$

We again thank the reviewers for the pertinent questions and helpful suggestions they gave us!
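The two instantiations above can be sketched in a few lines (a minimal sketch following the rebuttal's formulas; `stage_temp` and `class_temp` are placeholder values standing in for $G(\Delta Z)$ and $\tilde{\tau}_c$, which in the paper come from Eqs. (3) and (4)):

```python
import numpy as np

def z_score(z, tau):
    # Z(z; τ)^(c) = (z^(c) - mean(z)) / (Var(z) · τ), per the rebuttal's notation
    return (z - z.mean()) / (z.var() * tau)

stage_temp = 2.0   # placeholder for G(ΔZ), the stage-wise temperature of Eq. (3)
class_temp = 1.5   # placeholder for τ̃_c, the category temperature of Eq. (4)

z_global = np.array([2.0, 0.5, -1.0])   # global logits z_t
z_local  = np.array([1.8, 0.3, -0.9])   # local logits z_s

zt = z_score(z_global, stage_temp * class_temp)  # global: τ_t = G(ΔZ)·τ̃_c
zs = z_score(z_local, stage_temp)                # local:  τ_s = G(ΔZ)
```

Since the temperature is a positive scalar, both instantiations preserve the ordering of the logits while rescaling them to a common, zero-mean scale.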
Summary: The paper introduces ReT-FHD, a framework for heterogeneous federated knowledge distillation that contributes three core ideas. First, it proposes Multi-level Elastic Temperature to adaptively regulate how much knowledge is distilled at each layer, enhancing cross-architecture consistency. Second, it implements Category-Aware Global Temperature Scaling, assigning class-specific temperatures to better handle non-IID data. Analysis shows this stabilizes convergence by harmonizing gradient variations. Finally, it integrates a Z-Score–based Guard to detect and deter malicious behaviors like label flipping. Experiments across benchmarks show significant accuracy improvements and communication efficiency. **Update after rebuttal:** Thanks to the authors for their response. After reading it, I would like to keep my original score. Claims And Evidence: **Well-Supported Claims:** **1. Adaptive Temperature Calibration Improves Accuracy.** Claim: Multi-level elastic temperature and category-aware scaling enhance accuracy under model/data heterogeneity. Evidence: Experiments on CIFAR-10/100, Tiny-ImageNet, and Flower102 show consistent accuracy gains. Ablation studies confirm the necessity of both components. **2. Communication Efficiency.** Claim: ReT-FHD reduces communication costs compared to proxy-based methods. Evidence: Quantified communication overhead aligns with the claim. **Problematic Claims:** **1. Theoretical Guarantees (Theorem 4.1) on Temperature-Efficacy.** Claim: Theorem 4.1 is presented as a theoretical foundation showing how temperature adjustments reduce entropy or gradient variance, thereby enhancing heterogeneous distillation. Issue: The paper does not clarify under what specific assumptions (e.g., model randomness, uniform client participation, or particular distributions of logits) this theorem holds. 
It is also unclear whether the same conditions remain valid in the multi-level knowledge distillation or Category-Aware Global Temperature Scaling framework. From the proof in the appendix, it appears that the theorem’s validity depends on the specific algorithm proposed by the authors. **2. Applicability of Lemma 4.2 in Realistic Scenarios.** Claim: Lemma 4.2 assumes a random initialization where the model is indifferent to all classes, thereby implying balanced effects on each class under soft label distributions. Issue: This assumption may be too strong for real-world settings, where data can be highly skewed and models might be partially pre-trained or not truly “randomly” initialized. As a result, the claimed uniform impact on different categories could fail to hold, and the lemma does not clarify how quickly or under what conditions the model transitions from this idealized initialization into a state where nontrivial data heterogeneity and model structure are present. Methods And Evaluation Criteria: **1. Design Rationale for Key Equations:** For Equation (3), the paper bases its temperature adaptation on the Z-score difference between the next-stage logits and teacher logits, yet does not explain why the difference involving the current-stage logits was not considered. If the goal is to measure the discrepancy between local and teacher knowledge, one might expect the current stage to be more directly relevant. For Equation (4), it applies a negative sign before \beta, effectively lowering the temperature for classes with higher loss. This design choice seems at odds with the intuition that “hard” classes (i.e., larger loss) should receive more guidance. A detailed rationale or additional experiments could clarify why reducing temperature for higher-loss classes aids learning. **2. 
Selective Scaling of Only Global Logits:** The paper confines category-aware temperature scaling to global logits, without mentioning the possibility of partially or fully scaling local logits. Since local distributions may be highly skewed, applying a similar scaling locally could potentially yield further improvements—or at least warrants discussion to justify why the authors limit scaling to the global side. **3. Completeness of the Update Formula:** Equation (6) is missing the intermediate update and lacks an explicit explanation of how the parameters are iteratively updated. Clarifying that step would better illustrate how multi-level distillation is integrated into the local optimization process. **4. Motivation and Implementation Details:** While the paper’s overarching goal—enhancing model performance via adaptive temperature—is clear, some formulaic details require further justification. Providing explicit motivations for these design choices (e.g., how each approach helps balance knowledge across architectures or handles non-IID distributions) would help readers understand why these decisions are suited to the specific problem of heterogeneous federated learning. Theoretical Claims: I checked the proofs for Theorem 4.1, Corollary 4.2, and Theorem 4.3 in the Appendix, and here are some observations: **1. Boundary Conditions and Dynamic Distributions:** Theorem 4.1 relies on taking the partial derivative of information entropy with respect to temperature under a fixed, single softmax distribution. In a multi-round FL process, however, both teachers’ and students’ logits evolve. If \tau approaches 0 or becomes very large, or if the model distribution becomes nearly deterministic at intermediate steps, the partial derivative behavior may shift drastically. 
The paper does not clarify how Theorem 4.1 extends to these boundary cases, nor does it show whether the distributional assumptions remain valid across rounds and layers, given that entropy typically changes along with training progress. **2. Uniform Class Impact in Partial Derivatives:** The proof of Corollary 4.2 concludes that all classes experience the same shift in knowledge scale by showing the partial derivatives for each class are identical. However, it does not fully justify why this equivalence holds once training proceeds and the network starts updating logits in potentially different ways for each class (especially if class distributions shift or vary across training steps). While the algebraic steps appear correct for an initial, uniformly random setup, the proof does not clarify how or whether the partial derivatives remain identical in subsequent iterations. **3. Class-Specific Temperature and Gradient Updates:** For Theorem 4.3, the logic that "increasing a class's temperature increases its gradient update" is demonstrated under the corollary's assumption of balanced classes and random initialization. This might not strictly hold in more skewed data settings or in deeper, multi-stage distillation pipelines unless further assumptions are introduced. Experimental Designs Or Analyses: **1. Heterogeneous Model Configuration:** The paper states that four models (AlexNet, ShuffleNetV2, ResNet18, GoogleNet) are used in a heterogeneous setup. However, it does not detail exactly how these models are distributed among clients, nor whether some clients share the same architecture. Adding a clear baseline (e.g., no heterogeneity) for comparison would help highlight how well the proposed method handles real-world model diversity. **2. Range of Dirichlet Parameters:** The authors mainly present results for \alpha=0.5, with limited discussion of the more extreme \alpha=0.1.
It would be helpful to visualize or table out how performance changes as \alpha varies (e.g., 0.1, 0.3, 0.5) to confirm the method’s sensitivity to stronger non-IID data. **3. Homogeneous vs. Heterogeneous in DeL:** The decentralized setting (Table 2) uses only 2-layer CNNs, implying no model heterogeneity. It remains unclear how the method would perform under simultaneous data/model heterogeneity in a truly decentralized scenario. Presenting such an experiment would better demonstrate ReT-FHD’s robustness in the most challenging setups. **4. Ablation Studies vs. Main Table Results:** The ablation experiments in Table 4 report a final CIFAR-10 accuracy of 72.75%, which is close to the 72.76% in Table 1 under Dir(\alpha=0.5). However, it is unclear if those ablations are indeed run under the same Dirichlet parameters and the same heterogeneous model assignment. Moreover, the “Single-level & Fixed \tau” condition seems to combine multi-level knowledge distillation and Z-score distillation, but the paper does not indicate whether that configuration matches any baseline in Table 1. Clarifying these experimental conditions would help readers match ablation performance to main table comparisons. **5. Data/Model Heterogeneity Parameters (Tables 5-7):** The paper does not specify the exact Dirichlet settings or model architectures used for the experiments in Tables 5, 6, and 7 (e.g., how many clients used each architecture or the distribution of labels). Since these tables measure communication cost, malicious attack detection, and robustness, readers need a clear understanding of the data/model heterogeneity to interpret the results accurately; Table 5 omits a direct comparison with FCCL, another logit-based approach, leaving it uncertain how the proposed method stands against the most relevant baseline in terms of communication and computation. 
Also, the authors claim a notably lower cost than all other methods, but an explanation of why ReT-FHD, despite using additional temperature scaling steps, still results in reduced overhead would be beneficial. Supplementary Material: I briefly reviewed the supplementary code package. The code appears to follow the main paper’s methodology, including functions for multi-level distillation and class-wise temperature scaling. However, it would be helpful if there were more inline comments and a clearer mapping between specific equations in the paper and their implementations in the code. Overall, the structure aligns with the paper, but additional documentation or annotated notebooks would further aid reproducibility. Relation To Broader Scientific Literature: The paper builds on and extends multiple lines of work within federated knowledge distillation and heterogeneous FL: 1. Heterogeneous FL. It aligns with prior methods such as FedMD and FedGKT in enabling cross-architecture knowledge transfer. However, whereas many of these methods rely on proxy models, public datasets, or large feature transfers, the authors focus on lightweight logit exchanges alone, making it more communication-efficient. 2. Multi-level Distillation. Inspired by hybrid distillation approaches, the paper introduces a multi-stage temperature calibration that had not previously been emphasized in the federated setting. This multi-level tactic leverages work showing intermediate-layer KD can improve model robustness and accuracy, but extends it with a dynamic temperature schedule designed for non-IID data and heterogeneity. 3. Security Extensions. The Z-score–based guard echoes recent interests in detecting malicious updates or Byzantine nodes. By combining blockchain-inspired validation with logit-based screening, ReT-FHD attempts to formalize trust boundaries and reduce risks of label-flipping or adversarial logits. 
Essential References Not Discussed: One area the paper could further contextualize is robust aggregators and Byzantine-resilient FL. Although the authors mention label-flipping and some blockchain-based checks, they do not engage extensively with the broader literature on strong Byzantine defense mechanisms. For example, seminal works like Krum, Bulyan, or other robust-mean estimators have laid out various theoretical guarantees. Since the authors’ proposal relies partly on logit-verification to detect anomalies, connecting their method to known robust aggregation strategies would help clarify whether and how ReT-FHD stands on par with or improves upon existing Byzantine-robust solutions. Additionally, there is a growing body of work on dynamic or adaptive temperature in knowledge distillation (outside of FL contexts), such as “Adaptive Temperature for Distillation” or “Dynamic Soft Labels,” which the paper might draw upon to strengthen its theoretical rationale. Including those references would make the paper’s unique application of adaptive temperature to heterogeneous FL even clearer. Other Strengths And Weaknesses: **Other Strengths** 1. Creative Combination of Techniques: The paper’s integration of elastic temperatures (multi-level and category-aware) and a Z-score–based blockchain security guard represents a novel synthesis of ideas from robust FL, KD, and malicious detection. Even if each concept (dynamic temperature, logit sharing, blockchain validation) has appeared in some form, combining them systematically to tackle heterogeneous FL highlights originality. 2. Applicability to Diverse FL Modes: By explicitly supporting centralized, decentralized, and blockchain-based topologies, the authors address a variety of practical deployment scenarios. This flexibility extends the potential reach of ReT-FHD to real-world federated systems that might not always rely on a single server or a purely decentralized approach. **Other Weaknesses** 1. 
Limited Discussion of Other Cases: While the paper mentions Transformers or attention-based models as future work, it does not evaluate or discuss how easily ReT-FHD might extend to such architectures. This leaves open the question of whether the proposed solution is limited to some specific models or can generalize broadly. 2. Clarity Gaps: Although the presentation is overall coherent, a more detailed discussion of certain design choices (e.g., the exact model distributions for heterogeneous clients, the rationale behind measuring Z-score difference with the next stage rather than the current stage) would improve transparency and reproducibility. Other Comments Or Suggestions: There are some typos: 1. In the paper, Algorithm 1 references color coding (e.g., “highlighted in orange/pink/purple”) to distinguish between FL, DeL, or blockchain steps, but there is no explicit legend or explanation for the orange portion. This can be confusing to readers trying to follow the workflow. 2. In line 262, the phrase “We assume that the model is optimised under the guidance of a soft label distribution with smoothing degree \lambda_1 is satisfied:” seems grammatically wrong. 3. Line 220 incorrectly refers to Corollary 1 instead of Corollary 4.1. 4. In Section 3.2, the text introduces $z_{t}^{(l)}$ (logits of the teacher) but never explicitly defines $z_{s}^{(l)}$. Questions For Authors: 1. Potential Interaction Between Model and Data Heterogeneity: In Corollary 4.2, you focus on the effect of multi-level elastic temperature for mitigating model heterogeneity, concluding that class-specific differences are not impacted. Yet you later propose category-aware scaling to address data heterogeneity. Would class-wise temperature adjustments (Equation 4) interact with the multi-level scheme (Equation 3) in ways that actually affect model heterogeneity as well? 
Do you think it is necessary to analyze whether category scaling at each level might improve—or possibly interfere with—multi-level temperature adjustments? 2. Related Work on Dynamic Temperature Tuning: The authors introduce Multi-level Elastic Temperature and Category-Aware Global Temperature Scaling in the context of FL. However, there is other research in KD or general deep learning about adaptive/dynamic temperature (even if not specifically for federated settings). Could you clarify whether there are existing algorithms that similarly adjust temperature across multiple layers or categories, and if so, how your proposed approach differs? A comparison or reference to this broader literature (outside federated learning) would help situate your contributions more clearly. Code Of Conduct: Affirmed. Overall Recommendation: 3
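The entropy-temperature monotonicity that Theorem 4.1 relies on can at least be checked numerically for a fixed logit vector (a toy illustration only; it does not address the reviewer's point that logits evolve across FL rounds):

```python
import numpy as np

def softmax_entropy(z, tau):
    # entropy of softmax(z / τ) for a fixed logit vector z
    s = z / tau
    p = np.exp(s - s.max())   # shift for numerical stability
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

z = np.array([3.0, 1.0, 0.2])
entropies = [softmax_entropy(z, t) for t in (0.5, 1.0, 2.0, 4.0)]
# for fixed, non-constant logits, entropy rises with τ toward log(K)
```

The open question raised in the review is precisely whether this fixed-z picture survives when z itself changes between rounds and layers.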
Rebuttal 1: Rebuttal: We thank reviewer TEBy for the constructive comments: "…enhance accuracy under model/data heterogeneity", "…reduces communication costs compared to proxy-based methods", "… represents a novel synthesis of ideas from robust FL, KD, and malicious detection … ", "… address a variety of practical deployment scenarios". To thoroughly address your concerns, we will answer the questions one by one: **Questions:** **Q1: Potential Interaction** >The interaction of category temperature tuning (Eq.4) with the multi-level distillation (Eq.3) can have a **positive impact** on mitigating model heterogeneity. We facilitate knowledge transfer by applying category scaling to global logits, which simultaneously improves multi-level distillation. The ablation experiments in Tab.4 show that the combination of the two significantly improves performance, demonstrating that they are **complementary** in dealing with heterogeneity. **Q2: Differences with Existing Dynamic Temperature Methods** >Please refer to Reviewer JGAK's Q1 for the answer to this question. **Weaknesses:** **W1: Extension to Transformer** >Logits-based knowledge distillation is also **applicable to the Transformer architecture**, and similar work [1] has validated the effectiveness of hierarchical distillation on Transformers. Since we deliver logits information and eliminate architecture specificity through branching exits, our approach is more general and can be widely adapted to different architectures. **W2: More Detailed Clarity** >The exact model distribution for heterogeneous clients: the four models AlexNet, ShuffleNetV2, ResNet18, and GoogleNet are randomly and evenly assigned to clients. In distillation learning, logits serve as high-level guidance. Our elastic temperature calculation uses **next-stage logits** (vs. current-stage) to **avoid local stage overfitting and enhance stability**. 
Besides, the experiments on CIFAR-10 can verify this:

| CIFAR-10 | next-stage | current-stage |
| --- | --- | --- |
| accuracy | 72.76% | 71.54% |

**Responses to Questions about Theoretical Analysis**

>**Theorem 4.1:** Theorem 4.1 **does not** require specific assumptions, and it is valid in the framework of both multi-level elastic temperature and category-aware global temperature scaling. According to the information entropy formula, the information entropy of the logits is affected only by the softmax temperature and the model output, and it is **only related to the temperature coefficient** when the model output is **fixed**. In Eq.3, we set the **initial temperature ξ** and limit its variation by a **scaling factor γ and a log function** to avoid the boundary case. In addition, in lines 237-238 of the code file client.py, we set the temperature range to **0.5 ≤ τ ≤ 5**.
>
>**Corollary 4.2:** We refer to **the assumptions of [2]** to simplify the theoretical proof process. In addition, Corollary 4.2 is proposed to provide theoretical support for the category temperature setting. Regardless of the initial state of the model, we dynamically adjust the category temperatures of the global logits based on the local learning of different categories. Since temperature scaling is done at the softmax level, it is applied evenly to all classes in the same round, thus ensuring that the scaling effect remains consistent. In addition, we focus mainly on model heterogeneity and non-IID data distributions in FL, whereas **changes in the category distribution are not our concern**.
>
>**Theorem 4.3:** Although Theorem 4.3 assumes category equilibrium for analytical simplicity, temperature scaling **remains effective in heterogeneous settings**, enhancing gradient updates for difficult categories. Tab.1(a) and Fig.1 validate this in non-IID scenarios (e.g., α = 0.1, 0.5, pat), showing **significant performance gains**.
Fig.2 further demonstrates that increasing distillation layers from 1 to 4 improves accuracy, plateauing at 5, confirming the validity of our multi-level mechanism.

**Supplementary experiments**

**Experiments on model heterogeneity in DeL**

>These decentralized baselines, except DeSA [3], generally **do not consider model heterogeneity**, and it would be unfair to force them into serving as baselines for heterogeneous models. Due to time constraints, we supplemented our model-heterogeneity experiments on only one dataset:

|Flower102|DeSA|ReT-FHD|
|---|---|---|
|accuracy|37.17%|40.82%|

**References:**

>[1] One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation
>
>[2] Improving Adversarial Robust Fairness via Anti-Bias Soft Label Distillation
>
>[3] Overcoming data and model heterogeneities in decentralized federated learning via synthetic anchors

Due to space constraints, we apologize for not being able to answer some of the questions; if any remain, we can respond to them in the second discussion. Finally, thank you for your time and effort in reviewing our work.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. After reading it, I would like to keep my original score.
Summary: The paper introduces ReT-FHD, which aims to improve the efficiency of federated learning by addressing model and data heterogeneity. The paper’s key motivation comes from a common weakness of current federated distillation methods, i.e., suboptimal temperature calibration during knowledge fusion. Therefore, the paper proposes several innovative mechanisms to tackle this challenge. Firstly, Multi-level Elastic Temperature is a dynamic temperature adjustment mechanism that adjusts the distillation intensity across model layers, which can optimize knowledge transfer between heterogeneous local models; Category-Aware Global Temperature Scaling introduces temperature calibration for individual categories, based on the confidence distribution in the global logits, ensuring a more personalized distillation process; and Z-Score Guard is a blockchain-verified validation mechanism to mitigate attacks. Through experiments on various datasets, the proposed framework significantly outperforms existing methods. Claims And Evidence: Yes, the authors provided the corresponding analysis and experiments for the claims. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I have checked the proofs of Theorem 4.1 and Corollary 4.2. Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The key contribution of the paper is multi-level elastic temperature scaling. Essential References Not Discussed: Some knowledge distillation methods with dynamic temperature are not discussed. Other Strengths And Weaknesses: Strengths: 1. The paper combines multi-level elastic temperature scaling, category-aware global temperature scaling, and blockchain-based security mechanisms into a single framework for federated knowledge distillation. 2. The framework focuses on reducing communication costs and enhancing security without introducing significant computational overhead. 
Therefore, the framework has a wider range of applications. 3. The paper provides clear experiments across multiple benchmark datasets compared with several methods. The use of several datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet, Flower102) and non-IID data distribution settings shows the comparative performance of the proposed method. The ablation studies help prove the contribution of each component in the proposed framework. Weaknesses 1. Dynamic temperature has been discussed by several works on knowledge distillation, but the paper does not discuss these works much, even though dynamic temperature is a core contribution of the work. 2. The work seems to be a simple combination of multi-level knowledge distillation and dynamic temperature, while both multi-level knowledge distillation and dynamic temperature have already been proposed. Though the authors made several contributions by designing a Z-score-based temperature adjustment and introducing it into federated learning, which indicates some novelty, I personally consider the novelty not fully sufficient. 3. The different deployment scenarios make only a little sense. Overall, the federated learning framework does not require the server to conduct much computation with privacy preservation, so it can easily be extended to different scenarios. The deployment method on blockchain may not be appropriate as a main contribution. Other Comments Or Suggestions: 1. Not all the notations are provided with an introduction. 2. Some key references are not introduced. 3. The adjustment strategy for temperature requires more detailed justification. Questions For Authors: 1. What is the main difference in the temperature strategy compared with existing dynamic temperature knowledge distillation methods? Are there any specific requirements or challenges? 2. 
Which one contributes more to enabling logit-based knowledge distillation to address model heterogeneity in federated learning: the dynamic temperature or the multi-level setting? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer JGAK for the constructive comments: "…reducing communication costs and enhancing security …", "… provides clear experiments …". To thoroughly address your concerns, we will answer the questions one by one:

**Questions:**

**Q1: Differences from existing dynamic temperature knowledge distillation methods**

>Existing dynamic temperature adjustment strategies—BGNN[1] (network-learned temperature for GNN nodes), CTKD[2] (curriculum-learning-based scheduling), and DTKD[3] (sharpness approximation for temperature optimization)—are designed for **centralized learning** with **homogeneous architectures and IID data**, adjusting temperature **across training rounds (e.g., early vs. late epochs)**. Our approach 1) employs class-specific calibration for global logits to mitigate **non-IID label skew** and 2) assigns independent temperature coefficients to **distinct model stages (e.g., shallow vs. deep layers) within a single training round** to address **model heterogeneity**.

**Q2: Which one contributes more to solving model heterogeneity in FL**

>As detailed in **Line 32** and **Line 78**, the multi-level distillation is foundational to our heterogeneous distillation framework. As shown in the **Ablation Study (Tab.4)** (removing the multi-level setting reduces performance by 3.08%), we validate its effectiveness in heterogeneous federated learning scenarios. Meanwhile, the dynamic temperature (**Theorems 4.1 and 4.3**) enables richer transfer of knowledge from the global logits and improves performance by **2.82% (Tab.4)**. We therefore respectfully assert that these components form an indivisible contribution core.

**Weaknesses:**

**W1: Not much discussion of dynamic temperature work**

>Our motivation is to explore whether **logits** distillation can be adapted to federated heterogeneous learning. As discussed in **Q1**, existing methods adjust temperatures between training rounds. 
This **coarse-grained adaptation** relies on global training progress and fails to localize dynamic changes within a single round. In FL, due to clients' non-IID data, **models across clients may exhibit significantly divergent optimization states within a single round**. However, cross-round temperature updates [1][2][3] require synchronization in the next round, leading to slowed convergence and even suboptimal solutions caused by delayed adaptation.

**W2: The novelty is not so sufficient**

>Our work is not a "simple" combination of multi-level distillation and dynamic temperature, as detailed in **Q1** and **W1**. We clarify our contributions as follows:
>
>>1. **Incorporating dynamic temperature distillation into heterogeneous FL**, enhancing knowledge transfer across diverse models;
>>2. **Designing a Z-score-based malicious node verification mechanism**, effectively detecting and filtering abnormal logits to improve model robustness;
>>3. **Providing a theoretical justification** for ReT-FHD’s effectiveness from the perspective of information entropy and gradient updates, offering solid theoretical support for our method;
>>4. **Extensive experiments** on centralized FL (Tab.1 and Tab.3), decentralized FL (Tab.2), and blockchain FL (Tab.6, Tab.7, and Fig.3) validate our robustness and flexibility.
>>
>We respectfully request reconsideration of this core contribution.

**W3: Blockchain deployment is not suitable as a main contribution**

>We do not claim that blockchain deployment is our core contribution. In contribution 3, we clarify that "our framework employs **Z-score verification** to validate logit distributions against dynamic boundaries, enabling automated reward/punishment protocols that deter malicious behaviors." We address its unique challenge in logits-sharing scenarios:
>
>>- **Motivation**: Traditional blockchain FL [4][5][6] validates via *model parameters*, but this fails for *logits-based distillation*. 
>>- **Contribution**: Our **Z-score validation** (Eq.7) ensures compatibility without parameter exposure, achieving **20.11% post-attack accuracy** (Tab.7, vs. the baseline's 11.43%).
>
>This mechanism directly enables our main contribution, **Z-Score Guard** (Eq.7), for the standardized detection of malicious logits.

**References:**

>[1] Boosting Graph Neural Networks via Adaptive Knowledge Distillation
>
>[2] Curriculum Temperature for Knowledge Distillation
>
>[3] Dynamic Temperature Knowledge Distillation
>
>[4] Robust blockchained federated learning with model validation and proof-of-stake inspired consensus
>
>[5] BlockDFL: A blockchain-based fully decentralized peer-to-peer federated learning framework
>
>[6] Bit-FL: Blockchain-enabled incentivized and secure federated learning framework

Please let us know if we have addressed your concerns; if not, we welcome any further questions you might have. Finally, thank you for your time and effort in reviewing our work.

---

Rebuttal Comment 1.1: Comment: The authors have addressed some of my previous questions, and thus, I will raise the previous score to 3.
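As a side note for readers, the Z-score screening idea discussed under W3 can be sketched as follows. This is a hypothetical illustration only: the per-client mean statistic, the threshold `z_max`, and the toy data are our assumptions, not the paper's Eq. 7.

```python
import numpy as np

def zscore_filter(client_logits, z_max=2.5):
    """Flag clients whose aggregated logits deviate from the cohort.

    client_logits: (n_clients, n_classes) array of per-client mean logits.
    Illustrative only: the summary statistic and threshold are assumptions.
    """
    stats = client_logits.mean(axis=1)        # one summary scalar per client
    mu, sigma = stats.mean(), stats.std()
    z = np.abs(stats - mu) / (sigma + 1e-12)  # standardized deviation
    return z <= z_max, z

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 1.0, size=(19, 10))  # well-behaved clients
attacker = np.full((1, 10), 25.0)             # inflated malicious logits
keep, z = zscore_filter(np.vstack([honest, attacker]))
assert keep[:-1].all() and not keep[-1]       # only the attacker is filtered
```

The same pattern (standardize a per-client statistic, reject outliers beyond a dynamic boundary) is what allows validation without exposing model parameters.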
DragSolver: A Multi-Scale Transformer for Real-World Automotive Drag Coefficient Estimation
Accept (poster)
Summary: The authors present DragSolver, a Multi-scale Transformer for processing car point clouds to estimate drag coefficients for automotive designs in real-world applications. To adapt traditional transformer architectures to this new task, the authors propose multiple designs to achieve more trustworthy results, like 1) multi-scale feature extraction, 2) heterogeneous scale normalization to process vehicles of diverse sizes, 3) surface-guided gating to alleviate distractions caused by irrelevant interior designs, and 4) a Monte Carlo output drop layer to estimate the range of the drag coefficient for real-world applications. The results show the superiority of the architecture design. Claims And Evidence: The presentation is clear. Methods And Evaluation Criteria: 1. In line 60, the authors emphasize that both local details and global shape influence drag. Why do the authors choose point clouds over meshes as the model input? The point cloud is not always an efficient 3D representation, as it requires a vast number of points to capture high-frequency patterns like sharp edges. Also, reconstructing meshes from point clouds leads to artifacts, indicating that point clouds might not be an ideal representation for preserving sufficient local features for estimating drag. 2. In line 210, the authors normalize the sizes of different vehicles by normalizing the wheelbase length to a fixed number. However, for real-world applications, the wheelbase length is not always standardized, *e.g.* applying a fixed wheelbase length for `Ford F-450` and `Peel P50` surely leads to an unexpected result. Therefore, why not directly use their actual size in the real world? 3. In line 264 (Sec. 3.5), the authors propose MC Dropout for the output layer (Module 4 in Fig. 2) to produce diverse estimations and provide an error interval estimation. Dropout is mostly a technique adopted during training to prevent overfitting. Can this be accomplished with a point cloud sampling method to produce diverse inputs? 
Theoretical Claims: The theoretical claims are well-addressed. Experimental Designs Or Analyses: The experiments are well-designed. The ablations are sufficient. Supplementary Material: The supp. provides more details on visualizations, dataset, implementation details, baseline methods. Relation To Broader Scientific Literature: This paper is closely related to point cloud processing methods like PointNet, PointNet++, ... as this paper focuses on designing 3D point cloud operators to estimate drag coefficients for vehicles. The paper is also closely related to DrivaerNet, DrivaerNet++, DrivaerML as they all focus on estimating drag coefficients. Essential References Not Discussed: All related works are included. Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s insightful and constructive comments. We have conducted additional experiments and analyses based on your suggestions, and these results will be explicitly included in the revised manuscript. **Important Note:** To clearly demonstrate the thoroughness of these additional experiments, detailed experimental configurations are provided in [**Table 1 (Click to View)**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%201.jpg). Additional results mentioned in our responses are provided anonymously here: [**Anonymous Supplementary Tables**](https://anonymous.4open.science/r/ICMLRebuttal-F329) --- **Q1: Point Clouds vs. Meshes** **A1:** We appreciate the reviewer’s insightful comment regarding the choice of point clouds over meshes. Indeed, mesh-based methods can effectively encode surface connectivity and local geometric details, but their performance heavily depends on the quality and consistency of mesh discretization. Achieving high-quality mesh generation often requires substantial domain expertise and careful tuning specific to each geometric structure, significantly increasing complexity and cost. To ensure general applicability, flexibility, and ease of use—especially for large-scale automotive datasets—we adopted point clouds. Point clouds naturally bypass issues related to mesh discretization and topology, simplify data processing pipelines, and enable straightforward implementation of various data augmentation techniques (e.g., random sampling, rotation, noise addition), enhancing robustness and generalization. Moreover, to address your concern about capturing local geometric details, we explicitly evaluated DragSolver's accuracy across different point cloud sampling densities ([**Table 4**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%204.jpg)). 
Our experiments confirm that DragSolver maintains strong predictive accuracy even with significantly reduced point counts (as few as 10,000 points), effectively capturing both local and global aerodynamic features necessary for accurate drag estimation. We will explicitly clarify this rationale in the revised manuscript. **Q2: Wheelbase Normalization** **A2:** We acknowledge the reviewer’s concern regarding normalization of wheelbase length. Although directly using actual vehicle dimensions is intuitive, significant scale differences (e.g., Ford F-450 vs. Peel P50) can cause substantial training instability and poor generalization, especially with limited or noisy training data. Our comparative experiments ([**Table 5**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%205.jpg)) clearly demonstrate that training without wheelbase normalization results in notably higher errors and instability (e.g., Relative *L²*: 0.0057 without normalization vs. 0.0014 with normalization, under limited 30% training data conditions). Importantly, our normalization strategy uniformly scales vehicle geometries to a consistent reference length without altering their inherent shapes, thus significantly reducing scale-related instability and improving predictive accuracy. In practical scenarios, predictions can be effortlessly rescaled back to the actual dimensions of the vehicles. We will explicitly clarify this point in the revised manuscript. **Q3: MC Dropout vs. Point Cloud Sampling** **A3:** We appreciate the reviewer’s insightful comment. Indeed, Dropout is commonly employed during training to prevent overfitting. However, in our method, MC Dropout is specifically utilized during inference as an approximate Bayesian approach to estimate epistemic (model) uncertainty [1], which differs fundamentally from the aleatoric (data) uncertainty captured by random point cloud sampling. 
To clearly illustrate this distinction, we conducted additional comparative experiments ([**Table 6**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%206.jpg)). The results show that random sampling alone primarily addresses aleatoric uncertainty and thus cannot adequately capture epistemic uncertainty, which MC Dropout specifically targets. Moreover, combining both methods provides a balanced estimation of both uncertainties, leading to a more robust prediction. We will explicitly clarify this distinction in the revised manuscript. [1] Gal, Yarin, and Zoubin Ghahramani. "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning." *ICML*. PMLR, 2016. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed feedback, which has addressed some of my concerns. I understand that point cloud is a widely-adopted method to represent 3D shapes as it can be processed with lower cost and complexity. Regarding the point cloud representation, I believe certain techniques like mesh surface importance sampling might compensate for the information loss caused by transforming meshes into point clouds. However, as for the data augmentation part, certain augmentation techniques like `rotation`, `noise addition` do not seem appropriate, as they will surely affect the drag coefficients. As is shown in [Figure 6](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%206.jpg), the combination of both `model and data uncertainty` leads to the largest estimation variance for DragSolver, but why is its estimation variance smaller than `model uncertainty only` for PTv3 (*e.g.* $\pm$0.1431 v.s. $\pm$0.1647 for Relative $L^{2}$)? --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s insightful and constructive comments. We respond in detail below, referencing additional experimental evidence ([**Table 7**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%207.jpg)) to clarify the concerns raised. 
**Q1: Data Augmentation and Its Impact on Drag Coefficients** **A1:** We fully agree with the reviewer’s point that certain augmentation techniques, such as rotation and noise addition, could indeed be inappropriate because they may significantly affect the drag coefficients. In fact, if all available data were perfect, we would prefer to avoid using data augmentation altogether. However, real-world data are rarely perfect and often contain minor perturbations, such as slight misalignments during scanning, partial occlusions, or small manufacturing variations. Thus, our data augmentation strategy—small-angle rotations (±2–4°), mild translations, and moderate noise additions—is deliberately designed to simulate these realistic imperfections, allowing us to evaluate the model’s robustness under practical conditions. We strictly avoid large-scale transformations (e.g., 90° rotations) that would invalidate the physical meaning of the drag coefficient. In the revised manuscript, we will explicitly clarify this motivation. As shown in [**Table 3**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%203.jpg), our model exhibits superior robustness compared to other models, consistently maintaining stable performance under these controlled augmentations, further validating its resilience to real-world noise and imperfections. **Q2: Uncertainty Comparison: DragSolver vs. PTv3** **A2:** We greatly appreciate the reviewer’s careful observation regarding uncertainty behavior presented in [**Table 6**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%206.jpg). Indeed, DragSolver exhibits an additive effect when combining aleatoric and epistemic uncertainties, resulting in the highest overall variance. Conversely, for PTv3, combined uncertainty is lower than epistemic uncertainty alone, indicating a complementary effect. 
To better understand and clarify this phenomenon, we performed additional experiments detailed in [**Table 7**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%207.jpg). These supplementary results clearly indicate that the interactions between aleatoric (data) and epistemic (model) uncertainties are significantly influenced by model architecture and random seed configurations, rather than being specific to a particular model architecture: - For DragSolver, the interaction between aleatoric and epistemic uncertainties is significantly influenced by the choice of random seed. Specifically, for Seed=1 and Seed=5, we observed an additive effect, meaning these uncertainty sources independently increase the overall variance. In contrast, with Seed=3, DragSolver exhibits a complementary (smoothing) effect, whereby aleatoric uncertainty mitigates fluctuations caused by epistemic uncertainty, effectively reducing the total variance. - Conversely, PTv3 generally demonstrates complementary behavior with Seed=1 and Seed=5, exhibiting a stabilizing effect. However, the influence of the random seed is evident with Seed=3, where PTv3 shows an additive effect, as both uncertainty types independently contribute to increasing the total variance. Importantly, this variability is **not a weakness**, but rather reflects meaningful differences caused by the model architectures and random seed selections. >**Specifically, the interaction between aleatoric (data-related) and epistemic (model-related) uncertainties is not fixed. 
Instead, it changes according to different random seeds, highlighting how sensitive the uncertainty estimation is to stochastic factors.** >**Rather than indicating a lack of stability, this variability helps us better understand how different model architectures and the randomness inherent in data sampling together influence uncertainty estimates.** By conducting experiments across multiple random seeds, we obtain a deeper and more robust understanding of each model’s stability and the relative contributions of different uncertainty types. We will explicitly clarify this explanation in the revised manuscript to avoid confusion and better emphasize the significance of our uncertainty analysis. Again, thank you for your insightful comments. These points significantly strengthen the clarity and rigor of our manuscript.
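For readers unfamiliar with the mechanism discussed in this thread, MC Dropout (Gal & Ghahramani, 2016) keeps dropout active at inference and treats the spread of repeated stochastic forward passes as an epistemic-uncertainty estimate. Below is a minimal NumPy sketch with a toy regressor; it is our own stand-in, not DragSolver's actual output layer, and the layer sizes, dropout rate, and pass count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy regressor standing in for a trained drag predictor:
# the weights are fixed ("trained"); only dropout is stochastic.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def mc_dropout_predict(x, p_drop=0.2, T=50):
    """Run T stochastic forward passes with dropout kept ON at inference;
    the std of the predictions serves as an epistemic-uncertainty proxy."""
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop   # Bernoulli dropout mask
        h = h * mask / (1.0 - p_drop)          # inverted-dropout rescaling
        preds.append((h @ W2).item())
    preds = np.asarray(preds)
    return preds.mean(), preds.std()

x = rng.normal(size=(1, 8))
mean_cd, std_cd = mc_dropout_predict(x)
print(f"predicted Cd ~ {mean_cd:.3f} +/- {std_cd:.3f}")
```

Aleatoric uncertainty, by contrast, would be probed by perturbing the input (e.g., resampling the point cloud) rather than the dropout masks, which is the distinction the rebuttal draws.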
Summary: The paper presents a Transformer-based framework designed for predicting the aerodynamic drag coefficient of automotive designs directly from 3D vehicle models. This work is motivated by the high computational costs and inefficiencies of traditional Computational Fluid Dynamics (CFD) simulations and wind tunnel experiments, which, despite their accuracy, are often too slow for rapid design iterations. The authors propose DragSolver as a deep learning-based surrogate model that integrates multi-scale feature extraction, heterogeneous scale normalization, surface-guided gating, and epistemic uncertainty estimation to improve the reliability and generalizability of aerodynamic predictions. Claims And Evidence: One of the strengths of the paper is the multi-scale feature extraction mechanism, which effectively captures both global shape characteristics and fine-grained local geometric details that influence aerodynamics. Another notable contribution is the surface-guided gating mechanism, which suppresses irrelevant internal structures such as seats and dashboards that are often present in 3D scans but do not impact external aerodynamic behavior. Methods And Evaluation Criteria: The paper uses only fully supervised training. It does not provide a detailed computational efficiency analysis in comparison to alternative CFD-based surrogate models. Theoretical Claims: There were no theoretical claims in the paper. Experimental Designs Or Analyses: The paper compares performance on three different datasets. Supplementary Material: Supplementary material is adequate. Relation To Broader Scientific Literature: The paper is focused on automotive drag prediction and is, as such, narrow in scope. Essential References Not Discussed: None. 
Other Strengths And Weaknesses: Although DragSolver is significantly faster than traditional CFD simulations, the inference time of 0.9 to 5 seconds per shape could still be a limitation in real-time design applications where instant feedback is necessary. A comparison with other deep learning-based surrogates in terms of computational cost would further contextualize the trade-offs involved in adopting this approach. Another limitation is the reliance on fully supervised learning, which requires a large number of high-fidelity training samples with ground-truth Cd values obtained from CFD or wind tunnel experiments. While the study evaluates DragSolver across multiple datasets, it does not explicitly address real-world deployment challenges, such as robustness to noisy or incomplete 3D scans. Industrial CAD models and real-world scans often contain missing data, occlusions, or sensor artifacts that could impact the model’s predictions. Hence, while this is a great concept, further research is needed to optimize computational efficiency, reduce data dependency, and improve robustness to real-world noise. Other Comments Or Suggestions: None Questions For Authors: None Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s insightful and constructive comments. Here, we respond to each concern in detail. **Note to reviewers:** We conducted additional experiments and analyses in response to your valuable suggestions. These results will be explicitly included in the revised manuscript. Detailed experimental configurations are provided in [**Table 1(Click to View)**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%201.jpg), and supplementary results can be found anonymously here: [**Anonymous Supplementary Tables**](https://anonymous.4open.science/r/ICMLRebuttal-F329) --- **Q1: Computational Efficiency and Comparison with Deep Learning-based Surrogates** **A1:** We appreciate the reviewer’s comment on computational efficiency. The reported inference time (0.9–5 seconds per shape) includes 10 inference passes for uncertainty estimation and data loading overhead. To address this concern, we conducted additional efficiency comparisons with state-of-the-art models (PTV3, Mamba3D, PointGPT; [**Tables 2**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%202.jpg) and [**3**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%203.jpg)). DragSolver consistently achieves superior accuracy, particularly under limited training data (5%–30%) and high-noise conditions where other models often fail to converge. A single-pass inference (without uncertainty estimation) for the entire test set (1154 samples) takes only 11.98–13.25 seconds, demonstrating DragSolver’s practical computational efficiency. Additionally, automotive aerodynamic optimization typically requires feedback at intervals of seconds or minutes during design iterations rather than millisecond-level speed, thus DragSolver fully meets real-world requirements. We will clarify this context explicitly in the revised manuscript. 
**Q2: Data Dependency (Fully Supervised Learning)** **A2:** We appreciate the reviewer’s insightful comment regarding data dependency. Although our current method indeed uses fully supervised learning and thus relies on labeled data, we specifically evaluated DragSolver’s effectiveness under significantly reduced training data conditions (as low as 5%–30%, see [**Table 2**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%202.jpg)). The results demonstrate DragSolver’s remarkable ability to maintain high accuracy (*R²*>0.92 at 10% training data) and superior stability compared to state-of-the-art surrogate models (PTV3, Mamba3D, and PointGPT), which largely fail to converge at these limited training ratios. These findings highlight DragSolver’s efficiency in leveraging limited labeled data. Nevertheless, integrating semi-supervised or physics-informed learning approaches to further reduce data dependency remains an important future direction. **Q3: Robustness to Noise and Incomplete Data** **A3:** We appreciate the reviewer’s valuable suggestion regarding robustness to real-world noisy or incomplete 3D data. To address this important concern, we conducted extensive experiments with varying levels of realistic noise and data augmentation ([**Table 3**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%203.jpg)), including substantial random dropout (up to 40%), rotations (±4°), translations (0.03), and noise addition (9%) to simulate common real-world imperfections such as missing data, occlusions, or sensor artifacts. The results clearly show that DragSolver consistently achieves superior accuracy and stability under these challenging conditions, significantly outperforming state-of-the-art surrogate models (PTV3, Mamba3D, and PointGPT), which often fail to converge effectively under higher noise intensities. This strongly supports DragSolver’s robustness and suitability for real-world deployment. 
Nevertheless, explicitly testing DragSolver on industrial CAD data and actual 3D scans remains an essential direction for future research, which we will highlight in the revised manuscript. **Q4: The Paper is Narrow in Scope (Automotive Drag Prediction)** **A4:** We appreciate the reviewer’s valuable point. Automotive drag prediction itself is an important and challenging problem with significant industrial impact, directly influencing vehicle energy efficiency and emissions. Moreover, automotive aerodynamics is widely recognized as a classical and representative scenario in aerodynamic shape optimization. Although our current study specifically targets automotive applications, the methodologies proposed in DragSolver—such as multi-scale feature extraction, surface-guided gating, and uncertainty quantification—can naturally generalize to broader aerodynamic and hydrodynamic design problems (e.g., aerospace, naval, and structural engineering). We will clarify both the intrinsic importance of automotive drag prediction and the potential broader applicability of our methods in the revised manuscript.
Summary: In this paper, the authors propose DragSolver—a method to effectively estimate physical properties of shapes, such as cars, without the need to run expensive CFD simulations, allowing for the design of novel shapes much faster and more effectively. The proposed method consists of four major blocks: (1) multi-scale feature extraction, which enables more accurate predictions across multiple scales, (2) input normalization, allowing the model to work with various meshes of changing size and number of nodes/edges, (3) an effective approach to ignoring irrelevant parts of the shape—such as interior objects—to produce more precise predictions, and (4) an MC-Dropout method that enables the approach to estimate uncertainties, which are of great importance in industrial workflows. The method is compared against a number of modern 3D architectures and demonstrates superior performance in terms of physical properties prediction accuracy. Claims And Evidence: The authors make several claims about each major component of their method (mentioned 4 parts) and either adequately discuss the motivation behind these claims and/or support them experimentally later in the experimental section. Methods And Evaluation Criteria: The choice of datasets and benchmarks is adequate and accurately reflects the current state of research in this direction. The evaluations are rigorous and demonstrate the effectiveness of the approach from multiple perspectives. Theoretical Claims: No theoretical claims, theorems, or proofs are presented in the paper. Experimental Designs Or Analyses: The design of the experiments sufficiently covers different aspects of the proposed approach, including both the accuracy of in-distribution predictions and the generalization ability of the model. Rigorous ablations are also much appreciated and support the claims made about the method. 
Additionally, the method is compared against a number of popular 3D architectures, further strengthening the case for its effectiveness. Supplementary Material: Additional visualizations and more detailed descriptions of the datasets and training pipelines are much appreciated. Relation To Broader Scientific Literature: The paper clearly situates itself within the existing literature by thoroughly discussing prior work on estimating physical properties of shapes, particularly in the context of CFD-free predictions. It provides sufficient detail on previous approaches, highlighting their limitations and demonstrating how the proposed **DragSolver** framework addresses these challenges, namely through the four discussed components of the method. Essential References Not Discussed: No essential references are missing. Other Strengths And Weaknesses: In short, the major strengths of the paper are: * The proposed DragSolver framework is clearly structured, with each component thoroughly justified and experimentally supported. The paper includes rigorous ablation studies that strengthen its claims. * The experiments cover multiple aspects, including in-distribution accuracy and generalization ability, and the method is compared against modern 3D architectures, demonstrating its effectiveness. * The paper thoroughly discusses prior work, highlights the limitations of existing approaches, and evaluates the method on relevant, up-to-date datasets, ensuring its alignment with the current state of research. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s positive evaluation and constructive summary, which greatly encourages us. **Note to reviewers:** We conducted additional experiments and analyses in response to the valuable suggestions from other reviewers. Detailed experimental configurations are provided in [**Table 1(Click to View)**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%201.jpg), and supplementary results can be found anonymously here: [**Anonymous Supplementary Tables**](https://anonymous.4open.science/r/ICMLRebuttal-F329) Based on insightful feedback from other reviewers, we have further strengthened our manuscript with the following detailed analyses and clarifications: - **Computational Efficiency:** We explicitly compared DragSolver against state-of-the-art deep learning surrogate models (PTV3, Mamba3D, PointGPT) under limited training data and varying noise conditions. These comparisons clearly demonstrate DragSolver’s superior predictive accuracy and competitive inference speed ([**Table 2**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%202.jpg) and [**Table 3**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%203.jpg)). - **Robustness under Limited Data and Noise:** We evaluated DragSolver extensively under significantly reduced training samples (as low as 5%–30%) and challenging noise scenarios (up to 40% dropout, ±4° rotation, 9% random noise), confirming its consistent robustness and strong generalization capabilities ([**Table 2**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%202.jpg) and [**Table 3**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%203.jpg)). 
- **Uncertainty Quantification:** We clarified and differentiated epistemic (MC Dropout) from aleatoric (random sampling) uncertainties through additional targeted experiments, emphasizing their complementary roles in robust uncertainty estimation ([**Table 6**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%206.jpg)). - **Choice of Point Cloud Representation:** We provided explicit rationale and empirical evidence supporting our choice of point clouds over meshes, highlighting advantages in computational efficiency, flexibility, and generalizability for large-scale automotive aerodynamic analysis ([**Table 4**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%204.jpg)). - **Wheelbase Normalization:** We further clarified the practical necessity and benefits of our wheelbase normalization strategy through comparative experiments, confirming it significantly reduces training instability due to scale variations while preserving original geometric proportions ([**Table 5**](https://anonymous.4open.science/r/ICMLRebuttal-F329/Table%205.jpg)). We believe these comprehensive revisions effectively address key concerns, further improving the manuscript’s clarity, rigor, and broader impact. Again, we sincerely thank the reviewer for their encouraging and supportive comments. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful response to the reviews. I appreciate the clarifications provided and the additional experiments. After reading your rebuttal as well as the comments from the other reviewers, I continue to believe that this work could be an interesting contribution. In particular, the area of surrogate modeling for CFD lacks broadly adopted baselines, and I find it valuable that your method also produces uncertainty estimates. I therefore maintain my original score. --- Reply to Comment 1.1.1: Comment: Thank you sincerely for your thorough and supportive review in the first round. 
Your positive evaluation and recognition of the potential impact of our work have been greatly encouraging for us, and we truly appreciate your thoughtful insights and constructive feedback. Currently, we find ourselves facing a difficult situation: as the reviewer discussion period is approaching its end and the acknowledgment deadline (April 4, AoE) has already passed, we have not yet received further responses from the other two reviewers. Given this circumstance and considering ICML’s reviewing policy, we now only have this opportunity to directly communicate with you and potentially seek your further support. During the rebuttal stage, we invested significant effort and computational resources—conducting rigorous supplementary experiments on eight A100 GPUs running continuously for several days—to fully address all reviewer comments. Specifically, we provided extensive additional comparisons with state-of-the-art methods (e.g., PTV3, Mamba3D, PointGPT), robust analyses under limited data (down to 5%) and significant noise conditions (up to 40% dropout, ±4° rotation, and 9% random noise), and offered deeper clarifications regarding our uncertainty quantification approach. As a result, we believe the manuscript is now considerably stronger, and the experimental validations are more comprehensive. Given these substantial enhancements and additional efforts, could we kindly ask whether you feel the manuscript has improved sufficiently in rigor and significance to justify increasing your score? We would deeply appreciate your consideration and additional support. Thank you once again for your valuable time and effort.
Efficient Generative Modeling with Residual Vector Quantization-Based Tokens
Accept (poster)
Summary: This paper proposes an efficient generative framework (ResGen) to model residual tokens that have an additional depth dimension in the code sequences. Specifically, ResGen adopts the masked generative framework to sample tokens, and leverages Gaussian mixtures to directly predict the sum of masked token embeddings. This method achieves strong results with improved efficiency on both class-conditional image generation and text-to-speech tasks. ## update after rebuttal The authors' response addressed my concerns regarding the efficiency of the proposed GMM head, in comparison with the AR head in RQ-Transformer. Therefore, I have adjusted my rating to borderline accept. However, I observed that the major performance gains in FID primarily come from the use of a MaskGIT-like generation framework, which is not an innovation of this work. I recommend that the authors clarify this point in their paper to ensure transparency and accuracy in reporting their contributions. Claims And Evidence: The claims in this paper are supported by experimental results or prior studies. Methods And Evaluation Criteria: I am not familiar with text-to-speech evaluation. The metrics used for class-conditional image generation are common and standard. Theoretical Claims: I did not fully check the details of the theoretical proofs in Section 3.2. Experimental Designs Or Analyses: The experimental designs are sound. Supplementary Material: I have checked the supplementary material, part A. Relation To Broader Scientific Literature: The key contribution of this paper is to model residual tokens more efficiently. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** - The paper is well written and easy to follow. - The proposed method can be generally applied to both image generation and text-to-speech generation. **Weaknesses:** - Residual tokens have both length and depth dimensions.
The focus of ResGen is to model **the depth dimension** more efficiently, i.e., by predicting embeddings of depth tokens collectively rather than individually. However, it is not clear to what extent this method improves efficiency over its baseline, i.e., a depth transformer in RQ-Transformer. Notably, ResGen adopts the masked generative framework to model **the length dimension**, which requires far fewer sampling steps than the autoregressive framework used in RQ-Transformer. This makes ResGen and RQ-Transformer less comparable. The authors are encouraged to provide a strict ablation on this, for example, using the same autoregressive framework to model the length dimension. - Comparisons with recent masked generative models such as MAGVIT-v2 [1] and MaskBit [2] are missing in Table 1 and Figure 2. It seems ResGen does not show clear advantages in terms of either generation quality or sampling speed over these methods. [1] Yu, Lijun, et al. "Language Model Beats Diffusion-Tokenizer is key to visual generation." The Twelfth International Conference on Learning Representations. [2] Weber, Mark, et al. "MaskBit: Embedding-free Image Generation via Bit Tokens." Transactions on Machine Learning Research. Other Comments Or Suggestions: Please see weaknesses. Questions For Authors: 1. Why not include VAR-d30 in Figure 2 for wallclock time comparison? 2. Could the authors provide the exact coordinates of the data points in Figure 2 (left one)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive feedback. We address the specific weaknesses and questions raised below: **Regarding W1: Isolating the Efficiency Gains of Depth Modeling** We agree that comparing ResGen (masked generation) directly with RQ-Transformer (autoregressive generation) makes it difficult to isolate the efficiency gains specifically from our depth modeling strategy versus the gains from the masked generation framework itself. To provide a stricter ablation, as suggested: * **New Experiment:** We created an "AR-ResGen" variant. This model uses the *exact same spatial autoregressive transformer* as the RQ-Transformer baseline. The key difference is that it replaces RQ-Transformer's autoregressive depth transformer with our *cumulative embedding prediction MLP* (of similar size) to handle the depth dimension. This isolates the effect of the depth modeling approach while keeping the sequence modeling framework constant (autoregressive). * **Results:** This AR-ResGen achieves better FID scores than the original RQ-Transformer, particularly with few iterative refinement steps for the depth prediction, demonstrating the efficiency of our cumulative embedding approach even within an AR framework. The results (FID) are: | AR-ResGen Iterations | w/o CFG | w/ CFG | | :------------------- | :------ | :----- | | 1 | 27.45 | 6.47 | | 2 | 24.10 | 5.33 | | 4 | 23.75 | 5.30 | | 8 | 23.48 | 5.22 | * **Conclusion:** This targeted ablation confirms that our cumulative embedding prediction for depth modeling is inherently more efficient and effective than standard autoregressive depth handling, independent of the sequence generation strategy. We will include these findings in the revised manuscript. **Regarding W2: Comparison with MAGVIT-v2 and MaskBit** We thank the reviewer for highlighting the missing comparisons with MAGVIT-v2 and MaskBit. 
We acknowledge their strong performance (FID 1.78 and 1.62) and will add these results to Table 1 and Figure 2 for context. Our current FID is 1.93 with similar steps. While their FID scores are currently lower, ResGen offers distinct advantages: * **Theoretical Advantage (Modeling Correlations):** ResGen predicts cumulative embeddings to explicitly model dependencies across the RVQ depth dimension. This differs from methods like MAGVIT-v2/MaskBit using Lookup-Free Quantization (LFQ) and predicting independent bit groups. MaskBit's own ablation (Table 3b) shows performance degrading as independent groups increase, suggesting potential scaling limitations for higher fidelity (requiring more bits/groups). ResGen's approach of modeling correlations directly may offer better scalability for deep quantization or numerous discrete outputs. To show the benefit of modeling these correlations between tokens across depth, we implemented a variant of our method which uses the same masked generation framework introduced in our method section but predicts discrete tokens directly and in parallel across all depths at each step, instead of predicting continuous cumulative embeddings. This variant achieved FID scores of 12.79 (w/o CFG) and 2.91 (w/ CFG). While this performance surpasses the other comparison methods evaluated in our study, it is slightly worse than our final proposed model which predicts cumulative embeddings, thus supporting the efficacy of the cumulative embedding strategy in capturing such token correlations. * **Practical Advantage (Resolution/Depth Trade-off):** Our use of 16-depth RVQ enables a lower spatial resolution (8x8) than typical 16x16 VQ methods, while achieving high reconstruction quality as measured by rFID. ResGen's efficient depth handling makes this viable, offering flexibility in memory usage and model design. We will clarify these points and add the comparisons to the manuscript.
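To make the cumulative-embedding idea concrete, here is a minimal pure-Python sketch (our own toy construction, not the paper's code; the codebooks, sizes, and token values are invented). It shows how a single sum of RVQ codewords can be split back into per-depth tokens by greedy residual quantization; the toy codebooks are placed in disjoint coordinate subspaces purely so that the recovery is provably exact in this example.

```python
# Toy RVQ setup: 4 depths, 4 codewords each, 8-dim embeddings. Codebook d
# only occupies coordinates 2d and 2d+1, so greedy decomposition is exact.
depth, codebook_size, dim = 4, 4, 8
patterns = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
codebooks = []
for d in range(depth):
    cb = []
    for p in patterns:
        word = [0.0] * dim
        word[2 * d] = p[0] * (d + 1)
        word[2 * d + 1] = p[1] * (d + 1)
        cb.append(word)
    codebooks.append(cb)

def decompose(cum_embedding, codebooks):
    """At each depth, pick the codeword nearest the running residual."""
    residual = list(cum_embedding)
    tokens = []
    for cb in codebooks:
        dists = [sum((c - r) ** 2 for c, r in zip(word, residual))
                 for word in cb]
        idx = dists.index(min(dists))
        tokens.append(idx)
        residual = [r - c for r, c in zip(residual, cb[idx])]
    return tokens

# A cumulative embedding built from known tokens is recovered exactly.
true_tokens = [3, 0, 2, 1]
cum = [sum(codebooks[d][t][i] for d, t in enumerate(true_tokens))
       for i in range(dim)]
recovered = decompose(cum, codebooks)
assert recovered == true_tokens
```

In real RVQ the codebooks overlap and recovery from a predicted continuous embedding is only approximate, which is exactly why modeling correlations across depth, as the rebuttal argues, matters.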
**Regarding Q1: Inclusion of VAR-d30 in Figure 2** Thank you for this suggestion. To provide a more complete comparison of performance and efficiency trade-offs, we agree that including VAR-d30 in the wallclock time comparison (Figure 2) would be beneficial. We will update Figure 2 to include VAR-d30 in the revised manuscript. **Regarding Q2: Exact Coordinates for Figure 2 (left)** Certainly. The exact coordinates (Wallclock Time [s], FID) for the data points in Figure 2 (left) are: * DiT: [ (4.68, 2.27) ] * VAR: [ (0.16, 3.60), (0.19, 2.95), (0.24, 2.33) ] * MAR: [ (89.69, 2.31), (105.6, 1.78), (133.01, 1.55) ] * RQTran: [ (3.61, 3.89), (3.73, 3.80) ] * ResGen-rvq8: [ (0.84, 2.87), (1.43, 2.78), (1.86, 2.75) ] * ResGen-rvq16: [ (0.98, 2.26), (1.67, 2.14), (2.28, 1.98) ] * MaskGiT: [ (0.98, 6.18) ] Solid lines connect different models; dashed lines connect points for the same model with varying sampling steps. We will add this detailed information, likely in the figure caption or appendix, for clarity in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, which addresses some of my concerns. However, I see that the gFID of "AR-ResGen" is not notably better than that of RQ-Transformer (5.50 as shown in Table 1 of the paper). Meanwhile, while AR-ResGen requires 2~4x fewer sampling steps in the depth dimension, this reduction does not appear to result in significant speed improvements, given that the original depth transformer of the RQ-Transformer is lightweight and introduces minimal computational overhead. To fully convince me, the authors could provide the wallclock time and FID for AR-ResGen as presented in Figure 2. Besides, according to the rebuttal, the FID of VAR does not match the numbers presented in Table 2 of the paper. Please explain the reasons. 
It seems that VAR is a more efficient choice than ResGen, as it is roughly 10 times faster and achieves similar FID (e.g., VAR (0.19, 2.95) vs. ResGen-rvq8 (1.86, 2.75)). --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s thoughtful follow-up and agree that practical efficiency, especially in terms of wallclock time, is essential to clearly demonstrate the advantages of our cumulative embedding prediction strategy. **Concerning the efficiency comparison between RQ-Transformer and AR-ResGen:** To directly address this concern, we conducted additional experiments measuring single-sample generation time using an NVIDIA A100 GPU, comparing the original RQ-Transformer with our AR-ResGen variant. The results are summarized below: |Model|FID|Wallclock Time (single sample)| |-|-|-| |RQ-Transformer|5.50|5.35s| |AR-ResGen (num_iter=1)|6.47|1.30s| |AR-ResGen (num_iter=2)|5.33|1.56s| |AR-ResGen (num_iter=4)|5.30|2.00s| |AR-ResGen (num_iter=8)|5.22|3.05s| These results clearly illustrate that AR-ResGen significantly reduces the generation time compared to RQ-Transformer, achieving a speedup ranging from approximately 1.75× to 4.1×, depending on the number of depth iterations. Notably, AR-ResGen reaches competitive or better FID scores within just two iterations, corresponding to roughly a 3.4× speedup relative to the original RQ-Transformer’s 5.35s sampling time. This explicitly validates our original claim that our cumulative embedding approach to depth modeling provides substantial practical efficiency improvements in terms of both generation speed and quality. **Regarding the discrepancy noted by the reviewer in the reported VAR FID scores:** Table 2 in the manuscript correctly presents the accurate values, whereas Figure 2 displays values from an earlier arXiv version (version 1). We sincerely apologize for this oversight and will correct Figure 2 in the revised manuscript to ensure consistency and clarity.
**Regarding the efficiency comparison between VAR and ResGen:** While we acknowledge VAR’s faster single-sample generation compared to ResGen, it's crucial to highlight that ResGen provides a significant advantage in terms of maximum batch size, enabling substantially greater throughput and parallelism during generation. From a modeling perspective, ResGen fundamentally differs from VAR, as elaborated in our response to **Reviewer BBR2**’s comment on **"Weakness 2: Elaborating Differences with VAR."** We sincerely appreciate the reviewer’s detailed observations, and we will carefully incorporate these clarifications into the revised manuscript.
Summary: This paper proposes ResGen, an efficient RVQ-based generative modeling framework for balancing quality and efficiency. It involves a masked token modeling strategy similar to MaskGiT, and a multi-token prediction pipeline inspired by CLaM-TTS, in a discrete diffusion process and variational inference. Experimental results demonstrate that ResGen outperforms autoregressive counterparts in both tasks, without compromising sampling speed. ## update after rebuttal We thank the authors for their detailed rebuttal. Most of my concerns have been resolved, so I have raised my rating. Claims And Evidence: Claim: ResGen outperforms autoregressive counterparts in both tasks, without compromising sampling speed. Question: In Tab. 1, ResGen shows a slightly higher FID score compared to MAR-L, achieving comparable quality (1.93 for ResGen vs. 1.78 for MAR-L) with a much faster sampling speed. However, MAR [autoregressive’24] by Li et al. of NeurIPS 2024 reports an inference time for MAR-L of about 0.3 sec/image, which is significantly faster than the wallclock time of over 100 sec shown in Fig. 2 of this paper. Moreover, MAR with fewer steps is also sufficient to achieve strong generation quality. Could ResGen achieve higher performance compared with a faster MAR with fewer steps? Potential Improvement: It would be helpful to specify the details of the experimental environment, especially the GPU type and AR steps used in MAR. Further evaluation of MAR with fewer steps could also be beneficial for proving the necessity of using ResGen. Methods And Evaluation Criteria: Methods: The motivation of this method is to eliminate the problem of sampling complexity associated with sequence length and depth for efficient RVQ-based image generation. The proposed masking and prediction strategy makes sense for the problem. The discrete diffusion process also helps with high-fidelity generation.
Evaluation Criteria: For image generation, the paper relies on standard generative modeling metrics like FID, and evaluates the efficiency using metrics such as inference time and batch size. For audio tasks, the paper follows VALL-E (Wang et al., 2023) and CLaM-TTS (Kim et al., 2024). Potential Improvement: 1. It would be helpful to specify the details of the experimental environment, especially when evaluating the maximum batch size and inference time. Theoretical Claims: This paper does not make theoretical claims. Experimental Designs Or Analyses: Strengths: 1. They run on both visual and audio tasks: ImageNet 256×256, and zero-shot TTS tasks. 2. They compare memory usage (max batch size) and speed to a variety of generative baselines. 3. They show ablations of top-p, number of steps, and temperature in the appendix. Potential Weaknesses: 1. In Tab. 1, RQ-Transformer uses rejection sampling rather than CFG, which might be misleading. 2. In Tab. 1 and Fig. 2, the speed comparisons and maximum batch size comparisons are primarily self-reported. Without consistent inference setups or more hardware details, we can’t be sure these speed gains generalize. 3. Compared with the conventional RQ-Transformer, which is an autoregressive architecture, it is not clear whether the primary improvement comes from the discrete diffusion process or the masking and prediction strategy. A comparison between using autoregressive generation and discrete diffusion generation could help clarify this, similar to MAR [autoregressive’24] by Li et al. of NeurIPS 2024. Supplementary Material: Supplementary materials do not have many issues. Relation To Broader Scientific Literature: No more specific contributions to a broader scientific literature. Essential References Not Discussed: Most of the references have been discussed. Other Strengths And Weaknesses: Strengths: 1.
Take advantage of RVQ and address the computational cost problem, bringing residual codebook design back for fast and high-fidelity image generation. Additional Weaknesses (detailed): 1. Complex Implementation: The usage of a mixture of Gaussians for each token plus multi-depth unmasking is quite complicated. The authors do not deeply address computational overhead or memory usage for these mixture parameters, especially as code dimension grows. 2. Unclear Large-Scale Scaling: Since the authors use a depth of 16, less than an RVQ depth of 32, it remains unclear whether scaling to greater depths continues to yield performance benefits and stable training. Other Comments Or Suggestions: Codebook usage details: Detailing the usage of the codebook, especially at each depth, could help in understanding how the masking and prediction strategy and RVQ process work. Questions For Authors: 1. Generation Scaling Law: How does ResGen perform when scaling up from 574M for image generation? Could deeper depth gain more from a larger model with more parameters? 2. Mixture-of-Gaussians Overhead: Since each token position has multiple mixture components, how big is the compute overhead compared to simpler single-Gaussian or discrete classification heads, especially for wide embeddings? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed and constructive feedback. We appreciate the recognition of ResGen's motivation and its potential to address computational costs in RVQ-based generation. We address the specific concerns and questions below: **Concerning the fair comparison with MAR and further comparison with faster MAR (Claim Question)**, there's a key difference in how speed is measured. MAR reports throughput (sec/image averaged over a large batch, e.g., 256), while we report wall-clock time for single image generation on one A100 GPU. This accounts for the apparent discrepancy (0.3s vs. >100s). To directly address whether ResGen outperforms faster MAR variants, we conducted additional experiments: 1. **Reducing MAR's AR Steps (Diffusion=100):** Even MAR-B at 16 AR steps (fastest tested, ~5.6s/image) yielded worse FID than ResGen-rvq16 (FID 4.11 vs. 1.93). |MAR FID (Diffusion=100)|AR=64|AR=32|AR=16| |:-|:-|:-|:-| |MAR-B|2.33|2.44|4.11| |MAR-L|1.81|2.10|4.32| 2. **Reducing MAR's Diffusion Steps (AR=256):** MAR-B at 25 diffusion steps (~22.5s/image) performed worse than ResGen-rvq16 (FID 3.38 vs. 1.93). |MAR FID (AR=256)|Diff=100|Diff=50|Diff=25| |:-|:-|:-|:-| |MAR-B|2.31|2.39|3.38| |MAR-L|1.78|1.83|2.22| These results demonstrate ResGen's consistent advantage in both speed and quality, even against accelerated MAR configurations. We will add this comprehensive comparison to the revision. **To clarify the experimental environment (Potential Improvement 1, W2.2):** All ResGen evaluations used a single NVIDIA A100 GPU. Training details are in the appendix. For Table 1, we reported the best performance for each model based on a hyperparameter search. Detailed sampling configurations and hyperparameters will be explicitly included in the final manuscript. For Table 2, max batch size was the largest fitting on one A100 using checkpoints' best CFG settings. 
Inference speed is measured as the wall-clock time required to generate a single sample. We will explicitly state these details in the final manuscript. **Regarding RQ-Transformer guidance in Table 1 (W2.1):** Thank you for noting potential confusion. For the ablation in Table 1 comparing generation on 8x8x16 tokens, we used CFG for ResGen, RQ-Transformer, and MaskGiT to ensure a fair comparison under identical conditions. As the reviewer noted, the RQ-Transformer paper uses rejection sampling. Rejection sampling is applicable to any generative model, but was not used in Table 1. We will clarify this in the revision. **Concerning scaling to deeper RVQ depths (Additional Weakness 2):** For images, we used depths 8 and 16 due to excellent reconstruction fidelity (rFID 1.29 and 0.67, Appendix A.4). However, in audio (Table 3), we extensively evaluated deeper depths (32, 72) and observed consistent performance improvements, demonstrating stable training and benefits from increased depth in that domain. This suggests potential benefits for vision too, though not explored here due to already high fidelity at depth 16. **Regarding clarifying the benefit of discrete diffusion over autoregressive generation:** Please refer to the newly added experiments and analyses presented in our response to **Reviewer aQEq** ("Regarding W1: Isolating the Efficiency Gains of Depth Modeling"), which directly address this point. **Regarding generation scaling law (Q1):** We investigated scaling ResGen-rvq16 from 574M to 1B parameters on images (400K iterations, batch 256 on 4 GPUs). We observed consistent FID improvements across sampling steps: |FID (Exp Sampling)|64 steps|48 steps|28 steps| |:-|:-|:-|:-| |**w/o CFG 574M**|32.44|33.12|33.92| |**w/o CFG 1B**|26.58|26.74|28.39| |**w/ CFG 574M**|9.53|9.72|9.98| |**w/ CFG 1B**|**8.07**|**8.10**|**8.82**| This indicates a positive scaling trend, which we will add to the manuscript.
**Regarding the complexity and overhead of the Mixture of Gaussians (MoG) head (Additional Weakness 1, Q2)**, we acknowledge the need for clarity. The MoG head's computational cost is manageable. As detailed in Appendix A.2, parameter prediction involves projecting the hidden output (size `O`) to `K` mixture probabilities, `K` mean vectors (size `H`), and affine parameters. This leads to a projection complexity dominated by `O(O*K*H)`. For our vision model (`O=1152, K=1024, H=64`), this cost is comparable to a standard softmax layer with a ~64K vocabulary (`V = K*H`), which is practical. For higher-dimensional embeddings like in audio (`H=512`), we use low-rank projection for the means (`μ = M * μ_tilde + s`, `h << H`), reducing the dominant computational term to `O(O*K*h + H*h)`, again making the effective cost similar to a ~64K vocabulary model (`h=64`). This technique, previously used in CLaM-TTS, significantly mitigates overhead. We recognize these details were mainly in the Appendix and will clarify the MoG parameterization, its computational cost, and the low-rank optimization in the main paper.
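As a back-of-the-envelope check of the projection costs described above, the following sketch tallies approximate MoG-head parameter counts in the rebuttal's notation (`O` = hidden size, `K` = mixture components, `H` = embedding dimension, `h` = low-rank dimension for the means, `μ = M μ̃ + s`). This is our own arithmetic, not the authors' code, and reusing `O=1152` for the wide-embedding case is purely an illustrative assumption.

```python
# Hedged sketch: approximate projection parameter counts of a K-component
# mixture-of-Gaussians head, full-rank vs. low-rank mean parameterization.

def mog_head_params(O, K, H, h=None):
    """Approximate number of projection parameters in the MoG head."""
    mix_logits = O * K                 # hidden state -> K mixture weights
    if h is None:
        means = O * K * H              # full rank: one H-dim mean per component
    else:
        means = O * K * h + H * h      # low-rank means plus shared H x h map M
    return mix_logits + means

# Vision config from the rebuttal: O=1152, K=1024, H=64 (full rank) is
# comparable to a softmax head over a ~64K vocabulary (V = K * H).
vision = mog_head_params(1152, 1024, 64)
softmax_64k = 1152 * 1024 * 64
assert abs(vision - softmax_64k) / softmax_64k < 0.02

# Wide embeddings (H=512, O assumed 1152 here): low rank h=64 removes the
# dominant O*K*H term, shrinking the head by roughly an order of magnitude.
audio_full = mog_head_params(1152, 1024, 512)
audio_lowrank = mog_head_params(1152, 1024, 512, h=64)
assert audio_lowrank < audio_full / 7
```

The low-rank variant thus keeps the effective cost near that of a ~64K-vocabulary classifier even when `H` grows, matching the `O(O*K*h + H*h)` complexity stated above.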
Summary: This paper introduces ResGen, an efficient generative modeling method that uses Residual Vector Quantization (RVQ) for high-fidelity data generation. Its key innovation lies in predicting collective token embeddings rather than individual tokens, which decouples inference complexity from quantization depth. Claims And Evidence: The claims made in the paper—namely, improved efficiency and generation fidelity of RVQ-based generative modeling—are generally well-supported by clear evidence presented through extensive experiments. Methods And Evaluation Criteria: The methods and evaluation criteria are well-suited to the tasks at hand. Theoretical Claims: The paper formulates a probabilistic framework. While the paper presents equations clearly, no explicit mathematical proofs requiring detailed checking are presented. Experimental Designs Or Analyses: The experimental design is sound and well-executed. The authors clearly outline training configurations, baseline comparisons, and evaluation metrics. Ablation studies further strengthen the robustness of their conclusions. Supplementary Material: The supplementary material was thoroughly reviewed. Relation To Broader Scientific Literature: The paper fits well within the broader context of generative modeling literature. It makes a significant advance in RVQ-based generative approaches by solving efficiency bottlenecks that arise from depth-dependent inference complexity. Essential References Not Discussed: No significant omissions were noted. Other Strengths And Weaknesses: **Strengths:** - The method introduces a valuable conceptual innovation—predicting collective embeddings to decouple inference complexity from RVQ depth. - Comprehensive and clear supplementary material strengthens transparency and reproducibility. **Weaknesses:** - While extensive, evaluations are limited to specific datasets (ImageNet, standard TTS benchmarks). 
Performance on more diverse datasets or more challenging, higher-resolution settings could be further explored to validate scalability. - The paper would benefit from deeper analysis of the results in Table 2, particularly regarding the substantial performance gap between MAR-B and ResGen-rvq16, given that MAR-B has fewer parameters. Other Comments Or Suggestions: None Questions For Authors: How sensitive is ResGen's performance to different masking strategies, and could alternative masking approaches significantly affect the results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough assessment, positive feedback on our claims, methodology, and supplementary material, and for recognizing the conceptual innovation of ResGen. We appreciate the constructive suggestions for further improvement. Regarding the points raised: **Limited Dataset Diversity/Scalability (W1):** We appreciate the suggestion to evaluate ResGen on more diverse datasets and higher-resolution settings. Our current study focused on established benchmarks like ImageNet and standard TTS datasets to demonstrate the adaptability of this method across different domains. We agree that broader validation is important and plan to explore ResGen's scalability and generalization on more diverse tasks, including higher-resolution image generation, as a key direction for future work. To provide additional context on model scalability, however, we conducted scaling experiments with ResGen-rvq16, increasing model parameters from 574M to 1B on ImageNet. Although these scaling experiments are ongoing (currently at 400K iterations, batch size 256 on 4 GPUs), intermediate results reveal FID improvements, as shown below: | FID (Exp Sampling) | 64 steps | 48 steps | 28 steps | | :----------------- | :------- | :------- | :------- | | **w/o CFG 574M** | 32.44 | 33.12 | 33.92 | | **w/o CFG 1B** | 26.58 | 26.74 | 28.39 | | **w/ CFG 574M** | 9.53 | 9.72 | 9.98 | | **w/ CFG 1B** | **8.07** | **8.10** | **8.82** | We will include the fully converged results in the manuscript. **Analysis of MAR-B vs. ResGen-rvq16 Performance (W2):** Thank you for prompting a deeper analysis of the results in Table 2. While MAR-B has fewer parameters and achieves slightly better FID *without* classifier-free guidance (CFG), ResGen-rvq16 excels *with* CFG. This difference stems from their distinct generative mechanisms.
MAR-B utilizes multi-token prediction followed by continuous diffusion steps, allowing iterative refinement and revision of previously generated tokens. In contrast, ResGen performs disjoint token unmasking at each step without revisiting prior decisions. This fundamental difference in modeling leads to varying performance characteristics, particularly highlighted by the impact of CFG. Furthermore, although MAR-B can perform well without CFG, achieving its best results often requires significantly more sampling steps (e.g., 100+ diffusion steps) compared to ResGen, impacting inference speed. Thus, there is a trade-off between MAR-B's potential parameter efficiency (in certain configurations) and ResGen's inference efficiency, especially when aiming for high-quality results with guidance. We will incorporate this more detailed comparative analysis into the revised manuscript.

**Sensitivity to Masking Strategies (Q1):** This is an excellent question regarding the robustness of our approach. We investigated ResGen's sensitivity by training ResGen-rvq16 models (400K iterations) with three distinct masking schedules during training: circle, exponential, and cosine. We then evaluated each model under two sampling conditions: using either the same masking schedule at sampling time as during training, or a different one.
Our results, summarized below (FID scores, lower is better), show interesting interactions:

| CFG | Training | Sampling | 64 steps | 48 steps | 28 steps |
| :------ | :------- | :------- | :------- | :------- | :------- |
| **w/o** | cosine | cosine | 28.13 | 28.01 | 29.18 |
| **w/o** | cosine | exp | 32.30 | 32.70 | 32.81 |
| **w/o** | circle | circle | 26.04 | 26.41 | 26.73 |
| **w/o** | circle | exp | 32.44 | 33.12 | 33.92 |
| **w/o** | exp | exp | 41.12 | 41.67 | 41.87 |
| **w/** | cosine | cosine | 15.46 | 15.69 | 17.27 |
| **w/** | cosine | exp | 9.66 | 9.78 | 10.08 |
| **w/** | circle | circle | 10.09 | 10.35 | 10.44 |
| **w/** | circle | **exp** | **9.53** | **9.72** | **9.98** |
| **w/** | exp | exp | 12.63 | 12.65 | 12.83 |

Notably, the best performance (FID 9.53 at 64 steps with CFG) was achieved when training with the *circle* masking strategy but sampling with the *exponential* strategy. This suggests that the training and sampling schedules interact to determine final quality. The exponential schedule unmasks fewer tokens in the crucial early stages and more tokens later. This coarse-to-fine unmasking during inference appears beneficial for ResGen, likely allowing the model to establish a more stable initial prediction before revealing finer details. The strong performance under exponential sampling, even when trained with a different schedule, indicates its effectiveness for inference with ResGen. We will include a detailed discussion of these findings in the revised manuscript.
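As a toy illustration of the coarse-to-fine behavior discussed above, the sketch below compares how a cosine-style and an exponential-style schedule allocate newly unmasked tokens per sampling step. The formulas here are illustrative assumptions (MaskGIT's cosine mask ratio and a generic exponential ramp), not the paper's exact schedule definitions.

```python
import numpy as np

def unmask_counts(schedule, total_tokens, steps, a=4.0):
    """Number of tokens newly revealed at each sampling step.

    `schedule` names follow the rebuttal; the exact formulas are
    illustrative assumptions, not the paper's definitions.
    """
    t = np.linspace(0.0, 1.0, steps + 1)
    if schedule == "cosine":
        # MaskGIT-style: mask ratio cos(pi*t/2), so the unmasked
        # fraction rises quickly in the early steps
        unmasked = 1.0 - np.cos(np.pi * t / 2.0)
    elif schedule == "exp":
        # few tokens early, many late (coarse-to-fine), as described above
        unmasked = (np.exp(a * t) - 1.0) / (np.exp(a) - 1.0)
    else:
        raise ValueError(schedule)
    counts = np.diff(np.round(unmasked * total_tokens)).astype(int)
    counts[-1] += total_tokens - counts.sum()  # reveal every token by the end
    return counts

cos_counts = unmask_counts("cosine", total_tokens=64, steps=8)
exp_counts = unmask_counts("exp", total_tokens=64, steps=8)
assert cos_counts.sum() == exp_counts.sum() == 64
# the exponential schedule reveals fewer tokens in the first half of sampling
assert exp_counts[:4].sum() < cos_counts[:4].sum()
```

Both schedules reveal all tokens in the same number of steps; they differ only in how the budget is distributed, which is the property the rebuttal attributes the FID differences to.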
Summary: This paper introduces ResGen, an efficient generative model leveraging residual vector quantization (RVQ). While RVQ typically enhances image fidelity by increasing quantization depth, it also demands more inference steps during sampling. Instead of sequentially predicting tokens at each depth, ResGen proposes a novel approach: predicting the sum of masked tokens at each layer, known as cumulative tokens. These cumulative tokens are then re-quantized using RVQ. By effectively integrating MaskGit with RVQ, ResGen achieves impressive performance improvements. Claims And Evidence: Yes Methods And Evaluation Criteria: Makes sense Theoretical Claims: I checked Section 3.2. In lines 263-265, shouldn't p(x^(0)|z, x^(t)) correspond to RVQ dequantization instead of quantization? It seems there might be a mix-up here. Could you clarify? Experimental Designs Or Analyses: Yes, no issues. Supplementary Material: All appendix Relation To Broader Scientific Literature: All appendix Essential References Not Discussed: HART: Efficient Visual Generation with Hybrid Autoregressive Transformer. [ICLR'25] Other Strengths And Weaknesses: Strengths: 1. ResGen effectively combines MaskGit and RVQ for efficient generative modeling. 2. ResGen achieves significant performance improvements over MaskGit and efficiency improvements over RQ-Transformer. 3. The authors provide a theoretical justification for RVQ. Weaknesses: 1. Can the number of inference steps be further decreased for faster inference? 2. The discussion on the differences between ResGen and VAR could be elaborated. Since VAR also considers the hierarchical depth in RVQ and formulates it as a scale, a more detailed comparison would better highlight ResGen's advantages and solidify its contributions. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive evaluation and constructive feedback. We address the specific points raised below:

**Essential References Not Discussed (HART):** Thank you for bringing HART to our attention. We agree it is a relevant recent work and will incorporate a discussion into our Related Work section. HART proposes a hybrid approach, distinct from purely discrete tokenizers like ours (ResGen, VAR) or continuous ones (MAR, GIVT). It decomposes latents into discrete tokens (modeled autoregressively) and continuous residuals (modeled via diffusion). This hybrid strategy aims to reduce sampling steps compared to continuous methods like MAR.

**Theoretical Claims (Sec 3.2, Lines 263-265):** Thank you for the careful reading and request for clarification. There might be slight confusion in terminology depending on perspective (encoding vs. decoding of VQ). Let us clarify our notation:

* `z` represents the continuous cumulative embeddings corresponding to the target clean tokens `x^(0)`. Hence the term `q(z | x^(0), x^(t))` represents a form of dequantization or embedding lookup.
* `p(x^(0)|z, x^(t))` represents the inference of the true clean tokens `x^(0)` given the embeddings `z` and the current masked tokens `x^(t)`. This involves finding the discrete tokens `x^(0)` whose corresponding embeddings are `z`. This step effectively performs the **quantization** of the continuous embedding `z` back into the discrete RVQ code space.

So, `p(x^(0)|z, x^(t))` indeed relates to determining the discrete tokens `x^(0)` *from* the continuous embedding `z`, which involves the RVQ quantization mechanism applied to `z`.

**Weakness 1: Potential for Further Inference Speed-up:** Thank you for this suggestion. In the vision domain, reducing sampling steps to 18 (Appendix B.2) improves inference speed but slightly decreases generation quality (FID 3.94).
However, experiments in the audio domain show that fewer steps (8 and 16 steps) still achieve comparable performance (see tables below).

| Continuation | WER | CER | SIM-o | SIM-r |
| - | - | - | - | - |
| melvae-resgen-25step | 1.94 | 0.53 | 0.5421 | 0.5701 |
| melvae-resgen-16step | 1.92 | 0.53 | 0.5419 | 0.5705 |
| melvae-resgen-8step | 1.92 | 0.54 | 0.5429 | 0.5710 |
| rvqvae-resgen-25step | 1.86 | 0.50 | 0.5853 | 0.5886 |
| rvqvae-resgen-16step | 1.87 | 0.52 | 0.5820 | 0.5864 |
| rvqvae-resgen-8step | 1.86 | 0.51 | 0.5847 | 0.5886 |

| Cross | WER | CER | SIM-o | SIM-r |
| - | - | - | - | - |
| melvae-resgen-25step | 1.75 | 0.48 | 0.5597 | 0.6061 |
| melvae-resgen-16step | 1.92 | 0.53 | 0.5419 | 0.5705 |
| melvae-resgen-8step | 1.93 | 0.54 | 0.5433 | 0.5708 |
| rvqvae-resgen-25step | 1.70 | 0.46 | 0.6037 | 0.6307 |
| rvqvae-resgen-16step | 1.75 | 0.46 | 0.6037 | 0.6302 |
| rvqvae-resgen-8step | 1.95 | 0.50 | 0.5898 | 0.6108 |

**Weakness 2: Elaborating Differences with VAR:** Thank you for this suggestion; a clearer comparison with VAR will indeed strengthen the paper. While both ResGen and VAR utilize the hierarchical nature of RVQ, they differ:

* **Structure & Resolution:** VAR assigns different spatial resolutions to different RVQ depths (e.g., 1x1, 2x2, ... up to the original resolution), requiring a predefined hierarchy. RVQ uses all quantization depths to refine the representation at a *single* resolution, which is identical to the length of the output sequence of the VAE encoder.
* **Generative Process:** VAR generates tokens autoregressively across depths, although sampling within a depth can be parallel given the previous depth. ResGen predicts *cumulative embeddings* representing multiple depths simultaneously within a masked generation framework, allowing parallel prediction across the sequence-length dimension.
* **Task Adaptability:** VAR's predefined resolution hierarchy might be less straightforward to adapt to tasks with arbitrary output lengths, such as text-to-speech, where ResGen is easily applied.
* **Resolution Flexibility:** RVQ allows us to achieve lower final spatial resolutions (e.g., 8x8) by increasing quantization depth (e.g., 16), offering flexibility in balancing sequence length and depth. Achieving similarly low spatial resolution might be less direct in VAR's depth-resolution coupled structure. We will incorporate a more detailed discussion contrasting these aspects in the related work.
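To make the quantize/dequantize relationship discussed in this rebuttal concrete, here is a minimal greedy RVQ sketch with random (untrained) codebooks whose scale shrinks with depth. All sizes, scales, and codebooks are toy assumptions for illustration, not the paper's tokenizer.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, codebook_size, dim = 16, 256, 8
# toy codebooks; scale shrinks with depth so deeper codewords model smaller residuals
scales = 0.5 ** np.arange(depth)
codebooks = rng.normal(size=(depth, codebook_size, dim)) * scales[:, None, None]

def rvq_quantize(z):
    """Greedy RVQ: at each depth, pick the codeword nearest the current residual.

    Returns the discrete tokens (one per depth) and the cumulative
    (dequantized) embedding, i.e., the running sum of selected codewords.
    """
    tokens, cumulative = [], np.zeros_like(z)
    for d in range(depth):
        residual = z - cumulative
        idx = int(np.argmin(np.linalg.norm(codebooks[d] - residual, axis=1)))
        tokens.append(idx)
        cumulative = cumulative + codebooks[d, idx]
    return tokens, cumulative

z = rng.normal(size=dim)        # a continuous "cumulative embedding" target
tokens, z_hat = rvq_quantize(z)

# dequantization is just summing the selected codewords back up
recon = sum(codebooks[d, tokens[d]] for d in range(depth))
assert np.allclose(recon, z_hat)
assert len(tokens) == depth
```

The two directions mirror the rebuttal's notation: the token-to-embedding lookup (`recon`) plays the role of `q(z | x^(0), x^(t))`, while mapping a continuous embedding back to discrete tokens (`rvq_quantize`) corresponds to the quantization inside `p(x^(0)|z, x^(t))`.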
Summary: This paper introduces ResGen, a method that directly predicts vector embeddings for groups of tokens rather than individual tokens. This design reduces the number of inference steps, thereby improving latency. Token masking is employed during training, while multi-token prediction is utilized during inference. Experimental results on image generation and audio synthesis demonstrate the effectiveness of the proposed approach. ## update after rebuttal Thanks for the authors' rebuttal. Most of my concerns have been addressed, and I have decided to increase my rating. Claims And Evidence: See Strengths and Weaknesses. Methods And Evaluation Criteria: See Strengths and Weaknesses. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: N/A. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths: - By eliminating the auto-regressive prediction of depth tokens, the proposed method improves inference latency. Weaknesses and Questions: - **Method**: My main concern is that the proposed method predicts continuous embeddings rather than discrete tokens, subsequently quantizing these embeddings into discrete tokens. Given this approach, it's unclear why the method doesn't simply predict continuous embeddings directly, similar to diffusion methods. The quantization step inevitably introduces information loss, raising questions about the purpose and effectiveness of using discrete tokens. - **Experiment**: In Section 5.2.1, the paper presents a reimplementation of MaskGiT using RVQ quantization with a depth of 16. However, the implementation details are unclear. If the reimplementation yields significantly worse results compared to the original MaskGiT—as Table 1 suggests—then its utility as a baseline becomes questionable. - **Expression**: Throughout the paper, the definition of 'discrete diffusion model' is unclear.
Specifically, methods such as MaskGIT, VAR, and RQ-VAE, which are listed in the related works, do not align with what I would consider discrete diffusion models. Other Comments Or Suggestions: Please kindly review the weaknesses I have outlined and provide clarifications for each point individually. Questions For Authors: See Weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments, which help improve our work's clarity. We address each point below:

**Q1: Rationale for Predicting Quantized Discrete Tokens vs. Continuous Embeddings**

Our approach of predicting discrete RVQ tokens via cumulative embeddings offers significant inference efficiency advantages over directly predicting high-dimensional continuous embeddings. While direct continuous prediction (e.g., DiT) is valid, it often requires many inference steps. Our method targets discrete RVQ tokens (8x8x16) for efficiency, even though it uses continuous cumulative embeddings internally. Generating masked tokens generally demands fewer inference steps than refining high-dimensional continuous vectors, as shown by MaskGiT achieving strong results faster than many continuous methods. Quantization, despite information loss, simplifies the target space, reducing the burden of precisely predicting large continuous vectors (e.g., 8x8x64 in our case). Predicting cumulative embeddings enables a few-step iterative prediction of all 16 token depths. This contrasts with single-step embedding prediction (e.g., GIVT) or continuous diffusion models needing many refinement steps. Quantization is crucial for this multi-depth strategy's feasibility. In essence, we leverage internal continuous representations for modeling flexibility but target discrete tokens for efficient inference, particularly suited for our multi-depth prediction approach.

**Q2: MaskGiT Reimplementation Baseline in Ablation Study (Section 5.2.1)**

The MaskGiT variant in Table 1 uses 8x8x16 RVQ tokens specifically for a controlled comparison within our ablation study on multi-depth token generation, not to replicate the original MaskGiT's performance on its native 16x16x1 format.
The experiments aimed to compare strategies for generating multi-depth RVQ tokens (8x8x16): (i) fully autoregressive (RQ-Transformer), (ii) masked sequence + autoregressive depth (our MaskGiT variant), and (iii) fully masked (ResGen). Fair comparison required all models to operate on the same 8x8x16 RVQ format. The relatively lower performance of the MaskGiT variant (predicting depth-by-depth) compared to ResGen highlights the challenge of extending single-depth masked models to multi-depth RVQ tokens and supports our approach of predicting all depths collectively. To further strengthen this analysis and provide a more precise ablation, we conducted additional experiments:

1. As detailed in our response to Reviewer aQEq (W1), we implemented an "AR-ResGen" model which combines an autoregressive sequence model (like RQ-Transformer's) with ResGen's efficient iterative depth prediction using cumulative embeddings. This isolates the benefit of our depth modeling strategy.
2. We also implemented a variant of our method which uses the same masked generation framework introduced in our method section but predicts discrete tokens directly and in parallel across all depths at each step, *instead* of predicting continuous cumulative embeddings. This represents a more direct application of masked diffusion to RVQ depths. This variant achieved FID scores of **12.79 (w/o CFG)** and **2.91 (w/ CFG)**.

These additional experiments demonstrate that while variations of our method can perform well, our proposed final method using cumulative vector embedding prediction achieves the best overall performance within our ablation study, validating its effectiveness. We will revise Section 5.2.1 to clarify the baseline's purpose (controlled comparison on 8x8x16 RVQ tokens), implementation (sequential depth prediction), its role within the ablation, and add the results of our new ablation experiments as requested.
**Q3: Clarity of the Term 'Discrete Diffusion Model'**

We acknowledge the ambiguity regarding 'discrete diffusion model' and thank you for highlighting it. Our discussion of 'discrete diffusion models' in the related works primarily referred to models like VQ-Diffusion, GIVT, and conceptually MaskGIT, which learn to reverse a corruption process (like masking) on discrete data (tokens). This was mainly confined to the first paragraph of the related works. Other mentioned models (e.g., VAR, RQ-Transformer) were included for broader context in token-based modeling but were not necessarily classified by us as discrete diffusion models. We also aimed to highlight 'masked diffusion' (corruption via masking only) as an effective subset of discrete diffusion models, following D3PM and VQ-Diffusion. We will revise the paper to provide a precise definition of 'discrete diffusion model' as used, clearly delineate this category from related models, explicitly refer to masked diffusion, and ensure consistent terminology throughout.

We hope these clarifications and the planned revisions adequately address the reviewer's concerns. We appreciate the constructive feedback.
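As a concrete reference point for the "corruption via masking only" definition above, here is a minimal sketch of an absorbing-state (mask-only) forward process in the style of D3PM. The `MASK` sentinel value and sequence shape are illustrative assumptions.

```python
import numpy as np

MASK = -1  # illustrative absorbing "mask" token id

def corrupt(tokens, t, rng):
    """Mask-only forward corruption: each token is independently replaced
    by MASK with probability t (the corruption level). A masked model
    learns to reverse exactly this process."""
    tokens = np.asarray(tokens).copy()
    drop = rng.random(tokens.shape) < t
    tokens[drop] = MASK
    return tokens

rng = np.random.default_rng(0)
x0 = np.arange(10)              # a toy clean token sequence
xt = corrupt(x0, t=0.5, rng=rng)

# every position is either its original token or the absorbing MASK state
assert np.all((xt == x0) | (xt == MASK))
```

General discrete diffusion allows arbitrary transition matrices between token values; the mask-only case restricts transitions to "keep" or "absorb into MASK", which is what distinguishes masked diffusion as a subset.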
Exact Recovery of Sparse Binary Vectors from Generalized Linear Measurements
Accept (poster)
Summary: The paper considers a new problem setting of recovering sparse binary vector from generalized linear measurements. The authors propose a simple algorithm based on Plan et al. (2017) and prove its performance guarantee, complemented by a nearly tight lower bound. This then implies tight resolution to noisy 1-bit compressed sensing and logistic regression. Interestingly, for SparseLinearReg, when $m = \Omega((k + \sigma^2) \log n)$, keeping the sign information only suffices. Also, for SparseLinearReg, the authors prove tighter upper and lower bounds based on MLE. Claims And Evidence: See Theoretical Claims. Methods And Evaluation Criteria: Not applicable. Theoretical Claims: I checked all proofs and concur that they are generally correct. There are some minor issues and typos regarding absolute constants and tightness of the results, but the orderwise guarantees remain intact. Typos are deferred to the end. Here are some minor issues that I saw: 1. In line 338 (second column), it should be $y_i(A_{i,j} - A_{i,j'}) - E$, not $y_i(A_{i,j} - A_{i,j'}) - 1$. 2. In the proof of Theorem 2.5, when Fano's inequality is applied, why is there a "+1" in the $\log\left( \binom{n}{k} + 1 \right)$? To my understanding, the total number of possible k-sparse binary vectors is precisely $\binom{n}{k}$? Then one could bound $I$ as $I \geq (1 - \delta) k \log \frac{n}{k} - h_2(\delta)$, yielding a slightly tighter inequality. 3. In the last part of the proof of Theorem 2.5, when the authors invoke the result of Topsøe (2001), there is a $\log 2$ missing. Also, the range of $t$ should be $[-1/2, 1/2]$, not $[-1, 1]$. 4. In many parts of the other proofs (e.g., Proof of Theorem 2.10), the authors use the inequality $\binom{n}{k} \leq n^k$. It may be minor, but still I would prefer if the authors write the proofs and statements via $\binom{n}{k} \leq \left( \frac{en}{k} \right)^k$. Experimental Designs Or Analyses: Not applicable. Supplementary Material: Yes. 
I have reviewed the entire supplementary material. I have skimmed through the detailed calculations, such as integrals and algebraic manipulations. Relation To Broader Scientific Literature: - To the best of my knowledge, tackles a new problem setting - Nearly tight sample complexity bounds, which cannot be obtained by prior algorithms nor analyses - Interesting proof techniques that may be of interest for statistics and information theory community Essential References Not Discussed: None to the best of my knowledge. Other Strengths And Weaknesses: **Strengths:** 1. Clearly and well written 2. Simple yet effective resolution to the well-studied statistics problem with tight theoretical guarantees extending to GLM observations. 3. Although the proof flow itself is standard in information theory and statistics, there are several novel details that I appreciated: correlation vs. uncorrelation based on the support of $\bf{x}$ and appropriate use of independency arguments (virtually all theorems, to my understanding), coupling(?)-based tighter lower bound for SparseLinearReg (Theorem 2.10). **Weakness:** 1. Given the algorithm's simplicity, it would have been better to include some toy experiments that showcase the tightness of the theoretical results presented. 2. Writing can be overall improved; see below. Other Comments Or Suggestions: Typos: 1. The notation $\bf{A}_i$ is used interchangeably between column vector and row vector throughout the paper. For instance, in Eqn. (2), $\bf{A}_i$ is the row-vector, but then shouldn't it be $\bf{A}_i \bf{x}$, not $\bf{A}_i^\top \bf{x}$? But then in Section 2.1, the same notation is overloaded with column vector. Also, in some parts, this is denoted as $A_i$ (e.g., line 360 left column). 2. Line 52 (first column): $[1 : m]$ is not defined 3. Line 91 (second column): Guassian => Gaussian 4. Line 128 (first column): Theorem 2.8 => Corollary 2.8 5. Line 639: what is the definition of $Q(\cdot)$? 6. 
Line 754: what is the definition of $h$? Actually, it seems that this is defined in line 978 as the differential entropy... It should be defined sooner. 7. Line 801: $\bf{x}' \Rightarrow \tilde{\bf{x}}$ 8. Line 1085: Jenson => Jensen 9. In Theorem 2.10, $\max_l \Rightarrow \max_{l \in [1:k]}$ Suggestions: 1. I would suggest putting in a table comparing prior results and the results shown in this paper. Right now, all the comparisons to prior works are crammed into Section 1.1, which was a bit hard for me to parse at first, and I had to go back and forth after going through the theorems. Also, the table would help the readers organize the theorem results more clearly, as they are subtly different depending on the setting: specifically, whether $A_{i,j} \sim \mathcal{N}(0, 1)$ or not. 2. Some notations are left undefined, e.g., $\lVert \cdot \rVert_{\psi_2}$. The definitions should be included, at least in a footnote, for completeness. 3. Line 913: maybe put in a reference for the inequality (Theorem 11.1.3 of Cover & Thomas (2006)) 4. Throughout the paper, I see things like [Example 2.5.8](Vershynin, 2018). If the authors meant to do \citet[Example 2.5.8]{vershynin2018highdim}, which should yield Vershynin (2018, Example 2.5.8), the authors should fix the bibtex errors accordingly. 5. Appendix B is not sufficiently self-contained: notations such as $s_1$, $K$, and $w_t$ aren't defined, and it seems the reader must look them up in Plan et al. (2017). Questions For Authors: 1. The proof of Theorem 2.1 doesn't seem to utilize the fact that the entries are Gaussian explicitly; rather, the only facts used were 1. $\lVert A_{i,j} \rVert_{\psi_2} \leq C'$ and 2. it satisfies the power constraint (Eqn. (2)). Then are there examples of subGaussian distributions (not Gaussian) that satisfy the power constraint? Then maybe Theorem 2.1 could be written in a more general sense.
One immediate example may be a binary sensing matrix, with $A_{i,j} \sim \mathrm{Ber}(p)$ for some $p \in (0, 1)$? 2. In Theorem 2.1, what is the meaning of $L$? This reminds me of the Polyak-Łojasiewicz condition, with a statistical twist(?). What do the authors think? Also, are there (subGaussian) GLMs that do not satisfy this $L$ condition? Lastly, has this $L$ condition been considered before in the GLM literature? 3. The statements of Corollary 2.2~2.4 should be either "... if $m = \Omega(...)$" or "there exists an $m_0 = O(...)$ such that when $m \geq m_0$, Algorithm 1 recovers the unknown signal". 4. The authors mention that the lower bound applies to the average error prob, while the upper bound is for the max prob. Then is there a chance that there may be a better algorithm (or better analysis) for deriving an upper bound for the average error prob, somewhat closing the gap between upper and lower bound ($k \log n$ vs. $k \log(n/k)$)? Or is the conjecture of Gamarnik & Zadik (2017a) basically saying that the gap (for now) is inevitable regardless of whether one looks at max error prob or average error prob? 5. By the way, in the abstract, the authors mention that there is no statistical-computational gap for 1bCSbinary and logistic regression. Why is this the case? To my understanding, the upper bounds hold for $m \gtrsim (k + \sigma^2) \log(nk)$ and the lower bound requires $m \gtrsim (k + \sigma^2) \log (n/k)$? ($1/\beta^2$ for logistic regression). 6. In the proof of Theorem 2.9 (Appendix A.3), can the authors elaborate on the last equality (equivalent formulation of log likelihood testing)? 7. Is there a combinatorial (or something similar) intuition behind $l = k \left( 1 - \frac{k}{n} \right)$? 8. It seems to me that the MLE-based approach is valid for GLM observations beyond SparseLinearReg, as we have a parametric assumption on the distribution, which makes $p(y|x)$ well-defined...?
Then, am I correct in saying that the improved upper and lower bound arguments (Theorems 2.9, 2.10) cannot be extended trivially to GLM observations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for a careful review and all your interesting insights and questions. We completely agree with the suggestions, minor issues, and typos mentioned. We will incorporate them in the next revision of the paper. In particular, we will fix all the minor issues and typos, and incorporate the suggestions, including adding a table of results and defining separate notation for row vectors and column vectors of $A$. Below we give responses to the questions asked. (Question 1) Your observation is correct. As you mentioned, there are other subgaussian distributions that can be used for measurement matrices, for example, each entry of $A$ chosen i.i.d. uniform on the set {0,1} or {-1,1}. The general proof technique will work for these distributions too. However, the theorem statement in its current form is true only for a Gaussian matrix where each entry is chosen iid $\mathcal{N}(0,1)$. The proof uses the fact that each entry of the measurement matrix has zero mean and uses the distribution of the entries while applying Stein's lemma between (11) and (12). As you noticed, the proof outline can easily be used for any other distribution on the measurement matrix. We will mention it in the revised version of the paper. (Question 2) This is an interesting observation. While there might be some relation to an optimization paradigm, instead of this being a condition on the model, it can be thought of as a definition of $L$ which determines the rate of convergence. Note that we can also write $L:= \min_{i\in [1:m]} \frac{\mathbb{E}(g'(A_i^Tx))}{||y_i|| }$ or can omit it altogether and write $\min_{i\in [1:m]} \frac{\mathbb{E}(g'(A_i^Tx))}{||y_i||_{\psi_2}}$ instead of $L$ in (7) (Theorem 2.1). As defined, $L$ always exists, although it could vanish with $k$, as is the case in this paper. The numerator in $L$ can also be written as $\frac{\mathbb{E}(y_iA_i^Tx))}{k}$ (see the series of transformations between (11) and (12) on page 7).
This quantity has previously been used in Plan, Vershynin, Yudovina, 2017. See the definition of $\mu$ in proposition 1.1. This comes naturally while analyzing the convergence of the linear estimator, since the estimator measures the contribution of each coordinate to the measurement outcomes. (Question 3) You are right. Thank you for noticing this. (Question 4) It turns out that for iid Gaussian design the average-error and max-error criteria are equivalent. Given any decoder $\phi$ with good average error performance, we can design $\phi'$ with good max error performance as follows: On input $(A,y)$, sample a uniform $n$ dimensional permutation matrix $R$; compute $\hat{x} = \phi(AR, y)$, and output $R\hat{x}$. Then, for any x, $\mathbb{P}\left(\phi'(A, Ax+z)\neq x\right) =\mathbb{P}\left(R\phi(AR, Ax+z)\neq x\right) = \mathbb{P}\left(\phi(AR, ARR^{-1}x+z)\neq R^{-1}x\right) = \mathbb{P}\left(\phi(\tilde{A}, \tilde{A}R^{-1}x+z)\neq R^{-1}x\right) $ where $\tilde{A} = AR$. $AR$ is iid Gaussian design because if we take a uniform permutation of columns of a Gaussian matrix, it is still iid Gaussian. Since $\tilde{A}$ is i.i.d. Gaussian and $R^{-1}x$ is a uniform $k$-sparse vector, this implies the max error of $\phi'$ is the same as the average error of $\phi$. Thus, the conjectured hardness by Gamarnik and Zadik is true for both maximum probability of error and average probability of error. We will mention it in the revised version. (Question 5) In the prior literature, $\log(n/k)$ and $\log{n}$ are not distinguished for the purpose of the information-computation gap, which makes sense if $k = O(n^\alpha)$ for any $\alpha <1$. When $k = cn$ for some constant $c$, we can use an identity matrix to recover $x$ in $n$ measurements. Note that $k\log{n/k} = O(n)$ in this case. Therefore, altogether, $\log(n/k)$ and $\log{n}$ may not be differentiated for the purpose of this problem. (Question 6) We believe that you are referring to the last equality on page 15.
We should have added another step here. This comes by substituting the value of probability densities as we show below. For any $r$ and set $\mathcal{V}$, the density $p(y_r|A_{r,\mathcal{V}}) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(y_r-A_{r,\mathcal{V}})^2}{2\sigma^2}}$. Thus, $$\mathbb{P}\left(\sum_{r}\log{\frac{p(y_r|A_{r,\mathcal{U}})}{p(y_r|A_{r,\mathcal{S}})}}>0\right) = \mathbb{P}\left(\sum_{r}-\frac{(y_r-A_{r, \mathcal{U}})^2}{2\sigma^2} + \frac{(y_r-A_{r, \mathcal{S}})^2}{2\sigma^2} >0\right) $$ The last quantity here can be manipulated to get the last equality on page 15. (Question 7) You are right. In principle, the bounds of Theorem 2.9 and 2.10 can be extended to GLM observations. Though, it will be more difficult to analyse them as the probabilities will be more complex. We did not try to extend it for one bit compressed sensing and logistic regression because the given upper and lower bound already match up to constants. Recall that we can use an identity sensing matrix when $k$ is linear in $n$. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing all of my concerns and questions! I intend to keep my score and champion for its acceptance. Out of curiosity, could the authors respond to my Question #7 regarding the intuition behind $l = k \left( 1 - \frac{k}{n} \right)$? The authors' response to Question #7 seems to be for my Question #8. Thanks! ------- **After the authors' second rebuttal comment:** I'm satisfied with the authors' responses. Thank you, and congratulations on this nice work! I hope that the authors will incorporate all the comments and suggestions from me and other reviewers into the future revision, which will further strengthen the paper! I keep my score. --- Reply to Comment 1.1.1: Comment: We apologise for missing the answer to Question 7. Please find it below. This value of $l$ maximizes $N(l)$.
To see this, note that $N(l)$ can be thought of as the conditional entropy $H(W|V)$ where $W$ and $V$ are both Bernoulli random variables, with $P(V=0) = k/n$, $P(W=0|V=0) = 1-l/k$, and $P(W=0|V=1) = l/(n-k)$. The marginal on $W$ is given by $P(W=0)=k/n$. Thus, $H(W|V) \leq H(W) = h_2(k/n)$. This upper bound is achieved by substituting $l = k(1-k/n)$ in $N(l)$. Note that $2^{nN(l)}$ approximately gives the number of $k$-sparse vectors at Hamming distance $2l$ from a given vector. Therefore, this value of $l$ approximately gives the Hamming distance at which the number of such vectors is maximized. We again thank you for your careful and positive review of the paper.
Summary: The paper studies the problem of recovering sparse binary vectors from noisy generalized linear measurements. For simplicity I am stating the special case problems SparseLinearReg and 1bCSbinary here: $x \in \{0,1\}^n$ is the unknown $k$-sparse vector that needs to be estimated. $A \in \mathbb{R}^{m \times n}$ is a sensing matrix (possibly random) with the power constraints $E[(A_i^\top x)^2] \leq k$. The $m$ linear observations using that matrix are the entries of the vector $y = Ax + z$ for SparseLinearReg or $y = sign(Ax + z)$ for 1bCSbinary. The goal is to use an appropriate $A$ and algorithm that takes $y$ and produces $x$ which is correct whp. The more general version of the problem uses generalized linear measurements using a link function in the standard way, which captures the above two problems as well as Logistic Regression as special cases. The paper uses $A$ having standard Gaussian entries and a very simple algorithm from Plan et al., 2017 that works by outputting the $k$ heaviest elements of the vector $(\langle y, A^{(j)} \rangle)_{j \in [n]}$, where $A^{(j)}$ denotes the $j$-th column of $A$ (which intuitively will correspond to the support of $x$ because $y$ is correlated with $A^{(j)}$ iff $x_j=1$). That way, the paper gives an upper bound for the sample complexity (achieved by the aforementioned computationally efficient algorithm) of the general problem, which yields corollaries for each of the three problems, SparseLinearReg, 1bCSbinary and LogisticRegression. Notably, if $m = \Theta((k+\sigma^2)(\log(k) + \log(n-k)))$ only the sign information in $y$ is enough in linear regression (i.e., 1bCSbinary is not harder than linear regression in that regime). The upper bounds also contradict the conjecture from Gamarnik and Zadik 2017 that no computationally efficient algorithm can exist for SparseLinearReg with the provided sample complexity. The paper provides an information theoretic lower bound based on Fano's inequality for the general problem which gives corollaries for each of the 3 problems.
This gives a tight characterization of the sample complexity for 1bCSbinary and LogisticRegression. Finally, for exact recovery in SparseLinearReg, the authors complement the lower bound with an analysis of the MLE that gives an almost matching upper bound. Claims And Evidence: The main body of the paper contains proofs for the claimed results. Methods And Evaluation Criteria: Not applicable. Theoretical Claims: I went through the main body of the paper and the claims seemed reasonable. That being said, I have not done a thorough check of all the proofs. Experimental Designs Or Analyses: Not applicable. Supplementary Material: Not applicable. Relation To Broader Scientific Literature: The paper discusses related work adequately. Essential References Not Discussed: I do not have additional suggestions. Other Strengths And Weaknesses: **Strengths** - Unified approach: the main result holds for the general problem that uses generalized linear measurements, and gives as corollaries the results for the three other problems. - The algorithm is very simple to state and is computationally efficient. - It is interesting (and perhaps a bit surprising) that in the regime $m \approx (k + \sigma^2) \log n$ only the sign information suffices for sparse linear regression. - The paper shows tight results for exact linear regression. **Weaknesses** - The gap between the upper and lower bounds is hard to quantify, as the bound is expressed as some optimization problem. - It remains an open problem to find an efficient algorithm with better sample complexity than $(k + \sigma^2) \log n$ for linear regression. Overall, I feel that the paper makes important contributions and would like to recommend acceptance. Other Comments Or Suggestions: None. Questions For Authors: Related to the conjecture in Gamarnik and Zadik, 2017, do the authors think that a statistical-computational gap still exists but for some weaker threshold for the number of samples? Code Of Conduct: Affirmed. Overall Recommendation: 4
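As an illustration of the simple estimator summarized in the review above (correlate $y$ with each sensing column $A_i$ and keep the $k$ heaviest correlations), here is a minimal NumPy sketch; the function name and the Gaussian test setup are our own illustration, not code from the paper.

```python
import numpy as np

def recover_support(A, y, k):
    """Estimate the k-sparse binary vector x: correlate the observations y
    with each sensing column A_i and keep the k heaviest correlations
    as the estimated support."""
    scores = A.T @ y                        # scores[i] = <y, A_i>
    x_hat = np.zeros(A.shape[1], dtype=int)
    x_hat[np.argsort(scores)[-k:]] = 1      # top-k selection
    return x_hat
```

With Gaussian $A$ and $m$ on the order of $(k+\sigma^2)\log n$, the same call can be used whether the full observations $Ax+z$ or only their signs are available, mirroring the review's point that sign information alone suffices in that regime.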
Rebuttal 1: Rebuttal: Thank you very much for your careful and positive review of the paper. Regarding your question, for the 1bCSbinary problem, there is no computational-statistical gap, since the information-theoretic lower bound matches the sample complexity of the linear estimator (up to constants). However, the constant factors can be different, and for other GLMs it may be the case that there is a computational-statistical gap, depending on the quantities $L$ (in Thm 2.1) and the mutual information $I(y_i; x|A)$ in Thm 2.5. Hope this answers your question.
Summary: This paper addresses the problem of recovering a $k$-sparse binary vector from generalized linear measurements. Given observations $y = (y_1, \dots, y_m)$, which are related to a sparse vector $x$ through an inverse link function $g$ such that: $$ \mathbb{E}[y_i | A_i] = g(A_i^T x), \quad \text{for each } i \in [m], $$ the goal is to accurately reconstruct $x$. Since $x$ is binary, the problem reduces to support recovery. It presents a simple “linear estimation followed by top‑k selection” algorithm – essentially a one‐shot version of an iterative greedy scheme – and provides tight sample complexity guarantees for several settings, including noisy one‑bit compressed sensing (1bCSbinary), sparse linear regression (SparseLinearReg), and logistic regression. In addition to the algorithmic upper bounds, the paper offers nearly matching information theoretic lower bounds. Notably, logistic regression and 1-bit compressed sensing emerge as special cases of this framework. The authors build upon the simple linear estimation algorithm from Plan et al. (2017), focusing on its application to binary vectors. They establish both upper and lower bounds on the sample complexity required for successful recovery: ## Upper Bound: Sample Complexity of Algorithm 1 (Theorem 2.1) If the generalized linear model (GLM) ensures that each $y_i$ is a subgaussian random variable with norm $\|y_i\|_{\psi_2}$, and for some $L$, $$ \mathbb{E} \, g'(A_i^T x) \geq L \cdot \|y_i\|_{\psi_2} \quad \text{for all } i \in [m], $$ then Algorithm 1 successfully recovers $x$ with high probability, provided the number of measurements satisfies: $$ m \geq C \frac{\log(k) + \log(n-k)}{\min\{L, L^2\}}, $$ where $C$ is a constant. 
## Lower Bound for GLMs (Theorem 2.5) For any sensing matrix $A$, if $x$ is a uniformly chosen $k$-sparse vector, then any algorithm $\varphi$ that attempts to recover $x$ satisfies: $$ P(\varphi(A, y) \neq x) \leq \delta $$ only if the number of measurements meets the condition: $$ m \geq \frac{k \log(n/k)\,(1 - \delta) - h_2(\delta)}{I}, $$ for some mutual information term $I$ satisfying $I \geq I(y_i; x \mid A)$ for all $i \in [m]$. Specifically, when $y$ takes binary values ($y \in \{-1, 1\}$), the expected squared inverse link function satisfies: $$ \mathbb{E} [g(A_i^T x)^2] \geq I(y_i; x \mid A), $$ where the expectation is taken over the randomness in $A$ and $x$. ## Update After Rebuttal: I think the authors addressed my review sufficiently, and vote to accept the paper. Claims And Evidence: The claims made in the submission are supported by clear and convincing proofs. Methods And Evaluation Criteria: This is a theoretical paper, and this does not apply. Theoretical Claims: I checked the first 9 pages and they seem fine to me. Experimental Designs Or Analyses: This paper does not have experiments. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The work builds on the foundational ideas from compressed sensing and sparse linear regression as developed by Candès et al. (2006), Donoho (2006), and Tibshirani (1996). The literature on one-bit compressed sensing, initiated by Boufounos and Baraniuk (2008) and later refined by Jacques et al. (2013), has dealt with the challenges posed by extreme quantization (only sign information is available). This paper extends that line of work by incorporating Gaussian noise before quantization (the 1bCSbinary model), and it rigorously shows that the sample complexity remains optimal ($O((k+\sigma^2) \log n)$) even when only sign information is used. 
Essential References Not Discussed: I wonder if it might make sense to also cite work on lower bounds for sparse recovery (such as "Lower Bounds on Sparse Recovery" (Khanh Do Ba, Piotr Indyk, Eric Price, and David P. Woodruff)), since they also have these communication complexity style proofs. Other Strengths And Weaknesses: ### Strengths: 1. The paper has a unified treatment of multiple regression models. 2. The proposed algorithm -- linear estimator followed by top-k, is simple. 3. The insight into not having a stat-comp gap here is interesting. ### Weaknesses: 1. Many of the results make strong assumptions on the measurement matrices. 2. I feel the paper doesn't communicate the main novelty in analysis or algorithm clearly. Other Comments Or Suggestions: None Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your constructive and positive review. Thanks also for suggesting the additional reference. We will cite this paper as a relevant lower bound for sparse recovery. Since the input signal is binary in our case, the lower-bounding techniques are somewhat different, as we can use information-theoretic inequalities directly. Regarding the strong assumptions on the measurement matrices, we would like to mention that our lower bounds (Sec 2.2 and Thm 2.5) work for any family of measurement matrices. Regarding the upper bound: we can make Thm 2.1 work for a more general class of matrices, at the expense of some clarity. Indeed, the only properties of the Gaussian measurements that we use, conditioned on the other assumptions of the theorem, are centeredness and Stein's lemma. So the theorem holds for other matrices too, albeit resulting in slightly more complicated expressions (see some more details in the response to Reviewer mHu2).
QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search
Accept (poster)
Summary: Given the limitation that current agent tasks do not possess high-quality granular reward signals, this work proposes the QLASS (Q-guided Language Agent Stepwise Search) method to automatically explore states, learn step-wise values, and apply these value-based heuristics at inference time. QLASS is shown to be effective in improving downstream agent performance through efficient value-learning procedures. Claims And Evidence: 1. The main claim, that learning and applying Q-values to language agents improves performance, is supported by the main result in Table 2, where the proposed QLASS method achieves the highest result on all three benchmarks under varied settings. 2. The claim on inference-time search efficiency in Figure 3 may not be fully supported due to an incomplete computation-cost calculation. IIUC, the baseline Best-of-N method only requires inference-time scaling, therefore the costs (“Completion Tokens”) calculated in Figure 3 are sufficient; nonetheless, for the QLASS method, an additional training process for the policy and reward models is required, yet Figure 3 only computes the inference-time cost. A more detailed discussion of the overall computation cost could be helpful. Furthermore, the metric “Completion Tokens” is not very intuitive to understand, e.g., whether 150 tokens are sufficient for generating one or multiple responses (which is sufficient for common agent inference), or what the expected value/distribution of “Completion Tokens” is on the tested benchmarks. Knowing this information would help decide which are the critical regions on the Figure 3 x-axis, and whether QLASS is more empirically useful than the simple Best-of-N method. 3. For the ablation study in Section 5.5, QLASS still achieves higher results than other methods with the 13B model. However, compared to the 7B model results in Table 2, which should presumably be lower since smaller models are usually weaker, QLASS with the 13B model actually underperforms QLASS with the 7B model. 
More justification of this inferior result, or further, more comprehensive studies on model scaling, could be helpful. Methods And Evaluation Criteria: The method design is well-motivated and reasonable in general. However, one question that I have is the necessity of the “Behavioral Cloning” stage introduced in Section 4.1. Because it is a warm start for the language agent and not related to the core Q-value model, it seems that the QLASS method could potentially work without this BC process. While BC is a beneficial process given that the main task-solving agents experimented with in this work are open-source models, another experiment that (1) does not involve this BC stage, or (2) ablates more Ns in Section 5.3 (including N=0), could be helpful. Furthermore, using closed API models (e.g., gpt-4o, claude) as the task-solving agent (while keeping the QNet training and Q-guided training with open-source models) could offer more information on the effectiveness of the designed modules. Theoretical Claims: The paper introduces the Q-value learning process in symbolic expressions, which reads as reasonable. Experimental Designs Or Analyses: 1. As noted in the “Methods and Evaluation Criteria” section above, additional experiments using varying numbers N of examples for BC could be helpful, especially the case where N=0. 2. The choice of the main model, Llama-2-7B-chat, is somewhat unexpected. As more upgraded versions of similar-sized Llama models (3.1, 3.2) have been released, it is a bit unclear why the somewhat older Llama-2 model is selected. Similar confusion holds for choosing the “-Chat” version specifically. Supplementary Material: No, I did not find any supplementary material. Relation To Broader Scientific Literature: The key Q-learning idea of this paper is related to reinforcement learning and process reward modeling. The results are consistent with several reward-modeling works in that PRMs are effective (more so than ORMs) in improving multi-step agentic tasks. 
Essential References Not Discussed: The related work section has discussed the relevant literature rather comprehensively. However, there is one paper that I find relevant but not cited in this paper: Koh, Jing Yu, et al. "Tree search for language model agents." arXiv preprint arXiv:2407.01476 (2024). Other Strengths And Weaknesses: The paper is written with clarity; all results are presented clearly in tables and figures. Other Comments Or Suggestions: For Figure 4, the bar plot shows Q-value achieves 66.4, while Table 2 shows 70.3. What causes the difference between them? Questions For Authors: The method (specifically BC and Q-learning) requires supervision data corresponding to the test set; I wonder how the method could be generalized to agent tasks without compatible training examples or ground-truth environmental rewards? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer agwr, We greatly appreciate your insightful comments on our work. Here are our responses to your questions.

> 1 Incomplete computation cost calculation and explanation of “Completion tokens”

We would like to clarify that all the inference-time methods in Figure 3, i.e., Best-of-N and QLASS, share the same policy-training computation. The computation overhead of QLASS lies in exploring trajectories to build the reasoning tree and then training the reward model. Second, the training cost is incurred once, while the inference cost accumulates almost without bound from a long-term perspective. The terminology is also used in prior work[1]. For the computation leveraged for exploration, we have included the hyperparameters in Table 5 and discussed them in Appendix A.2, where we kindly direct the reviewer for further details. “Completion tokens” on the Figure 3 x-axis refers to all the inference cost incurred by Best-of-N and QLASS. A rough estimation is that each complete trajectory includes 40 * 1e3 tokens. Taking 400 * 1e3 on the x-axis as an example, the inference cost corresponds to 10 generated trajectories.

> 2 Justification of QLASS with the 13B model being weaker than the 7B

We summarize and compare the results of the 7B and 13B models in the table below.

| | SciWorld-seen | SciWorld-unseen |
|-|-|-|
|SFT-7B|67.4|53.0|
|SFT-13B|68.1 (+0.7) |57.6 (+4.6)|
|ETO-7B|73.8 | 65.0|
|ETO-13B|71.4 (-2.4)| 68.6 (+3.6) |
|QLASS-7B|75.3| 66.4 |
|QLASS-13B|72.7 (-1.6) | 69.3 (+2.9)|

On the seen set, both the ETO-13B and QLASS-13B models show slightly lower performance compared to their 7B counterparts (-2.4 and -1.6, respectively). This might suggest some degree of overfitting in the larger 13B models, where the model may have specialized too much on the training data. On the unseen set, all 13B models show significant improvements over their 7B counterparts (+4.6, +3.6, and +2.9, respectively). 
This indicates that the larger 13B models demonstrate better generalization capabilities, performing better on new, unseen data.

> 3 The necessity of the BC stage and the impact of BC with different numbers N of examples

We observe that both the effectiveness of exploration and the QNet depend heavily on the initial capabilities of the model. Recent research [2] suggests that a good initialization of the LLM achieved through SFT is crucial for reducing the search space and is beneficial for inference-time scaling. We empirically found that with N=0, the quality of exploration was very poor, making it difficult to train a good reward model. Some tasks show nearly zero performance w/o BC, as shown in Table 2. In practical applications, it is generally feasible to secure a small set of high-quality training trajectories annotated by experts or researchers. But scaling up the dataset is difficult due to cost, time, and the need to maintain consistency in data quality. We added experiments with the setup of using 200 examples for behavior cloning and summarize the results of leveraging different numbers N of examples for BC in the table below.

| |WebShop| WebShop-1000 | WebShop-200 |
|-|-|-|-|
| SFT | 63.1|21.7| 20.9 |
| ETO | 67.4 |66.7| 52.1 |
| BoN|67.9| 47.1| 45.1|
| QLASS|70.3| 67.3| 53.7|

We can observe from the table that QLASS consistently outperforms the other baselines under setups leveraging different BC examples, demonstrating its robustness across different BC datasets.

> 4 Choice of Llama-2-Chat

We chose the Chat version because it is more appropriate and more easily adapted to the multi-turn, interactive agent tasks that our paper focuses on. We chose Llama-2-Chat for our experiments because we are building on the code from ETO [3], which also uses Llama-2-Chat. Additionally, FastChat, which we use for serving and generation, provides stable support specifically for Llama-2 models. 
Due to resource limitations, we are unable to rerun all experiments on Llama-3 at this stage but plan to include Llama-3 results in the future.

> 5 More API-based models

We have GPT-4, GPT-3.5-Turbo, and GPT-4o based Reflexion included in our main Table 2. We also add additional experiments on GPT-4o on WebShop. We kindly direct you to the discussion in Q3 of Reviewer hiva.

> 6 Explanation of 66.4 for Q-value in Figure 4 and 70.3 for QLASS in Table 2

The experimental setups are different. As we stated in L358-361, the results in Figure 4 are self-training baselines where QNet is used for self-training data generation. QLASS in Table 2 is leveraged to provide inference-time guidance.

> 7 Missing citation

Thanks for bringing this missing citation to our attention. We will add it in our revised version. We hope that our answers can resolve your concerns.

[1] Zhang, D., Zhoubian, S., Hu, Z., et al. (2024). Rest-MCTS: LLM self-training via process reward guided tree search. [2] Team, K., Du, A., Gao, B., et al. (2025). Kimi K1.5: Scaling reinforcement learning with LLMs. [3] Song, Y., Yin, D., Yue, X., et al. (2024). Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents.
Summary: This paper proposes QLASS, a method for Q-value estimation in process reward modeling, providing stepwise guidance for language agents. QLASS consists of four main stages: SFT to train the LLM agent, exploration tree construction, QNet training and Q-guided generation. Compared to multiple baselines, QLASS achieves significant performance improvements across various tasks with fewer training data, demonstrating its efficiency and effectiveness. Claims And Evidence: The paper claims that QLASS enhances LLM-based agents' decision-making performance in complex interactive tasks and enables self-improvement under limited supervision. These claims are well-supported by experimental results, which show that QLASS outperforms several baselines across multiple benchmarks. Methods And Evaluation Criteria: QLASS utilizes Q-value-based stepwise guidance to optimize agent behavior, which is a well-justified approach. The construction of QNet appears to be a key advantage of this method compared to others, however, the evaluation does not assess QNet in combination with multiple different LLM-based agents, which makes it difficult to fully demonstrate the superiority of the approach. Theoretical Claims: The paper employs Q-learning for process reward modeling, which is theoretically sound. Experimental Designs Or Analyses: The paper provides a comprehensive comparison with various agent training paradigms, effectively demonstrating QLASS's advantages. However, the absence of comparisons with other process reward modeling methods weakens the argument that QLASS is the best approach for process reward estimation. Supplementary Material: I checked the appendix. Relation To Broader Scientific Literature: The key contributions of this paper build upon and extend multiple areas of prior research, including LLM-based agent reasoning and process reward modeling. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. 
QLASS introduces an innovative stepwise search strategy, effectively addressing the issue of suboptimal decision-making caused by outcome-based rewards. 2. The paper conducts thorough experiments across multiple complex interactive environments, demonstrating QLASS's effectiveness and robustness. 3. QLASS maintains strong performance even with reduced annotated data, making it valuable for real-world applications where labeled data are scarce. Weaknesses 1. QNet's effectiveness is not tested with multiple LLM-based agents, making it unclear whether its benefits extend beyond the specific agent used in the experiments. 2. The paper does not compare QLASS with alternative process reward modeling techniques. 3. Lack of cross-domain generalization experiments. Other Comments Or Suggestions: The visual consistency of figures could be improved to enhance clarity and readability. Unifying the style across diagrams would make the presentation more polished. Questions For Authors: 1. I am curious about the generalization ability of QLASS. If the model is trained with SFT and QNet only on WebShop data, can it still achieve performance improvements on the ALFWorld test set? 2. Since QNet is designed by sharing the backbone of the LLM, I would like to know whether the inherent performance of the backbone affects the quality of Q-value predictions. 3. Could you demonstrate the effectiveness of Q-guided generation by applying the trained QNet to other LLM agents? 4. How does a QNet constructed using QLASS compare to other process reward models (e.g., process rewards built using the Math Shepherd paradigm or process rewards derived from advanced closed-source models) in terms of performance when used with the same LLM agent? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer hiva, We sincerely thank you for the constructive suggestions and positive feedback. We will address your concerns below.

> W1: Lack of experiments applying QLASS to different LLM agents

We understand that the reviewer encourages us to experiment on diverse LLM agents to validate the effectiveness of QLASS. In Section 5.5, we experimented with base agents of different model sizes to investigate the effectiveness of QLASS on different LLM agents. Additionally, we experimented with applying QLASS to GPT-4o (stated in Q3 below). We also summarize the results below.

| Base LLM | Method | SciWorld-seen | SciWorld-unseen |
|----------------|--------|---------------|-----------------|
| Llama-2-7B | SFT | 67.4 | 53.0 |
| Llama-2-7B | ETO | 73.8 | 65.0 |
| Llama-2-7B | **QLASS** | **75.3** | **66.4** |
| Llama-2-13B | SFT | 68.1 | 57.6 |
| Llama-2-13B | ETO | 71.4 | 68.6 |
| Llama-2-13B | **QLASS** | **72.7** | **69.3** |

From the table above, we can see that QLASS can effectively enhance performance significantly across diverse LLM agents.

> W2: Lack of comparison with other process reward modeling methods

To validate the effectiveness of QLASS compared with other PRMs, we compared our Q-value based method with Avg reward[2] and Reward[3]. Avg reward computes the averaged final rewards; Reward directly treats the final outcome reward and backpropagates it as the process reward for each intermediate step. In addition to providing inference-time guidance, we leverage Q-value, Avg reward, and Reward to guide agent self-training. More details are included in Section 5.2, L370-384. Results are in the table below.

| Q-value | Avg reward | Reward |
|-------------|-------------|-------------|
| 66.4 | 65.4 | 64.7 |

We can observe that Q-value achieves the highest score among all the PRM baselines, demonstrating the effectiveness of QLASS compared with other process reward models.

> W3: Lack of cross-domain generalization experiments. 
In our current setup, we experimented on unseen test sets on SciWorld and ALFWorld in Table 2, which is a commonly adopted setup to test the out-of-distribution generalization ability of LLM agents in prior works[1]. Also, we respectfully clarify that the tasks and action spaces explored in our experiments, such as WebShop (a shopping task) and ALFWorld (a navigation task), involve distinctly different knowledge bases with minimal overlap. Consequently, it is impractical to apply the QNet trained on WebShop to enhance performance on ALFWorld due to these fundamental differences. We acknowledge the importance of this clarification and will include this point in the revised version of our paper.

> Q1: The cross-domain generalization ability of QLASS

We have discussed this question in W3, and we kindly direct the reviewer to the discussion in W3.

> Q2: How the inherent performance of the backbone affects the quality of Q-value predictions

In practice, we found that QNet, when initialized by sharing the backbone and weights of the agent's LLM, performs slightly better than when trained from scratch. We previously employed Llama-3.2 3B as the base model for both the agent model and QNet. However, Llama-3.2 3B performs poorly on our multi-turn, interactive agent tasks. It also struggled to provide effective process rewards at inference time. These shortcomings may stem from the model's capacity limitations, as the 3B size may be inadequate for handling the complexity required to solve these specific tasks.

> Q3: Applying QNet to different LLM agents

We have experiments showing that our method can also work on a 13B LLM in Table 4. Also, we add additional experiments by applying the trained 7B QNet to GPT-4o on WebShop. Note that GPT-4o is not specifically trained on agent tasks, so the 7B model after behavior cloning can perform better. 
| Method | Performance|
|-------------|-------------|
| GPT-4o | 54.5 |
| GPT-4o w/ QLASS | 56.2 |

> Q4: Comparison with other process reward models

We include detailed discussion on the comparison with other PRMs in W2. We hope that our answers can resolve your concerns.

[1] Song, Y., Yin, D., Yue, X., et al. (2024). Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents. [2] Wang, P., Li, L., Shao, Z., et al. (2023). Math-shepherd: A label-free step-by-step verifier for LLMs in mathematical reasoning. [3] Yuan, L., Li, W., Chen, H., et al. (2024). Free process rewards without process labels.
Summary: The paper proposes an LLM self-improvement recipe for tasks where there is a (possibly sparse) external verification signal, inspired by the Q-learning algorithm for Markov Decision Processes. Experiments conducted on three domains (ALFWorld, SciWorld and WebShop) show that the proposed recipe can yield performance improvements for LLM-as-agents, and scales well with increased inference-time compute budgets. [Post rebuttal update] The additional experiments and details submitted by the authors in their rebuttal address most of my questions. Claims And Evidence: The main claims are: 1. The proposed approach for constructing reasoning trees and computing Q-values can produce a good-quality dataset for LLM self-improvement. 2. Using predicted Q-values to guide LLM generation can yield a good policy for agent tasks. 3. The general QLASS pipeline produces good performance at low cost for agent tasks, relative to other baselines. The evidence for (1) can be substantially improved with some additional analysis: - For a given cost budget (w.r.t. number of tokens or LLM calls or latency) there is a tradeoff between trying a task for longer (i.e. higher T; trajectory length) vs. backing off and exploring other nodes as in Algorithm 2. By carefully varying T, D (reasoning tree max_depth) and W (reasoning tree max_branching) we can empirically understand this tradeoff. - The paper conjectures that outcome reward models yield inferior policy learning results than process reward models because sometimes the resulting policy may be inefficient. An ablation experiment with varying gamma (discount factor) would be a great way to verify or falsify this conjecture -- does gamma=1 allow inefficient policies, and do the learned policies become more timestep-efficient as gamma decreases below 1? - Can we still produce good Q-value datasets after multiple steps of LLM self-improvement? An experiment where Stages 2 and 3 of Algorithm 1 are re-iterated a few times would shed light on this question. 
The evidence for (2) seems adequate. Figure 3 can be made more convincing by including other inference-compute scaling techniques. The evidence for (3) is missing some important baselines (even allowing for excluding the ones discussed in Appendix A.1). For instance, from the paper's citations & related work -- process reward models learned via random rollouts (e.g. Uesato'22, Lightman'23, Wang'23, Chen'24); learned outcome reward models (Snell'24, Wang'24a, Shinn'23); MCTS approaches to building the "reasoning tree" (e.g. TS-LLM Feng'23, ReST-MCTS* Zhang'24). A representative approach from each of these 3 threads of research would be an important baseline to compare. Methods And Evaluation Criteria: The benchmarks used (WebShop, Alfworld, SciWorld) are well-motivated. Evaluation metrics such as task success rate and token counts are also reasonable. Evaluating the costs of the proposed approach vs. baselines needs clarification or perhaps a more rigorous approach. QLASS has a dataset generation step, followed by Q-Net training, followed by using Q-Net during inference. The only costs reported are in Figure 3, which only reports the cost of Q-Net during inference. The other costs are important to contextualize. Theoretical Claims: The paper does not make any theoretical claims. Experimental Designs Or Analyses: How were the hyper-parameters for QLASS (e.g. expansion depth D) selected across the different task domains? Was it by monitoring performance on the test set? If so, there may be unfair bias against the other tested baselines. This is an important missing detail that should be clarified for a rigorous experimental setup. Supplementary Material: I reviewed all of the appendices. Algorithm 2 should be moved from the appendix into the main paper. 
Relation To Broader Scientific Literature: The paper adequately describes the related works on self-improvement recipes for LLMs, using LLMs as agents, and using process reward models to derive fine-tuning reward signals on intermediate steps of agent tasks. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: The biggest strength of the paper is the very strong empirical performance of QLASS across the three tested domains. Other Comments Or Suggestions: Section 3: Mention the MDP problem setting that the Q-learning algorithm is designed for (because it is not yet apparent that the tasks tackled later in the paper are modeled well by an MDP). Notation of N has a conflict. N is the number of expert trajectories during SFT; and also denotes a node in the exploration tree. Appendix A.2.2. Notation clash between (x1 ... xn) representing tokens being input to the LLM vs. the subscript representing the time-step of the task trajectory (each step of the task trajectory will have a sequence of tokens right?) Section 4.4: the second paragraph is superfluous and can be cut. Algorithm 2 should be in the main paper instead. Algorithm 2: "Get a new branch b constructed on \tau" this is not described adequately in the paper. When a trajectory is sampled from a state, is each call to the LLM made into a node? And the state corresponding to the node is the concatenation of all previous inputs and outputs of the LLM from the root node to that node? Line numbers are not printed, so "repeat function in Line 5-12" is hard to interpret; perhaps refactor into an Algo block and a Subroutine block. Main paper Section 5.1 should mention the perturbation augmentation done for WebShop. Why was the cost for that step high on SciWorld/AlfWorld (since it was just about paraphrasing the task descriptions)? Algorithm 3: When describing Q-value estimation in the main paper, mention that the Q-values are normalized to be in [0,1]. 
Questions For Authors: Please see questions in the other review responses above. Code Of Conduct: Affirmed. Overall Recommendation: 3
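As a schematic illustration of the Q-guided generation loop these reviews describe (sample candidate actions from the policy, score them with the learned Q-network, commit to the best one until the task terminates), here is a Python sketch; `sample_actions`, `q_value`, and `step` are hypothetical stand-ins for the agent policy, the trained QNet, and the environment, not interfaces from the paper.

```python
def q_guided_rollout(sample_actions, q_value, step, state, max_steps, width=4):
    """Greedy Q-guided generation: at each timestep, draw `width` candidate
    actions from the policy, score each candidate with the learned
    Q-network, and commit to the highest-scoring action."""
    trajectory = []
    for _ in range(max_steps):
        candidates = sample_actions(state, width)
        if not candidates:
            break
        best = max(candidates, key=lambda a: q_value(state, a))
        trajectory.append(best)
        state, done = step(state, best)
        if done:
            break
    return trajectory
```

A Best-of-N baseline, by contrast, would sample N full trajectories and keep the one with the best final outcome; the stepwise variant above is what lets a process-level value signal prune bad branches early.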
TreeLoRA: Efficient Continual Learning via Layer-Wise LoRAs Guided by a Hierarchical Gradient-Similarity Tree
Accept (poster)
Summary: This paper proposes a novel continual learning approach, TreeLoRA (K-D Tree of Low-Rank Adapters), which exploits hierarchical gradient similarity to build layer-wise adapters for efficient CL. To achieve even greater efficiency, the authors develop confidence-lower-bound-based bandit techniques to efficiently explore the task structure. In addition, the authors provide theoretical analyses to demonstrate the validity of the proposed approach. Claims And Evidence: The claims made by the authors are well-supported by clear theoretical proofs and experimental results, which effectively validate their assertions. Methods And Evaluation Criteria: The proposed methods are effective in addressing the problem outlined in the paper. Theoretical Claims: I have checked the proof of the theory provided by the author and there are no obvious problems. Experimental Designs Or Analyses: See Weaknesses (2). Supplementary Material: I have reviewed the supplementary material provided by the author, including the code and additional proofs. Relation To Broader Scientific Literature: The research is a study of model base capabilities, with potential implications for the broader scientific literature. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Weaknesses: 1. The definition of $f_{i}(w_{j})$ does not conform to common notation. It is recommended to swap $i$ and $j$ to write it as $f_{j}(\mathcal{T}\_i)$, or consider using $\theta_{j}$ to represent the model parameters. This would enhance the clarity of the paper. 2. Among the methods compared in this paper, there seems to be a lack of comparison with some recent advanced continual learning methods [1, 2], which might have better performance than the approach proposed in this paper. [1] Zhao, W., et al. Sapt: A shared attention framework for parameter-efficient continual learning of large language models. In ACL, 2024. [2] Feng, Y., et al. 
Tasl: Task skill localization and consolidation for language model continual learning. In ACL, 2024. Other Comments Or Suggestions: Typos: I'm not sure if this is a typo, but I noticed that the variable $j$ in equation (1) seems to be unnecessary, as it is not defined. Questions For Authors: Questions: 1. Regarding the use of LCB to calculate the similarity between tasks, especially in transformer-based models: is it computed only at the last layer of the model, or is each layer calculated individually? I noticed that the figures in the paper seem to indicate that only the last layer is calculated, so why not compute each layer separately, given that the features learned at each layer of the model are different? 2. The authors mention that the setting of the threshold $\delta$ does not need to be done manually and is done dynamically; why and how is this done? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's constructive feedback. In the following, we respond to each question.

---

**Q1.** "Regarding the use of LCB to calculate the similarity between tasks especially in transformer-based models, is it computed only at the last layer of the model, or is each layer calculated individually? I noticed that the figures in the paper seem to indicate that only the last layer is calculated, so why not compute each layer separately, given that the features learned at each layer of the model are different?"

**A1.** Thank you for your question. We would like to clarify that the LCB calculation in TreeLoRA is indeed performed ***layer by layer*** across the entire model. As described in Equation (2) in our paper, the LCB is computed as follows:

$$ \mathrm{LCB}\_k= \begin{cases} \widehat{\mu}\_k-2 \sqrt{\frac{\log t}{n\_k}}, & \text { if } k \in \mathcal{L} \\\\ \max \\left\\{ \min\_{j \in \mathcal{C}} \\left\\{\widehat{\mu}\_j-2 \sqrt{\frac{\log t}{n\_j}}-\delta \\right\\} \\right\\}, & \text { if } k \notin \mathcal{L} \end{cases} $$

where $\hat{\mu}\_k = \frac{1}{|\operatorname{Select}\_k|} \sum\_{\tau \in \{\operatorname{Select}\_k\}} \hat{\xi}\_{\tau}^k$ is the estimated task similarity between the current task and the $k$-th task group (i.e., the nodes in the branch of the selected leaf node at round $t$), $\mathcal{L}$ is the set of all leaf nodes, $\delta$ is the automatically determined threshold, and $\mathcal{C}$ is the set of child nodes of the $k$-th node. Therefore, the LCB is computed for each layer of the model. By calculating the LCB layer by layer, TreeLoRA captures the similarity between tasks at various levels throughout the model hierarchy, as illustrated in Figure 1, allowing us to better capture hierarchical task similarities, which is especially advantageous in transformer-based models. We will add these details in the revised version of the paper to provide further clarity. 
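For concreteness, the two cases of Equation (2) above can be sketched in a few lines. This is a minimal illustration with our own function and variable names (not the authors' implementation); since the outer max in the second case is taken over the single bracketed term, the sketch folds it away:

```python
import math

def lcb_leaf(mu_hat, n_k, t):
    """Leaf-node case of Eq. (2): estimated similarity minus an exploration bonus."""
    return mu_hat - 2.0 * math.sqrt(math.log(t) / n_k)

def lcb_internal(children, t, delta):
    """Internal-node case of Eq. (2): the smallest child bound, lowered further by delta.

    `children` is a list of (mu_hat, n) pairs, one per child node in C.
    """
    return min(lcb_leaf(mu, n, t) for mu, n in children) - delta
```

Here `mu_hat`, `n_k`, `t`, and `delta` stand in for $\hat{\mu}_k$, $n_k$, the round $t$, and the threshold $\delta$; subtracting $\delta$ once outside the min is equivalent to subtracting it inside, since it is constant across children.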
Thank you again for your valuable question!

---

**Q2.** "The authors mention that the setting of the threshold $\delta$ does not need to be done manually and is done dynamically, why and how is this done?"

**A2.** Thanks for your comment. Inspired by the K-D tree data structure [Bentley, 1990], the threshold $\delta$ does not require manual tuning. Specifically, during the construction of the K-D tree after each task, the gradient space is partitioned based on the distribution of task gradients. At each split, the threshold is computed by taking the median of the similarity (L1-norm) between each task gradient and the mean gradient within the corresponding task group. This approach ensures balanced tree growth and adaptive partitioning of the gradient space, without the need for manual threshold adjustments.

---

**Q3.** "definition of $f_i(w_j)$ does not conform to common notation. It is recommended to swap $i$ and $j$ to write it as $f_j(\mathcal{T}_i)$, or consider using $\theta_j$ to represent the model parameters"

**A3.** Thanks for your comment. We will revise our paper accordingly and use clearer notations, which would enhance the clarity of the paper. Thanks again for your feedback.

---

**Q4.** "it seems that there is a lack of comparison with some recent advanced continual learning methods [1, 2]"

**A4.** Thank you for pointing out these two references. Following your suggestions, we add a comparison with these two recent advanced continual learning methods, SAPT [Zhao et al., ACL 2024] and TASL [Feng et al., ACL 2024]. For a fair comparison, we do not employ the generative replay in SAPT. 
The results, using _meta / LLaMA-2-7B-Chat_ as the foundation model, are presented in the table below:

|Metric|FIX|SeqLoRA|OGD|GEM|EWC|L2P|DualPrompt|HiDeLoRA|O-LoRA|SAPT|TASL|TreeLoRA|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Op (%)|38.94|34.30|42.09|40.08|42.36|36.23|37.69|41.60|42.78|42.93|43.19|**43.52**|
|BWT (%)|-|18.50|8.06|6.77|5.97|8.25|8.03|7.12|7.16|5.49|4.58|**3.46**|
|Time (s)|-|1132|6416|7385|50283|899|912|1286|1293|1205|1185|**485**|

We also add a comparison with another recent method, InfLoRA [Liang and Li, CVPR 2024]; please refer to **A3** for Reviewer bzQv for more details. We will add these results to the revised version, and will also add discussions of the SAPT and TASL methods in the related work section.

---

We hope these clarifications address your concerns. Thanks again for your valuable comments.

---

Rebuttal Comment 1.1: Comment: Thank you for providing the experiments and explanations regarding my questions and concerns. However, I still have some issues with the experimental part:

**Q1**: You mentioned that you did not use SAPT's generative replay for a fair comparison. Why is disabling generative replay more fair? As far as I remember, the generative replay in SAPT does not use the original data but instead uses fabricated data, which should not affect fairness.

**Q2**: In the training times you provided, O-LoRA is surprisingly close to SAPT's time. In my understanding, O-LoRA involves computing orthogonal structures for each layer and incorporating them into gradient calculations, which should be time-consuming. Or perhaps the authors considered that O-LoRA does not retain all of the LoRA blocks.

**Q3**: Why is the training time reported rather than the inference time?

---

Reply to Comment 1.1.1: Comment: We are grateful to the reviewer for the follow-up feedback. We address each of the additional questions regarding experiments as follows.

**Q1.** "You mentioned that you did not use SAPT's generative replay for a fair comparison. 
Why is disabling generative replay more fair? As far as I remember, the generative replay in SAPT does not use the original data but instead uses fabricated data, which should not affect fairness."

**A1.** We thank the reviewer for the question. To clarify, the generative replay mechanism requires maintaining a pre-generated dataset of pseudo data (as observed in SAPT's codebase) or, alternatively, employing an additional generative model to produce pseudo data. In our opinion, this process introduces additional information _beyond the original data stream_. Therefore, we exclude this mechanism and instead adopt another strategy by storing a fixed number of data samples in a buffer. This ensures that all methods rely _solely on the original data stream_. On the other hand, we also appreciate the idea of introducing generative replay in continual learning, which can be considered a "plug-in" component. This component could be integrated into our method or O-LoRA, etc. We will conduct additional ablation studies for a more comprehensive evaluation.

---

**Q2.** "In the training times you provided, O-LoRA is surprisingly close to SAPT's time. In my understanding, O-LoRA involves computing orthogonal structures for each layer and incorporating them into gradient calculations, which should be time-consuming. Or perhaps the authors considered that O-LoRA does not retain all of the LoRA blocks."

**A2.** We thank the reviewer for the question. We would like to clarify that although O-LoRA requires computing orthogonal structures for each layer during training, the additional computational cost remains acceptable. This is because the orthogonal regularization across different layers can be computed in a batched and parallelized manner, treating the LoRA adapters at different layers as one concatenated matrix. This strategy is implemented in both the original O-LoRA codebase and ours.

---

**Q3.** "Why is the training time reported rather than the inference time?" 
**A3.** Thank you for your comment. In this paper, one of the key contributions of our proposed TreeLoRA is to explore the task structure in order to **facilitate adaptation to new tasks by leveraging task-shared knowledge**, thereby decreasing the training time and enhancing efficiency. Regarding inference, our method incurs the same time cost as other LoRA-based methods since we do not modify the inference process. While reducing the inference time is also an important problem in the LLM field, our current framework is primarily designed to address the challenges associated with task adaptation speed and training overhead, and we will consider it as important future work.

---

Thanks again for your time and feedback; we hope this response addresses your concerns.
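The median-based split threshold described in **A2** above (median L1 distance between each task gradient and the group's mean gradient) might look like the following NumPy sketch. The names are ours and illustrative; the actual TreeLoRA implementation may differ:

```python
import numpy as np

def split_node(task_grads):
    """One K-D tree split (sketch): threshold = median L1 distance to the group mean.

    task_grads: array of shape (num_tasks, dim), one flattened gradient per task.
    Returns (delta, mask) where `mask` marks the tasks assigned to the "near" child.
    """
    mean_grad = task_grads.mean(axis=0)
    # L1-norm distance of each task gradient to the group mean
    dists = np.abs(task_grads - mean_grad).sum(axis=1)
    delta = np.median(dists)
    return delta, dists <= delta
```

Because the threshold is the median of the distances, each split partitions the group into two roughly equal halves, which is what yields the balanced tree growth mentioned in the rebuttal.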
Summary: TreeLoRA presents a continual learning method that enhances the efficiency of updating large pre-trained models. By integrating layer-wise LoRA with a hierarchical gradient similarity tree, it improves knowledge retention while reducing computational costs. TreeLoRA mitigates catastrophic forgetting while maintaining efficiency in ViTs and LLMs. Claims And Evidence: Yes, please refer to the Strengths and Weaknesses section for more details. Methods And Evaluation Criteria: Yes, please refer to the Strengths and Weaknesses section for more details. Theoretical Claims: Yes, please refer to the Strengths and Weaknesses section for more details. Experimental Designs Or Analyses: Yes, please refer to the Strengths and Weaknesses section for more details. Supplementary Material: Yes, please refer to the Strengths and Weaknesses section for more details. Relation To Broader Scientific Literature: Please refer to the Strengths and Weaknesses section for more details. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ***Strengths*** 1. This paper presents a hierarchical gradient similarity tree for task organization, optimizing parameter updates with improved efficiency. A novel bandit-based similarity estimation reduces complexity, enhancing scalability. Sparse gradient updates further adapt TreeLoRA for ViTs and LLMs. 2. A rigorous theoretical analysis derives tighter regret bounds than standard bandit approaches. The hierarchical structure minimizes computational overhead while preserving task knowledge, ensuring efficiency gains. 3. Experiments on vision and language tasks demonstrate that TreeLoRA surpasses state-of-the-art methods, accelerates ViTs and LLMs, and mitigates catastrophic forgetting with reduced backward transfer. ***Weaknesses*** 1. The paper presents evidence for TreeLoRA's effectiveness but lacks discussion on the stability and robustness of its tree structure over extended task sequences. 
A deeper analysis of its evolution in long training sequences, especially in non-stationary environments, would add valuable insight. 2. Additionally, the memory and computational trade-offs for extreme-scale LLMs remain unexamined. 3. While TreeLoRA is compared to other LoRA-based continual learning methods, benchmarking against non-LoRA-based strategies, such as replay-based approaches, is limited. A broader comparison would better contextualize TreeLoRA's advantages within the continual learning landscape. Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your constructive and helpful comments. We provide our response to each question below.

---

**Q1.** "The paper presents evidence for TreeLoRA's effectiveness but lacks discussion on the stability and robustness of its tree structure over extended task sequences. A deeper analysis of its evolution in long training sequences, especially in non-stationary environments, would add valuable insight."

**A1.** Thank you for your comment. Following your suggestion, we conduct additional experiments to validate the stability and robustness of TreeLoRA over ***long task sequences***, which consist of a total of 15 tasks, including C-STANCE, FOMC, MeetingBank, Py150, ScienceQA, NumGLUE-cm, NumGLUE-ds, 20Minuten, dbpedia, amazon, yahoo, agnews, yelp, BoolQA, and QQP, using _meta-llama / Llama-3.2-1B-Instruct_ as the foundation model. The results are summarized in the following table:

| Metric | FIX | SeqLoRA | OGD | GEM | EWC | L2P | DualPrompt | HideLoRA | O-LoRA | TreeLoRA |
| -------- | :---: | :-----: | :---: | :---: | :---: | :---: | :--------: | :------: | :----: | :-------: |
| Op (%) | 41.32 | 40.71 | 32.52 | 35.48 | 31.46 | 41.05 | 41.29 | 42.38 | 44.02 | **45.68** |
| BWT (%) | 0.0 | 15.72 | 21.32 | 18.33 | 22.22 | 14.92 | 15.58 | 11.23 | 10.99 | **6.41** |
| Time (s) | - | 721 | 1921 | 2235 | 13058 | 403 | 411 | 683 | 679 | **251** |

The results demonstrate that TreeLoRA maintains stable performance even with a long sequence of 15 diverse tasks, achieving higher average accuracy and lower forgetting compared to the other contenders. Moreover, TreeLoRA shows even better efficiency than on short task sequences, indicating its scalability for long-term continual learning scenarios. Additionally, we include a figure that illustrates the evolution of the tree structure under dynamic task flow in our LLM experiment. This figure helps to better visualize how TreeLoRA adapts to the evolving task structure over time. 
The figure is available at the following link: [https://anonymous.4open.science/r/TreeLoRA/scripts/rebuttal.jpg](https://anonymous.4open.science/r/TreeLoRA/scripts/rebuttal.jpg)

---

**Q2.** "the memory and computational trade-offs for extreme-scale LLMs remain unexamined"

**A2.** Thanks for your comment. In our paper, we validate our method using small ViT models as well as large language models (1B, 2B, and 7B), covering models commonly used in the research community [Wang et al., 2023a, Wang et al., 2023b, Dou et al., 2024]. To further investigate the computational trade-offs for extreme-scale LLMs, **_we add an experiment using a 13B model_** (meta / Llama-2-13b-chat-hf), as shown in the following table:

| Metric | FIX | SeqLoRA | OGD | GEM | EWC | HideLoRA | O-LoRA | TreeLoRA |
| -------- | :---: | :-----: | :---: | :---: | :---: | :------: | :----: | :-------: |
| Op (%) | 40.15 | 39.16 | 42.32 | 43.77 | 41.23 | 43.27 | 44.32 | **47.13** |
| BWT (%) | 0.0 | 15.58 | 9.72 | 8.42 | 10.12 | 11.27 | 5.19 | **3.42** |
| Time (s) | - | 1525 | 8712 | 9931 | 67819 | 1835 | 1839 | **662** |

The results show that TreeLoRA achieves better accuracy with lower training time compared to the other contenders. Additionally, the memory (storage) overhead of TreeLoRA is minimal, requiring only 15 MB. These findings demonstrate the effectiveness of our tree-based adaptation strategy in both performance and efficiency, and its scalability to large-scale models.

---

**Q3.** "While TreeLoRA is compared to other LoRA-based continual learning methods, benchmarking against non-LoRA-based strategies is limited, such as replay-based approaches. A broader comparison would better contextualize TreeLoRA's advantages within the continual learning landscape."

**A3.** Thank you for your comment. It appears there may be a misunderstanding due to our insufficient emphasis. 
Specifically, we have compared TreeLoRA with several non-LoRA-based continual learning strategies, including replay-based (rehearsal-based) methods such as GEM, regularization-based methods such as EWC, and the baseline OGD (a full-update, non-LoRA method). As shown in Table 3 and Table 4 of the submission PDF, TreeLoRA outperforms these methods in terms of both performance and efficiency. Additionally, TreeLoRA offers particular advantages for transformer-based models due to the large parameter size and inherent hierarchical structure of these models.

---

We hope these clarifications address your concerns. We will improve the paper writing to better emphasize these points. Thanks again for your constructive comments.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. After reading the comments from other reviewers, I will maintain my score.

---

Reply to Comment 1.1.1: Comment: Thank you for recognizing the novelty and theoretical soundness of our work. We also greatly appreciate your insightful feedback. We will incorporate the suggested discussions and additional experiments in the revised version. Thank you again!
Summary: This paper proposes TreeLoRA, a novel and efficient approach for continual learning in large pre-trained models. TreeLoRA constructs a hierarchical tree structure of LoRAs based on gradient similarity, enabling efficient task adaptation and knowledge sharing. The method employs bandit algorithms to explore the task-similarity structure and leverages sparse gradient updates to optimize parameters, demonstrating superior efficiency and performance compared to previous state-of-the-art continual learning methods.

## update after rebuttal

Thank the authors for the detailed follow-up and additional experimental results. I appreciate the authors' efforts to extend the evaluation to the full 15-task benchmark and to clarify the role of LoRA depth in TreeLoRA's performance. However, I still find some aspects unclear. Specifically, while the explanation about LoRA depth partially clarifies the observed performance drop in LLMs, it remains ambiguous how TreeLoRA itself solves the issue when the model size becomes larger. Furthermore, although the authors state that TreeLoRA's depth is independent of the number of tasks, the rationale for choosing specific depths for different models is still not clearly explained. For instance, in the LLaMA-2-7b-chat experiments, both TreeLoRA and O-LoRA perform poorly at depth 8 but significantly improve at depth 64, raising the question of whether TreeLoRA consistently outperforms O-LoRA or whether its benefits only appear under particular depths. As there is no empirical evidence indicating a linear relationship between performance and LoRA depth, the observed improvements remain difficult to interpret. The newly added results are appreciated and add value to the submission. Moreover, the idea behind TreeLoRA represents a novel and promising research direction. However, I believe further clarification is needed regarding TreeLoRA's consistent advantage across LoRA depths. Therefore, I maintain my original score. 
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods make sense for the problem, but for the benchmark datasets, this paper does not use the same dataset used by O-LoRA, even though it is a direct and closely related baseline for this paper. Theoretical Claims: I checked the correctness of Theorem 1. Experimental Designs Or Analyses: I checked the experimental designs. Please see the questions. Supplementary Material: I reviewed the supplementary material. Relation To Broader Scientific Literature: As mentioned in the paper, the proposed method may help to decrease energy consumption and carbon emissions associated with training AI models, contributing to environmentally sustainable machine learning. Essential References Not Discussed: This paper uses the image dataset CIFAR-10 and the ViT model but does not compare with the similar work InfLoRA [1], published in CVPR 2024, which uses the same dataset and the same model. [1] Liang, Yan-Shuo, and Wu-Jun Li. "Inflora: Interference-free low-rank adaptation for continual learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Other Strengths And Weaknesses: Strengths: 1. TreeLoRA is a novel hierarchical structure that efficiently groups tasks based on gradient similarity, enabling efficient task adaptation and knowledge sharing. 2. This paper provides a theoretical analysis to support the proposed method's efficiency, demonstrating tighter regret bounds than conventional methods. 3. The proposed method achieves speed improvements while having similar or even better performance compared to existing methods. Weaknesses: 1. There is limited analysis of how the performance of TreeLoRA is affected by key hyperparameters such as the tree depth and gradient similarity threshold. 
The authors mentioned that the tree depth is set to 5 for ViT and 64 for LLMs, but there is insufficient discussion of how these values are chosen and how sensitive the method is to the choices. 2. The authors compare TreeLoRA with O-LoRA and other baselines, but there is insufficient analysis of how different task orderings affect performance. O-LoRA explored the impact of different task orders; it would strengthen the evaluation to use the same task orders for comparison. 3. The paper does not use the same datasets as O-LoRA, which makes the current comparison less rigorous and kind of unfair. Other Comments Or Suggestions: Include a discussion of the impact of task order on the performance of TreeLoRA. Questions For Authors: 1. How does TreeLoRA handle situations where the task order changes over time? Since the experiments compare with the recent baseline O-LoRA, which explored the impact of different task orders, how does TreeLoRA perform on the different task orders in O-LoRA? 2. How does TreeLoRA determine the depth of the K-D tree chosen for ViTs (5) and LLMs (64)? Is there any strategy for guidance? Is there any range for the depth? Does it connect with the number of tasks? 3. In the experiments, TreeLoRA uses the image dataset CIFAR-10 as one benchmark, and I found one previous work, InfLoRA (CVPR 2024), which also utilizes this dataset and the same ViT model to conduct experiments. How does TreeLoRA's performance compare to this work? [1] Liang, Yan-Shuo, and Wu-Jun Li. "Inflora: Interference-free low-rank adaptation for continual learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's feedback. In the following, we address each of your technical inquiries.

**Q1.** "impact of task order on the performance ...this paper does not use the same dataset used by O-LoRA."

**A1.** Thank you for your comment. First, we would like to clarify that the TRACE dataset used in our paper consists of 8 tasks, which is ***larger*** than the 4 tasks used in the O-LoRA paper. Moreover, the TRACE dataset includes a ***diverse set*** of tasks, such as text generation and code generation, whereas the datasets used in the O-LoRA paper primarily focus on classification tasks. To further address your concern, we conduct additional experiments to validate TreeLoRA and the other contenders using the _**same datasets**_ (i.e., dbpedia, amazon, yahoo, and agnews) and _**same task orders**_ as in O-LoRA. We use *Llama-3.2-1B-Instruct* as the foundation model, and convert the classification tasks to text generation tasks. The results, reported as overall performance (%)/BWT (%), indicate that TreeLoRA achieves superior performance across different orders, and also improves efficiency (about 1.5x speedup compared to O-LoRA):

Task Order|FIX|OGD|GEM|SeqLoRA|HideLoRA|O-LoRA|TreeLoRA
:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
Order1|48.75/0.0|54.16/10.82|54.10/10.25|55.71/8.63|56.32/2.75|59.50/2.51|**59.73/2.22**
Order2|48.75/0.0|46.82/21.50|46.70/20.93|45.26/7.70|53.41/5.58|52.53/5.65|**53.78/5.74**
Order3|48.75/0.0|43.79/27.31|51.12/16.24|49.03/19.05|61.25/3.12|**63.82/2.03**|62.76/2.23
Time|-|684|712|43|56|58|**45**

Further, we include an experiment involving long task sequences, which is similar to the experimental setup used in O-LoRA's paper. For more details, please refer to **A1** for Reviewer ygph. We will add these results in the revised version. Thanks again for your valuable comments.

---

**Q2.** There is limited analysis of how the performance of TreeLoRA is affected by key hyperparameters. 
How does TreeLoRA determine the depth of the K-D tree chosen for ViTs (5) and LLMs (64)? Is there any strategy for guidance? Is there any range for the depth? Does it connect with the number of tasks?

**A2.** Thanks for your comment. We detail the hyperparameter analysis of our method below:

- **Hyperparameter Sensitivity.** In our paper, we provided an analysis of the sensitivity of TreeLoRA's performance to key hyperparameters, such as the regularization coefficient $\lambda$ and the learning rate $\alpha$. These results are detailed in Appendix A.6.
- **Impact of the Tree Depth.** To further explore the impact of tree depth, we conducted additional experiments to validate how varying tree depth affects the model's performance, with overall performance (%) and training time shown in the tables below:

Tree Depth|CIFAR-100 (ViT)|Time (s)
:-:|:-:|:-:
1|86.52|171.31
2|88.22|182.42
5|**88.54**|**212.66**
7|88.39|233.17

Tree Depth|TRACE (LLM)|Time (s)
:-:|:-:|:-:
8|21.49|455
16|22.62|468
32|38.62|476
64|**43.52**|**485**

These results show that TreeLoRA is relatively robust to the choice of tree depth. For ViT models, a tree depth of 5 provides a good balance between performance and efficiency, while for LLMs, a depth of 64 is recommended. These settings bring slightly better trade-offs between performance and efficiency. We clarify that tree depth is not determined by the number of tasks, as a single node in the tree can contain multiple tasks, allowing the structure to scale to a large number of tasks. Additionally, the maximum tree depth should not exceed the number of transformer layers (as illustrated in Figure 1).

- **Impact of the Gradient Similarity Threshold.** Regarding the threshold $\delta$, as mentioned in Section 3.3, we clarify that it is automatically determined and does not need manual adjustment. 
Specifically, inspired by the K-D tree data structure [Bentley, 1990], at each split, the threshold is computed by taking the median of the similarity (L1-norm) between each task gradient and the mean gradient within the corresponding task group. This approach ensures balanced tree growth and adaptive partitioning of the gradient space, without the need for manual threshold adjustments.

---

**Q3.** One previous work, InfLoRA (CVPR 2024), also utilizes CIFAR-100 and the same ViT model to conduct the experiments. How does TreeLoRA's performance compare to this work?

**A3.** Thank you for your comment. We add experiments to directly compare the performance with InfLoRA on the CIFAR-100 dataset using ViT models:

||InfLoRA|TreeLoRA|
|:-:|:-:|:-:|
|Acc (%)|85.44|**88.54**|
|BWT (%)|4.82|**4.37**|
|Time (s)|695|**214**|

The results demonstrate that TreeLoRA achieves similar accuracy to InfLoRA, with lower training time. We will add these results and corresponding discussions in the revised version.

---

We hope these clarifications address your concerns. We sincerely wish that you can re-evaluate our paper and consider updating the score for our paper. Thank you for your time and feedback!

---

Rebuttal Comment 1.1: Comment: Thank the authors for providing additional experiments to clarify in the rebuttal. Based on the authors' responses, I have some further concerns:

1. The answer in Q1 "which is larger than the 4 tasks used in the O-LoRA paper" is a misleading expression, since O-LoRA conducted experiments on both the 4 tasks in the standard continual learning benchmark and the 15 tasks in the large-number-of-tasks setting.

2. For "Impact of the Tree Depth", it seems like TreeLoRA has better robustness on ViT using CIFAR100 than on LLM using TRACE. Since the tree depth has more influence on LLM accuracy and Llama-3.2-1B has more parameters than ViT-B/16, it looks like TreeLoRA cannot be simply extended to LLMs.

---

Reply to Comment 1.1.1: Comment: **[New!] 
Thanks for your recognition of the novelty of our method and theoretical analysis. We sincerely wish the reviewer could kindly review our newly added experiments, which we believe adequately address all of your concerns.** Please do let us know if you have any additional comments (use the "edit" function). With the inclusion of additional experiments and expanded discussions on related work, we'd be deeply grateful if you could consider raising your score to further support our paper. --- We thank the reviewer for the follow-up questions and we address each of your additional concerns in detail. **Q1.** "The answer in Q1 'which is larger than the 4 tasks used in the O-LoRA paper' is a misleading expression since O-LoRA conducted experiments on both 4 tasks in the standard continual learning benchmark and 15 tasks in the large number of tasks." **A1.** Many thanks for your further comments. We will revise the misleading expression in the next version. We'd like to clarify that our earlier choice of focusing on the standard CL benchmark was based on the following two main considerations: - In the O-LoRA paper, the 15-task benchmark was evaluated using the T5 model only, without including the LLaMA architecture. Since our study focuses on widely-used, decoder-only LLM structures such as LLaMA and Mistral, we prioritized the standard CL benchmark to enable a direct and fair comparison. - Moreover, although the 15-task setting includes a greater number of tasks, all of them are classification problems measured by accuracy. As such, the increased quantity may not necessarily suggest greater task diversity or increased difficulty compared to the standard CL benchmark. 
Nonetheless, to more directly address the reviewer's concern, **we have now extended our evaluation to include the full 15-task benchmark, with 3 orders (same as in the O-LoRA paper)**: MNLI, CB, WiC, COPA, QQP, BoolQA, RTE, IMDB, Yelp, Amazon, SST-2, DBpedia, Agnews, MultiRC, and Yahoo, using _meta-llama/Llama-3.2-1B-Instruct_ as the foundation model. This required additional effort to adapt the new benchmark into our codebase and align it with our pipeline, which has just been completed. The results are as follows:

|Task Order|FIX|HiDeLoRA|O-LoRA|TreeLoRA|
|:---:|:-:|:-----:|:---:|:--:|
|Order4|52.13/0.0|59.44/4.33|**59.89/4.67**|58.45/4.98|
|Order5|52.13/0.0|54.49/7.52|57.05/4.42|**58.12/3.31**|
|Order6|52.13/0.0|57.26/6.98|58.02/4.73|**59.00/4.12**|
|Time|-|124|121|83|

We hope this clarification addresses your concerns. We will continue expanding experiments on these benchmarks using additional foundation models and will report comprehensive results in the revised version of the paper. Thanks!

---

**Q2.** "For 'Impact of the Tree Depth', it seems like TreeLoRA has better robustness on ViT using CIFAR100 than LLM using TRACE..."

**A2.** We appreciate your insightful observation. We would like to take this opportunity to clarify this phenomenon and provide additional empirical evidence to support the scalability of TreeLoRA to LLMs.

- As illustrated in Figure 1 of the main paper, the tree depth in our method design is directly constrained by the **LoRA depth**, i.e., the number of layers where LoRA adapters are applied. LLMs such as LLaMA-3.2-1B or LLaMA-2-7B have significantly more layers and parameters than ViT-B/16, which naturally calls for more LoRA adapters. A shallow LoRA depth (aka tree depth) in such architectures can lead to a performance drop. Therefore, we clarify that **the performance drop should not be attributed to the TreeLoRA architecture itself, but rather to the insufficient LoRA depth**.
- To support this interpretation, we conducted an additional experiment comparing with O-LoRA, a widely acknowledged method in the field, to directly show the influence of constraining the LoRA depth, both using _meta-llama/LLaMA-2-7B-Chat_ as the foundation model:

|LoRA Depth|O-LoRA|TreeLoRA|
|:---:|:----:|:----:|
|8|21.43|21.49|
|64|42.78|43.52|

As the table shows, both O-LoRA and TreeLoRA exhibit substantial performance drops when the LoRA depth is limited (e.g., depth = 8). When the LoRA depth increases (e.g., depth = 64), TreeLoRA performs comparably to, or even slightly better than, O-LoRA. This suggests that TreeLoRA is indeed extensible to LLMs. In practice, the default LoRA depth used in other methods, typically set to the number of layers in the LLM, often suffices. We will incorporate this discussion along with the experimental results into the revised version of the paper to provide a clearer picture of TreeLoRA's scalability.

---

**In summary, we have**

- Extended our evaluation to include the full 15-task benchmark, with 3 orders (same as in the O-LoRA paper).
- Provided a detailed analysis of the relationship between LoRA depth and performance, demonstrating that TreeLoRA can easily scale to large models such as LLMs.
Summary: This paper proposes TreeLoRA, a continual learning method that builds hierarchical adapters based on gradient similarity, which aims to solve the computational efficiency problem in continual learning of large pre-trained models (LPMs). By organizing tasks into a K-D tree structure and introducing sparse gradient updates, this method achieves better accuracy than baselines (such as HiDeLoRA) on ViT and LLM, while reducing training time by about 2.4 times. However, the core dynamic update mechanism (such as node addition and reduction rules) is not fully explained, and the complexity comparison with mainstream parameter-efficient fine-tuning methods (such as LoRA/O-LoRA) is insufficient, which may affect the credibility of the method.

Claims And Evidence: The tree structure can dynamically capture task similarity and reduce computational complexity. However, the paper does not explain how nodes are dynamically added or removed (such as the conditions that trigger splitting). The rationality of the structure is only indirectly demonstrated through visualization, and there is a lack of mathematical description of the dynamic process.

Methods And Evaluation Criteria:
- The construction of the tree structure depends on gradient similarity, but it is not clear:
  - How the tree levels are updated when new tasks are inserted (incremental or global reconstruction)
  - The threshold design for node splitting/merging
  - The adaptive relationship between tree depth and the number/similarity of tasks
- Only CL methods such as HiDeLoRA are compared; no theoretical/experimental comparison is performed with standard LoRA (independent adapter for each task) or O-LoRA (orthogonal-constraint adapter), which cannot prove its advantages over basic methods.

Theoretical Claims: The regret bound analysis does not consider the adjustment cost of the dynamic tree structure. The theoretical model assumes a static task relationship, which is inconsistent with actual dynamic scenarios.
Experimental Designs Or Analyses: Missing analyses include:
- The evolution of the tree structure under a dynamic task flow (such as how the tree changes when 10 new categories are added in Split CIFAR-100)
- Average adapter parameters per task vs. standard LoRA
- The impact of different tree depths/widths on performance

Supplementary Material: Yes

Relation To Broader Scientific Literature: \

Essential References Not Discussed:
- Rusu et al. (2016) Progressive Neural Networks
- Wang et al. (2023) DyLoRA: Dynamic Low-Rank Adaptation

Other Strengths And Weaknesses: \

Other Comments Or Suggestions:
- Add a parameter/FLOPs comparison experiment with standard LoRA/O-LoRA
- Discuss the additional overhead of tree-structure maintenance (such as the complexity of gradient-similarity computation)

Questions For Authors:
- Is node splitting based on a fixed threshold? How do you avoid over-complication of the tree structure?
- When a new task has low similarity with the existing node gradients, how is the tree structure expanded? (Add new branches or increase the layer depth?)
- Is the theoretical relationship between the number of tree levels L and the number of tasks T O(log T)? How is L set in actual experiments?
- Compared with the O(d) parameter increment of O-LoRA (d is the adapter dimension), is the parameter growth rate of TreeLoRA strictly lower?
- Is the sparsity rate of sparse gradient updates related to the tree structure? How do you balance sparsity and knowledge retention?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for your helpful comments! Below, we address your major technical questions and will revise the paper to improve clarity and resolve any potential misunderstandings.

---

**Q1.** Elaborate more on the construction of the tree structure, including update and expansion, threshold design, and the relationship between tree depth and the number/similarity of tasks.

**A1.** Thanks for the question. The construction of the tree structure is explained in detail below:

- **Update of Tree Structure.** After each task, we store the task-specific LoRA adapter (as in Section 3.3) and update the tree by inserting the adapter into the leaf node of the nearest branch via a depth-first search (DFS), thus adding new nodes and expanding the tree. If the number of nodes exceeds the storage budget, we choose the closest adapters and merge them into a single one (as in Appendix A.3).
- **Threshold Design.** Inspired by the K-D tree structure [Bentley, 1990], the threshold $\delta$ does not require manual tuning. Specifically, at each split, the threshold is computed by taking the median of the similarity (L1-norm) between each task gradient and the mean gradient within the corresponding task group, ensuring balanced tree growth and adaptive partitioning.
- **Relationship Between Tree Depth and Number/Similarity of Tasks.** We clarify that the tree depth is not directly determined by the number of tasks, as a single node can contain multiple tasks, allowing it to scale to a large number of tasks. However, the depth should not exceed the number of transformer layers (as illustrated in Fig. 1), and it is treated as a tunable hyperparameter. Further empirical analysis of the tree depth is provided in **A2** in response to Reviewer bzQv.

We will add these details in the revision for more clarity.

---

**Q2.** No comparison with standard LoRA or O-LoRA.
**A2.** We believe there is a misunderstanding here — we have compared our approach with standard LoRA (aka SeqLoRA) and O-LoRA in our experiments, as presented in Table 3 of the submission PDF. The results demonstrate our improvements over these basic methods in both performance and efficiency. We also add a FLOPs comparison with LoRA/O-LoRA; please refer to **A5**.

---

**Q3.** On Theoretical Claims: "does not consider the adjustment cost of the dynamic tree structure... assumes a static task relationship"

**A3.** Thank you for the insightful comments. Our regret analysis focuses on a simplified scenario to provide foundational justifications for the proposed algorithm. The elements you mentioned can certainly be incorporated into future work via modern online learning techniques. For instance, incorporating the *switching cost* would allow us to account for the cost of adjusting the tree structure, and adopting *dynamic regret* could help capture time-varying task relationships. Nonetheless, we believe these extensions are non-trivial to achieve. For instance, the minimax rate for MAB is $\Theta(\sqrt{T})$, whereas introducing a switching cost increases it to $\Theta(T^{2/3})$ [Dekel et al., STOC'14]. Our theoretical results serve as a first step toward tackling more complex scenarios, and we will include these points in the discussion of future work.

---

**Q4.** Theoretical relationship.

**A4.** In Theorem 1, the regret bound depends on both the number of tasks $N$ and the task complexity $J_n$ (which is controlled by the similarity between tasks). Consequently, as the number of tasks increases or as tasks become less similar, the performance of our method is expected to deteriorate, requiring more rounds to maintain and search the tree structure.
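As a concrete illustration of the median-based split threshold described in **A1**, the following is a minimal sketch (not the authors' implementation; real task gradients would be flattened model gradients rather than toy 4-dimensional vectors):

```python
import random

def median_split(grads):
    """Split a group of task gradients into two balanced subgroups.

    The threshold is the median of the L1 distances between each task
    gradient and the group's mean gradient, in the K-D-tree-style
    construction sketched above; no manual threshold tuning is needed.
    """
    dim = len(grads[0])
    mean = [sum(g[i] for g in grads) / len(grads) for i in range(dim)]
    dists = [sum(abs(g[i] - mean[i]) for i in range(dim)) for g in grads]
    threshold = sorted(dists)[len(dists) // 2]  # median distance
    near = [g for g, d in zip(grads, dists) if d < threshold]
    far = [g for g, d in zip(grads, dists) if d >= threshold]
    return near, far, threshold

# Toy example: 8 random 4-dimensional "task gradients"
random.seed(0)
grads = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]
near, far, thr = median_split(grads)
print(len(near), len(far))  # prints: 4 4 (a balanced split)
```

Because the threshold is the median of the distances, each split is balanced by construction, which is what keeps the tree depth logarithmic in the number of groups.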
---

**Q5.** Other concerns about experiments, including:
- [5-1] parameter/FLOPs comparison, additional overhead of the tree structure, "is the parameter growth rate of TreeLoRA strictly lower (than O-LoRA)"
- [5-2] evolution of the tree structure under dynamic task flow
- [5-3] impact of different tree depths on performance.

**A5.** Thanks for your suggestions.

- [A5-1] We include a comparison using *LLaMA-2-7B-Chat* as an example for training a single token on the 10-th task, as in the table below:

| |FLOPs|Parameter Complexity|
|-|-|:-:|
|OGD|28×10⁹|$\mathcal{O}(mn)$|
|LoRA|4.2×10⁶|$\mathcal{O}((m+n)r)$|
|O-LoRA|4.2×10⁷|$\mathcal{O}((m+n)rN)$|
|TreeLoRA|4.2×10⁶|$\mathcal{O}((m+n)r+Nr)$|

Here, $m$ and $n$ denote the dimensions of the transformer's parameter matrix, $r$ is the LoRA rank, and $N$ is the number of tasks.

- [A5-2] We also add an analysis of the evolution of the tree structure under dynamic task flow, which provides a clearer illustration of how TreeLoRA captures task structures over time: https://anonymous.4open.science/r/TreeLoRA/scripts/rebuttal.jpg
- [A5-3] Additionally, we conduct further experiments to evaluate the impact of tree depth on performance; please refer to **A2** for Reviewer bzQv.

---

Thank you again for the helpful review. We will revise the paper accordingly and include the related references, such as Rusu et al. (2016) and Wang et al. (2023).
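The parameter-complexity column in [A5-1] can be checked with a few lines of arithmetic. This sketch uses illustrative values for $m$, $n$, $r$, and $N$ (not the exact LLaMA dimensions), and the interpretation of the $+Nr$ term as small per-task components is an assumption for illustration:

```python
def lora_params(m, n, r):
    # One LoRA adapter: B (m x r) and A (r x n) -> (m + n) * r parameters
    return (m + n) * r

m, n, r, N = 4096, 4096, 8, 10  # illustrative matrix dims, LoRA rank, task count

full_ft = m * n                        # O(mn): full-matrix update (OGD-style)
lora = lora_params(m, n, r)            # O((m+n)r): a single adapter
olora = lora_params(m, n, r) * N       # O((m+n)rN): one adapter kept per task
treelora = lora_params(m, n, r) + N * r  # O((m+n)r + Nr): adapter + per-task terms

print(full_ft, lora, olora, treelora)  # prints: 16777216 65536 655360 65616
```

With these numbers, TreeLoRA's count grows only by $Nr$ per task rather than by a full $(m+n)r$ adapter, which matches the claimed gap to O-LoRA.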
What can large language models do for sustainable food?
Accept (poster)
Summary: This paper explores the potential of large language models for sustainable food science. Specifically, this paper evaluates LLMs on four tasks, including experimental design, menu design, sensory profile prediction, and recipe preference prediction. Then, this paper equips LLMs with combinatorial optimization to overcome the challenge (i.e., failing to balance the emission target and satisfaction) in menu design tasks. Experimental results show initial success of LLMs for sustainable food science.

## update after rebuttal

I keep my scores unchanged, but I believe this is a good starting point for LLMs in sustainable food science.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: N/A.

Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: This paper explores novel perspectives on LLM application, i.e., sustainable food science.

Essential References Not Discussed: None.

Other Strengths And Weaknesses: Overall, this is a solid paper with clear motivation and novel empirical findings. However, in my view, it may not be a strong fit for ICML. A substantial portion of the paper focuses on sustainable food science, which may not align closely with the core interests of the machine learning community. Additionally, the machine learning component primarily involves the application of large language models, serving more as a tool than contributing ML methodology. Given this, I believe the paper might be better suited for journals focused on food science, where the audience has deeper domain expertise, and where the work may have a greater impact. That said, since ICML also encourages application-driven machine learning, I am inclined to recommend a borderline accept for this paper.

Other Comments Or Suggestions: None.

Questions For Authors: None.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We greatly appreciate your feedback.

# Fit for ICML

Thank you for raising this point. We believe that this paper is a strong fit for the Application-Driven Machine Learning track of ICML, defined as papers that “introduce novel methods, datasets, tasks, and/or metrics according to the needs of a real-world use case”. We have made contributions along all of these dimensions. We have combined LLMs and optimization to address the sustainable menu design task (outperforming an expert human chef and several other baselines), introduced two datasets not previously studied by the ML community, and introduced four novel tasks and associated evaluation metrics. We believe that publication of our work in ICML would be mutually beneficial for the ML community, serving to raise awareness of this important testbed for ML algorithms, and the sustainable food community. Climate Change AI was founded in 2019 by AI researchers to encourage more research at the intersection of AI and sustainability, and has held 12 workshops at NeurIPS, ICML, and ICLR over the past 6 years, suggesting that the intersection of AI/ML and climate change mitigation is indeed of interest to the ML community. Additionally, in recent years, sustainability organizations such as the Good Food Institute, Food Systems Innovation, New Harvest, and the Bezos Earth Fund have called for research at the intersection of AI/ML and sustainable food, particularly novel sustainable protein sources. Food systems are responsible for one third of human-caused greenhouse-gas emissions, and thus represent an important part of AI/ML for climate change mitigation efforts. Despite this, no prior work on sustainable food, to our knowledge, has been published in an AI/ML conference. Our work is a step in this direction. We plan to publish followup work in domain journals to also ensure impact in the sustainable food community.

# ML contribution

We would like to clarify our ML contribution.
For the recipe preference prediction, sensory profile prediction, and experimental design tasks, yes, we apply LLMs and analyze their performance, without contributing new methods. However, for the menu design task, we find that LLMs on their own do not adequately balance multiple constraints, and present a framework for combining LLMs with optimization techniques, which we find reduces emissions by 79% while maintaining patron satisfaction. We also provide an initial theoretical analysis of our framework (Proposition 6.1), showing that the error of our approach can be bounded as a function of the number of items selected and the maximum item-level prediction error of the LLM. While we currently only apply this framework to sustainable menu design, we believe it can also inspire future work. Besides the sustainable menu design task we study, examples of applications we are motivated by include health coaching (“Generate a diet and exercise plan for achieving my goals while meeting constraints on ingredients, cost, preparation time, and current injuries”), curriculum design (“Generate a curriculum that will maximize student engagement while meeting constraints on topics covered and class time”), and travel planning (“Generate a travel plan that meets my preferences and covers the following locations”). In these applications, both background knowledge (of nutrition, fitness, education, travel, human preferences, etc.) and optimization (to meet constraints) are necessary, and we show how to combine the background knowledge of LLMs with optimization tools. Our framework contributes to two strands of prior work in the ML and optimization communities. The first is the literature on LLMs for optimization modeling [1,2,3], which focuses on reducing barriers to the use of specialized optimization software via allowing users to specify optimization problems in natural language. 
Our work builds upon this literature, which assumes that the optimization problems are precisely specified. In our framework, LLMs are used to provide background knowledge on topics such as human preferences, to convert an imprecisely specified optimization problem (as is the case with many real-world planning, scheduling, and decision-making problems, such as the three mentioned above) to a precise optimization problem. The second is the predict-then-optimize framework [4,5], as pointed out by reviewer XTjM. Here, our contribution is to add a generation step, in which an LLM generates the elements of the ground set (e.g. recipes, exercises, travel destinations) from which a solution is produced, as well as to apply LLMs for the prediction step.

[1] AhmadiTeshnizi, A., et al. “OptiMUS.” ICML 2024.
[2] Jiang, C., et al. “LLMOPT.” ICLR 2025.
[3] Huang, C., et al. "ORLM." arXiv 2024.
[4] Elmachtoub, A. et al. "Smart ‘Predict, then Optimize.’” Management Science 68.1 (2022).
[5] Bertsimas, D. et al. "From Predictive to Prescriptive Analytics." Management Science 66.3 (2020).

Thank you for your review!

---

Rebuttal Comment 1.1:

Comment: Thanks for your response. I decide to keep my score unchanged.

---

Reply to Comment 1.1.1:

Comment: Thank you, VYkF, for reviewing our response. We greatly appreciate it.
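The generate-then-predict-then-optimize pipeline described in the rebuttal above can be sketched in a few lines. This is a toy stand-in, not the paper's method: the satisfaction scores and emission values are hard-coded placeholders for LLM predictions and ingredient-level emission factors, and a greedy heuristic replaces the integer program:

```python
# Toy menu-design sketch: pick k items maximizing predicted satisfaction
# subject to an emissions budget. All numbers are made-up stand-ins.

recipes = {                      # name: (predicted satisfaction, kg CO2e)
    "lentil curry": (4.2, 0.9),
    "beef burger": (4.6, 7.7),
    "mushroom risotto": (4.1, 1.3),
    "chicken salad": (4.0, 2.1),
    "tofu stir-fry": (3.9, 1.0),
    "bean chili": (4.3, 1.1),
}

def design_menu(recipes, k, emission_budget):
    """Greedy heuristic: repeatedly add the feasible recipe with the
    highest predicted satisfaction until k items are selected."""
    menu, used = [], 0.0
    ranked = sorted(recipes.items(), key=lambda kv: -kv[1][0])
    for name, (score, co2) in ranked:
        if len(menu) < k and used + co2 <= emission_budget:
            menu.append(name)
            used += co2
    return menu, used

menu, emissions = design_menu(recipes, k=3, emission_budget=4.0)
print(menu, round(emissions, 1))
# prints: ['bean chili', 'lentil curry', 'mushroom risotto'] 3.3
```

The high-emission beef burger is skipped despite its top predicted satisfaction, which mirrors the trade-off the paper's LLM-guided combinatorial optimization is designed to handle exactly rather than greedily.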
Summary: This paper explores the capabilities of Large Language Models (LLMs) in a set of design and prediction tasks associated with sustainable diets (mainly plant-based) that were based on the sustainable food literature and collaboration with domain experts. The overall objective of the tasks consists of generating low-emission menu designs (based on the emissions associated with the ingredients used in the recipes) that preserve human satisfaction. The authors' main contribution is a framework that evaluates how good LLMs are in a zero-shot setup, and a novel approach that formulates the menu design task as LLM-guided combinatorial optimization. They tested the framework with six different state-of-the-art LLMs. Their framework uses two food-related datasets: NECTAR (about the sensory evaluation of food products) and Food.com (about recipes); they also include a list of recipes from delivery applications. They included twenty plant-based food scientists in evaluating their method based on four metrics (accuracy, specificity, complementarity, and time saved) in a blinded randomized test in which each scientist collaborated with an anonymous scientist (the LLM or a peer). The answers were homogenized in style to avoid detecting the use of a model.

Claims And Evidence: Most of the claims in the paper are relatively well supported; they remain conservative, but that's natural given the nature of the setup: LLMs have biases towards omnivore diets, and the satisfaction score depends on a proxy evaluation through the LLMs. The results are analyzed statistically at a 95% confidence level for different tests. However, the tests relying on human evaluators ($n = 20$) might be of questionable validity, considering the rule of thumb of having at least $n = 30$ for the central limit theorem to hold.

Methods And Evaluation Criteria: The datasets used in the paper are relevant to the proposed tasks.
The paper introduces novel evaluation criteria that could have some caveats. It considers four metrics (accuracy, specificity, complementarity, and time saved) that rely on open questions posed to a group of food scientists with moderate experience in plant-based food science and meat. This evaluation method seems reasonable given the nature of the tasks, but it could introduce biases given that it is entirely human-reliant. Something similar happens with the satisfaction metric, which relies on a proxy obtained by the LLMs, which the paper points out as biased towards omnivore options. There is confusion when the paper talks about accuracy: for the task *Sensory Profile Prediction*, the definition corresponds to the ability of the LLM to describe the same sensory profile defined in the NECTAR dataset, while for *Recipe Preference Prediction*, accuracy corresponds to correctly estimating the recipe preference.

Theoretical Claims: There is just one theoretical claim, related to the LLM-guided combinatorial optimization problem in Section 6.1, with a proof in Appendix A. I believe that the theoretical derivation has a problem, overestimating the bound on $|f(\hat{x}^*) - f(x^*)|$; following their justification, the right bound should be

$$|f(\hat{x}^*) - f(x^*)| \leq 2 K \epsilon,$$

since the diversity term cancels out. However, this claim does not seem particularly important to their methodology. The selection of K is based on matching the length of a menu from another work (line 365). The explanation of equation 2 has a typo (line 361) when referring to $C_i$ instead of $C_j$, which is important to highlight because the indices $i$ and $j$ are used to refer to selection and constraint, respectively. This makes the explanation confusing.

Experimental Designs Or Analyses: Experiments and procedures are rigorous and follow a methodical evaluation process. Their experiments are Menu Design, Sensory Profile Prediction, and Recipe Preference Prediction.
I don't identify any concern with the way they performed the experiments, other than the confusion that could exist between metrics like accuracy (which has two definitions depending on the task).

Supplementary Material: I reviewed the supplementary material, which is composed of notebooks and files that correspond to the different tasks they proposed. Although it contains code referring to the experiments in the paper, it is not well documented and doesn't provide instructions on how to check their framework and results. Some cells don't run.

Relation To Broader Scientific Literature: The topic of interest is addressed by prior work in the food science domain without a relationship to the domain of artificial intelligence. The paper's novelty lies in introducing LLMs to food-science tasks of the authors' own design. The tasks were designed around data availability, and datasets like NECTAR and Food.com played an important role. They also relied on prior work on how human preferences can be distilled from LLMs.

Essential References Not Discussed: Not to my understanding.

Other Strengths And Weaknesses:

*Strengths*:
- The paper is well-written and easy to follow
- It is novel to consider LLMs in the field of sustainable food; it could enable the adoption of low-emission alternatives that preserve human satisfaction
- The inclusion of domain experts gives solidity to the proposed framework and experiments

*Weaknesses*:
- The theoretical bound in Section 6 is not used; it could have contributed to some analysis of the tractability of the LLM-assisted optimization problem
- The results based on humans could be statistically invalid given the number of participants included
- The supplementary material offers resources to corroborate their framework, but it's hard to follow and doesn't provide any instructions

Other Comments Or Suggestions: No additional comments.

Questions For Authors:
1. What is the primary purpose of the theoretical bound introduced in Section 6?
2. Do you agree that the sample size of the expert-assisted metrics impacts the applicability of the traditional methods for creating confidence intervals?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We greatly appreciate your feedback.

# Statistical analysis

Thank you for raising this point. We note that we could have analyzed this data in alternative ways, given both the small sample size ($n=60$ ratings across 30 products) and clustering in the data (namely, pairs of products that were evaluated by the same food scientist, or where their associated experimental design was generated by the same food scientist). We ran additional statistical tests and found that the statistical significance of our findings is robust to all analysis variations we tested, which include methods that do not rely on the central limit theorem. We will include this in the revised paper. The table of $p$-values across dimensions and methods is below. For all dimensions the mean value was higher (better) for o1-preview than for the human food scientists.

| Test | Accuracy | Complementarity | Specificity | Percent Time Saved |
|-|-|-|-|-|
| $t$-test (as in submitted paper) | 0.121 | 0.350 | **0.003** | **0.0002** |
| Paired $t$-test (taking into account the pairing across products) | 0.125 | 0.368 | **0.003** | **0.0003** |
| Wilcoxon signed-rank test (exact, for paired samples, nonparametric, does not rely on normality assumptions) | 0.078 | 0.284 | **0.005** | **0.0007** |
| Linear mixed model with random effect for evaluator ID | 0.075 | 0.993 | **0.001** | **0.0002** |
| Permutation test (nonparametric, exact, does not rely on normality assumptions) | 0.140 | 0.392 | **0.003** | **0.0003** |
| OLS where we control for product ID, evaluator ID, generator ID as fixed effects | 0.1812 | 0.416 | **0.001** | **0.0006** |

Additionally, we would like to clarify our sample size for this task. Each data point was a rating of a human or LLM experimental design to improve a product. 30 products total were evaluated, for each of the two groups (human and LLM), yielding **$n=60$** ratings total. The total number of human evaluators was 20.
As stated in Appendix D.1, in Phase 1 (the generation phase), 15 food scientists were recruited to generate experimental designs for 2 products each, yielding 30 products with associated experimental designs. o1-preview was also prompted to generate experimental designs for each of the 30 products. Then, in Phase 2 (the evaluation phase), another 15 food scientists (with overlap from Phase 1, but ensuring that no one evaluated their own designs) evaluated both human and LLM designs for 2 products each, on the four dimensions of accuracy, complementarity, specificity, and percent time saved. Then, for each of the four dimensions, a $t$-test was performed on the human vs. LLM scores, though as we show above the result is robust to alternative tests. We will make this more clear in the final version.

# Evaluation criteria and satisfaction metric

Regarding the comment "Something similar happens with the satisfaction metric, which relies on a proxy obtained by the LLMs, which the paper points out as biased towards omnivore options", we clarify that for the menu design task, the satisfaction metric is based on responses of actual human participants, in response to the question, "How satisfied are you with your set of choices?" Please let us know if we misunderstood your comment; we will be happy to respond further. Across tasks, we use a combination of automated metrics and those that rely on human judgment. In the preference prediction tasks, we use automated metrics (accuracy relative to ground-truth sensory panel or recipe rating data). In the menu design task, the emissions computation is automated. We agree that the human-reliant evaluation in the experimental design task could introduce bias, but we recruited 20 distinct expert human raters to minimize systematic bias.

# Definitions of accuracy

We appreciate this point and will make the distinction clear in our revision.
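For reference, the exact paired permutation test listed in the $p$-value table above (sign-flipping of paired differences) can be implemented in a few lines. The ratings below are made-up integers, not the study's data:

```python
from itertools import product

def paired_permutation_test(x, y):
    """Exact two-sided sign-flip permutation test for paired samples.

    Enumerates all 2^n sign assignments of the paired differences and
    returns the fraction whose mean is at least as extreme as observed.
    """
    diffs = [a - b for a, b in zip(x, y)]
    observed = abs(sum(diffs))
    count = total = 0
    for signs in product((1, -1), repeat=len(diffs)):
        total += 1
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed:
            count += 1
    return count / total

# Made-up paired integer ratings (e.g., LLM vs. human design scores per product)
llm = [8, 7, 9, 8, 7, 9, 8, 7]
human = [7, 7, 8, 7, 7, 8, 7, 7]
p = paired_permutation_test(llm, human)
print(round(p, 4))  # prints: 0.0625
```

Because all $2^n$ sign assignments are enumerated, the $p$-value is exact and requires no normality assumption, which is why such tests are a natural robustness check at small sample sizes.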
# Theoretical claim and typo

You are correct that the upper bound was overestimated and should be $2K\epsilon$. Thank you! We will fix this as well as the typo in the updated version.

# Purpose of theoretical bound

The theoretical bound provides the insight that the maximum error of our approach can be bounded based on the number of items selected (rather than, e.g., the size of the ground set) and the maximum item-level error of the LLM. We agree that future work could assess how downstream performance varies depending on the quality of the preference prediction method, and compare empirical results with the theoretical bound.

# Supplementary material

We apologize for the insufficient documentation. Though we are not allowed to modify our submission at this stage, we commit to releasing a public GitHub repository that is well documented and provides full instructions for reproducing our results, other than information we cannot release due to IRB restrictions or the policy of the data provider (for the sensory panel data).

Thank you for your thoughtful review!

---

Rebuttal Comment 1.1:

Comment: I appreciate the clarification provided by the authors regarding the statistical validity of their tests and the extra detail about how they were implemented. That allowed me to understand your original draft better. Thank you for considering the rest of the suggestions about the form and clarity of content. Assuming you include these in the camera-ready version, I will update my score.

---

Reply to Comment 1.1.1:

Comment: Thank you, YVD5, for reviewing our response and updating your assessment. We greatly appreciate it. Yes, we will definitely incorporate all of your suggestions in the camera-ready version.
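For completeness, a sketch of how the corrected $2K\epsilon$ bound follows. The notation here is assumed for illustration (writing $\hat f$ for the LLM-predicted objective; see Appendix A of the paper for the authors' exact setting):

```latex
% Notation (assumed): f is the true objective, \hat f the LLM-predicted one,
% x^* = \arg\max f and \hat x^* = \arg\max \hat f over selections of K items.
% If each item-level prediction is off by at most \epsilon, then
% |f(x) - \hat f(x)| \le K\epsilon for every feasible x. Hence
\begin{align*}
f(\hat{x}^*) &\ge \hat{f}(\hat{x}^*) - K\epsilon
  && \text{(prediction error on } \hat{x}^* \text{)} \\
&\ge \hat{f}(x^*) - K\epsilon
  && \text{(} \hat{x}^* \text{ maximizes } \hat{f} \text{)} \\
&\ge f(x^*) - 2K\epsilon
  && \text{(prediction error on } x^* \text{)},
\end{align*}
% and since f(x^*) \ge f(\hat{x}^*) by optimality of x^*, it follows that
% |f(\hat{x}^*) - f(x^*)| \le 2K\epsilon.
```

The diversity term cancels in the chain above, which is why the reviewer's corrected constant is $2K\epsilon$ rather than a larger one.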
Summary: The paper investigates how LLMs can help reduce the environmental impacts associated with food production. It establishes a typology of tasks relevant to sustainable food development, specifically focusing on design and prediction tasks at various levels (ingredients, recipes, and food systems). Evaluations of various LLMs across tasks reveal that LLMs can reduce the time required to generate experimental designs in sustainable protein formulation, outperforming expert human scientists across different metrics. However, they perform poorly in fine-grained tasks like menu design when simultaneously addressing climate impacts and human satisfaction. To overcome this limitation, the authors integrate LLMs with combinatorial optimization, achieving a substantial 79% emissions reduction in hypothetical restaurant scenarios without compromising customer satisfaction. The results underscore LLMs' strong potential, especially when complemented by optimization techniques, to accelerate sustainable food development and adoption.

Claims And Evidence: The paper provides substantial empirical evidence that LLMs can effectively reduce the time required by food scientists in sustainable protein experimental-design tasks, while improving specificity scores. In addition, combining LLM predictions with combinatorial optimization can successfully reduce emissions (79%) with respect to the baseline while maintaining consumer satisfaction, as shown in their human-subject experiments. These findings are supported by concrete experimental results involving expert evaluations and online surveys.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the application. The integration of LLM predictions with combinatorial optimization addresses realistic trade-offs encountered in sustainable food design. The chosen datasets (NECTAR sensory panel and Food.com recipes) are appropriate.
However, stronger baselines for menu design could further improve the evaluation, such as menus using lower-emission equivalents of the original menu (e.g., substituting beef with chicken) rather than only considering vegetarian or beef-free alternatives. Theoretical Claims: The proof of Proposition 6.1 is correct. This proposition is the only theoretical claim. Experimental Designs Or Analyses: n/a Supplementary Material: Parts A, C, and E. Relation To Broader Scientific Literature: The paper's key contributions are (1) the application of LLMs for scientific discovery and human preference modeling to the sustainable food domain, and (2) combining LLMs with optimization methods. For (1), the paper extends prior work on LLM-based modeling of human behavior to the sustainable food domain, not currently explored in the literature. However, as discussed in the results sections, the observations align with the existing food science literature, resulting in few novel findings. Regarding (2), the paper cites [Yang et al., 2024] and [AhmadiTeshnizi et al., 2024] using LLMs for mathematical optimization problems. However, the coupling of LLMs and integer quadratic programming is more closely related to the stream of data-driven optimization, in which the framework introduced in this paper has already been thoroughly studied. Essential References Not Discussed: The paper's contributions align with the broader scientific literature on the "predict-then-optimize" method. However, the paper does not explicitly cite or discuss critical works related to this stream of literature in Operations Research and related journals, such as [Smart “Predict, then Optimize”, Elmachtoub and Grigas, 2021] and [From Predictive to Prescriptive Analytics, Bertsimas and Kallus, 2020]. Other Strengths And Weaknesses: The paper is well-written and reads well.
The experiments involving human feedback (particularly the evaluation with 20 expert food scientists and the online surveys with a total of 552 participants) represent significant empirical contributions. However, the optimization approach introduced in Section 6 (combining LLM-based predictions with combinatorial optimization) is not particularly innovative, as it essentially adopts the well-known predict-then-optimize methodology where the prediction model is an LLM. Furthermore, given that the quantities to be predicted, such as human preferences and ratings, are highly uncertain, the predict-then-optimize framework is known to perform poorly with point predictions compared to more robust approaches that account for prediction uncertainty (see [From Predictive to Prescriptive Analytics, Bertsimas and Kallus, 2020]). Additionally, the paper does not include significant theoretical contributions. Other Comments Or Suggestions: n/a Questions For Authors: n/a Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We greatly appreciate your feedback. # Baselines We have added two baselines (expert chef, and the beef to chicken substitution you suggest), both of which we outperform, and three ablations (remove preferences component, remove diversity component, and remove both) for the menu design task. The chef was given the same set of instructions as the LLMs, but with more time: one hour. (Our o1-preview+IQP procedure takes ~3.5 minutes to run). We found that the chef did not always meet the ingredient availability constraint. Additionally, while the chef generated creative recipes, they lacked a strategy for reducing emissions, choosing to preserve several high-emissions dishes. Below are our new results, compared with the original menu and our proposed method o1-preview+IQP. SD is in parentheses. | | Emissions | Satisfaction | |-|-|-| | Original | 44.91 (38.83) | 8.40 (1.86) | | o1-preview+IQP ($\lambda=100$) | 8.70 (7.06) | 8.44 (2.20) | | o1-preview+IQP ($\lambda=0$) | 9.75 (7.50) | 8.16 (2.25) | | o1-preview+IQP ($\lambda=100$, remove preferences) | 9.44 (7.26) | 7.83 (2.56) | | o1-preview+IQP ($\lambda=0$, remove preferences) | 8.54 (6.72) | 7.12 (2.88) | | Expert chef | 41.28 (40.38) | 8.20 (2.51) | | Replace beef with chicken | 17.57 (15.07) | 8.04 (2.22) | # Novelty of empirical findings We would like to clarify that we do have several novel findings, most notably that 1) LLMs can outperform expert food scientists in the generation of experimental designs for improving sustainable protein formulations and 2) our LLM+IQP algorithm outperforms an expert human chef and several baselines in the design of sustainable menus. Additionally, we characterize performance of LLMs on preference prediction tasks, finding that they can be useful for coarse-grained prediction but that further work is needed for fine-grained prediction. This is in addition to our novel task formulations. 
# Related work Thank you so much for pointing us to the “predict-then-optimize” literature. We will cite it in the updated version. The ML contribution of our paper can be viewed as extending the predict-then-optimize framework by incorporating a component where the elements of the ground set (e.g. recipes) are generated, and applying LLMs to both the generation and prediction steps. Incorporating measures of LLM uncertainty and conducting additional theoretical analysis of this **generate**-predict-then-optimize framework is an ongoing area of work. Here we also address Vdt3’s references. Regarding [1] and [2] from their review, our framework differs in that the LLM is used to provide background knowledge on topics such as human preferences, to convert an imprecisely specified optimization problem (as is the case with many real-world planning, scheduling, decision making, etc. problems) to a precise optimization problem. Besides the sustainable menu design task we study, examples of applications include health coaching (“Generate a diet and exercise plan for achieving my goals while meeting constraints on ingredients and current injuries”), curriculum design (“Generate a curriculum that will maximize student engagement while meeting constraints on topics covered and class time”), and travel planning (“Generate a travel plan that meets my preferences and covers the following locations”). These applications cannot be readily addressed by the work in [1] and [2], which assume that the provided optimization problem is fully specified. Moreover, our work adds to the set of applications in this literature. Regarding Vdt3’s reference [3], we appreciate the pointer to this comprehensive survey. Our work differs from this literature due to our focus on leveraging background knowledge of LLMs to expand the class of reasoning problems that can easily be addressed with optimization tools. 
# Our contributions The Application-Driven ML reviewer instructions state, “Originality need not mean wholly novel methods. It may mean a novel combination of existing methods to solve the task at hand, a novel dataset, or a new way of framing tasks or evaluating performance so as to match the needs of the user.” We have made contributions along all of these dimensions. We have combined LLMs and optimization to address the sustainable menu design task (outperforming an expert human chef and several other baselines), introduced two datasets not previously studied by the ML community, and introduced four novel tasks and associated evaluation metrics. We agree that we do not have significant theoretical contributions. However, we do not believe that this is required for the Application-Driven ML track, which notes that the form of the contribution may differ from papers in the main track. Thanks again!
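To make the generate-predict-then-optimize idea discussed above concrete, here is a minimal toy sketch in Python. The recipe names, satisfaction scores, and emissions figures are invented for illustration, and exhaustive search over subsets stands in for the paper's integer quadratic program; none of this reflects the authors' actual implementation.

```python
from itertools import combinations

# Hypothetical per-recipe data: (name, predicted satisfaction, kg CO2e).
# In a generate-predict-then-optimize pipeline, the recipes would be
# LLM-generated and the satisfaction scores LLM-predicted; here both
# are made up.
recipes = [
    ("beef burger",    8.5, 14.0),
    ("chicken wrap",   8.0,  4.0),
    ("lentil curry",   7.6,  1.2),
    ("tofu stir-fry",  7.4,  1.0),
    ("mushroom pasta", 7.8,  1.5),
]

def best_menu(k, lam=0.5):
    """Optimize step via exhaustive search: choose k recipes maximizing
    total predicted satisfaction minus lam times total emissions."""
    def score(menu):
        sat = sum(s for _, s, _ in menu)
        co2 = sum(e for _, _, e in menu)
        return sat - lam * co2
    return max(combinations(recipes, k), key=score)

menu = best_menu(3)
print([name for name, _, _ in menu])
```

With these toy numbers, the high-emissions beef dish is dropped in favor of low-emission recipes with comparable predicted satisfaction, which is the qualitative behavior the rebuttal reports for the real system.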
Summary: This paper explores the potential of Large Language Models (LLMs) in addressing sustainability challenges in food systems. The authors define a typology of tasks related to sustainable food, including design and prediction tasks at the ingredient, recipe, and system levels. They evaluate six LLMs on four specific tasks: sustainable protein design, menu design, sensory profile prediction, and recipe preference prediction. The study finds that LLMs can significantly reduce the time spent on experimental design tasks compared to human experts but struggle with tasks requiring the balancing of multiple constraints, such as menu design. To address this, the authors propose a framework that integrates LLMs with combinatorial optimization, demonstrating a notable reduction in emissions. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No proof Experimental Designs Or Analyses: Yes. It lacks real-world testing. Supplementary Material: Yes, I have checked some parts of the supplementary file. For example, additional related works in Appendix B, datasets details in Appendix C, and methods details and prompts in Appendix D. Relation To Broader Scientific Literature: I think the paper is application-orientated. The method and results are mainly for food systems. Essential References Not Discussed: On the related topic of LLM for optimization. The paper EoH [1] published in ICML 2024 integrates LLM in a search framework for optimization including combinatorial optimization. LLMOPT [2] adopts LLM for optimization modeling. The survey paper [3] also provides a more systematic discussion of this topic. 
[1] Evolution of heuristics: Towards efficient automatic algorithm design using large language model, ICML 2024 [2] LLMOPT: Learning to Define and Solve General Optimization Problems from Scratch, ICLR 2025 [3] A systematic survey on large language models for algorithm design, 2024 Other Strengths And Weaknesses: Strengths: The paper pioneers the application of LLMs to sustainable food systems. The authors evaluate multiple LLMs across diverse tasks, providing a robust comparison of their capabilities. The proposed framework combining LLMs with combinatorial optimization is an interesting approach that addresses LLMs' limitations in mathematical reasoning and multi-constraint optimization. Weaknesses: The datasets used, particularly for sensory profile prediction, are relatively small and may not fully capture the complexity of human taste preferences. The menu design task is evaluated in a hypothetical setting, and the proposed solutions have not been tested in real-world environments. The authors acknowledge that they did not extensively explore prompt engineering, which could potentially improve LLM performance. Other Comments Or Suggestions: Refer to questions Questions For Authors: How did you determine the optimal hyperparameters (e.g., λ for diversity) in the combinatorial optimization framework, and what sensitivity analysis was performed to ensure robustness? Could you provide more details on the Ratcliff/Obershelp sequence matching algorithm used for recipe similarity, and why it was chosen over other similarity metrics? What specific techniques or architectures were used to standardize the style of LLM and human responses in the experimental design task, and how did this affect the evaluation outcomes? How did you handle the trade-off between computational complexity and solution quality in the integer quadratic programming (IQP) formulation for menu design? How do you design the prompts?
Can other advanced prompt engineering techniques improve the performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate your feedback. # Theoretical claims A proof is in Appendix A. YVD5 noted that the bound can be improved, which we will incorporate into the final version. # Related work Thank you, we will cite these works. Please see our response to XTjM, where we discuss these papers; we did not have space to include it here. # Results mainly for food systems Please see our response to VYkF explaining why we believe our paper is a strong fit for the Application-Driven ML track of ICML. # Dataset size and complexity of taste preferences The Food.com dataset contains 522,517 recipes and 1,401,982 reviews. The recipes contain at least 50, and on average 121.34 reviews. The mean SD of ratings across the recipes is 1.21 (on a 5 pt scale), suggesting diversity of ratings. The NECTAR dataset consists of 47 products, with at least 100 human sensory evaluations per product along 21 dimensions, with 1150 distinct human taste testers. The subjects were restricted to American omnivores, and we will add to the Limitations section that our results may not generalize outside of this population. However, the sample was designed to be representative of American omnivores as a whole, e.g. was diverse along dimensions of age, gender, and race. Please see the NECTAR 2024 report for exact statistics (we will include this in the final version). The mean SD of the ratings across products for the dimensions we study in our sensory profile prediction task ranged from 0.89 (Greasiness) to 1.84 (Overall Satisfaction) - both on a 7 pt scale - suggesting diversity of ratings. Finally, we note that the NECTAR dataset is expanding, and will reach 500 products and 50,000 sensory evaluations by 2026, further increasing the potential impact of introducing this dataset to the ML community. It has already expanded by 126 products (each with at least 100 sensory evaluations) since the submission of this paper. We plan to re-run our experiments on this expanded dataset. 
# Hypothetical setting Please see our response to 7evN. # Optimal $\lambda$, and sensitivity analysis We placed the maximum weight on diversity (for our input data $\lambda=100$ achieves the same result as $\lambda=\infty$) to ensure a diverse set of options for online participants, who may have allergies or other constraints. Assuming a high quality ground set, we think this is a reasonable default choice in general. We studied the sensitivity of the optimal menu to the value of $\lambda$. When decreasing $\lambda$ with a step size of 1, the generated menus are identical until $\lambda=3$, at which point the optimal menu changes by 2 recipes (out of 36), suggesting robustness to the exact choice of $\lambda$. Future work could use the LLM-as-judge framework to optimize the value of $\lambda$ without running human subjects experiments. | $\lambda$ | Set Difference (Num. Recipes That Differ from $\hat{S}^*$ for $\lambda=100$) | |-----------|-----------| | 3 | 2 | | 2 | 2 | | 1 | 2 | | 0.5 | 6 | | 0 | 12 | # Ratcliff/Obershelp This algorithm computes similarity between $S_1$ and $S_2$ as $\frac{2K_m}{|S_1| + |S_2|}$. $K_m$ is the number of matching characters, defined as the length of the longest common substring (LCS) plus, recursively, the number of matching characters on both sides of the LCS. More details can be found in the difflib documentation. We used this algorithm for recipe similarity because it is a simple and widely used algorithm that allows for partial matches. As a sensitivity analysis, we tested the Levenshtein distance, another commonly used method for text similarity, and found that the optimal menu changed by 4 recipes (out of 36), suggesting that the performance boost, if any, would be limited. We will include this and other similarity metrics (e.g. those based on semantic similarity) in the final version. # Style standardization We used o1-preview to standardize the style. Our prompt is in Appendix D.1, Figure 5.
As in [1], we did not assess how standardization affected the outcomes; in general, responses remained similar after the standardization step, with a few exceptions, e.g. where the humans referred to their personal experiences. # Trade-off between computational complexity and solution quality Our problem sizes are relatively small, with a ground set of size 56. We were thus able to solve our problem instances optimally in less than a second. Essentially, there was no tradeoff between computational complexity and solution quality for our use case. # Prompt engineering For style standardization, we adapted a prompt from prior work [1]. For the other tasks, the ML and culinary/food science members of our team collaborated to iteratively design the prompt. We aimed to test LLMs’ potential off the shelf, which is why we didn't do extensive prompt engineering, as in [2]. [1] Si, C. et al. "Can LLMs Generate Novel Research Ideas?" ICLR 2025. [2] Bulian, J., et al. “Assessing LLMs on Climate Information.” ICML 2025. Thank you for your review! --- Rebuttal Comment 1.1: Comment: Thank you for your responses. Most of my concerns have been addressed. I will adjust my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you, VdT3, for reviewing our response and updating your assessment. We greatly appreciate it.
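As an aside on the Ratcliff/Obershelp measure discussed in the rebuttal above: Python's standard-library difflib implements exactly this ratio, so the similarity can be computed in a few lines. The recipe strings below are made-up examples, not items from the paper's ground set.

```python
from difflib import SequenceMatcher

def recipe_similarity(s1: str, s2: str) -> float:
    # SequenceMatcher.ratio() returns 2*K_m / (|s1| + |s2|), where K_m
    # counts matching characters found by locating the longest common
    # substring and recursing on the pieces to either side of it.
    return SequenceMatcher(None, s1, s2).ratio()

a = "grilled chicken tacos"
b = "grilled chickpea tacos"
sim = recipe_similarity(a, b)
print(round(sim, 3))  # close to 1 for near-identical recipe names
```

Because the measure allows partial matches, small edits (here "chicken" vs. "chickpea") only slightly reduce the score, which is the property that makes it useful for penalizing near-duplicate recipes in a menu.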
Summary: This paper investigates how LLMs can contribute to developing sustainable food options (e.g., reducing greenhouse gas emissions). The authors define a typology of design and prediction tasks for sustainable food at three resolutions (ingredients, recipes, and food systems). The paper focuses on four tasks: 1) Experimental Design, 2) Menu Design, 3) Sensory Profile Prediction, 4) Recipe Preference Prediction. Method-wise, the main contribution is the proposed integration of the LLMs’ background knowledge (especially about human preferences) with traditional combinatorial optimization to tackle real-world constraints. Claims And Evidence: The paper's main claim is that LLMs can reduce the time and effort needed to design more sustainable plant-based protein formulations. This is backed by experiments which show that the LLM outperformed or equaled the human baseline on specificity and time saved, evaluated by food scientists. LLMs alone handle multiple constraints poorly when designing menus (they tend to produce fully vegan menus), and combining LLM-based preference estimates with a combinatorial optimization approach can yield large emissions cuts while keeping customer satisfaction. This is backed by Figure 2. In general, the claims are adequately backed by the experiments. However, I would like to see more ablations and analysis on the proposed method (i.e., o1-preview+IQP). Methods And Evaluation Criteria: Strengths: - The methods and evaluation are straightforward and make intuitive sense Weaknesses: - No ground-truth validation for menu-based emissions reductions (what if actual diners select differently in practice?)
- The LLM-generated recipes are not tested for real taste, only inferred preferences - Fine-grained sensory predictions lack stronger benchmarks (e.g., simpler regression models based on molecular food science) Theoretical Claims: The paper does propose theoretical claims and provide proofs in the appendix, although the correctness of the proof would not largely affect the claims and conclusions of the paper. Experimental Designs Or Analyses: I would like to see more analysis/ablations on the proposed method (i.e., o1-preview+IQP). Supplementary Material: I checked mostly the prompts. Relation To Broader Scientific Literature: I think the paper adequately positions itself in the broader scientific literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See Methods And Evaluation Criteria. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate your feedback. # Ablations and analysis Our submission included two ablations - removing the IQP component entirely (just prompting o1-preview directly to revise the menu) and removing the descriptions. We add three other ablations: removing the estimated preferences, removing the diversity term ($\lambda=0$), and removing both. Satisfaction declines ($p=0.01$) when both are removed. We also add two baselines: expert chef and replacing beef with chicken. Our method improves upon both. Please see our response to XTjM for our results table, which we did not have space to replicate in this response. # Ground-truth validation for menu-based emissions reductions Our setting, an online experiment, is commonly used in the sustainable food literature [1,2,3,4] since it mimics popular online food delivery platforms. We acknowledge that some prior work tried to align incentives via delivering the meals to 1 in 30 participants [3] or providing food vouchers to 1 in 20 participants [4]. Given the complexities of replicating this setup for LLM generated recipes, we leave this to future work. Regardless, it is not guaranteed that conclusions based on online environments will generalize to real-world food environments. We will note this in the Limitations section. [1] Attwood, S. et al. "Menu engineering to encourage sustainable food choices when dining out: An online trial of priced-based decoys." Appetite 149 (2020). [2] Weijers, RJ, et al. "Nudging towards sustainable dining: Exploring menu nudges to promote vegetarian meal choices in restaurants." Appetite 198 (2024). [3] Lohmann, PM., et al. "Choice architecture promotes sustainable choices in online food-delivery apps." PNAS Nexus 3.10 (2024). [4] Banerjee, S, et al. "Sustainable dietary choices improved by reflection before a nudge in an online experiment." Nature Sustainability 6.12 (2023). 
# LLM generated recipes are not tested for real taste, only inferred preferences Yes, in the menu design task, the generated recipes are not actually prepared and tasted by participants. This would involve a number of logistical and ethical challenges and is beyond the scope of the current study. We will note this in the Limitations section. We do use actual tasting data in our sensory profile prediction and experimental design tasks, and actual ratings in our recipe preference prediction task. # Sensory profile prediction task lacks stronger benchmarks Our baselines in the submitted version were constructed on the basis of the food science literature and the available nutritional information, e.g. for overall satisfaction and purchase intent, the average of normalized fat and sodium content is used. We agree that more sophisticated baselines could be tested. To address this, we have obtained an expert food scientist baseline for the Overall Satisfaction dimension of the sensory profile prediction task, which we think is the most relevant baseline for food science practitioners. The expert food scientist spent 2 hours on the task (approximately 1.5 minutes per pair). Statistically significant results (after Bonferroni correction) are in bold. We find that across all pairs for the Overall Satisfaction dimension, the expert food scientist does not achieve a statistically significant improvement over a random baseline, underscoring the difficulty of the task. The expert food scientist’s accuracy is similar to that of o1-preview, and both underperform our baseline based on the nutritional information and food science literature. We find that for coarse-grained prediction (quartile 4 of the ground truth preference gap), the expert food scientist again does not achieve a statistically significant improvement over a random baseline. 
However, both o1-preview and the nutritional baseline do outperform a random baseline (81% and 86% accuracy respectively), supporting the potential of automated methods, particularly for coarse-grained comparisons, which could save time in testing and development. We will include this result in the final version. | Data Subset (Quartile) | Expert Food Scientist | o1-preview | Nutritional Baseline | |-|-|-|-| | All | 0.65 | 0.64 | 0.73 | | Quartile 1 | 0.62 | 0.43 | 0.71 | | Quartile 2 | 0.60 | 0.60 | 0.80 | | Quartile 3 | 0.77 | 0.68 | 0.55 | | Quartile 4 (largest gap in ground truth preferences; “coarse grained”) | 0.59 | **0.81** | **0.86** | Thank you for your support!
Constrained Belief Updates Explain Geometric Structures in Transformer Representations
Accept (poster)
Summary: In this paper, the authors propose a theoretical framework suggesting that transformers implement constrained Bayesian belief updating, and explain observed geometric patterns in transformer representations. Using the tools of mechanistic interpretability, they derive and empirically validate precise predictions of attention patterns, intermediate fractal-like structures, and final belief states in the trained transformers. The study focuses on the Mess3 family of Hidden Markov Models (HMMs). ## Update after rebuttal I believe the authors have addressed and clarified some of my concerns. I maintain some reservations on the generalizability of the present findings, especially given the ad hoc nature of the considered learning task (not clearly connected to realistic tasks where transformers are employed) and of the theoretical analysis, here made possible because of the simplicity of the task. I also think that giving more prominence to the requisites for converging to a good representation (in terms of amount of data and training) could clarify when such a "constrained" regime could appear in the training of a transformer in a more realistic setting. However, I believe the paper is well written, and the presented analysis is sound and original, so I will raise my score to a 3. Claims And Evidence: The paper makes clear theoretical claims regarding the correspondence between transformer mechanisms and Bayesian belief updating constrained by architectural elements. Empirical evidence from analyses of transformers trained on synthetic Mess3 datasets mostly validates these claims, although some discrepancies (e.g., in the analysis in the appendix, and when the alpha parameter exceeds a certain range) are not addressed in the present work, and in my opinion are slightly at odds with the strong claim of full interpretability of the model computations. 
On the other hand, in the abstract the authors state that their approach "provides a principled lens on how gradient descent resolves the tension between optimal prediction and architectural design", but in the paper there is no analysis of the learning procedure and the "algorithm discovery" phase, since the analysis only takes into account the asymptotic solution after training (and does not even consider what happens in a data-scarce regime). Methods And Evaluation Criteria: The authors utilize mechanistic interpretability (analysis of attention circuits and residual streams) and principal component analysis (PCA) to demonstrate the emergence of predicted geometric structures within the transformers' internal representations, relying on the synthetic and tightly controlled scenario under study. However, in some cases the authors rely on visual evaluation for quantifying the degree of agreement between their predictions and the empirical results (see e.g. fig. 4, where there is a general qualitative agreement but also visible differences, and assessing how relevant they are is difficult for the reader). Theoretical Claims: The presented theoretical framework is mathematically rigorous and predicts intermediate and final representation geometries. However, assumptions behind this theoretical setup, particularly regarding optimal Bayesian inference in constrained attention mechanisms, require clearer justifications, especially concerning their applicability to natural tasks beyond simplified HMM settings. In particular, the authors claim that it is crucial to investigate how model-specific constraints can warp the optimal Bayesian inference, but it is not clear how this interpretative route can be generalized to more complex cases (e.g. in LLMs trained on natural text, where there are open questions about the computational order mismatch between the transformer computations and the correct inference procedure).
Experimental Designs Or Analyses: The empirical evaluations, confined exclusively to synthetic datasets (Mess3), mostly validate the theoretical predictions of the authors. However, the authors hint at the possible discrepancies that can arise in some regimes that are not investigated in this paper (e.g., large alpha) and some predictions that are not matched by the transformer implementation are left without explanation. This undermines the claims about the explanatory framework presented by the authors. Supplementary Material: The supplementary material effectively complements the main text but contains crucial insights (e.g., hyperparameter analyses, deviations between predicted and empirical embeddings) that would significantly improve clarity, and probably call for additional experiments, if presented within the main body of the paper. Instead, some of these nuances are a bit ignored in the claims of "predictability and theoretical traceability" of the toy model studied by the authors. Relation To Broader Scientific Literature: While the paper appropriately references computational mechanics and transformer interpretability literature, it does not contrast this study against other mechanistic or interpretability-focused frameworks (e.g., circuit-based methods, dictionary learning, causal interventions), making its novelty and unique advantages somewhat unclear. Many of these works also study how exact inference can be embedded within the architectural constraint of the transformer. Essential References Not Discussed: None. Other Strengths And Weaknesses: *Strengths:* * Rigorous mathematical derivations with clear empirical support in a controlled environment. *Weaknesses:* * Empirical scope confined to toy synthetic scenarios. * Limited to no analysis of the deviations from the predicted computational behavior. * No analysis of the training process and how the optimization lands on the discovery of the optimal inference algorithm. 
* Limited discussion of how findings extend or generalize to practical transformer models on natural language data. Other Comments Or Suggestions: The authors should explore practical implications or architectural modifications inspired by their theory. Additionally, discussing the conditions under which the theory may fail or require adjustments in more realistic or complex tasks would provide valuable balance and depth. Questions For Authors: * Can the authors discuss how gradient descent helps the transformer converge to the optimal representation? E.g., how much data is needed? * Can the authors discuss when there are deviations from the predicted behavior and why? Does something change qualitatively in the inference model (e.g. at large alpha)? * Is the MLP layer warping mechanism something the authors think will extend to different settings? * How might the theoretical framework be extended to transformers trained on natural language data? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
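The Bayesian belief updating discussed in this review is the standard HMM filtering recursion. A minimal numpy sketch of one exact (unconstrained) update step, using a made-up two-state HMM rather than the paper's Mess3 process:

```python
import numpy as np

def belief_update(b, T, E, o):
    """One step of exact Bayesian belief updating for an HMM.
    b: belief over hidden states; T[i, j] = P(s'=j | s=i);
    E[s, o] = P(obs=o | s); o: observed symbol index."""
    b_pred = b @ T                # propagate belief through the dynamics
    b_post = b_pred * E[:, o]     # weight by likelihood of the observation
    return b_post / b_post.sum()  # renormalize to a distribution

# Toy two-state HMM (illustrative numbers, not Mess3 parameters).
T = np.array([[0.9, 0.1], [0.2, 0.8]])
E = np.array([[0.8, 0.2], [0.3, 0.7]])
b = np.array([0.5, 0.5])
b = belief_update(b, T, E, 0)
print(b.round(3))
```

Iterating this map over an observation sequence traces out the belief-state geometry (e.g., the fractal structure on the simplex for Mess3) that the paper compares transformer representations against; the "constrained" update studied in the paper deviates from this exact recursion because attention cannot apply it recursively.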
Rebuttal 1: Rebuttal: Thank you for your thorough review and insightful feedback. We appreciate your recognition of our work's rigor and the empirical support in our controlled setting. We also acknowledge the validity of your concerns regarding quantitative evaluation, learning dynamics, scope, and unexplained deviations. We have performed new analyses and made new figures to address these points and will make changes to the text, described below. **1. Learning Dynamics & Quantitative Evaluation:** You rightly pointed out the lack of analysis of the learning procedure despite the abstract's phrasing, and the need for quantitative evaluation beyond visual inspection (e.g., Fig 4). * **Learning Dynamics:** We have tracked how well the network activations fit our theoretical predictions during training. New Figure S ([link](https://bit.ly/4ldbEVZ)) shows the Mean Squared Error (MSE) when regressing activations onto theoretical geometries (Constrained Belief for Post-Attention, Full Belief for Post-MLP) throughout training. It demonstrates that the model rapidly converges towards the predicted geometries (Normalized MSE \<\< 1), quantitatively showing *that* gradient descent finds these solutions. * **Quantitative Fit:** New Figure R ([link](https://bit.ly/3QTfFkG)) provides the absolute final MSE values, confirming a quantitative fit to our theory in the trained model compared to random initialization, addressing the need for metrics beyond visual assessment. * **Abstract Phrasing:** We agree the phrasing "gradient descent resolves the tension" is misleading. Our core point is how the transformer's architecture itself (parallel attention vs. recursive Bayes) necessitates a specific computational form (Constrained Belief Update) to approximate the optimal solution. While Fig S shows GD finds this solution, our analysis focuses on what solution the architecture predisposes.
We will revise the abstract to emphasize the role of architectural constraints in shaping the emergent computation, and remove the reference to GD. **2. Scope (Single-Layer Focus & Generalizability):** Our core derivation indeed focuses on the computation within the first layer. However, we have now analyzed deeper models. * **Multi-Layer Analysis:** As detailed in our response to Reviewer tAc4, we analyzed 4-layer transformers. New Figures X (quantitative MSE across layers, [link](https://bit.ly/4iP8vdo)) and Y (qualitative PCA visuals across layers, [link](https://bit.ly/4ch5W1m)) show that the 1st attn. layer consistently implements the Constrained Belief update, while subsequent layers refine the representation towards the Full Bayesian belief state. * **HMM Focus & Generalization:** We used Mess3 HMMs not just for tractability, but because such controlled systems are essential for isolating and rigorously testing fundamental principles – like how architectural constraints shape optimal prediction – before scaling hypotheses to the complexities of natural language, where similar principles likely apply but are harder to isolate. As for the MLP, its role in refining the representation might generalize, though its specific function will be task-dependent. **3. Discrepancies & Justification:** We acknowledge the discrepancies noted (e.g., Appendix C embeddings, deviations outside the central alpha range mentioned in Sec 4.3). * **Contextualizing Deviations:** While these deviations warrant further study (as noted in our limitations), the new quantitative results (Figs R, S, X) demonstrate that the overall fit to our theory is strong. Also, at large alpha Mess3 generates near-IID data, which leads to a lack of learnable structure – we believe this explains why there are deviations at large alpha. 
* **Quantitative Predictions:** Importantly, our spectral theory correctly predicts the necessity of multiple attention heads when the HMM transition matrix has negative eigenvalues (Sec 4.4.1, Fig 3), a non-trivial validation of the underlying principles. * **Justification for Eq. 5:** We have clarified the derivation of the Constrained Belief Update (Eq. 5) in our response to tAc4, including a new diagram [link](https://bit.ly/3XGCU5g) explaining why it represents the natural parallel approximation achievable by a single attention layer due to its architectural inability to access intermediate tokens recursively. This clarification hopefully makes the theoretical motivation and assumptions behind our core mechanism clearer. **4. Relation to Literature:** Our work's unique contribution is in using computational mechanics not just to describe optimal prediction, but to theoretically predict how specific architectural constraints (attention's parallel structure, preventing recursive updates) lead to systematic, geometric deviations (the Constrained Belief Update) from the unconstrained Bayesian ideal. This predictive, theory-driven approach complements empirical circuit discovery or other mech interp. methods. Thank you again for your valuable feedback!
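As a supplementary illustration for readers, the spectral logic behind the multi-head prediction above can be checked numerically. The sketch below is only illustrative, using a hypothetical 2-state transition matrix with a negative eigenvalue (not the Mess3 parameters from the paper): it verifies $T^n = \sum_\lambda \lambda^n T_\lambda$ and shows that the negative mode makes the influence of a token $n$ steps in the past oscillate in sign, a pattern a single attention head with monotone positional decay cannot express.

```python
import numpy as np

# Hypothetical 2-state transition matrix with a negative eigenvalue
# (illustrative only; not the Mess3 parameters from the paper).
T = np.array([[0.2, 0.8],
              [0.8, 0.2]])

lams, V = np.linalg.eig(T)  # eigenvalues: 1.0 and -0.6
Vinv = np.linalg.inv(V)
# Spectral projectors T_lambda, so that T^n = sum_k lams[k]^n * P[k]
P = [np.outer(V[:, k], Vinv[k, :]) for k in range(len(lams))]

for n in range(1, 6):
    approx = sum(lams[k] ** n * P[k] for k in range(len(lams)))
    assert np.allclose(approx, np.linalg.matrix_power(T, n))

# The lambda = -0.6 mode alternates sign with distance n, so past-token
# influence oscillates rather than decaying monotonically.
neg = float(lams[np.argmin(lams)])
print([round(neg ** n, 3) for n in range(1, 5)])  # [-0.6, 0.36, -0.216, 0.13]
```

Capturing both the constant ($\lambda = 1$) and oscillating ($\lambda < 0$) modes with non-negative attention weights requires distributing them across heads, consistent with Fig 3.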
Summary: This paper studies internal representations in transformers trained with next-token prediction on sequences generated by Hidden Markov Models. It reveals that transformers (focusing on single-layer transformers) perform constrained Bayesian belief updates, implementing an approximate version of optimal Bayesian inference in their parallel architecture. The paper proposes a mechanistic algorithm allowing attention heads to implement this process, creating predictable geometrically structured representations. The authors also develop a spectral theory to explain how data-generating transition matrices influence attention behavior. Through experiments on Mess3 class Hidden Markov Models, the authors validate their predictions about attention patterns and representation geometry. In summary, this work investigates the connection between optimal prediction theory and mechanistic interpretability, providing a principled understanding of how transformers balance optimality and architectural design. Claims And Evidence: I find the empirical evidence in support of all claims made in this publication sufficiently compelling. The authors convincingly demonstrate that the proposed belief state update algorithm is the primary mechanism implemented by the single-layer transformer. A separate treatment of the negative eigenvalue case and the role of multi-head attention is also quite notable. My only concern at the moment is the observation of a "scalar discrepancy in the first two embedding vectors". I do not see this as being critical for the main conclusions of the paper, but hope that this might be understood eventually. Methods And Evaluation Criteria: Given the nature of this publication, the simplistic Mess3 class Hidden Markov Models appears to be an adequate choice that allowed the authors to probe the precise mechanisms implemented by the transformer model. 
The paper could potentially benefit from discussing more general families of HMMs and potential qualitative differences in the state dynamics, but in my opinion, this is not critically important. Theoretical Claims: While I have not exhaustively verified all of the claims (for instance, I have not tried computing the eigenvalues of T), I read all of the authors' theoretical claims and could not identify any clear mistakes. Most theoretical statements and intermediate results appear to be correct. Experimental Designs Or Analyses: I reviewed the overall design of experiments aimed at exploring the properties of trained transformers and comparing them with theoretical predictions. The proposed analysis seems adequate and appears to capture remarkable similarity between theoretical expectations and properties of trained models. Supplementary Material: I read Appendix A and briefly looked through Appendices B-E. Relation To Broader Scientific Literature: This work investigates the specific mechanisms by which transformers capture belief state geometry in their residual stream, building upon a setup similar to “Transformers Represent Belief State Geometry in their Residual Stream.” It contributes to the broader field of mechanistic interpretability of transformers, which includes studies on linear and nonlinear regression, arithmetic, and other tasks. The paper also relates to research exploring transformers' ability to learn Hidden Markov Models (HMMs), as well as studies on the limitations of transformers in this context (for example, “On Limitation of Transformer for Learning HMMs”). 
Finally, it connects to: (a) 'An Explanation of In-context Learning as Implicit Bayesian Inference,' which uses HMMs to understand in-context learning through Bayesian inference, and (b) research on the information-theoretic properties of in-context learning (the observed fractal-like structure of system attractors can be linked to the growth of mutual information between the observed token sequence and the current state representation). Essential References Not Discussed: I am not aware of any essential references that were not discussed in the paper. Other Strengths And Weaknesses: In my opinion, the two most important strengths of this publication are: (a) its creative setup and approach to understanding how transformers gradually attain information about the hidden state underlying the observed system dynamics; (b) the depth of theoretical understanding of the very particular mechanisms that a trained transformer appears to implement. I also identified two weaknesses that made understanding this paper a little difficult. One is that the proposed mechanism and experimental studies seem to be focused on a single-layer transformer. The publication hardly mentions this fact and there appear to be just a few places (one in the Appendix) where this exceptionally important detail is stated explicitly. This makes the main contributions stated by the authors in the abstract and the introduction less general than they may appear to be. I would expect the learned mechanisms in multi-layer transformers to be much more nuanced, and while this clearly falls outside the scope of the current publication, it needs to be stated very explicitly. Secondly, the narrative structure of the paper is a little difficult to follow at times. For example, an exceptionally important equation (5) is not explained sufficiently well. 
It is my understanding that it can be related to the lowest-order term in the decomposition of the operator product $(T+\epsilon T_1)(T+\epsilon T_2)\dots (T+\epsilon T_n) \approx T^n + \epsilon (T_1 T^{n-1} + T T_2 T^{n-2} + \dots + T^{n-1} T_n) + \dots$, but this needs to be explained. Otherwise, it is not entirely clear why this particular expression is the “one and only” sensible way of approximating the constrained belief update using a single-layer attention mechanism. Other Comments Or Suggestions: My primary suggestion is to strengthen the paper's narrative. I believe it would benefit from more in-depth discussions, such as the derivation of equation (5) and highlighting potential state geometry differences in other HMMs. If space is limited, detailed derivations could be moved to the appendix. Questions For Authors: Two primary questions I have are related to two potential weaknesses: 1. Is my understanding correct that all stated results (both theoretical and experimental) are only in relation to single-layer transformers? If so, have the authors considered a very natural multi-layer extension? What were the most notable differences in how multi-layer transformers learn the updates in their residual stream? 2. Is my understanding of the origins of equation (5) correct? If not, what is the explanation, and are there any formal statements explaining why we expect this exact form to be implemented by a self-attention layer? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive assessment of our work, and also your constructive feedback, which improved the clarity and impact of our work. We address your primary concerns regarding the single-layer focus and the justification for Eq. 5, as described below, with new figs and analysis: **Single Layer Focus and Multi-Layer Extension** You are correct that our analysis focuses on a single-layer transformer, and we acknowledge this could have been stated more explicitly throughout the main text. We will revise the Abstract, Introduction, Methodology, and Discussion sections to clearly delineate that the primary analysis investigates the computation within the first layer, while also discussing how this relates to deeper networks. Although we decided to focus on the first layer for this initial work, we plan on extending the theory to the multi-layer setting in future work. To directly address your question about multi-layer extensions experimentally, we have trained and analyzed 4-layer transformers. Our key finding is that the first attention layer consistently implements the constrained belief update mechanism (Eq. 5) derived in the paper, producing the intermediate fractal geometry. Subsequent layers (specifically the MLPs and later attention blocks) then act to transform this intermediate representation progressively closer to the full Bayesian belief state geometry (Eq. 2). We will add quantitative evidence for this to the appendix and summarize it in the main text. This includes: - PCA visualizations of activations after each layer in the 4-layer models, showing the transition from constrained to full belief geometry. New fig [here](https://bit.ly/4ch5W1m) - Mean Squared Error (MSE) analysis from linear regressions of residual stream activations onto both the theoretical constrained geometry (Eq.5) and the full belief geometry (Eq.2). 
This analysis, performed layer-by-layer, quantitatively shows the initial fit to Eq.5 post-attention-1, and the increasing fit to Eq.2 in deeper layers. New fig [here](https://bit.ly/4iP8vdo) These results demonstrate that our single-layer analysis captures the fundamental computation performed by the initial attention mechanism, even within deeper networks, while later layers perform refinement. **Intuition for Eq.5:** We appreciate you highlighting the need for a clearer explanation of Eq. 5. We will incorporate the following improved intuition into Section 4.3: - Bayesian inference (Eq. 2) requires multiplying token-specific transition matrices, a fundamentally recursive process where updates depend on the full history integrated up to the previous step. - In contrast, an attention head computes its output at position $d$ via a parallel, feedforward weighted sum (Eq. 3) of value vectors ($v_s$) from source positions $s \le d$. - Crucially, each $v_s$ contains only local information from position $s$. It cannot directly access or depend on the specific tokens between $s$ and $d$ due to the parallel nature of the value computation and attention weighting. - Therefore, the most information that token $z_s$ can independently contribute to the belief at position $d$ within this single-layer constraint is the correction derived from knowing $z_s$ occurred $d-s$ steps prior, assuming a default starting belief ($\pi$) and no knowledge of intervening tokens. This independent displacement from the stationary distribution over latent states is the difference of probability distributions $\Pr(S_d \mid Z_s) - \Pr(S_d)$; in linear-algebraic terms, this contribution is precisely $\pi T^{(z_s)} T^{d-s} - \pi$. - Summing these contributions over all past sources naturally yields the constrained belief update form in Eq. 5. It represents the best possible parallel approximation achievable by a single attention layer given its architectural limitations. 
- [link](https://bit.ly/3XGCU5g) depicts each of these above points. We believe this revised explanation clarifies why Eq. (5) is the natural computation for the attention layer to implement when reconciling optimal prediction with its inherent structure. Regarding the minor scalar discrepancy for the first two embeddings, we agree it doesn't invalidate the core findings. While still under investigation (as noted in Appendix C), the remarkable agreement across the attention pattern, OV vectors, and subsequent embeddings strongly supports our theoretical framework. And as the referee noted, the spectral perspective allows us to anticipate that multiple attention heads are needed when the transition matrix for the training data has negative eigenvalues. We will also strengthen the narrative as suggested, ensuring the scope (initial focus on first layer) and the derivation logic are clearer. While exploring other HMMs is interesting future work, Mess3's tractability was essential for this detailed mechanistic study. We hope these clarifications and planned revisions address your concerns and demonstrate the significance of our findings. Thank you again for your valuable feedback. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed reply. I do believe that the proposed modifications will make the publication much clearer. While my understanding of the derivation appears to have been correct, I am glad that it will be stated much more explicitly in the main text. Additional empirical multi-layer results are also quite interesting and insightful.
Summary: This work bridges mechanistic interpretability and Bayesian definitions of optimal predictions (constrained belief updating) to study emerging geometric structures in small (1-layer) transformer models. This is done with a solid theoretical foundation, in a ‘controlled’ experiment: We know the generative model behind the dataset used for training (the Mess3), that is an HMM with 3 states, that depends on two parameters. Sadly, this is quite far from my area of expertise, so please do not give much weight to my review, and forgive misunderstanding/mistakes. Claims And Evidence: The claims seem well supported by the methodology. In this, it is a good work: research question, claim, method, and experimental evidence are clearly explained. Methods And Evaluation Criteria: The proposed method and evaluations are well motivated and defined, in a controlled setup. Theoretical Claims: No, they are outside my area of expertise Experimental Designs Or Analyses: They seem valid, the models used are justified. Supplementary Material: Yes, especially the explanation of the HMM. Relation To Broader Scientific Literature: They allow us to understand transformer models a little better than before. Essential References Not Discussed: I'm not familiar with the literature. Other Strengths And Weaknesses: **Pros**: The paper seems very well motivated and structured, and the fact that it uses a ’toy’ controlled setup for training and testing makes it more reliable, and allows to produce testable predictions. I have also enjoyed the discussion on one and two head transformer models. For what I could understand, it was quite an interesting read. 
 **Cons**: I found some parts of the paper, especially the parts related to the spectral algorithm, harder to understand. While this is not necessarily a shortcoming of the paper itself (it could be due to my unfamiliarity with the topic), I would suggest the authors add a couple more intuitive explanations. Also, I think it is not clear how the fractal drawings emerge from the belief states. As it is a key point, it may be worth adding a couple more lines of explanation, either in the main body or in the first section of the supplementary material. Other Comments Or Suggestions: Minor:
Z_{d+1} for a sequence is confusing, maybe use Z_{d:T}, or some other notation to express that it is a series and not a single token? Questions For Authors: No questions Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer bbxf, Thank you for your positive assessment and thoughtful feedback. We appreciate that you find the work well-motivated and structured. We care very much about making the paper understandable and clear, so your suggestions for clarification are very helpful! **Emergence of Fractal Geometry:** You asked for clarification on how the fractal drawings emerge from the belief states. That's an excellent point. These structures (visualized in Figs 1D, 4) arise directly from the iterative application of Bayesian belief updating rules within the probability simplex representing beliefs over the HMM's hidden states. - Each time a new token is observed, the current belief state (a point in the simplex) is transformed according to the update equations (Eq. 2 for the full Bayesian update, Eq. 5 for our derived constrained approximation implemented by attention). - Because the underlying Mess3 HMM involves probabilistic transitions between states, applying these update rules repeatedly for different token sequences acts mathematically like an Iterated Function System (IFS). IFSs are known to generate self-similar, fractal geometries. - The specific fractal shapes shown in our figures represent the collection of all possible belief states reachable after observing sequences up to a certain length. The "folding" or self-similarity comes from the fact that different sequences can lead to nearby belief states, and the update rules map regions of the simplex onto smaller regions within itself. (This connection between HMMs, Bayesian updates, and fractals is explored theoretically, e.g., by Jurgens & Crutchfield, 2021). We will add a clearer explanation of this link between the iterative belief updates and the resulting fractal geometry either in the main text (likely near Figure 1/Section 3, space permitting) and/or expand on it in Appendix A. 
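To make the IFS picture above concrete, the following minimal sketch (our own illustration, using a hypothetical 2-state, 2-token HMM rather than Mess3) enumerates the belief states reachable by recursive Bayesian updating; the finite set obtained at depth $L$ approximates the attractor of the iterated maps.

```python
from itertools import product
import numpy as np

# Hypothetical 2-state, 2-token HMM (illustrative parameters, not Mess3).
# Tz[z] is the token-specific transition matrix; their sum T = Tz.sum(0)
# is row-stochastic, with stationary distribution pi.
Tz = np.array([[[0.60, 0.10],
                [0.20, 0.10]],
               [[0.10, 0.20],
                [0.10, 0.60]]])
pi = np.array([0.5, 0.5])  # stationary distribution of T (symmetric here)

def belief(seq):
    """Recursive Bayesian belief update: one IFS map per observed token."""
    b = pi.copy()
    for z in seq:
        b = b @ Tz[z]
        b = b / b.sum()
    return b

# Collect all belief states reachable within L steps; this finite set is a
# depth-L approximation of the (generally fractal) attractor of the IFS.
points = sorted({round(float(belief(seq)[0]), 6)
                 for L in range(1, 10)
                 for seq in product([0, 1], repeat=L)})
```

With two hidden states the beliefs live on a line segment, so the self-similarity is one-dimensional; for three states (as in Mess3) the same construction fills out the fractal subsets of the 2-simplex shown in our figures (cf. Jurgens & Crutchfield, 2021).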
**Intuition for the Spectral Algorithm (Section 4.3):** Thank you for highlighting that Section 4.3 could benefit from more intuition, which we will add to the text. The main goal of the spectral analysis is to understand precisely how information or influence from a past token (at source position s) propagates to affect the belief state at the current destination position d. - This influence mathematically depends on the sequence of hidden state transitions between s and d, captured by powers of the HMM transition matrix, $T^{d-s}$. - Spectral decomposition (using eigenvalues $\lambda$ and associated projectors $T_\lambda$) is a standard mathematical tool to analyze matrix powers because it simplifies $T^n$ into a sum $\sum \lambda^n T_\lambda$. - The eigenvalues ($\lambda$) are crucial because they tell us the rate at which the influence of past information decays (if $|\lambda|<1$) or even oscillates (if $\lambda$ is negative or complex) as the distance n=d-s increases. - Our key finding here is that the learned attention weights ($A_{d,s}$ in Eq. 3) directly implement this propagation effect, effectively learning to approximate the $\lambda^{d-s}$ decay predicted by the theory (Eq. 10). This explains why attention patterns often show exponential decay, and why multiple heads are needed (Fig 3) to capture oscillatory patterns arising from negative eigenvalues. - Furthermore, this spectral perspective allows us to make precise, verifiable predictions about the learned OV vectors ($\vec{v}_s$ in Eq. 13) and token embeddings ($\vec{x}_s$ in Eq. 14), directly connecting the dynamics of the data (via T's eigenvalues) to the specific parameters learned by the transformer. We will revise Section 4.3 to include a paragraph explaining this intuition more clearly. **Notation Suggestion:** We completely agree that using $Z_{d+1}:$ for a sequence could be confusing. Thank you for pointing this out! 
We will revise the notation throughout the paper to use $Z_{d+1:\infty}$ to unambiguously denote the sequence of future tokens. **Conceptual Diagram:** To further aid in understanding the core theoretical idea, we have also created a new diagram [link here](https://docs.google.com/document/d/1CDwiJp8qG4d4OqgntBkIMYQhxJ1vy-2j2BecubhrL5U/edit?tab=t.0#heading=h.pwspurd6saek), that visually explains how the architectural constraints of attention lead naturally from the ideal Bayesian update (Eq. 2) to our derived constrained belief update (Eq. 5). Thank you again for your helpful suggestions, which will undoubtedly improve the clarity of our paper. We appreciate your support for our work. Sincerely, The Authors
Summary: This paper discusses the circuit implemented within a trained Transformer to perform (partial) Bayesian inference on a type of Hidden Markov Model. The paper combines theoretical analysis with interpretability tools to show that the implemented circuit and corresponding representation are optimal when accounting for the architectural constraint of 1-layer attention. Overall, this paper produces a satisfactory explanation of the inner working mechanism of the Transformer on the Mess-3 HMM. Claims And Evidence: Yes, the claims are well supported. Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked the theoretical claims produced by this paper and am convinced that they are correct. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: The authors made a very thorough discussion in Section 2.1 already. This paper should be of interest to the communities of both mechanistic interpretability and representation theory. The most related work is [1], which discusses the same setting and finds that Transformers approximately learn to encode the belief state linearly in the hidden state. This paper produces a comprehensive analysis of how this belief state is calculated within the Transformer in a 1-layer setting. [1] Transformers represent belief state geometry in their residual stream Essential References Not Discussed: None that I can think of. Other Strengths And Weaknesses: # Strength 1. The visualization in Figure 2 is very intuitive. 2. The theory in Section 4 is clean and captures the essence of the architectural constraint. # Weakness 1. Only the 1-layer case is studied here; while this keeps the paper compact, how the current results generalize to multiple layers is unclear. Other Comments Or Suggestions: In Section 2.2, it would be better to mention that the same deduction and notation is inherited from Shai et al. Questions For Authors: 1. 
Following weakness 1, have the authors tried ablating the number of layers, and does the first layer implement a similar algorithm? Is there any new phenomenon in the multi-layer case that can't be explained by the 1-layer analysis here? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your strong endorsement of our work. We appreciate your positive assessment of our paper's strengths, and we agree that this paper should make a nice contribution to both interpretability and the study of representations. We also appreciate your constructive feedback, which nudged us to investigate and confirm that our results indeed generalize to the multi-layer case. **Multi-Layer Extension:** To directly address your question about multi-layer extensions, we have trained and analyzed 4-layer transformers. Our key finding is that the first attention layer consistently implements the constrained belief update mechanism (Eq. 5) derived in the paper, producing the intermediate fractal geometry. Subsequent layers (specifically the MLPs and later attention blocks) then act to transform this intermediate representation progressively closer to the full Bayesian belief state geometry (Eq. 2). We plan to add quantitative evidence for this to the appendix (and summarize in the main text). This includes: 1. PCA visualizations of activations after each layer in the 4-layer models, showing the transition from constrained to full belief geometry. See new figure [here](https://docs.google.com/document/d/1CDwiJp8qG4d4OqgntBkIMYQhxJ1vy-2j2BecubhrL5U/edit?tab=t.0#heading=h.7gjcfnihl09i) 2. Mean Squared Error (MSE) analysis from linear regressions of residual stream activations onto both the theoretical constrained geometry (Eq. 5) and the full belief geometry (Eq. 2). This analysis, performed layer-by-layer, quantitatively shows the initial fit to Eq. 5 post-attention-1, and the increasing fit to Eq. 2 in deeper layers. 
See new figure [here](https://docs.google.com/document/d/1CDwiJp8qG4d4OqgntBkIMYQhxJ1vy-2j2BecubhrL5U/edit?tab=t.0#heading=h.uauqgkf7zz7l) These results demonstrate that our single-layer analysis captures the fundamental computation performed by the initial attention mechanism, even within deeper networks, while later layers perform refinement.
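For intuition on the two theoretical geometries used as regression targets, the following minimal sketch implements both updates for a hypothetical 2-state, 2-token HMM (illustrative parameters, not Mess3; the exact power convention may differ from the paper's indexing).

```python
import numpy as np

# Hypothetical 2-state, 2-token HMM (illustrative, not the Mess3 parameters).
Tz = np.array([[[0.60, 0.10],
                [0.20, 0.10]],
               [[0.10, 0.20],
                [0.10, 0.60]]])
T = Tz.sum(axis=0)         # full transition matrix (row-stochastic)
pi = np.array([0.5, 0.5])  # its stationary distribution

def full_belief(tokens):
    """Recursive Bayesian update (Eq. 2 style): each step conditions on the
    full history via the token-specific transition matrix."""
    b = pi.copy()
    for z in tokens:
        b = b @ Tz[z]
        b = b / b.sum()
    return b

def constrained_belief(tokens):
    """Parallel constrained update (Eq. 5 style): each past token z_s adds an
    independent correction Pr(S_d | z_s) - Pr(S_d), propagated forward by
    powers of T, with no access to the intervening tokens."""
    d = len(tokens)
    b = pi.copy()
    for s, z in enumerate(tokens):
        cond = pi @ Tz[z]
        cond = cond / cond.sum()  # belief from prior pi after seeing only z_s
        b = b + cond @ np.linalg.matrix_power(T, d - 1 - s) - pi
    return b

# With a single observed token the two updates coincide; on longer sequences
# the constrained update only approximates the full Bayesian belief.
print(np.allclose(full_belief([0]), constrained_belief([0])))  # True
```

Because each correction term sums to zero, the constrained output remains a probability distribution; the gap between `constrained_belief` and `full_belief` on longer sequences is exactly what the subsequent layers in the 4-layer models appear to close.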
Elucidating Flow Matching ODE Dynamics via Data Geometry and Denoisers
Accept (poster)
Summary: This paper gives a theoretical characterization of the convergence behavior of flow model ODE trajectories. The authors show that flow trajectories can be divided into three stages -- initial stage where particles are attracted towards dataset mean, intermediate stage where particles are attracted towards local clusters, and terminal stage where particles converge to data points. Claims And Evidence: All claims are supported with theoretical analysis. However, I am not familiar with the literature, and I did not check the proofs, so I could be mistaken. Methods And Evaluation Criteria: Not applicable. Theoretical Claims: No, I did not check the proofs. Experimental Designs Or Analyses: Not applicable. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This paper gives a convergence guarantee as well as characterization of flow ODE trajectories under the assumption that the data distribution is a mixture of local clusters. Essential References Not Discussed: None, to the best of my knowledge. Other Strengths And Weaknesses: - **Significance** : this paper provides convergence guarantees under weaker assumptions compared to previous work such as [1]. Furthermore, the paper provides some insights into the memorization behavior of flow and diffusion-based generative models. Specifically, the flow velocity could overfit to the empirical data distribution near terminal time-steps, leading to memorization. [1] Gaussian Interpolation Flows, JMLR, 2025. Other Comments Or Suggestions: - At the bottom of page 2, there is a typo in the definition of medial axis Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive comments. Thank you for appreciating the convergence guarantees and insights into the memorization behavior of flow and diffusion-based generative models. We will fix the missing parenthesis in the definition of the medial axis to make it clearer in the revised manuscript. We would also like to point out that we have further strengthened our theoretical results with new experiments on FFHQ to show the utility of our theoretical findings (in addition to results we already have in the appendix); see response to Reviewer m3yf, as well as discussions to potential practical implications as in response to Reviewer zqMS. We will clarify these points in the revision. We believe that incorporating those changes will further strengthen our paper. --- Rebuttal Comment 1.1: Comment: I have read through other Reviewer's comments and the authors' rebuttal, and I believe the additional clarifications further strengthen the paper. In particular, I find the experimental results on FFHQ interesting, and I believe the results can serve as a basis for analyses of Rectified Flows and Consistency Models. Hence, I have raised the score from **Weak Accept** to **Accept**. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback. We appreciate your support and are glad the additional clarifications and experiments helped strengthen the paper further.
Summary: This paper studies the convergence behavior of the ODE trajectories in flow matching w.r.t. training data using analytical tools from geometry. Specifically, it provides an extensive analysis of the ground-truth FM ODE trajectories under the affine Gaussian path, and shows how trajectories are shaped by data geometry in terms of “attracting” and “absorbing”. They show that trajectory evolution can be divided into the initial/intermediate stages, where trajectories move towards the overall data support (mean) and then towards clusters of the data, and a final stage, where convergence analysis is provided under much milder conditions than previous works. The work presents a lot of novel theoretical claims and findings, from well-posedness of the FM ODE trajectory, per-sample level trajectory evolution patterns, convergence under a broad class of data distributions including submanifolds, and equivariance, and finally provides some connection to memorization phenomena when target distributions are discrete measures. Claims And Evidence: The claims are mostly presented as propositions and theorems with substantial proofs and some empirical analysis of toy data. The findings on trajectories seem correct, although I did not have the chance to finish all the extensive proofs. Since the analysis is mostly based on a geometric point of view, it mostly talks about how a single sample is absorbed into the support of a cluster in the data and eventually converges to the target data point. Methods And Evaluation Criteria: The proposed method (or theoretical analysis framework) is through the lens of data geometry, which allows more general target distributions, and since it only cares about every single trajectory, I guess the framework makes sense. There is no benchmark or experimental comparison as the work is mostly theoretical and analytical. Theoretical Claims: I only checked the proofs for the propositions and theorems that appeared in the main text. 
The assumptions are clearly stated (e.g., bounded support for general attraction towards the data support, the existence of the FM ODE in $t\in[0,1)$, the well-separated local cluster assumption for cluster absorbance, the finite 2-moment assumption for the probability distribution). I don't think the well-separated local cluster assumption holds for all datasets, and since the clustering analysis is an important conclusion of the work, the authors should provide some explanation of what will happen if the data is not well-clustered. Experimental Designs Or Analyses: There are no experimental results in the main text. There are some empirical validations of the attraction behavior on the cifar10 dataset and a very simple toy dataset in the appendix, but these are all clustered datasets. It would be more interesting if a realistic dataset with less obvious classes/clusters, such as CelebA, were provided. Supplementary Material: I reviewed the experiments and some of the proofs for the theorems in the main text. Relation To Broader Scientific Literature: To my knowledge, the theoretical contribution is quite novel. As the authors claim, they are the first to study individual trajectory-level behavior in flow matching, and the convergence analysis is conducted under weaker assumptions than prior works (discussed in the related work section). Essential References Not Discussed: I am not aware of other essential references that are missing. Other Strengths And Weaknesses: The paper is very math-heavy and packed with lots of theoretical findings. Some of them may not be very relevant to the major claims on convergence behavior (e.g., equivariance). The paper studies the dynamics of ground-truth trajectories under the Gaussian path (or mostly the diffusion path?); it mentions that the findings can be used to support consistency models but does not include any results on that end. It is also unclear whether the conclusions can be generalized to non-random pairing (such as rectified flow after several rectifications), or the non-Euclidean case. 
Other Comments Or Suggestions: No other comments. Questions For Authors: What is the meaning of $C_{\epsilon}^S <1$ in proposition 4.4? Does it hold for most of the clusters? How should I interpret this proposition to make a useful design of the prior noise? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s positive feedback and recognition of our novel theoretical contributions on the FM ODE convergence under weak assumptions, per-sample trajectory evolution patterns, and connections to memorization. We believe that the per-sample-level analysis is important as it could inform data-geometry-based sample step scheduling or steering strategies. Below, we respond to the comments/questions: **Cluster assumption and prior noise design:** We agree that the well-separated local cluster assumption may not hold for all datasets. Our analysis suggests similar attracting and absorbing behavior could happen for locally dense regions. For example, we can prove this for the particular case where the data distribution is obtained from well-separated clusters convolved with Gaussian noise. Specifically, Lemma C.11 identifies FM ODE trajectories under additional Gaussian convolution with early-stopped FM ODE trajectories w.r.t. the original distribution (up to parameterization), which can help extend the current Proposition 4.4 to this more general setting. In practice, our analysis suggests that prior noise with a bias towards the cluster center could effectively control the sampling outcome to reflect the desired cluster characteristics. This could hold for general dense regions of the data distribution, as demonstrated in our new FFHQ experiments below. **Cluster experiments on FFHQ:** Following your suggestion, we conducted new experiments on the FFHQ dataset, which is similar to CelebA, with a pre-trained EDM model (Figures in PDF: https://anonymous.4open.science/r/ICML-7924/FFHQ_exp.pdf). Figure 1's t-SNE plot shows that images with similar illumination tend to cluster together, especially at brightness extremes, as RGB encoding makes brightness differences strongly influence distances in pixel space. This creates distinct dense regions for the darkest and lightest images.
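As an aside, the absorbing behaviour described here can be reproduced in a minimal toy sketch (our construction, not the paper's FFHQ setup): for a two-point data set, the posterior-mean denoiser is available in closed form, and integrating the variance-exploding probability-flow ODE from initial noise biased towards one cluster lands the trajectory in that cluster.

```python
import numpy as np

# Toy sketch of cluster absorption (our construction, not the paper's FFHQ
# experiment). For a discrete data set the posterior-mean denoiser is exact,
# and we integrate the variance-exploding probability-flow ODE
#   dx/dsigma = (x - D(x, sigma)) / sigma
# with Euler steps on a geometric sigma schedule.
data = np.array([[10.0, 0.0], [-10.0, 0.0]])  # two well-separated "clusters"

def denoiser(x, sigma):
    # Exact posterior mean E[x_0 | x_sigma = x] for equal-weight point masses.
    logits = -np.sum((data - x) ** 2, axis=1) / (2.0 * sigma**2)
    w = np.exp(logits - logits.max())
    return (w / w.sum()) @ data

def sample(x_init, n_steps=400, s_max=80.0, s_min=1e-3):
    sigmas = np.geomspace(s_max, s_min, n_steps)
    x = np.asarray(x_init, dtype=float)
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        x = x + (s_next - s) * (x - denoiser(x, s)) / s  # Euler step
    return x

# Initial noise biased towards a cluster is absorbed by that cluster.
right = sample([80.0, 0.0])
left = sample([-80.0, 0.0])
assert np.linalg.norm(right - data[0]) < 0.1
assert np.linalg.norm(left - data[1]) < 0.1
```

Only the initial noise differs between the two runs; unbiased random starts fall into either cluster depending on the draw.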
We performed sampling using a pre-trained EDM with three initialization strategies: random, near-light, and near-dark regions. All samples used standard EDM sampling (18 steps from noise 80 to 0.002), varying only the initialization noise. The results in Figure 2 show that when initialized randomly, samples exhibit various illumination levels. When trajectories start near light regions, samples consistently display light characteristics, while dark initialization produces dark samples. These results validate our theoretical findings on cluster absorption even where clustering is based on subtle, continuous features rather than discrete classes. **Non-random pairing and non-Euclidean case:** We suspect convergence still holds for non-random pairing, though the general dynamics might differ significantly. For non-random pairings (as in rectified flow), the posterior distribution $p(x|x_t)$ is a Dirac delta, and the posterior mean is determined by the deterministic pairing. Hence, we suspect that the convergence in terminal time still holds in this case. But the ODE dynamics in other stages are different: for example, paths may straighten after rectification (due to convergence to the optimal coupling), and then there will be no travel to the data mean in the initial stage. The non-Euclidean case can be more challenging, as the transition kernels in those cases might be more complicated than Gaussian kernels. These topics are interesting and worth exploring in the future, and we will provide more discussion in the revision. We want to reiterate that our analysis of random pairing in Euclidean space is broadly applicable, as most generative models fit this setting. **Consistency model and equivariance:** We agree that it would be beneficial to better connect our convergence result to consistency models and equivariance. The consistency model aims to distill the entire flow matching trajectory (from noise to data) into a single map, which necessarily requires the FM ODE to converge.
Our work formally validates this convergence, ensuring consistency models are mathematically well-defined. We will add a discussion on this connection following our main convergence results. The equivariance result naturally extends from the convergence of FM ODE trajectories. Additionally, it provides another perspective on how data geometry affects the ODE trajectories: a similarity transformation results in a deterministic and explicit transformation of FM ODE trajectories. **$C_{\epsilon}^{S}<1$ in Proposition 4.4:** This condition ensures the formula is well-defined. It could fail when the weight $a_S$ on a cluster is large, but this is not problematic. In fact, when $C_{\epsilon}^{S}\geq 1$, the cluster exhibits a stronger attracting force and $\sigma_0(S, \epsilon)$ equals infinity, meaning that Proposition 4.4 would apply to any initialization time, not just after time $\sigma_0(S, \epsilon)$. We will clarify this in the revision. We will incorporate these changes in the revision. Thank you for your feedback; we hope the revision addresses your concerns.
Summary: This paper presents a theoretical analysis of Flow Matching (FM) models addressing how data geometry influences the dynamics of the ODE trajectories used in FM-based generative models. They show that the denoiser guides the ODE dynamics through attracting and absorbing behaviors. They identify three stages of ODE evolution and establish the convergence of FM ODE trajectories under weak assumptions. The authors also provide insights into the memorization phenomenon and equivariance properties of FM ODEs. Claims And Evidence: The claims in the submission generally seem to be supported by the theoretical evidence. The authors provide mathematical proofs to support their claims about the convergence of FM ODE trajectories, the role of the denoiser in guiding these trajectories, and the influence of data geometry. However, the paper lacks empirical evidence to demonstrate the practical implications of the theoretical results. Adding experimental validation would strengthen the claims, particularly the implications of the identified stages for sampling efficiency and model performance. Methods And Evaluation Criteria: The paper seems theoretically sound and provides insights into the dynamics of FM ODE models, particularly the role of the denoiser. However, the lack of practical evaluation, e.g. experiments on toy problems or benchmark datasets, is a significant gap. While the theoretical analysis is extensive, demonstrating these concepts on a simple synthetic or toy dataset would have helped validate the claims and made the results more tangible. Theoretical Claims: The theoretical claims in the paper seem sound, to my knowledge, e.g. the existence and convergence of FM ODE trajectories (Theorems 4.1 and 5.3) and the attracting/absorbing dynamics (Theorems 3.1 and 3.2). But the paper's assumptions (e.g., data on submanifolds with positive reach) and occasionally difficult-to-follow notation may limit the generality and accessibility of the results.
Is the hashtag in the medial axis definition a typo? It would help to elaborate more on concepts like "medial axis" and "reach". Experimental Designs Or Analyses: The paper lacks experimental validation to support its theoretical claims. While the analysis is thorough, the absence of experiments on toy problems or benchmark datasets limits the ability to assess the practicality. Supplementary Material: I have gone through appendices, but not thoroughly. Relation To Broader Scientific Literature: I believe that the results are insightful in relation to the surrounding scientific literature, but I feel that the related work section could use more substance to contextualize the contributions. Essential References Not Discussed: The Jacobian of the denoiser was mentioned as being related to the covariance. The following works also mention this. Ben-Hamu, Heli, et al. "D-Flow: Differentiating through Flows for Controlled Generation." Forty-first International Conference on Machine Learning. Rissanen, Severi, Markus Heinonen, and Arno Solin. "Free Hunch: Denoiser Covariance Estimation for Diffusion Models Without Extra Costs." arXiv preprint arXiv:2410.11149 (2024). It would be nice to have more discussion about how the Jacobian/covariance ties into the geometric insights of this paper. Other Strengths And Weaknesses: Strengths: - Provides theoretical insights, with convergence results for FM ODE dynamics. - Analyses connecting data geometry with FM model trajectories. - Extends prior theoretical analyses on flow matching and diffusion models. Weaknesses: - No experiments or demonstrations on toy problems or datasets. - Sometimes difficult-to-follow notation and dense mathematical content reduce accessibility. - Conditions (e.g., data lying on submanifolds with positive reach) might not apply broadly, potentially limiting practical relevance. - Does not seem to link theoretical results to empirical findings in related literature. 
Other Comments Or Suggestions: - Some assumptions, e.g. data lying on submanifolds with positive reach, could be explicitly discussed in terms of their limitations and applicability to real-world datasets. - The paper would benefit from a small empirical demonstration, even on a toy dataset, to illustrate theoretical results and make them more tangible. Questions For Authors: 1. Could you clarify how restrictive the assumption of data being supported on submanifolds with positive reach is in practical scenarios? 2. How do you envision your theoretical results guiding practical improvements in the design or training of FM models? Could you provide examples? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and for recognizing the theoretical importance of our work on the convergence of FM ODE dynamics, the analyses connecting data geometry with FM model trajectories, and the memorization phenomenon. Below, we address the main comments/questions: **Experimental validation:** We conducted experiments on synthetic data and CIFAR-10, presented in the Supplement due to space constraints. Specifically, we demonstrated the emergence of the three stages of ODE evolution on synthetic data (Figures 7-10), the convergence to the data mean (Figure 11), and the memorization phenomenon on CIFAR-10 (Figures 13, 14), aligning with our theoretical findings in Sections 4 and 5. Following Reviewer m3yf's suggestion, we have now also conducted experiments on the human face dataset (FFHQ). Please see the response to Reviewer m3yf for more details. **Assumption on positive reach:** First, we note that this assumption is only used for our convergence results (Thm 5.3, 5.4) and equivariance (Prop. 5.7), not elsewhere in the paper. Also, the positive reach assumption is common in the manifold learning literature (e.g., Fefferman et al., 2016) and is not overly restrictive. Specifically, the medial axis $\Sigma_{\Omega}$ of a manifold $\Omega$ is the set of points in the ambient space having more than one closest point on $\Omega$. The local feature size $lfs(x)$ at a point $x \in \Omega$ is the distance from $x$ to the medial axis $\Sigma_{\Omega}$. The reach of $\Omega$ is defined as $\min_{x\in \Omega} lfs(x)$. The local feature size of any submanifold embedded in Euclidean space is positive everywhere. Consequently, all compact submanifolds have positive reach, which includes most real-world datasets under the manifold hypothesis (e.g., image patches, audio).
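These definitions can be illustrated numerically on a toy manifold (our example, not from the paper): for a circle of radius $r$ in the plane, the medial axis is the centre, the local feature size is $r$ everywhere, and hence the reach is $r$.

```python
import numpy as np

# Numerical illustration of medial axis / local feature size / reach on a toy
# manifold (our example): a circle of radius r in the plane. Its medial axis
# is the centre, so lfs(x) = r for every x on the circle and reach = r.
r = 2.0
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
circle = r * np.stack([np.cos(theta), np.sin(theta)], axis=1)

def num_closest(p, tol=1e-12):
    """Number of (sampled) closest points of the circle to p."""
    d = np.linalg.norm(circle - p, axis=1)
    return int(np.sum(d < d.min() + tol))

assert num_closest(np.zeros(2)) == len(circle)  # centre: many closest points
assert num_closest(np.array([0.7, 0.3])) == 1   # generic point: unique projection
reach = np.linalg.norm(circle, axis=1).min()    # distance to medial axis {0}
assert abs(reach - r) < 1e-9
```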
Moreover, our core analysis (e.g., Theorem 4.1) is local, meaning the positive reach assumption could potentially be relaxed to only require positive **local feature size**. This relaxation would extend our results to an even wider class of data distributions, including singular manifolds (e.g., sheets of manifolds glued together with singularities). Fefferman et al. "Testing the manifold hypothesis." JAMS 29.4 (2016): 983-1049. **Practical implications of our theory:** We believe that understanding the behavior of the per-sample ODE trajectory is critical, as that is the trajectory followed during the inference stage. Our theoretical analysis has several direct practical implications for FM model design and training: 1. Memorization mitigation strategies: Our analysis in Section 5.2 reveals how the terminal stage of ODE evolution influences memorization phenomena, suggesting potential regularization approaches based on the denoiser near the terminal time, e.g., regularizing the Jacobian of the denoiser. 2. Latent space design principles: Our analysis reveals how data geometry directly shapes ODE trajectories, with important implications for latent space design. By leveraging the data's clustering structure, one can design more effective latent representations that facilitate both efficient sampling and precise feature formation. Our equivariance result suggests that preserving key equivariance properties can lead to more robust, generalizable, and interpretable models. 3. Theoretical foundation for one-step distillation methods: Our convergence results provide rigorous mathematical justification for one-step distillation approaches like consistency models. These methods rely on the assumption that FM ODEs converge to stable, well-defined mappings, which our results formally validate. 4.
Optimized sampling strategies: Our characterization of the three-stage ODE evolution suggests adaptive sampling approaches that allocate fewer integration points to the initial and final stages while concentrating computational resources on the intermediate feature formation stage. This provides theoretical grounding for empirically successful non-uniform time discretization schemes and explains why the practice of starting at moderately large noise levels (e.g., 80 in EDM) is successful, although infinite initial noise is required theoretically. **Other comments:** We will improve the paper's clarity in the revision to make the notation and concepts more accessible. For example, the hashtag in the medial axis definition is shorthand notation for the cardinality of a set, and we will explain this explicitly in our future revision. Regarding the Jacobian of the denoiser, we will update its discussion and include the suggested references. For its relation to the data geometry, as the denoiser converges to the projection onto the data manifold, the rank of the Jacobian of the denoiser can reflect the local dimensionality of the data, which could help indicate whether the trained model is memorizing, e.g., when the rank is very low. We will incorporate these changes in the revision. Thank you for your feedback, and we hope the revision will address your concerns.
Categorical Distributional Reinforcement Learning with Kullback-Leibler Divergence: Convergence and Asymptotics
Accept (poster)
Summary: This paper analyzed categorical TD learning with KL loss in the tabular setting. They also proposed a preconditioned version of the algorithm and provided an asymptotic normality analysis. They also conducted experiments to verify the theoretical results. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes, I reviewed the theoretical parts. Relation To Broader Scientific Literature: This paper analyzes categorical TD learning with KL loss, which is used in many deep distributional reinforcement learning algorithms. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The motivation is good. On the one hand, the KL loss (instead of the Cramer loss) is adopted by many practical algorithms, and studying the performance under this loss function helps in understanding real-world algorithms. On the other hand, it also inspires the development of distributional reinforcement learning algorithms that utilize other probabilistic metrics as loss functions. 2. It is a good idea to use preconditioning to improve KL-CTD and guarantee convergence. 3. For readers, it is quite clear to use the asymptotic variance to compare the performance of algorithms in the simple Monte-Carlo and tabular settings. Weaknesses: 1. Although the motivation is quite good, and there is also experimental and some theoretical evidence to illustrate the advantages of using the KL loss compared with the Cramer loss, considering that this paper focuses on the simplified tabular setting, I think it would be better to have theoretical evidence demonstrating that, in terms of learning distributions (not just the means, i.e. value functions in Section 6), the KL loss has advantages over the Cramer loss. For example, you can provide the asymptotic normality of the vectors p_k^{Cramer-CMC} and p_k^{KL-CMC} 2.
The paper mentioned that the KL loss is used in many practical algorithms. If there could be experimental comparisons between the KL loss and the Cramer loss (for example, see the linear function approximation algorithm in [1]) under the function approximation setting, I believe the paper would be more complete. [1] Lyle, C., Bellemare, M. G., and Castro, P. S. A comparative analysis of expected and distributional reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2019. Other Comments Or Suggestions: Some notations are used without being defined, such as $\|\cdot\|_{\pi}$ and $d^\pi$. There is a typo in the equation in Lines 225-228. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time and effort in reviewing our paper, and we are pleased that they found our motivation compelling, appreciated our use of preconditioning to obtain a convergence guarantee, and found our use of asymptotic variance for the analysis to be a clear choice. **Comparison between Cramér and KL losses for learning distributions** We thank the reviewer for this suggestion, as we agree that an asymptotic comparison of the probability vectors learnt by the Cramér and KL losses is a valuable comparison to make. We report this result in a new proposition, which we state below. Let us define the iterates $p_k^{\text{Cram\'er-CMC}}$ as follows: $p_{k+1}^{\text{Cram\'er-CMC}} = p_k^{\text{Cram\'er-CMC}} + \alpha_k (h(G) - p_k^{\text{Cram\'er-CMC}})$. Then under the assumptions of Lemma D.3., we have that $$k^\beta ( p_{k}^{\text{Cram\'er-CMC}} - \tilde{p}^\pi) \overset{d}{\to} \mathcal{N}\left(0, \mathbb{E}_{G}[h(G) h(G)^T] - (\tilde{p}^\pi)(\tilde{p}^\pi)^T\right).$$ We also want to emphasize that the goal of our work is not necessarily to argue that the KL version of the algorithm is always superior to the Cramér-based version (and indeed this is not always the case). Our main motivation is to study the KL version in its own right, which is the fundamental algorithm underlying the original deep RL implementation of distributional RL (C51; Bellemare et al., 2017). **Experimental comparisons between the KL loss and the Cramér loss under the function approximation setting** We appreciate that while our focus on the tabular setting potentially seems a limitation due to its simplicity, we want to emphasize that this was a conscious choice in order to develop a core understanding of categorical TD algorithms based on KL gradients, which have not been studied before, without confounding factors that often arise in larger-scale experiments (such as adaptive optimisation, function approximation, target networks, etc.).
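The Cramér-CMC proposition stated above is simple enough to sanity-check by simulation; in the sketch below (our construction: a three-atom support, returns $G \sim \mathrm{U}[0,2]$, and step sizes $\alpha_k = 1/(k+1)$, for which $p_K$ is exactly the running mean of the two-hot targets $h(G_i)$), the empirical covariance of $\sqrt{K}\,(p_K - \tilde{p}^\pi)$ matches $\mathbb{E}_{G}[h(G) h(G)^\top] - (\tilde{p}^\pi)(\tilde{p}^\pi)^\top$.

```python
import numpy as np

# Monte Carlo check of the stated CLT (our toy setup). With alpha_k = 1/(k+1),
# p_K is exactly the running mean of the two-hot targets h(G_1), ..., h(G_K),
# so sqrt(K) (p_K - p*) should have covariance E[h h^T] - p* (p*)^T.
rng = np.random.default_rng(0)
R, K = 2000, 500                       # independent runs, iterations per run
z = np.array([0.0, 1.0, 2.0])          # categorical support (our choice)

G = rng.uniform(0.0, 2.0, size=(R, K))            # returns G ~ U[0, 2]
idx = np.clip(G.astype(int), 0, 1)                # left atom of two-hot h(G)
w = G - z[idx]                                    # mass on the right atom
H = np.zeros((R, K, 3))
rows, cols = np.arange(R)[:, None], np.arange(K)[None, :]
H[rows, cols, idx] = 1.0 - w
H[rows, cols, idx + 1] = w

p_star = np.array([0.25, 0.5, 0.25])              # E[h(G)], computed by hand
scaled = np.sqrt(K) * (H.mean(axis=1) - p_star)
emp_cov = np.cov(scaled, rowvar=False)

# E[h h^T] for G ~ U[0, 2] (computed by hand) minus p* (p*)^T:
EhhT = np.array([[1/6, 1/12, 0.0], [1/12, 1/3, 1/12], [0.0, 1/12, 1/6]])
theory = EhhT - np.outer(p_star, p_star)
assert np.allclose(emp_cov, theory, atol=0.02)
```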
We additionally note that the paper suggested by the reviewer performs an empirical comparison of KL-CTD and Cramér-CTD in the function approximation setting: C51 is the deep learning equivalent of KL-CTD, and S51 as used in their paper is the deep learning equivalent of Cramér-CTD. **Other Comments Or Suggestions** We thank the reviewer for bringing attention to these typos. We will be sure to fix them in the text. **References** Bellemare, M. G., Dabney, W., and Munos, R. A distributional perspective on reinforcement learning. In International Conference on Machine Learning, 2017. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I think authors has addressed all my questions, and I will keep my positive score.
Summary: In this paper, the authors studied the theoretical properties of categorical distributional TD with KL loss. They proposed a preconditioned version of the algorithm called PKL-CTD, proved its asymptotic convergence, and derived the asymptotic distribution of the resulting value estimators. These theoretical results also provide valuable insights for practitioners. Claims And Evidence: The claims made in the submission are supported by sound theoretical analysis or empirical evidence. Methods And Evaluation Criteria: The proposed methods make sense for the problem at hand. Theoretical Claims: The theoretical results seem reasonable in this work. I checked most parts of the proofs and found them correct. Experimental Designs Or Analyses: The experiments are adequate for the purpose of validating the theoretical findings. Supplementary Material: I reviewed most of the proofs in the supplementary material. Relation To Broader Scientific Literature: This paper presents a thorough discussion of the theoretical properties of categorical distributional TD learning with KL loss. The theoretical findings advance the understanding of the distributional TD learning algorithm and provide valuable insights for practitioners. I think the authors have made a solid contribution to the field of distributional RL. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: * The idea of introducing a preconditioner is concise and intuitive. * The asymptotic analysis of the value estimator provides a series of valuable insights, for example, the bias-variance tradeoff incurred by the choice of the size of the support. Weaknesses: * The asymptotic analysis only focuses on the asymptotic behavior of value estimators in TD and KL-CTD. I think it may also be helpful to compare the asymptotics between Cramer-CTD and KL-CTD.
* As the authors state, "However, most large-scale implementations of categorical temporal-difference learning use a KL loss, rather than Cramer loss. This is a crucial detail of large-scale implementations, but has not yet been theoretically analyzed." However, it seems that the authors only consider the tabular case. I think the authors should at least use some experiments to show that the insights obtained in the tabular case are also valid in the large-scale case with function approximation techniques applied. Other Comments Or Suggestions: I do not see the blue line indicating the MSE of TD learning in Figure 2; I hope the authors can fix it in later revisions. Questions For Authors: Can the authors generalize the asymptotic analysis to estimators of statistical functionals other than the mean? For example, the asymptotic distribution of differentiable statistical functionals can be obtained with a combination of the asymptotics of the learned weights and the delta method. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort in reviewing our paper, and for the helpful feedback and suggestions provided. We are pleased to hear that they found our work to provide valuable insights for practitioners and our idea of preconditioning to be concise and intuitive. **Asymptotic analysis missing for Cramér-CTD** We thank the reviewer for pointing out this limitation, and we have expanded the asymptotic results based on their feedback. In particular, we have derived additional results for the asymptotic normality of $p_k^{\text{Cram\'er-CMC}}$, and present this below. Let us define the iterates $p_k^{\text{Cram\'er-CMC}}$ as follows: $p_{k+1}^{\text{Cram\'er-CMC}} = p_k^{\text{Cram\'er-CMC}} + \alpha_k (h(G) - p_k^{\text{Cram\'er-CMC}})$. Then under the assumptions of Lemma D.3., we have that $$ k^\beta ( p_{k}^{\text{Cram\'er-CMC}} - \tilde{p}^\pi) \overset{d}{\to} \mathcal{N}\left(0, \mathbb{E}_{G}[h(G) h(G)^T] - (\tilde{p}^\pi)(\tilde{p}^\pi)^T\right). $$ **Extension of asymptotic analysis to general statistical functionals** We thank the reviewer for highlighting this as a direction to expand our current results, as we find this to be a nice generalization of our current analysis. We present the result for KL-CMC below, and will include it for TD and Cramér-CMC in the paper. Suppose $\psi: \mathscr{P}(\mathbb{R}) \to \mathbb{R}^k$ is a statistical functional sketch, and let $J_\psi: \mathbb{R}^m \to \mathbb{R}^k$ be the Jacobian of $\phi \mapsto \psi(p^\phi)$. Then if $J_\psi$ is continuous in a neighbourhood of $\tilde{p}^\pi$, we have $$ k^\beta ( \psi(p_{k}^{\text{KL-CMC}}) - \psi(\tilde{p}^\pi)) \overset{d}{\to} \mathcal{N}\left(0, \frac12 J_\psi(\tilde{p}^\pi) \left(\mathrm{diag}(\tilde{p}^\pi) - (\tilde{p}^\pi)(\tilde{p}^\pi)^\top\right) J_\psi(\tilde{p}^\pi)^\top\right).
$$ **Work only considers the tabular case** We appreciate that while our focus on the tabular setting potentially seems a limitation due to its simplicity, we want to emphasize that this was a conscious choice in order to develop a core understanding of categorical TD algorithms based on KL gradients, which have not been studied before, without confounding factors that often arise in larger-scale experiments (such as adaptive optimisation, function approximation, target networks, etc.). **Missing blue line in Figure 2** The line for the MSE of TD is not missing; rather, it perfectly overlaps with the line of Cramér-CTD, as predicted by the theory of Lyle et al. (2019). This is a key motivation for the work: while the behaviour of Cramér-CTD exactly coincides with classical TD learning, as predicted by Lyle et al. (2019), this shows that there are settings where KL-CTD has distinct, and sometimes superior, performance to these other algorithms, motivating us to understand the theory of this approach better. We will emphasize this more in the text, and explore other methods of visualization to remove any confusion, such as making the linewidth of the blue curve larger. **References** Lyle, C., Bellemare, M. G., and Castro, P. S. A comparative analysis of expected and distributional reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2019. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their responses. I will retain my positive rating of this paper.
Summary: This paper revisited categorical distributional RL and conducted the analysis based on KL divergence instead of the conventional Cramer distance. The authors proposed a variant of the algorithm by preconditioning and showed its convergence. More importantly, the asymptotic normality or asymptotic variance is also provided in estimating the values. Experiments are conducted in some simple environments to demonstrate the theoretical results. ## update after rebuttal Thank you for the response. This paper has many technical contributions, but I still think it could be largely enhanced. In particular, the authors can consider strengthening the motivation for why we need to fill the gap between Cramer CTD and KL CTD here, beyond only providing some simple motivating examples. The theoretical contributions should be made clearer. Also, I would suggest one or two conclusions be emphasized in the end instead of applying a tree-manner writing style that directly stacks multiple insights. Thus, I keep my rating. Claims And Evidence: This paper gives me the feeling that many parts are disconnected without presenting one or two clear conclusions, even though each part is involved with rigorous analysis. For example, why do the authors consider the KL variant of categorical distributional RL, and what is the gap between it and the original algorithm, such as C51? Why do we then move to a preconditioned variant of an algorithm rather than rethink the properties of real categorical distributional RL algorithms? Furthermore, I understand it takes effort to conduct the convergence and asymptotic analysis in value estimates, but what are the purposes or goals of presenting these results? Do the authors hope to demonstrate an advanced algorithm that can be superior to the original categorical distributional RL or just ``dump'' related properties that could be derived?
After reading this paper, I am very confused about the main purpose of this paper and what kind of insights or conclusion the readers should gain after the reading. Methods And Evaluation Criteria: What is the motivation to consider categorical distributional RL with KL divergence? Is that because the previous analysis in (Rowland et al. 2018), such as Cramer TD, does not align well with the real practical algorithm? Or is it just because the KL variant algorithm could simplify the theoretical analysis? The motivation for the study of KL-CTD is not clear in Section 3, given that the environments are particularly chosen. In addition, the original categorical distributional RL may also behave erratically in Figure 3. Why do we not directly study the original algorithm? I find it difficult to understand the motivation of preconditioning variants. It seems to be mainly from a theoretical perspective. Theoretical Claims: The theoretical claims look fine to me. One question is how to understand the convergence result in this paper based on KL divergence, given that distributional dynamic programming is only non-expansive if we directly employ the KL divergence to measure the distribution distance. Is $\tilde{p}_i^{\pi}$ above Eq. 8 in $\tilde{\eta}$ the probability after the two-hot transformation? Does Proposition 5.3 mean that KL-CTD converges to the same fixed point of the original categorical distributional RL algorithm? It is not clear why $C$ is introduced in Eq.9 and 10. Could any intuitive explanation be provided? Why consider the asymptotic normality/variance in value estimates in Propositions 6.1 and 6.4? What is the main regularity condition? Is there any potential application of these asymptotic properties? Experimental Designs Or Analyses: Although this is a theoretical paper, the experiments are still weak from my perspective. For example, Figures 2 and 3 are on a particularly chosen environment, which is less convincing. 
The results in Figure 4 are appealing, but they are conducted on a very simple one-state environment after I checked the details in the appendix. In addition, in Section 7.2, it seems that the superiority of the proposed algorithms PKL-CTD over KL-CTD and TD highly depends on the step size (learning rate), which is frustrating, especially for practitioners. It is suggested to have experiments on some classical gym control environments, even though it may not be necessary to consider complex environments for a theory-focused paper. Supplementary Material: I have read the proofs in the appendix, which look good to me. The experimental details in Appendix G are also fine to me, but it seems the considered environments are too simple. Relation To Broader Scientific Literature: Although I agree that the theoretical analysis looks rigorous for this paper, my concern is that it may not be related to the broader scientific literature. The focus of the paper is a modification of categorical distributional RL, which is one specific distributional RL algorithm. Although this modification with KL divergence simplifies the theoretical analysis, allowing rich convergence and asymptotic normality analysis in value estimates, it may still be specific. I am also looking for some potential theoretical contributions that may be generalized to broader areas. However, I feel that many analyses are based on existing results and are established in a parallel manner. Therefore, I am concerned about whether this paper could be related to broader literature, given that the analytic tools may be a little specific in a variant of categorical distributional RL. Essential References Not Discussed: I think the references are relatively self-contained. It is a theory-focused paper that mainly discusses the related work directly linked with the analysis, but it would be better to add more references on the whole distributional RL area. Other Strengths And Weaknesses: Please see the comments above. 
Other Comments Or Suggestions: What is the gap between KL-based categorical distributional RL and the vanilla algorithm? Questions For Authors: It looks like this is a follow-up paper to Rowland (2018). What is this paper's main novelty or contribution relative to it? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort they spent in reviewing our paper, and we aim to resolve their questions below. **Why do the authors consider the KL variant of categorical distributional RL, and what is the gap between it and the original algorithm, such as C51?** We focus on the KL variant of categorical distributional RL as it is exactly the fundamental algorithm used by C51. The gap between KL-CTD considered here and C51 is due to the use of policy evaluation instead of control, and tabular analysis instead of the deep learning setting. **The motivation for the study of KL-CTD is not clear in Section 3. Why do we not directly study the original algorithm?** We motivate KL-CTD in two ways in Section 3: - Firstly, KL-CTD is the original categorical TD algorithm that C51 makes use of. This in itself is a strong motivation for its study. - Our examples in Figures 2 and 3 serve as further motivation, showing that even in tabular settings, without neural network function approximation, KL-CTD can exhibit drastically different behaviour to earlier studied algorithms such as Cramér-CTD and classical TD learning. We emphasize that KL-CTD is the original CTD algorithm, as described at Eqn (6) and Line 141. **How to understand the convergence result in this paper based on KL divergence, given that distributional dynamic programming is only non-expansive if we directly employ the KL divergence to measure the distribution distance** Our convergence analysis relies on Lyapunov stability results, as opposed to writing our updates as a stochastic approximation of a contractive operator.
More specifically, there are several reasons why a result concerning the unprojected distributional Bellman operator's behaviour as measured by KL divergence is not pertinent to our analysis here: We are concerned with convergence of an algorithm that maintains a categorical representation of the return distribution, but the result above is concerned with the unprojected distributional Bellman operator. While we analyze an algorithm that is defined via the KL loss, this does not obligate us to make use of contraction results measuring distance in KL. In Bellemare et al. (2023, Chapter 5), this difference is emphasised by distinguishing between metrics used for analysis, and metrics used to define algorithms. In our case, we study an algorithm that uses KL between categorical distributions in its definition, and our analysis makes use of both KL (to measure distance from the fixed point), and Cramér distance (we make use of contractivity of the categorical distributional Bellman operator in Cramér distance to show that the KL is a Lyapunov function; see the proof of Proposition 5.4 and line 783 in particular). Please let us know if you have any further queries on this point. **$\tilde{p}$ above Equation (8)** The $\tilde{p}^\pi_i$ appearing in the equation $\tilde{\eta}^\pi = \sum_i \tilde{p}^\pi_i \delta_{z_i}$ represents the probabilities of the fixed point of the projected distributional Bellman operator. **It is not clear why C is introduced in Eq.9 and 10. Could any intuitive explanation be provided?** Our previous calculation on lines 225-227 shows that the KL divergence from the current estimate to the fixed point may be increasing under the KL-CTD dynamics. But from this expression, in light of Proposition 5.3, we can reverse-engineer a modification to the updates which would guarantee the decrease of the KL, and lead to convergence. This modification is exactly the preconditioning using $C^\top C$.
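For intuition, here is a minimal numerical sketch (ours, not the paper's code) of the preconditioned logit update in the simplest possible setting: a single state, with a fixed target distribution standing in for the Bellman target, and, as an assumption on our part, $C$ taken to be the cumulative-sum matrix, so that $\|C(p-q)\|_2$ is a Cramér-style distance between the two pmfs. This does not reproduce the paper's counterexample (which requires the Bellman operator); it only illustrates the form of the update $\phi \leftarrow \phi + \alpha\, C^\top C(\tilde{p} - p^\phi)$ and the fact that it decreases the KL to a fixed target at every step.

```python
import numpy as np

def softmax(phi):
    e = np.exp(phi - phi.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * (np.log(p) - np.log(q))))

m = 5
p_target = np.array([0.10, 0.30, 0.20, 0.25, 0.15])  # stands in for the fixed point

# Assumed for illustration: C as the cumulative-sum matrix, so that
# ||C(p - q)||_2 is a Cramer-style distance between the two pmfs.
C = np.tril(np.ones((m, m)))
precond = C.T @ C  # the C^T C preconditioner

phi = np.zeros(m)  # logits; p^phi = softmax(phi)
alpha = 0.1
kls = [kl(p_target, softmax(phi))]
for _ in range(10_000):
    # grad_phi KL(p_target || softmax(phi)) = softmax(phi) - p_target, so the
    # preconditioned descent direction on the logits is C^T C (p_target - p^phi).
    phi = phi + alpha * precond @ (p_target - softmax(phi))
    kls.append(kl(p_target, softmax(phi)))
# The recorded KL values decrease monotonically towards zero.
```

With a fixed target both the plain and preconditioned updates converge; the distinction the paper draws only appears once the target is itself a Bellman backup of the current estimate.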
**It seems that the superiority of the proposed algorithms PKL-CTD over KL-CTD and TD highly depends on the step size.** The reviewer is correct that which algorithm performs best among those considered in Figure 7 depends on the step size. However, we would like to emphasize that, at least in the settings in the figure, the performance of PKL-CTD varies with step size at least as smoothly as that of KL-CTD and TD, both of which have deep learning counterparts that are widely used in practice (C51 and DQN, respectively). **Why consider the asymptotic normality/variance in value estimates in Propositions 6.1 and 6.4? What is the main regularity condition? Is there any potential application of these asymptotic properties?** We study the asymptotic distributions of the value estimates because they give us exact forms of the errors incurred by the algorithms. These asymptotic results have led to the bias/variance results in our paper, and we believe that they have further applicability for deriving insights in future work. **Comparison with Rowland (2018)** While Rowland et al. (2018) study categorical dynamic programming and the Cramér-CTD update, this paper focuses on the categorical TD learning algorithm based on the gradient of the KL loss, matching the form of loss used in the original categorical distributional RL paper (Bellemare et al., 2017). --- Rebuttal Comment 1.1: Comment: Thank you for the explanations; some of them are helpful. However, I think my major concerns have not been fully addressed. Here are some follow-up questions. * I understand the authors' claim that KL-CTD is exactly the TD update of categorical distributional RL, is that right? If so, perhaps the paper has mentioned this, but I am still curious what exactly the analytical gap is between the previously proposed Cramér-CTD and KL-CTD.
I understand the behavior difference in the illustrative experiment, which may be less convincing as it is conducted only in a certain environment, but I think this difference may not be highlighted sufficiently in the current paper. Additionally, despite this gap, is it meaningful enough for practitioners to choose the revised CTD in a large-scale environment (not just the illustrative one) instead of Cramér-CTD? It seems that the motivation in this paper is on the theory side. * In the authors' response, this paper uses a novel analysis tool based on Lyapunov stability. It is great to do that, but my concern is why the analysis cannot be carried out directly on the KL function, and what the challenge is if we still insist on the classical stochastic approximation analysis. I agree this does not obligate the authors to make use of contraction results measuring distance in KL, but I wonder how the novel analysis tool circumvents the issues that arise with the classical one. I suggest highlighting this in the paper. * I also agree with other reviewers' comments: the experimental section could be largely enhanced in the future, although for a theory paper this is not a critical issue. * Some properties are fine to present, including asymptotic normality, even though the authors also agree that they may only become useful in future work. However, I suggest that the authors make the theoretical results of the paper more connected and highlight the main conclusion they want to make. Unfortunately, I think there is still room for improvement in the current paper regarding this point. In summary, I am inclined to keep my rating for now. --- Reply to Comment 1.1.1: Comment: Thank you very much for the additional questions, we provide responses in turn below. **The KL-CTD algorithm.** - You're correct, the KL-CTD update we study in this paper, defined in Eqn (6), exactly matches the update proposed by Bellemare et al.
(2017) (Algorithm 1 of their paper writes the loss as a cross-entropy, the gradient of which is identical to the KL appearing in Eqn (6)). - In contrast, the Cramér-CTD algorithm, which is analyzed by Rowland et al. (2018), and summarised at the very end of Section 2 in our paper, does not exactly match the algorithm of Bellemare et al. (2017): the probabilities are not parametrised with a softmax, and updates are not computed via gradients of a cross-entropy/KL loss. We hope this makes clear the analytic difference between KL-CTD and Cramér-CTD, and welcome any further questions. - Having established that Cramér-CTD and KL-CTD are given by distinct update rules/parametrizations in Section 2, Section 3 provides motivating examples of distinct behaviour in practice. Indeed, as the reviewer pointed out, these examples are only in a particular environment; we chose to do this due to the analysis of Cramér-CTD by Lyle et al. (2019), which showed that Cramér-CTD is *always* equivalent to TD in tabular settings. Our onus of proof in light of their results was to show that *there exist* settings where the algorithms are different in practice, to motivate our study. - "Is it meaningful enough for practitioners to choose to use the revised CTD in a large-scale environment". We want to emphasize that KL-CTD is the method that is typically used in large-scale applications, beginning with the work of Bellemare et al. (2017), and this is an important motivation for aiming to obtain a theoretical understanding of this algorithm. **On classical stochastic approximation techniques.** - If we understand correctly, one possible interpretation of your suggestion would be to attempt to perform an analysis like that of Jaakkola et al. (1994) or Tsitsiklis (1994) for Q-learning, where we aim to interpret KL-based CTD updates as approximating the application of a contractive operator, and use the stochastic approximation results described in these papers to derive a convergence guarantee.
This is the approach taken by Rowland et al. (2018) for Cramér-CTD. - However, it is not clear whether the KL-based CTD updates can be written in this way, as writing the right-hand side as the application of an operator becomes rather complicated; we can write out the argument in more detail if the reviewer would like to see it. Furthermore, we believe that this difficulty is one of the reasons that the convergence of KL-CTD remains an open question 8 years after it was originally introduced in Bellemare et al. (2017). - We want to emphasize that this is a commonly encountered scenario when analyzing RL algorithms. For example, this is the situation encountered when analysing TD with linear function approximation (Tsitsiklis and Van Roy, 1997), and the proofs of convergence for linear on-policy TD rely on Lyapunov stability analysis (Tsitsiklis and Van Roy rely on this approach in establishing their Theorem 2, making use of Lyapunov stability results from the textbook by Benveniste, Métivier, and Priouret (1990)). As another example, in distributional RL, Rowland et al. (2024) made use of Lyapunov stability theory described by Benaïm et al. (2005) to prove convergence of quantile TD. - We are not sure we fully understand what the proposal to use classical stochastic approximation techniques would look like; would you be able to provide more detail? **Experiments.** - Thank you for this comment: We agree that there are promising directions for future empirical study, particularly of the PKL-CTD algorithm proposed in this paper, and we also share the view that, as the paper is primarily theoretical, our emphasis has been on establishing theoretical understanding rather than producing an empirically-focused work. **Organisation** - We acknowledge that the sequence of results in our paper is likely better represented as a tree rather than a linear chain.
However, we believe this to be a strength rather than a weakness, as this approach lets us present multiple key insights within a single body of work. This is particularly beneficial for future research building on our results, as it opens multiple avenues which can be expanded upon. That said, we will revisit the presentation of our results, and see if there's anywhere we can make the connections between results more explicit, or more clearly guide the reader through our findings. **References** A. Benveniste, M. Métivier, and P. Priouret, Adaptive Algorithms and Stochastic Approximations. Berlin: Springer-Verlag, 1990. Michel Benaïm, Josef Hofbauer, and Sylvain Sorin. Stochastic approximations and differential inclusions. SIAM Journal on Control and Optimization, 44(1):328–348, 2005.
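As a quick supplementary check of the identity invoked at the start of this thread — that, under a softmax parametrisation, the gradient of the cross-entropy loss (and hence of the KL, which differs from it only by a constant not depending on the logits) is $p^\phi - \tilde{p}$ — the following sketch (ours, purely illustrative, with made-up target and logit values) verifies it against finite differences.

```python
import numpy as np

def softmax(phi):
    e = np.exp(phi - phi.max())
    return e / e.sum()

def cross_entropy(target, phi):
    # CE(target, softmax(phi)); KL(target || softmax(phi)) differs from this
    # only by the constant entropy H(target), so their phi-gradients coincide.
    return -float(target @ np.log(softmax(phi)))

target = np.array([0.1, 0.2, 0.4, 0.3])  # illustrative target pmf
phi = np.array([0.5, -1.0, 2.0, 0.0])    # arbitrary logits

analytic = softmax(phi) - target         # claimed gradient: p^phi - target

# Central finite differences along each coordinate direction.
eps = 1e-6
numeric = np.array([
    (cross_entropy(target, phi + eps * e) - cross_entropy(target, phi - eps * e)) / (2 * eps)
    for e in np.eye(len(phi))
])
```

Note that the analytic gradient sums to zero, as it is the difference of two pmfs: updates along it only redistribute probability mass.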
Summary: The paper studies categorical distributional reinforcement learning with a KL divergence loss. Unlike previous analyses relying on the Cramér distance, this paper introduces a novel preconditioned version of categorical temporal-difference learning with KL divergence, proving its convergence under mild assumptions. The paper also analyzes the asymptotic variance behavior of categorical estimates under various learning rate schedules. Empirical evaluations demonstrate that KL-based algorithms perform differently from classical temporal-difference methods in specific tabular reinforcement learning environments. Claims And Evidence: The main claims, including convergence of PKL-CTD, the asymptotic variance analysis, and advantages of KL-based methods, are clearly supported by theoretical proofs and empirical experiments. However, the manuscript occasionally lacks clarity, especially around certain critical derivations (e.g., the derivation of equation (6) from equation (5)). Additionally, the authors should more clearly differentiate their contributions from prior work. I do have some additional concerns - The paper provides a counterexample showing KL divergence does not serve as a Lyapunov function, but that does not exclude the possibility of convergence of KL-CTD without preconditioning. - The paper presents empirical results showing distinct learning dynamics for KL-CTD. However, the theoretical justification is somewhat heuristic, and the paper lacks a rigorous explanation for why KL-CTD is preferred beyond empirical observations. - The clarity regarding the derivation of certain critical equations (such as equation (6)) needs improvement. Methods And Evaluation Criteria: The proposed methods (PKL-CTD and KL-CTD) and evaluation criteria, which use standard RL benchmarks (Cycle, Garnet, Dirichlet environments), are reasonable and appropriate for assessing the theoretical claims.
However, the analysis is restricted to synchronous updates and policy evaluation settings, without exploring more interesting settings such as asynchronous updates or control (Q-learning), which limits the generality of the results. Theoretical Claims: The main convergence proofs appear correct and leverage standard stochastic approximation techniques and existing convergence theory. No technical flaws were found, though the techniques used are relatively standard and do not introduce significant novel technical complexities. Experimental Designs Or Analyses: The experimental analyses illustrate the theoretical findings. Experiments comparing TD, KL-CTD, and PKL-CTD are well-designed and show that KL-CTD outperforms TD in high-stochasticity environments, while PKL-CTD is the most stable. Supplementary Material: I skimmed the supplementary material, particularly Appendices A-D, which contain detailed proofs supporting the main theoretical claims. Relation To Broader Scientific Literature: The paper extends prior work on distributional RL (Bellemare et al., 2017) and categorical RL (Rowland et al., 2018). It is related to Cramér-based CDRL (Boeck & Heitzinger, 2022) but focuses on KL divergence instead. Some recent works on quantile-based DRL are not cited. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths - Theoretical advancement by addressing the convergence gap for KL-based distributional RL, partially solving the problem raised by Dabney et al. (2018) - Asymptotic variance analysis is solid - Connects theory with empirical observations, particularly in variance behavior - Derives insights on learning rate selection and category scaling Weakness - Analysis is restricted to synchronous policy evaluation scenarios, not covering asynchronous and control settings - The proof of Theorem 5.5 is largely standard stochastic approximation theory, meaning the novelty lies more in the application than in the mathematical difficulty.
- Minor clarity issues in crucial derivations. Other Comments Or Suggestions: - Clarify the derivation from equation (5) to equation (6) explicitly in the main text, or clearly reference its derivation in the supplementary material. - Highlight the key differences/novelty from Rowland et al. (2018) and Boeck & Heitzinger (2022). - Provide more intuition for PKL-CTD by explaining why preconditioning improves convergence. - It might be insightful to discuss the potential impacts of different choices of the preconditioning matrix on convergence behavior and practical performance. - Indicate where the proofs can be found. - Proposition 4.3: the authors should clarify that the stationary point is the fixed point of the projected Bellman operator. - The equation in Lines 224-228 is not clear. Some typos - $\|\cdot\|_{\pi}$ is not defined in Proposition 5.3. - Proposition 6.1, middle line: $\tilde{p}$ Questions For Authors: - Have you explored the convergence and performance of KL-CTD or PKL-CTD with asynchronous updates and in the control setting? - The paper provides a counterexample showing KL fails to be a Lyapunov function for KL-CTD, but that does not exclude the possibility of convergence of KL-CTD without preconditioning. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort in reviewing our paper, and for the helpful feedback provided. We are pleased that the reviewer found our work on the convergence of PKL-CTD, the asymptotic variance analysis, and the connection between theory and empirical observations to be valuable. Below, we address each of the reviewer's points in detail. **Analysis is restricted to synchronous policy evaluation scenarios** *Synchronous/Asynchronous* We note that all experimental results in Section 7 are done using asynchronous updates (with synchronous updates performed as an ablation in Appendix F), though the reviewer is correct that our convergence result addresses the synchronous setting. In light of the reviewer's suggestions we have expanded our analysis, and give a sketch below of how it can be applied to analyse asynchronous cases too. Following work presenting core convergence guarantees in asynchronous stochastic approximation, such as Borkar (2008), and considering the case where states are updated according to an ergodic Markov chain with stationary distribution $c$, the key step in establishing convergence under appropriate technical conditions on step sizes is to exhibit a corresponding Lyapunov function for the continuous-time dynamics $$ \partial_t \phi_t (x) = c(x) C^\top C (T^\pi - I) p^{\phi_t}(x), $$ which take into account the average per-state weighting of updates. From our analysis of the synchronous case (see Proposition 5.4), it can then be verified that $L(\phi) = \sum_{x\in\mathcal{X}} \frac{d^\pi(x)}{c(x)} \mathrm{KL}(\tilde{p}^\pi(x) \| p^\phi(x))$ is a Lyapunov function for this ODE, which can then be used to guarantee its convergence. *Policy Evaluation/Control* We focus on the policy evaluation setting in this paper, and leave the control setting as interesting future work.
We remark that understanding the policy evaluation setting first is typical in the analysis of both standard and distributional reinforcement learning. **Proof techniques used are relatively standard, existing stochastic approximation theory** We agree that our proof of Theorem 5.5 uses existing stochastic approximation theory, although we would argue that many applications of stochastic approximation theory to RL make use of core existing theory (such as the classic papers analyzing tabular TD algorithms by Jaakkola et al. (1994) and the linear TD analysis of Tsitsiklis and Van Roy (1997)), and indeed novel theory and techniques of stochastic approximation are often published as stochastic approximation results in their own right. Further, we want to emphasize that our theoretical analysis leads to a number of novel results, such as a previously undiscovered algorithm (PKL-CTD), the effective learning rate phenomenon which we show is fundamental to KL-based categorical algorithms, and the bias-variance tradeoff in the atom locations. **Convergence of non-preconditioned KL-CTD** The reviewer is entirely correct that our counterexample to the KL being a Lyapunov function for non-preconditioned KL-CTD does not exclude its possibility of convergence, and we will make this clearer in the paper. We chose to include this as an additional result in the appendix as it is a natural question in light of Proposition 5.4 (does the weighted KL, our exhibited Lyapunov function for PKL-CTD, also work as a Lyapunov function for KL-CTD?); we did not intend to suggest that non-preconditioned KL-CTD cannot converge. **Recent works on quantile-based DRL are not cited** We would like to ensure that no relevant work is missing from our discussion of related work in Appendix E. We invite the reviewer to suggest any works in particular they find to be missing.
**Other Comments Or Suggestions** We thank the reviewer for all of the additional catches/suggestions, and we will ensure they are fixed in the text. **References** Jaakkola, Tommi, Michael Jordan, and Satinder Singh. Convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 1994. Tsitsiklis, John, and Van Roy, Benjamin. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 1997. Borkar, Vivek. Stochastic approximation: a dynamical systems viewpoint. Cambridge University Press, 2008. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed rebuttal and clarifications. However, several core concerns remain only partially addressed for me. - The authors' sketch connects their synchronous analysis to asynchronous updates using Borkar (2008), but the current rebuttal does not provide a rigorous convergence result or even a concrete theorem statement for the asynchronous case. Given that all experiments are performed with asynchronous updates, the absence of a matching theoretical guarantee limits the strength of the claims. A more formal statement, or at least a more complete derivation (even in supplementary material), would better support the claim. - While focusing on policy evaluation is reasonable, the paper would benefit from at least discussing the challenges in extending to control, especially given the practical importance of distributional RL for control tasks. - The authors argue that their use of existing stochastic approximation theory is consistent with prior reinforcement learning literature. I agree. However, the technical novelty of the convergence analysis is limited, and this remains a weakness of the theoretical component of the paper.
While the application to a novel algorithm (PKL-CTD) is worthwhile, I encourage the authors to better emphasize which parts of the analysis, e.g., insights about the Lyapunov structure or atom bias/variance tradeoffs, are new and important, beyond the convergence guarantee itself. - I appreciate the authors' clarification that the failure of KL to be a Lyapunov function does not imply divergence. However, the current version of the manuscript (prior to the rebuttal) may give readers the incorrect impression that KL-CTD cannot converge. I strongly recommend making this distinction more prominent in the main text (not only in the appendix), along with a discussion of whether practical convergence of KL-CTD is typically observed empirically. Additionally, if the goal is to motivate preconditioning, a deeper discussion of why KL-CTD might be unstable without it, e.g., via spectral properties or gradient scaling, would strengthen the contribution and distinguish it more clearly from previous works. Although I find the core idea interesting and potentially impactful, I maintain my recommendation. If revised to address the concerns above, I believe this paper would be a strong candidate for a future venue or for acceptance in its improved form. --- Reply to Comment 1.1.1: Comment: Thank you very much for these additional questions, we add our responses below. **Asynchronous analysis** We agree that the exact form of the theoretical guarantee is valuable, and we present it below. --- Suppose that $(\phi_k)_{k\geq 0}$ is a sequence of logits generated according to asynchronous PKL-CTD updates. That is, for each $k \geq 0$, we receive a transition $(x_k, R^{x_k}, X^{x_k})$ and update $$\phi_{k+1}(x_k) = \phi_k(x_k) + \alpha_k C^\top C \left(\sum_{i=1}^m p^{\phi_k}_i(X^{x_k})\, h(R^{x_k} + \gamma z_i) - p^{\phi_k}(x_k) \right),$$ and maintain $\phi_{k+1}(x) = \phi_k(x)$ for $x \ne x_k$.
Further suppose that the stepsizes $(\alpha_k)_{k \ge 0}$ satisfy the Robbins-Monro conditions $\sum_{k=0}^\infty \alpha_k = \infty$ and $\sum_{k=0}^\infty \alpha_k^2 < \infty$, $\alpha_{k+1} \leq \alpha_k$ eventually, $\sup_{k\geq 0}\frac{\alpha_{\lfloor zk \rfloor}}{\alpha_k} <\infty$ for all $z\in(0,1)$, and $\left(\sum_{i=0}^{\lfloor zk \rfloor} \alpha_{i}\right) / \left(\sum_{i=0}^{k} \alpha_{i}\right) \to 1$ as $k\to \infty$ for all $z \in (0,1)$. Letting $\nu(x,k)$ be the number of times state $x$ has been updated up to step $k$, we further assume that $\liminf_{k\to\infty} \frac{\nu(x,k)}{k} \geq \Delta$ for some constant $\Delta > 0$, and defining $N(k,z)=\min\{ n>k: \sum_{i=k+1}^n \alpha_i > z \}$, the limit $\lim_{k\to\infty}\frac{\sum_{n=\nu(x, k)}^{\nu(x, N(k, z))} \alpha_n}{\sum_{n=\nu(y, k)}^{\nu(y, N(k, z))} \alpha_n}$ exists almost surely for all $x, y\in \mathcal{X}$. Then we have that $\phi_k$ converges in the sense that $p^{\phi_k}(x) \to \tilde{p}^\pi(x)$ for every $x\in \mathcal{X}$ such that $d^\pi(x)>0$, almost surely. --- This result is based on Theorem 2.5 of Borkar and Meyn (2000), and it is only a single example of the asynchronous convergence results which can be obtained. We want to emphasize that different convergence results can be obtained under different choices of assumptions, but the vital component is the existence of a Lyapunov function, which we provided in our original rebuttal. In the setting of Borkar and Meyn (2000), the existence of the Lyapunov function provides us the fact that the fixed point is an asymptotically stable equilibrium. **Extension to control** Thank you for this comment: We agree that highlighting the challenges in extending to control is a good idea, and we will highlight this as a direction for future work in the text. To clarify where the complications arise, they are primarily due to the combination of the nonlinear dynamics of the softmax parameterization and the nonlinear update due to the argmax over actions.
Handling each of these nonlinearities individually is relatively straightforward; however, their combination brings forward challenges. **Novelty of convergence result** We disagree with the reviewer's statement that "the technical novelty of the convergence analysis is limited, and this remains a weakness of the theoretical component of the paper." We expand on our reasoning below. - Firstly, we want to highlight that one of the main motivations of our paper is that KL-based distributional RL losses have been ubiquitous in large-scale deep RL experiments since the introduction of C51 in Bellemare et al. (2017), yet they have had little theoretical analysis (to our knowledge, none). - In light of this, we believe the convergence result in our paper to be a significant technical result: as we discussed in our rebuttal to reviewer wwL4, dynamics with KL-based losses are not straightforward to analyse as the application of a contractive operator, which led us to a novel analysis based on finding a Lyapunov function for their convergence. - We will also better emphasize which parts of the analysis, aside from the convergence result itself, are new and important, namely (i) exact quantities for the asymptotic variance of the KL and Cramér value estimators, (ii) the phenomenon of the effective learning rate being scaled proportionally to the number of bins, and (iii) the bias-variance tradeoff present in the choice of atom locations. **Convergence/divergence of KL-CTD** - We agree that our current presentation does not explicitly state that the lack of the KL as a Lyapunov function for KL-CTD does not mean that KL-CTD must diverge on the provided counterexample, and we will make this explicit in the main text. We will also make clear that practical convergence of KL-CTD is generally observed.
- As for how to motivate the form of the preconditioning used, we aimed to do exactly that in lines 222-233: in particular we motivated the preconditioner as a way to change the inner product to a weighted inner product in which the quantity is always negative due to the contractivity of $T^\pi$ in the $C$-weighted $\ell^2$ norm. **References** V. S. Borkar and Sean P. Meyn. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization, 38(2):447–469, 2000.
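A compact version of this weighted-inner-product argument, as we would reconstruct it (our sketch, simplified to a single state and an unweighted norm; $\mathcal{T}$ denotes the projected operator with fixed point $\tilde{p}$ and contraction modulus $\beta \in [0,1)$ in the $C$-weighted norm, and the flow is $\partial_t \phi_t = C^\top C(\mathcal{T} p^{\phi_t} - p^{\phi_t})$): $$\frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{KL}(\tilde{p} \,\|\, p^{\phi_t}) = \langle p^{\phi_t} - \tilde{p},\; C^\top C (\mathcal{T} p^{\phi_t} - p^{\phi_t}) \rangle = \langle C(p^{\phi_t} - \tilde{p}),\, C(\mathcal{T} p^{\phi_t} - \mathcal{T} \tilde{p}) \rangle - \|C(p^{\phi_t} - \tilde{p})\|_2^2 \leq (\beta - 1)\, \|C(p^{\phi_t} - \tilde{p})\|_2^2 < 0 \quad \text{for } p^{\phi_t} \neq \tilde{p},$$ using $\nabla_\phi \mathrm{KL}(\tilde{p} \| p^{\phi}) = p^{\phi} - \tilde{p}$ for the softmax parametrisation, $\mathcal{T}\tilde{p} = \tilde{p}$, the Cauchy-Schwarz inequality, and the assumed contractivity. Without the $C^\top C$ factor, the first step instead pairs $p^{\phi_t} - \tilde{p}$ with $\mathcal{T} p^{\phi_t} - p^{\phi_t}$ in the standard inner product, where contractivity in the $C$-weighted norm gives no sign control.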
Provable Policy Gradient for Robust Average-Reward MDPs Beyond Rectangularity
Accept (poster)
Summary: The paper studies the very sparsely studied topic of average-reward robust MDPs. Specifically, the paper establishes global convergence of robust policy gradient (RPG) for average-reward MDPs with an iteration complexity of $O(\epsilon^{-4})$, given oracle access to the robust gradient. This paper combines the techniques of RPG for discounted-reward MDPs [1] and convergent policy gradient for average-reward MDPs [2]. In addition, this paper improves the smoothness coefficient for average-reward MDPs over the existing result in [2]. The paper is well written. [1] Wang, Q., Xu, S., Ho, C. P., and Petrik, M. Policy gradient for robust Markov decision processes. arXiv preprint arXiv:2410.22114, 2024a. [2] Kumar, N., Murthy, Y., Shufaro, I., Levy, K. Y., Srikant, R., and Mannor, S. On the global convergence of policy gradient in average reward Markov decision processes. arXiv preprint arXiv:2403.06806, 2024b. Claims And Evidence: Yes, seems good. Methods And Evaluation Criteria: Yes, seems good. Theoretical Claims: I went over all the proofs, but not in great detail. All seems good. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: In RL, average-reward MDPs have their own independent significance, yet most results concern discounted-reward MDPs, due to the complexity of the setting. This work makes this gap a little narrower. Essential References Not Discussed: All seems good. Other Strengths And Weaknesses: Strengths: Good theoretical results of significance. Weakness: The proofs are well presented (above average); however, a little more clarification in the mathematical proofs would benefit readers. Other Comments Or Suggestions: Suggestion: More readable proofs would benefit readers. Questions For Authors: Question: This paper is a direct combination of [1] and [2]. Please elaborate on the technical novelty. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful questions and for taking the time to assess our manuscript. 1. **_Comment 1: Additional clarification in proofs could be helpful._** Thank you very much for providing helpful suggestions on writing style and missing clarifications! We will clarify the wording in the next version of our manuscript, including more detailed explanations, to improve readability. 2. **_Comment 2: Technical novelty needs to be further elaborated._** We'd like to emphasize that the extension from the discounted setting to the average-reward setting is non-trivial, even though the average-reward MDP can be seen as a special case of discounted MDPs ($\gamma=1$). Please see our reply to **Reviewer iYjz, Comment 2** and the introduction of [2] for a detailed explanation. For the outer loop, we establish Lipschitz continuity via a novel sensitivity analysis and introduce a gradient dominance condition specifically tailored to the average-reward setting, leading to an outer-loop convergence guarantee. Compared to [2], we provide improved smoothness coefficients. While our approach adopts a double-loop structure, which is widely used in various fields such as game theory [3], min-max optimization [4,5], and robust MDPs [1], our analysis relies on standard non-convex minimax optimization techniques, which are significantly different from the mirror descent analysis used in [1]. For the inner loop, we propose two tailored algorithms for worst-case transition evaluation under both rectangular and non-rectangular settings, whereas [1] only addresses the rectangular case. To establish the inner-loop convergence guarantee, we derive the first form of the adversarial transition gradient (Lemma 4.1), establish the first adversarial smoothness conditions using sensitivity analysis techniques (Lemma 4.3), and prove the first adversarial gradient dominance condition (Theorem 4.4), along with the corresponding inner convergence guarantee.
Therefore, our work differs substantially from both references [1,2]. Building on these theoretical advancements, our approach offers a comprehensive framework for addressing robust average-reward MDPs beyond existing methods. We will further clarify these contributions in the next version of our paper. [1] Wang, Q., Xu, S., Ho, C. P., and Petrick, M. Policy gradient for robust markov decision processes. [2] Kumar, N., Murthy, Y., Shufaro, I., Levy, K. Y., Srikant, R., and Mannor, S. On the global convergence of policy gradient in average reward markov decision processes. [3] Ding D., Wei C. Y., Zhang K., \& Jovanovic M. 2022. Independent policy gradient for large-scale markov potential games: Sharper rates, function approximation, and game-agnostic convergence. [4] Jin C., Netrapalli P., \& Jordan M. 2020. What is local optimality in nonconvex-nonconcave minimax optimization? [5] Davis D., \& Drusvyatskiy D. 2019. Stochastic model-based minimization of weakly convex functions. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have a few additional minor comments. The paper establishes an iteration complexity of $O(\epsilon^{-4})$ for average-reward robust policy gradient with a non-aggressive learning rate. The works [1], [2], and Theorem 4.7 of [3] establish an $O(\epsilon^{-1})$ convergence rate for smooth discounted-reward robust MDPs, softmax discounted-reward robust MDPs, and sa-rectangular average-reward robust MDPs, respectively. Could the authors comment on these? Could the results from [1] and [2] be translated to the average-reward setting too? Could the result of this paper be improved to $O(\epsilon^{-1})$, similar to [3]?
--- [1] Kumar, N., Usmanova, I., Levy, K. Y., and Mannor, S. Towards Faster Global Convergence of Robust Policy Gradient Methods. Sixteenth European Workshop on Reinforcement Learning, 2023. https://openreview.net/forum?id=cWrwdbEBx5 [2] Wang, Q., Xu, S., Ho, C. P., and Petrik, M. Policy Gradient for Robust Markov Decision Processes. arXiv:2410.22114, 2024. https://arxiv.org/abs/2410.22114 [3] Sun, Z., He, S., Miao, F., and Zou, S. Policy Optimization for Robust Average Reward MDPs. The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. https://openreview.net/forum?id=6FPZLnp1Zn --- Reply to Comment 1.1.1: Comment: Thank you very much for providing helpful comments on enhancing the significance of our work! First of all, we'd like to clarify that [2] and [3] are well discussed in our manuscript as concurrent works (both were made public around November). The contribution of [2] as the inspiration for the idea of decreasing tolerance is clarified and highlighted in Section 3.1 (Line 219). The distinction between our work and [3] is discussed in Section 1.1 (Lines 71-79), and we also adopt the robust mirror descent policy gradient from [3] as a numerical benchmark method. We appreciate the reviewer for pointing out another relevant work [1] on robust policy gradient methods, which will help us further refine our literature review. On the technical side, while [1,2,3] all establish an $\mathcal{O}(\epsilon^{-1})$ convergence rate, [1,2] focus on the discounted setting, whereas [3] studies the average-reward setting. However, the fundamental analysis techniques differ.
The result in [1] relies on standard optimization tools with a restrictive smoothness assumption, which does not hold for general ambiguity sets. In contrast, the analyses in [2] and [3] are based on standard mirror descent techniques, where smoothness of the objective is not necessarily required under direct parameterization. Therefore, we believe that adapting the analysis of [2] from the discounted setting to robust MDPs with the average-reward criterion is a promising direction for future research. At this stage, [3] serves as a pioneering work in this direction, establishing a faster $\mathcal{O}(\epsilon^{-1})$ convergence rate under a more restrictive $(s,a)$-rectangularity assumption compared to our general convergence guarantee. However, whether such an extension would be effective for RAMDPs with general ambiguity sets remains an open question that warrants further investigation.
Summary: This paper studies robust *average-reward* MDPs (RAMDPs) with general ambiguity sets. It proposes a policy-gradient-based algorithm, RP2G, that leverages an exponentially decaying adaptive tolerance mechanism $\{ \delta_t \}$ to enable provably efficient policy updates, assuming an oracle that solves the inner problem $\Psi(\pi)$, and comes with global convergence guarantees. Further, it also proposes an optimization algorithm (also leveraging a projected gradient update) to solve $\Psi(\pi)$ for the worst-case kernel that also provably converges (in some sense). The performance of the proposed algorithms is supported by simulation results. Claims And Evidence: All claims are supported by concrete results. Methods And Evaluation Criteria: The algorithm makes intuitive sense, and the evaluation method is reasonable. Theoretical Claims: The theoretical results are supported by proofs that are checked to be correct. Experimental Designs Or Analyses: Since this is mostly a theory-oriented paper, the numerical simulations only act as supporting evidence. For this purpose, the experimental design and results look good to me. Supplementary Material: Proofs in Appendices A, C & D of the supplementary material are checked to be correct. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper is overall well-written. Algorithms are sufficiently motivated and introduced with intuitive ideas. 2. Theoretical results presented in the main text, though lacking quite a few technical details, are largely self-explanatory and convincing. * It's delightful to see the nice symmetry between the sensitivity in $\boldsymbol{\pi}$ and that in $\boldsymbol{p}$. Weaknesses: 1. The discussion regarding the projected Langevin dynamics that solves $\Psi(\pi)$ for general ambiguity sets is a little too hand-wavy. Basically no technical details are provided in the paper (including the appendix).
The current Section 4.3 also does not fit into the flow very well. * After a quick look, (Lamperski, 2021) is a very technical paper that deals with generic non-convex learning. Hence technical details are definitely needed here to show how it fits into the average-reward MDP setting. * It is a little weird to see "... is the first ..." after results cited from other papers, without further explanations. 2. The numerical experiments can preferably be extended to include more common benchmarks. Other Comments Or Suggestions: 1. There are a few typos observed during reading. * On line 715: does "DFunctions" mean "diffusion value functions"? * In Lemma 4.2: "is" should be "are". Questions For Authors: 1. Currently the theoretical guarantee for RP2G (Theorem 3.5) is isolated from that for the adversary gradient ascent (Theorem 4.6), in the sense that the former does not take the estimation error of the inner problem into consideration. Is it possible to incorporate this estimation error into the overall bound, probably using some quick fixes? 2. Honestly I'm not very familiar with average-reward MDPs. What makes average-reward MDPs so different from discounted MDPs? Is it the mixing property and the upper bounds involving mixing time that is key to the theoretical analysis? Is this approach potentially adaptable to discounted MDPs or episodic MDPs? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
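As background for the projected Langevin dynamics discussed in this review, a generic projected stochastic gradient Langevin step in the style of (Lamperski, 2021) can be sketched as follows. This is a minimal illustration, not the paper's algorithm; the step size `eta`, inverse temperature `beta`, and the clipping-based constraint set in the usage example are all illustrative assumptions.

```python
import numpy as np

def projected_sgld(grad, project, x0, eta=1e-3, beta=1e3, steps=3000, rng=None):
    """Projected stochastic gradient Langevin ascent: take a gradient step,
    add Gaussian noise of scale sqrt(2*eta/beta), then project back onto the
    constraint set. A larger beta (inverse temperature) means less noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0.copy()
    for _ in range(steps):
        noise = rng.normal(size=x.shape) * np.sqrt(2.0 * eta / beta)
        x = project(x + eta * grad(x) + noise)
    return x
```

For instance, ascending $-(x-0.7)^2$ over the box $[0,1]$ (with `project` implemented as clipping) concentrates the iterates near $0.7$.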
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our paper and for your insightful comments as well as suggestions. **_Comment 1: Technical details in Section 4.3 should be added._** Thank you so much for pointing out the missing details in this section! It would definitely be helpful if full technical details were provided. We will include a detailed discussion of the proof and analysis in the next version of our paper. **_Comment 2: Statement with "... is the first..." could be misleading._** Thank you for your suggestion. We would like to emphasize that while the technical tools for solving generic non-convex problems are well established [1], no existing literature has applied such analysis to robust average-reward MDPs. To the best of our knowledge, the most recent policy gradient method for robust average-reward MDPs [2] is limited to the more conservative $(s,a)$-rectangular case. We will clarify the wording in the manuscript in our revision to improve readability. **_Comment 3: Considering additional benchmarks in the experiment could be helpful._** Thank you for this insightful suggestion. We totally agree that incorporating additional benchmarks would strengthen our experimental evaluation. As suggested by **Reviewer sWdH**, we conducted an experiment in an inventory control setting to demonstrate RP2G's superior performance (see https://drive.google.com/file/d/1VKnmT5_Wzpj6PwH_UbHImimhrhKBpVgq/view?usp=sharing). Our results show that the policy obtained by solving the non-rectangular RAMDP is less conservative (see our reply to **Reviewer sWdH, Comment 1** for a detailed explanation). We will evaluate our algorithms on other benchmarks in the next version of this paper. **_Comment 4: Typos._** Thank you for pointing out the typos. We will update our paper according to these suggestions!
**_Question 5: Could you incorporate the estimation error of Algorithm 3 into the overall convergence bound in Theorem 3.5?_** Thank you for raising this important question. We would like to emphasize that the theoretical guarantee for RP2G (Theorem 3.5) accounts for the inner output error ($\delta_{t}$ at the $t$-th iteration). However, unlike Algorithm 2 for the inner problem, which features a simple and effective structure for updating the inner transition kernel, Algorithm 3 employs Monte Carlo sampling to achieve a probabilistic convergence guarantee, which introduces additional estimation error. Therefore, incorporating this estimation error from Algorithm 3 (as detailed in Theorem 4.6) into a deterministic convergence guarantee (shown in Theorem 3.5) remains challenging. We believe that investigating the overall convergence bound while accounting for the inner worst-case evaluation error is an interesting direction for future work that could further enhance the significance of our results. **_Question 6: What makes average-reward MDPs so different from discounted MDPs?_** This difference arises in both theory and practice. While the RAMDP can be seen as a special case of the RMDP ($\gamma=1$), extending the theoretical framework from the discounted setting to the average-reward setting is non-trivial (see our reply to **Reviewer iYjz, Comment 2** and [3] for a detailed explanation). In practice, many real-world systems prioritize steady-state or long-term behaviour, where policies derived from discounted MDPs may perform poorly (see Introduction, lines 37-48). **_Question 7: Is the mixing time important to the theoretical analysis?_** At this stage, the mixing time is crucial for bounding the differential (action) value function, which plays a key role in establishing Lipschitz continuity and the gradient dominance condition. **_Question 8: Is this approach potentially adaptable to discounted MDPs or episodic MDPs?_** Thank you for this insightful question.
Regarding the discounted MDPs, [4] already proposed methods with a double-loop structure for solving RMDPs. However, extending our approach to episodic MDPs seems challenging, as the ergodicity assumption does not hold in this setting, preventing a direct application of our theoretical framework. Developing a suitable policy gradient framework for episodic MDPs remains an important direction for future research. [1] Lamperski, A. 2021. Projected stochastic gradient langevin algorithms for constrained sampling and non-convex learning. [2] Sun, Z., He, S., Miao, F., \& Zou, S. 2024. Policy optimization for robust average reward mdps. [3] Kumar N, Murthy Y, Shufaro I, et al. 2024. On the global convergence of policy gradient in average reward markov decision processes. [4] Li M., Kuhn, D., \& Sutter T. 2023. Policy gradient algorithms for robust mdps with non-rectangular uncertainty sets.
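To make the double-loop structure discussed in this thread concrete for readers, here is a minimal toy sketch of projected descent-ascent with an exponentially decaying inner tolerance. This is not the authors' RP2G implementation: the function names, the simplex-constrained variables, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def double_loop_pgd(grad_pi, grad_p, pi0, p0, T=50, K=100,
                    eta_pi=0.1, eta_p=0.1, delta0=1.0, decay=0.9):
    """Toy double-loop scheme: an inner projected ascent on p, run until its
    step size falls below an exponentially decaying tolerance delta_t, nested
    inside an outer projected descent on pi."""
    pi, p, delta = pi0.copy(), p0.copy(), delta0
    for _ in range(T):
        for _ in range(K):  # inner loop: approximate worst case over p
            p_new = project_simplex(p + eta_p * grad_p(pi, p))
            moved = np.linalg.norm(p_new - p)
            p = p_new
            if moved <= delta:
                break
        pi = project_simplex(pi - eta_pi * grad_pi(pi, p))  # outer update
        delta *= decay  # tighten the inner tolerance over iterations
    return pi, p
```

On a tiny matrix game $\min_{\pi}\max_{p}\,\pi^{\top} A p$ with $A = I$, the scheme drives $\pi$ toward the uniform distribution (up to step-size oscillation), which illustrates why the inner tolerance must shrink for the outer iterates to settle.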
Summary: The paper investigates methods for solving robust Markov Decision Processes (MDPs) under the average-reward criterion. Building on existing approaches developed for the discounted-reward setting, the authors extend these ideas to the average-reward framework. The study presents multiple algorithms tailored to different structures of the ambiguity set, with a particular focus on whether the set exhibits a rectangular structure. Convergence results for these algorithms are provided, demonstrating their theoretical validity and practical implications. Claims And Evidence: The paper explores methods for solving robust Markov Decision Processes (MDPs) under the average-reward criterion, extending ideas originally developed for the discounted-reward setting. The authors introduce a robust projected policy gradient algorithm (RP2G) and establish its global convergence, despite the challenges posed by non-rectangular ambiguity sets. This convergence analysis relies on an oracle that efficiently solves the so-called inner problem. To address this, the paper presents two specialized algorithms for solving the inner problem: one based on projected Langevin dynamics and another using projected gradient ascent, specifically for rectangular ambiguity sets. The presented results are interesting, meaningful, and convincingly supported. However, one key question arises: Based on my understanding, Li et al. (2023) have already obtained similar results in the discounted-cost setting. Could the authors clarify what distinguishes their work from Li et al. (2023), beyond the shift from the discounted-reward to the average-reward criterion? Methods And Evaluation Criteria: The examples and evaluation criteria considered are meaningful and interesting; however, they are again almost identical to Li et al. (2023), with the only noticeable difference being the average-reward criterion.
Theoretical Claims: The theoretical results look surprisingly similar to those achieved for the discounted-reward setting in Li et al. (2023). Could you state more clearly where the differences are? What happens if, in the discounted-cost setting, you choose the discount factor $\gamma\to 1$? Can you recover the average-reward results? Experimental Designs Or Analyses: I don't fully understand Figure 2. It would be helpful to explain better why this plot is interesting. Supplementary Material: Careful details of the proofs are provided in the appendix. Relation To Broader Scientific Literature: The literature is well cited and discussed. Essential References Not Discussed: none Other Strengths And Weaknesses: The paper is well written and easy to follow. The only concern I have is the novelty compared to Li et al. (2023); see the points raised above. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are sincerely grateful to the reviewer for the insightful comments and valuable questions. **_Comment 1. Additional clarification on the difference between our work and Li et al. (2023) is needed._** We apologize for the insufficient explanation of the contributions of this work. It is important to clarify that the extension from the discounted-reward setting to the average-reward setting is non-trivial (see our reply to **Comment 2** for a detailed discussion). While both our work and that of Li et al. (2023) rely on a double-loop structure, along with similar properties such as the smoothness of the objective function $J$ and the gradient-dominance condition, the technical foundations used to establish these properties are quite different. Compared to their work, our contributions span both loops. For the outer loop, we establish Lipschitz continuity via a novel sensitivity analysis and introduce a gradient dominance condition specifically tailored to the average-reward setting, leading to the convergence guarantee. For the inner loop, we propose a tailored projected gradient ascent algorithm for worst-case transition evaluation under rectangularity, with a convergence guarantee. This approach is structurally different from Algorithm 3.2 in Li et al. (2023). Although our inner convergence results may appear similar, the underlying analytical tools are fundamentally different. We hope this clarifies the key differences between our work and that of Li et al. (2023), both in theoretical foundations and algorithmic contributions. **_Comment 2: Recovering average-reward results from the discounted setting could be discussed._** We thank the reviewer for raising this point. While it is theoretically possible to recover average-reward results by taking the limit as $\gamma \to 1$ in the discounted-reward setting, this approach is often infeasible in practice. As shown in Li et al.
(2023), the number of iterations is $\mathcal{O}(\frac{1}{(1 - \gamma)^4})$ (hidden within their constants), which increases rapidly as $\gamma\to 1$. This makes the computational cost prohibitively high for large $\gamma$. We also provide empirical evidence in Appendix E.5, where we observe a significant increase in iteration count as $\gamma$ increases. These results highlight the practical limitations of using discounted-reward methods for solving robust average-reward MDPs, and motivate the need for a tailored algorithm, as proposed in our work. **_Comment 3: Additional explanation for Figure 2 could be helpful._** Thank you for pointing this out. Regarding the experimental setup, we obtain and record the policies from both robust and non-robust AMDPs at each iteration. To assess policy robustness, we evaluate their performance under the worst-case transition scenario, i.e., computing $\max_{\boldsymbol{ p} \in \mathcal{P}} J(\boldsymbol{\pi}, \boldsymbol{ p})$ for a given $\boldsymbol{\pi}$, and then record these values for plotting. This comparison is widely adopted in the robust MDPs literature [8,9] as a standard approach to demonstrate robustness. In terms of result interpretation, the RAMDP policies consistently achieve lower costs under worst-case transitions, highlighting their effectiveness against adversarial kernels. Moreover, as the number of iterations increases, the worst-case evaluation cost stabilizes, indicating convergence. These findings are in line with our theoretical results in Section 4.3. We hope these additional explanations help clarify our numerical experiment. [1] Lamperski, A. 2021. Projected stochastic gradient langevin algorithms for constrained sampling and non-convex learning. [2] Wang, Y., Velasquez, A., Atia, G., Prater-Bennette, A., \& Zou, S. 2024. Robust Average-Reward Reinforcement Learning. [3] Riemer, M., Khetarpal, K., Rajendran, J., \& Chandar, S. 2024.
Balancing context length and mixing times for reinforcement learning at scale. [4] Kearns, M., Mansour, Y., \& Ng, A. 1999. Approximate planning in large POMDPs via reusable trajectories. [5] Jin, Y., \& Sidford, A. 2020. Efficiently solving MDPs with stochastic mirror descent. [6] Puterman, M. L. 2014. Markov decision processes: discrete stochastic dynamic programming. [7] Xiao, L. 2022. On the convergence rates of policy gradient methods. [8] Sun, Z., He, S., Miao, F., \& Zou, S. 2024. Policy optimization for robust average reward mdps. [9] Tamar, A., Mannor, S., \& Xu, H. 2014. Scaling up robust MDPs using function approximation.
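The worst-case evaluation $\max_{\boldsymbol{p}} J(\boldsymbol{\pi}, \boldsymbol{p})$ described in this rebuttal presumes computing the average reward (or cost) of a fixed policy under a fixed kernel. In the tabular ergodic case this reduces to a stationary-distribution computation; the sketch below is the standard construction, not code from the paper, and assumes the policy-induced chain is irreducible.

```python
import numpy as np

def average_reward(pi, P, c):
    """Average cost of a fixed policy under a fixed kernel, assuming the
    policy-induced chain is irreducible: J = sum_s mu(s) sum_a pi(a|s) c(s,a),
    with mu the stationary distribution.
    pi: (S, A) policy, P: (S, A, S) kernel, c: (S, A) stage costs."""
    M = np.einsum('sa,sat->st', pi, P)   # induced chain M[s, s']
    evals, evecs = np.linalg.eig(M.T)    # stationary dist = left eigvec at 1
    mu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    mu = mu / mu.sum()
    return float(mu @ np.sum(pi * c, axis=1))
```

A projected ascent on $\boldsymbol{p}$ (or a non-robust policy gradient on $\boldsymbol{\pi}$) would then call such an evaluator to produce the worst-case curves compared in Figure 2.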
Summary: This paper extends the work of Li et al. 2023 from solving robust MDPs in the discounted setting to the average-reward setting. Numerical results are also provided. ## update after rebuttal I thank the authors for their efforts in writing the rebuttal. I agree that letting the discount factor go to 1 would of course be too simple an adaptation, but the essential difference still seems to be an alternative dynamic equation -- the same technique from the discounted setting still applies. I will keep my score. Claims And Evidence: The claims are well-evidenced. Methods And Evaluation Criteria: Methods (PG) and evaluation criteria (average-reward robust MDP) all make sense. Theoretical Claims: No, but I believe the plausibility of the theoretical claims because of their correctness in the discounted setting. Experimental Designs Or Analyses: The methods are evaluated on random synthetic MDP instances (GARNET), which is a standard benchmark in robust MDPs. The methods are evaluated in terms of runtime and compared to a non-robust method. However, the paper only compares the worst-case policy value of the one output by a non-robust PG method and the proposed method, which seems incomplete, because the power of non-rectangularity is really statistical efficiency compared to rectangular uncertainty sets, and the key contribution is the extension to the average-reward criterion. Please refer to my comment for possible improvements. Supplementary Material: I reviewed the magnitude of the Lipschitz constant in terms of mixing time in the supplementary material. Relation To Broader Scientific Literature: The paper is solving an important instance of robust MDPs -- the one with the average-reward criterion. In this sense, the paper complements the work of Li et al. 2023 well. Essential References Not Discussed: The paper is well-positioned and relevant literature is discussed.
Other Strengths And Weaknesses: Weakness: On a technical level, the essential things that change from the discounted to the average-reward setting are the gradient formula and the magnitude of the Lipschitz constant. Thus, I am unsure about the novelty on the technical side. Other Comments Or Suggestions: Please consider comparing, on a data-driven MDP instance, the out-of-sample performance of a robust MDP with a non-rectangular uncertainty set vs. one with a rectangular uncertainty set. In addition, it would enhance the paper's significance if the authors could show the essentiality of using the average-reward criterion (e.g., comparing against the work of Li et al. 2023), where we can see the practical advantage of using average-reward robust MDPs with non-rectangular uncertainty sets. Questions For Authors: How does the proof work when only access to stochastic policy gradients is available? What are the major changes required then? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful questions and constructive suggestions! **_Comment 1: Additional comparison between rectangular and non-rectangular RAMDPs would be helpful._** Thanks for the suggestion! As (non-)rectangularity only affects the ambiguity structure and appears to be independent of the chosen reward criterion [1], we adopt non-rectangularity in the average-reward case without further justification; thus, we did not design an experiment specifically to show the advantages of non-rectangularity over rectangularity under this criterion. We agree that it would be helpful if the superiority of non-rectangularity could be better demonstrated in this setting. To address this, we compare the performance of $(s,a)$-rectangular and non-rectangular RAMDPs in an inventory control setting with the same set size (see https://drive.google.com/file/d/1VKnmT5_Wzpj6PwH_UbHImimhrhKBpVgq/view?usp=sharing). The results, presented in the table, show that the policy obtained from the non-rectangular RAMDP is less conservative, as evidenced by its lower average cost. We will provide a more detailed discussion in the next version of our paper and highlight the advantage and superiority of non-rectangularity. **_Comment 2: Further clarification of the technical novelty could be useful._** We appreciate the reviewer's concerns regarding our technical novelty and would like to emphasize that the extension from the discounted setting to the average-reward setting is non-trivial (see the example in **Reviewer iYjz, Comment 2**). As our proposed algorithm follows a double-loop structure, widely used in various fields including game theory [2], min-max optimization [3,4], and robust MDPs [5], our analysis builds on standard non-convex optimization techniques, which were also adopted by [5].
Compared to [5], our novel contributions are twofold: (1) for the outer loop, we establish Lipschitz continuity via a novel sensitivity analysis and introduce a gradient dominance condition with a convergence guarantee tailored to the average-reward setting; (2) for the inner loop, we provide an effective algorithm for worst-case transition evaluation under rectangularity, which is structurally different from Algorithm 3.2 in [5]. While achieving a similar convergence rate, our analysis differs from [5], relying on standard non-convex optimization techniques [6]. **_Comment 3. Essentiality of using average-reward instead of discounted reward should be highlighted._** Thanks for your suggestion. As our work is theoretically oriented, our contributions focus on developing efficient algorithms with theoretical convergence guarantees for RAMDPs. We understand that the average-reward setting is well-suited for agents concerned with long-term or steady-state policy behaviour, such as resource allocation, portfolio management, and healthcare [7,8]. In this sense, exploring the practical advantages of RAMDPs in these applications represents a promising direction for our future research. Moreover, as shown in Appendix E.5, when $\gamma$ approaches $1$, the computational cost increases significantly when using discounted-reward MDPs to approximate average-reward solutions. This underscores the necessity of methods specifically designed for the average-reward setting. **_Comment 4. Adaptation to stochastic policy gradients could be considered._** Thank you for this valuable question. Our analysis assumes a model-based setting in which the MDP structure is known except for the transition kernel, so the policy gradient can be computed exactly. When only stochastic policy gradients are available, additional challenges arise in modeling and in ensuring robustness under broader uncertainty.
Extending our results to this case is a promising direction for future work, for example by incorporating robust temporal-difference methods such as [9]. [1] Wiesemann W., Kuhn D., & Rustem B. 2013. Robust Markov decision processes. [2] Ding D., Wei C. Y., Zhang K., & Jovanovic M. 2022. Independent policy gradient for large-scale markov potential games: Sharper rates, function approximation, and game-agnostic convergence. [3] Jin C., Netrapalli P., & Jordan M. 2020. What is local optimality in nonconvex-nonconcave minimax optimization? [4] Davis D., & Drusvyatskiy D. 2019. Stochastic model-based minimization of weakly convex functions. [5] Li M., Kuhn, D., & Sutter T. 2023. Policy gradient algorithms for robust mdps with non-rectangular uncertainty sets. [6] Beck A. 2017. First-order methods in optimization. [7] Ghalme G., Nair V., Patil V., & Zhou Y. 2021. Long-term resource allocation fairness in average markov decision process (amdp) environment. [8] Patrick J., & Begen M. A. 2011. Markov decision processes and its applications in healthcare. [9] Wang Y., & Zou S. 2022. Policy gradient method for robust reinforcement learning.
The Canary’s Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text
Accept (poster)
Summary: This paper argues that synthetic data generated by LLMs fine-tuned on private data poses privacy risks. The authors identify a new class of canaries suited for these data-based privacy risks, and show that by choosing an in-distribution prefix and out-of-distribution suffix they can greatly increase the vulnerability of the synthetic data. ## update after rebuttal After the rebuttal, in which I mainly asked the authors to add some additional baselines/ablations, I keep my score. Claims And Evidence: I think all the claims in this submission are well-substantiated. There are extensive ablations throughout the entire manuscript. Methods And Evaluation Criteria: Yes, the metrics reported are standard in the MIA literature. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. I checked the main MIA experimental design. Supplementary Material: N/A Relation To Broader Scientific Literature: Existing literature has found that machine learning models trained on private data pose substantial privacy risks as quantified by MIAs (Shokri et al. 2017, Carlini et al. 2022a, Shi et al. 2023). However, the privacy risks of synthetic data generated by LLMs have not been explored by these works. This submission initiates a thorough investigation into these new risks and shows that one can construct canaries which can be identified in the synthetic data by simple MIAs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper is very well written. 2. The framing of the data-based attacks for the privacy risks of synthetic data is important for practice. 3. The new construction of canaries with in-distribution prefixes and out-of-distribution suffixes is very original and interesting, and the authors do a good job of ablating and thoroughly exploring this design space. Weaknesses: 1. The authors do not fully explore the design space of MIAs for synthetic datasets.
For example, one could imagine finetuning on $\tilde{\mathcal{D}}$ and then applying standard MIAs to the finetuned model to try to extract canaries from $\mathcal{D}$. This is not a major concern because the MIAs based solely on synthetic data are already performant. Other Comments Or Suggestions: It might be worthwhile to include a discussion on the privacy risks of $\mathcal{D}$ itself when it does not have any canaries, given access to just the synthetic data itself. I imagine that the performance of the MIAs would be much worse, since the canaries are specially designed to be memorized. Questions For Authors: 1. Could you report CIs for some of the major results, i.e. Table 1 and Figure 1? 2. The text in Figure 3a is quite small; could you make the plot more legible? 3. Could you explore the effect of training data size on the success of the MIAs? My intuition is that if the training data size goes to infinity, then one needs more repetitions of the canary. 4. Could you report the performance of the data-based MIAs on non-canaries belonging to $\mathcal{D}$ as a baseline? Are there privacy risks inherent in any finetuning data given synthetic generations from the finetuned model? Code Of Conduct: Affirmed. Overall Recommendation: 3
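The canary design praised in this review (a low-perplexity in-distribution prefix followed by a high-perplexity out-of-distribution suffix) can be illustrated with a toy generator; `make_canary`, the prefix pool, the OOD vocabulary, and the suffix length below are hypothetical choices, not the authors' construction.

```python
import random

def make_canary(in_dist_prefixes, ood_vocab, suffix_len=8, rng=None):
    """Hypothetical canary: an in-distribution prefix (likely to steer the
    fine-tuned model's generations) followed by a suffix of tokens sampled
    uniformly from an out-of-distribution vocabulary (easy to memorize/detect)."""
    if rng is None:
        rng = random.Random(0)
    prefix = rng.choice(in_dist_prefixes)
    suffix = " ".join(rng.choice(ood_vocab) for _ in range(suffix_len))
    return prefix + " " + suffix
```

The intuition from the review then applies: the prefix keeps the canary influential on synthetic outputs, while the rare suffix makes any leakage easy to score.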
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We provide responses for the concerns raised below. > (1) one could imagine finetuning on D_tilde and then applying standard MIAs to the finetuned model Many thanks for pointing this out. We have opted to train an n-gram model on the synthetic data rather than a larger, transformer-based model due to its simplicity and computational cost. Indeed, training the n-gram model on the synthetic data, for both SST-2 and AG News, takes less than 1 CPU minute. We will add the suggestion of training a full LLM on the synthetic data instead to the discussion section. > (2) include a discussion on the privacy risks of D We provide results on fully in-distribution canaries (randomly sampled from D, no out-of-distribution suffix or F=max) throughout our work (in-distribution results in Table 1, and results for F=max in Figure 2 (c,f) and Table 2). Due to the lower perplexity of in-distribution sequences, we recover that data-based MIAs work quite well for these samples, especially compared to the high-perplexity canaries commonly used for model-based attacks. However, in these experiments, we consider the member canaries repeated n_rep times in the training data with n_rep up to 12 and 16. From Figure 2(a,d), we learn that when n_rep decreases, the MIA performance drops to no better than a random guess baseline. We hence anticipate that, at least in our experimental setup, the privacy risks associated with sequences appearing only once in D remain low. We will elaborate on this in the discussion section. We believe this also answers the reviewer's last question (i.e. reporting the performance of the data-based MIAs on non-canaries belonging to D as a baseline). > (3) Could you report CIs for some of the major results For our main results (e.g. Table 1), we report ROC AUC as the performance of the MIA, representing an average performance over all (1000) canaries.
Getting meaningful confidence intervals for these results requires training multiple (10+) target models, which is computationally quite expensive and was not feasible within the rebuttal period. We will run this for the SST-2 results in Table 1 to be included in a final version of the paper. > (4) The text in Figure 3a is quite small; could you make the plot more legible? Thanks for pointing this out, we will increase the corresponding font size. > (5) Could you explore the effect of training data size on the success of the MIAs? We share the reviewer’s intuition that as the size of the training dataset increases, the MIA performance likely decreases. As part of the rebuttal process, however, we have prioritized running other experiments and would leave this analysis to future work. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. I keep my score. --- Reply to Comment 1.1.1: Comment: We promised earlier (rebuttal for reviews above) to run 2 additional experiments. **1. MIA results for synthetic data with formal privacy guarantees** (L9fq, qpCC, MC8u) We hypothesized that MIAs against synthetic data generated from models fine-tuned with DP guarantees would approach random guess performance (AUC of 0.5). We ran additional experiments to determine whether this intuition is correct. Below, we provide the MIA AUC for the best data-based attack (2-gram) when the target model is fine-tuned with DP-SGD with ε=8, under the setup of Table 1 in the paper (column *Synthetic 𝒜^𝐷 (2-gram)* vs *Synthetic 𝒜^𝐷_DP (2-gram)*). New results are in bold. We confirm the MIA AUC to be close to 0.5, providing strong evidence that DP constitutes a strong defense against data-based MIAs. We also find that the corresponding generated synthetic data maintains a high utility in downstream tasks. Specifically, for synthetic data generated with ε=8, accuracy on SST-2 reaches 91.6%, compared to 91.5% for non-DP synthetic data and 92.3% for real data (Table 6).
| Dataset | Source | Label | Model 𝒜^θ | Synthetic 𝒜^𝐷 (2-gram) | Synthetic 𝒜^𝐷_DP (2-gram) | Synthetic 𝒜^𝐷 (SIM_Jac) | Synthetic 𝒜^𝐷 (SIM_emb) |
|-|-|-|-|-|-|-|-|
| SST-2 | In-distribution | | 0.911 | 0.741 | **0.49** | 0.602 | 0.586 |
| | Synthetic | Natural | 0.999 | 0.620 | **0.48** | 0.547 | 0.530 |
| | | Artificial | 0.999 | 0.682 | **0.50** | 0.552 | 0.539 |
| AG News | In-distribution | | 0.993 | 0.676 | **0.52** | 0.590 | 0.565 |
| | Synthetic | Natural | 0.996 | 0.654 | **0.52** | 0.552 | 0.506 |
| | | Artificial | 0.999 | 0.672 | **0.51** | 0.560 | 0.525 |
| **SNLI** | In-distribution | | **0.892** | **0.718** | **0.511** | **0.644** | **0.630** |
| | Synthetic | Natural | **0.998** | **0.534** | **0.49** | **0.486** | **0.488** |
| | | Artificial | **0.997** | **0.770** | TBD | **0.602** | **0.571** |

Reviewer MC8u suggests readers could benefit from a discussion on defenses against MIAs, specifically on methods that offer DP guarantees. We concur and provide a discussion below that we will incorporate into the paper to complement the survey of methods to synthesize text with DP guarantees in Section 2 in the submission and the results we share above on synthetic data generated from models fine-tuned with DP-SGD.

**Discussion on defenses.** Methods to generate synthetic text with DP guarantees mitigate MIAs by ensuring that any single training record exerts limited influence on synthesized data. These methods are broadly split into training-time [A,B,C] and inference-time [D,E,F,G]. We focus on the former, specifically on methods that fine-tune a pre-trained LLM with DP-SGD and then prompt this model to generate synthetic data. Training-time methods leverage the post-processing property of DP to transfer the guarantees from the fine-tuned model to synthetic data. Because generating synthetic data from a DP model does not consume additional privacy budget, they can generate an unlimited amount of data with a fixed privacy budget.
In contrast, inference-time methods use unmodified pre-trained models prompted on private data and inject calibrated noise during decoding [E,F,G] or employ DP evolutionary algorithms to steer generation towards a distribution similar to the private data [D]. Empirical evaluation suggests that DP synthetic text can achieve high utility. Our results provide additional evidence of this and also that DP constitutes a strong mitigation against data-based MIAs. As the field progresses, we expect that rigorous privacy auditing using MIAs adapted to actual threat models will be crucial to the adoption of synthetic text generation.

[A] Yue et al., Synthetic text generation with differential privacy: A simple and practical recipe. ACL 2023
[B] Mattern et al., Differentially Private Language Models for Secure Data Sharing. EMNLP 2022
[C] Kurakin et al., Harnessing large-language models to generate private synthetic text. ICLR 2024
[D] Xie et al., Differentially private synthetic data via foundation model APIs 2: Text. ICML 2024
[E] Wu et al., Privacy-Preserving In-Context Learning for Large Language Models. ICLR 2024
[F] Tang et al., Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation. ICLR 2024
[G] Amin et al., Private prediction for large-scale synthetic text generation. EMNLP 2024

**2. Results for a third dataset** (L9fq, MC8u) We also conducted experiments for a third dataset in the Table above. Specifically, we consider the SNLI dataset, and report the MIA AUC for the model-based attack and all three data-based attacks. We confirm that the data-based attacks also work for this dataset and recover an MIA performance drop compared to model-based MIAs similar to the one observed for the other two datasets. We will propagate other results for SNLI in an eventual final version of the paper.
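The 2-gram data-based attack whose AUC is reported above can be illustrated with a minimal sketch (hypothetical code under stated assumptions, not the authors' implementation): train a bigram model with Laplace smoothing on the tokenized synthetic corpus and use the mean token log-probability of a candidate canary as the membership signal.

```python
import math
from collections import Counter

def train_bigram(synthetic_corpus):
    """Count unigrams and bigrams over tokenized synthetic records."""
    uni, bi = Counter(), Counter()
    for tokens in synthetic_corpus:
        padded = ["<s>"] + tokens
        uni.update(padded)
        bi.update(zip(padded, padded[1:]))
    return uni, bi

def membership_score(canary, uni, bi, vocab_size, alpha=1.0):
    """Mean token log-probability of a candidate canary under the 2-gram
    model with Laplace smoothing; higher scores suggest the canary was
    memorized by the target model and echoed into the synthetic data."""
    padded = ["<s>"] + canary
    logp = 0.0
    for prev, cur in zip(padded, padded[1:]):
        logp += math.log((bi[(prev, cur)] + alpha) / (uni[prev] + alpha * vocab_size))
    return logp / len(canary)
```

Scores for all member and non-member canaries would then be aggregated into the reported ROC AUC.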
Summary: The paper investigates the privacy risks associated with releasing synthetic data generated by Large Language Models (LLMs). It explores how much information about the original training data can be extracted from such synthetic data, even when adversaries do not have direct access to the fine-tuned model. - **Synthetic Data Leakage:** MIAs using only synthetic data can detect membership with AUC scores significantly above random, showing that synthetic text leaks training information. - **Attack Comparison:** There's a gap between model-based and data-based attacks; canaries effective in one setting require much higher occurrence to be vulnerable in the synthetic data scenario. - **Improved Canary Design:** The paper proposes canaries with an in-distribution prefix and high-perplexity suffix, enhancing their detectability in synthetic outputs for more reliable privacy auditing. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: NA Experimental Designs Or Analyses: Yes. Supplementary Material: Yes (A. Pseudo-code for MIAs based on synthetic data, B. Computation of RMIA scores, E. Detailed assumptions made for the adversary, F. Synthetic data utility) Relation To Broader Scientific Literature: The authors examine data-driven MIA, proposing a fresh framework that offers a more realistic assessment of threats compared to model-based MIA. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - The work shifts the focus from traditional model-based MIAs to attacks based solely on synthetic data, addressing a realistic threat model where adversaries do not have direct access to the fine-tuned model. - By proposing specialized canaries that blend an in-distribution prefix with a high-perplexity suffix, the authors enhance the detection capability of data-based MIAs, making privacy auditing more effective. 
Weaknesses: - A minor limitation is that the experiments are conducted on only two datasets, which may not capture the full diversity of real-world scenarios. The effectiveness of the proposed techniques across different domains or larger-scale datasets remains to be validated. Other Comments Or Suggestions: How do attacks work on differentially private text generation (text-to-text privatization) [1, 2, 3, 4]? Recent studies [2, 4] have demonstrated that paraphrasing techniques can achieve a highly favorable privacy-utility trade-off. I encourage the authors, if they have time, to explore simple paraphrasing-based DP methods, as they are relatively easy to implement and serve as strong defenses. A brief discussion on defenses, supported by some results, would greatly benefit readers seeking defense strategies, and if the authors provide such insights, I would be happy to change my rating to strong accept.

References:
[1] Privacy- and utility-preserving textual analysis via calibrated multivariate perturbations. WSDM 2020
[2] The Limits of Word Level Differential Privacy. EMNLP 2022
[3] TEM: High Utility Metric Differential Privacy on Text. SIAM 2023
[4] Locally differentially private document generation using zero shot prompting. EMNLP 2023

Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We provide responses for the concerns raised below. > (1) A minor limitation is that the experiments are conducted on only two datasets We provide results for the *n*-gram based MIA for the SNLI dataset (for the setup from Table 1) below and will include this in the paper. These results suggest the same trends we report carry over to other datasets.

|Canary injection|||||
|-|-|-|-|-|
|||AUC|TPR\@0.01|TPR\@0.1|
|In-distribution||0.718|0.122|0.443|
|Synthetic|Natural|0.534|0.016|0.111|
||Artificial|0.718|0.061|0.412|

> (2) How do attacks work on differentially private text generation? We will add a section on mitigation strategies for our novel MIAs, focusing on fine-tuning the target model with DP-SGD before generating synthetic data, as in prior work (Yue et al., 2023; Mattern et al., 2022; Kurakin et al., 2023). Given past results (Table 3 in [1], Figure 3 in [2], or the results of the SaTML 2023 Membership Inference competition on SST-2 [3]), we expect the performance of model-based attacks to quickly decrease to a random guess baseline under DP guarantees. Since data-based attacks underperform compared to model-based attacks, and guarantees transfer to the synthetic data due to DP’s post-processing property, we expect data-based MIAs to also approach random guess for practical values of ε. By the end of the rebuttal phase, we aim to provide meaningful ablations on MIAs against DP-synthetic data, which we will then include in the paper. We leave other defense strategies (e.g. using paraphrasing techniques) as proposed by the reviewer for future work, and will elaborate on this in the discussion section.

[1] Xie, et al. Differentially Private Synthetic Data via Foundation Model APIs 2: Text
[2] Ma et al. Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models.
[3] Microsoft Membership Inference Competition (https://github.com/microsoft/MICO).
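The metrics reported in the table above (AUC, TPR@0.01, TPR@0.1) can be computed from per-canary membership scores. A minimal numpy sketch (illustrative, not the evaluation code used in the paper) might look like this:

```python
import numpy as np

def mia_metrics(member_scores, non_member_scores, fpr_target=0.01):
    """Compute MIA ROC AUC (probability a member outscores a non-member,
    with tie correction) and TPR at a fixed low FPR, using the
    (1 - fpr_target) quantile of non-member scores as the threshold."""
    m = np.asarray(member_scores, dtype=float)
    n = np.asarray(non_member_scores, dtype=float)
    auc = np.mean(m[:, None] > n[None, :]) + 0.5 * np.mean(m[:, None] == n[None, :])
    threshold = np.quantile(n, 1.0 - fpr_target)
    tpr = np.mean(m > threshold)
    return float(auc), float(tpr)
```

An attack with no signal yields AUC near 0.5; perfect separation of member and non-member scores yields AUC of 1.0.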
Summary: This paper proposes to audit the privacy risks from synthetic data generated by LLMs, as synthetic data is becoming increasingly prevalent in different applications. The authors found that the typical canaries designed for model-based auditing were not effective for auditing the synthetic data. The paper proposes a new design for canaries that is better suited for auditing the data-based scenario. The method is analyzed empirically on benchmark datasets with various evaluation metrics. Claims And Evidence: The claims are supported by the evidence under the assumptions that the authors made. Methods And Evaluation Criteria: The evaluation criteria make sense to demonstrate the improvement in the auditing performance. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiment designs are thorough, though they could be improved by analyzing how different domains might impact the auditing efficiency, e.g., whether synthetic data generated for different domains in AG News would make any difference in the auditing performance. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper is related to better understanding of privacy leakage through synthetic data, which is a novel and important topic in the community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper considers a novel auditing scenario: as private synthetic data generation becomes more and more popular, understanding the privacy leakage from synthetic data is critical. 2. The paper is well-written with clear methodology, and thorough analysis on why the existing canaries failed and how to craft canaries for synthetic data auditing. Weaknesses: 1. The evaluation can be strengthened by measuring the leakage against privacy-preserving methods such as private evolution, and DP fine-tuning for synthetic data generation. 2. The motivation for using n-gram for data-based attacks is not clearly described.
Why is the n-gram model preferred? Why not consider training a small neural-network-based model on the synthetic data? How is n chosen? Other Comments Or Suggestions: N/A Questions For Authors: 1. How does the size of synthetic data impact the auditing? 2. For similarity scores based on embeddings, how do different embedding models impact the auditing? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We provide responses for the concerns raised below. > (1) analyzing how different domains might impact the auditing efficiency In Table 1, we also study the effect of which labels (or domains) the canaries belong to. In particular, we consider both ‘natural’ and ‘artificial’ labels associated with canary samples, where ‘natural’ corresponds to labels from the same distribution as the labels from the original dataset and ‘artificial’ corresponds to a new, canary-specific label (see Sec. 4). We observe a slight increase in MIA performance across all data-based MIAs, suggesting that more rare, potentially artificially crafted labels make canaries more vulnerable. We leave a more thorough study on this effect to future work and will add this to the discussion section. > (2) measuring the leakage against privacy-preserving methods We will add a section on mitigation strategies for our novel MIAs, focusing on fine-tuning the target model with DP-SGD before generating synthetic data, as in prior work (Yue et al., 2023; Mattern et al., 2022; Kurakin et al., 2023). Given past results (Table 3 in [1], Figure 3 in [2], or the results of the SaTML 2023 Membership Inference competition on SST-2 [3]), we expect the performance of model-based attacks to quickly decrease to a random guess baseline under DP guarantees. Since data-based attacks underperform compared to model-based attacks, and guarantees transfer to the synthetic data due to DP’s post-processing property, we expect data-based MIAs to also approach random guess for practical values of ε. By the end of the rebuttal phase, we aim to provide meaningful ablations on MIAs against DP-synthetic data, which we will then include in the paper.

[1] Xie, et al. Differentially Private Synthetic Data via Foundation Model APIs 2: Text
[2] Ma et al. Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models.
[3] Microsoft Membership Inference Competition (https://github.com/microsoft/MICO).

> (3) motivation for using n-gram for data-based attacks is not clearly described Many thanks for pointing this out. We have opted to train an n-gram model rather than a small, transformer-based model due to its simplicity and computational cost. Indeed, training the n-gram model on the synthetic data, for both SST-2 and AG News, takes less than 1 CPU minute. We further provide ablations of the value of n in Appendix H and Table 10, where we consistently find n=2 to be the optimal value. We will add the suggestion of training a small neural network instead to the discussion section. > (4) How does the size of synthetic data impact the auditing? We provide ablations for different sizes of the synthetic data in Appendix H and Figure 5. While we observe an improved performance for the n-gram based MIA as more synthetic data is generated, we maintain our main analysis considering a synthetic dataset of equal size to the private dataset as this is more realistic and used in prior work (Yue et al., 2023; Mattern et al., 2022; Kurakin et al., 2023). > (5) How do different embedding models impact the auditing? For similarity-based methods, we opted for paraphrase-MiniLM-L6-v2 from sentence-transformers as the embedding model, as it offers great performance in semantic search benchmarks [link](https://www.sbert.net/docs/sentence_transformer/pretrained_models.html). As the MIA based on semantic similarity using this embedding model is outperformed by all other data-based MIAs (Table 1), we did not further ablate the choice of the embedding model. We leave this to future work and will add this to the discussion section.
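The similarity-based membership signal discussed in (5) can be sketched as follows (a hypothetical illustration of the SIM_Jac variant; the SIM_emb variant would replace Jaccard similarity with cosine similarity between sentence embeddings):

```python
def jaccard(a, b):
    """Jaccard similarity between two token sequences, compared as sets."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def sim_jac_signal(canary_tokens, synthetic_corpus, k=25):
    """Mean Jaccard similarity between a candidate canary and its k most
    similar synthetic records; higher values suggest membership. The
    default k=25 follows the hyperparameter ablation cited above."""
    sims = sorted((jaccard(canary_tokens, rec) for rec in synthetic_corpus),
                  reverse=True)
    top = sims[:k]
    return sum(top) / len(top)
```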
Summary: This paper aims at investigating the privacy risks of synthetic text generated by LLMs by developing a new membership inference attack (MIA). The main novelty of the proposed approach is that the adversary model considered only has access to the synthetic text generated by the model and not the model itself. Two MIAs are proposed for this setting. Claims And Evidence: Overall, the claims with respect to the difference between model-based and data-based MIAs are supported only through experiments on two datasets, which raises some doubts on whether the observed results will carry over to other datasets. However, a large set of variations of these experiments have been conducted, which demonstrates the robustness of the proposed approach. Methods And Evaluation Criteria: The difference between model-based attacks and data-based attacks is not well explained. The process for generating synthetic text is also unconventional, as it seems to indicate that the objective of the model is to create texts that match desired labels, while in general the text will be generated based on the indication of the prompt. In addition, the fact that the synthetic dataset should be the same size as the original dataset is not very realistic, as most LLMs are usually trained on a very large corpus (e.g., possibly the whole Internet). This major issue should at least be acknowledged and discussed in the paper. Theoretical Claims: Currently, the authors do not discuss the possibility that the training set and the canaries have possibly been used during the training of the LLM rather than simply during the fine-tuning. In particular, as the Stanford Sentiment Treebank and the AG News datasets have respectively been published in 2013 and 2015, there is a high chance that they have been seen in the training set of the LLM models considered. Experimental Designs Or Analyses: Overall, the experimental evaluation is well-explained and seems sound.
There is, however, no justification of the choice of parameters used for the two proposed MIA variants. The paper also lacks experiments evaluating how the proposed attack would fare against a differentially private variant of the model training. Supplementary Material: I have reviewed the supplementary materials, which help to clarify important aspects of the methodology. I also like the addition of some interpretability results at the end of the appendices. Relation To Broader Scientific Literature: The authors have done a good job at reviewing previous works on membership inference attacks against LLMs and synthetic tabular data. The proposed approach is also well-situated compared to existing works, although the adversary model considered is non-standard. Essential References Not Discussed: Essential references, including recent ones, have been cited in the paper. Other Strengths And Weaknesses: The main novelty of the proposed approach is the adversary model considered, which only leverages the synthetic data produced, and proposes two MIAs for this setting. Other Comments Or Suggestions: Figure 5 in the supplementary material has some issues with the corresponding legends. Questions For Authors: Do you have any way to verify if the two datasets considered for the experiments were not already part of the training set of the LLM considered? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the feedback; we provide detailed responses below. > Difference between model-based and data-based MIAs supported only through experiments on two datasets. A large set of variations of these experiments have been conducted, which demonstrate the robustness of the proposed approach. We are glad the reviewer thinks our experiments demonstrate the robustness of our approach. We chose to investigate the gap between model- and data-based attacks in depth through detailed ablation studies rather than shallowly on a broader range of datasets. We provide results for the *n*-gram based MIA on SNLI (same setup as Table 1), which we will include in a revision. Results suggest the same trends we report carry over to other datasets.

|Canary injection|||||
|-|-|-|-|-|
|||AUC|TPR\@0.01|TPR\@0.1|
|In-distribution||0.718|0.122|0.443|
|Synthetic|Natural|0.534|0.016|0.111|
||Artificial|0.718|0.061|0.412|

> Difference between model-based and data-based attacks not well explained We thoroughly describe the difference between model- and data-based MIAs in Sec. 2, including pseudocode in Alg. 1. Appendix E contains additional discussion on the difference between the threat models. We present concrete model- and data-based attacks in Sec. 3.1, including the calculation of membership inference signal and our adaptation of RMIA. We detail the choice of hyperparameters and how we evaluate attacks in practice in Sec. 4. We include additional details about model- and data-based attacks in Appendices A, B. We welcome suggestions to make this clearer if what we already provide is not enough. > The process for generating a synthetic text is unconventional We generate synthetic data as done conventionally (Yue et al., 2023; Mattern et al., 2022; Kurakin et al., 2023), by prompting the model on label-dependent templates (cf. Appendix C), so that text is generated based on the indication of the prompt.
> The fact that the synthetic dataset should be the same size as the original is not realistic We report on experiments on synthetic datasets 2-8x larger than the original in Appendix I. We focus on generating synthetic data derived from a private dataset so that e.g. downstream tasks on the synthetic dataset have similar utility. Hence, in our main experiments we generate datasets matching the size and label histogram of the private data. This is the same setting studied by Yue et al., 2023; Mattern et al., 2022; Kurakin et al., 2023. > Possibility that the training set and the canaries have been used during pre-training Mistral-7B's training data is not public, so we cannot rule out SST-2/AG News being included in the training data. However, we aim to measure the difference in performance between model- and data-based MIAs on the fine-tuning dataset of a model used to synthesize data, which training and fine-tuning data overlap would affect similarly. We can, however, rule out the presence of canaries in the training data when they are constructed artificially (in-distribution prefix F=0; Table 1, Fig. 1(a,b,d,e), Fig. 2). Especially at high perplexities, these canaries are likely absent from the training data. For canaries with F>0 and if parts of the datasets were included in pretraining, it would, if anything, make MIAs more challenging. We will include this discussion in an eventual revision. > No justification of the choice of parameters used for the two variants of the MIA proposed. We discuss hyperparameter selection in Appendix H (paragraph “Hyperparameters in data-based attacks”). We consistently find the best performance for *n*=2 (*n*-gram MIA) and *k*=25 (number of closest synthetic records for similarity-based MIAs), which we used in the main experiments. > differentially-private variants of the model training.
We will add a section on mitigations, focusing on fine-tuning the target model with DP-SGD before generating synthetic data as in prior work (Yue et al., 2023; Mattern et al., 2022; Kurakin et al., 2023). Given past results (Table 3 in [1], Figure 3 in [2], or the results of the SaTML 2023 Membership Inference competition on SST-2 [3]), we expect performance of model-based attacks to decrease to a random guess baseline under DP. Since data-based attacks underperform compared to model-based attacks and guarantees transfer to synthetic data due to DP’s post-processing property, we expect data-based MIAs to also approach random guess for practical values of ε. By the end of the rebuttal phase, we aim to provide meaningful ablations on MIAs against DP-synthetic data, which we will then include in the paper.

[1] Xie, et al. Differentially Private Synthetic Data via Foundation Model APIs 2: Text
[2] Ma et al. Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models.
[3] Microsoft Membership Inference Competition (https://github.com/microsoft/MICO).

> Fig. 5 legibility. Thanks for pointing this out, we will address this.
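For readers unfamiliar with the mechanism behind the DP-SGD fine-tuning referenced above: each update clips every per-example gradient to a fixed norm and adds Gaussian noise calibrated to that clipping bound. A conceptual numpy sketch (not the actual fine-tuning code used in the experiments):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, max_grad_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update: clip each per-example gradient to max_grad_norm,
    average the clipped gradients, and add Gaussian noise whose scale is
    proportional to the clipping bound (divided by the batch size)."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, max_grad_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise_scale = noise_multiplier * max_grad_norm / len(per_example_grads)
    noise = rng.normal(0.0, noise_scale, size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The clipping bounds each record's influence on the update, which is what allows the privacy guarantee to carry over, via post-processing, to any synthetic data the fine-tuned model generates.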
Proactive Agents for Multi-Turn Text-to-Image Generation Under Uncertainty
Accept (poster)
Summary: This paper focuses on the information-missing issue of existing AI applications, specifically image generation in this paper. Instead of passively waiting for humans to revise the prompt, this paper proposes a proactive design such that the agent system can proactively interact with the users to obtain the missing information. This paper has five main contributions: (1) the introduction of belief graphs for uncertainty modeling; (2) a proactive T2I agent prototype; (3) an evaluation pipeline; (4) a benchmark; and (5) extensive experiments. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A, no theoretical claim Experimental Designs Or Analyses: Yes, I checked the dataset, experiment results, and case studies. Supplementary Material: Yes, I checked the agent design and case studies in the paper. Relation To Broader Scientific Literature: The proactive agent design has been used in general agent design but not in the image generation scenario, to the best of my knowledge. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. A proactive agent is a sound solution for solving the information-missing problem and might be an essential step in future agent design. 2. The paper is clearly written and easy to follow. 3. The experiments and analysis are comprehensive. Limitations: 1. Some detailed design choices require more justification; for details, please refer to the questions for the authors section. 2. The efficiency might be a limitation for the proactive agent system to be useful in real applications. Other Comments Or Suggestions: The bottom lines of all tables are missing. Better to add them. Questions For Authors: 1. The current belief graph requires heavy human design, which will limit the generalization of the proactive agent system to domains other than the image generation task. How do you prevent that? 2.
In real applications, how do you guarantee that the users will clearly answer the questions raised by the system? 3. This proactive design will introduce an extra efficiency issue. How do you prevent that? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > The current belief graph requires heavy human design, which will limit the generalization of the proactive agent system to domains other than the image generation task. How do you prevent that? Generating the belief graph only requires some few-shot examples, which are not difficult to write. Writing those examples offers humans the opportunity to insert their expert knowledge, so that the LLM’s behavior can be controlled. Moreover, for new domains, it is sometimes not necessary to re-design the belief graph parsing approach. We tried the same belief parsing approach without modifying anything for creative writing tasks, and the belief graph we obtained from the models is reasonably good, e.g., it is able to identify the characters, their attributes like clothing, superpowers, etc., and relations between characters. One future direction to further lower the effort of human design is to perform meta learning, where we give the model examples of different domains, and ask it to generalize to new domains. > In real applications, how do you guarantee that the users will clearly answer the questions raised by the system? We cannot “guarantee” the user behavior but the agents can guide users. The questions the agents ask often contain several options for the answer. For example, the agent may ask “What is the color of the rabbit? (a) white, (b) grey, (c) brown, (d) mixed colors.” The user can choose an option (this guarantees that the answer is clear), or answer with words directly. > This proactive design will introduce an extra efficiency issue. The efficiency might be a limitation for the proactive agent system to be useful in real applications. How do you prevent that? We can use parallelism and selective generation to prevent efficiency issues. The agent is a system that operates across multiple threads/processes.
In the agent prototypes we developed, the efficiency of belief parsing was significantly improved by generating attributes and relations for different entities in parallel. The T2I models can be called while the next belief state and action selection are in progress. Our framework also supports the development of more sophisticated agents in real applications, which can incorporate strategies to selectively generate partial belief states that are important for showing to the users. This can be especially useful if the belief state is very large. We will fix the formatting of tables.
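The parallelism described in this rebuttal could be structured roughly as follows (a hypothetical sketch; `parse_attributes` stands in for the per-entity LLM call and is an assumption, not the authors' code):

```python
from concurrent.futures import ThreadPoolExecutor

def parse_belief_graph(entities, parse_attributes, max_workers=8):
    """Build the entity -> attributes portion of a belief graph by issuing
    the per-entity parsing calls in parallel, so belief parsing latency is
    bounded by the slowest single call rather than the sum of all calls."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(parse_attributes, entities))
    return dict(zip(entities, results))
```

Relation parsing could be dispatched to the same pool, and the T2I call can be issued concurrently while the next belief state is computed.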
Summary: This paper addresses the challenge of underspecified user prompts in text-to-image (T2I) generation by introducing proactive agents capable of multi-turn interactions. These agents actively seek clarification through targeted questions and utilize a "belief graph" to represent and refine their understanding of user intent. The proposed approach aims to bridge the gap between user expectations and model outputs. Empirical evaluations show that the method achieves a VQAScore twice as high and is rated as helpful by 90% of human participants. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method focuses on utilizing an editable belief graph for human-AI interaction. The idea is simple yet seems reasonable and effective. Theoretical Claims: The paper has no theoretical claims. Experimental Designs Or Analyses: The analysis consists of two parts: the VQAScore and human opinions. It can be considered sound. Supplementary Material: Yes. I reviewed the visualizations of agent-generated images. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Important problem and practical value: The problem of underspecified prompts is common and significant in user-AI interactions. A common user without prompt-engineering training always finds it hard to convey their intentions efficiently to the models, causing frustration. This method effectively addresses the common issue of vague or ambiguous prompts, leading to more accurate and user-aligned image generation. - Effectiveness: The authors employ both human studies and automated evaluations to assess the effectiveness of their approach. Notably, over 90% of human subjects found the proactive agents and belief graphs beneficial to their T2I workflow.
Additionally, the agents achieved at least twice the VQAScore compared to standard single-turn T2I generation. Weaknesses: - The universal prompt design: It seems like the design of the prompt lacks consideration of the alignment and prompt-following capabilities of the text-to-image model. From my personal user experience with T2I models, the correct wording and structure of the prompt may also greatly influence the quality of the generated image. Other Comments Or Suggestions: The writing of the paper needs improvement. Please try to avoid redundant wording and excessively long sentences. This issue appears even in the abstract, e.g., "As a result, users **commonly** have to **painstakingly** and **repeatedly** refine their prompts." This writing style makes the paper difficult to read. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
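The editable belief graph this review describes can be pictured as a small data structure. The sketch below is illustrative only (the class and field names are our own, not the paper's): entities carry existence probabilities, importance scores, and attribute distributions, which together entail a distribution over concrete prompts.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    prob: float                  # probability the entity appears in the image
    importance: float            # how much the user seems to care about it
    attributes: dict = field(default_factory=dict)  # attr name -> {value: prob}

@dataclass
class BeliefGraph:
    entities: dict = field(default_factory=dict)    # name -> Entity
    relations: list = field(default_factory=list)   # (subj, predicate, obj, prob)

    def sample_prompt(self):
        """Sample one concrete prompt from the distribution the graph entails."""
        parts = []
        for e in self.entities.values():
            if random.random() < e.prob:
                attrs = [max(dist, key=dist.get) for dist in e.attributes.values()]
                parts.append(" ".join(attrs + [e.name]))
        return "a photo of " + " and ".join(parts)

random.seed(0)
g = BeliefGraph()
g.entities["cake"] = Entity("cake", prob=0.99, importance=1.0,
                            attributes={"color": {"white": 0.6, "pink": 0.4}})
prompt = g.sample_prompt()
```

A user could edit `g.entities["cake"].attributes["color"]` directly, which is the kind of belief-state editability the human study rates.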
Rebuttal 1: Rebuttal: > The universal prompt design: It seems like the design of the prompt lacks consideration of the alignment and prompt-following capabilities of the text-to-image model. From my personal user experience with T2I models, the correct wording and structure of the prompt may also greatly influence the quality of the generated image. We have added a new method to take the text-image alignment of T2I models into account. Please see the Q1 rebuttal to Reviewer F62f for more details. > writing Thank you for pointing these out. We will rewrite the long sentences and make the writing more accessible.
Summary: This paper proposes a proactive text-to-image agent designed to mitigate the issue of uncertain prompts by allowing users to engage in multi-turn interactions. The key contributions of the paper include: 1. Belief Graphs – A structured representation of model uncertainty, allowing users to visualize and edit entities, attributes, and relationships. 2. Proactive Questioning – The agent actively seeks clarifications from users to refine image generation. 3. New Benchmark (DesignBench) – A dataset created to test agent performance in complex scenes with both long and short descriptions. The study demonstrates that multi-turn agents significantly outperform standard T2I models in both automated and human evaluations, achieving at least twice the VQAScore within five interaction turns. ## Update after rebuttal The authors have addressed all of my concerns. I vote to accept the paper and appreciate their detailed and thoughtful responses. Claims And Evidence: 1. Claim: Multi-turn interaction improves T2I alignment Evidence: Table 1, Fig. 2 and 3 demonstrate that multi-turn agents outperform single-turn T2I models across various datasets, with at least a 2x improvement in VQAScore. 2. Claim: Belief graphs effectively communicate and refine uncertainty Evidence: Over 85% of human participants found belief graphs helpful. 3. Claim: Proactive questioning enhances user experience Evidence: Human evaluations indicate that 90% of participants expect interactive clarifications to be beneficial. Issue: The main contribution of this work is the proposal of the belief graph and multi-turn interaction with users. However, the comparison includes only one baseline focused on single-turn generation. I suggest expanding the comparison to include more baselines, such as Recraft V3 or Midjourney v6, which have been reported to outperform Imagen 3 in numerical reasoning cases in the Imagen 3 paper.
This would provide a more comprehensive evaluation of general performance improvements and adaptability compared to current state-of-the-art methods. Methods And Evaluation Criteria: Yes, the paper employs two evaluation methods for assessing how the generated images align with the user's intent (prompt): 1. Automated Evaluation – Uses a simulated user to converse with a T2I agent 2. Human Studies – Participants rate the efficacy of the proposed framework Theoretical Claims: The paper does not propose new theoretical results but extends concepts from belief state representations. Experimental Designs Or Analyses: - The experiments are well-structured, covering both automated and human evaluations. - The use of DINOv2 embeddings for image similarity and VQAScore for text-to-image alignment are reasonable choices. Supplementary Material: Yes, I have reviewed C, E.2, E.5, E.14, and H. Relation To Broader Scientific Literature: This work contributes by clarifying the prompt before generation, which reduces the trial and error of prompt engineering. Essential References Not Discussed: None Other Strengths And Weaknesses: - Strengths 1. The idea of proactively clarifying user input is novel and helpful for efficiently generating a target image. 2. Leveraging an LLM as a simulated user to provide clarification on a pre-given image is an efficient strategy. - Weaknesses 1. The description of the three agents is difficult to understand in terms of their differences, and the information is scattered across too many sections, including Section 4.3, Supp. C, and E. I suggest improving the clarity of the descriptions of the three agents in the main paper. Other Comments Or Suggestions: None Questions For Authors: - The motivation for including Agents 1 and 2 in the comparison is unclear. - Why are Agent 1 and Agent 2 designed in the current setting? - Additionally, why should Agent 1 ask whether the entity "cake" is present in the image, as shown in Fig.
5 of the supplement? Does this occur before or after generating the first image? - Does the performance of direct LLM prompting—where an image is given, the LLM describes it as a prompt, and a T2I model generates an image in a single turn—outperform multi-turn iteration? - How does the belief graph scale to complex scenes with dozens of entities? - Are there practical limitations on the number of entities and relationships that can be handled effectively? - To what extent does the T2I model adhere to the belief graph representation? - If the generated image does not follow the belief graph and the clarification question fails to identify the issue, how can the user provide feedback to help the agent improve the generated image? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > baseline We used only one single-turn T2I baseline because we want to keep the T2I backbone consistent between the baseline and all agents. To ensure consistency, we will run all agent experiments with a different model (such as the suggested ones) and we can definitely add this comparison to the paper. > the clarity of the descriptions of the three agents Please see Q3 in the rebuttal for Reviewer F62f. > Why are Agent 1 and Agent 2 designed in the current setting? We designed the two agents to compare a rule-based approach with LLM-based question-asking without explicit rules. Ag1's action selection is purely rule-based: it maximizes the approximated information gain of questions (weighted by importance scores in the belief graph), and the model can only ask about the existence of an entity, the attribute of any entity, or the relation between any pair of entities in the belief graph. Ag2 also relies on the belief graph, but it does not compute any heuristic scores and instead puts the belief graph in the context to prompt an LLM to ask questions. > why should Agent 1 ask whether the entity "cake" is present in the image, as shown in Fig. 5 of the supplement? Does this occur before or after generating the first image? Because Ag1 is rule-based and checks both the probabilities and importance scores, it may try to confirm the existence of an entity the user mentioned, because the importance score for that entity is very high and the probability is less than 1. The reason that the probability is less than 1 is that sometimes, when an entity is mentioned, the image doesn't necessarily have to include the entire entity. Moreover, the belief graph parsing is not perfect, so errors can occur since Ag1's question purely depends on the belief graph. In our current implementation, this occurs before generating the images.
> Does the performance of direct LLM prompting—where an image is given, the LLM describes it as a prompt, and a T2I model generates an image in a single turn—outperform multi-turn iteration? If the LLM describes the image in detail and the T2I model has good text-image alignment, it should outperform the multi-turn approach. This is because the multi-turn approach’s goal is to eventually collect all information about the image, and the detailed description is the ground truth of what information should be collected. > How does the belief graph scale to complex scenes with dozens of entities? Are there practical limitations on the number of entities and relationships that can be handled effectively? The belief graph can scale to dozens of entities as long as the belief graph parsing prompts and every entity together with their attributes in the belief graph fits the context length and generation length limits of the LLM. Practical limitations: as shown in Algorithm 1, the time complexity of belief parsing is linear in the number of entities and the number of relations, but in practice, we make parallel calls to the LLM to generate attributes of entities and relations in parallel. So the time complexity is limited by the requests per second the LLM can afford and how many parallel threads / processes the python environment can afford. > To what extent does the T2I model adhere to the belief graph representation? The belief graph describes the state of the agent, not the T2I model. The belief graph entails a distribution over possible prompts with different combinations of entities, attributes and relations. The distribution over the images the agent generates will adhere to the belief graph relatively faithfully if the T2I model’s T2I alignment is good. We also describe an approach to further enhance the alignment between agent behaviors and user intents in Q1 rebuttal to Reviewer F62f. 
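The parallel belief-parsing strategy this rebuttal mentions (attributes and relations generated by concurrent LLM calls) can be sketched with a thread pool. The `llm_generate_*` functions below are hypothetical stand-ins for real LLM requests, not the paper's code.

```python
from concurrent.futures import ThreadPoolExecutor

def llm_generate_attributes(entity):
    # Stand-in for a real LLM call that parses one entity's attributes.
    return {entity: {"color": "unspecified"}}

def llm_generate_relation(pair):
    # Stand-in for a real LLM call that parses one relation.
    subj, obj = pair
    return {(subj, obj): "unspecified"}

def parse_beliefs(entities, relation_pairs, max_workers=8):
    # Wall-clock time is bounded by the slowest single call rather than the
    # sum of all calls, as long as the endpoint tolerates max_workers
    # concurrent requests (the practical limit the rebuttal mentions).
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        attrs = list(pool.map(llm_generate_attributes, entities))
        rels = list(pool.map(llm_generate_relation, relation_pairs))
    return attrs, rels

attrs, rels = parse_beliefs(["cake", "table"], [("cake", "table")])
```

The per-call latency stays linear in the number of entities and relations only in the serial case; with the pool, throughput is limited by the LLM's requests-per-second budget, as the rebuttal notes.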
> If the generated image does not follow the belief graph and the clarification question fails to identify the issue, how can the user provide feedback to help the agent improve the generated image? The user can modify the current prompt the agent has and inspect/modify the belief graph to ensure it aligns with the user intent. Admittedly, the T2I models may still fail to generate images that align with the text descriptions. We described an approach to further enhance the alignment in the Q1 rebuttal to Reviewer F62f. It is still possible that the VLM might not be able to tell whether the answers to questions conditioned on the generated image match the user answers. These are intrinsic limitations of the current T2I models or VLMs, and improving those models is out of the scope of this paper. However, if there exist improved T2I models and VLMs, we can seamlessly integrate them into our agents. --- Rebuttal Comment 1.1: Comment: Most of my questions have been addressed. - Minor thoughts regarding Ag1: I understand the current logic, but I still wonder: if the prompt already specifies an entity (e.g., a cake), does Ag1 really need to confirm its existence before generation? This step feels somewhat redundant. Would it be more useful for Ag1 to ask questions that surface uncertainties instead of reaffirming what the prompt already states? > The belief graph describes the state of the agent, not the T2I model. - My understanding is that the belief graph reflects what the T2I model should generate. So my original question was about this: To what extent does the T2I model actually conform to the belief graph in practice? I believe this is clarified in the next point. > We described an approach to further enhance the alignment in Q1 rebuttal to Reviewer F62f. - While this approach is promising, I would caution that VQA-based evaluations have known limitations, as discussed in VQAScore [A].
These evaluations may struggle with complex prompts such as "someone talks on the phone happily while another person sits angrily." In such cases, divide-and-conquer methods like Davidsonian [B] tend to generate nonsensical questions (e.g., "is the someone happy?" or "is there another person?"), which raises concerns about their reliability in complex scenarios. Overall, I remain positive about the work and would keep my current rating. [A] Lin et al., VQAScore: Evaluating Text-to-Visual Generation with Image-to-Text Generation. ECCV 2024 [B] Cho et al., Davidsonian Scene Graph: Improving Reliability in Fine-Grained Evaluation for Text-Image Generation. ICLR 2024 --- Reply to Comment 1.1.1: Comment: > Minor thoughts regarding Ag1 Ag1's question generation process, as described in Appendix E.2, relies on the MHIS (Most Important to Ask Score) strategy. This strategy leverages the importance scores and probabilities associated with entities, attributes, and relations in the belief graph, and constructs heuristic scores for possible questions. The heuristic score dynamically determines the usefulness of posing a question about a specific element. Roughly speaking, the heuristic score of entity-existence-confirmation questions is higher for entities that have a high importance score, high entropy in their attributes, and a high probability of existence (but less than 1). The rationale behind this design is that entities that are very likely to exist should be clarified first. This design was based on our trial and error. For an entity that is explicitly mentioned in the prompt, the probability of existence can be estimated to be 0.99, and this is still considered less than 1. So, based on this suboptimal heuristic strategy, the approach may at times produce redundant questions. > an approach to further enhance the alignment in Q1 rebuttal to Reviewer F62f.... known limitations in VQA-based evaluations Thank you for the advice.
The approach we adopt uses the questions in the history of agent-user interaction, and this can alleviate the nonsensical question problem in the Davidsonian scene graph approach. > To what extent does the T2I model actually conform to the belief graph & more on T2I alignment Thank you for clarifying the question. The extent to which the T2I model conforms to the belief graph is currently bounded by the image-prompt alignment capabilities of the T2I model. In an effort to mitigate images that contain T2I errors, we perform and show a new experiment that finds that the agent-user QA pairs can improve T2I fidelity over a batch of N seeds. Each QA pair from the agent-user dialogue is converted into a (yes/no) VQA question about a single detail of the image. Then, using the VQA score metric with the new questions, we can remove erroneous images from a set of N seeds by filtering out images with low VQA scores. We perform this experiment on the DesignBench image-caption dataset from the paper. The design of the experiment is as follows:
- Using the 30 ground truth (GT) prompts of DesignBench, generate 10 images from 10 different random seeds with Imagen.
- Take the average DINO (I2I) score for all images against the GT image: this was found to be 0.7637.
- Take the first 5 QA pairs from Ag2 and convert each into a yes-or-no question whose expected answer is yes.
- Run the VQA scorer over all 10 images per caption.
- Choose the best image of the ten (i.e., the image with the highest score).
- Take the average DINO (I2I) score for the best image against the GT image: this was found to be 0.7838.
- Compute the delta between before and after filtering out images via agent QA pairs: the delta is +0.02.
Conclusions of the experiment: we find that by using the QA pairs from the agent in combination with the VQA score, we can improve image fidelity by filtering out images which do not follow the prompt. T2I models do not always follow a prompt exactly.
They can make small errors or ignore a single detail while retaining all others. This is an inherent bottleneck of our current pipeline, however we show using the QA pairs from the pipeline that we can overcome this limitation.
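The best-of-N filtering experiment described in this exchange — turn agent-user QA pairs into yes/no checks and keep the seed that scores highest — can be sketched as follows. The scorer here is a toy stand-in for a real VQA-score model, and `qa_to_yesno` stands in for the LLM-based rewriting step.

```python
def qa_to_yesno(question, answer):
    # A real implementation would rewrite the pair with an LLM into a yes/no
    # question (e.g. "What color is the rabbit?" / "grey" -> "Is the rabbit grey?").
    return f"{question} (expected: {answer})"

def select_best_image(images, qa_pairs, vqa_score):
    """Score each candidate image against every agent-user QA pair and keep
    the one that best satisfies the collected constraints."""
    questions = [qa_to_yesno(q, a) for q, a in qa_pairs]
    return max(images, key=lambda img: sum(vqa_score(img, q) for q in questions))

# Toy scorer standing in for a real VQA-score model: pretend image "b"
# follows the prompt details better than image "a".
toy_scores = {"a": 0.2, "b": 0.9}
best = select_best_image(["a", "b"],
                         [("Is the cake white?", "yes")],
                         lambda img, q: toy_scores[img])
```

With a real T2I model and VQA scorer, `images` would be the N seeds per prompt and the filter would discard low-scoring generations, as in the DesignBench experiment above.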
Summary: This paper addresses the issue of suboptimal image generation caused by vague or incomplete prompts provided by users to text-to-image (T2I) generators. It introduces proactive T2I agents designed to improve image generation by actively asking users clarification questions. These agents maintain their understanding in the form of a belief state, which users can view and modify directly. Additionally, the paper presents a scalable, automated evaluation benchmark for assessing T2I systems. Experimental results indicate that 90% of human participants found the proactive agents and editable belief states beneficial, with the proposed approach significantly enhancing the quality of generated images, as evidenced by higher VQAScores. ## Update after rebuttal The reviewer still recommends weak acceptance. Claims And Evidence: * Claim: Proactive T2I agents improve user interaction * Claim: Belief state interpretability and editability These two claims are supported by the experimental result indicating that 90% of human participants found proactive agents and the belief state useful, providing clear and convincing evidence. * Claim: Improvement in image alignment The work shows improved quality of generated images, supported convincingly by reported significant increases in metrics such as VQAScores. Methods And Evaluation Criteria: The evaluation protocol that guides the proposed DesignBench, as discussed in Sec 5.1, is sound and well-defined. The proposed DesignBench offers a good example of how to evaluate the performance of T2I systems with proactive agents. Theoretical Claims: No theoretical claims are made in the paper. Experimental Designs Or Analyses: The overall experiment design is sound and well-defined. However, there are some concerns about the design of the agents in the proposed work: Ag2 and Ag3 use LLMs without vision input, which indicates that the exploration is only conducted in the textual space.
This might cause misalignment between the textual exploration and the image generation. Supplementary Material: Yes, I reviewed the supplementary material, including the related work, contributions, and additional visualizations. Relation To Broader Scientific Literature: The proposed work improves the text-to-image workflow by introducing proactive agents that can ask questions to the user to improve the quality of the generated images. The work builds on prior text-to-image generation works and focuses on a novel aspect of it. Essential References Not Discussed: Asking for clarifications for text-to-image generation is not a new idea. The work missed related work [1], which also investigated ambiguity in users' prompts and asked the user for clarifications. [1] Is the Elephant Flying? Resolving Ambiguities in Text-to-Image Generative Models. https://arxiv.org/abs/2211.12503 Other Strengths And Weaknesses: The writing is a bit unclear in the explanation of each agent. The reader needs to look at the appendix to understand the design of each agent. Other Comments Or Suggestions: One minor detail: the bottom lines of Table 1 and Table 2 are missing. Questions For Authors: Is the capability of the whole pipeline limited by the prompt-following capabilities of the text-to-image model (i.e., if given a prompt that has a lot of details from clarifications, does the model always generate an image that is aligned with the prompt)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1: "Is the capability of the whole pipeline limited by the prompt-following capabilities of the text-to-image model?" Also in Experimental Designs Or Analyses: "misalignment between the textual exploration and the image generation." The current agent prototypes call off-the-shelf T2I models directly, i.e., treating T2I APIs as tools. The benefit is that it is seamless to switch to better T2I models when they become available. We have now included a new module under our agent framework to further improve text-image alignment with vision feedback. The key idea is to use the existing <agent question, user answer> history and ask a VLM those agent questions on the generated image. If the VLM's answers do not match the corresponding user answers, the agent can refine the prompt (such as adding "ensure the color of the rabbit is grey"; this refinement component can be replaced with better prompt optimization methods) and call T2I models to re-generate the image. We are testing this approach and will include more results in the updated paper. > Q2: Related work Thank you for sharing the related work. This paper proposes to use clarification questions to resolve ambiguities (a prompt has multiple meanings), but we aim to use clarification questions to resolve underspecification (the prompt is not ambiguous, but lacks information to fully describe the image). We will include this paper in the related work and note the difference. > Q3: Clarity of agent descriptions. Thank you for pointing this out. The main paper has page constraints and we had to make certain tradeoffs. The core of this paper is the new framework and overall design of proactive agents and automated evaluation. All 3 agents share the implementation of belief parsing (Algorithm 1), belief transition (Section 4.1) and principles of asking questions to collect information (Section 4.2). Those are detailed in the main paper.
However, we recognize that readers may also be curious about the specific implementations. We will consolidate the relevant sections in appendix and add the important details back to the paper, including (1) Ag1 selects actions by maximizing the approximated information gain of questions, so that the questions can reduce the entropy of belief graphs; (2) Ag2 uses in-context learning and generates questions conditioned on user prompt, belief graph and conversation history; (3) Ag3 uses the LLM to generate questions conditioned on the principles of asking questions and conversation history. We will fix the formatting of tables. --- Rebuttal Comment 1.1: Comment: The authors have addressed my questions in the review. The authors are encouraged to add the details mentioned in the rebuttal to the camera-ready version of the paper.
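The vision-feedback module described in this rebuttal (ask a VLM the agent's questions on the generated image, fold any mismatches back into the prompt, regenerate) could look like the sketch below. All function names are hypothetical, and the string-based "image" is a toy stand-in for real generation and VQA calls.

```python
def refine_with_vision_feedback(prompt, qa_history, generate_image, vlm_answer,
                                max_rounds=3):
    """Regenerate until the VLM's answers on the image match the user's answers."""
    image = generate_image(prompt)
    for _ in range(max_rounds):
        mismatches = [(q, a) for q, a in qa_history
                      if vlm_answer(image, q).lower() != a.lower()]
        if not mismatches:
            return image
        # Fold each unmet constraint back into the prompt, in the spirit of
        # "ensure the color of the rabbit is grey" from the rebuttal.
        fixes = "; ".join(f"ensure that the answer to '{q}' is '{a}'"
                          for q, a in mismatches)
        image = generate_image(prompt + ". " + fixes)
    return image

# Toy stand-ins: the "image" is just its prompt string, and the "VLM" reads it.
result = refine_with_vision_feedback(
    "a rabbit",
    [("what color is the rabbit", "grey")],
    generate_image=lambda p: p,
    vlm_answer=lambda img, q: "grey" if "grey" in img else "white",
)
```

The prompt-rewriting step is deliberately naive here; the rebuttal notes it could be replaced with stronger prompt optimization methods.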
ReQFlow: Rectified Quaternion Flow for Efficient and High-Quality Protein Backbone Generation
Accept (poster)
Summary: The paper introduces ReQFlow, a novel method for protein backbone generation. To address numerical instability in matrix-based representations, the main innovation is to use quaternions to model the rotations and spherical linear interpolation in the flow matching training. The paper also extends the rectified flow techniques to quaternion space. The resulting model achieves more than 30x speedup over RFDiffusion/Genie2 while maintaining high designability. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes; Theorems 3.1 and 3.2 are very similar to the results in the original rectified flow paper (https://arxiv.org/abs/2209.03003, Theorems 3.3 and 3.5). Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, the appendices A2 and B. Relation To Broader Scientific Literature: ReQFlow builds upon and advances several prior works in protein generation, flow-based models, and quaternion representations. Essential References Not Discussed: NA Other Strengths And Weaknesses: 1. The theoretical part of the paper is substantial, but the innovation is a bit limited. 2. Some parts of the experiments are not clear, e.g., when to stop training. 3. The training dataset can be improved. PDB data biases the model toward naturally abundant folds. Alternatively, sampling strategies to ensure structure diversity can be applied. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
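The spherical linear interpolation (SLERP) on unit quaternions at the heart of the summarized method can be written in a few lines of NumPy. This is a generic textbook sketch, not the authors' implementation; quaternions are in (w, x, y, z) order here.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:          # q and -q encode the same rotation: take the short arc
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: fall back to lerp for stability
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

identity = np.array([1.0, 0.0, 0.0, 0.0])
quarter_turn_z = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
midpoint = slerp(identity, quarter_turn_z, 0.5)   # an eighth turn about z
```

Because SLERP stays on the unit 3-sphere by construction, it avoids the re-orthogonalization issues that make matrix-based SO(3) interpolation numerically delicate, which is the motivation the summary describes.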
Rebuttal 1: Rebuttal: Thanks for your comments. **Correction of long-chain results:** Before resolving your concerns, we would like to report that we found a bug in our script during the rebuttal phase and have corrected the results of ReQFlow on long-chain generation in the anonymous link https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal-62BB/01_revised_longchain.pdf. **Note that the new results are lower than the wrong ones in the submission, but they are still significantly better than those of baselines, and thus do not affect our main claims and contributions.** Below we try to resolve your concerns one by one: **Q1. The novelty of our work, especially the theoretical part.** **A1.** As we claimed in the supplementary file, our theoretical result is a natural extension of Rectified Flow [1] and its proof applies the same pipeline. However, we indeed make the first attempt to extend the theoretical result in [1] to the SO(3) scenario. In addition, technically, our approach is novel for the following reasons: First, we innovatively replace rotation matrices with quaternions. Thanks to the quaternion representations, our QFlow significantly enhances designability compared to its counterpart FrameFlow, without sacrificing diversity and novelty (see Tables 2 and 4). Moreover, QFlow exhibits superior efficiency, achieving approximately 10% and 25% speedups on the PDB and SCOPe datasets, respectively (see lines 358-370, Tables 2 and 4). Second, although the rectified flow technique was proposed in [1], we pioneer its application to protein generation and provide theoretical guarantees on SO(3). As demonstrated in the table below, ReQFlow markedly improves efficiency while maintaining the other three metrics on par with other advanced models.
|Model|training set|model size|steps|time (s)|designability|diversity|novelty|
|-|-|-|-|-|-|-|-|
|Genie2|590k|15.7|1000|112.93|0.908|0.370|0.759|
|RFDiffusion|>208k|59.8|50|66.23|0.904|0.382|0.822|
|FoldFlow2|160k|672|50|6.35|0.952|0.373|0.813|
|ReQFlow|30k|16.7|50|1.81|0.912|0.369|0.810|

[1] Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow **Q2. More experimental details** **A2.** We adopt the same hyperparameter settings as FrameFlow for a fair comparison, and the key parameters are shown below:

|Hyperparameters|Values|
|-|-|
|aux\_loss\_t\_pass (time threshold)|PDB=0.5, SCOPe=0.25|
|aux\_loss\_weight|1.0|
|batch size|128|
|max\_num\_res\_squared|PDB=1000000, SCOPe=500000|
|max epochs|1000|
|learning rate|0.0001|
|interpolant\_min\_t|0.01|

Note that the batch size of FrameFlow is originally set to 128 for PDB and 100 for SCOPe. We retrained FrameFlow on SCOPe in our work and set the batch size to 128, the same as QFlow/ReQFlow. This doesn't influence the effectiveness and fairness of the comparison. As for when to stop training, we found that the loss converges after 600 epochs on the PDB dataset and 200 epochs on the SCOPe dataset. **Q3. Improve the diversity of training dataset by sampling** **A3.** For a fair comparison, we set our training dataset and training pipeline to be the same as those of the baseline models (e.g., FrameFlow, FoldFlow, and FrameDiff), so that the performance of various methods is achieved under the same setting and is comparable. To our knowledge, existing methods, including the baselines mentioned above, train their models on the raw PDB dataset. Because the experimental setting is the same, the superiority of our method can be clearly attributed to our technical contributions. In addition, as shown in Figure 3, the distribution of generated proteins obtained by our method is comparable to that of PDB and does not suffer from the severe mode collapse issue that FoldFlow does.
The diversity and novelty scores shown in Tables 2 and 4 also indicate that the proteins generated by our method have reasonable diversity and novelty. Sampling PDB data may mitigate the inductive bias in the dataset and further improve our model's performance, but it is out of the scope of our work at the current stage. In summary, we hope the above responses can resolve your concerns and help you re-evaluate our work. We would appreciate it if you consider raising your score based on our reply. Feel free to contact us if you have any other questions.
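The rectification (reflow) step discussed in this rebuttal — resample (noise, output) pairs from the trained flow and retrain on the induced coupling, optionally keeping only high-quality samples, as the reviewers note — can be sketched abstractly as below. The function names and toy stand-ins are ours, not the paper's.

```python
import numpy as np

def collect_reflow_pairs(model_sample, n_pairs, is_designable, dim=4):
    """Build (noise, sample) pairs for rectification training, keeping only
    pairs whose generated sample passes a quality filter (here: a stand-in
    for the designability filtering the reviewers discuss)."""
    pairs = []
    while len(pairs) < n_pairs:
        x0 = np.random.randn(dim)      # starting noise
        x1 = model_sample(x0)          # stage-1 model output, coupled to x0
        if is_designable(x1):
            pairs.append((x0, x1))
    # The stage-2 ("rectified") model is then trained to transport each x0
    # to its paired x1 along a straighter path, enabling few-step sampling.
    return pairs

# Toy stand-ins: an identity "model" and a filter that accepts everything.
pairs = collect_reflow_pairs(lambda x: x, n_pairs=8, is_designable=lambda x: True)
```

For rotations, the straight-line transport would be replaced by the quaternion SLERP path on SO(3), which is where the paper's theoretical extension applies.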
Summary: This paper focuses on the task of protein backbone generation. It proposes quaternion flow (QFlow) and rectified quaternion flow (ReQFlow) for generative modeling on a translation/rotation manifold. In particular, in contrast to previous work, QFlow models the rotations with quaternions, and the authors introduce quaternion flow matching. The quaternion formulation avoids numerical instabilities and is an elegant way to describe rotations. The authors then use the rectified flow method to rectify and accelerate their models, making it possible to sample with relatively few steps. The paper then applies the method to protein backbone generation, where each protein residue is represented by its backbone coordinate and a rotation on SO(3). The paper generally achieves appealing results, slightly outperforming or being on par with previous methods on standard benchmarks and evaluation metrics. Claims And Evidence: The authors claim that the proposed method, i.e. the quaternion flow matching and rectification, improves protein designability, accelerates generation, and overall achieves state-of-the-art performance. While the proposed method generally shows good performance, I think the "state-of-the-art" comment might be exaggerated. - Generally, it is well-known that there is a trade-off between designability and diversity and novelty, and different methods achieve different trade-offs along this Pareto frontier. The state-of-the-art claim would be appropriate if the method clearly achieved both the best designability *and* diversity/novelty in the same setting, but this is not the case. After all, in theory, we can achieve 100% designability trivially by always outputting the same designable protein, in the extreme case. - We see that QFlow gets the best diversity in Table 2, but not designability.
- ReQFlow gets very good designability (in part potentially due to designable protein filtering, see comment below), but shows reduced diversity. A similar trend is also visible in Table 3. - Hence, it would be more appropriate to claim that the method performs "on-par" with existing works. - Moreover, ReQFlow, in fact, not only does rectification but also blends in a distillation procedure where the rectification is only carried out on designable high-quality proteins. The fact that ReQFlow without this does not work well is concerning. Also, none of the baselines is trained on 100% designable protein backbones, but these baselines would potentially benefit from training on purely designable proteins, too. This makes the comparisons and claims somewhat questionable. Methods And Evaluation Criteria: In general, all proposed methods and evaluation criteria do make sense. My more specific concerns are outlined above. Theoretical Claims: The theoretical claims and proofs in the appendix all seem correct to me. These are mostly generalizations of the derivations done in the rectified flow paper to SO(3). Experimental Designs Or Analyses: I do not have any concerns regarding the general experimental design or analyses and no reason to believe that any experiment should be incorrect. My concerns are described above. Supplementary Material: Yes, I reviewed the supplementary material, but did not study it in detail. It consists of the theorem proofs and derivations (some of which I checked), helpful implementation and evaluation details, and additional visualizations and analyses. Relation To Broader Scientific Literature: The paper's key contributions are appropriately positioned with respect to the broader literature. In particular, previous works that conduct flow matching on rotation manifolds are cited and discussed, and works that leverage similar quaternion formulations in other machine learning areas are also cited and discussed. 
The most relevant related works in protein generative modeling are also discussed and cited.

Essential References Not Discussed: I cannot identify any essential references that are not discussed.

Other Strengths And Weaknesses:

**Strengths:** The paper is generally well-written, well-presented, and easy to read. The proposed quaternion flow matching is novel and original, and successfully validated. I think the proposed quaternion approach makes sense to model rotations in this setting. As the authors pointed out, similar methods have been used elsewhere in other domains.

**Weaknesses:**
- As mentioned above, I think the paper "overclaims" a bit when saying it gets state-of-the-art results.
- As discussed above, the ReQFlow method suffers from reduced diversity. There are trade-offs between designability and diversity.
- The paper focuses on simple protein backbone generation and, as discussed, performs approximately on-par with similar work overall, or slightly better when looking at individual metrics like designability only. In practical settings, protein generation is almost always carried out in a conditional setting. For instance, there is a target protein, for which we want to generate a binding protein, or there is a motif, and we want to generate the scaffold. Unfortunately, the paper does not study any such tasks.

Overall, I think the proposed method can be broadly useful to the community for modeling proteins with frame-based representations, including rotations on SO(3). While I do have criticisms, I do not see any fundamental flaws and hence I am generally leaning towards suggesting acceptance. I would be willing to raise my score for a rebuttal addressing my questions.

Other Comments Or Suggestions: I do not have any further comments or suggestions.

Questions For Authors:
1. The exponential step size scheduler for generating the rotations plays a critical role. This is well-known and similar to previous works.
Nonetheless, why exactly do you think that this is the case?
2. Related, did you train the model with one joint interpolation time $t$ for translations and rotations, or did you sample separate independent $t$ for rotations and translations during training?
3. If you used one joint interpolation time $t$, did you consider not only running inference with the accelerated rotation schedule but also training the model with the accelerated schedule (accelerated relative to the translations)?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your positive feedback and constructive comments.

**Correction of long-chain results:** Before resolving your concerns, we would like to report that we found a bug in our script during the rebuttal phase; the corrected results of ReQFlow on long-chain generation are available at https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal-62BB/01_revised_longchain.pdf. **Note that the new results are lower than the wrong ones in the submission, but they are still significantly better than those of the baselines on designability and efficiency, and comparable on novelty and diversity. This correction does not affect our main claims and contributions.** Below we resolve your concerns one by one:

**Q1. Trade-off among different metrics and the definition of SOTA performance**

**A1.** In our work, "SOTA performance" means that **ReQFlow achieves higher designability and computational efficiency than the baselines and obtains comparable diversity and novelty**. We emphasize designability because novelty and diversity are computed based on designable proteins. Very low TM-scores can be obtained even when the protein backbones are random noise, which is meaningless. To be clear, this does not mean that we do not care about novelty and diversity. We have offered experimental results for all metrics in Tables 2 and 4. In addition, although the diversity score degrades slightly after rectifying QFlow, the data distribution of ReQFlow in Figure 3 is similar to that of PDB. This means that ReQFlow has a low risk of mode collapse and achieves reasonable diversity. We will state the trade-off among the metrics explicitly and eliminate any potentially misleading content in the revised paper.

**Q2. The significance of Reflow and designable samples**

**A2.** In our opinion, both Reflow and designable samples are important.
Applying Reflow without selecting designable samples means fine-tuning the model under the supervision of noisy samples, which naturally leads to performance degradation (see Table 3). **However, this does not mean that Reflow is useless.** To verify our claim, we fine-tune a trained QFlow on generated designable samples, leading to a "self-distill QFlow". The comparison between self-distill QFlow and ReQFlow is at https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal-62BB/02_reflow_vs_selfdistill.pdf. Although self-distill QFlow obtains competitive results (comparable designability, slightly worse novelty and diversity), it changes the data distribution and suffers a high risk of mode collapse. In summary, our contributions include 1) first introducing quaternion algebra into flow matching and protein design; and 2) making the first attempt to apply Reflow to protein design. The second contribution is a potential technical route: even if it is not consistently better than self-distillation, that does not mean it is not worth exploring. Which is better, self-distillation or ReFlow? This is an interesting question that we leave for future work, as it is out of the scope of this paper.

**Q3. Conditional generation.**

**A3.** Existing methods (FrameFlow, FoldFlow, etc.) focus on unconditional backbone generation and design their experiments accordingly. We follow the same setting for a fair comparison. Nevertheless, following your suggestion, we train QFlow and FrameFlow on SCOPe as [1] did. We select three motifs (4JHW, 5IUS and 1PRW) and report the success rate (%) of generation (i.e., scRMSD ≤ 2Å, motifRMSD ≤ 1Å). The results show that QFlow is at least comparable to FrameFlow.

||4JHW|5IUS|1PRW|
|-|-|-|-|
|FrameFlow|4|76|99|
|QFlow|8|76|99|

[1] Improved motif-scaffolding with SE(3) flow matching

**Q4.
Why does the exponential step scheduler matter?**

**A4.** Given a trained model, at the early stage of inference, i.e., $t\in [0, 0.25]$, the loss is significantly higher than that near the endpoint (i.e., $t\in [0.75, 1]$), as shown in https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal-62BB/rot_loss.pdf. This means that **the vector field is not well learned in the early phase.** The reason for this phenomenon might be that **models like FoldFlow and FrameFlow learn the velocity field backward**, a strategy that makes predictions far from the endpoint challenging. The exponential scheduler allows the model to rapidly approach the endpoint with few steps, reducing the error accumulation caused by imprecise samples at the early stage.

**Q5. Joint interpolation time? and acceleration in training.**

**A5.** Yes, we train the model with one joint interpolation time. As noted in Eq. (7) of the FrameFlow paper, they attempted to accelerate the training phase as well, but this made training too easy, with little learning happening. This aligns with our observations in **A4**: accelerating training further exacerbates the difficulty of learning the velocity field at the starting point.

In summary, we hope the above responses can resolve your concerns and help enhance your confidence to raise your score.
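The intuition in **A4** can be made concrete with a small sketch. Below is a generic exponential time schedule for illustration only (the exact functional form and rate used by QFlow/FrameFlow may differ; the `rate` value is a hypothetical choice): the early integration steps take large strides toward $t = 1$, so few steps are spent in the poorly learned region near $t = 0$.

```python
import numpy as np

def exponential_schedule(num_steps: int, rate: float = 10.0) -> np.ndarray:
    """Generic exponential time schedule on [0, 1].

    Maps uniform steps s_k to t_k = (1 - exp(-rate * s_k)) / (1 - exp(-rate)),
    so the first few integration steps already move most of the way toward
    t = 1, spending little time in the poorly learned region near t = 0.
    """
    s = np.linspace(0.0, 1.0, num_steps + 1)
    return (1.0 - np.exp(-rate * s)) / (1.0 - np.exp(-rate))

t = exponential_schedule(10)
# Monotone from 0 to 1; with rate=10 the first step alone covers ~0.63
# of the unit interval, versus 0.1 for a uniform 10-step schedule.
```

Larger `rate` values concentrate even more of the trajectory into the first steps, which is consistent with the error-accumulation argument above.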
Summary: The paper proposes a method to train a generative model of protein backbones. They follow previous work in parameterizing protein backbones using a translation and a rotation. They have two main contributions:

- The use of normalized quaternions to parameterize rotations (most previous works use 3x3 rotation matrices)
- The application of the ReFlow process to enable fast sampling through additional training on synthetic data.

The paper also extends the theoretical results from ReFlow (diminishes transport cost). The empirical evaluation shows promising results for the proposed method.

Claims And Evidence: Partly. As I state in the sections below, I think a more careful discussion of the results (including all metrics) would be useful, and an extended analysis of the numerical stability of using unit quaternions (vs 3x3 rotation matrices) would strengthen the paper.

Methods And Evaluation Criteria: The evaluation criteria follow standard practices in unconditional protein backbone design, training on a few subsets of the PDB, and evaluating widely used metrics such as designability, diversity, novelty, and secondary structure content.

Theoretical Claims: The theoretical claims in the paper are extensions from the ReFlow [1] results (keeps correct marginals, reduces transport cost). I did not check the proof in detail.

[1] "Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow", by Liu et al.

Experimental Designs Or Analyses: As mentioned above, the metrics reported in the tables are very reasonable and widely used in the literature. The authors, however, appear to heavily rely on the designability metric for most of their analyses and claims in the text, which could be problematic. Most methods (including, for instance, Genie2) are able to achieve very high designabilities by changing the temperature parameter during sampling.
However, there’s usually a tradeoff between designability and diversity, and reducing sampling temperature often leads to reduced diversity. While the tables include several metrics (designability, diversity, novelty), it would be good for the text to discuss the results for all metrics more. For instance, line 375 (left column) claims that ReQFlow outperforms Genie2 and other baselines, but the comparison is only done in terms of designability. Taking the same lines in the table referred to in the text, it can be seen that ReQFlow, for that configuration, achieves worse diversity and novelty than Genie2. Having a more comprehensive analysis, commenting on these tradeoffs and all metrics, would be good. As another example, Table 7 in the appendix only includes designability. Adding other metrics to get a more holistic view of the methods’ performance would be informative.

Supplementary Material: I reviewed most of the supplementary material.

Relation To Broader Scientific Literature: As I mentioned above, the paper has two main contributions. (1) Using unit quaternions to represent rotations in protein backbone design. (2) Using the ReFlow procedure to rectify the flow and get better generation with fewer sampling steps.

Regarding (1). This idea attempts to improve backbone design methods that represent residues as a translation+rotation. However, instead of representing rotations using 3x3 rotation matrices, the paper proposes to use unit quaternions, claiming they provide benefits in terms of stability. I consider this to be the main reason why (1) is a relevant contribution (but please correct me if I’m wrong). This being said, I feel section 3.4, which comments on stability and does a small empirical study for both representations, could be a lot more detailed. The paper “Exploring SO(3) logarithmic map: degeneracies and derivatives” by Nurlanov discussed in quite some detail the instabilities of the log map when dealing with 3x3 rotation matrices.
In certain cases, a Taylor approximation can be used to avoid instabilities (eqs 9a and 9b in that paper). In other cases, due to numerical reasons, when the rotation angle is close to pi, things can only be determined up to a global sign (see paragraphs right before section 2.2). Which, if any, of these instabilities are being addressed by using unit quaternions? I think providing more details about this would be informative in understanding exactly which instabilities are being addressed, and how / why. Also, it would be interesting to have a clear discussion of which instabilities are left when working with unit quaternions, to fully understand how the two methods compare.

Regarding (2). I see this contribution as orthogonal to the method introduced, as most existing methods for protein backbone design could potentially benefit from ReFlow. However, to my knowledge this is the first paper to explore the use of ReFlow in this domain, showing promising results. However, doing ReFlow appears to affect the method’s performance (see “Other strengths and weaknesses” section below) quite a bit, not always positively.

Essential References Not Discussed: Most relevant references are included. However, please see the section “Other comments and suggestions” for some issues connected to an existing paper.

Other Strengths And Weaknesses: One weakness that came up when looking at the results in some detail concerns the application of ReFlow. Using ReFlow requires creating a synthetic dataset, where each sample is obtained by fully simulating a pre-trained flow model.

(w1) The synthetic datasets created are filtered to keep only designable samples. Without this filter, performance of the ReFlow-ed model takes a bit of a hit. Can the authors explain why this may be the case? Under perfect ReFlow, shouldn't the marginals be preserved? Is the important component here the ReFlow or the designability filter?

(w2) Generating the ReFlow synthetic dataset is quite expensive.
In fact, the datasets created in the paper are somewhat small, consisting of roughly 5k samples. After ReFlow, designability goes up (this is likely thanks to the self-distillation effect achieved by filtering the synthetic data for only designable samples), but diversity and novelty get worse. The authors don’t really comment on this. Could this be addressed by generating a larger synthetic dataset?

Other Comments Or Suggestions: There is some discrepancy between the description of diversity in the main paper and in the appendix. The main paper states that the pairwise TM-scores are averaged, while the appendix states that this is done per length, and then averaged across lengths. Which one is used for the results shown in the paper?

Additionally, Sections B.5 and B.6 in the Appendix appear to be copied from the paper “Proteina: Scaling Flow-based Protein Structure Generative Models”. While there are small changes (a few words here and there), there are entire paragraphs that are almost an exact copy. Since this is just describing metrics and baselines, and not a core part of the method, I don’t consider this to be very problematic. But copy / pasting entire paragraphs, especially without citing / mentioning the source (!) (the original paper, which also deals with backbone design using flow matching, is not even mentioned), seems in general unacceptable, even more so for an ICML submission. On this line, the Genie2 description in B.6 states that the noise scale was set to 1 for full temperature sampling. However, this full temperature sampling is not included in any of the results shown in the paper.

Questions For Authors: Reading other papers, such as FrameFlow and FoldFlow, I was always curious about the gamma parameter to control the rotation generation speed. Do you have any intuitions as to why this works so well? Most methods, including the one presented in the paper, perform quite poorly without it.

Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Other expertise']

Ethical Review Concerns: As mentioned in “Other Comments Or Suggestions”, sections B.5 and B.6 in the Appendix appear to be heavily copied from another paper (which is not cited nor mentioned). To be clear, this does not affect the paper contributions, as those sections mostly describe baselines and metrics. However, copy/pasting content from another paper, without citing the source, does not seem to be the correct thing to do.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your positive feedback and constructive comments. Before resolving your technical concerns:

**1) Apology for reusing sentences in Proteina.** We sincerely apologize for reusing sentences from Proteina in Appendices B.5 and B.6 without proper citation. We read this paper when it was under review anonymously. We did not mention it in the related work because we focus on the technical route using frame-based representations + flow-based generative learning, while Proteina applies a very different route and did not release code at that time. In the appendix, we originally had a sentence at the beginning of B.5 saying that we follow existing evaluation methods, with citations of FrameFlow, FoldFlow, Genie2, and Proteina. However, it was accidentally commented out when we adjusted the layout of Figure 6 and B.5. We thank the reviewers for pointing out our oversight. We take this matter seriously. **In the revised paper, we will 1) rewrite B.5 and B.6 completely in a different logic flow; 2) mention Proteina in the related work and introduce its technical route briefly; and 3) take the comparison with Proteina as our future work in the conclusion.**

**2) Correction of long-chain results:** We found a bug in our script during the rebuttal phase; the corrected results of ReQFlow on long-chain generation are available at https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal-62BB/01_revised_longchain.pdf. **Note that the new results are lower than the wrong ones in the submission, but they are still significantly better than those of the baselines. This correction does not affect our main claims and contributions.** Below we resolve your technical concerns one by one.

**Q1. More metrics on the long-chain experiment**

**A1:** As shown in https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal-62BB/03_long_chain_metrics.pdf, both QFlow and ReQFlow outperform FrameFlow on designability, though their diversity and novelty metrics drop.
We think this is reasonable because, compared with FrameFlow, both QFlow and ReQFlow generate more designable proteins; more proteins are then used to compute novelty and diversity, which may lead to higher TM-scores because of the inherent trade-off among the metrics. In addition, ReQFlow is currently not as good as larger models because 1) our training utilized only 23k samples, a small subset of the PDB, far fewer than Genie2 (590k), FoldFlow2 (160k), and RFDiffusion (>208k); and 2) our model has far fewer parameters than RFDiffusion and FoldFlow2. Training a larger QFlow/ReQFlow model on a larger dataset can be part of our future work, but it is out of the scope of this paper.

**Q2. The reason for the stability of using quaternions.**

**A2:** When the angle is close to $\pi$, the quaternion logarithm is simply `log(q) = (π/2)u`. This calculation is direct and numerically stable; there is no division by a small number. When the angle is close to $0$, the quaternion logarithm `log(q)` also faces potential instability, which is handled by clamping the divisor, a much simpler fix than a Taylor expansion.

**Q3. A larger synthetic dataset for rectifying the flow**

**A3:** We use 17.6k synthetic samples to train ReQFlow. However, using a larger dataset does not affect the results significantly. As shown in **A1**, this is due to 1) a trade-off among the metrics and 2) the limited model size and data we used.

||Steps|Designability|Diversity|Novelty|
|-|-|-|-|-|
|Original: 7.7k|500|0.972|0.377|0.828|
||50|0.912|0.369|0.810|
||10|0.676|0.337|0.760|
|Larger: 17.6k|500|0.968|0.381|0.832|
||50|0.932|0.379|0.825|
||10|0.724|0.360|0.793|

**Q4. The rationality and usefulness of ReFlow on designable proteins.**

**A4 (Short answer):** Selecting designable proteins matters, but the self-distillation of QFlow on such proteins may suffer mode collapse. ReQFlow avoids this issue to some extent and maintains competitive results. Please refer to our response **A2 to Reviewer K7Q3** for a detailed answer.
https://openreview.net/forum?id=f375uEmYDf&noteId=40R3BVTqCr

**Q5. The computation of the average of pairwise TM-scores**

**A5:** The score is first computed per length and then averaged across lengths. We will revise our paper to make this clear.

**Q6. Setting of baselines.**

**A6:** All the baselines are evaluated using the default settings in their repos. For Genie2, we use the default setting (noise scale = 0.6). We will revise our paper to make this clear.

**Q7. Explain the power of the exponential step scheduler.**

**A7 (Short answer):** **Current methods model the velocity field backward**, making predictions at the starting point difficult and inaccurate. The exponential scheduler allows models to approach the endpoint with few steps, avoiding severe error accumulation near the starting point. Please refer to our response **A4 to Reviewer K7Q3** for more details. https://openreview.net/forum?id=f375uEmYDf&noteId=40R3BVTqCr

In summary, we hope our responses can resolve your concerns and enhance your confidence to further support our work.
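The quaternion-logarithm stability argument in **A2** above can be illustrated with a minimal numpy sketch. This is a generic illustration of the technique, not the authors' implementation; the clamping constant `eps` is a hypothetical choice.

```python
import numpy as np

def quat_log(q: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Logarithm of a unit quaternion q = (w, x, y, z).

    Returns the pure-imaginary part (theta / 2) * u, where theta is the
    rotation angle and u the unit rotation axis.  atan2 is stable for every
    angle, including theta = pi; the only guard needed is clamping the
    divisor near theta = 0 (where the log tends to zero anyway).
    """
    w, v = q[0], q[1:]
    n = np.linalg.norm(v)
    half_angle = np.arctan2(n, w)        # well-conditioned on [0, pi]
    return half_angle * v / max(n, eps)  # clamp avoids 0/0 near theta = 0

# Rotation by exactly pi about the z-axis: q = (0, 0, 0, 1).
log_q = quat_log(np.array([0.0, 0.0, 0.0, 1.0]))
# log_q == (0, 0, pi/2): no division by a small number is involved.
```

Note how the only guarded case is the benign one near $\theta = 0$; the $\theta \approx \pi$ case, which is problematic for the matrix log map, needs no special handling here.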
Summary: This paper introduces a new flow matching method for unconditional protein backbone generation, based on quaternion representations and rectified flows. More specifically, in this work the rotational part of the backbone residues is represented as a unit quaternion, instead of an SO(3) matrix, which is the canonical choice in most previous work. It is demonstrated that this choice improves the numerical stability of calculations, leading to improved backbone quality as well as a speed-up of the inference process. Secondly, the “reflow” method from rectified flows is applied to pretrained models, leading to straighter flow trajectories and further improving the quality of sampled backbones.

## Update after rebuttal

I thank the authors for their detailed response and the additional experiments. The new results, in my view, enhance the clarity of the paper and further highlight its core contribution: a robust and numerically stable framework for protein backbone design. I will thus increase my score and recommend to accept the paper.

Claims And Evidence: The claims about increased numerical stability and higher quality backbones are supported by sensible experiments and results. The application of the reflow procedure on a curated set of training samples significantly increases designability while trading off diversity and novelty, as is expected. The comparison of ReQFlow and ReFrameFlow indicates an increase in designability of approximately 8% and a 25% speedup of ReQFlow compared to FrameFlow. Unfortunately, no detailed summary of training hyperparameters is provided, which would allow for a better understanding of the differences between the two models. The time threshold of 0.6 for the auxiliary loss in eqs. (50) and (51) indicates that the hyperparameters are not the same as in FrameFlow.
Methods And Evaluation Criteria: The comparison of the numerical stability of matrix and quaternion implementations in terms of a round-trip error is appropriate, given that similar operations are performed when calculating the geodesics on SO(3). The benchmark datasets SCOPe and the curated version of PDB are widely adopted in the field. The performed experiments cover most interesting benchmarks for unconditional protein backbone generation models, including evaluations of designability, diversity, novelty and secondary structure content in various settings.

Theoretical Claims: The theoretical claims of this paper are mainly concerned with repeating the proofs of Liu et al. for the case of quaternion algebra and seem correct.

Liu, Xingchao, Chengyue Gong, and Qiang Liu. "Flow straight and fast: Learning to generate and transfer data with rectified flow." arXiv preprint arXiv:2209.03003 (2022).

Experimental Designs Or Analyses: The experimental design choices seem sensible and are in accordance with common choices in the field of unconditional protein backbone generation. In particular, the definition of the metrics designability, diversity and novelty, as well as sampling steps and choices of backbone lengths to generate, are the same as for multiple other baselines.

Supplementary Material: Yes. Appendix A - C.

Relation To Broader Scientific Literature: The proposed method directly builds on established flow matching models for protein backbone generation, in particular FrameFlow. The usage of reflow is a direct application of the ideas in Liu et al. The idea of using quaternion representations and SLERP in exponential format for handling the rotations of residue frames is novel and could be readily applied also to many other models in the field which work with frame representations for backbone residues.

Liu, Xingchao, Chengyue Gong, and Qiang Liu. "Flow straight and fast: Learning to generate and transfer data with rectified flow."
arXiv preprint arXiv:2209.03003 (2022).

Essential References Not Discussed: All important references are included.

Other Strengths And Weaknesses:

Weakness: The paper claims that the IGSO(3) prior corresponds to uniformly sampling rotation axis and rotation angle, which is not the case (see e.g. discussion in Leach et al. and Yim et al.). Crucially, many other baselines use the IGSO(3) prior during training and a uniform prior on SO(3) during inference. Clarification on what is implemented for QFlow would be desirable.

Leach, Adam, et al. "Denoising diffusion probabilistic models on so (3) for rotational alignment." (2022).

Yim, Jason, et al. "Fast protein backbone generation with se (3) flow matching." arXiv preprint arXiv:2310.05297 (2023).

Other Comments Or Suggestions: For the computation of novelty, Fold-seek was used. Fold-seek has an issue where the TM-Score is provided in the wrong column of the output. The command provided in the supplementary suggests that this error might affect the novelty results reported in the tables? (see https://github.com/steineggerlab/foldseek/issues/323)

Questions For Authors:
1. Could you please provide a more detailed list of training hyperparameters and indicate if and how the QFlow models are different from FrameFlow? In the case of equal hyperparameters, could you explain how the increased numerical stability for rotation angles close to π can lead to such a significant increase in designability?
2. The reported 25% speedup over FrameFlow seems quite large. Could you provide more details on the specific computational bottlenecks that QFlow optimizes and the corresponding speedup for each of these individual components?
3. For the ablation results in Table 4 I would like to see results on more than one checkpoint to evaluate the statistical significance of the reported metrics. This would also allow for a better comparison between the models.

I am willing to raise my score if the above points are addressed in the rebuttal.
Code Of Conduct: Affirmed.

Overall Recommendation: 4
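The round-trip error study referenced in this review (angle → representation → angle) can be sketched in a few lines of numpy. This is a generic illustration built from standard textbook formulas (Rodrigues' formula, arccos-of-trace angle recovery, atan2-based quaternion recovery), not the paper's exact protocol, and the test angle is an arbitrary choice.

```python
import numpy as np

def axis_angle_to_matrix(axis, theta):
    """Rodrigues' formula: unit axis + angle -> 3x3 rotation matrix."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def matrix_round_trip_angle(R):
    """Recover the angle from a rotation matrix via arccos of the trace.

    This recovery is ill-conditioned near theta = pi, where the derivative
    of arccos blows up and the skew-symmetric part of R vanishes.
    """
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def quat_round_trip_angle(axis, theta):
    """Angle -> unit quaternion -> angle, via atan2 (stable for all angles)."""
    w = np.cos(theta / 2.0)
    v = np.sin(theta / 2.0) * np.asarray(axis)
    return 2.0 * np.arctan2(np.linalg.norm(v), w)

axis = np.array([1.0, 0.0, 0.0])
theta = np.pi - 1e-7          # a rotation angle very close to pi
R = axis_angle_to_matrix(axis, theta)

err_matrix = abs(matrix_round_trip_angle(R) - theta)
err_quat = abs(quat_round_trip_angle(axis, theta) - theta)
# err_quat stays near machine precision, while err_matrix is amplified by
# the ill-conditioning of arccos near -1.
```

The quaternion recovery via `atan2` is well-conditioned for all angles, which is one way to see the stability benefit the review asks the authors to analyze in more depth.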
Rebuttal 1: Rebuttal: Thanks for your positive and constructive comments.

**Correction of long-chain results:** Before resolving your concerns, we would like to report that we found a bug in our script during the rebuttal phase; the corrected results of ReQFlow on long-chain generation are available at the anonymous link https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal-62BB/01_revised_longchain.pdf. **Note that the new results are lower than the wrong ones in the submission, but they are still significantly better than those of the baselines, and thus do not affect our main claims and contributions.** Below we resolve your concerns one by one.

**Q1. Hyperparameters of QFlow/ReQFlow**

**A1.** We adopt the same hyperparameter settings as FrameFlow for a fair comparison; the key parameters are shown below:

|Hyperparam.|Value|
|-|-|
|aux\_loss\_t\_pass (time threshold)|PDB=0.5, SCOPe=0.25|
|aux\_loss\_weight|1.0|
|batch size|128|
|max\_num\_res\_squared|PDB=1000000, SCOPe=500000|
|max epochs|1000|
|learning rate|0.0001|
|interpolant\_min\_t|0.01|

The batch size of FrameFlow is originally set to 128 for PDB and 100 for SCOPe. We retrained FrameFlow on SCOPe in our work and set the batch size to 128, the same as QFlow/ReQFlow. This does not affect the effectiveness and fairness of the comparison. In addition, the value "0.6" in Eqs. (50, 51) does not represent a time threshold but rather an inter-atomic distance threshold, with the unit nanometers (nm).

**Q2. Clarification on SO(3) prior**

**A2.** Following FrameFlow, we use an IGSO(3) prior for training and a uniform prior on SO(3) for inference. We apologize for the mistake of omitting the density. The text on the right side of lines 149-150 actually means "uniformly sampling ... with the following density:"

$$ f(\omega) = \frac{1 - \cos\omega}{\pi} \sum_{l=0}^{\infty} (2l + 1) e^{-l(l+1)\epsilon^2} \frac{\sin\left(\left(l + \frac{1}{2}\right)\omega\right)}{\sin(\omega/2)}.
$$

This description is the same as Leach et al.'s definition. We will clarify this point in the revised version.

**Q3: Does the Fold-seek issue affect the novelty results?**

**A3:** Thank you for pointing out this known issue (Foldseek Issue #323). We were aware of this issue, where the `evalue` column reports the TM-score in TM-align mode (`--alignment-type 1`). To avoid it, as shown in our supplementary command, we requested the TM-score using the `--format-output ... alntmscore, ...` mode. Therefore, we utilized the correct `alntmscore` column for our analysis, ensuring our reported novelty results based on TM-scores are accurate and unaffected by this issue.

**Q4. Why does numerical stability at large angles (close to $\pi$) increase designability?**

**A4.** As mentioned in Section 3.4, lines 258–260, during inference, the probability of suffering at least one large angle ($\geq \pi - 10^{-2}$) per protein is 0.59 for PDB and 0.34 for SCOPe. Meanwhile, Figure 2(a) illustrates that when the angle is close to $\pi$ ($\geq \pi - 10^{-2}$), the matrix’s mean round-trip error increases dramatically. Because this round-trip step (i.e., conversions between rotation angles and rotation matrices/quaternions) is a critical and high-frequency operation of the interpolation during inference, the round-trip errors propagate and lead to significant orientation deviations in residues.

**Q5. Why do QFlow/ReQFlow achieve a speedup over FrameFlow?**

**A5.** We conducted an in-depth analysis of the code, and the speed improvements can be attributed to the following 3 components.

**1) Fewer floating-point operations:** When describing a 3D rotation, a matrix multiplication (27 mul., 18 add.) is intrinsically more computationally expensive than a quaternion multiplication (16 mul., 12 add.).
**2) Fewer matrix-vector multiplications:** For each residue, matrix-based interpolation performs 3 matrix multiplications (i.e., computing the relative rotation matrix, implementing the Rodrigues formula, and applying one rotation matrix to the initial matrix), while quaternion-based interpolation performs 2 quaternion multiplications (i.e., two Hamilton products).

**3) Cheaper nonlinear operations:** The matrix-based log/exp maps require handling numerical issues (although they still fail on angles $\geq \pi - 10^{-2}$), e.g., using truncated Taylor approximations. In contrast, quaternions rely on simple mathematical operations, such as acos, sin, cos, and normalization (sqrt/division).

**Q6. Ablation results on more checkpoints**

**A6.** For the ablation results, we use 5 checkpoints to evaluate their statistical significance, demonstrating their consistency and stability.

|Exponential Scheduler|Flow Rectification|Data Filtering|500|50|10|
|-|-|-|-|-|-|
|x|x|x|0.143±0.079|0.047±0.030|0.002±0.002|
|✓|x|x|0.910±0.029|0.795±0.051|0.309±0.058|
|✓|✓|x|0.612±0.084|0.519±0.154|0.385±0.136|
|✓|✓|✓|0.969±0.027|0.932±0.022|0.698±0.041|

We hope the above answers help resolve your concerns and enhance your confidence to support our work.

---

Rebuttal Comment 1.1: Comment: Answer to rebuttal: I thank the authors for their response. I would like to follow up on some of the points:

Q3: Looking at the command in the Appendix, and comparing the reported novelty with values for FrameFlow computed using the numbers from the evalue column, I still think that the reported values might be incorrect. I would encourage the authors to have another look at this issue and take the values directly from the evalue column. This should actually make the results better.

Q4: I acknowledge the fact that the improved numerical precision can lead to higher quality backbones.
However, I still have the concern that the reported increase in designability cannot be attributed solely to the usage of quaternions over rotation matrices. In my experience, when training FrameFlow, the designabilities between checkpoints show large fluctuations (up to 10%) even after the loss has converged. It would thus be good to know how the checkpoints were selected and what measures were taken to ensure that the above-mentioned effect did not distort the results.

Q5: The discussed differences between matrix and quaternion algebra show that the use of quaternions is beneficial for computational performance. I am, however, still skeptical about the magnitude of the acceleration. I would suspect that, looking at the full pipeline, the neural network is a much bigger bottleneck than the update of the frames. Do you have numbers for the relative compute time of the forward pass of the neural network compared to the frame updates?

Q6: Thank you for providing errors for the values in Table 3; I think they are very helpful for judging the results. In my original review, however, I referred to Table 4, since I think errors are especially important for the comparison between the FrameFlow and QFlow models (see also the point on Q4). I think it would be beneficial if these were included in the final version of the paper.

For now, I keep my current rating.

---

Reply to Comment 1.1.1:

Comment: Thank you for your insightful feedback. Over the past few days, we have added more experiments to resolve your remaining concerns.

**Q3. The correction of the novelty computation.**

**A3:** Thank you for your suggestion. We now take the values directly from the evalue column to update our novelty computations. The revised Tables 2 and 4 are shown in https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal_Second_Phase-2F4E/revised_table_2_4.pdf.
On PDB, QFlow/ReQFlow shows a trade-off between designability and novelty, while on SCOPe, QFlow/ReQFlow achieves higher designability with competitive novelty scores.

**Q5. Comparison of QFlow and FrameFlow on speed.**

**A5:** Thanks for your comment. We analyzed the runtime of QFlow and FrameFlow and found an implementation discrepancy: in the `interpolant.sample` function, when reconstructing the protein frame trajectory into atomic coordinates, our QFlow implementation only considers the first and last proteins, whereas FrameFlow reconstructs all intermediate steps. For a fair comparison, we reconstruct the first and last proteins for both methods and record their runtime (seconds) for generating a protein of length 300 in the PDB experiment and length 128 in the SCOPe experiment:

|Datasets|Methods|Steps|Model Prediction|Rotation Update|Translation Update|Total Time|
|-|-|-|-|-|-|-|
|PDB|FrameFlow|500|16.308±0.093|0.608±0.005|0.033±0.000|17.053±0.099|
|||50|1.609±0.013|0.059±0.001|0.003±0.000|1.727±0.014|
|||20|0.635±0.008|0.024±0.001|0.001±0.000|0.713±0.010|
||QFlow|500|16.732±0.089|0.492±0.004|0.036±0.000|17.370±0.111|
|||50|1.670±0.003|0.048±0.000|0.003±0.000|1.776±0.004|
|||20|0.653±0.001|0.019±0.000|0.001±0.000|0.726±0.002|
|SCOPe|FrameFlow|500|11.947±0.125|0.601±0.003|0.033±0.000|12.688±0.124|
|||50|1.166±0.013|0.059±0.001|0.003±0.000|1.275±0.016|
|||20|0.471±0.002|0.025±0.000|0.001±0.000|0.539±0.003|
||QFlow|500|11.994±0.037|0.483±0.003|0.034±0.000|12.602±0.040|
|||50|1.166±0.015|0.048±0.001|0.003±0.000|1.262±0.021|
|||20|0.466±0.002|0.019±0.000|0.001±0.000|0.528±0.002|

Your intuition is correct: the neural network feedforward computation is the main computational bottleneck.
However, we believe this new result does not diminish our core contributions: 1) the quaternion operations are 15–20% faster than the rotation matrix-based operations (see the Rotation Update column), and 2) applying quaternions indeed leads to better numerical stability and improves efficiency by reducing inference steps while maintaining high designability. We will add the above efficiency analysis to the revised paper, and we have updated the runtime results; see https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal_Second_Phase-2F4E/revised_table_2_4.pdf.

**Q4 & Q6. Checkpoint selection and verifying the contribution of quaternion operations.**

**A4 & A6:**

**Checkpoint selection strategy.** For each method, after observing loss convergence, we select checkpoints based on the metrics of the generated protein validation set. We choose the checkpoint where `ca_ca_valid_percent` > 0.99 and the proportions of secondary structures are closest to the dataset's average values. For example, when selecting the checkpoint of QFlow on PDB, the generated protein validation set has `ca_ca_valid_percent` = 0.996, `helix_percent` = 0.400, and `strand_percent` = 0.283.

**Results in Table 4 achieved by different checkpoints.** To demonstrate that the improvement is attributable to quaternion operations, we show the performance below using five checkpoints for each model. (Since computing the novelty score is time-consuming and the rebuttal period is limited, we report one checkpoint's novelty in https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal_Second_Phase-2F4E/revised_table_2_4.pdf and report multi-checkpoint designability and diversity here.) The result for each checkpoint is in https://anonymous.4open.science/r/6342_ReQFlow_Rebuttal_Second_Phase-2F4E/statistical_significance.pdf.
||Step|Fraction|scRMSD|Diversity|
|-|-|-|-|-|
|FrameFlow|500|0.851±0.016|1.437±0.035|0.392±0.007|
||50|0.811±0.017|1.566±0.053|0.378±0.008|
||20|0.708±0.023|1.966±0.089|0.370±0.005|
|QFlow|500|0.893±0.022|1.288±0.082|0.392±0.005|
||50|0.854±0.019|1.441±0.048|0.380±0.005|
||20|0.762±0.015|1.766±0.051|0.372±0.004|
|ReFrameFlow|500|0.924±0.007|1.213±0.031|0.407±0.004|
||50|0.906±0.006|1.268±0.004|0.407±0.001|
||20|0.884±0.009|1.399±0.032|0.405±0.004|
|ReQFlow|500|0.947±0.007|1.131±0.025|0.406±0.003|
||50|0.922±0.007|1.189±0.024|0.411±0.002|
||20|0.910±0.012|1.282±0.038|0.405±0.001|

The results show that the designability improvement from using quaternions is stable and robust. These results, including novelty scores, will be included in the final version of the paper.

**We hope that the above responses resolve your remaining concerns completely and enhance your confidence to raise your score. We would appreciate your further support of our work in the following discussion and decision phases.**
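The round-trip instability described in A4 above can be reproduced in a few lines. The following is an illustrative NumPy sketch, not the authors' code; the rotation axis and the test angle are arbitrary choices. It compares recovering a near-$\pi$ angle from a rotation matrix (via the trace formula used by matrix log maps) with recovering it from a unit quaternion's scalar part.

```python
import numpy as np

def exp_so3(axis, theta):
    """Rodrigues formula: axis-angle -> rotation matrix."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def angle_from_matrix(M):
    """Matrix log map (angle part): theta = arccos((tr(M) - 1) / 2)."""
    return np.arccos(np.clip((np.trace(M) - 1.0) / 2.0, -1.0, 1.0))

def angle_from_quat(theta):
    """Angle -> unit quaternion scalar part -> angle (quaternion round trip)."""
    w = np.cos(theta / 2.0)  # scalar part of the unit quaternion
    return 2.0 * np.arccos(np.clip(abs(w), -1.0, 1.0))

axis = np.array([0.0, 0.0, 1.0])
theta = np.pi - 1e-9  # a "large angle" close to pi
err_matrix = abs(angle_from_matrix(exp_so3(axis, theta)) - theta)
err_quat = abs(angle_from_quat(theta) - theta)
print(f"matrix round-trip error: {err_matrix:.1e}")
print(f"quaternion round-trip error: {err_quat:.1e}")
```

The matrix route loses the angle because $\cos\theta$ rounds to $-1$ in double precision when $\theta$ is within roughly $10^{-8}$ of $\pi$, and arccos is ill-conditioned near $-1$; the quaternion scalar part $\cos(\theta/2)$ stays near $0$, where arccos is well-conditioned.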
On Volume Minimization in Conformal Regression
Accept (poster)
Summary: The authors present a method that minimizes the volume of a conformal prediction region, subject to the coverage being the desired one (1- alpha). They show its theoretical properties, and its empirical validity. Claims And Evidence: The claims are clear and convincing. I'd like the authors, though, to address all my questions (see below). Methods And Evaluation Criteria: The proposed evaluation criteria are apropos for the authors' goal. Theoretical Claims: The theoretical claims seem largely correct. Experimental Designs Or Analyses: The experimental designs and analyses seem sound and valid. Supplementary Material: I have only checked the statements' proofs, which appear largely correct. Relation To Broader Scientific Literature: The paper advances the state of the art on conformal prediction region methods, even though some related works are not considered. Please refer to the Questions section. Essential References Not Discussed: Some references to known problems and results are missing. Please refer to the Questions section. Other Strengths And Weaknesses: The paper is well-written, and studies an interesting problem. I gave it a score of 3, with the willingness to increase it if the authors are able to address the questions I raise. Other Comments Or Suggestions: Please refer to the Questions section. Questions For Authors: **Q1** In https://link.springer.com/book/10.1007/b106715, Theorem 2.10, the authors provide a proof by construction, that can be used to derive the narrowest possible (in the sense of diameter or volume) conformal prediction region. How does the authors' work relate to that result? **Q2** In https://proceedings.mlr.press/v216/sale23a.html, the authors show that the volume is not a good measure for (epistemic) uncertainty for classification problems with more than 2 classes. How does this result compare with what the authors present in their paper? 
**Q3** Despite the ubiquitous claim that conformal prediction (CP) is an uncertainty *quantification* tool, it is actually not. CP is an uncertainty *representation* tool. Indeed, CP *represents* uncertainty via the conformal prediction region. It does not quantify it: there is no real value attached to any kind of predictive uncertainty (aleatoric or epistemic, AU and EU, respectively). Some claim that the diameter (or the volume, as in the authors' work) of the conformal prediction region quantifies the uncertainty, but even in that case, it is unable to distinguish between AU and EU. Indeed, the diameter (and the volume too) is a positive function of both: it increases as both increase, and hence it cannot be used to distinguish between the two. This was already pointed out in https://openreview.net/forum?id=4NHF9AC5ui. I'd appreciate a discussion by the authors on this matter. I also briefly mention that a recent paper merged CP with imprecise probabilistic tools to obtain a "proper" AU and EU disentanglement: https://openreview.net/forum?id=L7sQ8CW2FY. The authors may find it interesting, and possibly useful for the discussion section and/or for future research.

**Q4** It is also worth noticing that CP is not entirely model-free, since the volume (and diameter) of the resulting conformal prediction region depends on the choice of the non-conformity score. This was already pointed out in https://www.arxiv.org/abs/2502.06331. The authors are suggested to include this fact in their work.

Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable feedback and for pointing out relevant related work. Below, we address each of their questions and comments.

- *Q1. In https://link.springer.com/book/10.1007/b106715, Theorem 2.10, the authors provide a proof by construction, that can be used to derive the narrowest possible (in the sense of diameter or volume) conformal prediction region. How does the authors' work relate to that result?*

We thank the reviewer for highlighting this result. After carefully examining Theorem 2.10 and its proof, we find that our work does not explicitly relate to it. From our understanding, the proof constructs a theoretical conformal set that is guaranteed to be at least as efficient as a given conservatively valid confidence predictor. However, it is not evident that this result can be leveraged to construct a conformal predictor of minimal size in practice. In contrast, our approach explicitly formulates and empirically solves the length minimization problem.

- *Q2. In https://proceedings.mlr.press/v216/sale23a.html, the authors show that the volume is not a good measure for (epistemic) uncertainty for classification problems with more than 2 classes. How does this result compare with what the authors present in their paper?*

The referenced paper differs significantly from our work. First, it does not explicitly address Conformal Prediction (CP) but rather focuses on credal sets as a means of quantifying uncertainty. Additionally, their analysis is centered on classification, whereas our work focuses on regression (extending to high-dimensional settings is left as future work by the authors). That being said, the authors do show that, in dimensions greater than two, measuring uncertainty through the volume of the credal set is inadequate—at least with respect to the axioms they propose (Section 3 of [1]).
To draw a direct comparison between our work and the one mentioned by the reviewer, one would need to establish a precise connection between credal sets and conformal prediction and investigate whether the results in [1] extend to the volume of conformal prediction sets. We find this to be an interesting open question for further analysis of CP as an Uncertainty Quantification (UQ) technique. [1] https://proceedings.mlr.press/v216/sale23a.html - *Q3 Despite the ubiquitous claim that [...] for future research.* We thank the reviewer for bringing these references to our attention. We fully agree that CP is better described as an uncertainty representation tool rather than a UQ technique, which is why we deliberately avoided making such a claim in our work. It appears that this comment, like the previous question, highlights a general limitation of CP rather than a specific limitation of our contribution. Nevertheless, we find that linking CP with more classical UQ frameworks (e.g., credal sets) presents an exciting research direction. We will mention this in the conclusion of our paper as a promising avenue for future work. - *Q4 It is also worth to notice that CP is not entirely model-free, since the volume (and diameter) of the resulting conformal prediction region depends on the choice of the non-conformity score. This was already pointed out in https://www.arxiv.org/abs/2502.06331. The authors are suggested to include this fact in their work.* We fully agree with the reviewer, as this observation aligns with one of the conclusions of our paper: the choice of the prediction set class $\mathcal{C}$, which depends on $\mathcal{F}$ in Section 3, has a significant impact on the volume of the conformal prediction region. We will clarify this point in the final version of our manuscript and include a reference to the mentioned paper (noting that it was made available after the ICML submission deadline).
Summary: The paper theoretically studies the efficiency of CP sets, providing bounds for the specific task of regression. The bounds are given assuming a fixed base predictor, or in the case where the predictor is learned on held-out data (split CP). For the latter case, the authors also highlight the importance of minimizing the Quantile Absolute Error (QAE) in order to reduce inefficiency at calibration time. This insight is then used to propose two algorithms, EffOrt and Ad-EffOrt, which minimize a surrogate of the QAE. The solution is tested for regression tasks. Claims And Evidence: The paper claims to contribute to the existing literature by deriving upper bounds on the size of split CP and proposing a training step for split CP aimed at minimizing the set size of prediction sets. Methods And Evaluation Criteria: The problem is tested on synthetic data; it would be better to test it on actual benchmarks used in CP. Theoretical Claims: I haven't carefully checked the proofs, but the results are consistent with existing ones. Experimental Designs Or Analyses: Experiments are carried out on simple synthetic data; however, the Appendix contains a more extended evaluation. Supplementary Material: No. Relation To Broader Scientific Literature: There is quite a lot of missing literature. There already exist works that study the expected set size or informativeness of split CP in standard settings as well as under covariate shift. Essential References Not Discussed: I believe that the current literature review misses many existing works. To the best of my knowledge, the first paper to theoretically study the expected set size of CP sets was [1]. This paper studies the set size for different tasks, including the regression task considered here. It examines the expected size under a fixed predictor. This analysis has been extended to split CP, where the learner is not fixed but learned, both under the i.i.d. setting and under covariate shift [2-3].
While these works have not proposed algorithmic solutions to minimize the expected inefficiency of CP sets, [4] and [5] propose training algorithms starting from a PAC-Bayes bound on the efficiency of split CP and from information-theoretic tools, respectively.

[1] Dhillon, Guneet S., George Deligiannidis, and Tom Rainforth. "On the expected size of conformal prediction sets." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.

[2] Zecchin, Matteo, et al. "Generalization and informativeness of conformal prediction." 2024 IEEE International Symposium on Information Theory (ISIT). IEEE, 2024.

[3] Zecchin, Matteo, et al. "Generalization and informativeness of weighted conformal risk control under covariate shift." arXiv preprint arXiv:2501.11413 (2025).

[4] Sharma, Apoorva, et al. "PAC-Bayes generalization certificates for learned inductive conformal prediction." Advances in Neural Information Processing Systems 36 (2023): 46807-46829.

[5] Correia, Alvaro, et al. "An information theoretic perspective on conformal prediction." Advances in Neural Information Processing Systems 37 (2024): 101000-101041.

Other Strengths And Weaknesses: I think the paper does a nice job of leveraging insights from theoretical results to propose a new training algorithm. The proposed methods seem to have advantages over existing ones. However, the paper overlooks many existing works, which I believe undermines its originality claims and experimental results. For example, how would the algorithms in [4]-[5] compare to EffOrt? From my understanding, when using EffOrt, one commits to a coverage level $\alpha$, assuming that the same $\alpha$ will be used for calibration. However, standard CP is a post-hoc calibration method, and the training of the base predictor is independent of $\alpha$. I suggest including experiments to evaluate the performance of EffOrt when the $\alpha$ at calibration time differs from the one used for training.
In many instances, retraining a model is not feasible. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thoughtful feedback and the references provided. We acknowledge that some relevant prior works were overlooked, and we will incorporate them into the related work section to ensure a more comprehensive discussion. While we recognize that this may slightly refine the novelty claims of certain aspects of our work, we firmly believe that our contribution remains **distinct and complementary** to these existing efforts. In particular, while most of the cited works (except [5], which focuses on linking CP set size to the conditional entropy $H(Y|X)$) aim to theoretically analyze the size of conformal prediction (CP) sets, **none explicitly study the notion of "excess volume loss"** introduced in our work. This metric quantifies the gap between the CP set size and an oracle reference, offering a more precise measure of inefficiency. For instance, while [4] provides a PAC-Bayesian upper bound on CP set size, their analysis does not incorporate an oracle baseline, preventing an explicit characterization of suboptimality. Below, we present a new paragraph that will be added to the related work section, directly addressing the reviewer’s concerns. --- **New Paragraph: Comparison with Missing Related Works** *Recent studies [1,2,3] have analyzed the expected size of split CP sets, highlighting key factors such as the impact of the score function choice [1] and the generalization properties of the base predictor [2,3]. In [4], the authors take a step further by deriving a PAC-Bayesian upper bound on the expected CP set size, involving an empirical estimate of the size and a KL divergence term. They also propose an algorithm that modifies the calibration step of split CP to minimize their bound; however, they do not explicitly instantiate their bound for their proposed algorithm, leaving the practical implications unclear.* *Our work differs from these approaches in several fundamental ways. 
First, our theoretical guarantees are PAC-based rather than PAC-Bayesian, eliminating the need for restrictive assumptions such as boundedness of the score function or of the target variable $Y$. More crucially, our analysis introduces an upper bound on the excess volume loss, explicitly quantifying how much larger the CP set is compared to an oracle reference. This perspective is absent in prior works, which focus only on bounding the expected CP set size without reference to an optimal baseline. Finally, it is worth mentioning [5], which provides an information-theoretic perspective by linking CP set size to conditional entropy.*

---

**$\alpha$ different between learning and calibration**

This is an interesting question. To address it, we conducted additional experiments where the QAE problem was solved with $\alpha_{QAE} = 0.05, 0.1$ and $0.9$. The results are presented in the following anonymous figures:

(1) [Figure 1](https://ibb.co/fYR6MDgy) – Length for Normal and Normal + Extreme,
(2) [Figure 2](https://ibb.co/5gdtRJ0t) – Length for Pareto and Pareto + Extreme,
(3) [Figure 3](https://ibb.co/HTLb2DS4) – Coverage for Normal and Normal + Extreme,
(4) [Figure 4](https://ibb.co/S488nwK6) – Coverage for Pareto and Pareto + Extreme.

From Figures 1 and 2, we observe that QAE with $\alpha_{QAE}=0.1$ produces the smallest prediction sets. However, when no extreme values are present, the choice of $\alpha_{QAE}$ appears to have little impact. In contrast, when extreme values exist in the distribution, selecting $\alpha_{QAE}=0.05$ significantly deteriorates the final prediction sets. This is expected, as the dataset is constructed such that 5% of the values are extreme. Consequently, QAE with $\alpha_{QAE} = 0.05$ attempts to find an $\hat{f}$ that minimizes the error for these extreme values, which is the opposite of being "robust." Figures 3 and 4 confirm that the calibration step ensures the final coverage remains close to 0.9.
Notice that a similar sensitivity to $\alpha$ can be observed in other conformal prediction methods, such as CQR, when the $\alpha_{CQR}$ values for the quantile regressors are poorly chosen. --- Rebuttal Comment 1.1: Comment: Thanks for acknowledging the existence of these past works. I also believe there is a degree of orthogonality between previous research and the results presented in the paper. The paragraph you included does a good job of clarifying these aspects. I also appreciate these new results; to me, they look very insightful and worth including in the final version—perhaps in the appendix. I will raise my score in light of these modifications.
Summary: **Post-rebuttal edit:** The authors wrote convincing answers to my two grounds and questions. I keep my positive score. &nbsp; The submission considers (split) conformal prediction for univariate data, with scores given by absolute values of the residuals, and with prediction regions given by intervals of the form $C(x) = [\hat{f}(x) - \hat{t}_\alpha, \hat{f}(x) + \hat{t}_\alpha]$ in the main case (Section 3), where $\hat{f}$ is learnt on the learning set and where the $\hat{t}_\alpha$ are learnt on the calibration set, as quantiles of the empirical distribution of scores. The coverage level of these $C(x)$ is approximately $1-\alpha$, under an exchangeability condition, as follows from a well-known analysis. The regression functions belong to some set $\mathcal{F}$. The focus of this article is to relate the length of $C(x)$ to the length obtained by using the ground-truth quantile of the underlying distributions (referred to as the ideal length) and picking the optimal element in $\mathcal{F}$ in this respect. The main results are the final problem statement in Section 3.2 and the upper bound on the interval length given by Theorem 3.7 in Section 3.4: under natural assumptions (regularity of the quantile function, bound on the complexity of $\mathcal{F}$), the length of the $C(x)$ is smaller than the ideal length plus two closed-form terms, both converging to 0 as, respectively, the sizes of the training and calibration sets increase (and where the term due to the learning set is larger; thus the learning set should feature more data points than the calibration set). These main results are stated for an ideal version of the procedure (with a computational bottleneck); but simulations report effective results for a computationally more tractable version of the procedure. Also, an extension (without theoretical results) to intervals with lengths depending on the features $x$ is considered.
Claims And Evidence: Yes, as far as the setting and the results of Sections 3-4-5 are concerned; see the detailed discussions below, both for theoretical claims and simulations. Methods And Evaluation Criteria: The setting considered makes sense. Theoretical Claims: I checked in details the proofs of Proposition 3.1 and Theorem 3.7 (located in Appendix A), and could spot no issue. Experimental Designs Or Analyses: I read Section 5, which looks to me like a decent set of experiments on artificial data. Supplementary Material: I read only Appendix A in details and had a quick overview of the rest of the Appendix. Relation To Broader Scientific Literature: The submission discusses well the broader literature, like the minimum volume estimation problem, typically dealt with through density estimation. Essential References Not Discussed: No issue spotted. Other Strengths And Weaknesses: The problem of interval-length-minimization is natural in (split) conformal prediction, yet, few results were available. This submission provides an elegant approach for this: the additional length due to calibration is handled through the DKW inequality (on empirical cdfs), while the additional length due to learning is bounded through the theory of suprema of empirical processes (hidden in Assumption 3.5). These classic tools of statistical learning are well used in the context of split conformal prediction. This is the first strength of the submission. The other strength is the clarity of the exposition and writing. In particular, it was a nice idea to discuss first the error due to calibration (in Section 3.3) and add later (in Sections 3.2 and 3.4) the error due to learning. Also, the final paragraph of Section 3.4, about the respective importances of the train and calibration sets, is an important take-home message. 
Other Comments Or Suggestions: Sections 1 and 2 could perhaps focus faster on the specific class of problems studied in Section 3; they take quite some space to discuss the problem in generality, but the problem is later not studied at this degree of generality. Also, results like the rewriting of the optimization problem as Equation (10) could fit earlier in the article. In particular, in Section 2.2, the optimization problem (4) is stated in great generality, whereas the optimization problem (5) corresponds to a specific (yet legitimate) choice of non-conformity score. This dependence should have been emphasized, as in practice, the choice of the non-conformity score usually has a larger impact on efficiency than the loss used in the training step.

Section 5.1: Could you detail what a 'robust linear regression with Huber loss' is, and why it helps for a fair comparison?

In Section 5, you take $n_{lrn} = n_{cal} = 1000$. Are these values large enough for the conditions in Theorem 3.7 to be met?

Typos
- 044:1 "miscoverage" instead of "coverage"
- 055:2 drop the "for any $x$"
- 403:2 "coverages" instead of "coverage"
- 699 "thanks to (20)"

Suggestions
- State Assumption 3.5 and Proposition 3.6 with $n_l$ instead of $n$?

Questions For Authors: Q1. The assumptions to obtain Theorem 3.7 are not clearly stated: are the training and the calibration data assumed to be i.i.d.?

Q2. I guess that unfortunately, there are no theoretical guarantees for the computationally more efficient procedure described in Section 3.3? This looks like a critical limitation, which my score takes into account (I would raise it if I misunderstood this).

Q3. Could you clarify the statement "theoretical analysis highlights how the complexity of the prediction function classes impacts the prediction interval's length"?
From my understanding the upper bound (13) given in Theorem 3.7 is only an upper bound, and can only give some insights, especially as both its first and third terms depend on the complexity of the prediction function classes. Q4. Did you consider the length minimization problem for another non-conformity score? If you did, do you know if similar results hold for signed non-conformity scores (such as the residuals), which seem more adapted to the asymmetric heavy-tailed distribution you considered in Section 5? Q5. I am surprised that the empirical coverages given in Figures 2-3-4 are, on average, slightly below the nominal coverage of 90\%. How did you implement the SCP procedure in your experiments? &nbsp; **Post-rebuttal edit:** The authors wrote convincing answers to my two grounds and questions. I keep my positive score. Code Of Conduct: Affirmed. Overall Recommendation: 4
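For concreteness, the constant-length split conformal procedure summarized in this review ($\hat{f}$ fit on a learning set, $\hat{t}_\alpha$ taken as an empirical quantile of calibration scores) can be sketched in a few lines. This is an illustrative simulation, not the authors' code; the linear model, noise level, and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Toy 1-D regression data: y = x + Gaussian noise.
    x = rng.uniform(-2.0, 2.0, n)
    return x, x + rng.normal(0.0, 0.3, n)

x_lrn, y_lrn = sample(1000)   # learning set
x_cal, y_cal = sample(1000)   # calibration set
x_tst, y_tst = sample(10000)  # test set

# Learning step: fit f-hat (slope + intercept) by least squares.
A = np.vstack([x_lrn, np.ones_like(x_lrn)]).T
coef, *_ = np.linalg.lstsq(A, y_lrn, rcond=None)
f_hat = lambda x: coef[0] * x + coef[1]

# Calibration step: t-hat is the ceil((n+1)(1-alpha))-th smallest
# absolute residual, so that C(x) = [f_hat(x) - t_hat, f_hat(x) + t_hat]
# covers y with probability at least 1 - alpha under exchangeability.
alpha = 0.1
scores = np.sort(np.abs(y_cal - f_hat(x_cal)))
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
t_hat = scores[k - 1]

coverage = np.mean(np.abs(y_tst - f_hat(x_tst)) <= t_hat)
print(f"half-length t_hat = {t_hat:.3f}, empirical coverage = {coverage:.3f}")
```

With Gaussian noise of standard deviation 0.3, the oracle half-length is roughly the 90% quantile of $|\mathcal{N}(0, 0.3^2)|$, about 0.49; the length minimization studied in the paper asks how close the learned interval can get to this ideal length.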
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback. We are especially grateful for their recognition of the clarity and novelty of our contribution, particularly our use of statistical learning theory in the context of split conformal prediction. We will incorporate the suggested writing improvements in the final version of the manuscript. Below, we address the reviewer’s main questions and remarks. **Questions** - *Q1. The assumptions to obtain Theorem 3.7 are not clearly stated: are the training and the calibration data assumed to be i.i.d.?* Yes, our theoretical analysis, including Theorem 3.7, assumes that both the training and calibration data are i.i.d. We will explicitly state this assumption in the manuscript. - *Q2. I guess that unfortunately, there are no theoretical guarantees for the computationally more efficient procedure described in Section 3.3? This looks like a critical limitation, which my score takes into account (I would raise it if I misunderstood this).* The reviewer is correct—our theoretical guarantees currently apply only to the idealized procedure, and we do not yet have formal guarantees for the computationally efficient variant. We acknowledge this as a limitation and appreciate the reviewer bringing it to our attention. We will explicitly discuss this in the manuscript and highlight it as an important direction for future theoretical research. - *Q3. Could you clarify the statement "theoretical analysis highlights how the complexity of the prediction function classes impacts the prediction interval's length"? From my understanding the upper bound (13) given in Theorem 3.7 is only an upper bound, and can only give some insights, especially as both its first and third terms depend on the complexity of the prediction function classes.* The reviewer is absolutely right that Theorem 3.7 provides an upper bound rather than a precise characterization of the true interval length. 
Our goal was to emphasize that this bound offers insights into how the complexity of the function class influences the gap between the constructed interval length, $\lambda(C_{\hat{f},\hat{t}}^{1-\alpha})$, and the optimal one $\lambda(C_{f^*,t^*}^{1-\alpha})$. Specifically, as the complexity of the function class increases, more training data is required for the bound on this difference to converge. - *Q4. Did you consider the length minimization problem for another non-conformity score? If you did, do you know if similar results hold for signed non-conformity scores (such as the residuals), which seem more adapted to the asymmetric heavy-tailed distribution you considered in Section 5?* To clarify, our framework does not explicitly rely on a chosen non-conformity score. Instead, the choice of score function is implicitly determined by the class of prediction sets, $\mathcal{C}$, that we consider—namely, intervals of constant (Section 3) or adaptive (Section 4) lengths. Conversely, in standard split conformal prediction, one can view the choice of non-conformity score as implicitly defining a class of prediction sets. In Section 3, the scores used in the calibration step correspond to absolute residuals $|y-f(x)|$, whereas in Section 4, they take the form $|y-f(x)| - s(x)$, which are signed scores. To answer the reviewer’s question, since we have not considered alternative classes of prediction sets, we have not explored other types of score functions. However, as emphasized in our conclusion, extending our framework to other classes of prediction sets (and therefore score functions) is an important direction for future work. - *Q5. I am surprised that the empirical coverages given in Figures 2-3-4 are, on average, slightly below the nominal coverage of 90%. How did you implement the SCP procedure in your experiments?* We appreciate the reviewer pointing out this observation. 
After re-running our experiments, we found that the slight undercoverage was due to the limited number of repetitions in our empirical evaluation—averaging over 50 trials was not sufficient to smooth out random fluctuations. Indeed, due to the inherent stochasticity of different random datasets, finite-sample variability can cause slight deviations from the coverage. We will clarify this in the appendix. --- Rebuttal Comment 1.1: Comment: **RE Q4:** I'm unsure if I'm getting your point. The class of prediction sets you consider in equation (5) line 171 does not include all the constant-length intervals and is, I believe, limited to using the absolute residuals. My question was about considering other intervals of constant lengths, produced by considering the quantiles $\alpha/2$ and $1-\alpha/2$ of the calibration residuals as the lower and upper bounds of the intervals (see for example Linusson et al. [2014], Signed-error conformal regression). In this configuration, the length of the interval is no longer $2t$, and I wonder if you think that a similar analysis can be derived in this case. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for the clarification and for raising this interesting question. If we understand correctly, the suggestion is to consider an extension of our framework to a broader class of set-valued functions, parameterized by $a \leq b \in \mathbb{R}$ and $f \in \mathcal{F}$, such that: $C_{f,a,b}: x \mapsto [f(x) + a, f(x) + b]$, with a fixed length $b - a$. This class notably includes the prediction sets proposed in Linusson et al. [2014]. Unfortunately, **when $f$ is fixed**, deriving a closed-form expression for the optimal values of $a$ and $b$ becomes challenging. One of the main difficulties lies in the fact that this family is not nested (as discussed in Appendix B.1) and that optimal values of $a$ and $b$ depend on each other, which prevents us from expressing them directly as a function of the score distribution. 
Note also that while choosing $a$ and $b$ as the $\alpha/2$ and $1 - \alpha/2$ quantiles of the scores (see Linusson et al. [2014]) would ensure valid coverage, such a choice does not necessarily minimize the length of the prediction interval, especially when the distribution of the scores is asymmetric. That said, an important observation is that **when $f$ is not fixed**, the above class of set-valued functions actually coincides with the class studied in our paper (denoted $\mathcal{C}^{\text{const}}_\mathcal{F}$), up to a mild assumption on the function class $\mathcal{F}$. Specifically, if $\mathcal{F}$ is stable under scalar translation (i.e., for all $f \in \mathcal{F}$, $f + k \in \mathcal{F}$ for any $k \in \mathbb{R}$—a condition satisfied by most standard models that include an intercept term), then the set $C$ parametrized by $a,b,f$ belongs to the class considered in our work. To see this more concretely, one can take $f' = f + (a + b)/2$ and $t = (b - a)/2$. This recasts the interval $[f(x) + a, f(x) + b]$ as $[f'(x) - t, f'(x) + t]$, which is exactly the form we analyze in our work. As a result, our theoretical analysis and results continue to apply in this setting.
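To make the recasting at the end of this reply concrete, here is a minimal numerical check (my own illustrative code, not from the paper; the linear predictor and the offsets $a, b$ are arbitrary choices):

```python
import numpy as np

# Check that [f(x) + a, f(x) + b] coincides with [f'(x) - t, f'(x) + t]
# for f' = f + (a + b) / 2 and t = (b - a) / 2, as claimed above.
rng = np.random.default_rng(0)
a, b = -1.3, 0.7                          # asymmetric offsets, a <= b
f = lambda x: 2.0 * x + 1.0               # an arbitrary fixed predictor
x = rng.normal(size=5)

lo1, hi1 = f(x) + a, f(x) + b             # interval of fixed length b - a
t = (b - a) / 2
f_shifted = lambda x: f(x) + (a + b) / 2  # translated predictor f'
lo2, hi2 = f_shifted(x) - t, f_shifted(x) + t

assert np.allclose(lo1, lo2) and np.allclose(hi1, hi2)
```

The check only requires that the function class be stable under scalar translation, as stated above, so that $f'$ remains in $\mathcal{F}$.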
Summary: Conformal prediction is a framework to construct label sets such that the marginal probability of coverage is guaranteed to be above a desired level $(1 - \alpha) \in (0, 1)$. This paper studies the conformal label intervals (one contiguous set) for unidimensional regression problems. The motivation is to minimize the marginal interval width (inefficiency) while maintaining the conformal coverage guarantee. In doing so, the paper first considers fixed-width label intervals. Under this setting, the optimal regressor that predicts the center of the label interval is the one that minimizes the $(1 - \alpha)$-th quantile of the residual distribution. The paper proposes EffOrt, a regressor trained to minimize an empirical approximation of the same. Additionally, the paper derives a bound on the difference in interval widths between the proposed method and an oracle, where the bound depends on the number of training and calibration data and the complexity of the function class. The paper also proposes to learn to predict the interval widths to allow adaptivity. The optimal regressor is the one that minimizes the $(1 - \alpha)$-th quantile of the conditional residual distribution. The paper proposes Ad-EffOrt, a conditional quantile regressor atop EffOrt, to minimize an empirical approximation of the same. Claims And Evidence: Yes, the claims are supported. Methods And Evaluation Criteria: Yes, they make sense. However, an additional baseline would be the methods by Stutz et al. [2022] and Kiyani et al. [2024]. They minimize the marginal label set size (an empirical approximation of it) with approximations and derivations similar to the ones in this paper. For instance, approximating the indicator function, minimizing the $(1 - \alpha)$-th quantile of the residual distribution, etc. Theoretical Claims: 1. I checked the proofs for Proposition 3.1, Corollary 3.3, and Theorem 3.7. 2. For Theorem 3.7, what does the distribution of $Y$ being atomless imply? 
Why is that required? 3. Is there a proof for Proposition 3.6? Experimental Designs Or Analyses: Yes, the experimental design is sound. However, during the analysis, the paper highlights that Ad-EffOrt constructs label sets with more consistent interval widths (compared to the other baselines). This property is not necessarily beneficial because varied interval widths can signify adaptivity and help achieve (approximate) conditional coverage. Supplementary Material: I reviewed Appendices A, C, and D. Relation To Broader Scientific Literature: The analysis and techniques used are very similar to existing works, notably Romano et al. [2019] and Stutz et al. [2022]. Because of this, the contribution of this paper feels limited to Theorem 3.7 only. Essential References Not Discussed: There are works on the marginal size of the conformal label sets, termed inefficiency. Most show that conformal inefficiency asymptotically converges to that of an oracle under different settings: unsupervised learning [Lei et al., 2013, 2015], regression [Lei and Wasserman, 2013], binary classification [Lei, 2014], and multi-class classification [Sadinle et al., 2019]. Similarly, Vovk et al. [2014, 2016] and Sadinle et al. [2019] provide results under per-class/label coverage. Additionally, Dhillon et al. [2024] quantify conformal inefficiency in the finite-sample setting. The paper includes some but not all references. References G. S. Dhillon, G. Deligiannidis, and T. Rainforth. On the expected size of conformal prediction sets. In S. Dasgupta, S. Mandt, and Y. Li, editors, Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, volume 238 of Proceedings of Machine Learning Research, pages 1549–1557. PMLR, 02–04 May 2024. J. Lei. Classification with confidence. Biometrika, 101(4):755–769, 10 2014. J. Lei and L. Wasserman. Distribution-free prediction bands for non-parametric regression. 
Journal of the Royal Statistical Society Series B: Statistical Methodology, 76(1):71–96, 07 2013. J. Lei, J. Robins, and L. Wasserman. Distribution-free prediction sets. Journal of the American Statistical Association, 108(501):278–287, 2013. J. Lei, A. Rinaldo, and L. Wasserman. A conformal prediction approach to explore functional data. Annals of Mathematics and Artificial Intelligence, 74(1):29–43, Jun 2015. M. Sadinle, J. Lei, and L. Wasserman. Least ambiguous set-valued classifiers with bounded error levels. Journal of the American Statistical Association, 114(525):223–234, 2019. V. Vovk, I. Petej, and V. Fedorova. From conformal to probabilistic prediction. In L. Iliadis, I. Maglogiannis, H. Papadopoulos, S. Sioutas, and C. Makris, editors, Artificial Intelligence Applications and Innovations, pages 221–230, Berlin, Heidelberg, 2014. V. Vovk, V. Fedorova, I. Nouretdinov, and A. Gammerman. Criteria of efficiency for conformal prediction. In A. Gammerman, Z. Luo, J. Vega, and V. Vovk, editors, Conformal and Probabilistic Prediction with Applications, pages 23–39, Cham, 2016. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to follow. The ideas progress well, making the function class progressively more flexible. 2. The area of research is well-motivated for practical impact. Weaknesses: 1. The analysis and techniques used are very similar to existing works. Conformal quantile regression [Romano et al., 2019] also uses conditional quantile predictors. The method by Stutz et al. [2022] makes similar approximations to define an optimization procedure. Other Comments Or Suggestions: 1. Including the experimental results on real-world data in the main paper, over the synthetic ones, will help. 2. Including a brief description of approximating the indicator function in the main paper will help. 3. The term "the space of research" is used many times but does not sound correct. The "search space" or "function class" might be better. 
Typos: 1. "This would allows deriving..." $\rightarrow$ "This would allow deriving..." (line 364, column 1). 2. "...while $s$ in learned in order..." $\rightarrow$ "...while $s$ is learned in order..." (line 368-369, column 2). 3. "...both empirically showed to be..." $\rightarrow$ "...both empirically shown to be..." (line 422-423, column 2) Questions For Authors: 1. What are the differences between the analysis done in this paper and that of Romano et al. [2019] to develop conformal quantile regression? The paper states "...although CQR gives similar results to our method in some situations [...], it has the drawback to not assess the uncertainty of a particular prediction model $\hat{f}$." (lines 411-414, column 2). Why is this a drawback? Arguably, learning end-to-end adds benefits. 2. What are the differences between the algorithmic choices in this paper and that of Stutz et al. [2022]? 3. How is Corollary 3.3 different from other bounds like Lei and Wasserman [2013]? 4. In Proposition 3.1, what happens when $(n_{c} + 1)(1 - \alpha)$ is an integer? Will the same proof not extend to this setting? 5. Is it possible to experimentally validate the theoretical results? 6. Isn't splitting the data across steps 1 and 2 of Ad-EffOrt better to avoid overfitting? References J. Lei and L. Wasserman. Distribution-free prediction bands for non-parametric regression. Journal of the Royal Statistical Society Series B: Statistical Methodology, 76(1):71–96, 07 2013. Code Of Conduct: Affirmed. Overall Recommendation: 3
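For reference when weighing Questions 4-6, a minimal split-conformal-prediction sketch with absolute-residual scores (my own illustrative code on synthetic data, not the authors'); the calibration threshold is the $\lceil (n_c + 1)(1 - \alpha) \rceil$-th smallest score, i.e. the quantity discussed in Question 4:

```python
import numpy as np

# Hedged sketch of split conformal prediction with absolute residual
# scores; the model and data choices below are illustrative only.
rng = np.random.default_rng(0)
n_tr, n_cal, alpha = 200, 200, 0.1

x = rng.uniform(-1, 1, size=n_tr + n_cal)
y = 2.0 * x + rng.normal(scale=0.3, size=x.size)
x_tr, y_tr, x_cal, y_cal = x[:n_tr], y[:n_tr], x[n_tr:], y[n_tr:]

# Step 1: fit any predictor f on the training split (least squares here).
f = np.poly1d(np.polyfit(x_tr, y_tr, deg=1))

# Step 2: calibrate t as the ceil((n_cal + 1) * (1 - alpha))-th smallest
# absolute residual; the interval at x is then [f(x) - t, f(x) + t].
scores = np.sort(np.abs(y_cal - f(x_cal)))
k = int(np.ceil((n_cal + 1) * (1 - alpha)))  # (n_cal + 1)(1 - alpha) = 180.9 here, so k = 181
t = scores[k - 1]

x_new = 0.5
print(f"interval at x=0.5: [{f(x_new) - t:.2f}, {f(x_new) + t:.2f}]")
```

When $(n_c + 1)(1 - \alpha)$ is an integer, the ceiling no longer rounds up and the edge-case handling in the proof is what the question probes.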
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback and insightful questions. Below, we address each point in detail. We also appreciate the suggestions regarding our real-world data experiments and the approximation of the indicator function—these will be included in the final version using the extra page. **Questions:** - *For Theorem 3.7, what does the distribution of $Y$ being atomless imply? Why is that required? In Proposition 3.1, what happens when $(n_c+1)(1-\alpha)$ is an integer? Will the same proof not extend to this setting?* We address both questions together, as they are closely related. In both Theorem 3.7 and Proposition 3.1, at least one of the two conditions must hold. This requirement stems from a technical detail in our proofs, where we rely on an inversion property of the quantile function with the cumulative distribution function (e.g., line 583). This inversion does not behave well when the distribution has atoms, requiring careful handling. In the context of regression, assuming that $Y$ is atomless is a reasonable and often natural assumption. - *What are the differences between the analysis done in this paper and that of Romano et al. [2019] to develop conformal quantile regression?* There seems to be a misunderstanding. In Romano et al. [2019], the authors construct prediction intervals by performing quantile regression of $Y$ given $X$ at levels $\alpha/2$ and $1-\alpha/2$, without learning a scalar prediction function. In contrast, our approach first learns a prediction function $f$ by minimizing the empirical $(1-\alpha)$-QAE, and then (for Ad-EffOrt) estimates the quantile function of the residuals $|Y-f(X)|$ given $X$ to construct the prediction interval. While both methods involve quantile regression, their objectives differ. Additionally, Romano et al. [2019] do not theoretically analyze the size of the resulting prediction sets. 
- *What are the differences between the algorithmic choices in this paper and that of [Stutz et al. [2022]](https://arxiv.org/pdf/2110.09192)?* Stutz et al. [2022] propose a method to incorporate conformalization directly into the learning phase by minimizing inefficiency through a differentiable loss. Key differences with our work include: 1. Their focus is on classification, while we consider regression with residual scores. 2. Our method directly optimizes the $(1-\alpha)$-quantile of the scores, whereas they define and minimize an efficiency loss, necessitating an additional data split. Additionally, we use a different technique for the smoothing of the quantile function. 3. Their approach requires a more complex algorithm, making theoretical analysis of prediction set sizes more challenging, whereas we provide explicit theoretical results on set sizes. Overall, while both approaches share conceptual similarities (e.g., smooth quantile computation), the methodologies differ significantly. - *How is Corollary 3.3 different from other bounds like Lei and Wasserman [2013]?* Could the reviewer be more precise by pointing to the specific equation in Lei and Wasserman [2013] that he/she refers to? After a careful check, it seems that this paper contains mostly asymptotic results, holding for their specific KDE-based estimator, but we might have missed something. - *Is it possible to experimentally validate the theoretical results?* We made additional experiments to validate the theoretical result of Prop. 3.1. In the two following figures: https://ibb.co/G4MPB244, https://ibb.co/MDJDG33J, we illustrate the bound when $\delta=.2$ and $\alpha=0.5$. The first figure displays the evolution of one realisation of $\lambda(\hat{C})$ (in blue) and the rhs term of Eq. (7) (denoted $2Q$, in orange) with respect to the number of calibration points $n_c$. The second figure displays a histogram of the distribution of $\lambda(\hat{C})$ when $n_c=200$. 
The red line corresponds to $2Q$, the rhs term of Eq. (7). - *Isn't splitting the data across steps 1 and 2 of Ad-EffOrt better to avoid overfitting?* This is a valid point. Splitting the data could improve generalization and reduce the prediction interval size. Theoretically, this could also facilitate deriving a bound similar to Theorem 3.7. However, even if overfitting occurs during training, the calibration step naturally corrects it by increasing the size of the prediction intervals. - *Is there a proof for Proposition 3.6?* Yes, the proof is provided in Appendix B.2. **Other Related Work:** We appreciate the additional references. As our work focuses on regression, we had not included classification-specific papers; however, we will incorporate them in the final version. Most of the other cited works are already referenced in our paper, except for Dhillon et al. [2024], for which we refer the reviewer to our response to Reviewer d77L.
Global curvature for second-order optimization of neural networks
Accept (poster)
Summary: The submission studies the structure of the "global curvature" of deep networks. The main result is that global matrix quantities such as the gradient covariance and the Hessian (where "global" means the expected values of those matrices under some distribution on the weights) have a specific matrix structure with much fewer free parameters. As a potential application, the submission proposes an optimizer based on this structure and shows it outperforms SGD/Adam on a two-layer teacher/student problem. ## update after rebuttal The response has addressed my main issues from the `Claims And Evidence` section below. I was worried that the phrasing in the submission was over-claiming the benefit of "global curvature" and the proposed estimator. The proposed wording and additional evidence for the quality of the claimed estimator would clarify that the main benefit is in the structure of the curvature estimator, which is a more appropriate claim given the supporting experiments. With these changes, I think the paper should be accepted. Claims And Evidence: The main claim of the submission, that the expected gradient covariance has the claimed structure, is well supported. The visualizations provided are convincing up to minor comments (see below). What's less clear is why this global quantity matters, whether for the gradient covariance or Hessian. The submission seems to take for granted that "global" is better than "local" in its introduction, and the only argument in favor of the global approach is that an algorithm is proposed that uses an estimator inspired by the global curvature structure. > We demonstrate the effectiveness of our approach by running exact second-order optimization on a two-layer MLP and synthetic data This appears to assume that the optimizer works better because it estimates the global curvature. I strongly disagree, as the evaluation of the proposed estimator is insufficient and potentially misleading (details in Methods And Evaluation Criteria). 
The one potential application I can see for the theory in the global curvature is in enabling efficient full-Gaussian posterior approximation in Bayesian inference at a reduced cost. This is mentioned tangentially in the discussion section, but should be expanded upon to make the link obvious. --- Please change the following sentence > We demonstrate the effectiveness of our approach by running exact second-order optimization The preconditioner does not use any second-order information and is instead an "approximation" of the global covariance matrix of the gradients. Methods And Evaluation Criteria: ### Evaluation of the estimator My issue is not that the optimization results are not impressive enough. I am not denying that the proposed preconditioner (using a structured approximation given the current gradient) could be a great optimization method. But whether it does appears disconnected from the global curvature discussion at the start of the paper. The paper claims to test the estimation error of the proposed estimator, > Appendix J provides additional details, along with an empirical analysis of the estimation error (see Figure 9) However, Appendix J and Figure 9 do not test whether the estimator is an appropriate estimator of the global curvature. Instead, they test that it produces consistent results. Those could be close, far, or completely unrelated to the global curvature. The paper also asserts > We expect the error in the estimation of hyperparameters to decrease with layer size (d1 in this case). I did not find evidence or mechanism that would explain this. The estimator claims to be able to estimate the global covariance from a single sample. This is very suspicious for any claim of globality. Taken to its extreme, the claims in the paper do not seem that far from the claim that "in $d$ dimensions, if $x \sim \mathcal{N}(0, I)$, then $\mathrm{diag}(xx^\top)$ is a good estimator of $I$, especially as $d \to \infty$." 
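The analogy in that last sentence is easy to check numerically; the sketch below (my own illustration, not the submission's code) shows that while $\mathrm{diag}(xx^\top)$ is unbiased for $I$, its per-entry error does not shrink as $d$ grows:

```python
import numpy as np

# For x ~ N(0, I_d), diag(x x^T) has the correct expectation (all ones),
# but each entry x_i^2 is chi-squared(1) with variance 2, so the
# per-entry error stays O(1) no matter how large d gets.
rng = np.random.default_rng(0)
for d in (10, 100, 10_000):
    x = rng.normal(size=d)
    est = x**2                               # diag(x x^T)
    rmse = np.sqrt(np.mean((est - 1.0) ** 2))
    print(f"d={d:6d}  per-entry RMSE={rmse:.2f}")  # hovers near sqrt(2)
```

A single sample pins down the mean of the diagonal ever more precisely as $d$ grows, but not the individual entries, which is the distinction at issue in the globality claim.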
Again; my issue is not with the proposed optimizer or whether it works, but on attributing the reason for its performance to the global curvature. The current paper would be improved if the optimizer was instead introduced along the following lines, to make it clear that the connection is tenuous: "We observe that the global curvature is very structured and sparse. It is possible that this structure captures the most important variations in the covariance/Hessian. Therefore we propose to use this structure as a preconditioner for our optimizer instead of the traditional diagonal ones. It seems to work well." I would raise my score if the text was changed to explicitly acknowledge this disconnect, and state that - the "globality" story is an inspiration, but not a justification, for the good performance of the algorithm - the good performance of the algorithm, being neither exact nor second-order, does not say much about the validity of the "global" perspective in optimization ### Evaluation of the structure > We highlight that theory predicts the overall structure, rather than the specific numerical values. As that is the case, the visualizations could be improved as they emphasize the numerical value. This could be done by fitting the factors of the theoretical structured matrix to the observed data to show agreement on the numerical values. ### Evaluation of optimizers > For SymO, we choose an exponentially decreasing learning rate The evaluation of the optimizer claims to use a decreasing step-size for the proposed optimizer. It is not specified whether the other optimizers (SGD, Adam) use a decreasing learning rate or another schedule. All algorithms should use the same regime; either a tuned step-size schedule or a constant step-size. Theoretical Claims: I have not checked the proofs in details. Experimental Designs Or Analyses: No strong issue in the experiment design beyond the evaluation issues listed above. 
Supplementary Material: Appendices E, F, G, J Relation To Broader Scientific Literature: The paper could be made stronger by discussing the relationship to the Bayesian variational inference literature in more details. This community is likely to appreciate the structure induced by the invariance, because VI maintains an estimate of the posterior distribution in the form of a mean and covariance, which are used to sample weights at every step to compute gradients and update the mean and covariance. The standard approaches use a diagonal approximation to the covariance matrix (eg [Blundell et al., Weight Uncertainty in Neural Networks](https://arxiv.org/pdf/1505.05424)) but more expressive families are sometimes used ([e.g. Lin et al., Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations](https://arxiv.org/abs/1906.02914)) Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: The study of the impact of the invariance of the network on the gradient and curvature matrices appears novel, and the approximations obtained appear impressive. However, I fail to see why this "global" curvature, that enables such a simplification, helps in the context of optimization which typically relies on local quantities which need not share this structure. This disconnect between the main part of the paper and its tentative application in §3.4 onwards (as detailed above) is my main issue with the current submission. Other Comments Or Suggestions: ### Additional details Some explanation, even if high-level, for the following phenomena would help the reader follow the paper. - §2.2: Why do ReLU activations satisfy fewer invariances than odd activation functions? - §3: Why does the addition of an extra hidden layer reduce the inter-layer correlations? 
### Word choices - Please change the following sentence > We demonstrate the effectiveness of our approach by running exact second-order optimization The preconditioner does not use any second-order information and is instead an "approximation" of the global covariance matrix of the gradients. - The probability distribution used to sample the relevant quantities in the definition of the "global curvature" is not explicitly given. My understanding is that it is out of a desire to remain general as the theory only requires invariance conditions, but having a running example (for example from variational inference) would help the reader. - The use of the term "curvature matrix" seems heavily overloaded, as for example in §2.1 it has to encompass the Fisher, the Hessian and the covariance of the gradients. > Various studies have employed different formulations for the curvature matrix, including the Fisher information matrix, the Gauss-Newton matrix, the Hessian matrix, and the gradient covariance matrix. The relationship between the Hessian and the gradient covariance is unclear, and as a result "curvature matrix" does not seem to imply more than "a matrix that can be used as a preconditioner". If "preconditioner" was the intended meaning, I would encourage its usage instead. - The submission uses the expression "hyperparameters of the curvature" (and "parameters" in §1). I initially did not understand what it meant, as to me a "hyperparameter" describes a user-specified parameter controlling the behaviour of an algorithm. A more appropriate term could be the "factors", as theorem 1 shows that the expected matrix obeys a specific factorization. ### Typos In Lemma 2.4, I assume $f$ should be a matrix analytic function of $\Sigma$? $f(\Sigma) = v^\top \Sigma v$ appears analytic but Eq. 15 does not make sense. Questions For Authors: Did I misunderstand the evaluation of the estimator? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments that will improve our paper. - *Why global curvature*. We agree on the lack of clarity for why global curvature should be used as a preconditioner. We will revise as follows: Introduction: “The primary motivation for considering global curvature is its efficient computation, which serves as the main contribution of this work. Future research will explore whether global curvature serves as a reliable approximation of local curvature and whether it offers inherent advantages for enhancing convergence in second-order optimization.” Section 2.1: “We operate under the assumption that global curvature captures meaningful variations in local curvature for optimization purposes.” Discussion: “This work provides preliminary evidence suggesting that global curvature enhances convergence when applied as a preconditioner. During optimization, the distribution of parameters collapses into a set of local solutions, therefore our method is expected to better approximate the local curvature towards the end of training, similar to other methods (e.g. Natural Gradient, Gauss-Newton). A comprehensive analysis of the errors introduced by our approximation will be a subject of future investigation.” - *Bayesian inference*. We agree that our method could be valuable for Gaussian posterior approximation in Bayesian inference. We will revise the Discussion: “Bayesian deep learning relies on approximating the posterior over parameters by a Gaussian distribution. The covariance of this distribution is usually approximated using a diagonal or block-diagonal structure. Our work offers a method for efficiently computing the full covariance, which may lead to more accurate Bayesian posterior estimates.” - *Use of the word “second-order”*. 
While “second-order” traditionally refers to the computation of second-order derivatives, several methods classified as second-order use only first-order derivatives, such as Natural Gradient (e.g. KFAC), Gauss-Newton, and Shampoo. Our approach can be applied to any of those methods. Furthermore, the gradient covariance provides second-order moments of the distribution, therefore we believe that it may be also called “second-order”. However, we are open to reconsidering the terminology if the reviewer strongly feels that it constitutes a misuse. - *Estimation error of the global curvature and dependence on layer size*. We will replace Fig 9 with a plot of the correlation of the single-model estimate with the average over a large sample of models (N=10,000), and how that depends on layer size d1. As the reviewer suggested, we do not observe a simple increase of correlation with layer size d1. We observe the following numbers (d0, d1, d2 are the layer sizes):

|Layer sizes|Correlation|
|-|-|
|(100, 10, 100)|0.66 $\pm$ 0.05|
|(100, 100, 100)|0.75 $\pm$ 0.03|
|(100, 1000, 100)|0.54 $\pm$ 0.03|
|(100, 10000, 100)|0.56 $\pm$ 0.03|

- *Theory predicts the overall structure only*. In Fig. 2, 3, 4, we wanted to highlight that, while panels A and B differ significantly, both follow the same structure. Furthermore, the theory of Section 3 remains valid regardless of the specific values of the unknown factors. - *Decreasing step size*. We optimize two hyperparameters for all optimization methods except GD (in addition to Adam and SymO, we added Shampoo and KFAC, see answers to other reviewers), ensuring a fair comparison. The choice of an exponentially decreasing learning rate for SymO is based on a theoretical analysis of a quadratic loss function, which we will include in the Appendix. 
- *Why do ReLU activations satisfy fewer invariances?* The intuitive reason is that when the symmetry group is smaller, the constraints satisfied by the covariance are fewer, and therefore the covariance has more degrees of freedom. We will report the group sizes in Section 2.2. - *Why does an extra hidden layer reduce the inter-layer correlations?* Odd activations introduce an invariance with respect to sign changes in both the incoming and outgoing weights of a neuron. We speculate that this invariance may lead to cancellations in the correlations of these weights. Further studies are needed to answer this question. - *What is the probability distribution?* We agree that we do not know the distribution of the gradients, other than its symmetries and the structure of its first two moments. We will add an analytic study of a quadratic loss to the Appendix, for which the parameter and gradient distributions remain Gaussian. - *Use of the word “curvature”*. Curvature is frequently linked to Fisher Information and Gauss-Newton methods, which do not directly compute the Hessian. The gradient covariance is essentially equivalent to the “empirical” Fisher Information matrix, which is also considered a form of curvature. Nevertheless, we are open to changing the terminology if the reviewer believes it is a misuse. - *Use of the word “hyperparameter”*. We agree and we will replace it with “factors”. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I appreciate the check of the large width claim, and the response addresses most of my concern. The proposed modifications would make for a stronger submission. Below are some remaining issues which mostly center around word use. **Why global curvature** These explanations would definitely help, although I am still somewhat uneasy about the phrasing of "the global curvature", since what is being used is the structure of the global curvature, which is imposed on the local curvature to enable efficient estimation. 
The proposed method does not compute any global quantity, which still seems implied by the above phrasing ("global curvature captures meaningful variations in local curvature", "global curvature enhances convergence when applied as a preconditioner"). I would recommend the following edits to emphasize that the proposal is merely to use the factorization derived from the global quantity to approximate a local quantity. > Section 2.1: “We operate under the assumption that global curvature captures meaningful variations in local curvature for optimization purposes. "We operate under the assumption that the structure present in the global curvature is also present in the local curvature and captures meaningful variations for optimization purposes" > “This work provides preliminary evidence suggesting that global curvature enhances convergence when applied as a preconditioner" "This work provides preliminary evidence suggesting that the structure of the global curvature enhances convergence when applied as a preconditioner using local quantities" **Use of the word "second-order"** I grant that some section of the community uses "second-order" to talk about method that do not use second-order information, and even for method which have barely any connection to second-order methods. However, my issue here was more with the wording of "exact second-order", which I would understand to mean the exact computation of the Hessian. Computing the covariance might give some approximation of the second-order information under some specific assumptions, but not the exact one. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to provide additional comments. We believe that we understand your point. You are saying that the theory developed in Section 3 predicts only the structure of the global curvature, and estimating the unknown factors by a single model does not necessarily imply that we compute any global quantity. 
Therefore, we cannot claim that we compute any global quantity. We agree with this point. Even if we have shown that some of the factors estimated by a single model have nearly perfect alignment with the global ones (please see the latest answer to reviewer bCpH), that does not hold for other factors, and more work is needed to understand how much information about the global curvature we can obtain with a single model. Therefore we will include the changes suggested by the reviewer in the final version of the paper; thank you for pointing those out. We also agree regarding the use of the phrase "exact second-order". We should not claim that we compute the curvature exactly, because we compute neither the Hessian nor its factors exactly; we only compute its structure. Therefore, for all instances of the word "exact" in the paper, we will either remove it or point out that "exact" refers to the structure only, and that we do not compute any second-order information exactly. We hope that the reviewer now agrees that our work deserves publication, thank you.
Summary: The work attempts to improve the computations of second-order methods by analyzing the covariance matrix of the gradients in small MLP networks. The authors rely on certain symmetries expected to hold in network parameters and derive theory on the structure, as well as explicit solutions for the covariance matrix. They perform minor, rather toy-like experiments, where their predictions approximately coincide with the structure obtained on average (over 10K-100K models). They show on this specific toy example that acceleration is obtained, compared to standard GD and Adam optimizers. The experiments are not convincing enough, and the setting is highly restrictive: "the input is sampled from a Gaussian distribution with zero mean. The covariance matrix of the input is generated using random orthogonal eigenvectors". It is very hard to conclude whether this behavior is actually general and portrays well large NNs and true data. They perform experiments on 3-layer MLPs with ReLU, which can already perform quite well on real (easy) data, such as MNIST. Unfortunately, only a toy example is shown. Thus, although the theory is of some interest, I cannot recommend publication of the work in this form.

Claims And Evidence: The claim is that the covariance matrix structure can be predicted, based on certain expected symmetries of the parameters and properties of the input data.

Methods And Evaluation Criteria: Evaluation is of low quality, only a very simple toy example assuming Gaussian-distributed input.

Theoretical Claims: Theory on the covariance matrix of the gradients. Appears fine.

Experimental Designs Or Analyses: Very scarce, no real analysis. The authors do not show instances of a certain model, only an average over 10K or 100K models, and do not show the variance of their prediction.

Supplementary Material: ok

Relation To Broader Scientific Literature: little.

Essential References Not Discussed: refs ok

Other Strengths And Weaknesses: Strength: the theory and general idea may have merit.
Weaknesses:
* Many assumptions are needed to derive the results.
* Assumptions are not validated on real examples.
* Experiments are not realistic.
* No variance of the results is shown.
* For the optimization loss (Fig 5), the authors do not show other 2nd-order methods, only GD and Adam. This toy example perhaps works best for 2nd-order methods (as it involves Gaussian inputs). It is not clear whether the proposed solution is faster or whether 2nd-order methods are simply better here.
* The experiments are performed with the full gradient of the entire training set; how does SGD work? Can the proposed algorithm work well with mini-batches?

Other Comments Or Suggestions: see above

Questions For Authors: Please explain in more detail whether additional experiments were conducted that I missed, or justify why the chosen setting is reasonable to reflect real-world DNNs and data.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We thank the reviewer for the comments, which will help improve our work. We would like to highlight that the main contribution of our work is theoretical. As acknowledged by the other reviewers, the theory presented in our work is novel and could be of broad interest to other fields of machine learning. We hope that we can convince the reviewer of the value and novelty of our theory. Specific concerns:

- *Comparison with other second-order methods*. We agree with the reviewer that comparison with other second-order methods is appropriate. We implemented KFAC and Shampoo. In KFAC, we optimised learning rate and damping. In Shampoo, we optimised learning rate and epsilon. The following table shows test loss values for a few time steps in the 2-layer MLP with Tanh case shown in the paper. We will include the full plot in the final version of the paper.

| time | GD | Adam | KFAC | Shampoo | SymO |
|------|------|------|------|---------|------|
| t=50 | 0.0378 | 0.0255 | 0.0091 | 0.0071 | 0.0058 |
| t=100 | 0.0232 | 0.0116 | 0.0074 | 0.0060 | 0.0056 |
| t=150 | 0.0177 | 0.0086 | 0.0067 | 0.0057 | 0.0056 |
| t=200 | 0.0147 | 0.0072 | 0.0063 | 0.0056 | 0.0056 |

We believe that SymO has better results because it goes beyond block-diagonal preconditioning, and also considers interactions between the two layers. However, the performance of KFAC and Shampoo is very similar to SymO.

- *Relation to broader scientific literature*. The reviewer thinks that our work bears little relation to broader scientific literature, which is the opposite of what was stated by the other reviewers, who recognised important connections of our work with other fields of machine learning. We kindly ask the reviewer to specify which references are missing in our paper.

- *How the method works with mini-batches*. We agree with the reviewer that this is an important question.
Typically, second-order methods work better with large batches, while first-order methods sometimes work better with small batches. However, the advantage of first-order methods on small batches concerns the generalisation performance of the optimum. Instead, here we are concerned with the speed of convergence, for which no advantage of small batches has ever been observed. Therefore we chose to analyse the full-batch case, and postpone the study of mini-batches to future work.

- *Toy examples*. We would like to clarify that the theory provided in Section 3, which is our main contribution, is valid for any size of neural network. While Figures 2, 3, 4 report the case of a neural network with 5 neurons per layer, the choice of 5 neurons is only for illustrative purposes and the same theory is valid for any number of neurons.

- *Variance of prediction*. We agree with the reviewer that we should report the quality of the estimation of the curvature with a single model. We computed the correlation of the single-model estimate with the estimate obtained by an average over a large sample of models (N=10,000). We observe the following numbers (d0, d1, d2 are the layer sizes):

| Layer sizes | correlation |
|-------------|-------------|
| (100, 10, 100) | 0.66 $\pm$ 0.05 |
| (100, 100, 100) | 0.75 $\pm$ 0.03 |
| (100, 1000, 100) | 0.54 $\pm$ 0.03 |
| (100, 10000, 100) | 0.56 $\pm$ 0.03 |

We will include a plot with more details in the final version of the paper.

- *Assumptions are not validated*. We highlight that the theory provided in Section 3, which is our main contribution, is exact and there are no assumptions involved, other than the invariances that are known to hold exactly for neural networks and initialisation routines. We kindly ask the reviewer to clarify which assumptions they are concerned about.

- *Experiments are not realistic*. We highlight that the main contribution of our work is the theory provided in Section 3.
We consider the empirical evaluation preliminary and not a major contribution of the work. We believe that a full empirical evaluation on a large-scale problem deserves a separate paper. We would like to emphasize the novelty of our theory, as it introduces a unique approach to determining the curvature of neural networks by leveraging their inherent symmetries, a perspective not previously explored. The theoretical methods we developed to tackle this problem are not only original but also hold significant potential for advancing other fields. The study of neural network curvature and loss landscapes is a topic of broad interest, and our work offers new avenues for gaining deeper insights into this major unsolved challenge. We sincerely hope the reviewer will acknowledge the substantial value of our theoretical contribution.

---

Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal response and additional experiments. I agree the theoretical part is of significance; assuming the clarifications and additional experiments are incorporated into the paper, I raise my ranking.
Summary: The authors' work focuses on getting insights on the second moments of $\Sigma_t = \int \nabla \mathcal L(\theta)\nabla \mathcal L(\theta)^\top dp_t(\theta) - \mu_t \mu_t^\top$, by exploiting invariances in representation space. In particular, the authors observe that if $G$ is a symmetry of the loss function, $\mathcal L(G\theta_t) = \mathcal L(\theta_t)$, then the mean and covariance satisfy the eigenvalue equations $\mu_t = G \mu_t$, $\Sigma_t = (G^\top \otimes G) \Sigma_t$. By exploiting the symmetry groups $\mathbb G$ of common activation functions, the authors use the fact that these eigenvalue equations have to hold simultaneously for all $G \in \mathbb G$ to show which structure is imposed on the solution space.

Claims And Evidence: I believe the claims of the authors are supported by sufficient empirical evidence.

Methods And Evaluation Criteria: I believe the proposed experimental evaluation suffices for the authors' claims. See "Experimental Designs Or Analyses" for the only concern.

Theoretical Claims: I confirm that I checked the proofs in the appendix of the manuscript and I did not find any critical issues.

Experimental Designs Or Analyses: In Section 3.4, I would also include in the comparison some local preconditioning methods other than Adam, to see in practice whether the average structure gives comparable results with respect to a local preconditioner. In particular, I would compare with [1,2,3].

[1] J. Martens et al., "Optimizing Neural Networks with Kronecker-factored Approximate Curvature", ICML 2015.
[2] N. Vyas et al., "SOAP: Improving and Stabilizing Shampoo using Adam", ArXiv 2024.
[3] V. Gupta et al., "Shampoo: Preconditioned Stochastic Tensor Optimization", ICML 2018.

Supplementary Material: I checked the implementation provided by the authors in the supplementary material.
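The eigenvalue equations summarized above can be illustrated concretely (this is my own sketch, not code from the paper). For a one-hidden-layer MLP, permuting hidden units is a loss symmetry; the gradient is then equivariant under the corresponding parameter permutation $P$, $\nabla\mathcal L(P\theta) = P\nabla\mathcal L(\theta)$, which is exactly why the averaged covariance must satisfy $\Sigma_t = P\Sigma_t P^\top$ (the matrix form of the $(G^\top \otimes G)$ eigenvalue equation). The check below verifies the exact gradient equivariance numerically for a tiny bias-free Tanh MLP with MSE loss; sizes and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
d, h, n = 3, 4, 32
X = rng.normal(size=(n, d))
y = rng.normal(size=(n, 1))

def unpack(theta):
    """Split the flat parameter vector into W1 (h x d) and W2 (1 x h)."""
    return theta[: h * d].reshape(h, d), theta[h * d:].reshape(1, h)

def grad(theta):
    """Gradient of the MSE loss of a bias-free Tanh MLP (manual backprop)."""
    W1, W2 = unpack(theta)
    a = np.tanh(X @ W1.T)                  # (n, h) hidden activations
    r = a @ W2.T - y                       # (n, 1) residuals
    dW2 = (2.0 / n) * r.T @ a              # (1, h)
    da = (2.0 / n) * r @ W2                # (n, h)
    dW1 = ((1.0 - a ** 2) * da).T @ X      # (h, d)
    return np.concatenate([dW1.ravel(), dW2.ravel()])

sigma = rng.permutation(h)

def apply_perm(v):
    """Coordinate permutation P of the parameter vector induced by
    relabeling the hidden units according to sigma."""
    W1, W2 = unpack(v)
    return np.concatenate([W1[sigma].ravel(), W2[:, sigma].ravel()])

theta = rng.normal(size=h * d + h)
# Loss symmetry L(P theta) = L(theta) implies gradient equivariance:
assert np.allclose(grad(apply_perm(theta)), apply_perm(grad(theta)))
```

Averaging the outer product `grad(theta) @ grad(theta).T` over a permutation-invariant initialization distribution then gives a covariance commuting with every such $P$, which is the constraint the paper exploits.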
Relation To Broader Scientific Literature: The key contribution of this work is theoretical, quantifying a precise relationship between the symmetries of a neural network and the averaged Hessian structure. The proposed preconditioning strategy resulting from the theoretical investigation sounds well-developed (no inversions of big matrices), also placing the work in the line of works proposing quasi-Newton methods for neural network training. As mentioned above, an experimental comparison with some of these methods is missing in the manuscript.

Essential References Not Discussed: To my knowledge, the idea is novel in this context and the relevant literature has been discussed.

Other Strengths And Weaknesses: The work is extremely original and the results are of serious interest both from a theoretical point of view and concerning potential applications. In particular, the idea of exploiting a neural network's symmetries to infer the structure of the averaged Hessian is of broad interest for developing cheap preconditioners to accelerate training.

Other Comments Or Suggestions: I believe a point of strong interest would be to quantify how much the second moment at a point, $\nabla L(\theta_t)\nabla L(\theta_t)^\top$, can deviate from the average $\mathbb E_{\theta_t}[\nabla L(\theta_t)\nabla L(\theta_t)^\top]$. While not easy to perform, since it depends on the update rule, a result of this kind would quantify the local effectiveness of $\Sigma_t$ as a preconditioner for optimization. Even just an experimental investigation in this direction would be of interest.

Questions For Authors: 1. It is interesting to see how your predictions for the structure of the Hessian are extremely sparse in Figure 7 (Tanh activation) and less so in Figure 8 (ReLU activation). Do the authors have any insights about this observation?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments. We are glad that the reviewer recognises that the key contribution of our work is theoretical and considers our work novel and interesting. We answer here the main concerns raised by the reviewer.

- *Empirical comparison with other second-order methods*. We implemented KFAC and Shampoo. We used the original version of Shampoo with power 1/4. Using more recent versions with power 1/2 (as in SOAP) did not give better results in our experiments. In KFAC, we optimised learning rate and damping. In Shampoo, we optimised learning rate and epsilon. The following table shows test loss values for a few time steps in the 2-layer MLP with Tanh case shown in the paper. We will include the full plot in the final version of the paper.

| time | GD | Adam | KFAC | Shampoo | SymO |
|------|------|------|------|---------|------|
| t=50 | 0.0378 | 0.0255 | 0.0091 | 0.0071 | 0.0058 |
| t=100 | 0.0232 | 0.0116 | 0.0074 | 0.0060 | 0.0056 |
| t=150 | 0.0177 | 0.0086 | 0.0067 | 0.0057 | 0.0056 |
| t=200 | 0.0147 | 0.0072 | 0.0063 | 0.0056 | 0.0056 |

We believe that SymO has better results because it goes beyond block-diagonal preconditioning, and also considers interactions between the two layers. However, the performance of KFAC and Shampoo is very similar to SymO.

- *Sparsity of Hessian for Tanh versus ReLU*. Our understanding of the stronger sparsity in the Tanh case with respect to the ReLU case comes from the symmetry groups. In the case of Tanh, in addition to the permutation symmetry, there is also a symmetry for switching the sign of parameters in adjacent layers. Therefore, the symmetry group of Tanh is larger than that of ReLU (please see Chen et al. 1993 for specific numbers). As a consequence, the number of constraints that need to be satisfied is larger and the Hessian has fewer degrees of freedom. These constraints result in a higher sparsity of the matrix.
In the final version of the paper, we will report the specific group sizes for Tanh and ReLU in Section 2.2.

- *Difference between local and global (average) gradient outer product*. It is not easy to compare the second moment at a single point with the ensemble average, because the former is a rank-1 matrix while the ensemble average is usually full rank. Instead, we provide an additional analysis showing the quality of the estimation of the curvature with a single model. We computed the correlation of the single-model estimate with the estimate obtained by an average over a large sample of models (N=10,000). We observe the following numbers (d0, d1, d2 are the layer sizes):

| Layer sizes | correlation |
|-------------|-------------|
| (100, 10, 100) | 0.66 $\pm$ 0.05 |
| (100, 100, 100) | 0.75 $\pm$ 0.03 |
| (100, 1000, 100) | 0.54 $\pm$ 0.03 |
| (100, 10000, 100) | 0.56 $\pm$ 0.03 |

We will include a plot with more details in the final version of the paper.

---

Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the thorough response. In order:
1. I appreciate the comparison with other second-order methods, and indeed in light of these results (as I was expecting) your theory could be effectively applied to design effective and cheap preconditioners that go beyond diagonal or block-diagonal. Most importantly, as far as I know, the idea of using invariances to infer the structure of the Fisher curvature matrix is novel.
2. *Difference between local and global (average) gradient outer product*: I believe these results are also worth a discussion, since the behavior of the correlation seems to be a bit "erratic" in $d_1$ and I cannot find a reason for this (aligned with reviewer 3sTt's comment). I would appreciate it if the authors could expand on this point, at least giving some insights.

In any case, I am very satisfied with the rebuttal and therefore I will maintain my score.
---

Reply to Comment 1.1.1:
Comment: Thank you; indeed we also did not understand the dependence of the correlations on d1, as we thought that correlations should simply increase with d1. After a more careful analysis, we now understand what is happening. We broke down the correlations into different blocks of the curvature matrix, and we made two crucial observations that explain the numbers provided in our previous rebuttal:
1) Correlation is different in different blocks, and increases with d1 for all blocks.
2) The Frobenius norm is different in different blocks, and decreases with d1 for all blocks.
The increase in correlation and the decrease of the Frobenius norm together explain the non-monotonic correlation observed when considering the full curvature matrix with all blocks. We consider three blocks: B11 (layer 1 to layer 1), B12 (layer 1 to layer 2), B22 (layer 2 to layer 2). The correlations monotonically increase with d1 for all blocks:

| Layer sizes | B11 | B12 | B22 |
|---|---|---|---|
| (100, 10, 100) | 0.67 $\pm$ 0.05 | 0.38 $\pm$ 0.06 | 0.30 $\pm$ 0.05 |
| (100, 100, 100) | 0.90 $\pm$ 0.02 | 0.61 $\pm$ 0.04 | 0.52 $\pm$ 0.03 |
| (100, 1000, 100) | 0.96 $\pm$ 0.01 | 0.64 $\pm$ 0.03 | 0.54 $\pm$ 0.03 |
| (100, 10000, 100) | 0.97 $\pm$ 0.01 | 0.65 $\pm$ 0.03 | 0.56 $\pm$ 0.03 |

Furthermore, correlations in B11 are higher while correlations in B22 are lower. However, the Frobenius norm decreases with d1, and has markedly different values in different blocks:

| Layer sizes | B11 | B12 | B22 |
|---|---|---|---|
| (100, 10, 100) | 1.0684 | 0.1119 | 0.0203 |
| (100, 100, 100) | 0.0321 | 0.0113 | 0.0149 |
| (100, 1000, 100) | 0.0024 | 0.0011 | 0.0129 |
| (100, 10000, 100) | 0.0002 | 0.0001 | 0.0131 |

For small values of d1, the norm of B11 dominates, thus correlations in the full matrix are high. For larger values of d1, the norm of B22 dominates, thus correlations in the full matrix are low. We hope that this new analysis clarifies the issue.
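The interplay between block correlations and block norms described in this reply can be sanity-checked with a back-of-the-envelope calculation (my own sketch, not from the paper). If the full-matrix correlation is approximated as a squared-norm-weighted average of the block correlations (assuming both estimates have comparable norms per block, and ignoring that the off-diagonal block appears twice), the tabulated values reproduce the non-monotonic dependence on d1:

```python
# Block-wise correlations and Frobenius norms from the rebuttal tables,
# keyed by hidden size d1 (block order: B11, B12, B22).
blocks = {
    10:    {"corr": (0.67, 0.38, 0.30), "norm": (1.0684, 0.1119, 0.0203)},
    100:   {"corr": (0.90, 0.61, 0.52), "norm": (0.0321, 0.0113, 0.0149)},
    1000:  {"corr": (0.96, 0.64, 0.54), "norm": (0.0024, 0.0011, 0.0129)},
    10000: {"corr": (0.97, 0.65, 0.56), "norm": (0.0002, 0.0001, 0.0131)},
}

def combined_correlation(corrs, norms):
    """Squared-Frobenius-norm-weighted average of block correlations;
    approximates the full-matrix correlation when both estimates have
    comparable norms in each block."""
    weights = [n ** 2 for n in norms]
    return sum(w * c for w, c in zip(weights, corrs)) / sum(weights)

values = [combined_correlation(v["corr"], v["norm"])
          for _, v in sorted(blocks.items())]
# The combined correlation rises from d1=10 to d1=100 and then drops,
# qualitatively reproducing the non-monotonic full-matrix numbers.
assert values[1] > values[0] > values[2]
```

Under this crude model, the high-norm B11 block dominates for small d1 and the B22 block for large d1, which is exactly the mechanism the authors describe.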
CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance
Accept (poster)
Summary: The paper introduces CodeSteer, a method to guide large language models (LLMs) in making optimal choices between textual reasoning and code generation. The authors propose a multi-round supervised fine-tuning (SFT) and direct preference optimization (DPO) approach using a newly created dataset. A benchmark, SymBench, is introduced, containing 37 reasoning tasks. The model CodeSteerLLM, trained on Llama-3-8B, significantly enhances GPT-4o’s symbolic reasoning capabilities. Experiments demonstrate that CodeSteer improves the performance of GPT-4o from 53.3% to 86.4%, outperforming OpenAI’s o1 and DeepSeek R1. The framework generalizes across different LLMs, improving performance on Claude-3, Mistral, and GPT-3.5.

## update after rebuttal
The rebuttal addresses my questions. I remain positive on the paper.

Claims And Evidence: Yes, the claims are well-supported.
+ Performance improvement demonstrated in Table 1
+ Ablation studies show the importance of the design choices

Methods And Evaluation Criteria: Yes, the methodology is well-founded and appropriate.
+ Eval is performed on both seen and unseen tasks
Minor suggestion:
- The evaluation could point out failure cases where CodeSteer struggles. This would help provide insights into how to further improve the work.

Theoretical Claims: The paper does not rely on theoretical claims, but its methodology is sound. Both DPO and SFT are well-studied in the literature; the work uses these approaches to build CodeSteer.

Experimental Designs Or Analyses: Yes, the experimental setup is well-structured.

Supplementary Material: Partially; reviewed the datasets part. The appendix contains useful details.

Relation To Broader Scientific Literature:
+ The development of SymBench, which is open-sourced, unlike prior works.
+ New methods for dataset construction and fine-tuning
+ Strong empirical results

Essential References Not Discussed: The paper is well-positioned within the broader literature.
Other Strengths And Weaknesses:
+ The paper presents a multi-round guidance mechanism to help LLMs select between producing code or text-based reasoning
+ The design elements such as DPO, self-answer checking, and symbolic evaluation seem to be helpful

Other Comments Or Suggestions: see questions for authors.

Questions For Authors: Overall, the paper is a pleasant read. The paper is well-structured and easy to follow. The empirical study supports the claims made by the paper. The methodology seems sound and the empirical results demonstrate the effectiveness of CodeSteer in assisting LLMs in solving reasoning tasks. Below are a few questions:
- What are the failure cases where CodeSteer struggles?
- In Figure 1, why not address the case 1 problem with a coding approach as well?
- Can an alternative approach, where coding is preferred at the beginning and, if coding fails, switched to text-based reasoning, work? From Figure 1's examples, it seems this alternative approach could achieve similar results as the proposed method. I had the question mainly because it is hard to come up with examples where there would be frequent switching between code- and text-based reasoning. Based on the examples shown in the work, the solution seems to be having the LLM first generate code-based solutions. And if the code-based solutions are hard-coded (detected from rules), then ask the model to do text-based reasoning instead.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work and your helpful suggestions. Here are our responses and newly added experiments based on the reviewers’ questions. We kindly ask the reviewer to reconsider our work in light of the responses below. We are happy to further communicate with the reviewer.

***Question 1:*** *The evaluation could point out failure cases where CodeSteer struggles. This will help provide insights on how to further improve on the work.*

**Response 1:** CodeSteer encounters failures under several conditions:
1. **Insufficient Capability of TaskLLM**: In some cases, the capabilities of the TaskLLM—whether through coding or textual reasoning—are not sufficient to solve the given problem.
2. **Suboptimal Code Generation**: The generated code may not use the most efficient method, which can lead to timeouts. For example, as shown in Figure 4c, CodeSteer’s success rate decreases when the target values increase, due to the exponential growth in search complexity.
3. **Lack of Robustness to Task Complexity**: CodeSteer is not yet robust enough across tasks with varying complexity. As shown in Figure 4a, performance drops on medium-complexity samples. In these cases, CodeSteer sometimes selects textual reasoning over coding and ends up producing incorrect answers.

**We will elaborate on these issues, along with relevant examples, in the Appendix to guide future improvements to the system.**

***Question 2:*** *In Figure 1, why not address the case 1 problem with a coding approach as well?*

**Response 2:** Thank you for the great question. In our study, **solving Case 1 problems through coding is acceptable**. During training data synthesis, we do not explicitly account for differences in execution or time cost between code generation and textual reasoning. Our primary criterion for data selection is whether the answer is correct. If both coding and textual reasoning successfully solve a problem, both are included in the dataset.
Since code execution costs and runtimes **depend on user-specific hardware**, it is difficult to define a consistent comparison metric between the two approaches. Instead, we incorporate the number of guidance/generation rounds as part of the DPO scoring to favor more efficient solutions. **We will include this discussion in the revised version of the paper.**

***Question 3:*** *Can an alternative approach, where coding is preferred at the beginning and, if coding fails, switched to text-based reasoning, work? From Figure 1's examples, it seems this alternative approach could achieve similar results as the proposed method. I had the question mainly because it is hard to come up with examples where there would be frequent switching between code- and text-based reasoning. Based on the examples shown in the work, the solution seems to be having the LLM first generate code-based solutions. And if the code-based solutions are hard-coded (detected from rules), then ask the model to do text-based reasoning instead.*

**Response 3:** Thank you for the insightful suggestions. Based on the suggestions from you and Reviewer AJhL, and to better evaluate CodeSteer and provide deeper comparisons, **we include three prompt-based baselines:**
1. **Few-Shot**: Uses five example-based prompts to guide the TaskLLM in mimicking the 'code/text' switching reasoning process.
2. **Code-First-Rule**: A rule-based approach where the TaskLLM is prompted to use code for the first three rounds (with increasing complexity) and then switch to text-based reasoning.
3. **Code-First-Agent**: Employs GPT-4o as the CodeSteerLLM to guide the TaskLLM using the same code-first-then-text strategy as in **Code-First-Rule**.

As shown in the table, the three prompt-based methods perform significantly worse than CodeSteer, **underscoring the effectiveness of training with our synthesized data**.
Upon analyzing failure cases, we identify two main reasons:
- CodeSteerLLM’s guidance often includes problem-specific coding knowledge (e.g., suggesting A* or DFS) and how to formalize the problem, which **purely prompt-based methods struggle to capture**.
- **Switching between code and text can be advantageous**, as later code generations can build on insights from prior textual reasoning. For instance, in Path Plan, a text-generated trajectory may be partially correct; subsequent code can refine it directly, reducing the search space.

| Task Success Rate % | CodeSteer | Few-Shot | Code-First-Rule | Code-First-Agent |
|---|---|---|---|---|
| Game 24 | **93** | 28 | 68 | 76 |
| Path Plan | **75** | 54 | 59 | 57 |
| Eight Queen | **78** | 47 | 62 | 73 |
| Combinatorial Calculation | **86** | 58 | 47 | 59 |
| 2048 | **56** | 49 | 40 | 48 |

**We will include the above added experiments and discussion in the final paper.**
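For concreteness, the Code-First-Rule baseline described in this rebuttal can be sketched as a simple controller loop. This is my own illustrative reconstruction, not the authors' code; `task_llm` and `run_code` are hypothetical stand-ins for the TaskLLM call and a sandboxed code executor, and the prompt wording is invented:

```python
from typing import Callable

def code_first_rule(question: str,
                    task_llm: Callable[[str], str],
                    run_code: Callable[[str], tuple],
                    max_rounds: int = 4) -> str:
    """Rule-based guidance: request code for the first three rounds
    (asking for progressively more sophisticated programs), then fall
    back to step-by-step textual reasoning."""
    answer = ""
    for r in range(1, max_rounds + 1):
        if r <= 3:
            prompt = (f"Round {r}: solve by writing a Python program "
                      f"(more general than before).\n{question}")
            ok, output = run_code(task_llm(prompt))
            if ok:
                answer = output
        else:
            answer = task_llm(f"Solve step by step in text.\n{question}")
    return answer

# Demo with stand-in stubs: the fake executor always fails, so the
# controller falls back to textual reasoning in the final round.
calls = []
def fake_llm(prompt):
    calls.append("code" if "Python program" in prompt else "text")
    return "final answer: 42"
def fake_run(code):
    return (False, "")

result = code_first_rule("Use 3, 4, 5, 6 to reach 24.", fake_llm, fake_run)
```

The demo produces the mode sequence `["code", "code", "code", "text"]`, matching the three-rounds-of-code-then-text rule; what this fixed schedule cannot express is the problem-specific guidance (e.g., which algorithm to code) that the trained CodeSteerLLM provides.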
Summary: This paper targets better performance of LLMs on symbolic reasoning tasks such as Game-24 via multi-round generations. The method contains many components, such as a small model fine-tuned for guiding the generation, a self-answer checker, and a hardcoded symbolic checker. The guidance model (CodeSteer, based on Llama-8B) is fine-tuned to guide large models (e.g., GPT-4o) during multi-round generation, e.g., deciding whether to generate code to solve the problem. The guidance model is trained on a synthesized dataset with GPT-4o solving 28 symbolic tasks. The method is evaluated on another 9 unseen tasks. The self-answer checker always prompts GPT-4o to generate code to verify the answer's correctness after each round of generation. The symbolic checker evaluates the complexity of the generated code to reject code that is not sufficiently sophisticated for the task at hand. With all the modules, the proposed method outperforms o1 and DeepSeek-R1 on the 9 unseen tasks. Without any of the symbolic checkers, the proposed method performs worse while still outperforming the baseline that prompts GPT-4o for guidance.

## update after rebuttal
I still have concerns regarding the generalizability of such a method, e.g., whether we can use it in more general domains/tasks. The authors’ replies are not that helpful.

Claims And Evidence: The authors claim that the proposed method outperforms baselines including o1. However, it only shows evidence on 9 tasks. These are not sufficient to evaluate the models' performance in general, especially when considering the symbolic checkers that are specifically designed for the tasks at hand.

Methods And Evaluation Criteria: I like the proposed data generation and fine-tuning process for the guidance module in general. The model is fine-tuned to prefer actions that lead to higher returns, whose expectations are estimated given the MCTS trees.
The name, multi-round data generation, was confusing though, as it could also refer to generating the datasets multiple times (e.g., obtaining new MCTS trees using the current fine-tuned guidance model).

The evaluation metric is less intuitive and less stable, with the normalization term being the maximum pass rate over all methods. I would recommend the raw average of pass rates. This may not be too problematic, as I did check the other metric using the numbers reported in Table 1, and the ranking of methods looks similar to the reported one.

Theoretical Claims: No theoretical claim in this paper.

Experimental Designs Or Analyses: The test sets are small and may not be diverse enough. There is no validation set, and I haven't seen links to the hyperparameter tuning process.

Supplementary Material: I checked the prompts and analysis in the Appendix. They look good and informative.

Relation To Broader Scientific Literature: Guided search is important for test-time-compute scaling. There are multiple works investigating it for various domains, such as multi-round code refinement. The recent reasoning models such as o1 and R1 seem to have similar abilities as well.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: It could be interesting to fine-tune a bigger model, such as GPT-4o, for guidance, but I am not sure if the performance will improve a lot and surpass o1 without the symbolic checkers.

Questions For Authors:
* Are the symbolic checkers task/domain-specific? Can they generalize to more general reasoning tasks?
* Are there results in more domains and tasks?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work. The following responses clarify several misunderstandings raised by the reviewer. We've also incorporated new experiments and analyses. We kindly ask the reviewer to reconsider our work in light of the responses below.

***Question 1:*** *Test on 9 unseen tasks are not sufficient to evaluate the models' performance, especially when considering the symbolic checkers that are specifically designed for the tasks at hand. Are the symbolic checkers task/domain-specific?*

**Response 1:**
1. **The 28 seen and 9 unseen tasks are randomly divided without human bias.** The test sets are diverse enough, as they cover all 6 types of reasoning capabilities we evaluated, as shown in Table 4. Furthermore, in our work, the seen tasks are also valid and decent benchmarks to evaluate, since the tested samples are all different from the training samples and the complexity range is large enough.
2. **The relative improvement of CodeSteer over other methods is almost the same in the seen and unseen groups, showing that no overfitting is happening.** We include a table of CodeSteer’s performance improvement over several main baselines for both seen and unseen tasks below.

| Avg. Norm. Improvement | o1 | DeepSeekR1 | Symbolic Agent | Code/Text Choice | Code Interpreter |
|---|---|---|---|---|---|
| Seen | 4.3 | 8.8 | 11.1 | 8.4 | 14.8 |
| Unseen | 1.9 | 12.2 | 13.4 | 9.2 | 19.4 |

From the table we can observe that GPT-4o+CodeSteer has comparable and even better performance improvements for unseen tasks. This result indicates that no overfitting happens during our fine-tuning process and our method is generalizable.

3. **Symbolic checkers are not domain-specific**. Please refer to Appendix F for the whole code, which is universal for all tasks we evaluate.
Symbolic checkers evaluate the code complexity by checking all the symbolic computing characteristics, with no task-specific components. Thus, they do not weaken the generalization of our method. 4. **SymBench is not small, since it gathers and covers nearly all types of symbolic tasks appearing in the current research domain (refer to Appendix Section C describing SymBench)**. ***Question 2:*** *There is no validation set, and I haven't seen links to the hyperparameter tuning process.* **Response 2:** As explained above, we directly tune hyperparameters on the 28 seen tasks without requiring a validation set. **We will add the explanation of the hyperparameter tuning process in the Appendix of the revised version.** ***Question 3:*** *The name, multi-round data generation, was confusing though, as it could also refer to generating the datasets multiple times.* **Response 3:** Thank you for your helpful advice. Admittedly, this may cause some confusion. **We will change it to multi-turn data generation in the final version.** ***Question 4:*** *The evaluation metric is less intuitive and stable with the normalization term as the maximum pass rates of all methods. I would recommend the raw average of pass rates.* **Response 4:** 1. **We use the normalized metric to better compare the relative performance among methods and prevent any single task from disproportionately influencing the overall evaluation.** 2. **To ensure the robustness of our conclusions against changes in the evaluation metric, we recalculate the average score without normalization.** From the results in the table below, we can observe that for both seen and unseen tasks, our method still clearly outperforms all main baselines, even more so on unseen tasks. **We will add this metric in the final paper.**

| Avg. Score | o1 | DeepSeekR1 | Symbolic Agent | Code/Text Choice | Code Interpreter | **GPT-4o + CodeSteer** |
|------------|----|-----------|--------------|-----------------|--------------------|-------------------|
| Seen | 73.7 | 70.4 | 67.5 | 70.0 | 64.9 | **76.1** |
| Unseen | 67.8 | 64.8 | 60.9 | 64.9 | 56.0 | **72.2** |
| Total | 72.3 | 69.0 | 65.9 | 68.8 | 62.8 | **75.2** |

***Question 5:*** *It could be interesting to finetune a bigger model, such as GPT-4o.* **Response 5:** Since we do not have access to finetune GPT-4o, here we add experiments to **compare finetuning CodeSteerLLM on Llama-3-8B, Llama-2-13B, and Llama-3-70B (LoRA) on the same SFT and DPO dataset, as shown in the following table**. We find that performance does not improve appreciably with larger models. The current main bottleneck is still the lack of a high-quality dataset.

| Avg. Norm. Score | Seen | Unseen |
|-------|----------------|-----------------|
| Llama-3-8B (full-parameter) | 88.1 | 81.3 |
| Llama-2-13B (full-parameter) | 87.3 | 82.2 |
| Llama-3-70B (LoRA) | 88.6 | 80.0 |

**We will add the above new experimental results and discussion in the final paper.**
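As a side note for readers, the two metrics contrasted in Response 4 differ only in a per-task normalization term: each method's per-task pass rate is divided by the best pass rate any method achieves on that task before averaging. A minimal sketch of this idea, where the function name and the numbers are illustrative, not taken from the paper:

```python
def normalized_scores(pass_rates):
    """pass_rates: dict mapping method -> list of per-task pass rates."""
    methods = list(pass_rates)
    n_tasks = len(next(iter(pass_rates.values())))
    # Normalization term per task: the maximum pass rate over all methods.
    task_max = [max(pass_rates[m][t] for m in methods) for t in range(n_tasks)]
    return {
        m: sum(pass_rates[m][t] / task_max[t] for t in range(n_tasks)) / n_tasks
        for m in methods
    }

rates = {"A": [0.8, 0.2, 0.9], "B": [0.4, 0.4, 0.9]}
norm = normalized_scores(rates)
# A: (0.8/0.8 + 0.2/0.4 + 0.9/0.9) / 3 = 2.5/3; B: (0.4/0.8 + 0.4/0.4 + 0.9/0.9) / 3 = 2.5/3.
# The raw averages differ (0.633 vs. 0.567), but the normalized scores tie,
# illustrating how the normalization reweights tasks.
```

This makes concrete why the two rankings in the rebuttal can differ slightly while remaining broadly consistent.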
Summary: This work introduces a comprehensive benchmark, SymBench, comprising 37 symbolic tasks with adjustable complexity, along with datasets of 12k multi-round guidance/generation trajectories and 5.5k guidance comparison pairs, and fine-tunes a CodeSteerLLM using the introduced datasets, achieving improved reasoning performance. Claims And Evidence: The claims are decent and supported by the comprehensive experimental results. Methods And Evaluation Criteria: The authors train the CodeSteerLLM to verify and guide the reasoning process of LLMs through supervised fine-tuning on the collected dataset. This is different from the recent surge of RL to incentivize such capabilities. I am curious if RL can further improve the potential of CodeSteerLLM. Theoretical Claims: There are no theoretical claims to be evaluated. Experimental Designs Or Analyses: Experimental designs are proper. Supplementary Material: no review Relation To Broader Scientific Literature: The proposed method can improve the reasoning capabilities of LLMs and help the realm of AI4Science. Essential References Not Discussed: no issue to be discussed Other Strengths And Weaknesses: My biggest concern is whether RL can further improve the potential of CodeSteerLLM, and I expect more experimental results that explore this direction. Moreover, why not combine CodeSteerLLM with reasoning LLMs like o1 or r1? Other Comments Or Suggestions: no Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work and helpful suggestions. We've incorporated new experiments and analyses based on reviewers’ questions. We kindly ask the reviewer to reconsider our work in light of the responses below. ***Question 1:*** *The authors train the CodeSteerLLM to verify and guide the reasoning process of LLMs through supervised fine-tuning on the collected dataset. This is different from the recent surge of RL to incentivize such capabilities. I am curious if RL can further improve the potential of CodeSteerLLM. I expect more experimental results that explore this direction.* **Response 1:** 1. **Both CodeSteer and recent reasoning LLMs, such as R1, begin with supervised fine-tuning (SFT) to initialize the model with basic answer formats and reasoning capabilities.** In the second stage, we apply DPO to further optimize CodeSteerLLM, while other methods typically use PPO or GRPO. Prior work has shown that DPO, PPO, and GRPO aim to learn the same optimal policy. 2. **Why do we choose DPO rather than PPO/GRPO?** **The key distinction between CodeSteer and recent reasoning models is its two-LLM design: a fixed, non-tunable TaskLLM (GPT-4o) and a learnable CodeSteerLLM.** This setup introduces three main constraints on directly applying PPO/GRPO:

- GPT-4o is closed-source and non-deterministic, producing varied outputs even with temperature set to 0, making it difficult to evaluate the effectiveness of a single CodeSteerLLM guidance using one answer trajectory.
- Sampling from GPT-4o incurs significant API costs, unlike the virtually unlimited sampling budget available when training open-source models.
- Existing PPO/GRPO approaches rely on final reward signals for optimization, but in CodeSteer, intermediate guidance steps are hard to evaluate directly.

To address these challenges, we propose a tree-based multi-round guidance sampling strategy (Section 4.2).
Each intermediate guidance is scored based on its average downstream returns. Additionally, since DPO relies on preference comparisons rather than precise reward values, it may require fewer TaskLLM samples than PPO/GRPO. In the following table, we compare the performance of DPO and PPO using the same number of answer samples (ranging from 0 to 5000) and starting from the same post-SFT model. After convergence, **DPO consistently outperforms PPO**, particularly in low-sample regimes. **This supports our decision to adopt DPO in this study. We will add the above discussion and results into the final paper version.**

| CodeSteer Avg. Norm. Score | Num. 0 | Num. 1000 | Num. 3000 | Num. 5000 |
|-------|----------------|-----------------|-----------------|-----------------|
| DPO | 79.1 | 79.5 | 80.3 | 81.4 |
| PPO | 79.1 | 79.0 | 79.6 | 81.1 |

3. **A promising future direction is to apply reinforcement learning to enhance reasoning abilities in unified code/text LLMs.** While PPO and GRPO may not offer clear advantages over DPO in our current setup, they could be more suitable when applied to a single unified model that both guides and generates code/text. In such cases, PPO/GRPO can directly optimize the policy, which we plan to explore in future work. ***Question 2:*** *Moreover, why not combine CodeSteerLLM with reasoning LLMs like o1 or r1?* **Response 2:** **We have added o1+CodeSteer to further verify CodeSteer’s effectiveness.** At the time this work was done, we found that the o1 and r1 APIs always reported errors when requesting multi-round answer generation, which hindered testing CodeSteer with these reasoning LLMs. Now the OpenAI platform has fixed this issue. Here we add experiments to test whether CodeSteer can improve the performance of o1. As shown in the following table, **the performance of o1 improves notably when augmented with CodeSteer on 5 randomly chosen unseen tasks, further verifying the effectiveness of CodeSteer.
We will include these results in the final version.**

| Task success rate % | Cryptanalysis | Synthesis Decomposition | 2048 | Eight Queens | Combinatorial Calculation |
|-------|----------------|-----------------|-----------------|-----------------|-----------------|
| o1 | 60 | 57 | 52 | 84 | 57 |
| o1 + CodeSteer | 73 | 94 | 72 | 97 | 95 |

**We are glad to answer further questions from the reviewer.**
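For context on the DPO-vs-PPO discussion above: DPO optimizes a logistic loss on preference pairs rather than an explicit reward model. A minimal sketch of the standard DPO objective for a single pair follows; this is generic DPO, not the authors' training code, and all numeric values are hypothetical:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair: -log sigmoid(beta * margin).

    logp_w / logp_l: policy log-probs of the preferred and rejected responses;
    ref_logp_*: the same quantities under the frozen reference model.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical numbers: the policy ranks the preferred response higher than the
# reference does, so the margin is positive and the loss falls below log(2).
loss = dpo_loss(logp_w=-10.0, logp_l=-12.0, ref_logp_w=-11.0, ref_logp_l=-11.0)
baseline = dpo_loss(logp_w=-11.0, logp_l=-11.0, ref_logp_w=-11.0, ref_logp_l=-11.0)  # = log(2)
```

Because the loss depends only on the ordering of two responses, preference pairs scored by average downstream returns (as in the tree-based sampling strategy above) suffice, without precise per-step rewards.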
Summary: This paper introduces **CodeSteer**, a model fine-tuned to enhance reasoning abilities between text and code. The authors propose a synthetic dataset and a new evaluation suite of 37 symbolic tasks to demonstrate the model’s performance on complex reasoning tasks. They claim that by leveraging both code-generation capabilities and natural language generation, CodeSteer shows significant improvements on reasoning benchmarks. Claims And Evidence: - **Claim**: The fine-tuned model (CodeSteer) performs well on symbolic reasoning tasks by switching contexts between text and code. - **Evidence**: The experimental results on the proposed benchmark demonstrate CodeSteer’s effectiveness, supporting the claim that it leverages code generation to assist with reasoning. While this claim is promising, more details on the following would strengthen the evidence: 1. **Comparison with standard baselines**: A direct comparison with models that use text or code alone would highlight the advantages of the interleaved method. Methods And Evaluation Criteria: ### Dataset Synthesis One of the central elements of CodeSteer is the **synthesized training dataset**. However, the steps to create this dataset are not fully detailed in the main body. Questions that arise include: 1. **What is the precise process to generate synthetic data?** - How are prompts, code fragments, and textual explanations created or derived? - Are there automated or semi-automated processes for generating these examples? 2. **Data Evaluation** - How is the quality of the synthesized data validated? - Are there checks for correctness, diversity, and relevance to the targeted symbolic tasks? 3. **Data Quality Considerations** - What filtering steps are taken to remove low-quality or repetitive samples? - What metrics (e.g., coverage of different symbolic reasoning types) ensure the dataset’s comprehensiveness? 
### Proposed Benchmark The paper introduces a benchmark consisting of **37 symbolic tasks**: - **Data Collection**: How exactly were these tasks sourced or created? - Were they adapted from existing symbolic reasoning benchmarks or newly constructed? - What cleaning and filtering procedures were applied? - **Benchmark Size**: With 37 tasks, the benchmark appears relatively small. - Is there a plan to scale it further in future work or to combine it with existing reasoning datasets for broader coverage? - **Placement in the Main Text**: Since the benchmark is a significant contribution, it would be beneficial to include more details and examples in the main paper (instead of primarily in the appendix). Theoretical Claims: There are no explicit theoretical claims beyond empirical demonstrations of CodeSteer’s effectiveness. Experimental Designs Or Analyses: In the current evaluation: - **Reported Metric**: Table 1 uses a single metric to show CodeSteer’s performance. - **Recommendation**: Consider adding additional metrics that assess different aspects of generation, such as: - **Exact Match** (for correctness of final answers). - **CodeBLEU** (for measuring code-generation quality and similarity). - **Baselines**: While CodeSteer’s performance is the focus, it would be illuminating to compare with: - **GPT-4 (Few-shot)**: Prompt GPT-4 directly with a few-shot approach, guiding it to replicate CodeSteer’s “code-then-text” reasoning procedure, but without the fine-tuning. This would reveal whether the gains come primarily from the interleaved prompting or from specific CodeSteer training data. By incorporating multiple metrics and stronger baselines, the paper’s findings could be even more compelling. 
Supplementary Material: Yes Relation To Broader Scientific Literature: Code Generation Reasoning Essential References Not Discussed: N/A Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the helpful feedback. We've clarified several misunderstandings raised by the reviewer and also incorporated new experiments and analyses. We hope the reviewer will reconsider our work based on the responses below. ***Question 1:*** *Direct comparison with models that use text or code alone.* **Response 1:** **We have already compared methods that use only text or only code in the original paper. As shown in Table 1**, the results for **All Text + CoT** and **All Code + CoT** represent cases where GPT-4o is prompted to generate solely text or code to solve the task in a single round (see Lines 209–213 for details). The superior performance of GPT-4o+CodeSteer further demonstrates the effectiveness of CodeSteer. ***Question 2:*** *Process to generate synthetic data: How are prompts, code fragments, and textual explanations created? Are there automated processes?* **Response 2:** 1. Apart from SymBench and the dataset, we also propose novel training techniques such as **multi-round DPO, SFT data augmentation, Symbolic Checker, and Self-answer Checker**. 2. **All training data are synthesized and validated automatically using predefined rules, without costly human annotation.** Details of dataset synthesis for SFT and DPO are provided in **Section 4.1 (Lines 157–170)** and **Section 4.2**. The data generation process, including textual reasoning components, is handled by GPT-4o guided by detailed prompts with preset knowledge or hints. The implemented prompts are shown in **Page 3 (Lines 160–163, first column) and Appendix Section D**. Generated code is extracted using predefined scripts. ***Question 3:*** *Checks for correctness, diversity, and relevance to the targeted symbolic tasks. Filtering steps to remove low-quality samples. Metrics for the dataset’s comprehensiveness.* **Response 3:** **In both SFT and DPO, we include only answer trajectories that lead to correct solutions**.
To encourage diverse responses, we use varied prompts for SFT (Section 4.1, Lines 163–168) and different model checkpoints for DPO. The selected training tasks span a wide range of symbolic reasoning types, as detailed in Appendix Section C and Table 4. Each task includes samples of varying complexity to promote diverse reasoning. Training data is dynamically updated based on task performance: tasks with lower performance receive more data in subsequent rounds. ***Question 4:*** *How exactly were these tasks sourced or created? What cleaning and filtering procedures?* **Response 4:** **As noted in the original paper (Lines 90–100 and 128–140, first column), we collect and redevelop all the tasks from former works, since their datasets and codes are not open-sourced**. The specific source work for each task is explained in Appendix Section C. Among 45 tasks, we select 37 tasks, since the remaining 8 tasks are not challenging for GPT-4o. ***Question 5:*** *With 37 tasks, the benchmark appears relatively small. Plan to scale it further in future work or to combine it with existing reasoning datasets?* **Response 5:** **SymBench is not small, since it covers nearly all types of symbolic tasks appearing in the current research domain.** SymBench also comprises more tasks (37) than other reasoning and planning benchmarks like BIG-Bench-Hard (23 tasks), PlanBench (6 tasks), and LogicGame (31 tasks). We will definitely add more tasks and combine SymBench with other existing reasoning datasets for a more comprehensive benchmark in the future. ***Question 6:*** *Additional metrics: Exact Match, CodeBLEU* **Response 6:** Since many SymBench problems have multiple correct solutions, Exact Match is not ideal. Similarly, CodeBLEU relies on reference code and assumes a single ground-truth solution, making it less suitable, since TaskLLM outputs may be in text or code and correct solutions are not unique.
**The reviewer’s suggestion inspired us to explore alternative metrics that capture different aspects of generation**. For example, the following table shows that the average code complexity increases with more rounds, supporting the idea that TaskLLM refines code progressively under guidance.

| Code complexity score | Round 1 | Round 2 | Round 3 | Round 4 | Round 5 |
|------|----------------|-----------------|-----------------|-----------------|-----------------|
| | 9.32 | 11.44 | 12.94 | 13.31 | 13.54 |

***Question 7:*** *Prompt GPT-4 directly with a few-shot approach, guiding it to replicate CodeSteer’s “code-then-text” reasoning procedure.* **Response 7:** Here we **include three prompt-based baselines: Few-Shot, Code-First-Rule, Code-First-Agent**. Due to the word limit, please refer to our response to Reviewer RBSg for the full experimental results and detailed discussion. In summary, **these three prompt-based methods significantly underperform compared to CodeSteer, highlighting the value of training with our synthesized data**. **We will include all of the above contents in the revised paper. Happy to answer any further questions.**
CMoS: Rethinking Time Series Prediction Through the Lens of Chunk-wise Spatial Correlations
Accept (poster)
Summary: This paper proposes CMoS, a highly lightweight model for time series forecasting tasks. Unlike previous studies, CMoS captures temporal patterns in a chunk-level manner. The Correlation Mixing mechanism builds robust correlation matrices, and the Periodicity Injection technique helps to leverage periodicity information. Claims And Evidence: - In Figure 1, this paper claims that "the specific patterns in the time window change greatly, while the spatial correlations of the time series chunks remain similar." This may be a specific case for time series data with obvious periodicity, and may not hold when time series data shows no periodicity. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. There are no problems with the proofs. Experimental Designs Or Analyses: Yes. There are no problems. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: Previous methods model the temporal relationship at the point level or learn features from patches. This paper proposes learning the relationship at the chunk level. It provides a new perspective on leveraging the relationship between channels, instead of simply employing the Channel Independent strategy. Essential References Not Discussed: There are no problems. Other Strengths And Weaknesses: Strengths: - The writing is clear and fluent. - The method is effective and easy to implement. Weaknesses: - Some equations contain errors. For example, in Theorem 3.2, the superscript of the summation symbol should be n-1. A similar problem exists in Equation 3. - The details of Periodicity Injection for each dataset should be described in the experimental part. - This is a lightweight model, but SparseTSF seems much more lightweight than this model. Please specify the reason and the advantages over SparseTSF. Other Comments Or Suggestions: No more comments. Questions For Authors: Please see the weaknesses and problems above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Thank you for your detailed and thoughtful review! We will address your concerns point by point.**

> Q1: The statement may not hold when time series data shows no periodicity.

We appreciate your careful examination of this matter. **Indeed, as you pointed out, the formulation becomes less rigorous for some irregular time series, such as in the case of a time series generated by random walks. Nevertheless, it is notable that periodicity or a stationary trend constitutes a fundamental indicator of time series predictability and forecasting potential**. Long-term forecasting can be very challenging if time series exhibit no periodicity or stationary trend. For example, DLinear demonstrates that a naive baseline that simply copies the most recent observation can outperform almost all SOTA deep learning models on the aperiodic financial dataset [1]. We also reach a similar conclusion by analyzing the 1st channel of the Weather dataset described in Sec. 5. So, **our statement holds true for time series that are predictable and have strong forecasting potential**. And through extensive experiments, **we demonstrate both the soundness and superiority of our approach, which is designed based on the principles of the statement, across the majority of real-world scenarios**. *Also, motivated by your concerns, we will include an additional discussion section in the revised paper to elaborate on this issue, as well as other potential limitations.*

> Q2: Some equations contain errors.

Many thanks for the reminder! We will carefully review all the formulas and correct the errors in the revised paper.

> Q3: The details of Periodicity Injection for each dataset should be described in the experimental part.

Thanks for your valuable suggestions.
We will add the following detailed description of Periodicity Injection in our revised paper according to your suggestion: For each dataset, **firstly, we use the AutoCorrelation Function (ACF) to calculate the dominant period $p$ of this dataset** (the periods of all datasets calculated by ACF are listed in the 5th column of Table 7 in Appendix D). Next, during the grid search process for the hyperparameter chunk size, **for each experiment setting, we input the calculated $p$ and the hyperparameter chunk size $S$ into Algorithm 1 provided in Appendix E, to obtain a modified matrix initialized via Periodicity Injection**. This matrix is then used to replace the first basic correlation matrix of the initialized model, thereby completing the Periodicity Injection operation.

> Q4: Please specify the reason and the advantages over SparseTSF.

SparseTSF significantly reduces the number of parameters by downsampling the original time series. Additionally, it adopts a Channel-Independent strategy (described as One Bus in Appendix B), modeling a single shared temporal structure for all time series, to minimize the overall number of model parameters. However, the oversimplified modeling of the time series forecasting task results in severely limited representational capacity. **Since it can only model a simple and singular temporal structure for a given time series system, this method fails to accurately capture the diverse temporal patterns that may exist across various time series within the system**. As a result, its forecasting performance tends to be suboptimal.
In contrast, with the help of the Correlation Mixing strategy, **CMoS focuses on building several fundamental basic correlations that represent the diverse and inherent patterns of the whole system, and applies a specific mixing strategy for each channel to capture the channel-specific and more accurate temporal structure, thereby greatly improving the prediction performance.** According to the experimental results, CMoS consistently outperforms SparseTSF by a significant margin on datasets containing more than 20 channels (more channels often contain a greater diversity of temporal structures). Therefore, **when comprehensively considering the trade-off between prediction performance and parameter efficiency, our method stands out as the optimal choice among existing approaches, including SparseTSF**. [1] Zeng, Ailing, et al. "Are Transformers Effective for Time Series Forecasting?" Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37, No. 9, 2023. **Thank you again for your kind review, and we hope our response can address your concerns!** --- Rebuttal Comment 1.1: Comment: Thank the authors for their response. I hope these discussions can be incorporated into the paper to enhance its comprehensiveness. My concerns have been addressed, and I am happy to raise my rating to 4. --- Reply to Comment 1.1.1: Comment: Thank you very much! Your suggestions are very helpful for improving the quality of our paper and work, and we will incorporate all the details mentioned in our discussions into the final paper. Once again, we sincerely thank you for taking the time to review our paper and for raising the rating!
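The ACF-based period detection described in the Periodicity Injection response above can be sketched in a few lines. This is an illustrative re-implementation under our own assumptions (the lag range and the toy signal are ours), not the authors' code:

```python
import math

def acf(series, lag):
    """Sample autocorrelation of a sequence at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean) for t in range(n - lag))
    return cov / var

def dominant_period(series, max_lag):
    # The dominant period is the lag (>= 2) where the ACF peaks.
    return max(range(2, max_lag + 1), key=lambda lag: acf(series, lag))

# Toy signal with a known period of 24 samples (e.g., hourly data, daily cycle).
series = [math.sin(2 * math.pi * t / 24) for t in range(240)]
p = dominant_period(series, max_lag=48)  # finds p = 24 for this clean signal
```

The resulting $p$, together with the chunk size $S$, would then seed the first basic correlation matrix, as the rebuttal describes for Algorithm 1.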
Summary: This paper works on the time series forecasting task, and the main idea is to split time series into chunks and build up chunk-chunk spatial correlations to achieve robust time series forecasting. The paper is clearly written, and the proposed modules are accompanied by good motivations and solid proofs.

## Update After Rebuttal

I've read the rebuttal and other reviewers' comments; my final rating is weak accept. The reasons why I cannot give higher ratings are: (1) The performance gap in Table 1 is minor; (2) Although the proposed method can greatly reduce the parameter number, the FLOPs and inference time remain comparable to other methods. Claims And Evidence: 1. In Figure 1, which specific techniques are used to obtain the correlation value? It's interesting yet a bit confusing why the obtained correlation values are similar, though the time series in each chunk looks very different. Please give more details. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, no issues are found. Experimental Designs Or Analyses: Yes, the conducted experiments are reasonable. A few experiments are missing, and I mention them in "Questions for Authors" below. Supplementary Material: Yes, an appendix is provided and I have read it. Relation To Broader Scientific Literature: The key contribution of this paper is reducing parameter sizes specifically for time series data; I think this technique would be general to all time series related tasks. Essential References Not Discussed: Can you discuss the differences between your method and the following chunk-based techniques? [1] Ju, Yue, Alka Isac, and Yimin Nie. "Chunkformer: Learning Long Time Series with Multi-stage Chunked Transformer." arXiv preprint arXiv:2112.15087 (2021). [2] Johnsen, Pål V., et al. "Recency-Weighted Temporally-Segmented Ensemble for Time-Series Modeling." arXiv preprint arXiv:2403.02150 (2024). Other Strengths And Weaknesses: Strengths: 1.
This paper can effectively reduce the number of model parameters to achieve robust time series forecasting. Weaknesses: 1. The performance gap is quite minor. Other Comments Or Suggestions: N.A. Questions For Authors: 1. Can you compare your aggregated spatial correlations (from basic correlations) with attention-based methods? I'm interested if your correlations can be similar or not. 2. In Section 4.4, for the efficiency-related experiments, can you further include time cost and flops? It's possible that you use less parameters but have more computations. 3. Can you provide some failure cases when your method cannot have good performance? Theoretically, I think aggregation-based correlations cannot well cover all cases. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Thank you for your detailed and thoughtful review! We will address your concerns point by point.**

> Q1: Details about Fig. 1

As an illustrative example, we use the MSE of each pair of chunks as the correlation value (labeled in the figure), with a lower MSE meaning a stronger correlation. **Only when the shapes of two chunks are relatively similar can the MSE between them be close to zero, indicating a strong correlation between the two chunks**. In other cases, there is no significant relationship between them, since their MSE is usually much greater than zero. We would be pleased if the above description can address your concerns!

> Q2: The differences between your method and other chunk-based techniques.

Thanks for your valuable question. Despite all three approaches employing chunk operations on raw time series, **both Chunkformer and REWTS emphasize modeling intra-chunk relationships within individual chunks**. However, this design suffers from poor interpretability, as the internal mechanisms (such as attention weights of intra-chunk correlation) are often opaque. **It's hard for us to understand how these intra-chunk relationships contribute to the future predictions**. Moreover, since intra-chunk modeling is highly sensitive to each value within a chunk, **the model's performance is often greatly affected when the data contains high levels of noise or outliers**. **In contrast, our CMoS focuses on directly modeling the relationships between historical chunks and prediction chunks (inter-chunk relationships)**. This approach offers several key advantages:

- **Enhanced Interpretability**: Our method provides clear insights into how historical chunks influence future predictions (as shown in Sec. 5).
- **Improved Robustness**: The focus on broader temporal relationships rather than fine-grained intra-chunk patterns makes our method more resilient to noise in the data, which is theoretically proved in Sec.
3.1 and experimentally proved in Sec. 4.3. **We will add this point to our related works to help readers better understand the differences between CMoS and other methods.**

> Q3: Can you compare your aggregated spatial correlations with attention-based methods?

Sure! We visualized the attention-based representation (att1&2.png, based on the attention scores of PatchTST's 2 layers) and the correlation of CMoS (cmos.png) on the Weather dataset in the anonymous repo https://anonymous.4open.science/r/kjUH. From the pictures, we find that there is a significant difference between the two. The attention-based correlation represents the attention relationships **among historical data points, but it does not reveal how these relationships contribute to the prediction of future time series**. As a result, it is difficult to derive an intuitive explanation of the forecasting process from the attention-based correlation. In contrast, our correlation **directly captures the mapping from historical data to the future time series**. This allows us to **clearly see which historical chunks contribute more to the prediction of future chunks**, making the model's decision process more interpretable.

> Q4: Can you further include time cost and flops?

That's a great suggestion! We provide the inference FLOPs, GPU memory footprint, and inference time of CMoS and other baselines on the Electricity dataset using a 3090 GPU as follows. The batch size of all methods is set to 64.

| | FLOPs | Memory | Infer. Time |
|-|-|-|-|
| DLinear | 5.31G | 245MB | 1.81s |
| CycleNet | 5.68G | 267MB | 1.83s |
| SparseTSF | 1.02G | 262MB | 1.49s |
| FITS | 5.33G | 691MB | 4.71s |
| iTransformer | 249.51G | 2271MB | 1.92s |
| PatchTST | 1196.08G | 22014MB | 2.90s |
| TimeMixer | 10.58G | 18642MB | 2.85s |
| CMoS | 2.96G | 252MB | 1.58s |

Although the memory allocation strategy and powerful computational performance of the 3090 may narrow the gap in computational overhead among models, it can still be seen that **CMoS consistently maintains an advantage in computational overhead**, except against SparseTSF. It is also notable that CMoS greatly outperforms SparseTSF in prediction performance, especially on datasets with more channels. **This means CMoS achieves the best effectiveness-efficiency balance among all methods**.

> Q5: Some failure cases.

When the underlying data distribution shifts significantly over time (i.e., concept drift), such as sudden changes in market behavior or consumer patterns, the basic correlations may not include the new temporal structures, so these rapid distribution shifts can affect the prediction accuracy of CMoS. However, it's also important to note that this is a fundamental difficulty in time series forecasting that the entire field is actively working to address. A possible solution is to quickly update the model when facing concept drift, and owing to its efficiency advantage, CMoS can be updated more rapidly and at a higher frequency. **This rapid adaptation capability allows CMoS to mitigate the impact of concept drift more effectively than other methods**.

--- Rebuttal Comment 1.1: Comment: Thanks for your nice rebuttal, I do not have other questions; I keep my score as weak accept. The reasons why I cannot give higher ratings are: (1) The performance gap in Table 1 is minor; (2) Although the proposed method can greatly reduce the parameter number, the FLOPs and inference time remain comparable to other methods.
Taking all these factors into consideration, the motivation, i.e., reducing parameter cost, appears to be not that strong.

---

Reply to Comment 1.1.1:
Comment:
# ==Update==

Dear reviewer, **we are delighted to share our newest experiments on model efficiency with you!** To further investigate the model's real-world practicality, we conducted additional experiments **by disabling the GPU and using only a single CPU core**, in order to simulate the model's performance **on edge devices with limited computational resources**. The inference time under this limitation is listed below:

|Method|Infer. Time (One CPU)|
|-|-|
|DLinear|45.82s|
|CycleNet|51.17s|
|SparseTSF|16.84s|
|FITS|72.34s|
|iTransformer|393.09s|
|PatchTST|2676.03s|
|TimeMixer|1512.43s|
|CMoS|25.23s|

It can be observed that **the inference time of CMoS is at least 40% shorter than that of existing methods** except for SparseTSF (while CMoS has great performance advantages over SparseTSF). This indicates that CMoS has a significant computational advantage, making it suitable for deployment on a wider range of edge devices for high-quality time series forecasting.

**With regard to your other concern about the minor performance gap**, while CMoS may not demonstrate very large margins of improvement over the second-best method on individual datasets, **it uniquely maintains state-of-the-art performance consistently across multiple datasets**, a characteristic not shared by any baseline method. **Unlike other approaches that might excel on specific datasets but show inconsistent performance across different scenarios, CMoS exhibits robust and superior performance across a diverse range of datasets**. This consistent excellence across multiple benchmarks underscores the versatility and reliability of our approach, especially considering that CMoS is a super-lightweight method.

*We'd greatly appreciate it if our response addresses your remaining concerns!
And we are glad to engage in continued discussion with you!*

# ==Previous reply==

Dear reviewer, thank you for your timely response. Due to the super-lightweight design of our model, it is far from fully utilizing the computational resources of high-performance GPUs. As a result, our efficiency metrics may not show a very large advantage over existing methods. However, for edge devices with strict memory constraints, the number of parameters directly determines whether the algorithm can be deployed on such devices. This is of great practical significance for real-world applications. Inspired by your suggestion, we plan to develop an ONNX version of CMoS to facilitate efficient time series forecasting on edge devices and provide more possibilities for the broader application of CMoS and other further lightweight works. Once again, we sincerely thank you for taking the time to review our paper and providing these valuable suggestions! We will carefully revise our paper following your suggestions.
Summary: This paper presents CMoS, a super-lightweight time series forecasting model that utilizes chunk-wise spatial correlations to achieve parameter-efficient and interpretable predictions. The key innovation lies in directly modeling the spatial dependencies between fixed-size time chunks rather than point-oriented patterns, which, as theory and experiments suggest, enhances noise robustness. CMoS introduces a correlation mixing strategy that combines a small group of shared basis correlation matrices (e.g., long-term, short-term, periodic) with channel-specific adaptive weights to achieve diversified temporal structure modeling while maintaining minimal parameters. In addition, periodicity injection through weight initialization accelerates convergence on periodic data. The experiments demonstrate state-of-the-art performance across seven benchmarks, with interpretable correlation matrices revealing different temporal dependencies (e.g., daily periodicity, residual trends). This work highlights the inherent simplicity of temporal structures and provides a framework for resource-efficient forecasting.

## update after rebuttal

I thank the authors for their thorough response to reviewers' feedback and the improvements made to the paper. After examining the rebuttal and considering the other reviews, I've updated my overall recommendation to "3 - Weak accept" as the authors have sufficiently addressed the primary questions and concerns I raised.

Claims And Evidence: Yes, I believe the paper's claims are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed method is reasonable for time series forecasting. With the support of theoretical analysis and ablation studies, chunk-wise spatial correlation modeling addresses both noise robustness and parameter efficiency. The correlation mixing strategy effectively balances model capacity and lightweight design through shared basis matrices, while allowing for channel-specific adaptations.
Periodicity injection provides a practical inductive bias for cyclical patterns without overcomplicating the architecture. Standard metrics (MSE/MAE) are used on seven established benchmarks to ensure fair comparisons.

Theoretical Claims: The paper's theoretical claim (Theorem 3.2) asserts that chunk-wise spatial correlations reduce noise sensitivity compared to point-wise modeling. The proof in Appendix F correctly applies the Cauchy-Schwarz inequality, showing that averaging the point-wise weights into chunk-wise weights reduces the L2 norm of the parameters and thus the noise variance. However, the theorem assumes linearity and Gaussian noise, which may not be exactly consistent with real-world time series dynamics (e.g., non-Gaussian noise, nonlinear dependence). While the proof is mathematically sound under these assumptions, its practical significance depends on how closely the linear chunk-wise model approximates the real data.

Experimental Designs Or Analyses: Experiments are valid, using standard benchmarks and metrics. Ablation studies support key designs (chunk-wise modeling, correlation mixing), and parameter efficiency is well quantified.

Supplementary Material: No supplementary materials.

Relation To Broader Scientific Literature: CMoS builds on recent advances in lightweight time series models (e.g., DLinear, FITS) but uniquely addresses their limitations in handling diverse temporal structures. While channel-independent strategies (PatchTST, SparseTSF) reduce parameters but restrict model capacity, and channel-mixing methods (iTransformer) incur high complexity, CMoS bridges this gap via correlation mixing, sharing basis matrices while adapting to channel-specific patterns, akin to a "mixture of experts" but tailored for spatial correlations. Its chunk-wise modeling aligns with patching in PatchTST but prioritizes robustness over semantic embeddings.
Periodicity injection extends CycleNet’s explicit cyclical modeling but integrates it into interpretable correlation weights. The work advances the paradigm of "simple yet effective" models, demonstrating that lightweight designs can achieve both efficiency and expressiveness by rethinking temporal dependencies. Essential References Not Discussed: I believe there is no essential reference missing from the discussion. Other Strengths And Weaknesses: Strengths: 1. The paper is well-structured, with clear technical exposition (e.g., chunk-wise formulation, correlation mixing pseudocode) and intuitive visualizations (Fig. 7-8) that enhance interpretability. 2. The paper compellingly critiques the limitations of channel-independent strategies for lightweight models and introduces correlation mixing as a novel middle ground between parameter efficiency and multi-pattern modeling. 3. The extreme parameter efficiency (1% of DLinear) and interpretable correlation matrices offer direct value for edge deployment and domain analysis (e.g., energy systems). Weaknesses: 1. Theorem 3.2 assumes linear and Gaussian noise, ignoring nonlinear dependencies and real-world noise types (e.g., burst noise), limiting its practical relevance. 2. Despite the emphasis on lightweight design, key metrics such as inference speed, and memory footprint are ignored, leaving the utility of the deployment unproven. 3. Chunk size selection depends on prior periodic knowledge (e.g., divisor of 24/168), requiring manual tuning of the new data set, and reducing the availability of aperiodic or irregularly sampled series. Other Comments Or Suggestions: No other comments. Questions For Authors: 1. How could Theorem 3.2 be extended to nonlinear dependent or non-Gaussian noise (e.g., burst noise), and what empirical evidence supports its robustness under such conditions? 2. Despite the emphasis on lightweight design, why are inference speed and memory footprint excluded from the evaluation? 
Can you provide some benchmarks to verify the usefulness of your deployment? 3. What strategies can automatically select the chunk size of the dataset without explicit periodic or irregular sampling, thereby reducing the reliance on manual tuning? Code Of Conduct: Affirmed. Overall Recommendation: 3
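The variance-reduction argument behind Theorem 3.2, as summarized in this review (averaging point-wise weights into chunk-wise weights shrinks the L2 norm and hence the output noise variance), can be checked numerically under the theorem's own linear/Gaussian assumptions. A minimal NumPy sketch; all sizes are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
L, C = 96, 8                     # lookback length and chunk size (hypothetical)
sigma = 1.0                      # i.i.d. Gaussian noise std

theta = rng.normal(size=L)       # point-wise weights
# Chunk-wise modelling: each chunk shares the mean of its point-wise weights.
theta_chunk = np.repeat(theta.reshape(-1, C).mean(axis=1), C)

# For i.i.d. noise eps, Var(theta @ eps) = sigma^2 * ||theta||^2, and
# Cauchy-Schwarz (within each chunk) gives ||theta_chunk|| <= ||theta||.
var_point = sigma**2 * np.sum(theta**2)
var_chunk = sigma**2 * np.sum(theta_chunk**2)
print(var_chunk <= var_point)  # True
```

Equality would hold only if the point-wise weights were already constant within every chunk, which matches the intuition that chunk averaging can only smooth (never amplify) noise sensitivity in this linear setting.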
Rebuttal 1: Rebuttal: **Thank you for your detailed and thoughtful review! We will address your concerns point by point.**

> Q1: How could Theorem 3.2 be extended to nonlinear dependent or non-Gaussian noise (e.g., burst noise), and what empirical evidence supports its robustness under such conditions?

- **Extended to non-Gaussian noise like burst noise** Take burst noise as an example: it can be viewed as extreme deviations that occur in the tail of a noise distribution. So we can define burst noise $B(t)$ mathematically as follows, based on extreme value theory. Let $X(t)$ be a noise process. According to the Pickands–Balkema–de Haan theorem, the conditional distribution of exceedances over a sufficiently high threshold $u$ follows a Generalized Pareto Distribution (GPD): $GPD(y;\sigma,\xi) = P(Y(t)\le y \mid X(t)>u) \approx 1-\left(1+\frac{\xi y}{\sigma}\right)^{-1/\xi}$, where $y = x-u > 0$, $\sigma>0$ is the scale parameter, and $\xi > 0$ is the shape parameter. Next, since burst noise can be regarded as discrete events, we let the occurrence times $\{T_i\}$ follow a Poisson process with intensity $\lambda(u)$. So we can finally define burst noise $B(t)$ as $B(t) = y+u,\ \text{if}\ t\in\{T_i\},\ \text{otherwise}\ 0$, where $y$ is i.i.d. $GPD(\sigma, \xi)$. However, when we attempt to replace $\delta$ with $B$ in Definition 3.1, we find it difficult to theoretically analyze $Var(\theta^TB)$ since **$B$ is sparse and thus $Var(\theta^TB)$ relies heavily on the specific samples of the GPD**. As an alternative, we choose to **experimentally compare the performance** of chunk/point-level modeling on time series with random burst noise. Specifically, we construct several time series using sine functions, and following the above formulation, we additionally inject burst noise into all time series. Some segments of these time series are visualized in *burst.png* in the anonymous repo https://anonymous.4open.science/r/kjUH.
The prediction results are listed in the following table, and **we can conclude that chunk-level modeling is more robust than point-level modeling when facing burst noise, which can be seen as an empirical extension of Theorem 3.2 to other non-Gaussian noise like burst noise**.

||MSE|MAE|
|-|-|-|
|Chunk-level|0.0246|0.1166|
|Point-level|0.0249|0.1176|

- **Extended to nonlinear dependency** With regard to **modeling nonlinear dependency**, the presence of nonlinear interactions makes it difficult to derive closed-form solutions or establish rigorous mathematical proofs. Specifically, in Definition 3.1, if $f(\theta)$ is a nonlinear function, it is hard to obtain a parametric mathematical expression for $Var(f(x';\theta)-f(x;\theta))$, which hinders the following theoretical derivations. So we design a 3-layer MLP with **nonlinear activation** to validate the effectiveness of chunk-level modeling. From the results, **we find that chunk-level modeling outperforms point-level modeling in most cases, indicating its robustness on multiple datasets**.

||Ele.|Tra.|Wea.|ETTh1|ETTh2|ETTm1|ETTm2|
|-|-|-|-|-|-|-|-|
|Chunk NonLinear|0.166|0.414|0.237|0.410|0.362|0.357|0.263|
|Point NonLinear|0.170|0.431|0.245|0.413|0.381|0.359|0.260|

> Q2: Inference speed and memory footprint

That's a very good suggestion! We provide the inference FLOPs, GPU memory footprint, and inference time of CMoS and other baselines on the Electricity dataset using a 3090 GPU as follows. The batch size of all methods is set to 64.

||FLOPs|Memory|Infer. Time|
|-|-|-|-|
|DLinear|5.31G|245MB|1.81s|
|CycleNet|5.68G|267MB|1.83s|
|SparseTSF|1.02G|262MB|1.49s|
|FITS|5.33G|691MB|4.71s|
|iTransformer|249.51G|2271MB|1.92s|
|PatchTST|1196.08G|22014MB|2.90s|
|TimeMixer|10.58G|18642MB|2.85s|
|CMoS|2.96G|252MB|1.58s|

Although the memory allocation strategy and powerful computational performance of the 3090 may narrow the gap in computational overhead among models, it can still be seen that **CMoS consistently maintains an advantage in computational overhead**, except against SparseTSF. It is also notable that the prediction performance of CMoS greatly outperforms SparseTSF, especially on those datasets with more channels. **This means CMoS can achieve the best effectiveness-efficiency balance among all methods**.

> Q3: Automatically select the chunk size of the dataset.

That's quite a valuable question. When there is not enough information about the period, using an ad hoc chunk size might not be a good choice. The good news is that **since CMoS is a super-lightweight model, it is entirely affordable to perform a hyperparameter search within a certain range in most cases. Therefore, in practice, we can take advantage of advanced hyperparameter optimization algorithms (such as Bayesian optimization) or frameworks (such as Optuna) to automatically find the optimal chunk size for best performance** on these aperiodic or irregularly sampled series.

*Thank you again for your valuable review, and we hope our response can address your concerns.*
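The burst-noise construction described in this rebuttal (Poisson-distributed occurrence times, magnitudes $u + y$ with $y \sim GPD(\sigma,\xi)$) can be simulated directly. A minimal sketch using inverse-CDF sampling of the GPD; all parameter values are chosen for illustration and are not the authors' experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
t = np.arange(T)
x = np.sin(2 * np.pi * t / 24)            # clean periodic series

# Burst occurrences: per-step Bernoulli thinning of a Poisson process.
lam, u, sigma, xi = 0.02, 2.0, 1.0, 0.3   # hypothetical parameters
hits = rng.random(T) < lam

# GPD exceedances via inverse CDF: y = sigma/xi * ((1-U)^(-xi) - 1).
U = rng.random(hits.sum())
y = sigma / xi * ((1 - U) ** (-xi) - 1)

burst = np.zeros(T)
burst[hits] = u + y                       # B(t) = u + y at burst times, else 0
noisy = x + burst
print(f"{hits.sum()} bursts injected, max magnitude {burst.max():.2f}")
```

The inverse-CDF step follows from solving $F(y) = 1-(1+\xi y/\sigma)^{-1/\xi}$ for $y$; plotting `noisy` against `x` reproduces the kind of sparse extreme spikes shown in the rebuttal's *burst.png*.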
Summary: There's a recent line of work on making small architectures that match the performance of large deep learning models for TS forecasting, which raises the question of the relevance of DL for time series forecasting. The authors propose CMoS, a novel architecture for time series forecasting that is very lightweight. The main contributions of the architecture include two elements: (1) chunk-wise spatial correlation modelling, which models the prediction of each chunk as a linear combination of previous chunks, and (2) correlation mixing, which uses cross-channel aggregation to get channel-specific spatial correlation mixtures with a low parameter count. The authors also introduce periodicity injection: the structure of the correlation matrix allows them to introduce pre-defined periodic peaks into its initialization. The authors use standard MSE loss and RevIN. Experiments are run on typical TSLib datasets against a few existing baselines. The authors ablate the different components of their proposed architecture, and offer analyses of their architecture's efficiency and interpretability.

Claims And Evidence:

- Novel architecture CMoS
- Chunk-wise spatial correlation modelling
- Correlation mixing strategy

Using convnets for forecasting is not a new idea. However, the related work lacks any reference to these works. As such, it is difficult to determine how novel the introduced architecture actually is, and what components of it are novel. As the authors state, patching and chunking are quite similar, but the discussion on this topic is quite brief. As it stands, it is difficult to evaluate the novelty of the claims without a proper literature review on comparable architectures.
Part of the purpose of the Related Work section is to help differentiate works that are similar in nature, and so it should discuss the use of patching more extensively, any differences it has with chunking (other than purpose), and the use of convolutions for forecasting. For example, see https://arxiv.org/abs/1906.04397, section 2, paragraph 2, which discusses many other examples of architectures that leverage similar convolutional biases.

- Novel weight init strategy for periodicity injection

This is an interesting strategy for smart initialization. One obvious limitation of this method is that not only must you know what the periodicity looks like, but that periodicity must also be fixed (it cannot vary over time). Furthermore, it is unclear what happens if you inject a periodicity that is actually incorrect into a model, if e.g. you make an incorrect assumption around the periodicity. I would like to see experiments around this. Nevertheless, I find this idea quite interesting, albeit hard to generalize to other architectures.

- CMoS is SOTA

The baselines that the authors compare to are those available in the TSLib library. Therefore, if their results are better than those of the best model in that library, they are SOTA. Looking at the best model on there currently, CMoS is SOTA on those datasets. However, TSLib seems to be missing many models, including Time-LLM, which performs better (see Table 1 of https://arxiv.org/abs/2310.01728). It would also be useful to add some non-DL baselines to compare, e.g. naive, naive with drift, seasonal naive. Also, if you're using TFB, why not include Crossformer in your table? PatchTST is often second-best, and Crossformer is competitive with PatchTST in the TFB paper.

- Interpretable learned spatial correlation matrices

The authors visualize the spatial correlation matrices and interpret them for the weather dataset, which is informative.

- Difference between chunk and patch: really underexplored. Describe in an appendix or something?
Methods And Evaluation Criteria: See above re: CMoS being SOTA.

Theoretical Claims: Theorem 3.2 has a proof in Appendix F, which I did not check.

Experimental Designs Or Analyses: The datasets are widely used and typical within the field.

- It's unclear how you chose to specify the grids for the chunk size and spatial correlation search. It's also unclear what is meant by "lookback window search".
- The authors conducted ablations on the components of the method, showing they are all important for the method's success.
- Interpretability analysis through visualizations. Interesting discussion of the mappings. A way to quantify this approach would be better, especially if you could tie these mappings to the actual time series. Figure 8 is a start, and it would be interesting to expand this analysis to other datasets, or to compare datasets where some transfer might be expected and see if the mappings have similarities across datasets, e.g. between the ETT datasets.

Supplementary Material:

- Appendix B is great, thank you for writing this

Relation To Broader Scientific Literature: This paper relates to a line of work around deep learning for time series forecasting. Autoformer introduced the Time Series Library https://github.com/thuml/Time-Series-Library that has been used regularly by the community to benchmark TSF models on a set of common datasets. Among these models, some are extremely parameter-efficient and question whether transformers are useful/necessary for time series forecasting, such as DLinear https://arxiv.org/abs/2205.13504. This work continues that line of work, using the same TSLib to benchmark their model and comparing against baselines in that repo.

Essential References Not Discussed:

- The review needs to go back further than 2023, especially considering that this is a more traditional paper (see "Claims And Evidence", the first claim, the paper on temporal convolutions, the related work section, paragraph 2, as a place to start).
Other Strengths And Weaknesses:

Originality: This work further elucidates the importance of architectural choices for efficient TS forecasting.

Significance: The results of this work are significant, in that they show solid performance with fewer parameters than typical methods.

Clarity: The paper structure is standard and clear. It's well-written overall, with a few typos.

Other Comments Or Suggestions: N/A

Questions For Authors:

- Did you ablate for overlapping chunks?
- How about multiple layers of convolution?
- Why are the K correlation matrices shared by all channels?
- How do you select the degree of smoothing to apply during the two-stage weight allocation?
- Are the train/test splits also across time?
- Can the periodicity injection be applied to other architectures, for example, those with nonlinearities?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
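The chunk-wise spatial correlation formulation summarized in this review (each future chunk predicted as a linear combination of historical chunks) can be sketched in a few lines. The sizes below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
L, H, C = 96, 24, 8                  # lookback, horizon, chunk size (hypothetical)
n_in, n_out = L // C, H // C         # 12 historical chunks, 3 future chunks

x = rng.normal(size=L)
x_chunks = x.reshape(n_in, C)        # split history into fixed-size chunks

# Chunk-wise spatial correlation matrix W: row i gives the weights with
# which future chunk i combines every historical chunk.
W = rng.normal(size=(n_out, n_in)) / n_in
y_chunks = W @ x_chunks              # (n_out, C): each future chunk is a
y = y_chunks.reshape(H)              # linear combination of history chunks
print(y.shape)  # (24,)
```

Note the parameter count here is `n_out * n_in` per correlation matrix rather than `H * L` for a point-wise linear map, which is the efficiency argument the review's summary alludes to.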
Rebuttal 1: Rebuttal: **Thank you for your detailed and thoughtful review! We will address your concerns point by point.**

> Q1: Novelty of some components

Your example is quite helpful! So we explain the two aspects you mentioned following the format you provided:

- The use of ConvNets. Existing methods like DeepTCN or TimesNet employ one ConvNet backbone for all channels. Since different channels may exhibit different noise levels, a single backbone may struggle to resist interference from various levels of noise. In contrast, **CMoS allocates a specific lightweight ConvNet to each channel to eliminate the channel-specific noise variations**, thus enhancing the model's robustness. **It is also notable that compared to this specific component, the whole Correlation-Mixing framework (including the ConvNet) is a more important innovation.** As shown in Appendix B, **compared with existing channel strategies, our Correlation-Mixing can effectively model diverse temporal and even cross-channel dependencies with great efficiency**. The ConvNet plays an important role in reducing the effect of noise in this framework.
- Chunk vs. patching. Technically, patching **only splits the historical series into segments**, and patch-based models like PatchTST focus on **generating aggregated representations of the correlations between these historical segments** (similar to the high-level semantic information in LLMs) and then decoding the representation to future time points. The black-box nature of such representations makes it hard to figure out how specific segments influence the final prediction, limiting the interpretability of these methods. In contrast, chunking **splits both historical and future series**, and instead of learning high-level representations, chunk-based CMoS focuses on **directly modeling the spatial correlation between historical and future segments**, which is quite interpretable. Also, we proved that chunks bring benefits in robustness and efficiency.
*To better understand their differences in interpretability*, we visualized the patch-based representation (att1&2.png, based on the attention scores of PatchTST's 2 layers) and chunk-based correlation (cmos.png) on the weather dataset in the anonymous repo https://anonymous.4open.science/r/kjUH. We can easily figure out how historical segments contribute to future segments for each time series in cmos.png, while it is hard to obtain similar or other interpretable information from att1&2.png. **We will provide more details and recent works in the next version according to your valuable suggestions!** Also, reviews of works before 2023 will be added.

> Q2: Inject incorrect periodicity

That's an interesting question. To simulate this case, we randomly select an integer $x \ne \text{period}$ as the interval in the injection phase. Comparing the results below (MSE) with Table 3 in the paper, it can be seen that **using an incorrect period performs barely better than random initialization**. So we suggest injecting the period only when most time series in the system have passed the ACF test, a scenario that commonly occurs in real-world applications.

||Ele.|Tra.|Wea.|ETTh1|ETTh2|ETTm1|ETTm2|
|-|-|-|-|-|-|-|-|
|wrong period|0.130|0.372|0.151|0.371|0.297|0.294|0.174|

> Q3: More Baselines

We follow your suggestion to include more baselines. The prediction results (MSE) of naive, seasonal naive, Time-LLM, and Crossformer are listed below. *Since Time-LLM leverages a very large LLM as part of the model, the training time is quite long (over 2 days for a single setting). Therefore, we currently have only obtained partial results. The full results are on their way.* From the results, CMoS outperforms these methods, indicating that CMoS is a quite effective model.
||Ele.|Tra.|Wea.|ETTh1|ETTh2|ETTm1|ETTm2|
|-|-|-|-|-|-|-|-|
|Naive|1.611|2.770|0.353|1.319|0.533|1.271|0.385|
|Naive Season|0.230|0.630|0.371|0.598|0.477|0.489|0.358|
|Crossformer|0.181|0.523|0.235|0.452|0.861|0.465|0.589|
|Time-LLM|-|-|-|0.414|0.340|0.360|0.262|
|CMoS|0.158|0.396|0.220|0.403|0.331|0.354|0.259|

> Q4: Other concerns

Very sorry that the response here cannot cover every point you have mentioned due to the text length limitation, and **we can further discuss any uncovered points during the discussion phase**. Here are brief replies for some points:

- Parameter concerns: We determined the grid sets through extensive experiments.
- More visualizations and ablations: These are valuable suggestions. We will perform more experiments and analysis.
- K matrices are shared: As mentioned in *Sec. Introduction*, the K matrices are designed to learn some basic temporal structures in the system, and each channel finds a specific way to combine these basic correlations. This design brings both efficiency and robustness benefits.
- Nonlinear injection: If certain modules strongly correlate with periodicity, we can generalize Periodicity Injection to these modules. We believe this would be an interesting direction to explore.
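The ACF check mentioned in the periodicity discussion above can be approximated with a simple autocorrelation scan over candidate lags. A minimal sketch; the `dominant_period` helper is hypothetical and not part of CMoS:

```python
import numpy as np

def dominant_period(x, max_lag=200):
    """Return the lag (1..max_lag) with the highest autocorrelation,
    plus its normalized ACF value, as a candidate period."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    acf = acf / acf[0]                                  # normalize by lag 0
    lag = 1 + int(np.argmax(acf[1:max_lag + 1]))
    return lag, acf[lag]

# Daily-period toy series with mild Gaussian noise (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)
period, strength = dominant_period(x)
print(period, round(strength, 3))
```

A threshold on `strength` would then play the role of the ACF test: inject the detected period only when the peak autocorrelation is high enough to trust.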
Where is the Truth? The Risk of Getting Confounded in a Continual World
Accept (spotlight poster)
Summary: The paper explores a nuanced aspect of continual learning related to confounding data. It highlights how confounding data can create shortcuts by fostering spurious correlations, ultimately hindering the generalization ability of continual learning methods. The authors demonstrate the effects of confounding data on sequential continual learning using a confounding dataset generated from CLEVR. Their findings show that conventional continual learning methods struggle in this setting, underscoring the need for more robust approaches.

Claims And Evidence: The authors claim to study continual confounding using the ConCon dataset, and they provide evidence supporting the claim, but the evidence does not seem strong since it lacks evaluation on other real-world datasets.

Methods And Evaluation Criteria: Yes

Theoretical Claims: Yes

Experimental Designs Or Analyses: Yes

Supplementary Material: No supplementary material found

Relation To Broader Scientific Literature: The majority of the continual learning literature focuses on the issue of catastrophic forgetting, while this paper focuses on continual confounding, which adds value to this field.

Essential References Not Discussed: References are adequate.

Other Strengths And Weaknesses:

## Strengths

- The paper is well-written and easy to follow.
- It offers a novel perspective on continual learning by focusing on the impact of confounding data, rather than the commonly studied issue of catastrophic forgetting. This adds value to the field, where most literature centers around catastrophic forgetting.

## Weaknesses

- The ConCon dataset, used to study the impact of confounding data, appears somewhat artificial and monotonous.
- While the paper effectively identifies a critical challenge in continual learning, it does not propose any solutions to address it.
Other Comments Or Suggestions: None found

Questions For Authors: While the paper presents a novel perspective on continual learning, it lacks strong evidence of continual confounding due to the absence of evaluations on real-world datasets. Additionally, it does not propose any solutions, leaving the community without a clear direction for addressing and mitigating the identified issue.

- Is it possible to evaluate on a dataset similar to the one shown in Figure 1 of the paper?
- Why have the authors not investigated a solution to the problem described in the paper?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. In the following, we would like to address their two concerns. **Synthetic vs Real-World Datasets** In our paper, we introduce a real-world experiment on the popular ImageNet dataset and observe a surprising discrepancy between joint and cumulative training. By introducing the ConCon dataset, we are able to systematically investigate variants of continual confounding and eliminate other (potentially unknown) factors present in real-world images. This enables us to eliminate alternative causes of unexpected model behavior and allows for analyzing the effects of continual confounding in isolation. Our real-world experiment introduced in section 1 shows the same behavior as we observed in our experiments on ConCon: cumulative training results in a lower accuracy than joint training. In order to further highlight that this behavior can also occur on natural images, we also include another real-world experiment involving a higher number of ImageNet classes. Here, we even observe a 5% difference between cumulative and joint training. Please consider the experimental details and results below ("New Real-World Experiment on ImageNet"). We have included this experiment in the appendix. **New Real-World Experiment on Imagenet** In this experiment, we select several ImageNet classes such that in each of the three tasks, we have a group of 4 ImageNet classes representing 4-legged animals and 4 ImageNet classes not representing 4-legged animals. Each group of 4 classes is assigned one label, resulting in 6 labels for the model to learn. See the table below for the class selection. The resulting groups are correlated with colors (Task 1: white vs blue, Task 2: green vs white, Task 3: blue vs non-blue). We hypothesize that for each task in isolation, learning to make predictions based on the confounding feature "color" is easier than learning the ground-truth features that correspond to "animal with 4 legs". 
| Task 1 | Task 2 | Task 3 |
| - | - | - |
| arctic fox, polar bear, white wolf, samoyed dog | cheetah, deer, lion, leopard | hippo, crocodile, water buffalo, beaver |
| jellyfish, blue jay, blue shark, tench | snowmobile, sailboat, snowplough, rattlesnake | eagle, spider, lifeboat, mushroom |

As for our other experiments, we ran cumulative and joint training on a ResNet-18 model and averaged the results of 5 different seeds. The results for joint training and for cumulative training after covering all three tasks are shown in the table below:

| | Task 1 | Task 2 | Task 3 |
| - | - | - | - |
| Cumulative | 76.3% | 78.5% | 84.4% |
| Joint | 83.45% | 86.25% | 86.25% |

Across all tasks, we observe an average decrease in accuracy of 5.58 percentage points, a substantial degradation in performance. This provides further evidence that continual confounding can manifest in real-world images and that it does deteriorate model performance. This experiment has been added to the appendix.
This by itself is an important contribution, as it questions the use of cumulative training as the gold-standard upper-bound for evaluating continual learning methods, which is a common approach in continual learning research. We fully agree that developing methodological solutions for overcoming our identified issue of continual confounding is very important. However, we consider this to be out of the scope of this dataset paper, which aims to identify and analyze the issue. We believe that ConCon will be of great use in evaluating such novel approaches. However, doing so requires further research and more in-depth analysis than could be provided by this paper.
Summary: This paper introduces the concept of continual confounders. A dataset contains confounders when a model trained on the data can fit the training data using spurious correlations but fails to generalize at test time. Continual confounders are ones that control distributions across a continual set of tasks. The paper also introduces a dataset, ConCon, a simulated dataset built on the framework of CLEVR using Blender software, consisting of images of various objects with varying textures and colors in a 3D space.

Section 3 introduces the concept of continual confounding. Continual confounders are divided into two categories, disjoint and strict. Disjoint confounders are only observed in their respective tasks. A model which fits continually disjoint confounders need not unlearn the confounders of previous tasks to learn a new task. Strict confounders, on the other hand, may appear in other tasks. Section 3.1 provides a rigorous definition of confounders. Section 3.2 introduces the ConCon dataset that contains disjoint and strict variants. The dataset contains various objects such as spheres, cubes, and cylinders in different sizes, colors, and textures. The task is binary classification to determine whether a sphere and small cube exist in the image. The confounders are blue, metal, and large. It is argued that strict confounders are insidious continual confounders: if a model is trained jointly on the dataset, it can generalize well by not utilizing confounders, but learned continually, it will fail to generalize. As such, avoiding catastrophic forgetting is not enough for continual learners in such a setup.

The experiments evaluate the performance of continual learning methods for training two architectures, ResNet-18 and NeSy. The continual learning methods include replay methods and regularization methods. The following observations are made:

- All CL methods on both models fail on unconfounded held-out test sets on the disjoint variant of ConCon. Similarly, on the strict variant, all methods except for joint and cumulative training do not generalize.
- Preventing catastrophic forgetting on the disjoint dataset does not help with making correct predictions on unconfounded data.
- Training in a continual setup performs significantly worse than the joint setup.
- Continual learning methods suffer from insidious continual confounding.

## Update after rebuttal

I thank the authors for their response including clarifications and new results with PNNs. I recommend incorporating the response into a revision. I maintain a weak accept rating for this work as I believe this dataset may be valuable for the community in studying continual learning methods but understand other reviewers have remaining concerns about the dataset being synthetic.

Claims And Evidence: The paper makes the following claims and provides evidence in a synthetic setup:

- Continual learning methods suffer from insidious continual confounding, where the model can generalize if trained jointly on the data but does not generalize when trained continually.
- Preventing catastrophic forgetting does not help with making correct predictions on unconfounded data.
- The ConCon dataset is a benchmark for evaluating the impact of confounders in continual learning.

Methods And Evaluation Criteria: The paper does not propose a new method. It provides a new evaluation benchmark that controls the confounders over tasks and provides meaningful observations.

Theoretical Claims: The paper provides a rigorous definition of confounders but no theoretical claims.

Experimental Designs Or Analyses: The construction of the ConCon dataset with disjoint and strict variants is sound. The results on these datasets also match the intuition.

Supplementary Material: I skimmed through the appendix for more examples of images in the ConCon dataset (Figure 4).

Relation To Broader Scientific Literature: This work is related to two literatures, on continual learning and on spurious correlations. The connection between these two literatures has not been explored before and is interesting to study.

Essential References Not Discussed: The paper is missing references to works on spurious correlations and related benchmarks. For example:

- Koh, P. W., et al. Wilds: A benchmark of in-the-wild distribution shifts. ICML 2021.
- Sagawa, S., et al. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. ICLR 2020.

A discussion and comparison is needed, especially in terms of the types of spurious correlations identified in prior works.

Other Strengths And Weaknesses:

Strengths:
- The paper introduces an interesting concept of continually confounded datasets, rigorously defines them, and introduces an evaluation that provides novel insights.

Weaknesses:
- It is not clear whether the goal of the benchmark is to inspire model architecture design in the future, or better continual training methods, or to suggest that some continual tasks are not learnable by continual learners. The paper needs to clarify its goal.
- The evaluation focuses on only two models, ResNet-18 and NeSy, trained from scratch on the data. For example, what happens if a pretrained model is fine-tuned on the data? What are the implications for continual learning methods with adaptive architectures such as Progressive Neural Networks (Rusu, Andrei A., et al. "Progressive neural networks." (2016))?

Other Comments Or Suggestions: Typos:

- Line 41: Resnet -> ResNet
- Line 86: a -> an
- Line 120: introduction example -> introductory example
- Section 3: it may be helpful to add a summary table for comparison between disjoint and strict continual confounders and some of their characteristics. For example, which learner (with or without replay) in which setup (continual vs jointly) on which confounders (strict, disjoint) can fit/generalize.
- Line 254: comprising of -> comprises

Questions For Authors:

- Is insidious continual confounding limited to strict confounders? Or is insidious continual confounding a property that various confounder types can have, including strict confounders?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
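The ground-truth rule and task-wise confounders summarized in this review can be sketched as a toy labeling function. This is only an illustration of the setup (the attribute encoding and the choice of "blue" as the task-1 confounder are hypothetical simplifications), not ConCon's actual data generator:

```python
# Toy version of the ConCon labeling rule described above: an image is
# positive iff it contains a sphere and a small cube (the ground truth).
# A task-specific confounder (here, hypothetically: "some object is blue")
# is perfectly correlated with the label only within its confounded task.
def ground_truth(scene):
    has_sphere = any(obj["shape"] == "sphere" for obj in scene)
    has_small_cube = any(obj["shape"] == "cube" and obj["size"] == "small"
                         for obj in scene)
    return has_sphere and has_small_cube

def confounder_task1(scene):  # hypothetical task-1 shortcut feature
    return any(obj["color"] == "blue" for obj in scene)

# On a confounded task-1 image the two rules agree ...
confounded = [{"shape": "sphere", "size": "large", "color": "blue"},
              {"shape": "cube", "size": "small", "color": "red"}]
# ... but on unconfounded data they can disagree, which is exactly what
# the unconfounded held-out test set probes.
unconfounded = [{"shape": "sphere", "size": "large", "color": "gray"},
                {"shape": "cube", "size": "small", "color": "gray"}]
```

A model that latched onto `confounder_task1` would score perfectly within its task yet sit near chance on the unconfounded split, mirroring the near-chance UnConf accuracies discussed in this review.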
Rebuttal 1:

Rebuttal: We thank the reviewer for their feedback and positive evaluation of our work. We would like to comment on a few points in their review.

**Related Work on Distribution Shifts**

Thank you for suggesting further related work. We have included the references and discussed their relation to ConCon and the goal of our paper.

**Goal of This Paper and ConCon**

The goal of our paper is to "improve the evaluation of continual learning methods when confounding might be present and also inspire new approaches to deal with these issues. The ConCon benchmark is designed to facilitate systematic investigation of such methods in the future." We included these sentences in our paper.

The disjoint dataset represents a simple confounded setting where confounding features only appear on certain images. A good example of such confounding is watermarks. We show that a challenge for CL methods on disjoint datasets is not only to avoid forgetting but also to learn the ground-truth features and not just the disjunction of confounders. The confounding in our strict dataset is less obvious, as confounding features may appear as regular, random features in other tasks. For example, snowy backgrounds might at first only be associated with polar bears, but in later classes, they also appear on other images in winter. Perhaps surprisingly, we show that this type of insidious continual confounding can deteriorate model performance compared to non-continual learning settings. Our goal is, thus, to highlight these challenges and dangers related to confounding in continual settings. Our results on insidious continual confounding also question the use of cumulative training as the gold-standard upper-bound in continual learning. Our response to reviewer iRoy about our contributions might also be of interest.

**Effects of Pre-Training and Implications on Adaptive Architectures**

Our NeSy model is pre-trained on the CLEVR dataset.
However, we see that it does not perform well on the unconfounded datasets, where the accuracies are worse than for the ResNet-18 model (70.8 vs 95.7). Pretraining, in general, can be both helpful and harmful when it comes to continual confounding, depending on what the model is trained on. If the pre-trained model has learned the ground-truth prediction, it is unlikely to unlearn that even on confounded data. However, our comparison of joint and cumulative training shows how a model pre-trained (tasks 1 and 2) to focus on confounders performs worse than a model trained from scratch (joint training).

Following the reviewer's suggestion, we ran the Progressive Neural Networks (PNNs) method on the two variants of our ConCon dataset for the NN model and added the results to Tables 1 and 2:

| | Task 1 | Task 2 | Task 3 | Task 1 @ Task 3 | Task 2 @ Task 3 | UnConf |
| -------- | ------ | ------ | ------ | --------------- | --------------- | ------ |
| Strict | 100.0 | 99.77 | 99.76 | 100.0 | 99.7 | 49.2 |
| Disjoint | 100.0 | 99.84 | 100 | 100.0 | 99.6 | 47.89 |

We expand the ResNet-18 model when it encounters new tasks. Data from confounded tasks always use their respective output heads, and the unconfounded data uses the most recent head (task 3). Generally speaking, we do not expect adaptive architectures to help against continual confounding by themselves. Since the model first learns to focus on confounding features, thereby learning simple solutions and setting model weights accordingly, expanding the architecture does not automatically help learn the correct, more complex ground-truth rule. We hypothesize that such a model must be able to unlearn the previously learned behavior in order to ignore the confounders and learn the ground-truth feature instead. Therefore, we argue that adaptive architectures are unlikely to help against continual confounding, although specialized approaches might be useful.
**Question on Insidious Continual Confounding**

> Is insidious continual confounding limited to strict confounders? Or is insidious continual confounding a property that various confounder types can have including strict confounders?

Confounders / confounding features themselves do not exhibit insidious continual confounding. Insidious continual confounding arises when a model is continually trained on confounded data, where confounding causes it to make predictions with lower accuracy than in a joint training scenario. (Note that confounding that does not change across tasks would not cause insidious continual confounding, as joint training would suffer from it in the same way; see our evaluation on the disjoint dataset.) Whether insidious continual confounding occurs thus depends on both the type of confounding and the model and training procedure.

**Typos**

We thank the reviewer for their feedback on typos. We will correct them for the camera-ready version.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their response including clarifications and new results with PNNs. I recommend incorporating the response into a revision. I maintain a weak accept rating for this work as I believe this dataset may be valuable for the community in studying continual learning methods but understand other reviewers have remaining concerns about the dataset being synthetic.
Summary: The paper presents the confounder problem in the continual learning regime, which is novel. It also establishes a benchmark with clear logical definitions and potentially highlights a new direction for studies in the continual learning field to improve overall performance.

Claims And Evidence: The claims made in the work are clear and logical.

Methods And Evaluation Criteria: The evaluation methods presented in this work are novel and valid for use in continual learning scenarios to investigate whether the model is truly learning in a sequential setting rather than relying on shortcuts.

Theoretical Claims: Not applicable.

Experimental Designs Or Analyses: The experimental design is scientific, and the related continual learning methods are evaluated on the new benchmarks.

Supplementary Material: Yes, the appendix.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: Here are some related continual learning works the authors may consider including:

[1] Liu, Y., Zhu, W., & Ren, S. (2022). Navigating memory construction by global pseudo-task simulation for continual learning. Advances in Neural Information Processing Systems, 35, 7342-7355.
[2] Farajtabar, M., Azizan, N., Mott, A., & Li, A. (2020, June). Orthogonal gradient descent for continual learning. In International Conference on Artificial Intelligence and Statistics (pp. 3762-3773). PMLR.

Other Strengths And Weaknesses: The work is well-written and complete. Continual confounding is clearly defined through logical rules and effectively explained with text and images. The constructed dataset benchmark, ConCon, also makes sense.

Other Comments Or Suggestions: More models could be tested on the dataset for evaluation, but it is not necessary. The ResNet and transformer-based NeSy models used in the experiments are sufficient to support the statements.

Questions For Authors: None so far.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for their positive feedback, and we will include the suggested references in our related work section. We are happy to read that they agree that our experiments are sufficient to support our claims. Nevertheless, we invite the reviewer to also take a look at our responses to the other reviewers, where we add additional experiments that support our claims even further.
Summary: This paper explores confounding in continual learning. The authors formally describe confounding factors that lead to poor generalization and introduce a CLEVR-based synthetic dataset (ConCon) to study these challenges. They evaluate several continual learning approaches on ConCon and show that these methods struggle to mitigate the influence of confounders, leading to degraded performance on unconfounded test data.

### Update after rebuttal

The authors' rebuttal addresses most of my concerns. Thus I have updated my score from 2 to 3.

Claims And Evidence:

1. Sequential vs. Joint Training: The paper claims that sequential training (including cumulative training) is more challenging in the presence of confounders compared to joint training. Figure 1 and Table 2 indicate that joint training outperforms cumulative training. However, this may be due to the inherent difficulty of training on streaming data: non-convex optimization can lead to overfitting on biased, limited data, causing poor local minima. Thus, the performance gap might primarily reflect the challenge of learning from changing distributions rather than the effect of confounders alone.

2. Generalization in Continual Learning: The authors argue that even methods that prevent forgetting in continual learning fail to generalize well due to confounders, as shown in Table 1. While this observation may be valid, the experimental design appears to mix the issue with a train-test distribution mismatch. In task t, the training data includes both ground-truth features (g) and task-specific confounders (c_t), but the test set only contains the ground-truth feature g. This mismatch might be driving the performance gap, which is more an issue of domain adaptation than a failure of continual learning.

Methods And Evaluation Criteria: The paper primarily introduces a new concept and dataset rather than novel methods.
It proposes a formal description of confounding and uses the ConCon dataset, with its controllable confounding factors, to evaluate various continual learning approaches. However, the evaluation might be skewed by the train-test mismatch, making it unclear if the poor generalization is due to continual learning shortcomings or simply the data distribution shift.

Theoretical Claims: The paper presents a formal description of continual confounding using Boolean algebra to capture the interplay between ground-truth predicates and task-specific confounders. Although the theoretical framework is plausible, the assumptions require further clarification. In particular, the distinction between ground-truth features and confounders is not well defined; defining confounders solely based on the test set is problematic.

Experimental Designs Or Analyses: The experimental evaluation is based on the ConCon dataset. Concerns include:

1. Task Design: The current task design may not adequately isolate the effect of confounders from the inherent difficulty of learning on non-stationary data.
2. Train-Test Mismatch: The performance gap might be largely due to the mismatch between training (with both g and c_t) and testing (with only g), which is more a domain generalization issue than a continual learning problem.
3. Limited Real-World Evaluation: Relying exclusively on a synthetic dataset raises questions about the generalizability of the findings to real-world scenarios.

Supplementary Material: The supplementary material provides additional experimental details, results, and implementation specifics.

Relation To Broader Scientific Literature: The paper situates its contributions within the continual learning literature. It would benefit from a deeper discussion of its relationship to the domain generalization and invariant risk minimization literature, both of which address issues of spurious correlations and distribution shifts.

Essential References Not Discussed: Including related work from the domain generalization literature, where similar challenges of spurious correlations are addressed, would help strengthen the context for the paper's contributions.

Other Strengths And Weaknesses:

Strengths:
1. Studies an interesting and underexplored problem in continual learning, which is related to dataset confounding.
2. Introduces a novel, controlled synthetic dataset (ConCon) to study confounding effects.

Weaknesses:
1. The paper needs a clearer distinction between ground-truth features and confounders. If a factor is deemed confounding, it should ideally be identifiable in both the training and test sets; otherwise, the evaluation might be misleading.
2. The design suffers from a train-test mismatch that may be driving the observed performance gaps. The performance differences in Figure 1 could stem from challenges associated with streaming data rather than solely from the presence of confounders. Consider redesigning the experimental tasks (e.g., grouping tasks by different background colors) to better isolate the impact of confounders. Specifically, Task 1 could involve classification between images with a blue background (e.g., jellyfish and shark), while Tasks 2 and 3 could focus on white and green backgrounds, respectively. Comparing the final performance between this revised design and the original design would help assess the influence of confounding factors.
3. The reliance on the proposed synthetic dataset limits the applicability of the findings. Evaluating on real-world or multi-class datasets would provide stronger evidence for the claims.
4. It would be helpful to incorporate more baselines, such as generative replay [1], which is also an important baseline in continual learning.

[1] Van de Ven, Gido M., Hava T. Siegelmann, and Andreas S. Tolias. "Brain-inspired replay for continual learning with artificial neural networks." Nature Communications 11.1 (2020): 4069.

Other Comments Or Suggestions:

1. The memory size used in experiments (e.g., 100 samples) may be too small to reflect practical scenarios; larger memory sizes should be considered, especially in Table 2.
2. If the goal is to evaluate model performance under different distributions, exploring domain adaptation techniques might be more appropriate than a pure continual learning approach.

Questions For Authors:

1. How do you ensure a clear and consistent distinction between ground-truth features and confounders in the ConCon dataset?
2. Could the performance gap between joint and sequential training be primarily attributed to the challenges of streaming data (non-convex optimization and local minima) rather than confounders?
3. How do you envision the proposed framework and findings translating to real-world continual learning scenarios, where data distributions might differ less drastically between training and testing?

For additional details on potential weaknesses and suggested modifications, please refer to the previous discussion.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the detailed and constructive feedback. We first try to clarify their more general concerns.

Features that act as confounders in one dataset can also appear as random features in other datasets. The distribution shift between the confounded tasks and the unconfounded dataset is a result of such features no longer **acting** as confounders, i.e., they might appear on both positive and negative images but are not informative w.r.t. the class label. Our synthetic data generation used for ConCon ensures minimal distribution shift between confounded and unconfounded datasets while still exhibiting confounding in one but not the other. Thus, the resulting minimal train-test mismatch is intentional and allows for investigating whether a high model accuracy is caused by the model making predictions based on the confounding features or the ground-truth features. We have included this explanation in Section 3.2.

In addition to experience replay (ER), where we use a buffer of 100 samples, we also evaluate ER with an infinite buffer size. This is what we refer to as *cumulative training*. Any other buffer size larger than 100 should, therefore, result in accuracies between those of ER and cumulative training. We made changes to clarify this in Section 4.1.

We will now respond to the weaknesses (W) and questions (Q).

**W1/Q1** In this paper, there is no a priori distinction between confounding features and ground-truth features. This is intentional, as we aim to investigate the models' capabilities in identifying the ground-truth features without prior information. As a result, confounding is defined relative to the full set of tasks. How features that act as confounders in one task appear in other tasks depends on the dataset variant: In the disjoint dataset, confounding features are only present in their respective task. They are identifiable in the sense that their absence in the other tasks makes them irrelevant and unhelpful for accurate prediction-making. In the strict dataset, confounding features appear in other tasks in both positive and negative images. They are, therefore, incompatible with the correct decision rule in new tasks. Any confounding features of the confounded tasks may appear in images of either class in the unconfounded dataset. However, they are uninformative and provide no value for prediction-making, as they do not act as confounders there.

**W2** We ran experiments on ConCon where we shuffled the training data before distributing it across tasks, thereby removing the alignment of confounders:

| | Task 1 | Task 2 | Task 3 | Task 1 @ Task 3 | Task 2 @ Task 3 | UnConf |
| -------- | ------ | ------ | ------ | --------------- | --------------- | ------ |
| Strict | 79.85 | 93.4 | 95.32 | 95.18 | 96.72 | 89.52 |
| Disjoint | 99.98 | 99.90 | 99.96 | 99.98 | 99.84 | 49.97 |

For strict, the accuracy on UnConf is much closer to joint (95.7) than to cumulative (72.6). This confirms that insidious continual confounding is responsible for a majority of the drop in accuracy. For the ImageNet experiment, we align the confounders task-wise: Task 1: arctic fox, snowmobile; Task 2: broccoli, tree frog; Task 3: tiger shark, jellyfish. We obtained 89.93% joint and 89.0% cumulative accuracy. Unfortunately, any differences here are obscured by variance due to data loading order, so we cannot draw substantial conclusions.

**Q2** The gap arises because of the combination of confounders and the challenges of streaming data. As we also show in our new experiment on ConCon above, the gap between cumulative and joint training is more substantial when tasks are confounded.

**W3** We ran additional experiments on real-world data. Please refer to our response to reviewer iRoy.

**Q3** In real-world scenarios, we expect confounding to be imperfect. This could express itself in two ways: 1. Only a subset of the data is confounded. 2. The entire dataset is imperfectly confounded, i.e., in the training set, the confounders provide some limited bits of information about the class, but this relationship does not generalize on the test set.

**W4** Here we show the results for generative replay:

| | Task 1 | Task 2 | Task 3 | Task 1 @ Task 3 | Task 2 @ Task 3 | UnConf |
| -------- | ------ | ------ | ------ | --------------- | --------------- | ------ |
| Strict | 99.96 | 98.54 | 99.58 | 48.84 | 48.61 | 49.26 |
| Disjoint | 100.0 | 88.78 | 99.28 | 53.2 | 59.33 | 47.73 |

The results match those reported for other approaches in our paper. We have included it in the appendix. Please let us know if the reviewer recommends any other methods to include.
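As a side note on the buffer-size discussion in this rebuttal (experience replay with a 100-sample buffer vs. cumulative training as the infinite-buffer limit), a generic experience-replay buffer can be sketched with reservoir sampling. This is only an illustration of the idea, not the authors' implementation:

```python
import random

class ReplayBuffer:
    """Generic reservoir-sampling buffer for experience replay (ER).

    The buffer holds a uniform subsample of the stream seen so far;
    an unbounded capacity degenerates to storing everything, i.e. the
    'cumulative training' baseline described above.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Keep the new sample with probability capacity/seen,
            # evicting a uniformly random stored sample.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

buffer = ReplayBuffer(capacity=100)
for sample in range(1000):  # stream of task data
    buffer.add(sample)
```

Rehearsal mini-batches would then be drawn from `buffer.data`; any capacity between 100 and the full stream length interpolates between the ER and cumulative settings, as argued above.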
General framework for online-to-nonconvex conversion: Schedule-free SGD is also effective for nonconvex optimization
Accept (oral)
Summary: This work investigates the effectiveness of schedule-free methods in nonconvex optimization. The authors first develop a general framework for online-to-nonconvex conversion, which converts a given online learning algorithm into a nonconvex optimization algorithm. This framework not only recovers existing conversions but also leads to two new conversion schemes. In particular, one of these new conversions corresponds directly to schedule-free SGD, therefore allowing us to establish its optimal iteration complexity for nonsmooth nonconvex optimization. The analysis results also provide valuable insights into the parameter choices for schedule-free SGD in practical applications.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: No.

Experimental Designs Or Analyses: No experiments.

Supplementary Material: No.

Relation To Broader Scientific Literature: May inspire new optimization algorithm design and analysis in nonconvex machine learning.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: This is a solid theory paper on extending the theoretical understanding of schedule-free methods to nonconvex optimization. The main contribution is the development of a general framework for online-to-nonconvex conversion, which converts a given online learning algorithm into a nonconvex optimization algorithm. By specifying the choice of sequences, this framework can recover existing conversions and also leads to two new conversion schemes that cover the schedule-free SGD method. Overall, I found the contribution of this work very fundamental, and the developed framework may spark new research directions on nonconvex optimization algorithm design and analysis.

Other Comments Or Suggestions: Several algorithms (5, 6, 7) correspond to the same algorithm. It seems that only Algorithm 6 is necessary.
Questions For Authors: The authors claim that the analysis result can explain why $\kappa_t$ should be chosen close to 1 in practice. I don't see strong evidence here. The theoretical choice of $\kappa_t \approx 1$ is to achieve a theoretical guarantee in general nonconvex optimization, while the practical choice of $\kappa_t \approx 1$ is to achieve the best performance for a specific problem.

Ethical Review Concerns: None

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for the thoughtful question. We agree that a deeper understanding of optimizer behavior ultimately requires incorporating finer-grained properties of the training loss landscape. That said, a somewhat surprising takeaway from our work, especially in light of your comment, is that certain practical choices of hyperparameters can already be explained using fairly generic assumptions on nonsmooth, nonconvex losses. This suggests that some aspects of real-world training dynamics may be governed by broader principles. We believe that incorporating additional structural assumptions on the loss, closer to the training losses in practice, could further bridge the gap between theory and empirical behavior, and we view this as a promising direction for future work.
Summary: This paper develops a general framework for online-to-nonconvex conversion, which reduces the problem of finding a stationary point of a non-convex objective function to an online learning problem. Their framework extends the work of Zhang & Cutkosky (2024) with a tighter analysis, and it also leads to two new conversion schemes. All three schemes are shown to achieve the optimal convergence rate for nonsmooth, nonconvex stochastic optimization problems. Moreover, the third scheme is shown to have the form of the schedule-free SGD in Defazio et al. (2024). As a result, it complements the convergence analysis for the convex setting in that work and establishes that schedule-free SGD remains optimal in the nonsmooth, nonconvex settings with the proper choice of parameters. Claims And Evidence: Most claims in this paper are supported by rigorous mathematical proofs. However, I have some reservations regarding the practical insights provided by the analysis. - The authors observe that under their framework, the choice of $\kappa_t$ for schedule-free SGD is close to 1, which aligns with the empirical observations in Defazio et al. (2024). On the other hand, other aspects of the parameter selection do not exactly match practical choices. Specifically, in Defazio et al. (2024), the choice of $c_t$ is independent of $\kappa_t$ and scales as $1/t$, which is different from the suggestion in Proposition 5.2. Due to this discrepancy, it seems unclear which formulation better explains the empirical success of schedule-free SGD. - At the end of Section 5, the authors claim that the learning rate $\gamma$ for schedule-free SGD is $\Theta(\frac{(G+\sigma)^2}{\epsilon^2})$ times larger than the optimal step size for SGD with momentum. However, in Proposition 5.2, $\gamma$ is compared to the **OMD step size $\eta_*$** for the online learning problem, which does not directly correspond to the step size in SGD with momentum. 
In fact, as mentioned by the authors, SGD with momentum corresponds to Option I with OMD as the online learner. Following the reformulation in Zhang and Cutkosky (2024), the effective learning rate for SGD with momentum is given by $\frac{\beta \eta}{\eta \mu + \alpha} = \frac{\eta \xi}{1- \xi}$. Since $\xi$ is close to 1 in the considered setting, the learning rates only differ by a constant close to 1. Hence, the analysis does not necessarily suggest a much larger learning rate than SGD with momentum. Methods And Evaluation Criteria: This is a pure theoretical paper, and thus empirical evaluation is not applicable. Theoretical Claims: Yes, I have checked the proofs in the Appendix, and to the best of my knowledge they are correct, except for some minor typos. Experimental Designs Or Analyses: There are no experiments in this paper. Supplementary Material: Yes, I have reviewed the appendix. Relation To Broader Scientific Literature: Two prior works are most relevant to this submission: - The proposed framework can be viewed as an extension to Zhang and Cutkosky (2024). In particular, it adopts the same key concepts (e.g., the approximate Goldstein stationary point and discounted regret) and the key techniques (e.g., the composite objective OMD). In some sense, the current submission presents the analysis from Zhang and Cutkosky (2024) in a more modular way and make the observation that there is some flexibility in choosing the sequence $\{x_t\}$ in the update rule. - The main contribution of this paper is to establish the optimal convergence guarantees for schedule-free SGD (Defazio et al., 2024), a recently proposed optimizer with strong empirical performance, in nonsmooth and nonconvex optimization. The original paper focuses on the stochastic convex optimization setting with a completely different analysis, and this paper offers a new perspective on the algorithm's design. 
Essential References Not Discussed: I think the authors did a good job and included the most essential references. Other Strengths And Weaknesses: The paper is well-written and easy to follow. However, one potential concern is that its theoretical contribution may not be significantly more substantial than that of Zhang and Cutkosky (2024), as it builds upon the same key concepts and techniques. While the authors introduce two new conversion schemes within their new framework, all proposed schemes ultimately achieve the same convergence guarantees (up to a constant). This raises the question of what advantages the newly proposed schemes offer beyond the existing approach. Other Comments Or Suggestions: For clarity, it may be helpful to briefly review how Option I corresponds to SGD with momentum and contrast it with Option III. This addition would help readers better understand the differences between these two methods. Typos: - Lemma C.1 (Line 645): the denominator in the first term should be $DT$. - Lemma C.5: On Line 820, the second expectation should be over $\mathbf{X}_{\tau}$. Also, in the proof on Line 832, the left-hand side should be $\|y_s - x_s\|$ instead of $\|w_s - x_s\|$. Questions For Authors: I do not have additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive comments! **Regarding the step size comparison.** Thank you for pointing this out. Indeed, the step size of our Schedule-Free algorithm should be compared not with the step size of the OMD, but with the "effective" step size of the momentum method of Zhang and Cutkosky (2024). We agree with the observation that the step sizes are of the same order and will update the presentation accordingly. **Regarding $\kappa_t$.** We acknowledge that our nonconvex analysis adopts a different averaging scheme, using $c_t = 1 - \zeta$ instead of the original choice $c_t = \frac{1}{t}$ proposed by Defazio et al. (2024). A more detailed investigation of this difference is an interesting direction for future work, and we will clarify this point in the final version of the paper. That said, our preliminary experiments suggest that the specific choice of $c_t$ has limited impact on the performance of schedule-free methods. For example, when training a ResNet on CIFAR-10, we often find that EMA averaging (as used in our method) results in more stable training compared to the full averaging scheme of Defazio et al. (2024). The experiments are available at https://docs.google.com/document/d/1WBeV1DuS_zTZ6370hfRvQk617NILsp5uWwvD1CYUn74/edit?tab=t.0 . In contrast, the choice of $\kappa_t$ plays a much more critical role in practice. Performance degrades significantly when $\kappa_t$ is set below $0.9$, highlighting the importance of tuning this parameter carefully. Unlike the convex analysis of Defazio et al. (2024), which allows $\kappa_t$ to be chosen arbitrarily ($\beta_t$ in their notation), our nonconvex analysis imposes a more realistic constraint and, in this sense, better aligns with practical settings.
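To make the two averaging schemes concrete, here is a minimal sketch (with illustrative toy parameters and a toy objective, not the tuned values from the linked experiments):

```python
import numpy as np

def schedule_free_sgd(grad, w0, gamma=0.1, beta=0.9, steps=5000, averaging="full", zeta=0.99):
    """Sketch of schedule-free SGD (Defazio et al., 2024) with two averaging choices:
    averaging="full": c_t = 1/(t+1), the running average of the original paper;
    averaging="ema":  c_t = 1 - zeta, the constant (EMA) weight used in our analysis."""
    z = np.asarray(w0, dtype=float)  # base gradient-descent sequence
    x = z.copy()                     # averaged sequence (the returned solution)
    for t in range(1, steps + 1):
        y = (1 - beta) * z + beta * x  # interpolation point where gradients are queried
        z = z - gamma * grad(y)        # gradient step on the base sequence
        c = 1.0 / (t + 1) if averaging == "full" else 1 - zeta
        x = (1 - c) * x + c * z        # averaging step
    return x

# Deterministic toy objective f(w) = 0.5 * ||w||^2, so grad(w) = w and the minimizer is 0.
x_full = schedule_free_sgd(lambda w: w, [5.0, -3.0], averaging="full")
x_ema = schedule_free_sgd(lambda w: w, [5.0, -3.0], averaging="ema")
```

On a toy strongly convex quadratic like this, both schemes converge; the stability differences mentioned above only show up in the nonconvex experiments.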
--- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response and the additional experiment. As my concerns have been fully addressed, I am happy to raise my score to 4. I encourage the authors to incorporate the clarifications provided into the revision.
Summary: This paper introduces a more general online-to-nonconvex reduction. Based on an OMD variant with discounted regret guarantees, the optimal convergence rate to a $(\lambda,\delta)$-stationary point is shown for three different variants. The third variant is shown to coincide with the schedule-free SGD algorithm under specific parameter choices. The parameter choices required to achieve optimal rates for this variant of schedule-free SGD correspond to those that lead to the best empirical performance, which was not explained by the theory of schedule-free SGD in the convex setting. Claims And Evidence: Yes. Methods And Evaluation Criteria: - Theoretical Claims: No. Experimental Designs Or Analyses: - Supplementary Material: I read Appendix A and B. Relation To Broader Scientific Literature: The key contribution is the online-to-nonconvex conversion framework, which is an extension of the work of Cutkosky et al. (2023). Using this reduction with a specific version of OMD, the authors show that the resulting algorithm corresponds to schedule-free SGD (Defazio et al., 2024), which was only studied in the convex setting before, and achieves optimal worst-case convergence rates. Essential References Not Discussed: No Other Strengths And Weaknesses: Designing and analyzing algorithms for the nonconvex and non-smooth setting is a very interesting direction. Given that schedule-free SGD works so well in practice, it is very interesting to see that it is optimal for the nonconvex and non-smooth setting and that the parameter choices required to achieve optimal rates for this variant of schedule-free SGD correspond to those that lead to the best empirical performance. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the encouraging feedback! We do agree that designing and analyzing algorithms for the nonconvex and non-smooth setting is a very interesting direction. We also agree that it's nice to have strong theoretical guarantees for practical optimizers.
Summary: This paper presents a general framework for converting any online-learning algorithm into a non-convex (non-smooth) optimization algorithm. The authors provide general non-convex convergence guarantees for the unified online-to-nonconvex framework in terms of the $(\lambda, \epsilon)$-stationary point. Their analysis, which applies to any online algorithm with appropriate discounted regret guarantees, is subsequently refined for the case of the discounted online mirror descent algorithm. Furthermore, the authors present three different conversion schemes as special cases of their framework. They show that their last conversion enables the recovery of the schedule-free SGD method. ## Update after rebuttal: I thank the authors for their response to my questions. Since my recommendation is already for acceptance, I will keep my score. Claims And Evidence: All the claims made in this paper are supported by sufficient evidence. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the given problems. Theoretical Claims: I looked through the proofs in Appendix C and D and they seem to be correct. Experimental Designs Or Analyses: Not applicable. Supplementary Material: I reviewed Appendix A, B, C and D. Relation To Broader Scientific Literature: This work extends the online-to-nonconvex framework to recover some popular optimizers, which have been shown to have optimal guarantees. This idea provides insight into the success of these algorithms even in the non-convex non-smooth case and allows for a better understanding of their step size choices in practice. This paper also introduces a new random EMA scheme, which is used for the algorithmic output. This method allows (slightly) improved convergence guarantees compared to previous works that select the output uniformly at random. Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths: - The paper is well structured and clearly written, and most claims are presented with sufficient explanation to support the main results. - The designed random EMA iterate is a novel scheme that allows for improved convergence results and is consistent with practical applications. - The convergence analysis provided in this work is solid and based on relatively mild assumptions. Weaknesses: - Including some experimental results, possibly in simple settings, could help illustrate the behavior of the presented algorithms and potentially clarify the advantage of schedule-free methods for non-convex optimization. Other Comments Or Suggestions: No. Questions For Authors: 1. Could the authors give some intuition/explanation about the choice of the comparators for the online learner in Lemma 3.1? 2. Could the authors elaborate a bit more on the motivation and intuition for the iterates produced by Option II and the anchoring scheme? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback and suggestions. - The choice of these comparator sequences can be motivated by viewing them as the "good" update direction in hindsight. If we ignore EMA averaging in the choice of comparators (for simplicity), we can see that the comparator points exactly in the direction of $\nabla F(y_t)$, i.e., the "lookahead" gradient. Notice that at iteration $t-1$, we do not have access to this gradient, which is why we call it the lookahead gradient. - As can be seen from the algorithm, each epoch can be viewed as performing local optimization around an anchoring point. However, if the algorithm spends too much time around the same anchor point, the landscape may remain underexplored, preventing the algorithm from reaching the desired stationary point. Therefore, once the algorithm completes an epoch, it shifts to a new anchor point to explore other regions of the landscape. As mentioned in the paper, this algorithm design has a nice connection to previous nonconvex optimization approaches that repeatedly solve convex subproblems constructed via appropriate regularization (e.g., Chen and Hazan (2024)).
Strategy Coopetition Explains the Emergence and Transience of In-Context Learning
Accept (oral)
Summary: This paper systematically studies the transient dynamics of in-context learning (ICL) in transformers. In particular, the authors identify that after ICL disappears, a hybrid strategy between in-weights and in-context learning called "context-constrained in-weights learning" (CIWL) emerges, which competes with and eventually replaces ICL. Despite this competition, the two strategies share sub-circuits, leading to cooperative dynamics. The paper also proposes a minimal mathematical model to explain these interactions and highlights a setup where ICL remains persistent after long training times. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem. Theoretical Claims: I didn't check the proofs closely. Experimental Designs Or Analyses: The experimental designs and analyses seem sound/valid to me. Supplementary Material: I didn't review the supplementary material. Relation To Broader Scientific Literature: The paper builds upon prior work such as Singh et al. (2023) and Reddy (2024), contributing several new findings related to the transience of ICL, such as the surprising CIWL strategy, its asymptotic dominance, and its cooperation with ICL. Essential References Not Discussed: n/a Other Strengths And Weaknesses: **Strength** 1. The exploration of the transience of ICL is fascinating, and the nuanced findings regarding the existence of CIWL, its interaction with ICL, and its asymptotic dominance provide novel insights into the internal mechanisms of attention-based models and their emergent behaviors. These findings are particularly compelling to me. 2. In addition to the rich results, I appreciate that the authors have developed a simple mathematical model that (1) replicates real-world phenomena and (2) demonstrates some predictive power. **Weakness** 1.
While I appreciate the attempt to develop a mathematical model, I find it somewhat difficult to grasp, particularly in terms of practical interpretation. For instance, the meaning of $\mu_1=0$ is not entirely clear to me; specifically, how it relates to a characteristic of the dataset (lines 386-392). Other Comments Or Suggestions: 1. Some in-text citations should be done with \citep instead of \cite: lines 28-29. 2. The term "attention delta" is used several times in the paper. While I can infer its meaning from context, I think it would be better to define the term explicitly to avoid confusion. Questions For Authors: 1. I'm a bit confused about how the mathematical model in Section 6 was developed, as it appears to come up rather abruptly. I understand that there is a pattern match between the model and the ICL experiment described in lines 379-351, but I'm curious about how the specific form of the objective was chosen initially. For instance, why was the tensor product chosen as a component? Why a tensor product of three vectors? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their review and are glad they found our work "particularly compelling". We've factored in their suggestions and respond to their question below: The mathematical model presented was largely just our first attempt at extending the "single mechanism" model from Singh et al. (2024) – that's where the tensor product of 3 vectors came from. Our intuition was that, since ICL and CIWL can both "solve the task", we could use the product of the losses as an "OR" operation (a cool symmetry to how the tensor product within a mechanism's loss represents an "AND", as per Singh et al. (2024)). The offset $\mu_1$ was added as our results (and the original findings of Singh et al. (2023)) indicated CIWL was always asymptotic. The competition term is between $\mathbf{a}$ and $\mathbf{d}$ since Singh et al. (2024) use $\mathbf{a}$ to correspond to Layer 1. Further investigation of the toy model would be an interesting avenue for future work – we mostly introduced it to crystallize intuitions and since we were also (pleasantly) surprised by how it captured more nuanced dynamical features (the "divot"). Specifically, we think connecting it to the Neural Race Reduction ideas of Saxe et al. (2022) could be particularly noteworthy.
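In schematic form (a simplified sketch consistent with the description above, not the model verbatim; the paper spells out the EMA terms and exact constants), the combined objective reads

$$\mathcal{L} \;=\; \big(\mu_1 + \mathcal{L}_{\mathrm{ICL}}\big)\,\mathcal{L}_{\mathrm{CIWL}} \;+\; \big(\text{competition term between } \mathbf{a} \text{ and } \mathbf{d}\big),$$

where each mechanism loss has the tensor-product ("AND") form of Singh et al. (2024). The product of losses acts as the "OR": driving either factor to zero removes the other's gradient signal, except that the offset $\mu_1 > 0$ leaves a residual $\mu_1 \mathcal{L}_{\mathrm{CIWL}}$ even when ICL is fully formed, which is what makes CIWL asymptotic; with $\mu_1 = 0$, ICL alone suffices and can persist.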
Summary: Architecture: 2-layer attention-only transformer (appendix has other models). Dataset: Omniglot, augmented to 12k+ classes. The majority of the classes are used for training, while the remaining 184 classes are used for testing. Training sequences are set up with a few-shot learning flavor and constructed using the exemplars from the training classes. Specifically, each training sequence consists of a context (which contains two exemplar-label pairs) and a query. Training sequences follow the “bursty” structure, requiring that at least one of the exemplars in the context always belongs to the same class as the query. Testing sequences are constructed using the exemplars from the test classes, and they are crafted to evaluate various strategies including ICL, in-weights learning (IWL), a new strategy CIWL, and the balance between ICL and CIWL via the FLIP evaluator. The main finding is that at the beginning of training, ICL and CIWL cooperate and ICL can emerge. At some point ICL disappears and CIWL comes to the forefront. ## Update after rebuttal I maintain my score and positive impression of the paper. Claims And Evidence: Yes, the setup is clear and well-justified. The training data design ensures that both ICL and CIWL have an opportunity to emerge, while the test evaluations (e.g., the FLIP test) directly measure whether ICL remains dominant or fades. We see strong empirical evidence for the claim that ICL transience is linked to “coopetition” with CIWL. Methods And Evaluation Criteria: The main method employed here is really setting up a minimal environment where we can observe “coopetition”. There’s also a portion of the paper dedicated to performing mechanistic interpretability. Here it is discovered that ICL and CIWL share sub-circuits. Theoretical Claims: The paper does not make traditional theoretical claims in the form of formal theorems. Experimental Designs Or Analyses: Yes, the experimental design is well crafted.
The evaluation framework is particularly strong, as it systematically isolates different strategies through controlled test sequences. For example, the FLIP evaluator directly measures whether the model relies on ICL or shifts toward CIWL, providing a clear signal of strategy transition. Supplementary Material: I skimmed Appendix B. Relation To Broader Scientific Literature: Recent work has shown ICL to be a transient phenomenon in that it can disappear after long training times. This paper shows that cooperation with CIWL enables the emergence of ICL in the first place, while competition leads to its eventual disappearance and replacement by the CIWL strategy. To the best of my knowledge, the identification of the CIWL strategy is novel. Essential References Not Discussed: Not that I’m aware of. Other Strengths And Weaknesses: This is an excellent example of rigorous deep learning science. The work is compelling and should have broad appeal. A key strength is that it reveals a previously unknown strategy, context-constrained in-weights learning (CIWL), and its relationship with ICL. The paper exemplifies how well-designed experiments and mechanistic interpretability can lead to novel insights in deep learning. Other Comments Or Suggestions: * The term "asymptotic" is used very frequently throughout the paper, but asymptotic in what is never really spelled out. * The caption to Figure 1 is very dense. I found it a bit daunting to read in its position and returned to it after reading far beyond it. * The x-axis in Figure 1b represents the number of training sequences seen, which implicitly corresponds to training time. Since emergence and disappearance are often framed in terms of training time, making this connection clearer in the text might help readers. * \citep should be used on Lines 28 and 29 for the Olsson and Singh citations. * I’m not sure the footnote on Hollywood studios is a good use of real estate.
It’s enough to know that coopetition is a term from game theory and not a gratuitously silly word made up for this paper. Questions For Authors: 1. How do the authors expect the findings on ICL persistence to generalize beyond Omniglot? Would similar results hold for more complex datasets, such as natural language or vision tasks? 2. In the experiments with deeper networks and MLP layers (Appendix B), did the authors observe any systematic trends in how model depth or capacity influences the timescale of ICL transience? Specifically, does increasing depth delay the transition from ICL to CIWL, or does it accelerate it? If such trends exist, could they be described in terms of scaling laws similar to those seen in other deep learning phenomena? Code Of Conduct: Affirmed. Overall Recommendation: 5
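To make the data setup summarized in this review concrete, the sequence constructions can be sketched as follows (an illustrative simplification with made-up class names, not the paper's actual pipeline):

```python
import random

def bursty_sequence(exemplars, rng):
    """Training sequence: two (exemplar, label) context pairs plus a query,
    where at least one context exemplar shares the query's class ("bursty")."""
    query_class, other_class = rng.sample(sorted(exemplars), 2)
    context = [(rng.choice(exemplars[query_class]), query_class),
               (rng.choice(exemplars[other_class]), other_class)]
    rng.shuffle(context)
    query = rng.choice(exemplars[query_class])
    return context, query, query_class

def flip_sequence(exemplars, rng):
    """Flip evaluator: the two context labels are swapped. Predicting the
    in-context (flipped) label signals ICL; predicting the memorized
    in-weights label signals CIWL (the true label is still present in context)."""
    context, query, true_label = bursty_sequence(exemplars, rng)
    (e1, l1), (e2, l2) = context
    flipped = [(e1, l2), (e2, l1)]  # swap the labels between the two pairs
    icl_label = l2 if l1 == true_label else l1
    return flipped, query, icl_label, true_label

rng = random.Random(0)
exemplars = {c: [f"{c}_img{i}" for i in range(4)] for c in "ABCD"}
ctx, query, query_class = bursty_sequence(exemplars, rng)
flipped_ctx, q2, icl_label, iwl_label = flip_sequence(exemplars, rng)
```

A model answering `icl_label` on the flipped sequence is relying on the context pairing (ICL), while answering `iwl_label` means it retrieves the label from weights while merely requiring its presence in context (CIWL).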
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and are happy they think our work is "an excellent example of rigorous deep learning science." We've also noted and updated the paper based on the suggestions, with responses to specific questions below: > How do the authors expect the findings on ICL persistence to generalize beyond Omniglot? Would similar results hold for more complex datasets, such as natural language or vision tasks? Given that recent works have found generalization of insights from the Omniglot setting to more naturalistic settings (e.g., the language model token embeddings of Singh et al. (2023), and the RL setups of Raparthy et al. (2023)), we are cautiously optimistic w.r.t. generality. For example, recent works on LLMs (https://arxiv.org/abs/2502.14010) point to related phenomena. > In the experiments with deeper networks and MLP layers (Appendix B), did the authors observe any systematic trends in how model depth or capacity influences the timescale of ICL transience? Specifically, does increasing depth delay the transition from ICL to CIWL, or does it accelerate it? If such trends exist, could they be described in terms of scaling laws similar to those seen in other deep learning phenomena? We didn't study the timescales as a function of architectural choices, as these were already discussed at scale by Singh et al. (2023) (from reading their paper, the analyses do seem motivated by scaling laws work, though not quantified as exactly as the reviewer suggests). We did have some earlier experiments related to Layer 1 capacity (e.g., training with fewer heads active) that showed fewer active heads led to delayed transience (up to a point – fewer than 3 heads blocked ICL, since the network effectively becomes a 1L model). We believe bridging the gap between the rigorous mechanistic understanding of our work and scaling laws on larger models is an exciting direction for future research.
--- Rebuttal Comment 1.1: Comment: Thank you to the authors for detailed responses. I maintain my score of 5.
Summary: This paper investigates why in-context learning (ICL), a capability that emerges in transformer models without explicit training, sometimes disappears after extended training periods. The authors study this phenomenon using a simplified experimental setup with 2-layer attention-only transformers trained on a synthetic few-shot learning task based on Omniglot handwritten characters. The research demonstrates that after ICL disappears, the model does not simply revert to traditional in-weights learning (IWL). Instead, it adopts what the authors term "context-constrained in-weights learning" (CIWL) - a hybrid strategy that requires the correct label to be present in the context but does not need the full exemplar-label pairing that ICL uses. This CIWL strategy is implemented through skip-trigram mechanisms in Layer 2 of the network. Most notably, the paper uncovers that ICL and CIWL simultaneously compete and cooperate within the model architecture, a dynamic the authors call "strategy coopetition." While the two strategies compete in Layer 1 (where heads switch from attending to previous tokens for ICL to self-attention patterns for CIWL), they share critical subcircuits in Layer 2. This sharing explains why ICL emerges at all, despite not being asymptotically preferred by the model. The authors further develop a minimal mathematical model that reproduces these key dynamics. Their model captures how competition drives ICL's eventual replacement by CIWL, while cooperation enables ICL's initial emergence. Using insights from this model, they identify data conditions where ICL becomes persistent rather than transient - specifically, when context exemplars exactly match query exemplars. Claims And Evidence: The submission presents several key claims generally well-supported by evidence, but with some major limitations. The transience of in-context learning (ICL) is convincingly demonstrated in Figure 1b, reproducing previous findings from Singh et al. 
(2023) and others. The characterization of the asymptotic "context-constrained in-weights learning" (CIWL) strategy is well-supported by: - Behavioral evidence through specialized evaluators (Figure 1b) - Mechanistic evidence of skip-trigram-copiers in Layer 2 (Figure 2) - Ablation studies showing minimal pure in-weights learning (Section C.1) The core "strategy coopetition" claim is substantiated through experimental interventions such as: - Figure 3b shows that fixing Layer 2 weights to end-of-training values preserves the overall behavioral trajectory, indicating Layer 2 circuits are shared between strategies - Figure 3c demonstrates that fixing Layer 1 weights locks in behavior, suggesting competition happens primarily in Layer 1 - Figure 4a-c provides compelling evidence that CIWL enables ICL emergence despite eventually replacing it The mathematical model in Section 6 reproduces key behavioral patterns observed in the transformers, including the unexpected "dip" in CIWL formation. This strengthens the theoretical understanding of the observed dynamics. ## Problematic claims: 1. The claim that ICL can be made persistent by matching context and query exemplars is supported by Figure 6, though the mechanistic explanation for why this works could be more developed. 2. The authors briefly discuss the possibility that Layer 2 heads act in "superposition"—that multiple sparse features might share attention heads simultaneously (Appendix C.2). This claim is intriguing and potentially important, yet the current evidence is labeled by the authors as "preliminary." While their exploratory analyses do hint at complex non-additive interactions among heads, this aspect lacks conclusive experimental support. Further investigations—for example, systematically manipulating head temperature or conducting head-specific targeted ablations—would solidify this intriguing hypothesis. 
This is concerning because superposition could significantly impact the authors' interpretation of the mechanisms underlying CIWL, further robust experiments would substantially strengthen this claim. Without more conclusive evidence, this point remains somewhat speculative. 3. The paper claims that its results and explanations have potential implications for larger transformer models and realistic training scenarios (Section 7). However, the presented evidence is mostly limited to synthetic tasks using small-scale, simplified transformer setups. While the authors briefly show preliminary evidence from larger transformer models trained on similar synthetic tasks, they stop short of providing evidence on realistically scaled language modeling tasks or non-synthetic datasets. While the mechanisms discovered might generalize conceptually, the direct relevance to state-of-the-art transformers trained on natural datasets remains unclear. Explicit evidence from larger-scale empirical studies or tasks closer to practical applications would substantially reinforce this claim. 4. The paper proposes that ICL emerges transiently because it is "close to the path" toward the asymptotic CIWL solution. Although the experiments showing reuse of Layer 2 heads strongly support shared mechanisms between ICL and CIWL, the notion of “closeness” to the path is presented somewhat informally. The authors clearly show that certain intermediate CIWL-only checkpoints allow rapid ICL emergence, but a more explicit or quantitative measure of "closeness" in model space or loss landscape would strengthen this point. Without a clearly defined notion of "closeness" or detailed visualization of training trajectories (for example, using linear interpolation or functional similarity metrics), the explanation is somewhat abstract. Explicit analysis or visualization of model parameters or activations as training progresses would improve clarity and reinforce this important conceptual claim. 
Methods And Evaluation Criteria: The methods and evaluation criteria employed in this paper are well-suited to investigate the transience of in-context learning in transformers. The authors use a simplified architecture (2-layer attention-only transformers) which is methodologically sound for mechanistic studies. This choice follows established practices in transformer interpretability research and allows for clearer attribution of roles to specific components. While this simplification limits generalizability, the authors address this by demonstrating that their key findings extend to larger models (Figure 6, right panel), striking an appropriate balance between interpretability and relevance. The synthetic few-shot learning task based on Omniglot characters provides a controlled environment where multiple learning strategies are viable, making it an excellent testbed for studying strategy competition. The bursty data design intentionally permits both in-context and in-weights learning, which is crucial for their research questions. The specialized evaluators (ICL, IWL, CIWL, and Flip) are well-designed for this study. They enable precise measurement of distinct strategies through behavioral signatures: - The ICL evaluator isolates pure in-context learning by invalidating weight-based exemplar-label mappings - The CIWL evaluator tests for a specific hybrid strategy requiring context constraints but not full exemplar-label pairing - The Flip evaluator quantifies the relative dominance between strategies These behavioral measures are complemented by mechanistic analyses that strengthen the evidence: - Attention pattern analyses that reveal the underlying computational mechanisms - Causal ablation studies that establish the functional roles of specific components - Layer-fixing experiments that isolate the contributions of different model parts The mathematical model serves as both a theoretical framework and an additional evaluation criterion. 
By reproducing key behavioral patterns observed in the transformer experiments, it validates the proposed explanation and generates testable predictions. Theoretical Claims: This paper does not present formal mathematical proofs that require verification. Experimental Designs Or Analyses: I've examined the experimental designs and analyses in this paper and find them generally sound and well-executed. The authors' specialized evaluators (ICL, IWL, CIWL, and Flip) create controlled conditions that effectively isolate specific strategies, allowing clear attribution of model behavior to different learning mechanisms. The Flip evaluator is particularly innovative as it quantifies the relative dominance between strategies rather than just measuring their presence. The mechanistic analyses provide convincing evidence for the paper's claims. The attention pattern analyses effectively demonstrate the skip-trigram copying mechanisms underlying CIWL, and the authors establish causal relationships through interventions rather than relying solely on correlational evidence. For instance, the attention clamping experiments establish that Layer 2 heads functionally copy label tokens, while the layer-fixing experiments convincingly demonstrate that Layer 2 remains largely static after initial formation while Layer 1 drives strategy changes. The strategy-specific training experiments offer strong evidence for the coopetition hypothesis. Training on ICL-only data shows difficulty learning, but using CIWL-trained Layer 2 weights enables ICL learning—a key finding that supports the authors' central claim about strategy cooperation. Similarly, the CIWL-only training followed by bursty data effectively demonstrates that ICL emergence depends on CIWL not being fully formed. The paper demonstrates robustness through replication across multiple random seeds, different architectural variants, and various data configurations. 
Some aspects that could affect the validity of the analyses include reliance on averaged attention patterns that might mask individual variations in head behavior, the simplified nature of the toy mathematical model compared to actual transformer dynamics, and questions about generalizability from a 2-layer attention-only transformer to more complex architectures. Despite these minor limitations, the experimental designs and analyses provide good support for the paper's claims about strategy coopetition in transformer learning dynamics. Supplementary Material: Yes, I've thoroughly reviewed all parts of the supplementary material, including the extended related work (Appendix A), additional experimental results across different settings (Appendix B), mechanistic analyses (Appendix C), toy model details (Appendix D), and the rejected hypotheses section (Appendix E). The rejected hypotheses section (Appendix E) is particularly noteworthy as it strengthens the paper's credibility by showing the authors' scientific process. They considered and systematically tested alternative explanations for ICL emergence and transience. Specifically, they rejected two main hypotheses: that earlier ICL is necessary for CIWL to emerge, and that ICL emerges due to initialization effects (a "lottery ticket" hypothesis). That said, this section has some limitations. The authors consider only two alternative hypotheses, potentially missing other explanations. Their experimental interventions, while well-designed, might not completely isolate the variables they aim to test; neural networks have complex interdependencies that make perfect isolation challenging. Additionally, the rejection experiments were performed on the same simplified architecture as their main findings, so the generalizability of these rejections to larger models remains unclear.
Relation To Broader Scientific Literature: - The identification of "context-constrained in-weights learning" (CIWL) as the asymptotic strategy that replaces ICL builds directly upon Singh et al.'s (2023) discovery of ICL transience, providing a mechanistic explanation for what happens after ICL disappears. CIWL also relates conceptually to Lin and Lee's (2024) work distinguishing between "task recognition" and "task learning" modes of in-context learning. What the authors identify as CIWL shares similarities with the task recognition paradigm, where context serves more to identify what knowledge to retrieve from weights rather than providing new pattern-completion information. - The paper's core contribution—the "strategy coopetition" framework—extends several research threads. It complements Nguyen and Reddy's (2024) and Park et al.'s (2024) work on competition between strategies, but adds the crucial insight about cooperative interactions between seemingly competitive mechanisms. This reframes prior findings from Chan et al. (2022) regarding how data properties modulate the emergence of different strategies, suggesting that bursty data promotes cooperation between strategies before asymptotic competition takes over. - The mechanistic analysis showing shared subcircuits between strategies represents a significant advance over previous work. While Olsson et al. (2022) characterized induction heads and their role in ICL, and Elhage et al. (2021) described skip-trigram mechanisms, this paper shows how these circuit motifs can be repurposed between different computational strategies. This relates to Elhage et al.'s (2022) work on superposition in transformers, suggesting that limited model capacity leads to shared computational resources between different capabilities. - The toy mathematical model extends Singh et al.'s (2024) minimal model of phase changes in transformer learning, incorporating both competitive and cooperative dynamics. 
It also connects with Saxe et al.'s (2022) theoretical work on the "neural race reduction" and dynamics of abstraction in neural networks, providing further evidence that different learning dynamics can coexist and interact in complex ways during training. - Finally, the identification of data conditions leading to persistent ICL builds upon Chan et al.'s (2022) investigations of how data properties affect strategy adoption. It also connects to Lampinen et al.'s (2024) recent work on the "broader spectrum of in-context learning," suggesting that different manifestations of context-sensitivity may exist along a continuum rather than as discrete capabilities. Essential References Not Discussed: Based on my review, the paper generally provides a thorough discussion of relevant literature. Other Strengths And Weaknesses: **Originality** The paper demonstrates novelty in introducing the concept of "strategy coopetition" to explain a previously observed but poorly understood phenomenon of transient in-context learning. By identifying context-constrained in-weights learning (CIWL) as a distinct hybrid strategy and characterizing its mechanistic implementation, the authors provide novel insights beyond what was previously known about ICL transience. The discovery that competing strategies share subcircuits in Layer 2 while competing in Layer 1 represents a conceptual breakthrough in understanding transformer learning dynamics. However, the paper's originality is somewhat constrained by its foundation on existing work on ICL transience and the relatively straightforward extension of previous mathematical models. **Significance** This work makes a significant contribution by providing a mechanistic understanding of how transformers transition between learning strategies during training. By explaining why ICL emerges despite not being asymptotically preferred, the authors address a fundamental question about capability emergence in modern AI systems. 
The finding that ICL can be made persistent through specific data modifications has potential implications for training methodologies, especially if these dynamics extend to larger models as preliminary results suggest. The concept of strategy coopetition could influence how researchers think about capability emergence and circuit formation in neural networks more broadly. However, the significance is somewhat limited by the focus on simplified models and synthetic tasks, raising questions about generalizability to real-world language modeling scenarios. While the authors demonstrate some extension to larger models, more comprehensive validation across diverse architectures would strengthen the work's broader impact. **Clarity** The paper presents complex findings with great clarity through well-structured progression from phenomena reproduction to mechanistic understanding to mathematical modeling. The authors effectively use specialized evaluators to isolate and measure different strategies, providing clear operational definitions that facilitate understanding. The visualizations of attention patterns and strategy dynamics effectively communicate key insights, while causal intervention experiments clearly demonstrate functional roles of model components. However, the paper contains dense technical content that assumes substantial familiarity with transformer architecture and mechanistic interpretability techniques. The multiple interrelated experiments and detailed analyses could be challenging for readers to track without careful study. Some important methodological details are relegated to appendices, and the rejection of alternative hypotheses section, while valuable, could be better integrated into the main narrative to strengthen the central claims. Other Comments Or Suggestions: - The authors should verify Figure 4c, as the directional arrows for "ICL" and "CIWL" appear to be reversed compared to what's described in the text. 
- In Figure 14, there are references to undefined appendices that should be clarified or removed. - The mathematical notation in Section 6 could be more consistently aligned with the mechanism descriptions in earlier sections to help readers make connections between the empirical findings and theoretical model. - Finally, some figures (particularly Figures 14 and 15) contain dense information that could be simplified or restructured for clarity. Questions For Authors: 1. Your demonstration of strategy coopetition is compelling in 2-layer attention-only transformers, but how confident are you that this mechanism explains ICL transience in larger, more complex models? The preliminary results in Figure 6 suggest some generalization, but what additional evidence or theoretical arguments support the claim that similar dynamics operate in state-of-the-art models? 2. The CIWL strategy you identify bears conceptual similarities to what some researchers call "task recognition" (as opposed to "task learning"). Could you clarify whether you see CIWL as fundamentally the same phenomenon as task recognition, or whether there are important distinctions? 3. Your explanation for ICL persistence when context exemplars match query exemplars is intriguing but somewhat underexplored mechanistically. Have you conducted ablation studies or circuit analyses to understand why this modification equalizes the asymptotic preference between ICL and CIWL? 4. Your toy mathematical model reproduces key dynamics observed in transformers, but how sensitive is this reproduction to parameter settings? Is there a range of parameters for which the model fails to exhibit transience, and what would this tell us about conditions where ICL might naturally persist? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. We really appreciate your acknowledgement of the strong and thorough evidential support for our main claims. ## We respond here to the main criticisms: > The claim that ICL can be made persistent by matching context and query exemplars is supported by Figure 6, though the mechanistic explanation for why this works could be more developed. We understand this desire, especially in the context of so many mechanistic explanations for other observed behaviors! However, given the large number of experiments and results, we believe it would be fair to leave this to future work. > The authors briefly discuss the possibility that Layer 2 heads act in "superposition"... manipulating head temperature... would solidify this intriguing hypothesis. These results are in Appendix C.2, Figure 15b and c. The overall claim about superposition is meant to be preliminary (and is stated clearly as such). It is not critically related to the main narrative, hence its appearance in the appendix. We chose to include these mentions in the paper to hopefully spur future work in this intriguing direction. > (Section 7)... Explicit evidence from larger-scale empirical studies or tasks closer to practical applications would substantially reinforce this claim. Figure 6 shows that our findings scale to other commonly used setups in literature (the 12L transformers and data of Singh et al., 2023, which used the same data as the widely cited Chan et al., 2022). However, we were careful never to claim that such dynamics operate in LLMs – this would need to be left to future work. As a wider point, there is by now a long tradition of this kind of work on smaller models and simplified setups, which has been well appreciated within the field, and which has indeed shown transfer to larger models and diverse scenarios. 
We believe it is important to value and undertake this type of work, because it allows us as a field to discover insights which can then be applied and tested in other setups in future work. ## There are also a few points in the remaining review that indicate that the reviewer may have overlooked some of our analyses: > Some aspects that could affect the validity of the analyses include reliance on averaged attention patterns that might mask individual variations in head behavior, We plot individual attention patterns in Figure 15a, as part of the preliminary evidence for superposition. > The authors should verify Figure 4c, as the directional arrows for "ICL" and "CIWL" appear to be reversed compared to what's described in the text. As explained on Lines 160-163 when introducing the Flip evaluator, these lines are correct (and simply meant to remind the reader of this indication). ## Some additional responses: > In Figure 14, there are references to undefined appendices that should be clarified or removed. We thank the reviewer for pointing this out (it was simply old text we forgot to edit) and have made the corresponding change. > The preliminary results in Figure 6 suggest some generalization, but what additional evidence or theoretical arguments support the claim that similar dynamics operate in state-of-the-art models? We were careful never to claim that such dynamics operate in LLMs, though we do show extension to 12L transformers and common setups from literature (Singh et al., 2023). Our work is meant to build intuitions using rigorous analysis in smaller settings that may inspire work on larger models. For example, https://arxiv.org/abs/2502.14010 points to similar dynamics in larger models (with less mechanistic rigor on dynamics, given the difficulty of such experiments when using LLMs). > Could you clarify whether you see CIWL as fundamentally the same phenomenon as task recognition, or whether there are important distinctions? 
We see them as closely related intuitively, with further mechanistic work in LLMs needed to establish equivalence. Generally, we are wary of overclaiming. > Have you conducted ablation studies or circuit analyses to understand why this modification equalizes the asymptotic preference between ICL and CIWL? We believe the "what" has to precede the "why" – our paper mostly focused on the "why" of ICL transience, building intuitions that led us to a setting where ICL is persistent (a new "what"). We believe rigorous investigation of mechanistic explanations here (beyond the intuitions provided in the paper) is beyond the scope of our work, but would be excited for future work to tackle it. > how sensitive is this reproduction to parameter settings? We found the model quite robust to parameter settings, with the caveats we mention in the paper (and repeat here for clarity): When mu1=0, the faster strategy would be persistent. When alpha=0, there would be no transience since there's no competition. --- Rebuttal Comment 1.1: Comment: Dear authors -- thank you for detailed response to my comments. I am satisfied with most of your responses, however, I still recommend aiming to include (or at least give a directional discussion) on the mechanistic explanation of why ICL becomes persistent as it will be helpful for the readers. Overall, I see this paper as making a vital contribution to our understanding the dynamics of in-context learning and I've increased my score to reflect that. --- Reply to Comment 1.1.1: Comment: Thank you for the kind words and updated score -- we will be sure to add a directional, speculative discussion on the mechanistic explanation for ICL.
Offline Model-based Optimization for Real-World Molecular Discovery
Accept (poster)
Summary: The authors propose MolStitch as a generative method to generate molecular designs in an offline, multi-objective setting. Their method generates novel 'stitched molecules' that combine the desirable properties of original molecules sampled from the offline dataset. The authors evaluate their method on a number of offline molecular design tasks. Claims And Evidence: The claims made by the authors are clear and supported by convincing evidence. Methods And Evaluation Criteria: The molecular design tasks that the authors use to evaluate their proposed method are standard and representative of real-world molecular design tasks. I think the proposed method and chosen evaluation benchmarks are well-motivated to study the problem posed by the authors. Theoretical Claims: There are no theoretical claims to check the correctness of. Experimental Designs Or Analyses: The experimental design and analysis used by the authors are sound and use standard metrics from the offline optimization and multi-objective optimization literature (e.g., diversity of designs, Pareto fronts, hypervolume indicator, etc.). Supplementary Material: I could not see any supplementary material included with the submission. However, I did review the source code made publicly available by the authors with their submission - source code for their framework appears to be complete and well-documented. Relation To Broader Scientific Literature: In general, I think MolStitch is a meaningful contribution to the offline optimization for molecular discovery literature. 
The idea of using IPO for preference fine-tuning builds off of [IPO](https://arxiv.org/abs/2310.12036) and [DPO](https://arxiv.org/abs/2305.18290) from the language model literature, and the idea of trajectory annealing/stitching has been explored in prior work (e.g., [DiffStitch](https://arxiv.org/abs/2402.02439), [SSD](https://arxiv.org/abs/2402.07226), [GFNSeqEditor](https://openreview.net/forum?id=g0G8DQSBcj), [Fragment-RAG](https://arxiv.org/abs/2411.12078), Simulated Annealing). The authors also compare their method against relevant baselines in their experimental work. Essential References Not Discussed: 1. In general, I think most of the essential references have been discussed or included as relevant baselines. Additional references that I think would strengthen the experimental results include: - [Simulated Annealing](https://en.wikipedia.org/wiki/Simulated_annealing) - [Fragment-RAG](https://arxiv.org/abs/2411.12078) from Lee et al. Proc NeurIPS (2024). - [DyNA-PPO](https://openreview.net/forum?id=HklxbgBKvr) from Angermueller et al. Proc ICLR (2020). - [GFNSeqEditor](https://openreview.net/forum?id=g0G8DQSBcj) from Ghari et al. Proc NeurIPS (2024). That being said, I'm well aware that there are many offline MBO algorithms now proposed in the literature, and I think the authors have already demonstrated experimentally that their method works across a wide variety of different tasks. I would more strongly encourage Simulated Annealing and Fragment-RAG to be included as baselines given their similarity with the authors' proposed method. I feel less strongly that DyNA-PPO and GFNSeqEditor would need to be included as baselines - appropriate discussion in the Related Work (if not already included) is likely more than sufficient. Other Strengths And Weaknesses: In general, I think this is a well-motivated, well-executed, and well-written submission and lean towards recommending acceptance of this work. 
There are some additional experiments and associated discussion, detailed in my earlier comments, that I recommend and that would help strengthen the paper, but overall, I think this submission is sound. ### Strengths 2. The idea to use a DPO-/IPO- like framework for "synthetic" molecule priority sampling is interesting, original, and significant to the best of my knowledge. 3. I appreciate the inclusion of batch hybrid learning results using the MolStitch method in the Appendix - this is a very real-world problem formulation and I think the strong results of the authors' method in this setting strengthen the contributions of this paper. ### Weaknesses 4. The authors propose a method of objective scalarization via sampling from the Dirichlet distribution. A number of other methods exist for scalarization - notably [Chebyshev scalarization](https://arxiv.org/abs/1904.05760) and even uniform sampling - that would be worth including as an ablation study. Other Comments Or Suggestions: None Questions For Authors: 5. In line 147, the authors assume that the offline dataset $\mathcal{D}$ contains all of the evaluated objective scores for each of the $k$ objectives. In practice, I would imagine that the majority of molecules would be experimentally evaluated using only a subset of the $k$ objectives in building the offline dataset. How would the method proposed by the authors (or how have others in prior work) adapt to this setting? 6. Could the authors provide some additional details regarding the rule-based crossover operator used in the unsupervised pre-training stage? I am having a hard time understanding how this process should encourage StitchNet to internalize chemical grammar. More explicitly, my understanding is that molecules can have very similar token representations but represent very different molecules, and similar molecules in token space may have very different validity scores. 7. 
In Section 3.2, the authors mention that they use the oracle score of $m\_{orig}$ as an approximation for $\bar{m}\_{stit}$ because the stitched molecule shares the same molecular fragments as the original molecule. However, I would imagine that there might be objective functions that depend on the properties of the global molecule, or how the fragments are positioned with respect to one another in the molecule - such properties would be lost through the stitching process and make the approximation that $\mathcal{R}$ is similar for the stitched and original molecules invalid. Is this the case? This is more so a minor clarification question on my part - I understand that the authors have cited prior work to support their argument (lines 215-217, right column), but am not as familiar with this prior literature. 8. Is the loss function in Equation (9) indeed a summation over $m\_{orig}\in\mathcal{D}$, or an expectation value? Code Of Conduct: Affirmed. Overall Recommendation: 4
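To make the scalarization ablation suggested in Weakness 4 concrete, here is a minimal pure-Python sketch comparing Dirichlet-sampled linear scalarization with a Chebyshev-style alternative. This is an illustration, not the paper's implementation: the normalized-Gamma route to Dirichlet sampling, the particular Chebyshev form (negated worst weighted deviation from an ideal point), and all function names are assumptions for the sketch.

```python
import random

def sample_dirichlet(k, alpha=1.0, rng=random):
    """Sample preference weights over k objectives by normalizing Gamma draws,
    which is the standard construction of a Dirichlet sample."""
    g = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(g)
    return [x / total for x in g]

def linear_scalarize(scores, weights):
    """Weighted-sum scalarization (as in Dirichlet priority sampling)."""
    return sum(w * s for w, s in zip(weights, scores))

def chebyshev_scalarize(scores, weights, ideal):
    """One common Chebyshev variant: the worst weighted deviation from an
    ideal point, negated so that larger values are better."""
    return -max(w * (z - s) for w, s, z in zip(weights, scores, ideal))

random.seed(0)
w = sample_dirichlet(3)
scores = [0.8, 0.5, 0.9]   # hypothetical normalized objective scores
ideal = [1.0, 1.0, 1.0]
lin = linear_scalarize(scores, w)
cheb = chebyshev_scalarize(scores, w, ideal)
assert abs(sum(w) - 1.0) < 1e-9 and cheb <= 0.0
```

An ablation would simply swap `linear_scalarize` for `chebyshev_scalarize` (or uniform weights for Dirichlet draws) inside the candidate-ranking loop and compare hypervolume downstream.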
Rebuttal 1: Rebuttal: We appreciate your helpful feedback and the opportunity to enhance our manuscript. # Q1: Additional references would strengthen the experimental results. First, regarding **Simulated Annealing (SA)**, it is a probabilistic optimization algorithm that enhances exploration by occasionally accepting worse solutions with a certain probability. Inspired by this, we incorporated an SA mechanism into our MolStitch by occasionally accepting a losing molecule with a certain probability. Due to character limitations, **we have uploaded the full result tables at the following link** [here](https://tinyurl.com/molstitch). As shown in Table 5, MolStitch w/ SA did not yield noticeable performance improvements. We hypothesize this is because our rank-based proxy already performs well in determining winning and losing molecules, and the additional exploration facilitated by SA is somewhat redundant given StitchNet's diversity. Second, we found **Fragment-RAG (f-RAG)** to be interesting. Although the official codebase was not available, we attempted to incorporate its core ideas into our framework. Specifically, f-RAG introduces the use of hard fragments—explicit structural components used to construct new molecules—and soft fragments, which are injected as embeddings to implicitly guide generation. StitchNet naturally supports hard fragments through explicit recombination of molecular substructures. To incorporate soft fragment guidance, we extended StitchNet to condition on embeddings of soft fragments during the stitching process. MolStitch w/ f-RAG achieved improved performance, highlighting the benefits of soft fragment guidance. Third and fourth, while **DyNA-PPO and GFNSeqEditor** are primarily designed for biological sequence design, they offer valuable insights from a model-based optimization perspective. We will include these important references in the related work section. # Q2: Include an ablation study with Chebyshev scalarization. 
Thank you for this valuable suggestion. A similar point was also raised by another reviewer, which prompted us to conduct additional experiments. For a detailed discussion of the results, we respectfully refer you to our response to `Reviewer B6bd, Q1`. # Q3: In practice, most molecules are evaluated on only a subset of objectives. Thank you for this insightful question. Several strategies have been proposed in the literature to address this challenge. * One direct approach is **imputation methods**, which estimate missing values from available data [1] or via pseudo-labeling in a semi-supervised manner [2]. * Another strategy is **multi-task learning** [3], where models are trained to jointly tackle multiple objectives while allowing for missing labels. By sharing knowledge across related objectives, these models can leverage observed objectives to inform the learning of others. Extending our MolStitch to handle missing objective values would be an intriguing direction for future work. We will include these considerations in the limitations and future work section. # Q4: Additional details regarding the rule-based crossover operator. We apologize for any confusion. The rule-based crossover operator ensures that child molecules generated from two parent molecules are chemically valid by following predefined chemical rules and constraints (e.g., SMARTS templates). Consequently, the goal of the unsupervised pre-training stage is to train StitchNet to imitate this rule-based crossover operator using a maximum likelihood estimation (MLE) objective, similar to teacher forcing. In practice, we first generate numerous (parent1, parent2, child) triplets using the rule-based crossover operator. These serve as training examples for StitchNet, which is designed to generate child stitched molecules from given pairs of parent molecules. Although StitchNet may initially produce invalid stitched molecules, it gradually learns to imitate the rule-based crossover through MLE. 
Therefore, as pre-training progresses, StitchNet becomes increasingly proficient at generating chemically valid stitched molecules. # Q5: Assumptions for approximating objective scores of 𝑚̄ₛₜᵢₜ. Thank you for this important question. A similar concern was also raised by another reviewer, and we respectfully refer you to our response to `Reviewer tueg, Q3`. Briefly, while we acknowledge the limitations, we tried to mitigate them by enforcing a similarity threshold to ensure stitched molecules retain sufficient structural overlap. # Q6: Is the loss function in Eq. (9) a summation or an expectation? Thanks for the clarification. Eq.(9) is written as a sum over samples from the finite offline dataset. [1] Lobato et al. “Multi-objective genetic algorithm for missing data imputation.” Pattern Recognit Lett (2015). [2] Huang et al. “Offline data-driven evolutionary optimization based on tri-training.” Swarm Evol Comput (2021). [3] Liu et al. “Structured multi-task learning for molecular property prediction.” AISTATS (2022). --- Rebuttal Comment 1.1: Comment: I thank the authors for their hard work on their rebuttal and manuscript overall. All of my concerns have been sufficiently addressed, and I maintain my initial rating of 4 to indicate that I am in favor of accepting this work. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your time and effort in reviewing our work. Your valuable feedback has significantly helped us improve our manuscript.
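The triplet-generation pipeline described in the Q4 answer can be sketched with a deliberately toy string-level crossover. This stand-in only illustrates how (parent1, parent2, child) examples for teacher-forced MLE pre-training could be assembled; unlike the rule-based operator in the paper (which uses chemistry-aware rules such as SMARTS templates), naive string cuts do not yield chemically valid SMILES, and all names here are illustrative.

```python
import random

def toy_crossover(parent1, parent2, rng=random):
    """Toy stand-in for a rule-based crossover: cut each parent string at a
    random point and join the halves. A real operator would enforce chemical
    validity rules instead of arbitrary string cuts."""
    i = rng.randrange(1, len(parent1))
    j = rng.randrange(1, len(parent2))
    return parent1[:i] + parent2[j:]

def make_triplets(pool, n, rng=random):
    """Build (parent1, parent2, child) training examples of the kind used to
    pre-train a stitching model via maximum likelihood (teacher forcing)."""
    triplets = []
    for _ in range(n):
        p1, p2 = rng.sample(pool, 2)
        triplets.append((p1, p2, toy_crossover(p1, p2, rng)))
    return triplets

random.seed(0)
data = make_triplets(["CCO", "CCN", "CCCC", "c1ccccc1"], n=5)
```

In the real pipeline, StitchNet would then be trained to maximize the likelihood of each `child` given its two parents, gradually imitating the rule-based operator.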
Summary: This paper introduces the Molecular Stitching (MolStitch) framework, designed to address the molecular discovery problem in an offline setting, where an offline dataset is employed without requiring iterative queries to the oracle function. Particularly, MolStitch operates by leveraging existing molecules from the offline dataset to generate novel stitched molecules that combine desirable properties using the StitchNet model. A rank-based proxy model is then employed to compare molecules in the stitched set, determining which is preferable in each pair. This information is used to fine-tune the generative model through Identity Preference Optimization (IPO). The effectiveness of the MolStitch framework is demonstrated through two key offline MOMO experiments, showcasing its potential in molecular optimization. Claims And Evidence: The claims in this paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-suited to the problem. Theoretical Claims: This paper does not include any theorems. Experimental Designs Or Analyses: I have verified the soundness and validity of the experimental designs and analyses. Supplementary Material: I have solely reviewed the appendix. Relation To Broader Scientific Literature: This paper addresses the offline multi-objective molecular optimization (MOMO) problem, which has promising applications in drug discovery and molecular design. Essential References Not Discussed: No essential related works are absent that are crucial for understanding the key contributions of this paper. Other Strengths And Weaknesses: Strengths: - Pre-training on the ZINC dataset helps StitchNet internalize chemical grammar, allowing it to generate chemically valid stitched molecules. - Introduces a rank-based proxy for molecule evaluation instead of traditional value-based methods, followed by preference optimization to fine-tune the generative model. 
- The writing is clear and well-structured. - Empirical results support the effectiveness of the proposed method. Weaknesses: - The paper lacks a related work section in the main text. - While the mean performance over 10 different seeds is strong, the variance is high. Other Comments Or Suggestions: It is a minor point, but the authors could consider rearranging the order of stages. For example, Stage 1 could include the pretraining of the generative model and StitchNet, followed by the two proposed stages. Questions For Authors: 1. The authors should consider including and briefly describing recent works [1] [2] [3] [4] [5] on offline optimization in the Related Works section, particularly [1], as it shares a similar idea of training a rank-based proxy model. 2. Where is the pre-training process of the generative model described in this paper? It would be helpful to clarify this aspect. 3. Using the objective scores of $m_{orig}$ as chemical feedback to approximate the objective scores of $\bar{m}_{stit}$ in Eq.(9) seems questionable. Even minor structural modifications to a molecule can lead to significant variations in certain properties. The authors should provide justification or additional validation for this approximation. [1] Tan, Rong-Xi, Ke Xue, Shen-Huan Lyu, Haopu Shang, Yao Wang, Yaoyuan Wang, Sheng Fu, and Chao Qian. "Offline Model-Based Optimization by Learning to Rank." arXiv preprint arXiv:2410.11502 (2024). [2] Dao, Manh Cuong, Phi Le Nguyen, Thao Nguyen Truong, and Trong Nghia Hoang. "Boosting offline optimizers with surrogate sensitivity." arXiv preprint arXiv:2503.04181 (2025). [3] Nguyen, Tung, Sudhanshu Agrawal, and Aditya Grover. "Expt: Synthetic pretraining for few-shot experimental design." Advances in Neural Information Processing Systems 36 (2023): 45856-45869. [4] Hoang, Minh, Azza Fadhel, Aryan Deshwal, Janardhan Rao Doppa, and Trong Nghia Hoang. "Learning surrogates for offline black-box optimization via gradient matching." 
arXiv preprint arXiv:2503.01883 (2025). [5] Chemingui, Yassine, Aryan Deshwal, Trong Nghia Hoang, and Janardhan Rao Doppa. "Offline model-based optimization via policy-guided gradient search." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 10, pp. 11230-11239. 2024. Code Of Conduct: Affirmed. Overall Recommendation: 3
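The rank-based proxy plus preference optimization described in the summary above can be illustrated with a toy Bradley-Terry-style pairwise objective. This is not the paper's architecture: the logistic loss form, the feature-sum `proxy`, and the helper names are assumptions chosen only to show the kind of pairwise (winner, loser) signal such a proxy provides to IPO/DPO-style fine-tuning.

```python
import math

def pairwise_logistic_loss(score_win, score_lose):
    """Bradley-Terry-style loss: small when the winning molecule's proxy
    score exceeds the losing molecule's, large otherwise."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_win - score_lose))))

def pairwise_accuracy(pairs, score_fn):
    """Fraction of (winner, loser) pairs the proxy ranks correctly."""
    correct = sum(1 for win, lose in pairs if score_fn(win) > score_fn(lose))
    return correct / len(pairs)

# Hypothetical proxy: score a molecule by a toy feature sum.
proxy = lambda feats: sum(feats)

pairs = [([0.9, 0.8], [0.2, 0.1]),   # winner clearly better
         ([0.4, 0.6], [0.5, 0.3])]   # winner narrowly better
assert pairwise_accuracy(pairs, proxy) == 1.0
```

The key point of the ranking view is that only the ordering of the two scores matters, not their absolute values, which is what makes a rank-based proxy robust to score-calibration error.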
Rebuttal 1: Rebuttal: We are truly grateful for your thoughtful feedback. In the following, we carefully respond to each of your comments. # Q1: The authors should include recent works We sincerely thank the reviewer for highlighting these important references [1–5], which are indeed highly relevant from the perspective of offline optimization. We fully agree with the significance of these works and will ensure they are properly cited and discussed in the revised manuscript. Regarding the method presented in [1], RaM, we appreciate the reviewer’s suggestion to emphasize its connection to our work. While we had included RaM as a competing baseline and compared its performance with our MolStitch framework in Tables 1 and 2 of the main manuscript (with additional details in Appendix N), we acknowledge that we had not discussed it explicitly in the related work section. We are grateful for the reviewer’s suggestion and will make sure to address this omission in the revision to provide a more complete overview of related work. Additionally, since an extra page is permitted for the revised manuscript, we plan to move the main related work section into the main text, while retaining the extended discussion in Appendix. # Q2: Where is the pre-training process of the generative model described in this paper? In this work, we used REINVENT as the main backbone generative model, both for our MolStitch and for all baseline offline optimization methods to ensure fair comparison. REINVENT is an RL-based generative model that produces molecules in an auto-regressive manner. As part of its standard training pipeline, REINVENT first undergoes a pre-training process, where it learns to generate chemically valid molecules by capturing the underlying chemical grammar. This pre-training process is critical to ensure that the model can produce syntactically valid molecular structures before any optimization takes place. 
Following pre-training, REINVENT is further optimized via reinforcement learning, where it receives feedback in the form of reward signals based on the desired molecular objectives. This two-stage pipeline—pre-training followed by fine-tuning—has proven highly effective in the molecular domain, enabling REINVENT to achieve robust performance across a range of molecular optimization tasks. We acknowledge that our original manuscript did not provide sufficient detail about the pre-training process for the generative model, and we will include a more comprehensive explanation in the revised manuscript. # Q3: Even minor structural modifications to a molecule can lead to significant variations in certain properties. In our study, StitchNet is trained through self-supervised learning by decomposing a single molecule into two fragments and then recombining them. Because the fragments originate from the same parent molecule, the stitched molecule is likely to preserve essential substructures (core scaffolds) that strongly influence molecular properties. Moreover, our approach is supported by the Similar Property Principle (SPP) [6], a foundational concept in drug discovery and QSAR research, which states that structurally similar molecules often exhibit similar properties. To further ensure sufficient structural similarity, we employed Tanimoto similarity metric between original molecules ($m_{orig}$) and their stitched counterparts ($\bar{m}_{stit}$). Prior research [7] has indicated that molecules with a Tanimoto similarity greater than 0.887 commonly demonstrate similar biological activities. However, we acknowledge exceptions to this principle, particularly in cases involving stereochemistry or activity cliffs, where minor structural changes can lead to major shifts in molecular properties. 
To address these limitations, we plan to incorporate advanced fingerprints—such as 3D-aware or chirality-aware descriptors—that capture more detailed structural and spatial information. We will include these considerations in the limitations and future work section of our revised manuscript. # Q4: Authors could consider rearranging the order of stages. Thank you for the helpful suggestion. We agree that rearranging the order of stages would improve the logical flow, and we will revise it accordingly. [1] Tan et al. “Offline Model-Based Optimization by Learning to Rank.” ICLR (2025). [2] Dao et al. “Boosting offline optimizers with surrogate sensitivity.” ICML (2024). [3] Nguyen et al. “Expt: Synthetic pretraining for few-shot experimental design.” NeurIPS (2023). [4] Hoang et al. “Learning surrogates for offline black-box optimization via gradient matching.” ICML (2024). [5] Chemingui et al. “Offline model-based optimization via policy-guided gradient search.” AAAI (2024). [6] O’Boyle, Sayle. “Comparing structural fingerprints using a literature-based similarity benchmark.” J Cheminform (2016). [7] Cheng et al. “Investigating the correlations among chemical structures, bioactivity profiles and molecular targets.” Bioinformatics (2010). --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. My concerns have been resolved, and I will support accepting the paper. --- Reply to Comment 1.1.1: Comment: We are truly grateful for the time and effort you dedicated to reviewing our work. Your valuable comments provided us with many insights and significantly helped us enhance the quality of our manuscript. Thank you once again for your thoughtful feedback.
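For illustration, the Tanimoto similarity filter discussed in Q3 above can be sketched as a minimal toy version that operates on sets of fingerprint on-bit indices (real pipelines would compute Morgan or similar fingerprints with a cheminformatics toolkit; the 0.887 threshold is the one reported by Cheng et al. [7]):

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity between two fingerprints,
    each given as the set of indices of its on bits."""
    if not fp_a and not fp_b:
        return 1.0  # two empty fingerprints are trivially identical
    return len(fp_a & fp_b) / len(fp_a | fp_b)


def is_similar_enough(fp_orig: set, fp_stitched: set,
                      threshold: float = 0.887) -> bool:
    """Keep a stitched molecule only if it stays structurally close
    to its original parent (threshold from Cheng et al. [7])."""
    return tanimoto(fp_orig, fp_stitched) >= threshold
```

Function names here are illustrative, not taken from the MolStitch codebase.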
Summary: The paper introduces MolStitch, a framework for offline multi-objective molecular optimization (MOMO). Key contributions include StitchNet, which generates "stitched molecules" by combining fragments from an offline dataset; a rank-based proxy model for pairwise molecule evaluation; and preference optimization techniques (e.g., IPO) for fine-tuning the generative model. Priority sampling via a Dirichlet distribution is used to explore diverse trade-offs among objectives. Experiments on molecular property (MPO) and docking score optimization tasks demonstrate improvements in hypervolume (HV) and R2 metrics over baselines, with ablation studies validating the framework’s components. The method addresses challenges in offline settings where wet-lab evaluations are costly and slow. ## update after rebuttal The authors have addressed my comments, including docking score optimization experiments using SMINA and clarification on how StitchNet and the rank-based proxy contribute to final performance. Therefore, I will raise my score to accept. Claims And Evidence: The claims are supported by experiments across multiple tasks, ablation studies, and comparisons to diverse baselines. However, the evidence has limitations: 1. Benchmark coverage: it is not clear why the recent SMINA docking benchmark is not included. 2. Component contributions: The improvements from StitchNet vs. other data augmentation methods (e.g., crossover operators) are shown, but the analysis lacks depth in explaining why neural-based stitching outperforms other genetic algorithm alternatives such as Saturn. 3. Rank-based proxy: The proxy’s superiority over score-based variants is demonstrated via accuracy metrics, but its impact on downstream optimization (beyond pairwise classification) is not thoroughly analyzed. Methods And Evaluation Criteria: The methods are sensible for offline MOMO, leveraging stitching and preference optimization. 
However: - Benchmark choice: The omission of SMINA (a docking benchmark not discussed in related work) is unexplained, weakening the docking score evaluation’s credibility. - Metrics: HV and R2 are appropriate for multi-objective tasks, but the paper does not clarify why these metrics were prioritized over others (e.g., success rate, diversity scores). Theoretical Claims: The paper does not present theoretical claims, focusing instead on empirical validation. No theoretical issues are noted. Experimental Designs Or Analyses: Semi-offline experiments: Results are deferred to the appendix, limiting insight into this practically relevant setting. - Backbone model: While REINVENT is justified as a backbone, newer architectures (e.g., GFlowNets, Mamba) are only briefly discussed in appendices. Supplementary Material: I reviewed the appendix which includes pre-training details, ablation studies, additional results (e.g., semi-offline), and implementation specifics. Relation To Broader Scientific Literature: The work connects to offline MBO and preference optimization literature but could better contextualize: - Trajectory stitching: The analogy to RL trajectory stitching is under-explored; differences in molecular vs. RL state spaces are not discussed. - Preference optimization: While DPO/IPO are applied, their adaptation to molecular design (vs. language models) lacks critical analysis and insights. Essential References Not Discussed: - SMINA docking tool: Critical for docking score benchmarks, as highlighted in Ciepliński et al.’s work. - Recent preference-based molecular design: Works like the ones below apply preference optimization to molecular optimization but are not cited. 1. Extracting medicinal chemistry intuition via preference machine learning Oh-Hyeon Choung, Riccardo Vianello, Marwin Segler, Nikolaus Stiefl & José Jiménez-Luna Nature Communications volume 14, Article number: 6651 (2023) 2. 
Preference Optimization for Molecular Language Models Other Strengths And Weaknesses: Strengths: The integration of stitching, rank-based evaluation, and preference optimization is novel. The semi-offline experiments and diversity analysis (e.g., Bemis-Murcko scaffolds) are practical contributions. Weaknesses: The components (stitching, ranking, priority sampling) are largely adaptations of existing ideas, limiting conceptual novelty. The writing is dense, with insufficient intuition for non-experts. Other Comments Or Suggestions: The paper presents a technically sound framework with thorough empirical validation, but the incremental nature of contributions and benchmark omissions temper enthusiasm. Addressing the questions above could strengthen the case for acceptance. Questions For Authors: 1. Benchmark justification: Why was SMINA not used for docking evaluation, despite its relevance in the cited benchmark? 2. StitchNet vs. crossover: How does StitchNet’s neural approach fundamentally differ from genetic algorithm-based crossover operators in exploring chemical space? 3. Component dominance: Are the gains primarily from StitchNet, the rank-based proxy, or their combination? Ablation suggests synergy, but a breakdown would clarify. Code Of Conduct: Affirmed. Overall Recommendation: 4
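For readers unfamiliar with the hypervolume (HV) metric referenced throughout this review, here is a minimal two-objective sketch (assuming both objectives are maximized and a fixed reference point; actual benchmarks use general N-objective implementations):

```python
def hypervolume_2d(points, ref=(0.0, 0.0)):
    """Area jointly dominated by a set of 2-D points (both objectives
    maximized), measured against the reference point `ref`."""
    # Discard points that do not strictly dominate the reference point.
    pts = [p for p in points if p[0] > ref[0] and p[1] > ref[1]]
    hv, best_y = 0.0, ref[1]
    # Sweep in decreasing x; each point adds the rectangle that is new in y.
    for x, y in sorted(pts, reverse=True):
        if y > best_y:
            hv += (x - ref[0]) * (y - best_y)
            best_y = y
    return hv
```

A larger HV means the candidate set pushes the Pareto front further out, which is why it is a natural headline metric for multi-objective tasks.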
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful feedback and the opportunity to clarify the key aspects of our study. # Q1: Why was SMINA not used for docking evaluation? In our original study, we employed QuickVina (QVina) [1] for docking score evaluation, which is a widely recognized molecular docking tool derived from AutoDock Vina. Our decision to use QVina was motivated by its widespread adoption and following established practices from prior studies [2,3]. We appreciate your mention of SMINA [4]. Upon investigation, we found that SMINA is also derived from AutoDock Vina and provides additional capabilities, such as imposing constraints on ligand interactions. Due to its high relevance and in response to your valuable suggestion, we conducted additional docking score optimization experiments using SMINA. Due to character limitations, we have uploaded the full results tables at the following link [here](https://tinyurl.com/molstitch). As demonstrated in the table, **MolStitch maintained its superior performance even when evaluated using SMINA**. These additional results further validate the robustness of our framework across different docking evaluation tools. ||parp1|jak2|braf|fa7|5ht1b| |-|:-:|:-:|:-:|:-:|:-:| ||HV|HV|HV|HV|HV| |REINVENT|0.522|0.476|0.503|0.417|0.514| |BootGen|0.534|0.498|0.518|0.425|0.530| |MolStitch|0.550|0.539|0.530|0.450|0.544| We sincerely thank you for bringing SMINA to our attention, and we will include these additional results along with a description in our revised manuscript. # Q2: What is the novelty of StitchNet, and how does it differ from genetic algorithm-based crossover operators? Conventional genetic algorithm-based crossover operators typically rely on hand-crafted, rule-based procedures to recombine fragments of parent molecules. Although these rules incorporate chemical intuition and domain expertise, they remain fixed throughout the search process and do not adapt based on the quality of generated outcomes. 
Consequently, rule-based crossover operators may explore chemical space less effectively, as they lack mechanisms to refine and favor recombination strategies that produce higher-quality molecules. In contrast, StitchNet employs a neural network architecture trained via self-supervised learning to adaptively discover effective fragment recombination strategies. **Specifically, StitchNet leverages chemical feedback—using objective scores from an offline dataset—to guide the stitching process**, ensuring that resulting molecules are chemically valid and likely to exhibit desirable properties. This self-supervised learning mechanism allows StitchNet to refine its recombination strategies based on chemical feedback, enabling it to explore promising regions of chemical space. # Q3: Are the gains primarily from StitchNet, the rank-based proxy, or their combination? Thank you for the insightful question. As noted, the combination of StitchNet and the rank-based proxy is indeed synergistic and central to the effectiveness of our framework. However, we agree that a clearer breakdown of their individual contributions is valuable. In Appendix C.6 of our manuscript, we provided a comprehensive ablation study that isolates the effects of each component. From these results, we observed that the addition of the rank-based proxy alone yields greater performance improvements compared to the addition of StitchNet alone. To clarify the ablation study experimental setup: * In the **rank-based proxy alone** setting, new molecules are sampled directly from the generative model (`model.sample()`) and evaluated using the rank-based proxy. The generative model is then updated through preference optimization based on this proxy feedback. * In the **StitchNet alone** setting, new stitched molecules are generated by StitchNet and evaluated using a score-based proxy that directly estimates their objective scores. 
The generative model is subsequently updated using these estimated scores as pseudo-rewards. We believe the rank-based proxy alone performs more effectively than StitchNet alone because it provides robust and reliable feedback, enabling stable and meaningful updates to the generative model. In contrast, although StitchNet alone enhances molecular diversity by producing novel stitched molecules, the score-based proxy often struggles to evaluate them accurately, leading to less reliable feedback. **In essence, the full potential of our framework is achieved when these two components are combined, effectively leveraging their complementary strengths.** [1] Alhossary et al. “Fast, accurate, and reliable molecular docking with QuickVina 2.” Bioinformatics (2015). [2] Guo et al. “Saturn: Sample-efficient generative molecular design using memory manipulation.” arXiv (2024). [3] Lee et al. “Drug Discovery with Dynamic Goal-aware Fragments.” ICML (2024). [4] Cieplinski et al. “Generative models should at least be able to design molecules that dock well: A new benchmark.” JCIM (2023).
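As a rough illustration of the rank-based proxy idea discussed in Q3 — predicting which of two molecules is superior rather than regressing absolute property scores — a Bradley-Terry-style pairwise formulation could look like this (a generic sketch, not the authors' actual implementation):

```python
import math


def pref_prob(score_a: float, score_b: float) -> float:
    """P(molecule a is preferred over molecule b) under a
    Bradley-Terry / logistic model of proxy scores."""
    return 1.0 / (1.0 + math.exp(score_b - score_a))


def pairwise_loss(score_winner: float, score_loser: float) -> float:
    """Negative log-likelihood that the known-better molecule wins.
    Training on such pairs needs only relative labels, which is the
    robustness argument made for the rank-based proxy above."""
    return -math.log(pref_prob(score_winner, score_loser))
```

Minimizing this loss over labeled pairs pushes the proxy to widen the score margin between better and worse molecules, without ever fitting absolute (and possibly out-of-distribution) property values.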
Summary: This paper introduces MolStitch, a framework for offline molecular optimization that generates novel molecules by "stitching" fragments from an existing offline dataset, eliminating the need for iterative oracle queries. Inspired by trajectory stitching in offline reinforcement learning, MolStitch uses StitchNet to combine desirable properties from parent molecules and a rank-based proxy to evaluate molecules through pairwise comparisons. To address multi-objective optimization, the framework employs priority sampling with a Dirichlet distribution to explore diverse trade-offs along the Pareto front. Experimental results demonstrate MolStitch's effectiveness in outperforming existing methods across various offline molecular optimization benchmarks. ## update after rebuttal The authors provided a strong rebuttal with new experiments and detailed clarifications. They showed that MolStitch benefits from Chebyshev scalarization in complex multi-objective settings, both in performance and molecular diversity. They also added comparisons to ICML 2024 baselines, including Multiple Models + COMs/RoMA and REINVENT-BO, where MolStitch remained superior. Finally, their justification of the design choices—molecular stitching, rank-based proxy, and priority sampling—was well-motivated by domain-specific insights. This is why I raised my score to accept. Claims And Evidence: Claim: Linear scalarization with priority sampling enables effective exploration of trade-offs in multi-objective optimization. Issue: While priority sampling with a Dirichlet distribution is effective for generating diverse weight configurations, the reliance on linear scalarization assumes a convex Pareto front. This assumption may not hold in practice, as Pareto fronts in molecular optimization can often be non-convex. The paper does not provide evidence or discussion on how this limitation affects performance in such scenarios. 
Including additional experiments or theoretical analysis on non-convex Pareto fronts would strengthen this claim and provide a more comprehensive understanding of the framework's capabilities. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of offline molecular optimization. MolStitch introduces StitchNet to generate novel molecules by combining fragments from an offline dataset and employs a rank-based proxy for robust evaluation through pairwise comparisons. These methods effectively address the challenge of optimizing molecules without repeated oracle queries. The evaluation uses standard benchmarks (e.g., molecular property and docking score optimization) and metrics (e.g., hypervolume, R2). However, one limitation is the use of linear scalarization for multi-objective optimization, which assumes a convex Pareto front—an assumption that may not always hold in practice. Theoretical Claims: They look good. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are sound and valid, with appropriate benchmark datasets, evaluation metrics, and comparisons to state-of-the-art methods. The ablation studies further strengthen the validity of the results. Supplementary Material: It looks good. Relation To Broader Scientific Literature: The paper addresses offline multi-objective optimization (MOO) in molecular discovery, building on prior work in offline single-objective optimization (SOO) and preference optimization (e.g., DPO, IPO). It introduces MolStitch, which combines trajectory stitching from offline RL and a rank-based proxy for robust evaluation. While the individual components are not entirely novel, their integration into offline MOO is timely, especially given the limited work in this area, as highlighted by the recent ICML 2024 "Offline Multi-Objective Optimization". 
That paper underscores the importance of offline MOO in real-world applications like molecule and protein design, proposing benchmarks to advance the field. MolStitch’s focus on avoiding costly oracle evaluations makes it a practical and valuable tool for practitioners, even if it does not introduce groundbreaking innovations. Essential References Not Discussed: Yes, the paper misses a key reference: the ICML 2024 paper "Offline Multi-Objective Optimization" by Ke Xue et al. This work is highly relevant because it introduces a wide range of baselines for offline MOO, including methods like Multiple Models + COMs, Multiple Models + RoMA, and Multi-head approaches. These baselines are essential for comparing and evaluating new methods like MolStitch. By not citing this work, the paper misses an opportunity to position MolStitch within the current state-of-the-art and demonstrate its performance against established baselines. Including this reference would provide a stronger foundation for understanding and validating the contributions of MolStitch. Other Strengths And Weaknesses: Strengths: 1- Practical Relevance: The paper addresses a highly relevant real-world problem: offline molecular optimization without costly oracle evaluations. This is significant for applications like drug discovery, where wet-lab experiments are time-consuming and expensive. 2- Clarity and Presentation: The paper is well-written and clearly explains the motivation, methodology, and experimental results. The use of ablation studies and visualizations helps readers understand the contributions and effectiveness of the proposed framework. 3- Strong Empirical Results: The paper demonstrates strong performance on benchmark tasks, outperforming several state-of-the-art methods. This empirical validation strengthens the significance of the work. 
Weaknesses: 1- Linear Scalarization Limitation: The reliance on linear scalarization assumes a convex Pareto front, which may not hold in many real-world scenarios. The paper does not explore alternative scalarization techniques or provide evidence of performance on non-convex Pareto fronts, limiting its generalizability. 2- Lack of Comparison with Offline MOO Baselines: The paper does not compare MolStitch with the baselines proposed in the ICML 2024 paper on offline multi-objective optimization. Including these comparisons would provide a clearer picture of how MolStitch performs relative to other state-of-the-art methods in offline MOO. 3- Limited Justification for Combination of Ideas: While the paper combines ideas from offline RL (trajectory stitching), preference optimization (rank-based proxy), and multi-objective optimization (priority sampling), it does not provide a detailed explanation of why this specific combination is the best fit for molecular applications in a multi-objective setting. A more thorough discussion of the rationale behind these choices would strengthen the paper. Other Comments Or Suggestions: No. Questions For Authors: 1- The framework uses linear scalarization, which assumes a convex Pareto front. How does MolStitch perform on problems with non-convex Pareto fronts, and have you explored alternative scalarization techniques (e.g., Chebyshev)? 2- The ICML 2024 offline MOO paper proposes several baselines (e.g., Multiple Models + COMs/RoMA). Why were these not included in your experiments, and how does MolStitch compare to them in terms of hypervolume (HV) and R2 metrics? 3- Can you explain why the combination of trajectory stitching, rank-based proxy, and priority sampling is particularly effective for molecular discovery in a multi-objective setting? Are there domain-specific insights that justify this design? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for your thoughtful feedback. Below, we respectfully provide point-by-point responses to each of your comments. # Q1: Have you explored the Chebyshev scalarization technique? In our original study, we employed linear scalarization because it is one of the most fundamental scalarization techniques, aiming to establish the applicability of the MolStitch framework at a fundamental level. In response to your suggestion, we conducted additional experiments using MolStitch with Chebyshev scalarization in place of linear scalarization. Due to character limitations, we have uploaded the full results tables at the following link [here](https://tinyurl.com/molstitch). As shown in the table, **MolStitch w/ Chebyshev** achieved comparable performance to the linear approach in the two-objective setting. Notably, it outperformed the linear approach in the more challenging three-objective and four-objective settings. ||JNK3+GSK3β|JNK3+GSK3β+QED|JNK3+GSK3β+QED+SA| |-|:-:|:-:|:-:| ||HV|HV|HV| |w/ Linear|0.579|0.403|0.352| |w/ Chebyshev|0.580|0.440|0.397| We believe this is because in the two-objective setting, the Pareto front remains relatively simple, limiting the observable benefits of Chebyshev. As the number of objectives increases, the Pareto front becomes more complex and non-convex, which allows the advantages of Chebyshev scalarization to become more pronounced. Overall, these results show that MolStitch works well with both linear and advanced Chebyshev scalarization techniques. # Q2: Include baselines from the ICML 2024 offline MOO paper (Multiple Models + COMs/RoMA). Following your suggestion, we conducted additional experiments with Multiple Models + COMs/RoMA from the offline MOO paper. As shown in the table, **Multiple Models + COMs/RoMA** demonstrated better performance than their single model counterparts. 
This finding aligns with results throughout our paper, where ensemble proxy generally outperformed single proxy methods such as Grad. ||JNK3+GSK3β|JNK3+GSK3β+QED|JNK3+GSK3β+QED+SA| |-|:-:|:-:|:-:| ||HV|HV|HV| |Single Model+COMs|0.479|0.205|0.171| |Single Model+RoMA|0.492|0.198|0.169| |Multiple Models+COMs|0.489|0.211|0.190| |Multiple Models+RoMA|0.499|0.214|0.188| |REINVENT-BO|0.472|0.232|0.205| |MolStitch|0.579|0.403|0.352| Additionally, inspired by the strong Bayesian optimization (BO) performance in the offline MOO paper, we implemented a comparable BO method using REINVENT as the generative backbone. We refer to this as REINVENT-BO. As shown in the table, **REINVENT-BO** achieved competitive performance, highlighting its potential in this domain. However, our MolStitch framework consistently outperformed all these baselines, thereby reaffirming the superiority and efficacy of our approach. # Q3: Can you explain why the combination of trajectory stitching, rank-based proxy, and priority sampling is particularly effective for molecular discovery in a multi-objective setting? * Inspired by trajectory stitching, we propose a novel **molecular stitching** operation that recombines fragments from two parent molecules to create new stitched molecules. In molecule design, it is well established that core molecular substructures—often referred to as privileged scaffolds—play a critical role in determining key biological properties [1]. By recombining these scaffolds from diverse parents, molecular stitching creates molecules that inherit beneficial traits, enabling a broader exploration of chemical space and enhancing the discovery of novel, diverse, and high-quality candidates. * However, evaluating these newly stitched molecules presents a practical challenge: they often fall outside the distribution of the training data, making it difficult for conventional proxy models to reliably approximate their property scores. 
To address this, we introduce a **rank-based proxy**. Rather than regressing absolute property values, our proxy learns to predict which of two molecules is more likely to be superior with respect to the target properties. This classification-style formulation simplifies the learning task and enhances the robustness of the proxy model. * In a multi-objective setting, fixed weight configurations often struggle to balance competing objectives like potency and safety. Our **priority sampling** mechanism addresses this by generating diverse weight configurations that emphasize different objectives. This approach encourages the model to maintain a diverse population of candidate molecules that span a wide range of trade-offs. For example, in drug discovery, one scenario may prioritize potency over safety (e.g., cancer treatments), while another may require the opposite (e.g., pediatric applications). Priority sampling enables the exploration of these diverse trade-offs among multiple objectives, providing domain experts with a broader range of candidate molecules. [1] Welsch et al. “Privileged scaffolds for library design and drug discovery.” Curr Opin Chem Biol. (2010). --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal and for conducting additional experiments with Chebyshev scalarization. Your efforts to extend the analysis beyond linear scalarization are appreciated. The results suggest that Chebyshev scalarization provides an advantage in higher-dimensional objective spaces, but a more in-depth discussion of why this occurs should be included in the revised paper. You mention that the complexity of the Pareto front plays a role, but it is important to further explain how Chebyshev scalarization specifically handles non-convexity in this setting. For instance, does it lead to a more diverse or well-distributed set of solutions compared to linear scalarization? 
Clarifying this point would strengthen the discussion and provide better insights into when and why Chebyshev should be preferred. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful comments. Thanks to you, we have gained valuable insights and learned a great deal. # Q4: More detailed discussion of Chebyshev scalarization. To the best of our understanding, Chebyshev scalarization operates by minimizing the maximum weighted deviation from a reference point (e.g., the ideal objective values). Unlike linear scalarization, which combines all objectives into a weighted sum, Chebyshev scalarization focuses on each objective separately and attempts to minimize the highest discrepancy among them. By targeting the worst-performing objective at each step, Chebyshev aims to balance trade-offs among objectives and can explore non-convex (concave) regions of the Pareto front—areas that linear scalarization often misses due to its inability to bend into such regions. As a result, Chebyshev scalarization tends to produce a more diverse and well-balanced set of solutions. In molecular optimization, this can lead to molecules with diverse scaffolds and balanced performance, even when objectives conflict with each other. To empirically assess the advantages of Chebyshev scalarization in the offline multi-objective molecular optimization task, we conducted an additional analysis comparing the diversity achieved by the two scalarization methods. Specifically, we measured the number of unique Bemis-Murcko (BM) scaffolds and carbon skeletons [1] among molecules on the Pareto front, comparing results from MolStitch using Chebyshev versus linear scalarization in the four-objective setting (JNK3+GSK3β+QED+SA). As shown in the table below, Chebyshev scalarization achieved higher diversity across both the BM scaffold and carbon skeleton metrics, supporting the assertion that it facilitates more effective exploration of diverse regions within the search space. 
| | BM scaffold | Carbon skeletons | |---------------------|:---------------:|:------------------:| | w/ Linear | 3453 | 1664 | | w/ Chebyshev | 3836 | 1976 | Lastly, we would like to share our perspective on when each scalarization method might be more appropriate: * **Linear scalarization** remains effective when the Pareto front is expected to be convex (e.g., in two-objective problems with mild conflict), or when prior domain knowledge indicates that one objective should be prioritized. In these cases, linear scalarization provides a straightforward, intuitive, and interpretable approach to guide the optimization process. * **Chebyshev scalarization** is well-suited for problems involving high dimensionality or multiple conflicting objectives. In such cases, Chebyshev encourages the exploration of diverse regions within the search space and aims to produce well-balanced solutions that perform reasonably well across all objectives. We will incorporate this discussion, along with the additional diversity analysis, into the revised manuscript. Thank you once again for taking the time to review our work and for providing such thoughtful and constructive feedback. [1] Bemis GW, Murcko MA. “The properties of known drugs. 1. Molecular frameworks.” J Med Chem (1996).
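The two scalarization schemes compared in this thread can be written down concisely (a generic sketch for maximization objectives; `ideal` is the per-objective ideal point that the Chebyshev form measures gaps against):

```python
def linear_scalarize(objs, weights):
    """Weighted-sum scalarization. Simple and interpretable, but it can
    only reach Pareto points on the convex hull of the front."""
    return sum(w * f for w, f in zip(weights, objs))


def chebyshev_scalarize(objs, weights, ideal):
    """Chebyshev scalarization: the (negated) largest weighted gap to the
    ideal point, so maximizing it shrinks the worst objective's gap and
    can reach non-convex regions of the Pareto front."""
    return -max(w * (z - f) for w, z, f in zip(weights, ideal, objs))
```

Note how a balanced candidate can out-score an imbalanced one under Chebyshev even when their weighted sums would favor the imbalanced one, which matches the diversity behavior reported in the table above.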
A Cognac Shot To Forget Bad Memories: Corrective Unlearning for Graph Neural Networks
Accept (poster)
Summary: The authors propose a methodology, named Cognac, aimed at enhancing the fairness, robustness, and accuracy of Graph Neural Networks (GNNs) through corrective unlearning techniques applied to specific nodes. Cognac consists of two primary components: 1. Contrastive Unlearning on Graph Neighborhoods (CoGN): This component identifies nodes influenced by entities that should be removed and strategically repositions them away from those entities, while concurrently moving them closer to normal nodes through contrastive learning. 2. Ascent Descent De-coupled (ACDC): This technique leverages gradient ascent and descent methods to induce effective unlearning for sets of nodes designated for removal. The authors support their proposed components with sufficient mathematical justification and rigorous proofs, demonstrating the robustness and theoretical soundness of the proposed methodology. Extensive experiments conducted across diverse benchmark datasets and multiple GNN models illustrate the superiority of Cognac over existing approaches. Notably, the proposed methodology maintains strong performance even when the number of nodes designated for removal is extremely limited and consistently scales effectively to significantly larger datasets. 
Beyond these theoretical validations, the authors also perform extensive experimental evaluations demonstrating that their proposed methods outperform existing state-of-the-art techniques in various experimental scenarios. Methods And Evaluation Criteria: The authors evaluate their proposed method using a recently introduced metric from the literature (ICLR, 2024). They clearly indicate the sources of benchmark datasets widely adopted in prior research, ensuring transparency in their evaluations. Moreover, the authors acknowledge that, given the inherent characteristics of unlearning methods, guaranteeing a fair comparison can be challenging. To address this issue, they openly share multiple components essential for reproducibility, effectively demonstrating the robustness of their experimental setup. Finally, the soundness of the experimental approach is further strengthened by thoroughly comparing their methodology against five state-of-the-art benchmark models and four additional reference models. Theoretical Claims: The authors provide rigorous mathematical justification regarding their proposed method. First, they clearly formalize and prove how the Wasserstein-based interclass confusion attack affects class distributions by quantifying changes via mathematical analysis, thereby establishing a strong theoretical foundation. Additionally, the authors offer mathematical proofs to efficiently identify specific cases where the representation of an arbitrary node could be influenced, significantly optimizing the node-selection step in contrastive unlearning (CoGN). Finally, they define a loss function suitable for effective contrastive unlearning and rigorously prove important theoretical properties, including differentiability, convexity, and bounded gradients, thereby further solidifying the soundness and reliability of their approach. 
Experimental Designs Or Analyses: The authors demonstrate the validity of their proposed method by comparing it with four primary GNN unlearning techniques (original model, retrain, finetune, i.i.d.) and five state-of-the-art benchmark models. They emphasize that the widely adopted retrain baseline, as indicated by recent research, does not represent an optimal standard for corrective unlearning. In their evaluation, the authors convincingly show that their proposed method outperforms the retrain-based approach. Furthermore, the authors introduce an oracle baseline trained on the full, unmodified dataset to represent an upper bound for corrective unlearning performance. The authors' method demonstrates strong performance relative to both the retrain and oracle baselines, further validating the effectiveness and rigor of their experimental evaluations. Supplementary Material: Read all of it. Relation To Broader Scientific Literature: The paper extensively compares to prior graph unlearning methods. For instance, GNNDelete by Cheng et al. (2023) is taken as a baseline. By doing so, the authors clarify that they are tackling the corrective post-hoc unlearning scenario rather than the more studied privacy-driven exact unlearning scenario, which is an important distinction. Furthermore, the paper relates Cognac to general unlearning methods like SCRUB (Kurmanji et al., 2023). By including SCRUB, they acknowledge the broader machine unlearning literature. The fact that Cognac, with graph-specific insight, outperforms SCRUB in this domain highlights the contribution that graph structure awareness brings. In positioning relative to broader literature, the authors also mention robust training and concept erasure approaches in the introduction. 
They cite works on robust pre-training for GNNs (adversarial training by Yuan et al., 2024; defense by Zhang et al., 2023) and concept erasure in vision (Belrose et al., 2023) to clarify that while those aim to remove unwanted influences, unlearning is distinct in that it is post-hoc and does not assume knowledge of the specific concept or attack in advance. This helps readers see how their work differs from training a GNN to be robust to attacks from the start: unlearning instead fixes the model after the fact. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths 1. The authors propose a novel approach, named Cognac, comprising two key components—Contrastive Unlearning on Graph Neighborhoods (CoGN) and Ascent Descent De-coupled (ACDC)—to address corrective unlearning in Graph Neural Networks (GNNs). The authors provide rigorous mathematical justifications and proofs supporting both components, demonstrating their theoretical soundness and validity clearly and convincingly. 2. Through extensive experimental evaluation on various benchmark datasets and models, the authors demonstrate the superiority of Cognac compared to existing baseline unlearning approaches. Notably, the experiments include a wide array of benchmark datasets and state-of-the-art models. The authors also highlight the scalability of their approach by showing its effectiveness on datasets up to eight times larger than standard benchmarks, further supporting the method's robustness. 3. The authors demonstrate that Cognac achieves superior performance even when compared against the oracle model, which represents the upper bound for corrective unlearning performance. Remarkably, Cognac maintains strong performance despite having access to only 5% of the manipulated data, clearly highlighting the method’s effectiveness and efficiency. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough and positive assessment of our work! We are thrilled that you recognized the novelty, mathematical soundness, experimental rigor, and effectiveness of our proposed method. In response to the other reviewers, we have added more experiments: (1) a feature trigger poison (reviewer igz9), (2) ablations of CoGN and AC/DC separately (reviewers igz9, rWWk), and (3) ablations of the strategy for identifying affected nodes (reviewers igz9, rWWk). If you have any specific questions or concerns that could further strengthen your support for our work, we would be happy to address them.
Summary: The paper addresses the challenge of Corrective Unlearning in Graph Neural Networks (GNNs). While GNNs are widely used across various applications, their message-passing mechanism makes them vulnerable to adversarial manipulations and erroneous data, as errors can propagate throughout the graph. To mitigate this, the authors introduce Cognac, a novel method designed to unlearn the effects of manipulated data, even when only a small fraction (5%) of the manipulated set is identified. Cognac significantly outperforms existing approaches, restoring model performance to levels close to those achieved with fully corrected data. Moreover, it is 8× more efficient than retraining the model from scratch. Claims And Evidence: Generalizability to Diverse Manipulations and Tasks: The paper primarily examines targeted binary class confusion attacks on edges and nodes within node classification tasks. An evaluation on a broader range of attack types and graph-related tasks is missing, which would be needed to validate Cognac's overall effectiveness and applicability. Methods And Evaluation Criteria: yes Theoretical Claims: Yes, Theorem 3.1 analyzes the impact of Interclass Confusion attacks on class representations. Lemma 3.2 establishes the locality of manipulation propagation in GNNs. Experimental Designs Or Analyses: Evaluation Scope: The paper evaluates targeted binary class confusion attacks on both edges and nodes, covering key manipulation types. However, expanding the analysis to a wider range of attack scenarios could further strengthen the evaluation. Identification of Affected Nodes: The paper employs a heuristic approach to identify affected nodes by inverting features and observing changes in output logits. While this method appears reasonable, a more rigorous validation against alternative approaches would enhance confidence in its effectiveness. The paper does not explicitly include ablation studies to assess the contribution of individual components (CoGN and AC-DC). 
Incorporating such studies would strengthen the justification for the method’s design choices and provide deeper insights into their impact. Supplementary Material: Appendix A.1: Reviewed the proof for Lemma 3.2, which examines the propagation of manipulations within an n-hop neighborhood of poisoned nodes. Verified the logical flow of the proof. Appendix A.3: Examined the assumptions and attack details for Theorem 3.1, which analyzes the impact of Interclass Confusion attacks, ensuring clarity on the conditions under which the theorem holds. Relation To Broader Scientific Literature: The paper builds upon the recently introduced problem of Corrective Machine Unlearning (https://arxiv.org/abs/2402.14015), which aims to remove the adverse effects of manipulated data while being agnostic to the type of manipulations. This approach operates with access to only a representative subset of the manipulated data for unlearning, making it more practical for real-world applications where full knowledge of manipulations is often unavailable. Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths: 1. The paper provides a theoretical analysis of adversarial attacks on GNNs, specifically examining the effects of Interclass Confusion attacks on the Wasserstein-2 distance between class embedding distributions. 2. Cognac effectively mitigates the impact of manipulated data, even when only 5% of the manipulated set is identified, outperforming existing GNN unlearning methods. 3. The proposed method restores most of the performance of a strong oracle trained on fully corrected data, even surpassing retraining from scratch when the deletion set is excluded. 4. Computational Efficiency: Cognac is 8× more efficient than retraining and scales effectively to large datasets. Weakness: 1. 
Scalability to Extremely Large Graphs: While Cognac demonstrates strong scalability to large datasets, its performance on massive graphs with billions of nodes, which are common in real-world applications like social networks, remains largely untested. 2. The study primarily focuses on node classification tasks and targeted binary class confusion attacks on edges and nodes, which may not fully capture all possible manipulation scenarios. 3. Dependence on Identified Manipulations: Although Cognac remains effective even when only 5% of manipulated entities are identified, its success still depends on having some prior knowledge of the manipulations. Other Comments Or Suggestions: It would be great to have more experimental evaluation on really large-scale datasets. Questions For Authors: no Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s recognition of our work, particularly noting our theoretical analysis of adversarial attacks on GNNs, the effective mitigation of data manipulation through Cognac, and its computational efficiency compared to retraining. --- ## Expanding the analysis to a wider range of attack scenarios Thank you for the suggestion! We've added preliminary experiments on feature poisoning backdoor attacks across 3 representative datasets, now covering all major poisoning types (label, graph structure, and feature). The results can be found at https://imgur.com/a/nJnsqWN. Our feature attack injects a trigger pattern into the feature vectors of select nodes, assigns a fixed spurious label, and reduces accuracy on the target distribution. Despite not being the strongest possible attack (also addressed in the limitations section, L422-426), our implementation provides sufficient signal for evaluation - most unlearning methods struggle, while Cognac matches and even outperforms retraining performance. Retrain fails to recover accuracy on the victim class for the *Cora* dataset, while SCRUB fails to do so for both *Cora* and *CS*. GNNDelete and MEGU, the graph unlearning baselines, fail to remove the poison. --- ## Affected Nodes Sampling: Validation against alternative approaches Our ablations (Figure 8, Appendix E.2) show the top 10% of affected nodes using our sampling performs comparably to using all nodes. As per your suggestion, we compare our sampling with MEGU’s. The link to the table can be found at https://imgur.com/a/3HykVJ8. Our results on the Cora dataset indicate that our method outperforms MEGU’s approach. Our heuristic delivers over 25% higher $Acc_{aff}$ and is 8x faster (0.43s vs 0.05s for 3160 samples). Key difference: MEGU propagates features through adjacency matrices using cosine similarity to adaptively threshold selection, while we use actual model predictions with the L1 norm. 
We would be happy to include any additional sampling strategies/influence functions in the final version. --- ## Method Ablations We appreciate your suggestion! We've added ablation studies showing the individual contributions of both components (CoGN and AC/DC). While our paper already reports AC/DC performance separately (Figure 3), we've now added results for CoGN too, at this link (https://imgur.com/a/mk6D9ly): Both components contribute significantly: **CoGN alone achieves only 34.7% forgetting on Amazon (compared to Cognac’s 82.9%) and 42.1% on Cora (vs. Cognac’s 75.5%)**, showing that it effectively moves affected nodes but lacks correction from labels. Conversely, **AC/DC achieves 76.2% forgetting on Amazon, far better than CoGN but still below Cognac**, confirming that the components are complementary. AC/DC weakens incorrect learning signals and preserves task-relevant representations, while CoGN steers affected nodes away from manipulated ones. --- ## Scalability to larger datasets We thank the reviewer for the suggestion. However, we note that Cognac is designed so that its computational complexity primarily scales with the **size of the deletion set, not the entire graph**. In real-world unlearning applications, the deletion set is typically a small fraction of the overall data, making our approach inherently scalable. Moreover, our experiments (Appendix D.2) demonstrate that even when faced with a relatively large deletion fraction, Cognac consistently outperforms baseline methods. This empirical evidence further confirms that our method can effectively handle large graphs, showing promise against scalability concerns even for massive real-world networks. --- ## Does Cognac’s success depend on having some prior knowledge of the manipulations? The unlearning algorithm is fully agnostic to the manipulation type (as we show in our experiments across label, graph structure, and the newly added feature attacks). 
The input to the algorithm is simply a representative subset of the manipulated data, without any knowledge about the manipulation itself. Furthermore, note that unlearning is only defined for a non-zero number of unlearning entities. Please see our discussion with reviewer rWWk for additional insights on this. --- Thanks! Your detailed feedback has helped us greatly improve the paper. We hope this increases your support for our work.
Summary: In this paper, the author proposes an unlearning algorithm, Cognac, to remove manipulated data from a well-trained GNN model. The approach first identifies sensitive neighbors that may be influenced by spurious entities and then mitigates these effects by aligning the embeddings of the selected neighbors with unaffected ones. Abundant experiments have been conducted to demonstrate the performance of the proposed method. Claims And Evidence: The claims are sound and Figure 1 explains them with a concrete example. However, I have a few questions: 1 In Section 2, the focus is on removing edges that compromise the homophily property of the graph. However, recent findings suggest that heterophilic GNNs can enhance performance, indicating that non-local neighbors can also contribute to node and edge classification tasks. Does this imply that solely targeting homophilic edges could mislead the unlearning algorithm? 2 In Section 3.1.1, the method selects affected nodes using a heuristic that measures the influence of manipulated data. I suggest providing a concrete example to clarify this approach. 3 In reality, we usually don't know the manipulated data set. Methods And Evaluation Criteria: The problem is well defined and the algorithm is plausible. However, I have several questions: 1 In Section 3.1.1, the method selects affected nodes influenced by the manipulated data. However, how can they ensure that these nodes exert a strong influence on other 'unaffected' data? For example, suppose node A is a 1-hop neighbor of the manipulated node N but has no other connections, while node B is a 2-hop neighbor of N but has multiple neighbors. Given k=1, does it make sense to select A over B? 2 I suggest they introduce the procedure of spurious edge addition in their experiments. 3 They propose a contrastive loss in formula 2 by aligning the inner product of "affected links" with "unaffected links". 
Could this potentially degrade the information preserved in the 'affected nodes'? Theoretical Claims: I suggest the authors prove the soundness of the heuristic for selecting affected nodes in Section 3.1.1. Experimental Designs Or Analyses: The experiments appear comprehensive, and their algorithm outperforms other approaches. Supplementary Material: I suggest making proof A.2 easier to follow; currently, it is a little confusing. Relation To Broader Scientific Literature: NA Essential References Not Discussed: Yang, Tzu-Hsuan, and Cheng-Te Li. "When Contrastive Learning Meets Graph Unlearning: Graph Contrastive Unlearning for Link Prediction." In 2023 IEEE International Conference on Big Data (BigData), pp. 6025-6032. IEEE, 2023. Other Strengths And Weaknesses: The idea is innovative. However, the paper was a little hard to follow; with too many avenues and experiments explored, it would have been more productive to reduce its scope. The conclusion and future work also need to be more elaborate in terms of next steps. The idea and approach are good; the work is there, it just needs more organizing and scoping. Other Comments Or Suggestions: NA Questions For Authors: Could you provide a link to the code? Ethics Expertise Needed: ['Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)'] Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s recognition of our comprehensive experiments and the soundness of our claims. We're also pleased that the reviewer found the idea behind Cognac innovative. We hope our clarifications below address your concerns. --- ## Affected Nodes Sampling _Can solely targeting homophilic edges mislead the unlearning algorithm?_ Your question raises an important point. While the attack exploits the homophily due to the GNN’s inductive bias, the unlearning algorithm is fully agnostic to the manipulation type. It simply requires knowledge of a representative subset of the unlearning entities and selects affected nodes based on prediction changes, not graph structure properties, and can hence be adapted for heterophilic settings. Moreover, in L418-421, we acknowledge that our current work has not explored applications in heterophilic datasets yet. _1-hop vs 2-hop node selection_ The heuristic selects nodes based on the magnitude of change in their output logits (∆χ) when features are inverted, and not just the hop distance. A high ∆χ indicates strong influence from manipulated nodes, even if the node is not a direct neighbor. By choosing the top k% based on the GNN forward pass, the method accounts implicitly for hop distance, ensuring that only nodes significantly affected are flagged for corrective unlearning. _Soundness of our heuristic selecting affected nodes_ Lemma 3.2 shows that a manipulated node’s influence is confined to its n-hop neighborhood. Using feature inversion, we measure how changes in these nodes’ features affect their neighborhood. Our ablation studies (Figure 8, Appendix E.2) reveal that selecting the top 10% of nodes by change in output logits performs similarly to using the full n-hop subgraph, yet is 2× faster. Moreover, compared to MEGU’s sampling method, our heuristic delivers over 25% higher $Acc_{aff}$ and is 8× faster. 
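A minimal self-contained sketch of the feature-inversion heuristic described above (editor's illustration only: a one-layer linear message-passing model stands in for the trained GNN, and all function names here are hypothetical):

```python
import numpy as np

def affected_nodes(features, manipulated, model_logits, top_frac):
    """Rank nodes by the L1 change in their output logits after inverting
    the features of the known manipulated nodes; return the top fraction."""
    base = model_logits(features)
    perturbed = features.copy()
    perturbed[manipulated] *= -1.0                       # feature inversion
    delta = np.abs(model_logits(perturbed) - base).sum(axis=1)
    k = max(1, int(round(top_frac * len(features))))
    return np.argsort(-delta)[:k]

# Toy stand-in: a 6-node chain graph 0-1-2-3-4-5 with one round of
# row-normalized message passing (A + I) followed by a linear readout.
A = np.eye(6)
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
A /= A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
W = rng.normal(size=(4, 3))
model = lambda F: (A @ F) @ W

top = affected_nodes(X, manipulated=[0], model_logits=model, top_frac=1 / 3)
print(sorted(top.tolist()))   # the manipulated node and its 1-hop neighbor
```

With one propagation layer, only nodes whose receptive field contains the manipulated node shift, so the heuristic implicitly accounts for hop distance, as the rebuttal argues.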
Additional details are in our discussion with reviewer igz9 and will be included in the revised version. _Working Example_ We will add an example of our affected nodes sampling to the updated version (link to figure: https://imgur.com/a/Kgzfl3E) --- ## Contrastive Loss and Information Preservation Thank you for pointing this out! Left unchecked, contrastive learning could indeed degrade information. That's why we perform gradient descent on the retain set, which contains these affected nodes, ensuring their information is preserved. As part of new experiments, we add an ablation showing that contrastive unlearning alone (CoGN) performs notably worse (up to 48.2% on label flip on Photos). More details can be found in our discussion with reviewer igz9 and in the following table: https://imgur.com/a/mk6D9ly. We will include this discussion explicitly in the revised version in Appendix E. --- ## Data Manipulations - Breadth and Discovery _Discovering manipulated data_ We agree that addressing the difficulty in finding manipulated data is the motivation and main contribution of our work! [1] shows that prior works rely on the availability of all manipulated samples, which is unrealistic. Our method Cognac achieves better performance than theirs with as little as 5% of the manipulated set known. Note that unlearning is only defined for a non-zero number of unlearning entities. A small fraction of the manipulated data can be identified by manually investigating a small random subset of the data or using automated tools like [2]. We will address this in the revised version in Section 6. _Spurious edge addition_ Thanks for noting that this would be a valuable setting! This attack is indeed covered in our experiments, with details in Section 4.2. --- ## Paper Revision 1. **We’ll improve the writing of proof A.2** to make it more readable, and **expand the conclusion** with specific ideas for next steps. 2. 
**Include missing reference:** Thank you for pointing out the paper. We’ll add it to our related work section. We note that while both use contrastive methods, they use it for different objectives. Cognac performs contrastive learning directly on hidden representations of affected nodes while GCU improves upon GNNDelete by contrasting between Deleted Edge Consistency and Neighbourhood Influence to provide a more granular, graded removal of edge information rather than the binary deletion used in GNNDelete. 3. **Code is already present in the supplementary material:** All code and hyperparameters are in the supplementary materials zipfile; we'll include a deanonymized link in the final version. --- Thanks for your questions and suggestions. We hope our response increases your support for our work and are happy to discuss further! [1] Goel, Shashwat, et al. "Corrective Machine Unlearning." *Transactions on Machine Learning Research*. [2] Thyagarajan, Aditya, et al. "Identifying Incorrect Annotations in Multi-label Classification Data." *ICLR 2023 Workshop on Pitfalls of limited data and computation for Trustworthy ML*.
Deterministic Sparse Fourier Transform for Continuous Signals with Frequency Gap
Accept (poster)
Summary: The paper introduces the first deterministic algorithm for computing the sparse Fourier transform (SFT) of continuous signals that have a minimum frequency gap. In contrast to earlier approaches that relied on randomness, the authors develop a method that deterministically recovers a k-sparse signal (i.e., one with only k significant frequencies) using far fewer samples than traditional FFT methods. Their algorithm uses a de-randomized hashing scheme combined with specialized filtering functions to isolate individual frequency components even in the presence of noise. A novel $(C, \xi)$-noise model (defined below) is employed to guarantee robust recovery under an $\ell_1/\ell_2$ mixed norm error bound. Overall, the method achieves sublinear sample complexity and runtime—specifically $O(k^2\,\mathrm{polylog}(FT/\eta))$—making it an optimal deterministic solution for continuous sparse Fourier transforms when frequencies are separated by a gap. A noise function $g(t)$ defined on the interval $[0, T]$ is considered $(C, \xi)$-noise if its maximum squared magnitude over $[0, T]$ is upper-bounded by a constant multiple of its average energy plus an additive term. Formally, this means that $\max_{t \in [0,T]} |g(t)|^2 \leq C \cdot \frac{1}{T} \int_0^T |g(t)|^2 \, dt + \xi$, where $C$ is a fixed constant and $\xi$ is a parameter that depends on the specific characteristics of $g(t)$. This condition ensures that even if $g(t)$ has occasional high peaks, the overall noise level remains controlled relative to its average energy. ## Update After Rebuttal: I think the authors have responded to my review sufficiently and I will keep my score. Claims And Evidence: Yes, the claims made in the submission are supported by proofs. Methods And Evaluation Criteria: N/A (this is a theoretical paper) Theoretical Claims: I read the first 9 pages, and the arguments there seem fine. Experimental Designs Or Analyses: N/A (this is a theoretical paper) Supplementary Material: I did not review the supplementary material. 
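The $(C, \xi)$-noise condition quoted in the summary can be sanity-checked numerically. The following sketch is an editor's illustration, not from the paper (the test signals and constants are arbitrary, and the time average $\frac{1}{T}\int_0^T |g|^2\,dt$ is approximated by a sample mean over a uniform grid):

```python
import numpy as np

def is_C_xi_noise(g_samples, C, xi):
    """Check max_t |g(t)|^2 <= C * (1/T) * int_0^T |g(t)|^2 dt + xi.
    For uniform samples on [0, T], the time-averaged energy equals the
    sample mean, so T drops out of the discretized check."""
    peak = np.max(np.abs(g_samples) ** 2)
    avg_energy = np.mean(np.abs(g_samples) ** 2)
    return bool(peak <= C * avg_energy + xi)

t = np.linspace(0.0, 1.0, 1000)
bounded = np.cos(2 * np.pi * 5 * t)            # peak energy ~ 2x average
print(is_C_xi_noise(bounded, C=4.0, xi=0.0))   # True

spiky = np.zeros_like(t)
spiky[0] = 10.0                                # one large outlier sample
print(is_C_xi_noise(spiky, C=4.0, xi=0.0))     # False without slack
print(is_C_xi_noise(spiky, C=4.0, xi=100.0))   # True once xi absorbs the peak
```

The two cases illustrate the definition's intent: a smooth bounded signal satisfies the condition with a modest $C$, while a signal with an isolated spike needs the additive $\xi$ term.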
Relation To Broader Scientific Literature: The paper extends deterministic sparse Fourier techniques from the discrete setting—drawing on work by Li & Nakos (2020), Hassanieh et al. (2012), and Indyk & Kapralov (2014)—to continuous signals by introducing a deterministic hashing scheme and a (C, ξ)-noise model. In contrast to previous randomized methods (Price & Song, 2015; Chen et al., 2016; Jin et al., 2023) that rely on random time sampling for sublinear recovery, this work adapts tools such as hash-to-bins and one-sparse recovery to achieve optimal deterministic recovery guarantees. Essential References Not Discussed: I think the references are adequate. Other Strengths And Weaknesses: The paper offers an original contribution by extending deterministic sparse Fourier techniques from the discrete to the continuous setting. Its strengths include: • A creative combination of ideas from discrete SFT (e.g., Li & Nakos, Hassanieh et al., Indyk & Kapralov) with continuous signal analysis, leading to a deterministic algorithm that avoids randomness. • Rigorous theoretical analysis with optimal sublinear sample and runtime guarantees under a (C, ξ)-noise model. • Clear advancement in theory by removing randomness, which is significant for applications where deterministic performance is critical. I don't have any significant weaknesses to mention, other than perhaps checking the applicability of this algorithm in a practical context. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comment and we greatly appreciate the reviewer's recognition of our contributions. We believe that our work represents a substantial advancement in the theoretical understanding of Fourier transforms. We also appreciate the reviewer’s perspective regarding the practical applicability of our framework. While our main focus has been on establishing the theoretical foundations, we share the reviewer’s interest in examining real‐world deployments and it would be interesting to see our deterministic method can be evaluated under a variety of practical conditions. Please let us know if you have any further comments regarding our work. Thank you again for your positive feedback.
Summary: This paper introduces a deterministic sublinear-time algorithm for recovering sparse continuous signals with frequency gaps, addressing a critical gap left by prior randomized approaches. The proposed method achieves optimal recovery guarantees in the presence of arbitrary noise. Claims And Evidence: Yes, the claims are strictly proven. Methods And Evaluation Criteria: The proposed methods are theoretically novel and address a critical gap in continuous sparse recovery. However, the evaluation criteria rely excessively on theory, lacking empirical validation and baseline comparisons, which limits practical credibility. Theoretical Claims: I checked part of them, and I think they are right. Experimental Designs Or Analyses: The work lacks experiments. Supplementary Material: Sorry, I don't have time to review the supplementary material. Relation To Broader Scientific Literature: The key contributions of this paper address critical gaps in the sparse Fourier transform (SFT) literature, particularly in the context of continuous signals with frequency gaps. Its deterministic guarantees and noise resilience position it as a critical step toward reliable, real-world sparse signal processing systems. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - Deterministic algorithm for continuous sparse Fourier transform. The paper introduces a deterministic *sublinear-time* algorithm for recovering sparse continuous signals with frequency gaps. - Tight theoretical guarantees. Theoretical guarantees ensure stable recovery even with arbitrary noise, which addresses a key challenge in continuous signal processing. Weaknesses: - Lack of empirical validation. The paper provides no experiments to validate the results. - No comparison to on-grid compressed sensing. The work does not compare its approach to on-grid compressed sensing methods, which discretize the frequency domain and achieve good sample complexity under the same setting. 
- Grid Assumption is too strong. Real-world signals often exhibit off-grid frequencies, and the algorithm’s performance would degrade significantly in such cases, limiting its applicability. ## update after rebuttal As the empirical validation is yet to be incorporated, I'll maintain my score as it is. Other Comments Or Suggestions: No. Questions For Authors: 1. How does the algorithm perform on synthetic signals with varying noise levels? 2. Is there a mechanism to handle the grid mismatch? 3. Could you compare the experimental performance of on-grid compressed sensing and your algorithm by conducting experiments? For fairness, they can use the same samples. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the recognition of our theoretical and algorithmic contributions. ### W1, Q1 and Q3: Lack of empirical validation Thank you for pointing this out. We acknowledge that empirical validation would be beneficial, but our focus in this submission was on establishing theoretical guarantees and sublinear‐time recovery. Like many other works in this line of research, we have chosen to emphasize these theoretical foundations, focusing on the sublinear‐time property and performance guarantees. Such a focus often entails specialized data structures and asymptotic analyses rather than extensive empirical testing. ### W2: Comparison of on-grid compressed sensing We thank the reviewer for this valuable point. We emphasize that our setting is fundamentally different from standard on-grid compressed sensing methods, which assume discretized frequency domains. Our problem formulation assumes continuous-time signals with non-zero frequency gaps, where naive discretization would suffer from sparsity blow-up and noise sensitivity. ### W3 and Q2: Grid assumption Thank you for the thoughtful feedback. We acknowledge this concern and would like to clarify. Our current algorithm assumes frequencies lie on an equispaced grid with a known frequency gap, as is standard in prior deterministic SFT work. This assumption allows us to design deterministic hashing and filtering strategies with provable guarantees. Extending deterministic sparse recovery to off-grid frequencies is more challenging. We believe one potential way is to refine the grid resolution so that frequencies are only “mildly off-grid,” at the cost of increased sample complexity. We are grateful for the reviewer’s insightful comments. Thank you for your time and valuable feedback!
Summary: This paper adapts Li and Nakos (2020)'s deterministic sparse Fourier transform (SFT) algorithm to the continuous-time setting described by Price and Song (2015) (who had proposed a randomized algorithm), showing that an efficient deterministic method exists in this regime as well. The proposed algorithm has $O(k^2\log k \log^2(F/\eta))$ sample complexity and $O(k^2 \log k \log^3(F/\eta))$ time complexity, where $k$ is the sparsity, $F$ is the bound on possible frequencies, and $\eta$ is the gap between possible frequencies. The algorithm is sublinear with respect to the frequency domain size $\Theta(F/\eta)$, and thus is an improvement over non-sparse algorithms. The class of $(C, \xi)$-noise functions is defined in Definition 3.3 so that the algorithm can work in the continuous setting. Proofs of the recovery guarantees and sample/time complexities of the algorithm (as summarized in Theorem 3.11) are provided in the appendices. Claims And Evidence: The main claims of this paper, the correctness and sublinear complexity of the proposed deterministic continuous SFT algorithm, are well substantiated with proofs in the appendix. The required assumptions (e.g., on the noise) are laid out clearly. However, the claim that the algorithm achieves optimal recovery guarantees (see, e.g., the last sentence of the abstract or the second last sentence of the conclusion) lacks justification. The authors should clarify this and explain how the recovery guarantee achieved by the algorithm is optimal. Methods And Evaluation Criteria: The problem definition, recovery guarantees, and use of sample/runtime complexity to evaluate the efficiency of the algorithm are standard in the literature. Theoretical Claims: The proofs of algorithm correctness and complexity (presented in the appendices) generally follow those in Li and Nakos (2020). Although the high-level structure of the proof is similar, significant changes are required due to the continuous-time setting. 
Although I did not check every detail, it seems that all the necessary adjustments for the proof to go through have been made. Experimental Designs Or Analyses: N/A (No experiments.) Supplementary Material: N/A (No supplementary material.) Relation To Broader Scientific Literature: This paper combines the methodology of Li and Nakos (2020) and the setting of Price and Song (2015). Although neither the idea nor the setting is new, the combination seems to be novel. The proposed algorithm is similar to that of Li and Nakos (2020), but has nontrivial differences stemming from the difference in setting. Essential References Not Discussed: The relevant literature on continuous and/or sparse Fourier transforms seems to be adequately represented in this paper. Other Strengths And Weaknesses: 1. This paper assumes that the reader is familiar with the general techniques used for sparse Fourier transforms and often defines variables or functions without explaining their purpose. This makes this paper very challenging to read for readers who are not familiar with the subject matter. In addition, this paper borrows many definitions from Li and Nakos (2020), but leaves out the explanations that accompanied them in the original paper. For example, the role of the pessimistic estimator $h\_r (f, f', \sigma\_1, b\_1, \dots,\sigma\_r,b\_r)$ defined in line 262 of the left column of page 5 is not explained, making it very hard to understand for readers who have not read Li and Nakos (2020). 2. It is unclear how useful the setting considered in this paper is. Although the setting allows for continuous-time signals, this is offset by the requirement that all active frequencies be a bounded discrete multiple of some constant $\eta$, which seems rather limiting. Furthermore, the $(C, \xi)$-noise model defined in this paper seems to exclude some common types of noise such as white Gaussian noise. Other Comments Or Suggestions: 1. 
In the introduction, a brief discussion on the different kinds of continuous settings studied in the context of SFTs might be beneficial to give readers additional context. For example, although Boufounos et al. (2012) is mentioned in the introduction as a paper that studies SFTs in the continuous setting, it focuses on the discrete-time and continuous-frequency setting, while this paper studies the continuous-time and discrete-frequency setting. 2. There are many minor typographical and formatting errors, which can sometimes impede understanding. For example, the definition of $\omega$ to be $e^{-2 \pi \mathbf{i}}=1$ in the second last line of the right column of page 2 does not make a lot of sense. It seems substituting $e^{-2 \pi \mathbf{i}}$ in for every instance of $\omega$ will fix this issue (e.g., replace $\omega^{t\sigma b}$ with $e^{-2\pi\mathbf{i} t \sigma b}$). In addition, in line 233 of the left column of page 5, "if $o_{f,\sigma,b}(f)$ is big and $o_{f,\sigma,b}(f')$ is small" should probably be either "if $o\_{f,\sigma,b}(f)$ is small and $o\_{f,\sigma,b}(f')$ is big" or "if $\hat{G}\_{o\_{f,\sigma,b}(f)}$ is big and $\hat{G}\_{o\_{f,\sigma,b}(f')}$ is small". Questions For Authors: 1. The paper claims that the proposed algorithm achieves optimal recovery guarantees. Could the authors explain what it means for a recovery guarantee to be optimal, and how the proposed algorithm is optimal in that sense? (see section "Claims and Evidence") 2. Could the authors provide concrete examples of $(C, \xi)$-noise functions, preferably with exact values for $C$ and $\xi$? Providing examples of commonly encountered $(C, \xi)$-noise functions will help justify the definition. (see section "Other Strengths and Weaknesses", point 2) 3. Could the authors provide potential applications of the continuous-time, discrete-frequency SFT? 
It is true that this is not the first paper that studies this setting, but some examples of real-world uses will help motivate the problem that this paper is solving. Price and Song (2015) did mention piano tuning as an example, but it seemed rather contrived. (see section "Other Strengths and Weaknesses", point 2) ## Update after Rebuttal The authors' response was reasonable and addressed most of my concerns. I believe that the final paper will be much stronger if the authors incorporate this discussion in the final version (especially their responses to questions 1-3) and make all the fixes that they promise. Taking this into account, I have increased the overall rating from 3 to 4. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and for recognizing the novelty, significance, and technical contributions of our work.

### W1: Self-contained Explanation

We will include clearer explanations of fundamental Sparse Fourier Transform (SFT) methods (such as hashing, filtering, and convolution) to ensure the paper is easy to understand and complete on its own. For instance, we will clarify the definition of $h_r$ in our revision, stating explicitly: "In the derandomization phase, each function $h_r$ serves as a pessimistic estimator, tracking the probability of undesirable events (such as hash collisions) given the first $r$ selected hash functions." We will carefully define these key concepts throughout the paper.

### W2 and Q2: $(C,\xi)$-noise model

We introduced the $(C,\xi)$-noise model to guarantee that if the noise function has bounded pointwise deviation relative to its global energy, a deterministic sampling pattern can isolate true signal components from the noise. White Gaussian noise typically has unbounded probability tails, which can lead to arbitrarily large amplitudes at deterministic sample points; hence one cannot "average them out" in a purely deterministic algorithm. However, our model does not strictly exclude Gaussian noise if it is truncated. Suppose that $g(t)$ is i.i.d. over time $t$. If we consider $g(t) \sim \mathcal{N}(0,\sigma^2)$ conditioned on $|g(t)| \leq M$, then it is a $(C,\xi)$-noise with $C=1$ and any $\xi > 0$. In fact, any uniformly bounded process (including a truncated normal) is trivially a $(C,\xi)$-noise. Another example is a polynomial function, e.g. $g(t) = t^c$ for some $c > 0$; then $g$ is a $(C,\xi)$-noise with $C = 2c+1$ and $\xi = 0$.

### Q1: Optimal recovery guarantee

By "optimal," we mean that, up to polylogarithmic factors in $k$, our algorithm achieves both optimal sample complexity and runtime.
This is because the lower bound for sample complexity is $\Omega(k^2 + k \log k)$, as established in [1]. Consequently, the lower bound for runtime is also $\Omega(k^2 + k \log k)$.

### Q3: Applications and real-world examples

SFT problems arise whenever signals are (approximately) dominated by a small number of frequencies. While the classical FFT requires sample/time complexity on the order of the entire band-limit $F$, SFT leverages sparsity $k \ll F$ to reduce complexity. Many continuous-time signals in scientific and engineering contexts are indeed "nearly sparse," with only a few truly significant frequency components amidst a large potential range of smaller, negligible ones. For example, in many radar systems, the received signal consists of only a few dominant sinusoidal components [2], each corresponding to a strong reflection path. While the overall bandwidth $F$ may be large, the effective sparsity (the number of meaningful reflections $k$) is typically quite small. Another example is machinery vibration analysis [3]. Machinery vibrations are inherently continuous signals that often exhibit only a small handful of resonant frequencies, each well spaced from the others. This distinct frequency gap naturally suits a continuous sparse Fourier approach for robust fault detection and monitoring.

[1] Ganguly, Sumit. "Lower bounds on frequency estimation of data streams." Computer Science–Theory and Applications: Third International Computer Science Symposium. 2008.
[2] Austin, Christian D., Emre Ertin, and Randolph L. Moses. "Sparse signal methods for 3-D radar imaging." IEEE Journal of Selected Topics in Signal Processing. 2010.
[3] Ding, Chuancang, Ming Zhao, and Jing Lin. "Sparse feature extraction based on periodical convolutional sparse representation for fault detection of rotating machinery." Measurement Science and Technology. 2020.

We also thank the reviewer for the editorial comments.
We appreciate these suggestions and will refine the final draft accordingly. Thank you for your time and valuable feedback!
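The sublinearity claim discussed in this thread (sample complexity $O(k^2\log k \log^2(F/\eta))$ versus the frequency-domain size $\Theta(F/\eta)$) can be illustrated with a back-of-the-envelope calculation. This is only a sketch: constants and lower-order terms are dropped, and `det_sft_samples` is a hypothetical helper name, not part of the paper's algorithm.

```python
import math

def det_sft_samples(k: int, F: float, eta: float) -> float:
    # Paper's sample complexity O(k^2 log k log^2(F/eta)), constants dropped.
    return k**2 * math.log(k) * math.log(F / eta) ** 2

# Example regime: k = 50 active frequencies within a band of size F/eta = 1e9.
k, F, eta = 50, 1e9, 1.0
domain_size = F / eta            # what a non-sparse method must scan
samples = det_sft_samples(k, F, eta)   # roughly a few million, far below 1e9
```

For these illustrative parameters the deterministic sample count is orders of magnitude smaller than the frequency-domain size, which is the sense in which the algorithm is sublinear.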
Shifting Time: Time-series Forecasting with Khatri-Rao Neural Operators
Accept (poster)
Summary: The authors propose a time series forecasting method, leveraging continuous time-shift operators, which act as continuous analogs of the lag factor of discrete-time autoregressive models or the upsampling/downsampling layers in CNNs, mapping the history of values up to an observation to a future window. To tackle the operator-learning problem, the authors introduce Khatri-Rao neural operators (KRNO) that define non-stationary integral-based transforms with almost linear cost for spatio-temporal problems. The proposed method is evaluated on several real-world and synthetic spatiotemporal and standard temporal forecasting datasets, showcasing competitive performance with other popular time series methods. An anonymized code repository is also provided for reproducibility purposes.

## update after rebuttal

Assessing the overall impact of the contribution, including the theoretical results and experimental evaluation, and following the authors' rebuttal that showcased incremental performance results on real-world irregularly sampled datasets (or without proven statistical significance), I maintain my initial ratings. During the rebuttal, the authors have not adequately addressed significant questions concerning the positioning of the contribution (W1, W2) or presented a thorough computational (e.g., time cost) analysis between the proposed method and considered baselines for the chosen tasks (which is a significant aspect in case of borderline performance improvements). Performance percentage improvements in terms of MSE are misleading (referring to the 2nd decimal with overlapping stds), and the proposed method is outperformed in Tables 5 and 6, while results during the rebuttal on irregular datasets showcase average improvements ranging from 1% to 7% for 2 out of the 3 datasets.
Claims And Evidence: A few problematic claims: - *(Almost) Linear Complexity Competitive to Operator-Based Methods:* The almost linear complexity of the method compared to other operator-based learning frameworks is not showcased in the main paper but rather in the appendix, where experimental results show that the proposed KRNO method is the worst in terms of train-test times per iteration for the spatiotemporal shallow water dataset (surpassed in several cases by the FNO method that has O(nlogn) complexity). - *Continuous Analog of Discrete-Time Autoregressive Models:* It is not very clear how the proposed KRNO method is a continuous analog of the lag factor of the discrete-time autoregressive models, and for which selection of kernels this holds. Methods And Evaluation Criteria: Proposed methods and evaluation criteria (e.g., datasets) are generally appropriate for the task at hand but could be significantly enhanced to enable a thorough experimental evaluation. **Baseline Methods:** Particularly for the case of irregularly sampled time series, only two baselines are used. More recent state-of-the-art methods can be found here [2]. **Datasets:** The proposed method is evaluated on two spatiotemporal forecasting datasets, including data for shallow water simulation and climate modeling, and several temporal forecasting datasets, including the M4 competition archive, the Darts archive, and the two additional crypto and player trajectory datasets. The datasets are commonly used in the (at least for the temporal) forecasting community. Yet several common datasets are missing in terms of regularly sampled time series (ETT, Electricity, Traffic etc, summarized here https://github.com/thuml/Time-Series-Library). The authors also generate an irregularly sampled synthetic dataset to validate their method for data with non-equidistant observation times. The synthetic 2D spiral dataset is rather limiting for the irregularly sampled time series. 
More prominent methods perform forecasting on a priori irregularly-sampled data or randomly downsample regular ones and do extrapolation (see [1,2]). [1] Rubanova, Y., Chen, R. T., & Duvenaud, D. K. (2019). Latent ordinary differential equations for irregularly-sampled time series. Advances in neural information processing systems, 32. [2] Oh, Y., Lim, D. Y., & Kim, S. (2024). Stable neural stochastic differential equations in analyzing irregular time series data. arXiv preprint arXiv:2402.14989. Theoretical Claims: Proofs for the properties of the time-shift operator and Proposition 2.1 have been checked for correctness. No issues detected. Experimental Designs Or Analyses: The authors follow standard experimental designs for spatiotemporal and temporal forecasting (including optimization, metrics, and datasets). It is unclear whether they measure the variance of models' performances with multiple runs with different random seeds, or whether the results refer to a single fixed seed. Supplementary Material: I have reviewed all parts of the supplementary material, focusing more on computational complexity and additional results (sections G and H), and briefly followed the provided theoretical details for KRNO (sections A-F). Relation To Broader Scientific Literature: This work is related to the problem of time series forecasting, including spatiotemporal and temporal settings, also extending to irregularly sampled time series that are very common in several engineering and scientific domains. It explores operator-based learning methods proposed for solving complex problems in spatiotemporal data and PDEs as an alternative to the complexity of numerical solvers. The proposed method introduces an efficient time-shift operator for capturing multiple levels of granularities and irregular sampling without the need to approximate the kernel while maintaining linear complexity.
Essential References Not Discussed: Based on the review comment above (**"Methods And Evaluation Criteria"**), several baseline methods for irregularly-sampled temporal forecasting are not discussed, including the recent SOTAs based on Neural SDEs (several methods are mentioned in [1]). For regular temporal forecasting, a more standard benchmark followed in the community is based on Time Series Library [2,3] (methods such as TimesNet, TimeXer, iTransformer, and PatchTST are first in ranking yet not tested in the paper). [1] Oh, Y., Lim, D. Y., & Kim, S. (2024). Stable neural stochastic differential equations in analyzing irregular time series data. arXiv preprint arXiv:2402.14989. [2] Time Series Library https://github.com/thuml/Time-Series-Library [3] Wang, Y., Wu, H., Dong, J., Liu, Y., Long, M., & Wang, J. (2024). Deep time series models: A comprehensive survey and benchmark. arXiv preprint arXiv:2407.13278. Other Strengths And Weaknesses: Summarized **strengths** of the paper: - **[S1]** The authors propose a thoroughly presented operation-based theoretical framework that aims to tackle real-world spatiotemporal problems beyond numerical solvers. - **[S2]** Broad tasks, such as temporal forecasting and irregular sampling, are approached. - **[S3]** The included visualizations enhance the readability of the work. Summarized **weaknesses** of the proposed study: - **[W1]** The work is poorly presented compared with the neural-operator-based related works in the literature (such as FNO). Such related methods are abstractly mentioned in the introduction, and several details are given in the methods section, but very few are in the related work section. I suggest restructuring the related method's details throughout the text to highlight the proposed method’s contributions. - **[W2]** The related work section is misplaced and limited regarding references. 
Only a few methods (Koopman, FNO) are explained in terms of operator-based learning, and one method is explained for irregularly sampled time series modeling (NeuralODEs). Baseline methods used in experiments should be explained in more detail along with more recent methods (see relevant review section above). - **[W3]** Several mixed benchmarks in terms of datasets but with important state-of-the-art methods missing, especially for temporal forecasting and irregularly-sampled temporal modeling (see relevant review section above). - **[W4]** The proposed method is, in several cases, outperformed by baselines, particularly for the temporal forecasting datasets. However, the performance significance of results is not justified if studied, e.g., were multiple runs with random seeds followed for all datasets and baselines? In some cases, the results for the baseline are directly taken from the relevant papers. - **[W5]** The experiments on irregularly sampled time series are limited to simple synthetically generated data (2D spiral), which raises questions about the method's efficacy in real-world setups with irregular timestamps. - **[W6]** Studies on the computational complexity of the proposed KRNO methods compared to SOTAs for temporal forecasting (beyond operator-based methods) are missing. This is essential to support the applicability of the process in practical scenarios with vast time series datasets, where simple and lightweight architectures are, in several cases, more favorable. Other Comments Or Suggestions: No comments Questions For Authors: 1. **[Q1]:** Could authors improve the presentation and structuring of the related work (based on **[W1], [W2]**)? 2. **[Q2]:** Could you tackle the issues raised on experimental evaluation in terms of temporal SOTAs, methods applied to irregularly sampled data, and real-world irregularly sampled datasets (based on **[W3], [W4], [W5]**)? 3. 
**[Q3]:** To what extent are neural operators for time series forecasting constrained to data exhibiting clear structures, smooth evolution, and spatial correlations? (for instance, NeuralODEs solve irregular sampling problems for specific physical datasets/are not successfully extended to standard forecasting) Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the detailed review and valuable comments.

## 1. Claims and Evidence:

> **(Almost) Linear Complexity..**

Please refer to our first response to reviewer **dfXC** for more clarification on the computational complexity and runtime analysis of KRNO compared to FNO.

> **Continuous Analog..**

Using the proposed continuous time-shift operator, we learn an operator that maps the history of the dynamics over a past time-window into its future values over a subsequent time-window. This enables us to learn from irregularly sampled observations and to forecast at any given time over a fixed time-window, making it a continuous analog of the lag factor of the discrete-time autoregressive models. It is also worth noting that the continuity constant of the time-shift operator can be bounded in terms of the Lipschitz constants of the integral transform layers. We plan to study this in future work to show robustness to distribution shifts similar to the seminal work of Oh et al. (2024) on stable NSDEs.

## 2. Baselines and Evaluation Criteria:

We agree that numerical studies on regularly sampled time-series do not allow a clear demonstration of the full capabilities offered by KRNO. We now include additional experiments on challenging irregularly-sampled time series benchmark datasets (MIMIC, USHCN, Human Activity, and MuJoCo) and the results are compared against a range of alternative approaches with SOTA performance such as T-PatchGNN, NeuralSDE, NeuralCDE, PatchTST, and Latent-ODE. KRNO achieves new SOTA performance on three of the four benchmarks. The results for the MuJoCo benchmark are provided below, where we compare against the methods in the ICLR 2024 paper by Oh et al. [1], which you brought to our attention. It can be seen that KRNO achieves top performance on this benchmark in all four cases.
Scripts for reproducing the results for the additional four benchmarks are provided in the anonymous repository under directories 'code/scripts/mujoco' and 'code/scripts/irregular_time_series/krno'. The results for MIMIC, USHCN, and Human Activity can be found in our response to reviewer **ntzg**.

|Methods|Regular|30% dropped|50% dropped|70% dropped|
|:---|:---:|:---:|:---:|:---:|
|GRU-Δt|0.223 ± 0.020|0.198 ± 0.036|0.193 ± 0.015|0.196 ± 0.028|
|GRU-D|0.578 ± 0.042|0.608 ± 0.032|0.587 ± 0.039|0.579 ± 0.052|
|GRU-ODE|0.856 ± 0.016|0.857 ± 0.015|0.852 ± 0.015|0.861 ± 0.015|
|ODE-RNN|0.328 ± 0.225|0.274 ± 0.213|0.237 ± 0.110|0.267 ± 0.217|
|Latent-ODE|0.029 ± 0.011|0.056 ± 0.001|0.055 ± 0.004|0.058 ± 0.003|
|Augmented-ODE|0.055 ± 0.004|0.056 ± 0.004|0.057 ± 0.005|0.057 ± 0.005|
|ACE-NODE|0.039 ± 0.003|0.053 ± 0.007|0.053 ± 0.005|0.052 ± 0.006|
|NCDE|0.028 ± 0.002|0.027 ± 0.000|0.027 ± 0.001|0.026 ± 0.001|
|ANCDE|0.026 ± 0.001|0.025 ± 0.001|0.025 ± 0.001|0.024 ± 0.001|
|EXIT|0.026 ± 0.000|0.025 ± 0.004|0.026 ± 0.000|0.026 ± 0.001|
|LEAP|0.022 ± 0.002|0.022 ± 0.001|0.022 ± 0.002|0.022 ± 0.001|
|Neural SDE|0.028 ± 0.004|0.029 ± 0.001|0.029 ± 0.001|0.027 ± 0.000|
|Neural LSDE|0.013 ± 0.000|0.014 ± 0.001|0.014 ± 0.000|*0.013 ± 0.001*|
|Neural LNSDE|*0.012 ± 0.001*|0.014 ± 0.001|0.014 ± 0.001|0.014 ± 0.000|
|Neural GSDE|0.013 ± 0.001|*0.013 ± 0.001*|*0.013 ± 0.000*|0.014 ± 0.000|
|KRNO|**0.007 ± 0.002**|**0.008 ± 0.002**|**0.011 ± 0.004**|**0.012 ± 0.002**|

[1] Oh, Y., Lim, D., & Kim, S. Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data. ICLR 2024.

## 3. Questions and Weaknesses:

**\[Q1\]** Thank you for your suggestion. We have restructured the methods section to highlight our contributions more clearly. We have also updated the related work section by discussing SOTA methods such as Neural SDEs, T-PatchGNN, NeuralCDE, and Latent-ODE for irregularly sampled time-series forecasting.
**\[Q2\]** Please refer to our response in section *Baselines and Evaluation Criteria* regarding comparison with SOTA methods and additional benchmarks on irregularly sampled time series datasets with missing observations. For these additional benchmarks, we have now added the results from multiple runs with different random seeds. It can be seen from the results that KRNO achieves new SOTA performance on three of the four new benchmarks we studied.

**\[Q3\]** Neural operators have been primarily successful on data from physical systems governed by PDEs. Our comprehensive experiments demonstrate that KRNO generalizes effectively to both physical systems (MuJoCo) and non-physical datasets (healthcare, climate, human activity) with irregular sampling patterns.

**\[W6\]** We have compared the memory usage and runtime analysis of KRNO with Neural GSDE for the MuJoCo dataset. For a batch size of 128, KRNO uses 850MB of memory while Neural GSDE uses 306MB of memory. On the GTX4090 GPU, for this setting, we found that the default KRNO configuration is around 10 times faster than Neural GSDE per iteration.

---

Rebuttal Comment 1.1: Comment: I truly appreciate the authors' efforts in the rebuttal. In light of their responses and new results on real-world irregular datasets, as well as their initial experiments, I notice the performance improvements are mostly minor compared to baselines. Assessing the overall impact of the contribution (not substantially new theoretical results and experimental improvements), I prefer to maintain my initial scores.

---

Reply to Comment 1.1.1: Comment: Thank you for your continued engagement with our work. Upon reflection, we wonder if there might have been a misinterpretation of our experimental results. The 46% error reduction on regular data and 38% on irregularly sampled data (MuJoCo) represent substantial improvements in forecasting accuracy - improvements that would typically be considered significant advances in the field.
In our experience, improvements of even 5% over SOTA is often considered as a meaningful contribution. To clarify these substantial improvements: - On the MuJoCo benchmark, KRNO achieves a **46%** error reduction compared to Neural GSDE [1], the previous SOTA method published at **ICLR 2024** (Neural GSDE: **0.013** vs. KRNO: **0.007** MSE) - With 30% dropped observations, KRNO maintains a **38%** error reduction (**0.008** vs **0.013** MSE) - KRNO **consistently outperforms all 15** baseline methods across all dropout settings - Additionally, KRNO achieved SOTA performance on **3 of 4** irregular time-series benchmarks (MIMIC, USHCN, Human Activity, and MuJoCo), demonstrating its effectiveness across diverse domains with missing observations. On the MIMIC, USHCN, and Human Activity benchmarks, we compare against TimesNet, PatchTST, GRU-D, Warpformer, mTAND, Latent-ODE, and T-PatchGNN [2] - a recent study from **ICML 2024**. [1] Oh, Y., Lim, D., & Kim, S. "Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data." *ICLR 2024*. [2] Zhang et al., "Irregular multivariate time series forecasting: A transformable patching graph neural networks approach." *ICML 2024*. To ensure full *reproducibility* of these results, we have updated our anonymized repository with all code and scripts used for these irregularly sampled time-series benchmarks, including detailed instructions for replicating our experiments. Remarkably, we achieved these substantial improvements using the default KRNO architecture [3] without any dataset-specific hyperparameter tuning. This "out-of-the-box" performance stands in stark contrast to competing methods that typically require extensive tuning for each dataset. 
Such exceptional generalization across diverse data distributions suggests KRNO translates to meaningful practical improvements in forecasting accuracy, particularly for applications requiring precise predictions from irregularly sampled data, such as healthcare monitoring and climate science. Our work contributes not only empirical advances but also a novel operator-theoretic framework for handling irregularly sampled data, which opens new research directions. This framework enables continuous representations in both space and time without requiring specialized solvers or numerical integration schemes. We believe our numerical studies, spanning **24 diverse datasets** across multiple domains, demonstrate the broad applicability and effectiveness of our approach, particularly for the challenging irregular sampling settings that were the focus of your initial review. Given these objective metrics and the theoretical framework we have developed, we respectfully invite you to reconsider your assessment. In the spirit of scientific evaluation, where quantitative improvements of 46% over recent SOTA methods would typically be considered substantial contributions, we believe our work makes a meaningful advance to the field. We value your expertise and perspective on what would constitute a significant improvement in this domain. ###### [3] *The default KRNO architecture used in all experiments has three kernel integral layers with 20 channels each, lifting and projection layers are parametrized by MLPs with one hidden layer containing 128 hidden units, and the kernels in the integral layers are parametrized by MLPs with 3 hidden layers.*
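The time-shift operator discussed in this rebuttal (a kernel integral transform mapping an irregularly sampled history window to values at arbitrary future query times) can be sketched numerically. This is a minimal illustration only: the Gaussian kernel and trapezoidal quadrature below are toy stand-ins for KRNO's learned, MLP-parametrized kernels and its actual quadrature scheme.

```python
import numpy as np

def trapezoid_weights(s):
    # Quadrature weights for (possibly irregular) sample times s, sorted ascending.
    w = np.zeros_like(s)
    w[1:-1] = (s[2:] - s[:-2]) / 2
    w[0] = (s[1] - s[0]) / 2
    w[-1] = (s[-1] - s[-2]) / 2
    return w

def time_shift(u_past, s, t_query, kernel):
    # u_future(t) ≈ sum_j w_j * k(t, s_j) * u_past(s_j): a discretized
    # integral transform, the continuous analog of autoregressive lags.
    w = trapezoid_weights(s)
    K = kernel(t_query[:, None], s[None, :])   # shape (n_query, n_past)
    return K @ (w * u_past)

rng = np.random.default_rng(0)
s = np.sort(rng.uniform(0.0, 1.0, 40))         # irregular history timestamps
u = np.sin(2 * np.pi * s)                      # observed history values
t = np.linspace(1.0, 1.5, 10)                  # arbitrary future query times
kern = lambda t_, s_: np.exp(-(t_ - s_) ** 2)  # toy stand-in for a learned kernel
pred = time_shift(u, s, t, kern)
```

Because the kernel is evaluated pointwise, the same fitted operator can be queried at any future time and fed any irregular history, which is the property the rebuttal emphasizes for the MuJoCo and healthcare benchmarks.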
Summary: The paper introduces a novel operator-theoretic approach for time-series forecasting by learning a continuous time-shift operator. This method provides a more flexible alternative to traditional autoregressive models, which rely on discrete time lags. The authors propose Khatri-Rao Neural Operators (KRNOs) as a new architecture to parametrize non-stationary integral transforms, enabling efficient learning of time-dependent dynamics in both temporal and spatio-temporal forecasting. Claims And Evidence: The paper presents several claims, most of which are backed by clear empirical results. Methods And Evaluation Criteria: The paper evaluates KRNO on 29 different forecasting tasks, covering applications in climate modeling, financial markets, and fluid dynamics. The model is tested against leading baselines, including Fourier Neural Operators (FNO), DeepONet, and traditional autoregressive models Theoretical Claims: The proofs for Proposition 2.1 and its generalization appear correct in principle and are logically structured, but they lack some explicit derivations and justifications for key claims (especially computational efficiency). While these omissions do not invalidate the results, addressing them would increase clarity. Experimental Designs Or Analyses: The paper evaluates KRNO on several temporal and spatio-temporal datasets, including Darts, M4, and physics-based problems (Darcy flow, hyper-elastic problems). It compares KRNO with FNO, DeepONet, and other neural operators. It reports relative error as the primary evaluation metric. The paper claims that KRNO has almost linear computational complexity. However, there is no explicit runtime comparison with other baselines, making this claim difficult to verify. Supplementary Material: Yes, hyperparameters, dataset splits, and training configurations. 
Relation To Broader Scientific Literature: The continuous time-shift operator proposed in this paper generalizes discrete autoregressive models by modeling entire function trajectories instead of relying on discrete time steps. This idea extends prior work on neural operators like Fourier Neural Operators and DeepONet Essential References Not Discussed: No Other Strengths And Weaknesses: The introduction of Khatri-Rao decompositions enhances computational efficiency compared to FNO and DeepONet, making it feasible for large-scale forecasting. The claim of near-linear complexity is promising, though a more explicit runtime analysis would strengthen this claim. The model is evaluated on 29 forecasting tasks, including climate modeling, physics simulations, showing broad applicability. The claim that KRNO achieves near-linear complexity is reasonable based on Khatri-Rao properties, but the paper does not provide wall-clock runtime comparisons with FNO, DeepONet, and transformer-based methods and Scalability analysis for increasing dataset sizes and higher-dimensional problems. Other Comments Or Suggestions: N\A Questions For Authors: 1. Can you provide runtime benchmarks comparing KRNO with FNO, DeepONet, and standard transformer models? 2. How does KRNO handle missing or noisy data? 3. Does KRNO generalize well to very high-dimensional spatio-temporal problems, such as 3D weather forecasting or turbulence modeling? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback and comments.

## 1. Computational Complexity and Runtime Analysis

In Appendix G, we provide a detailed comparison of computational complexity and runtime between KRNO and FNO-3D using the spatio-temporal shallow water problem. While our initial analysis used the default 12 Fourier modes for FNO-3D across all spatial resolutions, we note that when increasing the number of modes for high-resolution datasets, FNO-3D's memory usage and training time exceed KRNO's requirements. The table below presents an updated runtime analysis comparing KRNO with FNO-3D across different spatial resolutions (S × S) using the maximum number of Fourier modes (S/2+1). These results demonstrate that KRNO is both faster and more memory-efficient than FNO-3D at higher resolutions.

| Spatial resolution (S x S) | Memory (MB) | | | | Time (seconds) | | | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | Training | | Testing | | Training | | Testing | |
| | **KRNO** | FNO-3D | **KRNO** | FNO-3D | **KRNO** | FNO-3D | **KRNO** | FNO-3D |
| 32 × 32 | 1,390 | 1,074 | 708 | 1,074 | 0.0279 | 0.0234 | 0.0107 | 0.0062 |
| 64 × 64 | 2,776 | 2,772 | 1,314 | 2,772 | 0.0394 | 0.0347 | 0.0134 | 0.0071 |
| 96 × 96 | 4,884 | 5,264 | 2,366 | 5,264 | 0.0626 | 0.0710 | 0.0185 | 0.0149 |
| 128 × 128 | 7,608 | 8,994 | 3,796 | 8,994 | 0.0999 | 0.1297 | 0.0312 | 0.0288 |
| 160 × 160 | 10,040 | 13,764 | 5,644 | 11,288 | 0.1584 | 0.2088 | 0.0502 | 0.0456 |

## 2. Handling Missing or Noisy Data

We now include additional experiments on challenging irregularly sampled time series benchmark datasets (MIMIC, USHCN, Human Activity, and MuJoCo), which contain missing and noisy observations. The performance of KRNO is compared against a range of alternative approaches with SOTA performance such as T-PatchGNN [1], NeuralSDE [2], NeuralCDE, PatchTST, and Latent-ODE.
KRNO achieves new SOTA performance on three of the four benchmarks; please see the tables in the *Baselines and Evaluation Criteria* section of our responses to reviewers **ntzg** and **kNx6**. These studies show that KRNO is able to handle missing and noisy data. In future work, we will extend KRNO to also handle missing data in spatio-temporal problems. [1] Zhang, Weijia, et al. "Irregular multivariate time series forecasting: A transformable patching graph neural networks approach." ICML 2024. [2] Oh, Y., Lim, D., & Kim, S. Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data. ICLR 2024. ## 3. Generalization to High-Dimensional Spatio-Temporal Problems Numerical studies on spatial modeling, 2D spatio-temporal forecasting, and temporal forecasting problems show that the KRNO architecture outperforms competing methods such as FNO, DeepONet, and LOCA on this class of problems. We plan to evaluate KRNO on challenging 3D spatio-temporal problems in future work.
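As a side note on how per-iteration timings such as those in the table above are typically produced, a minimal benchmarking harness might look as follows; this is a generic sketch, not the authors' measurement code, and `toy_step` is a hypothetical stand-in workload rather than a KRNO training step.

```python
import time
import numpy as np

def benchmark(fn, *args, warmup=3, repeats=10):
    """Median wall-clock time per call, after warm-up iterations
    (warm-up avoids counting one-time allocation/compilation costs)."""
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return float(np.median(times))

# toy stand-in for one training step at a given spatial resolution S x S
def toy_step(S):
    x = np.random.standard_normal((S, S))
    return x @ x.T

t_small = benchmark(toy_step, 32)
t_large = benchmark(toy_step, 128)
```

Reporting the median of repeated timed runs, rather than a single run, makes the comparison across resolutions more robust to scheduler noise.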
Summary: The paper introduces a novel operator-theoretic framework for time-series forecasting, leveraging the Khatri-Rao Neural Operator (KRNO) to learn continuous time-shift operators. By relaxing the discrete lag factor in autoregressive models, KRNO enables super-resolution forecasting in both space and time while handling irregularly sampled observations. The authors demonstrate KRNO’s scalability and competitiveness across benchmark datasets. Claims And Evidence: 1. Efficiency: KRNO achieves near-linear computational cost for non-stationary integral transforms, scaling better than FNO and other neural operators. 2. Flexibility: KRNO handles irregular sampling and super-resolution forecasting by parametrizing the time-shift operator as a continuous kernel. 3. Superior Performance: KRNO ranks among the top 3 methods on 21/29 test cases and tops 10/29 datasets. Weakness: 1. Overstated Generality: The claim that KRNO “inherits the benefits of neural operators” (e.g., discretization independence) overlooks its reliance on specific kernel structures (Equation 7). 2. Lack of Theoretical Guarantees: While Proposition 2.1 supports computational efficiency, there is no theoretical analysis of approximation error or stability for non-product-kernel scenarios. Methods And Evaluation Criteria: 1. KRNO is built on non-stationary integral transforms with component-wise kernels decomposed as Khatri-Rao products (Equation 7). 2. The architecture includes lifting/projection layers and three kernel integral transform layers, with neural networks parametrizing each kernel. Weakness: 1. Assumption of Product Grids: KRNO’s near-linear complexity relies on input/output data lying on product grids (e.g., time × latitude × longitude). This limits applicability to irregularly gridded or high-dimensional data. 2. 
Fixed Hyperparameters: The time-shift operator’s boundaries (t_p, t_f) are treated as hyperparameters, but the paper does not discuss adaptive strategies for varying forecasting horizons. 3. Ignoring Temporal Dependencies: The operator learns mappings over fixed windows ([t_p, t] to (t, t_f]), neglecting long-range dependencies beyond the window size. Theoretical Claims: There is no theoretical analysis of how approximation errors in the kernel decomposition (Equation 7) propagate to the overall forecasting error. While KRNO avoids ODE-based adjoints, the paper does not compare its gradient estimation stability to neural ODEs in noisy or high-dimensional settings. Experimental Designs Or Analyses: 1. Lack of Hyperparameter Tuning Details: Key parameters (e.g., number of layers, hidden units) are not fully documented, reducing reproducibility. 2. Inconsistent Evaluation Protocols: Some experiments (e.g., M4) use recursive forecasting, while others (e.g., Darts) rely on fixed window sizes, complicating comparisons. 3. Missing Baselines: Notable omissions include recent SOTA methods like TimesNet (Wu et al., 2022) and LogTrans (Li et al., 2023) for time-series forecasting. 4. Computational Cost Analysis: While Table 7 compares GPU memory, training/inference times are only shown for shallow water (Figure 9), lacking scalability analysis for larger datasets (e.g., M4). Supplementary Material: Yes, I have reviewed all of them Relation To Broader Scientific Literature: Connects KRNO to operator-learning frameworks (DeepONet, FNO) and highlights advantages over autoregressive models (e.g., Transformer, N-BEATS). Essential References Not Discussed: No Other Strengths And Weaknesses: See review above Other Comments Or Suggestions: 1. Clarify the scope of KRNO’s applicability (e.g., product grids vs. arbitrary grids). 2. Add error bounds or stability analysis for non-stationary kernels. 3. Include recent SOTA methods (e.g., TimesNet, LogTrans) in benchmarks. 4. 
Provide full hyperparameter details and reproducibility protocols. Questions For Authors: See review above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback and comments. ## 1. Theoretical Guarantees and Kernel Structure The computational complexity of the kernel integral transform layer scales as O($n^2$), where $n$ is the number of quadrature nodes in the input and output domains. Additional assumptions are required to reduce this complexity. For instance, FNOs assume that the kernel is stationary, thereby enabling the use of FFT. In the present work, we overcome this complexity using product structured non-stationary kernels. The only approximation error comes from the quadrature scheme used to compute the integral in equation 6; the non-stationary kernel is evaluated exactly, without additional errors. We have added an ablation study to show the influence of the number of quadrature points on the generalization performance of KRNO (please refer to our response to reviewer **P25q** on ablation studies). A particularly attractive feature of KRNO is that it allows us to learn a flexible non-stationary kernel for each component of the product structured kernel. As evidenced by our numerical studies, this additional flexibility provides performance gains over stationary kernel-based neural operator architectures, while significantly reducing the model parameter count. ## 2. Implementation and Applicability In our exposition of KRNO, we considered the case of product grids to achieve almost linear scalability. It is worth noting that our approach can be extended to unstructured grids by introducing a learnable function that maps the data from an unstructured grid to a latent product grid as in Geo-FNO [1]. We have added the missing hyper-parameter details for each benchmark in the Appendix. To ensure reproducibility, we have shared the source code and scripts used in our numerical studies at [https://anonymous.4open.science/r/KRNO-1F4F/](https://anonymous.4open.science/r/KRNO-1F4F/). [1] Li, Zongyi, et al. 
"Fourier neural operator with learned deformations for pdes on general geometries." JMLR 24.388 (2023): 1-26. ## 3. Baselines and Evaluation Criteria: We agree that numerical studies on regularly sampled time-series do not allow a clear demonstration of the full capabilities offered by KRNO. We now include additional experiments on challenging irregularly sampled time series benchmark datasets (MIMIC, USHCN, Human Activity, and MuJoCo) and the results are compared against a range of alternative approaches with SOTA performance such as T-PatchGNN [1], NeuralSDE [2], NeuralCDE, PatchTST, and Latent-ODE. KRNO achieves new SOTA performance on three of the four benchmarks; please see the Table below which provides results for three of the datasets. The results for the MuJoCo benchmark are provided in our response to reviewer **kNx6**. | **Method** | **MIMIC MSE×10⁻²** | **MIMIC MAE×10⁻²** | **USHCN MSE×10⁻¹** | **USHCN MAE×10⁻¹** | **Human Activity MSE×10⁻³** | **Human Activity MAE×10⁻²** | |:---:|:---:|:---:|:---:|:---:|:---:|:---:| | TimesNet | 5.88 ± 0.08 | 13.62 ± 0.07 | 5.58 ± 0.05 | 3.60 ± 0.04 | 3.12 ± 0.01 | 3.56 ± 0.02 | | PatchTST | 3.78 ± 0.03 | 12.43 ±0.10 | 5.75 ± 0.01 | 3.57 ± 0.02 | 4.29 ± 0.14 | 4.80 ± 0.09 | | GRU-D | 1.76 ± 0.03 | 7.53 ± 0.09 | 5.54 ± 0.38 | 3.40 ± 0.28 | 2.94 ± 0.05 | 3.51 ± 0.06 | | Warpformer | 1.73 ± 0.04 | 7.58 ± 0.13 | 5.25 ± 0.05 | 3.23 ± 0.05 | *2.79 ± 0.04* | *3.39 ± 0.03* | | mTAND | 1.85 ± 0.06 | 7.73 ± 0.13 | 5.33 ± 0.05 | 3.26 ± 0.10 | 3.22 ± 0.07 | 3.81 ± 0.07 | | Latent-ODE | 1.89 ± 0.19 | 8.11 ± 0.52 | 5.62 ± 0.03 | 3.60 ± 0.12 | 3.34 ± 0.11 | 3.94 ± 0.12 | | T-PatchGNN | *1.69 ± 0.03* | **7.22 ±0.09** | *5.00 ± 0.04* | *3.08 ± 0.04* | **2.66 ± 0.03** | **3.15 ± 0.02** | | KRNO | **1.57 ± 0.02** | *7.43 ± 0.06* | **4.95 ± 0.08** | **3.06 ± 0.08** | 2.85 ± 0.03 | 3.46 ± 0.02 | [1] Zhang, Weijia, et al. "Irregular multivariate time series forecasting: A transformable patching graph neural networks approach." ICML 2024. 
[2] Oh, Y., Lim, D., & Kim, S. Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data. ICLR 2024. ## 4. Computational Cost Analysis Please refer to our first response to reviewer **dfXC** for more clarification on the computational complexity and runtime analysis of KRNO compared to FNO. ## 5. Temporal Dependency You have raised a very important point about the choice of $t_p$ and $t_f$. In our numerical studies, we treated them as hyperparameters. However, it would be valuable to explore how these parameters can be adaptively chosen and how this could potentially improve generalization. This point is also closely related to the application of KRNO to model long-range dependencies. We plan to pursue these directions in future work.
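The product-structured kernel trick from point 1 of the rebuttal above can be illustrated concretely. The sketch below (hypothetical sizes, random stand-in factors in place of the learned kernel networks) shows that applying a Kronecker/Khatri-Rao structured kernel on a product grid never requires forming the dense n × n kernel matrix, using the identity (K_t ⊗ K_s) vec(U) = vec(K_s U K_tᵀ).

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_s = 8, 16                       # time and space quadrature nodes
K_t = rng.standard_normal((n_t, n_t))  # 1-D kernel factors (random stand-ins
K_s = rng.standard_normal((n_s, n_s))  # for the learned component kernels)
U = rng.standard_normal((n_s, n_t))    # input function on the product grid

# Naive application: build the dense Kronecker kernel, O((n_t*n_s)^2) memory.
dense = np.kron(K_t, K_s) @ U.reshape(-1, order="F")

# Structured application: two small matrix products, O(n_t*n_s*(n_t+n_s)) work.
structured = (K_s @ U @ K_t.T).reshape(-1, order="F")

assert np.allclose(dense, structured)
```

The structured path is what gives the claimed near-linear scaling on product grids: the full kernel matrix is never materialized.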
Summary: This paper presents a method for time-series forecasting that treats the task as learning a continuous time-shift operator, approximated via a proposed architecture called Khatri-Rao Neural Operator (KRNO). The operator is modeled as an integral transform with a non-stationary kernel, decomposed via a Khatri-Rao product structure. The resulting model enables forecasting with irregularly sampled data, super-resolution, and a low parameter count. The method is validated across a large suite of temporal and spatio-temporal benchmarks. Claims And Evidence: The main claim of the authors is that they introduce the continuous time-shift operator. There is a drastic disconnect between this claim and the empirical studies. None of the studies demonstrate the importance of the continuous time-shift as a concept. A continuous shift in either time or space can be achieved by a simple MLP mapping time and coordinates, along with some additional features, into predictions. Why do we need KRNO; what is special about it that other architectures cannot do? This is further emphasized by the fact that a large part of the experiments are conducted on regularly sampled non-spatial datasets, for which so many methods work extremely well that the authors would not have enough space in a 20-page paper to review, explain, and present their results. May I suggest that the authors instead focus on a very concrete problem and drill into it, rather than overwhelming the reader with results and studies that are largely irrelevant from the point of view of the problem they claim to solve? Methods And Evaluation Criteria: - I am not quite convinced that the use of regularly sampled forecasting datasets, such as Darts, Crypto, Baseball or M4, is relevant, as the proposed theory and operator-based approach seem most applicable to irregularly spaced and spatio-temporal data. In my view, these results overload the paper and distract attention from other important topics.
For example, there is very little discussion of what the proposed model learns i.e., how the time-shift operator behaves or generalizes across inputs. Similarly, there is very little discussion in the text of the implementation details. How does the inference path of the model look like, exactly? Can you provide equations describing it? Additionally, there are no ablation studies. Theoretical Claims: - Apparent theory/practice disconnect: the transition between theory (quadrature approximations, kernel formulation) and implementation (data-driven neural parametrization) feels abrupt (i.e. Line 213). The theory defines kernels as functions of coordinates, but the practical implementation uses learnable functions of data, raising questions about the operator-theoretic validity of the implementation. Experimental Designs Or Analyses: - The description of many experiments lacks clarity. For example, for Darcy-flow and hyper-elastic benchmark no details are provided at all. What are the problems, why are they relevant for the evaluation of the proposed algorithm? What are dataset/problem sizes, what are the splits? How is the L2-relative error computed, exactly? Similar remarks apply to most benchmarks used in the paper. - The relevance of Figure 2 is not clear. It would constitute a much stronger case if it contained a comparison with baseline methods that significantly underperform on the presented anecdotes. The fact that the proposed technique apparently does reasonably well on these cases does not render convincing evidence of the proposed framework being able to solve problems that other approaches fail to provide adequate solutions for. - The paper contains no ablation studies whatsoever. This is a red flag. What are important model components, key assumptions and their effects on model performance? 
- Missing comparisons to neural controlled differential equations (CDEs), which are designed for irregular and continuous-time forecasting, are only discussed briefly and not benchmarked empirically. Supplementary Material: Yes, Appendices A, B Relation To Broader Scientific Literature: - Overuse of standard mathematical machinery: Much of the theoretical exposition leans heavily on classical results (e.g., Grönwall’s inequality, semigroup continuity), which could have been cited more concisely. Similarly, Proposition 2.1 is a simple consequence of the Khatri–Rao product structure and it does not need a formal statement in the form of proposition or a proof. Its inclusion, while technically correct, adds bulk without substantive new theory. Essential References Not Discussed: [1] Horn et al., Set Functions for Time Series https://arxiv.org/pdf/1909.12064 Offers a non-sequential way to model irregular time series using permutation-invariant functions. [2] Kidger et al., Neural Controlled Differential Equations for Irregular Time Series, https://proceedings.neurips.cc/paper/2020/file/4a5876b450b45371f6cfe5047ac8cd45-Paper.pdf, memory-efficient ODE-based approach to irregular time-series Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: - Can you clarify the distinction between kernels being functions of time/space vs. functions of the data? Does this violate the assumptions made in the integral operator formulation? - Why are neural CDEs or latent ODEs not included in the empirical comparisons, especially given the shared goal of handling irregular sampling? - Can you add ablation studies showing the effects of important model components on model performance? As an example, could you provide an ablation of the quadrature rule used (midpoint vs. trapezoidal vs. learned)? What are other important components and their effects on the model accuracy/speed/memory? 
- Is the method capable of forecasting in the presence of measurement noise or exogenous inputs? Code Of Conduct: Affirmed. Overall Recommendation: 3
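The quadrature-rule ablation requested above (midpoint vs. trapezoidal) can be illustrated with a toy comparison; the integrand below is a synthetic stand-in, not the learned kernel, and the grid size is arbitrary.

```python
import numpy as np

f = lambda t: np.sin(3 * t) + t**2
a, b, m = 0.0, 1.0, 16
exact = (1 - np.cos(3)) / 3 + 1 / 3   # closed-form value of ∫ f dt on [0, 1]

# midpoint rule: nodes at cell centers, equal weights h
h = (b - a) / m
mid_nodes = a + h * (np.arange(m) + 0.5)
midpoint = h * f(mid_nodes).sum()

# trapezoidal rule: nodes at cell edges, halved weights at the endpoints
edges = np.linspace(a, b, m + 1)
wts = np.full(m + 1, h)
wts[0] = wts[-1] = h / 2
trapezoid = (wts * f(edges)).sum()
```

Both rules are second-order accurate on a smooth integrand, so for a fixed node budget the choice mainly affects where the kernel must be evaluated (cell centers vs. edges), which is exactly the kind of trade-off such an ablation would probe.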
Rebuttal 1: Rebuttal: Thank you for your feedback and comments. ### 1. Evaluation on Irregularly Sampled Datasets We agree that numerical studies on regularly sampled time-series do not allow a clear demonstration of the full capabilities offered by KRNO. We now include additional experiments on challenging irregularly sampled time series benchmark datasets (MIMIC, USHCN, Human Activity, and MuJoCo) and the results are compared against a range of alternative approaches such as T-PatchGNN, NeuralSDE, NeuralCDE, PatchTST, and Latent-ODE. KRNO achieves SOTA performance on three of the four benchmarks. Please see Tables in section *Baselines and Evaluation Criteria* of our response to reviewers **ntzg** and **kNx6**. As suggested, we have updated the Appendix to include more details on the Darcy-flow and hyper-elasticity benchmarks. ### 2. Model Architecture & Implementation Details KRNO's architecture follows the general structure of neural operators (lines 129-150), i.e., lifting layers followed by series of kernel integral layers and a projection layer. The key difference is our use of a non-stationary product structured kernel in the kernel integral transform (Equation 6). The KRNO representation of the continuous time-shift operator can be written as: $$ \mathcal{A}_{t_p}^{t,t_f}:= \mathcal{P} \circ \mathcal{K}_n \circ \ldots \circ \mathcal{K}_1 \circ \mathcal{L}, $$ where $\mathcal{K}_i: \mathbb{R}^{p} \to \mathbb{R}^{q}$ is a kernel integral layer defined in Equation 5, $\mathcal{L}: \mathbb{R}^{n} \to \mathbb{R}^{c}$, and $\mathcal{P}: \mathbb{R}^{c} \to \mathbb{R}^{n}$ are the pointwise lifting and projection layers parameterized by neural networks with one hidden layer, and $c$ is the number of channels in integral layers. To ensure reproducibility, we have shared the source code and scripts used in our studies at [https://anonymous.4open.science/r/KRNO-1F4F/](https://anonymous.4open.science/r/KRNO-1F4F/). ### 3. 
Theoretical Consistency Please note that the matrix-valued kernels in KRNO are consistently defined as functions of time and space throughout the paper and in our implementation. The kernel integral layers in Equation 5 use these parametrized space-time kernels to learn mappings between function spaces. As a consequence, the numerical implementation is aligned with the operator-theoretic formulation. ### 4. Ablation Studies Thank you for suggesting insightful ablation studies. We have conducted ablation studies as suggested to better understand the key components that influence generalization. One of the key parameters is the number of integral layers/channels. Larger values of this parameter enhance model capacity at the expense of increased memory usage for high-resolution data. This trade-off can be effectively managed by adjusting the number of quadrature points in the kernel integral layers. The following ablation study on the Elasticity problem (learning stress fields from void deformation in elastic blocks) confirms this flexibility. Using training data given on a 41×41 grid, we notice that reducing latent grid resolution in the integral layers significantly decreases memory requirements with minimal impact on accuracy. We intend to include additional ablation studies to illustrate the impact of the quadrature rule on generalization performance. |Latent Grid Resolution|L2 Rel. error|GPU Memory (MB)| |:---:|:---:|:---:| |16 x 16 |5.20 ± 0.18 % |1,386 | |24 x 24 |5.14 ± 0.17 % |1,774 | |32 x 32 |5.14 ± 0.26 % |2,186 | |40 x 40 |5.12 ± 0.15 % |2,650 | |48 x 48 |5.16 ± 0.16 % |3,174 | ### 5. Handling Measurement Noise and Exogenous Inputs The new irregularly sampled benchmarks mentioned earlier are challenging with missing and noisy observations. Our results show that KRNO effectively handles such data. In the present work, we formulate the time-shift operator in the setting of deterministic ordinary/partial differential equations. 
This formalism enabled sufficient flexibility to provide strong generalization across temporal and spatio-temporal forecasting problems from diverse domains. For example, for the case of time-series datasets, the forced time-shift operator can be defined as $X\_{(t,t_f]}=\mathcal{A}\_{t_p}^{t,t_f} (X_{[t_p,t]}, f_{[t_p,t_f]})$, where $X_{[t_p,t]}$ denotes the state trajectory over the time-interval $[t_p,t]$ while $f_{[t_p,t_f]}$ denotes the forcing function over the input and forecasting time-interval $[t_p,t_f]$. To ensure that the time-shift operator is causal (i.e., the predictions made at time instant $\tau$ are not influenced by the future values of the forcing function), the integration domain of the kernel integral transform w.r.t. $f$ should be set to $[t_p, \tau]$, where $\tau \in (t, t_f]$. A similar approach can be used to define the time-shift operator for forced spatio-temporal dynamical systems. These extensions would enable our approach to be applied to systems with exogenous effects. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing a comprehensive response. I raise the score slightly based on the current improvements. I would still like the authors to provide more detail regarding the following questions. 1. Theoretical Consistency response is rather shallow. Can you please provide a draft of how the transition in the paper will be made from the kernel theory to the data-driven neural parametrization? Yes, kernels are functions of time and space. But they are not functions of data, aren't they? If you could justify this transition, preferably in a theoretically rigorous way (can you make a case with a theorem, for example?), this would significantly strengthen the paper. 2. The reliance of testing protocols on regularly sampled data brings a lot of bulk in the paper, but does it really demonstrate the key features of the proposed method? May I suggest that a bulk of these studies be moved to appendices? 
Can this leave more space for things such as the quadrature grid ablation? Finally, I do not believe this point has been addressed: > For example, there is very little discussion of what the proposed model learns i.e., how the time-shift operator behaves or generalizes across inputs. > Can you use the space in the paper to actually show something unique about the method working with irregularly spaced data? My honest opinion is that it is very unlikely that the operator based approach designed to handle irregularly spaced spatio-temporal data will be used as a handy replacement for PatchTST or DLinear, or whatever other method optimized for dense regularly spaced data. Why even spend precious paper space emphasizing aspects that are not core to the contribution? 3. What is the forward path of the method, in terms of neural layers? Yes, the code helps to solve the reproducibility issue and I appreciate it. But I also want to understand how the operators translate into actual neural layers (i.e. matrix/bias/activation function operations applied to input tensor) and signal flows through the architecture. Can you present forward pass of the model in the paper using some language similar to this one: https://arxiv.org/pdf/1706.03762 as an example? 4. Much of the theoretical exposition leans heavily on classical results (e.g., Grönwall’s inequality, semigroup continuity), which could have been cited more concisely. Similarly, Proposition 2.1 is a simple consequence of the Khatri–Rao product structure and it does not need a formal statement in the form of proposition or a proof. Its inclusion, while technically correct, adds bulk without substantive new theory. --- Reply to Comment 1.1.1: Comment: We appreciate your thoughtful feedback and the opportunity to clarify aspects of our work. 
## Theoretical Consistency The transition from theory to data-driven parametrization follows established principles in deep kernel methods, where neural networks learn mappings to a feature space in which standard kernels are applied, with all parameters jointly learned. In KRNO, our kernels strictly maintain the form $k(x,t)$ where $x,t$ are space-time coordinates. To ensure expressivity, the kernel is parameterized by neural networks with learned weights and biases, i.e., $k_\theta(x,t)$, where $\theta = \text{argmin}\_\theta \ell(f_{k_\theta},D)$ with $\ell$ denoting the loss function, $f_{k_\theta}$ the operator with kernel $k_\theta$, and $D$ the training data. As an illustrative example, consider a time-series $u(t) \in \mathbb{R}^n$ over $[t_p,t_f]$ modeled using a *single-layer KRNO* with no lifting and projection layers. Let $U_p = [u(t_1),u(t_2),u(t_3)]^T \in \mathbb{R}^{3 \times n}$ and $U_f = [u(t_4),u(t_5)]^T \in \mathbb{R}^{2 \times n}$ denote observations over the time-intervals $[t_p,t]$ (input) and $(t,t_f]$ (output), respectively. The KRNO prediction at time $t_j \in (t,t_f]$ takes the form $$ \begin{align*} \hat{u}(t_j)=\int_{t_p}^{t} k_\theta (t_j,t')u(t')dt' &\approx \sum_{i=1}^{3}k_\theta (t_j,t_i)w_iu(t_i)=[k_\theta (t_j,t_1),k_\theta (t_j,t_2),k_\theta (t_j,t_3)] \begin{bmatrix}w_1u(t_1)\\\\w_2u(t_2)\\\\w_3u(t_3)\end{bmatrix}, \end{align*} $$ where $k_\theta: \mathbb{R} \times \mathbb{R} \to \mathbb{R}^{n \times n}$ is a matrix-valued kernel parametrized using a neural network and $w_i$ are quadrature weights.
The prediction over $(t,t_f]$, i.e., $\hat{U}\_f =[\hat{u}(t_4),\hat{u}(t_5)]^T\in \mathbb{R}^{2\times n}$ becomes $$ \begin{align*} \hat{U}\_f= \mathcal{K}(U_p)=k_\theta (T_f,T_p) \text{vec(diag}(w) U_p) =\begin{bmatrix}k_\theta (t_4,t_1) & k_\theta (t_4,t_2) & k_\theta (t_4,t_3)\\\\ k_\theta (t_5,t_1) & k_\theta (t_5,t_2) & k_\theta(t_5,t_3)\end{bmatrix} \begin{bmatrix} w_1u(t_1)\\\\ w_2u(t_2)\\\\ w_3u(t_3)\end{bmatrix} \end{align*}, $$ where $k_\theta(T_f, T_p) \in \mathbb{R}^{2n \times 3n}$ is the kernel matrix evaluated at the quadrature nodes $T_f=[t_4,t_5]$ and $T_p=[t_1,t_2,t_3]$. The parameters $\theta$ are learned by minimizing the $L^2$ error between $\hat{U}_f$ and the observed $U_f$. Our kernel parametrization approach preserves the operator-theoretic formulation (kernel remains a function of coordinates) while enabling data-driven adaptation through the learned parameters $\theta$. ## Reorganizing benchmarks and revisions Thank you for your suggestion - we will restructure the paper to focus on irregularly sampled data in the main text. In the revised paper, we will emphasize KRNO's uniqueness stemming from its continuous representation in both space and time, offering several key advantages, particularly for irregularly sampled data. For example, at each prediction time $t_j$, the kernel $k_\theta(t_j, t')$ *learns to identify and weight the most relevant historical time points*, without requiring regular sampling. Moreover, the multi-layer structure allows KRNO to discover both local temporal dynamics and global patterns (through the composition of multiple kernel layers). Figure 1 demonstrates this capability where we show super-resolution in both space and time. ## Forward pass of KRNO We like your suggestion of graphically illustrating the KRNO forward pass, identifying tensor shapes. We will include this in the revised paper. 
To illustrate the steps compactly, consider a *single* KRNO layer which includes a pointwise lifting and projection layer with $n$ channels applied to the same example described earlier. Given input sequence $U_p\in \mathbb{R}^{3\times n}$, KRNO predicts $\hat{U}_f=[\hat{u}(t_4),\hat{u}(t_5)]^T\in \mathbb{R}^{2\times n}$ as: $$\hat{U}_f=\mathcal{P}\circ\mathcal{K}\circ\mathcal{L}(U_p).$$ Parametrizing the lifting and projection layers using an MLP with one hidden layer (for brevity), the matrix operations in the forward pass are: - Lifting layer: $U_1= \mathcal{L}(U_p)=(\sigma(U_p W_{l_1}+b_{l_1}))W_{l_2}+b_{l_2}$, where $U_1 \in \mathbb{R}^{3\times n}$ - Integral layer: $U_2'=\mathcal{K}(U_1)= k_\theta(T_f, T_p) \text{vec(diag}(w) U_1), $ where $k_\theta(T_f, T_p) \in \mathbb{R}^{2n\times 3n}$ and $U_2' \in \mathbb{R}^{2n}$ - $U_2=\text{Reshape}(U_2') \in \mathbb{R}^{2\times n}$ - Projection layer: $\hat{U}\_f=\mathcal{P}(U_2)=(\sigma(U_2 W_{p_1} + b_{p_1})) W_{p_2}+b_{p_2}$, where $\hat{U}_f \in \mathbb{R}^{2\times n}$ Here, $W_{l_1},W_{l_2},W_{p_1},W_{p_2}\in\mathbb{R}^{n\times n}$ and $b_{l_1},b_{l_2},b_{p_1},b_{p_2}\in\mathbb{R}^{n}$ are weights and biases. ## Streamlining theoretical exposition. We appreciate your suggestions about the theoretical presentation. In the revised manuscript, we will focus on aspects unique to our contribution. Thank you again for your insightful feedback that will help us improve the manuscript.
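The forward pass described above can also be sketched numerically. The following is a minimal numpy illustration of the single-layer example, not the authors' implementation: the MLP weights and the blockwise kernel matrix are random stand-ins for the learned parameters, and the sizes follow the rebuttal's 3-input / 2-output example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                  # number of channels
sigma = np.tanh                        # activation

def mlp(U, W1, b1, W2, b2):
    # one-hidden-layer pointwise MLP, applied independently to each time row
    return sigma(U @ W1 + b1) @ W2 + b2

# lifting / projection parameters (random stand-ins for learned weights)
W_l1, W_l2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
b_l1, b_l2 = rng.standard_normal(n), rng.standard_normal(n)
W_p1, W_p2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
b_p1, b_p2 = rng.standard_normal(n), rng.standard_normal(n)

U_p = rng.standard_normal((3, n))      # observations at t1, t2, t3
w = np.array([0.1, 0.1, 0.1])          # quadrature weights (stand-ins)

# blockwise kernel matrix k_theta(T_f, T_p) in R^{2n x 3n}: one n x n block
# per (output time, input time) pair; random stand-in for the kernel network
K = np.block([[rng.standard_normal((n, n)) for _ in range(3)]
              for _ in range(2)])

U1 = mlp(U_p, W_l1, b_l1, W_l2, b_l2)                    # lifting: 3 x n
U2 = (K @ (np.diag(w) @ U1).reshape(-1)).reshape(2, n)   # integral layer
U_f_hat = mlp(U2, W_p1, b_p1, W_p2, b_p2)                # projection: 2 x n
```

The row-major flatten and reshape mirror the stacked vectors $\text{vec(diag}(w)U_1)$ and $\hat{U}_f$ in the rebuttal's matrix form.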
Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models
Accept (poster)
Summary: The paper addresses object hallucination, a well-known issue for existing multimodal large language models (MLLMs), where they are prone to generating plausible yet incorrect responses that are not aligned with given images. Authors attribute this to weak robustness and high uncertainty of LLM representations (section 3.2), which they refer to as “amnesia” of MLLMs like LLaVA. From this perspective, authors propose to enhance representations by retracing. Claims And Evidence: Some claims made in this submission are not well supported. A few things should be handled more meticulously. To reveal the amnesia phenomenon of existing MLLMs, authors pass representations from all layers in LLaMA to a vocabulary head, which is aligned with the last layer solely during LLaMA pre-training. Given that most layers are not aligned with the vocabulary head, what such experiments indicate is doubtful and the conclusions do not make sense. This is a problematic and misleading analysis. Another issue is that the authors frame this as amnesia and memory; however, I do not see a memory-involved design in their approach. Meanwhile, the authors seem to confuse the difference between hallucination and error. Specifically, the Figure 5 case is not a clear hallucination problem, as it involves neither a language prior nor a statistical bias. Methods And Evaluation Criteria: The proposed method does not make sense from a technical perspective: as most layers are not aligned with the vocabulary head during pre-training, the analysis in section 3 is problematic. Theoretical Claims: No theoretical claims are made in the main paper. Experimental Designs Or Analyses: The analyses have only a weak relation to the experimental designs. Supplementary Material: I have reviewed the supplementary material, which includes a theoretical analysis of MemVR, more experimental details, and some case studies. 
Relation To Broader Scientific Literature: The proposed method is technically viable for more general tasks than object hallucination, such as MMBench and MMStar. Essential References Not Discussed: The visual retracing appears to be similar to V*; the authors should discuss the common ideas and differences more clearly, along with its follow-up studies. [a] Guided Visual Search as a Core Mechanism in Multimodal LLMs. Other Strengths And Weaknesses: No other strengths and weaknesses so far. Other Comments Or Suggestions: No other comments so far. Questions For Authors: No questions for now. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer 8UAJ for the insightful comments.** > Q1: Given that most layers are not aligned with the vocabulary head, what such experiments indicate is doubtful and the conclusions do not make sense... The idea of applying language heads directly to the hidden states of the middle layers, known as **early exit** [1][2][3], has proven to be effective even without a special training process [4][5] for alignment to the vocabulary head. We also presented this in the paper (*Line 263, right*). [1] Teerapittayanon et al. Branchynet: Fast inference via early exiting from deep neural networks. ICPR, 2016. [2] Maha Elbayad et al. Depth-adaptive transformer. ICLR, 2020. [3] Tal Schuster et al. Confident adaptive language modeling. NeurIPS, 2022. [4] Wei-Tsung Kao et al. BERT's output layer recognizes all hidden layers? Some intriguing phenomena and a simple way to boost BERT. arXiv, 2020. [5] Chuang Y S et al. DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models. ICLR 2024. > Q2: About 'amnesia' and not seeing memory-involved design "Amnesia" is similar to the observation in [6] that the information flow of image tokens converges at shallow layers and diverges at deeper layers, i.e., truncating image tokens no longer affects the answers in the middle or deep layers. VR is implemented in the FFN, and FFNs store knowledge from the training data in the form of key-value (KV) memories [7]; thus we say VR operates in memory space. [6] From Redundancy to Relevance: Enhancing Explainability in Multimodal Large Language Models, 2024. [7] Geva M, et al. Transformer Feed-Forward Layers Are Key-Value Memories. EMNLP, 2021. > Q3: Figure 5 case is not a clear hallucination problem, which is not a language prior or statistical bias The ground truth of the Figure 5 case is "*four* **mangosteen**", but LLaVA outputs "*a* **pomegranate**"; both the count (four -> one) and the object (mangosteen -> pomegranate) are wrong. 
We do not see why this is not a clear hallucination case. 1) Regarding this case, Reviewer 8UAJ thinks it is not a language prior or statistical bias. We understand the reviewer's view that only examples with a language prior or statistical bias, like the misidentification of a black banana as a yellow banana, can be called hallucinations. **However, a language prior or statistical bias is merely one possible source of hallucinations**, as proposed in the Contrastive Decoding series of papers, e.g., VCD, but not the only source. 2) We are not imitating VCD or other papers in presenting a language prior or statistical bias as the cause of hallucination, so we did not purposely choose an example like "black bananas" with a strong language prior. 3) Further, **the other hallucination cases with a language prior or statistical bias in our experiments also show the same regularity, i.e., hallucination tokens exhibit higher entropy than correct ones.**

> Q4: No theoretical claims

We have provided a theoretical framework based on information theory in the paper, **as Reviewers 2zfj, bEEH, and 3FtN mentioned**.

> Q5: The experimental designs or analyses has weak relation to experimental designs

First, we are not sure what is meant by a weak relation. We conducted comprehensive experiments and analyses on 8 benchmarks, with ablation studies examining the impact of different injection ratios, uncertainty thresholds, and static vs. dynamic triggering strategies. We have evaluated 5 MLLMs on 8 benchmarks. These MLLMs use different language models, text-image alignment methods, and training strategies, ensuring effectiveness and generalization across the various testing environments. We have also demonstrated that MemVR can be easily applied to almost all open-source MLLMs.
Compared with SOTA methods, we have done the most complete experiments, as follows:

|Method|Evaluation benchmarks|
|--------|:--------|
|OPERA|POPE, MME, CHAIR, MMBench (4)|
|HALC|POPE, MME, LLaVA-Bench (3)|
|VCD|POPE, MME, LLaVA-Bench (3)|
|ICD|POPE, MME, LLaVA-Bench (3)|
|MemVR (ours)|POPE, MME, LLaVA-Bench, CHAIR, HallusionBench, MMBench, MM-Vet, VizWiz-VQA **(8)**|

**If you still think our experimental design has problems, could you be more specific? We are happy to discuss further.**

> Q6: VR appears to be similar to V*, about the differences

We are unsure why Reviewer 8UAJ raised a paper that does not target hallucinations and is not from a closely related field. Nonetheless, **we follow your comment and make the comparison.**

**In short, the differences between V\* and ours are as follows:**
1) The goal is different. V* is an LLM-guided visual search algorithm, not a method to mitigate hallucination.
2) The framework of V* is the normal structure of MLLMs, which is completely different from our "look-twice" design, where an extra bypass is created.
3) V* needs training with LoRA, but MemVR is a plug-and-play method.

[1] V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs.

---

Rebuttal Comment 1.1:
Comment: I thank the authors for such detailed responses in their rebuttals. The claims made by the authors have addressed most of my concerns, including Q1, where related works are clearly mentioned. The overall framework has now become clearer to me. I am raising my score to weak accept.
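As an illustration of the entropy regularity mentioned in Q3 of the rebuttal above (hallucination tokens exhibiting higher entropy than correct ones), the toy sketch below compares the Shannon entropy of two hypothetical next-token distributions; the distributions and the `token_entropy` helper are invented for illustration and are not from the paper:

```python
import math

def token_entropy(probs):
    # Shannon entropy (in nats) of a next-token distribution; a flatter
    # distribution (a less confident model) yields a higher value
    return -sum(p * math.log(p) for p in probs if p > 0)

# hypothetical 4-word vocabulary: a confidently grounded token vs. an
# uncertain, hallucination-prone one
confident = [0.90, 0.05, 0.03, 0.02]
uncertain = [0.30, 0.25, 0.25, 0.20]
assert token_entropy(uncertain) > token_entropy(confident)
```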
Summary: This paper presents Memory-space Visual Retracing (MemVR), a decoding strategy aimed at mitigating hallucinations in Multimodal Large Language Models (MLLMs). The primary insight is that MLLMs tend to lose visual information during the decoding process, leading to hallucinations due to an over-reliance on textual context. Inspired by human cognition, MemVR introduces a "look-twice" mechanism, where visual tokens are re-injected into intermediate transformer layers based on model uncertainty. This approach leverages the Feed Forward Network (FFN) as a key-value memory module, supplementing missing visual information dynamically. MemVR mitigates hallucinations by balancing modality contributions.
Claims And Evidence: Claims: MemVR mitigates hallucinations by balancing modality contributions. MemVR is computationally efficient, introducing minimal latency. These claims are supported by the experimental results. However, uncertainty-based heuristics may be brittle, as textual uncertainty higher than a fixed threshold does not always indicate a hallucination problem. The ablation in Fig. 7 (left) does not show the usefulness of the threshold.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes. The authors provide an information-theoretic justification, showing that MemVR increases mutual information between hidden states and visual tokens, leading to a decrease in hallucinations.
Experimental Designs Or Analyses: Yes. Comparisons across multiple benchmarks. Different uncertainty thresholds (γ) and their impact. Analysis of hallucination uncertainty per layer.
Supplementary Material: Yes. The appendix provides additional derivations, ablation studies, and dataset details.
Relation To Broader Scientific Literature: The paper situates MemVR well within the multimodal hallucination mitigation literature, contrasting: Retrieval-Augmented Generation, Contrastive Decoding, and attention-based hallucination correction.
Essential References Not Discussed: No
Other Strengths And Weaknesses:
#### **Strengths**
- The paper is well-organized and easy to follow, with a logical flow of ideas.
- The study provides a structured analysis of why VLMs hallucinate, supported by attention visualizations and informative figures.
- MemVR achieves good performance on the benchmarks tested, outperforming prior hallucination mitigation methods.
---
#### **Weaknesses**
- **Limited novelty in the analysis of hallucinations**
  - The discussion in Section 3.2 about VLMs failing to correctly process images is already a well-established issue in the field. The analysis does not provide particularly novel insights beyond existing consensus.
  - Figure 4 lacks clarity regarding "text feature"—does this refer to the text prompt’s feature representation?
  - Since VLMs are inherently trained to answer text-based questions, they naturally rely on textual features. A decrease in text feature importance may lead to text with issues such as incoherence or redundancy.
- **Uncertain about the effectiveness of the method and potential bias amplification**
  - Equation (6) injects additional image information, but does this also introduce image-related bias?
  - Prior works, such as [1], highlight how image-biased hallucination can degrade response quality.
  - [2] discusses how image redundancy can lead to errors. Does MemVR amplify such errors?
  - Why not use the uncertainty score as $\alpha$?
  - A more thorough analysis or potential solution for these problems would strengthen the paper.
- **Tested benchmarks**
  - While the proposed method appears simple, its contribution lacks significant novelty in the VLH domain.
- Given the concerns outlined above, is MemVR overfitting to specific benchmarks rather than providing a robust, generalizable solution? The evaluation could be improved by testing MemVR on more recent and challenging hallucination benchmarks, such as HallusionBench [3], or other reasonable ones.
- **Minor issues and typos**
  - Figure 4 and some others contain a typo in the word "perception".
---
#### **References**
**[1]** *IBD: Alleviating Hallucinations in Large Vision-Language Models via Image-Biased Decoding*
**[2]** *From Redundancy to Relevance: Enhancing Explainability in Multimodal Large Language Models*
**[3]** *HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models*
Other Comments Or Suggestions: See weaknesses
Questions For Authors: See weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: **We sincerely thank Reviewer 3FtN for the constructive comments on our work. We promise to revise the paper.**

> Q1: About textual uncertainty

Yes, textual uncertainty is not a sufficient and necessary condition for the existence of hallucinations, but when hallucinations occur, textual uncertainty tends to be above the threshold, so the MemVR trigger covers the cases in which hallucinations occur. Besides, regarding the ablation in Fig. 7 (left), the curve may not look pronounced because the values are above 1800 while the change is within 100. Specifically, we set $\gamma$ from 0.5 to 1.0; when $\gamma$ is set below 0.6, MemVR is triggered early without performance improvement, while **values between 0.6 and 0.95 improve performance, with the optimal threshold around 0.75 (an increase of 32 points)**, which means MemVR works well under higher uncertainty.

> Q2: The analysis in Section 3.2 does not go beyond existing consensus

Many works have discussed why MLLMs generate hallucinations. Thus, our discussion in Section 3.2 first introduces previous insights, including inherent biases in the training data, visual uncertainty resulting from the model’s statistical bias and priors, and the limitations of current models in accurately discerning context and fact throughout the output generation process. Then, we present our argument that the imbalance of modalities leads to a substantial deviation from the accurate representation of visual input, eventually giving rise to hallucinations, which prior work did not raise. We will continue our efforts to delve into the causes of hallucinations.

> Q3: Regarding "text feature"

The 'text feature' here refers to the embedded text tokens, which are concatenated with visual tokens to form the input of the LLM.

> Q4: Does VR introduce image-related bias?

[1] constructs the image-biased model by adjusting the attention weight matrices (i.e., amplifying the image tokens) within the vanilla model.
In Equation (6), **VR only modifies the hidden state in the FFN layer (i.e., memory about visual information), using the original hidden state as a query to search for supplemental visual features**, rather than directly amplifying the image tokens in the attention weight matrices; thus VR does not introduce image-related bias as [1] does.

[1] IBD: Alleviating Hallucinations in Large Vision-Language Models via Image-Biased Decoding

> Q5: Does MemVR amplify image redundancy errors?

In the conclusion of [2], image tokens are highly redundant after the cliff layer, where truncating image tokens no longer affects the answers in the middle or deep layers. **This is because LLMs are more text-informed with "attention sinks", and the token embeddings of image tokens may not align well with the model’s text-based training.** This does not mean that image redundancy leads to errors. In MemVR, VR supplements visual information to the middle or deep layers of the LLM by taking the original hidden state as the query to search for the visual features that need to be supplemented, without introducing image-related bias or amplifying image-redundancy errors. Experimental results on 8 benchmarks show that VR effectively improves performance.

[2] From Redundancy to Relevance: Enhancing Explainability in Multimodal Large Language Models

> Q6: Why not use the uncertainty score as $\alpha$?

Thank you for the nice suggestion. The uncertainty score is usually between 0.5 and 0.99, which is large, and the original knowledge would be confused if we used the uncertainty score directly as $\alpha$.
To achieve a dynamic injection ratio, we calculate $\alpha$ as 2*(uncertain_score – threshold); we name this variant MemVR++. The results are as follows:

LLaVA-Bench:

|Method|average|all|complex|conv|detail|
|--|:--:|:--:|:--:|:--:|:---:|
|LLaVA1.5|64.80|50.80|74.60|52.90|52.10|
|VCD|63.20|48.50|77.90|52.40|50.80|
|ICD|56.90|40.20|78.20|35.30|42.20|
|MemVR|65.17|51.30|77.90|55.90|52.60|
|MemVR++|65.87|51.70|81.80|51.20|49.60|

POPE (MSCOCO):

|Method|Random|Popular|Adversarial|Average|
|--|:--:|:--:|:--:|:--:|
|LLaVA1.5|83.49|79.98|76.03|79.83|
|MemVR|88.50|87.10|85.20|86.93|
|MemVR++|88.40|87.17|85.17|86.92|

The results show that, compared with MemVR, MemVR++ achieves the same improvement, and is sometimes even better.

> Q7: Tested benchmarks and contribution

Following your suggestion, we tested MemVR on the challenging **HallusionBench**; the results are as follows:

|Method|$fAcc$|$easy aAcc$|$hard aAcc$|$aAcc$|
|--|:--:|:--:|:--:|:--:|
|LLaVA1.5|17.92|36.04|36.74|41.45|
|VCD|13.87|36.92|34.65|41.10|
|OPERA|16.19|37.58|35.35|41.19|
|ICD|13.87|32.97|33.49|38.18|
|MemVR|18.50|36.48|37.67|42.34|
|MemVR++|18.50|36.48|36.98|42.07|

where the evaluation is conducted with GPT-4o-mini. The results demonstrate that MemVR and MemVR++ achieve superior performance on HallusionBench.

> Q8: Minor typos

We have revised our paper accordingly; the typo "preception" is corrected to "perception". We sincerely appreciate your valuable suggestions again.
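The dynamic ratio used by MemVR++ in Q7 above (alpha = 2*(uncertain_score – threshold)) can be sketched in a few lines; `dynamic_alpha` is a hypothetical helper name, and returning zero below the threshold (i.e., VR staying untriggered) is our assumption about the untriggered case:

```python
def dynamic_alpha(uncertainty_score, threshold=0.75):
    # MemVR++-style retracing ratio: alpha = 2 * (uncertainty - threshold),
    # so the more confused the model is at the trigger layer, the more
    # visual information is re-injected; zero means VR stays untriggered
    return max(0.0, 2 * (uncertainty_score - threshold))
```

With the threshold at 0.75, an uncertainty score of 0.85 gives a ratio of 0.2, while any score below 0.75 leaves the forward pass unchanged.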
Summary: This paper addresses the hallucination issue in Multimodal Large Language Models (MLLMs) by proposing MemVR, a novel decoding paradigm. MemVR uses visual tokens as supplementary evidence and re-injects them via the FFN at the middle trigger layer. Theoretical analysis shows MemVR can mitigate hallucinations by enhancing mutual information, reducing conditional entropy, and optimizing the objective function. Experiments on multiple benchmarks prove its superiority in reducing hallucinations and improving performance.

## update after rebuttal
I have carefully read the other reviewers' comments and the rebuttal. Most of the concerns are well addressed. I will keep my original rating as accept.

Claims And Evidence: The claims are well-supported by comprehensive experiments.
Methods And Evaluation Criteria: By re-injecting visual tokens, MemVR directly addresses the cause of hallucinations in MLLMs, which is the imbalance between visual and textual modalities. The evaluation criteria, including the use of various benchmark datasets, are appropriate. These datasets cover different aspects of MLLM performance, such as hallucination mitigation and general-purpose capabilities, enabling a comprehensive evaluation of MemVR.
Theoretical Claims: To be honest, I did not carefully check the correctness of the theoretical proofs. The proofs are based on established information-theoretic concepts such as mutual information, conditional entropy, and the Data Processing Inequality (DPI).
Experimental Designs Or Analyses: The paper compares MemVR with multiple state-of-the-art methods on various benchmarks. The selection of baselines is comprehensive, covering different types of methods for mitigating hallucinations. The analyses of the experimental results are also valid.
Supplementary Material: I have reviewed the supplementary material. It includes additional experiments, detailed dataset information, implementation details, and case studies.
The additional experiments provide more comprehensive results on different datasets and models, strengthening the findings in the main paper.
Relation To Broader Scientific Literature: The key contribution of MemVR is closely related to the existing literature. Prior works have explored various methods to mitigate hallucinations in MLLMs, such as Retrieval-Augmented Generation (RAG), extra fine-tuning, attention intervention, and Contrastive Decoding (CD).
Essential References Not Discussed: No.
Other Strengths And Weaknesses:
**Strengths:**
1. Originality: MemVR is a highly original approach. It combines the concept of visual retracing inspired by human cognitive behavior with the architecture of MLLMs. This novel combination offers a new perspective on solving the hallucination problem in MLLMs.
2. Experiments: The experimental designs are sound. The paper compares MemVR with multiple state-of-the-art methods on various benchmarks. The selection of baselines is comprehensive, covering different types of methods for mitigating hallucinations. The analyses of the experimental results are also valid.
3. Significance: The research is significant as hallucinations are a major obstacle to the widespread application of MLLMs. MemVR's ability to mitigate hallucinations without sacrificing efficiency can greatly improve the reliability of MLLMs, which is crucial for applications in safety-critical fields such as healthcare and autonomous driving.
4. Clarity: The paper is mostly clear. The concepts, methods, and experimental results are clearly presented. (Minor modifications are needed to further improve its structure; please see below for details.)
**Weaknesses:**
1. Missing comparisons with cross-attention based retrieval. In Eq. 6 a simple retrieval process for VR is adopted instead of cross-attention layers as in previous approaches (Li et al., 2022; Alayrac et al., 2022), but no direct experimental comparisons are reported.
2. The organization of the paper, especially the layout, can be further improved. For example, in Sec. 4.3, it would be clearer if "Static Triggered MemVR" were introduced before "Dynamic Triggered MemVR". Other comments can be found in the Section "Other Comments Or Suggestions" below.
3. Hyperparameter Tuning: The process of determining the optimal hyperparameters for MemVR, such as the injection ratio of visual information and the strategy for selecting the triggered layers, is complex. This may limit the practical application of MemVR, as it requires significant effort to fine-tune these parameters for different models and tasks.
Other Comments Or Suggestions: Typo: Line 245 "a input". Adjust the layout: there are significant distances between the positions of figures and their citations in the main text, which can cause inconvenience to readers. For example, Figure 1 is on Page 1, and the citation for Figure 1 is on Page 5, Line 232. Similarly, Figure 8 is mentioned on Page 5 (Line 220, right), but Figure 8 is on Page 8. Moreover, Figure 3 (Line 198, right) is mentioned after Figure 4 (Line 199, left) and Figure 5 (Line 182, right). Please re-plan the page layout of the paper and shorten the distance between each figure and its citation in the main text as much as possible.
Questions For Authors: No
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: **We sincerely thank Reviewer bEEH for the constructive comments on our work. We are very grateful to the reviewer for recognising the novelty of our idea and the richness and soundness of our experiments.**

> Q1: Missing comparisons with cross-attention based retrieval.

Sorry for the confusion; we wish to clarify that the ‘cross-attention’ mentioned in our paper refers specifically to a text–image alignment strategy used during training, rather than a specific method for mitigating VLM hallucinations. We are simply stating that the VR strategy has less overhead than injecting information through cross-attention. Besides, the cross-attention mechanism needs to be trained. We make complexity analyses of cross-attention, FFN, and VR operations. Let $d$ be the dimension of the hidden state, $D$ denote the dimension of the FFN, and $N_v$ and $N_t$ denote the number of vision/text tokens. We have the computational complexity (CO) analysis:

CO_cross-attn=$\mathcal{O}((N_v+N_v)N_v d+N_vd^2 )$; CO_FFN=$\mathcal{O}((N_v+N_t) d D)$; CO_VR=$\mathcal{O}((N_v+N_t) d N_v)$,

where CO_VR < CO_FFN < CO_cross-attn.

> Q2: The layout can be further improved, and typos.

Thanks for your helpful suggestion. We have revised our paper accordingly, including layout and typos, so that it will be easier to read.

> Q3: Hyperparameter Tuning.

We value your concerns. We develop MemVR++, which alters the injection rate of visual information from fixed to self-adaptive, thus discarding the hyperparameter $\alpha$. Specifically, the retracing ratio $\alpha$ is determined by 2*(layer_entropy – entropy_threshold) when MemVR is triggered. This ensures that the higher layer_entropy is, which indicates that the model is more confused, the higher $\alpha$ will be. This is a global strategy for MemVR. We have tested it on several benchmarks, and the evaluation shows that this dynamic retracing strategy also significantly outperforms the default one.
The results are as follows:

LLaVA-Bench:

|Method|Average|all|complex|conv|detail|
|--------|:--------:|:--------:|:--------:|:--------:|:--------:|
|LLaVA1.5|64.80|50.80|74.60|52.90|52.10|
|VCD|63.20|48.50|77.90|52.40|50.80|
|ICD|56.90|40.20|78.20|35.30|42.20|
|MemVR|65.17|51.30|77.90|55.90|52.60|
|MemVR++|65.87|51.70|81.80|51.20|49.60|

POPE (MSCOCO):

|Method|Random|Popular|Adversarial|Average|
|---|:---:|:---:|:---:|:---:|
|LLaVA1.5|83.49|79.98|76.03|79.83|
|VCD|86.84|82.65|77.31|82.27|
|ICD|84.87|82.93|81.07|82.96|
|MemVR|88.50|87.10|85.20|86.93|
|MemVR++|88.40|87.17|85.17|86.92|

POPE (A-OKVQA):

|Method|Random|Popular|Adversarial|Average|
|---|:---:|:---:|:---:|:---:|
|LLaVA1.5|83.45|79.90|74.04|79.13|
|VCD|86.15|81.85|74.97|80.99|
|ICD|85.57|81.93|77.43|81.64|
|MemVR|91.10|87.33|80.20|86.21|
|MemVR++|91.03|87.50|80.23|86.25|

POPE (GQA):

|Method|Random|Popular|Adversarial|Average|
|---|:---:|:---:|:---:|:---:|
|LLaVA1.5|83.73|78.17|75.08|78.99|
|VCD|86.65|80.73|76.09|81.16|
|ICD|84.90|78.37|75.97|79.75|
|MemVR|89.60|84.63|81.53|85.25|
|MemVR++|89.57|84.60|81.57|85.25|

MME Benchmark:

|Method|Overall|Perception|Cognition|
|--------|:--------:|:--------:|:--------:|
|LLaVA1.5|1864.68|1508.97|355.71|
|VCD|1872.87|1515.01|357.86|
|OPERA|1784.34|1473.62|310.71|
|ICD|1594.77|1306.91|287.86|
|MemVR|1896.72|1512.80|383.92|
|MemVR++|1894.14|1512.00|382.14|

And we supplement the testing of MemVR on HallusionBench.

HallusionBench:

|Method|$fAcc$|$qAcc$|$easy aAcc$|$hard aAcc$|$aAcc$|
|--------|:--------:|:--------:|:--------:|:--------:|:--------:|
|LLaVA1.5|17.92|8.13|36.04|36.74|41.45|
|VCD|13.87|11.43|36.92|34.65|41.10|
|OPERA|16.19|5.49|37.58|35.35|41.19|
|ICD|13.87|8.35|32.97|33.49|38.18|
|MemVR|18.50|9.01|36.48|37.67|42.34|
|MemVR++|18.50|8.35|36.48|36.98|42.07|

where the evaluation is conducted with GPT-4o-mini. The results demonstrate that MemVR and MemVR++ achieve superior performance. We will include this part in our revised version.
We sincerely thank you for your kind suggestions.
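The complexity comparison in Q1 of the rebuttal above can be sanity-checked numerically. The sketch below transcribes the three cost expressions from the rebuttal verbatim, using illustrative LLaVA-like dimensions ($d$, $D$, $N_v$, $N_t$ are our assumed values); it only verifies that the VR term is the cheapest of the three, which is the part of the ordering that motivates the training-free retrieval design:

```python
# illustrative dimensions (assumptions): hidden size d, FFN width D,
# number of vision tokens N_v, number of text tokens N_t
d, D = 4096, 11008
N_v, N_t = 256, 64

# cost expressions transcribed from the rebuttal's CO analysis
co_cross_attn = (N_v + N_v) * N_v * d + N_v * d * d
co_ffn = (N_v + N_t) * d * D
co_vr = (N_v + N_t) * d * N_v

# the VR retrieval is the cheapest operation of the three
assert co_vr < co_ffn and co_vr < co_cross_attn
```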
Summary: This paper introduces Memory-Space Visual Retracing (MemVR), a novel decoding approach to mitigate hallucinations in Multimodal Large Language Models (MLLMs). The authors posit that hallucinations often occur due to the model's tendency to "forget" visual information during text generation, and they address this by developing a "look-twice" mechanism that reinjects visual tokens into the model's middle layers when uncertainty is detected. Unlike existing contrastive decoding approaches, MemVR modifies intermediate hidden states rather than output logits, avoiding the need for multiple decoding passes. The authors evaluate their method across seven benchmarks and demonstrate superior performance in both hallucination mitigation and general capabilities while maintaining efficiency in inference time.

## update after rebuttal
The authors' rebuttal has addressed my concerns. However, I still doubt the effectiveness of MemVR compared to retrieval-based methods, and I'm concerned that the novelty is somewhat limited, as reviewer 3FtN mentioned. In my opinion, it's a borderline paper considering the technical contributions.

Claims And Evidence: The paper claims MemVR:
1. mitigates hallucinations more effectively than existing methods;
2. maintains or improves general MLLM capabilities;
3. is computationally efficient compared to alternatives;
4. is plug-and-play and task-agnostic.
Evidence is provided through extensive evaluations on hallucination benchmarks (POPE, CHAIR) and general benchmarks (MME, MM-Bench, LLaVA-Bench, etc.)
Methods And Evaluation Criteria: Yes. This paper adopts several multimodal benchmarks to evaluate the proposed method, such as POPE and CHAIR for evaluating hallucination.
Theoretical Claims: The paper provides a theoretical framework based on information theory:
* MemVR enhances mutual information between hidden states and visual evidence;
* MemVR optimizes the Information Bottleneck objective.
Experimental Designs Or Analyses: The experimental design is comprehensive, evaluating on:
1. Hallucination-focused benchmarks: POPE (using COCO, A-OKVQA, GQA datasets) and CHAIR
2. General capability benchmarks: MME, MM-Bench, MM-Vet, VizWiz, LLaVA-Bench
Experiments compare MemVR against baseline MLLMs and state-of-the-art hallucination mitigation methods (OPERA, VCD, ICD) using various models (LLaVA-1.5, Qwen-VL). The authors also conduct ablation studies examining the impact of different injection ratios, uncertainty thresholds, and static vs. dynamic triggering strategies.
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: The authors clearly position MemVR as addressing limitations of contrastive decoding approaches, particularly their inference overhead and potential introduction of noise.
Essential References Not Discussed: The paper covers the most relevant literature.
Other Strengths And Weaknesses:
## Strengths
1. The paper introduces a compelling and intuitive mechanism for hallucination mitigation based on the human cognitive process of "looking twice" when uncertain. The approach of reinjecting visual information at the feature level rather than modifying logits is conceptually different from existing methods.
2. The experiments are extensive, covering both hallucination-specific benchmarks and general capabilities across multiple datasets and model architectures. This demonstrates the method's robustness and general applicability.
## Weaknesses
1. The paper proposes a "look-twice" mechanism for hallucination mitigation, but the core idea of revisiting visual tokens resembles existing retrieval-based and contrastive decoding approaches. The contribution is incremental rather than fundamentally novel.
2. While the proposed method shows good improvements, a detailed analysis of failure cases could strengthen the paper.
3. The implementation of the "dynamic trigger" mechanism lacks theoretical justification for why specific layers are selected for visual retracing. The selection seems heuristic rather than rigorously derived.
4. While the paper claims that MemVR has minimal computational cost, it still introduces additional FFN operations and dynamic triggering logic. The efficiency advantage over contrastive decoding is clear, but a detailed breakdown of computational overhead is missing.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for your insightful comments and kind suggestions.

> Q1: The contribution of revisiting visual tokens.

MemVR is fundamentally different from the current approaches; you may refer to Table 2 in the paper, where we show comprehensive comparisons with existing methods.
1) Retrieval-based approaches incorporate knowledge from an external database, which brings a large memory footprint. *MemVR achieves self-improvement without external knowledge.*
2) CD-based and attention intervention strategies both bring high inference latency, due to multiple rounds of inference or the rollback operation. *MemVR mitigates hallucinations and excels beyond SOTA methods without incurring additional time overhead.*
3) Retrieval-based, CD-based, and attention intervention methods generally act on the textual/visual input (i.e., instruction/image), output logits, or attention matrix. *MemVR reinjects visual information at the hidden states.*
4) Compared with other methods, which succeed on hallucination benchmarks but fail on general benchmarks, *MemVR enables consistent performance boosting on both hallucination and general benchmarks.*

> Q2: Add analysis of failure cases.

Thank you for your valuable and helpful suggestions. We are keen to explore how reinjecting similar visual features without external data might affect model biases. To address this, we have collected failure cases from the MME benchmark, in the 'Celebrity', 'Scene', and 'Landmark' sub-tasks, where MemVR underperforms compared to the default model, as follows.
| Right numbers | existence | count | position | color | posters | celebrity | scene | landmark | artwork | OCR | CommR | numerical_cal | translation | code |
|--- | :----: | :-----: | :----: | :-----: | :-----: | :----: | :----: | :-------: | :----: | :----: | :---: | :---: | :-----: | :-----: |
|Total| 60 | 60 | 60 | 60 | 294 | 340 | 400 | 400 | 400 | 40 | 140 | 40 | 40 | 40 |
|LLaVA1.5-7B| 58 | 51 | 45 | 54 | 241 | 266 | 342 | 352 | 286 | 32 | 97 | 18 | 27 | 21 |
|MemVR | 58 | 51 | 46 | 54 | 241 | 264 | 341 | 351 | 288 | 32 | 102 | 18 | 28 | 23 |

We categorize MemVR's failures into two types: 1) cases where the default model provides the correct answer, but MemVR outputs an incorrect one; 2) cases where both the default model and MemVR produce incorrect answers.

For failure type 1, we attribute the failure to over-disturbance of the default model's reasoning process. In these instances, the original visual features are sufficient for reasoning, and the reinjected tokens inadvertently disrupt this process, leading to errors. We are actively investigating methods to mitigate such disturbances.

For failure type 2, the failures arise from either the excessive complexity of the image or gaps in the LLM's knowledge base, which prevent correct reasoning even after VR. We will include these bad cases as well as the analysis in our paper.

> Q3: Why are specific layers selected for VR?

The layer selection for the dynamic trigger is theoretically grounded in information entropy analysis. Specifically, we employ the layer-wise entropy $H(x)=-\sum p(x) \log p(x)$ as an information bottleneck metric to identify critical transition points where feature uncertainty reaches local maxima, which guides the information entropy of the trigger layer from a high to a low state under the Data Processing Inequality [1].
While our implementation adopts an efficient thresholding mechanism, the core selection criterion stems from the established analysis of hierarchical feature stabilization patterns in [2], rather than arbitrary heuristics.

[1] Cover, T. M. et al. Entropy, relative entropy and mutual information. Elements of Information Theory, 2(1):12–13, 1991.
[2] Teerapittayanon et al. BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks. ICPR, 2016.

> Q4: Detailed breakdown of computational overhead.

LLaVA's components include ViT + MLP + LLM (self-attention and FFN). Suppose $L_v$ and $L_l$ are the numbers of layers of the ViT and the LLM, $d_v$, $d$, and $D$ are the dimensions of the ViT, self-attention, and FFN, and $N_v$ and $N_t$ denote the number of vision/text tokens.

1) Computational overhead of VCD: FLOPs$_{\text{VCD}}$=$2*(L_v(N_v^2 d_v+N_v d_v^2)+N_v d_v d+L_l[(N_v+N_t)^2 d+(N_v+N_t) dD])$

2) Computational overhead of MemVR: FLOPs$_{\text{MemVR}}$=$L_v(N_v^2 d_v+N_v d_v^2)+N_v d_v d+L_l[(N_v+N_t)^2 d+(N_v+N_t) dD] + \underline{(N_v+N_t) d{N_v}+L_o (N_v+N_t) d}$,

where $L_o \lt L_l$. Clearly, MemVR adds the $\underline{\text{underlined}}$ overhead terms, but they are negligible since $(N_v+N_t) d$ is a low computation and $N_v \ll D$; for instance, $D = 11008$ and $N_v = 256$ for LLaVA, thus $(N_v+N_t) d{N_v}\ll (N_v+N_t) d D$, which makes the VR operation efficient overall.

We hope the content above can address your concerns. Please let us know if you have further questions. We sincerely thank you for your kind suggestions again.
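The negligibility claim in Q4 above can be checked with a quick back-of-the-envelope computation; $D$ and $N_v$ are the values quoted in the rebuttal for LLaVA, while $d$, $N_t$, and $L_o$ are illustrative assumptions of ours:

```python
# D and N_v are quoted in the rebuttal for LLaVA; d, N_t, L_o are assumed
d, D = 4096, 11008
N_v, N_t = 256, 64
L_o = 16  # number of layers after the trigger (L_o < L_l)

ffn_per_layer = (N_v + N_t) * d * D  # baseline FFN cost for one layer
memvr_extra = (N_v + N_t) * d * N_v + L_o * (N_v + N_t) * d  # underlined terms

overhead_ratio = memvr_extra / ffn_per_layer
# the added VR cost is a small fraction of even a single FFN pass,
# consistent with N_v << D
assert overhead_ratio < 0.05
```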
Online Curvature-Aware Replay: Leveraging $\mathbf{2^{nd}}$ Order Information for Online Continual Learning
Accept (poster)
Summary: The paper combines experience replay with a second-order optimizer derived from a KL constraint to tackle online continual learning. The paper improves SOTA results on three vision-based online continual learning benchmarks.
Claims And Evidence: The main claims on OCAR improving online continual learning optimization and performance are supported by convincing evidence. They report improved scores on standard benchmarks, have toy experiments to contrast the learning trajectories of OCAR vs. baselines, and investigate the stability-plasticity tradeoff of OCAR hyperparameters. However, the paper would significantly benefit from expanding on some of its methodology derivation-related claims in Section 4.
- Line 155: Why does a sequence of local optimization problems solved at each step prevent the use of "global approaches" like learning rate decay and momentum? In addition, it would be good to put references on why first-order optimization can be cast as a sequence of local optimization problems.
- Line 170: Where is the fast adaptation constraint (D4) achieved?
- Line 133: How does the 2nd order KL divergence Taylor approximation formula hold? In addition, the sentence "after approximation, the solution for problem 1 is $\delta_t^* = -\tfrac{1}{\lambda}(\nabla_{N_t} + \nabla_{B_t})$" is very confusingly introduced, since the previous equation gives a second-order Taylor expansion of the KL divergence whereas the sentence refers to the parameter update solution of the first-order Taylor expansion of the KL divergence.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: The experimental designs and analyses are valid.
Supplementary Material: I have read the supplementary material.
Relation To Broader Scientific Literature: The paper investigates alternative optimization methods aside from the commonly used SGD to improve online continual learning.
Their method is highly related to LPR [1], which uses a block diagonal preconditioner to improve online continual learning accuracy and optimization efficiency. [1] Yoo, J., Liu, Y., Wood, F., & Pleiss, G. (2024). Layerwise proximal replay: A proximal point method for online continual learning. arXiv preprint arXiv:2402.09542. Essential References Not Discussed: To my knowledge, relevant references are discussed. Other Strengths And Weaknesses: The paper's strength is its originality since not many works investigate how to better optimize neural networks during online continual learning. In addition, the use of natural gradient descent-like approaches, as the paper states, draws interesting connections with information geometry (though the authors do not expand on this much). The paper's greatest weakness is its lack of clarity arising from highly unintuitive notations and derivations. For example, $\nabla, \mathbf{H}$ are used as both operators and vectors/matrices. In addition, $\hat{KL}$, which is suggestive of KL divergence that accepts probability distributions as inputs, accepts random variables as inputs. For the SGD derivation for experience replay, the paper gives the second-order Taylor expansion of the inner optimization objective and provides a parameter update solution for the first-order Taylor expansion of the inner optimization objective. Adjusting issues like this would make the paper much stronger. Other Comments Or Suggestions: I suggest renaming "Tikhonov regularization" in the paper to "Tikhonov damping" as done by Martens and Sutskever [1] to minimize confusion from the fact that the paper's Tikhonov regularization applies to inner level optimization objective of Eq 1, not to the outer level experience replay optimization objective. In addition, [1] Martens, J., & Sutskever, I. (2012). Training deep and recurrent networks with hessian-free optimization. In Neural Networks: Tricks of the Trade: Second Edition (pp. 479-535). 
Berlin, Heidelberg: Springer Berlin Heidelberg. Questions For Authors: - Have you considered deriving OCAR parameter update purely from $KL$-constrained inner optimization problem of a first-order Taylor expanded objective? This seems possible and would be a much simpler way of deriving OCAR update that does not require strictly working with $KL$ objective function. This would also let OCAR-DER++ and OCAR-ACE an instantiation of the paper's framework, since OCAR would no longer be constrained to a specific form of experience replay loss. - In addition, the Hessian of KL divergence is simply a Fisher information matrix, so there's no reason to state "we approximate the two Hessian matrices of our solution with the FIM, greatly simplifying the computations". - For computing the Fisher information matrix, how many additional forward/backward passes does OCAR have to make? - How well would OCAR scale to larger models, compute and memory wise? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for your insightful review. We are happy to address your comments: 1) **Line 155**: We consider our stream to be arbitrary (we can have a different task for each step). If the objective has changed (task boundary), and we are at a different point in the space, the previous gradient history is not beneficial (see [1] for OL setting). On the other hand, when the length of the stream is not known, learning rate decay diminishes plasticity in all directions, stopping the learning at the limit. We will clarify these points in the paper. The idea of casting the first order optimization as a sequence of local optimizations can be found in [2], Section 7. We will refer to this. 2) **Line 170**: The Hessian matrix increases the convergence speed as in a Newton-like step, improving fast-adaptation w.r.t. first-order methods. Here we are stretching the definition of fast-adaptation to include also buffer elements that are not fully learned (note that the stability condition is defined w.r.t. the model and not the data). We will rewrite the sentence to be more transparent about our "stretch". 3) **Line 133**: Unfortunately, there is a typo on the Taylor expansion, missing the $\frac{1}{2}$ term. We will correct it. 4) **Confusing sentence**: You are right! The sentence about the first-order solution is confusing because we misplaced the second-order Taylor expansion in the first-order section. We will correct it. 5) **Notation**: You are right again. We tried to lighten the notation as much as possible, but it is now confusing. We will distinguish the function symbol from its resulting object (e.g. $\mathcal{H}$ and $\mathbf{H}$ for the Hessian; $g$ and $\nabla$ for the gradient). On the other hand, the $\hat{KL}$ function is defined in line 110 as the sample estimation of the real KL computed on probability distributions. 
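For reference, a hedged sketch of the corrected expansion from point 3, in our own notation (the standard second-order Taylor expansion of the KL around the current parameters, where the zeroth- and first-order terms vanish and the Hessian reduces to the FIM $F(w_t)$; this is not verbatim from the paper):

```latex
\hat{KL}\big(f_{w_t}(x) \,\|\, f_{w_t+\delta}(x)\big)
  \approx \tfrac{1}{2}\, \delta^\top F(w_t)\, \delta
```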
6) **Tikhonov damping**: We happily welcome the suggestion of renaming the "Tikhonov regularization" to "Tikhonov damping". 7) **Different OCAR derivation**: This is an interesting idea. Our work aimed to propose a standalone approach leveraging the natural gradient (NG) theory, which shows how the KL generates the FIM as a Riemannian metric. To connect to NG, we needed to use the KL as the objective function to derive the FIM matrix that is shown to accelerate learning (see [3]). With a general loss function, we would lose this connection. Moreover, a non-KL-derived loss can be tricky. In our experiments, OCAR-DER++ was very unstable, failing the optimization on Imagenet. We believe this idea can be a valuable future research direction. 8) **Hessian of KL**: On this point, we have to disagree. The Hessian of $KL(p_{\theta'} || p_\theta)$ is equal to the FIM only when evaluated at $\theta' = \theta$, near the minimum of the KL (see [4], [3]). In our case, this is true only for the constraint $\hat{KL}(f_{w_{t-1}}(x_{B_t}) || f_{w_t}(x_{B_t}))$, but false for $\hat{KL}(y_{N_t} || f_{w_t}(x_{N_t}))$ and $\hat{KL}(y_{B_t} || f_{w_t}(x_{B_t}))$. Our approximation, to compute a single FIM instead of a FIM and a Hessian, is to assume the model is "the correct" model for both new and buffer data. 9) **Additional passes**: We need to compute only a single FIM with a weight $(1+\lambda)$ on the buffer data. Using a log-likelihood loss, the FIM can be computed in closed form, requiring only an additional forward pass for the entire batch, and a backward pass for each output, when an update is needed. 10) **Scaling**: OCAR is efficient w.r.t. other SOTA approaches (see training times in Appendix B). The most intensive operation is the K-FAC computation and inversion. For fully-connected layers, K-FAC stores and inverts, for each layer, a factor $n_{in} \times n_{in}$ and a factor $n_{out} \times n_{out}$. [5] shows that training large models with K-FAC is possible. 
Moreover, one of our future directions is to explore the use of E-KFAC to make the method more efficient. Your review has been invaluable, and we are profoundly grateful for this. We hope we addressed all the concerns. We are happy to engage more if you would like! *** **References**: [1] Yuan, K., Ying, B. and Sayed, A.H., 2016. On the influence of momentum acceleration on online learning. Journal of Machine Learning Research, 17(192), pp.1-66. [2] Martens, J., 2020. New insights and perspectives on the natural gradient method. Journal of Machine Learning Research, 21(146), pp.1-76. [3] Amari, S.I., 2016. Information geometry and its applications (Vol. 194). Springer. [4] Ollivier, Y., Arnold, L., Auger, A. and Hansen, N., 2017. Information-geometric optimization algorithms: A unifying picture via invariance principles. Journal of Machine Learning Research, 18(18), pp.1-65. [5] Pauloski, J.G., Huang, L., Xu, W., Chard, K., Foster, I.T. and Zhang, Z., 2022. Deep neural network training with distributed k-fac. IEEE Transactions on Parallel and Distributed Systems, 33(12), pp.3616-3627. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. Trusting that the authors will do an extensive notational cleanup and fix the paper's errors, I will raise my score to accept, given the paper's novelty and interesting analysis. A minor detail is that the natural gradient (and descent) can be derived in other ways without explicit reliance on the KL objective [1]. [1] Lin, W., Nielsen, F., Khan, M., & Schmidt, M. (2021). Introduction to Natural-gradient Descent: Part II. --- Reply to Comment 1.1.1: Comment: We are very grateful for your appreciation. We will definitely polish the paper following your suggestions. Thanks for the reference. We agree that the FIM can be derived from the metric of the space rather than from a specific KL objective. 
In our case, we believe our approach presents a "natural" formalization of forgetting constraints using the KL of the model prediction on the replay data. From the expansion of this, the exact FIM emerges. Following the same process also for the objective makes the method more intuitive and justifiable. Nonetheless, we believe it would be very interesting to analyze the possibility of starting from the metric of the space to inject a constraint on replay data, which can then be used on general objective functions. The complexity of this is that the FIM for the forgetting constraint can be computed on different data (Replay) than the ones used for the objective. Thank you again for this interesting idea.
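To make the update discussed in this thread concrete, here is a minimal toy sketch of a damped, FIM-preconditioned replay step in the spirit of OCAR (our own illustrative NumPy code with a full-matrix Fisher stand-in; the paper itself uses a K-FAC approximation, and all names below are hypothetical):

```python
import numpy as np

def ocar_style_update(grad_new, grad_buf, fisher, lam=1.0, eps=1e-3):
    """Toy OCAR-like step: precondition the combined new-data/replay
    gradient with a Tikhonov-damped Fisher matrix (a full-matrix
    stand-in for the K-FAC approximation used in the paper)."""
    g = grad_new + grad_buf                   # gradient of the joint objective
    damped = fisher + eps * np.eye(len(g))    # Tikhonov damping
    return -np.linalg.solve(damped, g) / lam  # delta = -(1/lam) * F^{-1} g

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
F = A @ A.T + np.eye(4)                       # SPD surrogate for the FIM
delta = ocar_style_update(np.ones(4), np.ones(4), F, lam=2.0)
```

The `eps * np.eye(...)` term plays the role of the Tikhonov damping discussed above: it keeps the preconditioner invertible and bounds the step along directions where the Fisher surrogate has small eigenvalues.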
Summary: This paper proposes Online Curvature-Aware Replay (OCAR) that leverages second-order information of the loss using a K-FAC approximation of the Fisher Information Matrix (FIM) to precondition the gradient in OCL. Claims And Evidence: Overall the claims are well supported. By using second-order methods, it can alleviate the CF problem by enlarging the "sight" of the optimizer with local curvature information. Methods And Evaluation Criteria: I think the evaluation criteria are reasonable Theoretical Claims: I think the theoretical claims are correct Experimental Designs Or Analyses: 1. Missing discussions and comparisons with other OCL methods that aim to alleviate the CF problem from the gradient perspective. 2. Missing analysis and comparison of 'second-order methods' and 'first-order + gradient regularization strategies' Supplementary Material: The supplementary material provides more training details and experimental results, which well support some claims in the main draft. Relation To Broader Scientific Literature: I think it misses comparisons and discussions with other OCL methods that focus on modifying the gradient to alleviate the CF problem. Essential References Not Discussed: Missing references to methods that focus on modifying the gradient to alleviate the CF problem. Other Strengths And Weaknesses: 1. Missing comparisons with other gradient preconditioning methods in OCL Other Comments Or Suggestions: 1. From Table 1, sometimes the proposed method performs worse than existing methods; I suggest providing more discussion on why this happens. 2. I understand second-order methods can alleviate CF, but will they degrade the learning accuracy? Questions For Authors: I understand changing first-order methods to second-order methods can be effective in some cases; however, I am wondering about comparisons with first-order methods + some gradient-altering strategies. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer, we thank you for your review and your time. We try to address all your concerns: 1) **Other gradient-altering strategies**: In our experiments, we tested A-GEM and LPR, both examples of gradient projection methods. LPR (current SOTA) is a variant of preconditioned SGD, preconditioning the gradient to penalize update directions that interfere with replay-data activations. A-GEM instead is a "pure" projection method: when the gradient violates the constraint $g^\top g_{ref} \geq 0$, hence interfering with a previous reference gradient, it is projected via $\tilde{g} = g - \frac{g^\top g_{ref}}{g_{ref}^\top g_{ref}} g_{ref}$. Our method is different from both these approaches: we work on the parameter space, using the Fisher Information (FIM) as a preconditioner, penalizing directions that would interfere with replay-data predictions and accelerating other directions, instead of projecting the gradient only when interfering. We will explain these differences better in the final version of the paper, and we will expand the related work section, which already briefly mentions some works on gradient projection, with additional references, such as [1-5]. In general, our goals and those of most gradient projection methods are different. Gradient projection is often used to prevent forgetting in a replay-free setting (with some notable exceptions like LPR and A-GEM), often requiring expensive SVDs or inner optimizations, making them harder to apply to OCL (see [7]). 2) **First Order + Regularization**: In our paper, we compared with some methods that indirectly alter the gradient with a regularization term on the loss. One notable example is EWC, which adds a penalization for the change of parameters important for previous tasks. Unlike our approach, it uses the FIM in the penalization and not as a preconditioner. In section 6, we compared ourselves directly to EWC, with further explanations in Appendix C. 
We will expand the section, explaining how our method can achieve both stability and plasticity, while EWC can only slow some directions. Moreover, in our experiments, we also compared with other methods that use regularized losses, like DER and ER-ACE. 3) **OCAR worse on Acc**: We agree. We should expand the explanation of the metrics. The *Acc* measures only the final accuracy at the end of the stream, a single point that can be affected by noise and by the length of the stream. From an OCL perspective, the *AAA* metric is more meaningful, measuring the average performance over the entire stream (more similar to the regret in Online Learning). Our method always shows better stability (*Worst Case Acc*) and overall performance (*AAA*) than others, but it is not the best in *Acc* for TinyImagenet. We believe it has something to do with different plasticities in different parts of the stream, with OCAR more efficient at the beginning and, for example, ER-ACE better in the final part. Corroborating this, the combination of OCAR and ER-ACE is dominant on all metrics. 4) **$2^{nd}$-order methods accuracy**: In fact, the opposite happens. While the curvature information penalizes the optimization along the directions important for previous tasks, it also speeds up learning in the other directions, something that most other methods cannot do. In our experiments, the probed accuracy shows that OCAR is learning better representations than the other methods. This improved optimization is based on the information geometry theory, which shows how the FIM can help the optimizer to converge faster to better minima [6]. Again, we thank you for your review and suggestions. We hope we addressed all your concerns. We are happy to engage more if any doubt persists. 
*** **References**: [1] Yichen Wu, Hong Wang, Peilin Zhao, Yefeng Zheng, Ying Wei, Long-Kai Huang: Mitigating Catastrophic Forgetting in Online Continual Learning by Modeling Previous Task Interrelations via Pareto Optimization. ICML 2024 [2] Saha, Gobinda, and Kaushik Roy. "Continual learning with scaled gradient projection." Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 8. 2023. [3] Lin, Sen, et al. "Trgp: Trust region gradient projection for continual learning." arXiv preprint arXiv:2202.02931 (2022). [4] Yang, Enneng, et al. "Data augmented flatness-aware gradient projection for continual learning." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [5] Zhao, Zhen, et al. "Rethinking gradient projection continual learning: Stability/plasticity feature space decoupling." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [6] Amari, S.I., 2016. Information geometry and its applications (Vol. 194). Springer. [7] Yoo, J., Liu, Y., Wood, F. and Pleiss, G., 2024. Layerwise proximal replay: A proximal point method for online continual learning. arXiv preprint arXiv:2402.09542.
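For concreteness, the A-GEM projection contrasted in point 1 above can be sketched as follows (an illustrative NumPy version written by us, not the authors' or A-GEM's original code):

```python
import numpy as np

def agem_project(g, g_ref):
    """A-GEM: project the gradient only when it interferes with the
    reference (replay) gradient, i.e. when g . g_ref < 0."""
    dot = g @ g_ref
    if dot >= 0.0:          # constraint satisfied: keep the raw gradient
        return g
    return g - (dot / (g_ref @ g_ref)) * g_ref

g_ref = np.array([1.0, 0.0])
g_bad = np.array([-1.0, 1.0])   # interferes: g_bad @ g_ref = -1 < 0
g_proj = agem_project(g_bad, g_ref)
```

Unlike a FIM preconditioner, this only alters the gradient when the interference constraint is violated, and leaves it untouched otherwise.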
Summary: This paper formalizes replay-based Online Continual Learning (OCL) as a second-order online joint optimization with explicit KL-divergence constraints on replay data. It proposes Online Curvature-Aware Replay (OCAR) to solve the problem: a method that leverages second-order information of the loss using a K-FAC approximation of the Fisher Information Matrix (FIM) to precondition the gradient. Extensive experiments are conducted. ## update after rebuttal Based on the response of authors, I will keep my scores. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: a. This paper is well-written and easy to follow. b. I appreciate the extensive experiments. c. I think the idea is novel. Weaknesses: a. I am a little bit worried about the time cost of this proposed method. Other Comments Or Suggestions: N/A Questions For Authors: a. In my view, there should be more discussion about the difference and similarity between online learning and online continual learning in Sections 2 and 3. b. It is better to show the time cost of the proposed method. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, thank you very much for your review. We hope to be able to solve your doubts: 1) **OL vs OCL**: We agree that a deeper comparison between Online Learning and Online Continual Learning can be beneficial. In Section 3, we will add a paragraph explaining the main points: both OL and OCL require the model to update with full access only to the last seen data-points, without the whole training dataset; both OL and OCL aim to minimize the cumulative loss (regret) experienced over the whole stream, without knowing the length of it *a priori*; OCL, unlike OL, aims to tackle forgetting, building models that are robust also when tested on past data of the stream; OCL, unlike OL, equips the model with additional information (for example a Replay Buffer) to stabilize the model on old data. 2) **Time Cost**: The table with the time cost of the different models is presented in Appendix B. Given its importance, we will move it to the main paper in the final version. Thanks to its approximations and the efficient use of K-FAC, our method is more efficient than other SOTA approaches tested on the same setting. Thank you again for your review and your suggestions. We hope we tackled all your doubts. If that is not the case, we are happy to engage more.
Summary: The paper addressed the online continual learning setting. The authors proposed Online Curvature-Aware Replay (OCAR), a replay-based method that leverages an approximated Fisher Information Matrix to help both the stability and plasticity. The authors further proposed specific adaptation for online continual learning. Experiments showed improved performance with the proposed method. ## update after rebuttal The authors have addressed all my questions. Therefore I retain my initial rating as Accept. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No. Relation To Broader Scientific Literature: Proposed a better method for replay-based online continual learning. Essential References Not Discussed: There is a missing discussion with the latest works of online continual learning with pretrained models, such as [a, b]. [a] Moon, Jun-Yeong, et al. "Online class incremental learning on stochastic blurry task boundary via mask and visual prompt tuning." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [b] Zhiqi K, Wang L, Zhang X, et al. Advancing Prompt-Based Methods for Replay-Independent General Continual Learning[C]//The Thirteenth International Conference on Learning Representations. 2025. Other Strengths And Weaknesses: Strengths: - The method is well motivated, well developed - The paper is easy to follow - The visualization in figures helps improve readability Weaknesses: There are no serious weaknesses in the paper, I have only some remarks in the Question section. Other Comments Or Suggestions: No. Questions For Authors: - About the assumption of instability In lines 245-246, the authors claimed that the majority of instability happens on the classifier. I am wondering if using a frozen pretrained backbone can help clarify this point. 
- About the replay buffer Since most of the improvement is due to the optimization process, is it possible to design a version of OCAR that works without the replay buffer? For instance, the recent work [b] proposed a replay-independent approach to deal with online continual learning. Do you think OCAR can also boost its performance? [b] Zhiqi K, Wang L, Zhang X, et al. Advancing Prompt-Based Methods for Replay-Independent General Continual Learning[C]//The Thirteenth International Conference on Learning Representations. 2025. - Computation efficiency It would also be interesting to see the computation efficiency of the proposed method, as it is important for online scenarios. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, thank you very much for your review and your time. We are happy to answer all your questions: 1) **Additional references**: Thank you for the suggested papers. They are relevant, and we will add a discussion about them in section 2 in the final version. 2) **Classifier Instability**: Our assumption about the stability of the representation is based on previous works that showed that the representation forgetting is usually small, and often comparable between methods that do not have any control for forgetting and methods that instead try to control it [1]. In our experiments, the *Probed Acc* metric is obtained by freezing the feature extractor after training, and retraining the classifier on all data, to show how good the representations were. The results show that the simple ER has a *Probed Acc* on par with iid training. Our method is even higher due to the improved optimization. This means that the majority of the forgetting happens on the classifier. From this, we can infer that using a pretrained backbone would get a final accuracy between our reported *Acc* and *Probed Acc* results. 3) **OCAR without Replay**: OCAR fundamentally requires Replay data to estimate the Fisher Information Matrix (FIM) required for the stability constraint. Unfortunately, removing the Replay data would hinder the scope of the method. On the other hand, without the Replay, the remaining effect would be to accelerate learning, as with standard second-order methods. Within this scope, the combination with [2] is possible and beneficial. In particular, during the Forgetting-Aware Minimization, the FIM matrix can be useful to penalize directions of high curvature, increasing the chances of finding flatter minima, hence strengthening the relation with Sharpness Aware Minimization. 
4) **Computation Efficiency**: The single-task training times for CIFAR-100 are shown in a table in Appendix B, showing that our method is more efficient than other SOTA approaches. Given the importance for online scenarios, we will move the table into the main paper. We thank you again for the suggestions and ideas. We hope to have satisfied all questions and concerns. We will be happy to discuss this further if any doubts remain. *** **References**: [1] Davari, M., Asadi, N., Mudur, S., Aljundi, R. and Belilovsky, E., 2022. Probing representation forgetting in supervised and unsupervised continual learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 16712-16721). [2] Kang, Z., Wang, L., Zhang, X. and Alahari, K., 2025. Advancing Prompt-Based Methods for Replay-Independent General Continual Learning. arXiv preprint arXiv:2503.00677. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed and insightful rebuttal. I believe the authors have addressed all my questions. Thus, I maintain my initial rating as Accept. --- Reply to Comment 1.1.1: Comment: Thank you very much! It was our pleasure to answer your questions.
FG-CLIP: Fine-Grained Visual and Textual Alignment
Accept (poster)
Summary: Based on the observation that CLIP struggles with fine-grained understanding tasks, this paper proposes 1. a larger dataset including abundant images, bounding boxes and captions; 2. incorporating long captions, short captions and hard negative strategies to enhance CLIP's ability during training. The extensive experiments demonstrate the effectiveness of the proposed approach. Claims And Evidence: Yes, the paper claims that the original CLIP model cannot cope well with fine-grained understanding tasks. This claim is supported by experimental results in Table 1 to Table 6. Methods And Evaluation Criteria: Yes, the proposed method and dataset make sense for the problem. Theoretical Claims: This paper doesn't include any theoretical claims. Experimental Designs Or Analyses: Yes, I have checked the experiment section. The experimental designs are sound for evaluating the fine-grained understanding ability of CLIPs (including the original CLIP, EVA-CLIP, etc., and this paper's proposed FG-CLIP). Supplementary Material: I have reviewed the supplementary material, including visual grounding dataset visualization, positive and negative description examples, attention visualizations and a comparison table on the FG benchmark. Relation To Broader Scientific Literature: 1. This paper enhances the fine-grained understanding ability of CLIPs. 2. A larger dataset is proposed for the community. Essential References Not Discussed: No, all related works are discussed in the paper. Other Strengths And Weaknesses: Strengths: 1. A well-written paper, easy to follow. 2. Effective module design. Weaknesses: 1. Limited ablation and discussion of negative sampling. How many hard negative samples per image-text pair? 2. More visualization on hard negative samples is needed. Other Comments Or Suggestions: This paper constructs a high-quality visual grounding dataset, with 12 million images and corresponding bounding boxes and captions. 
It would be better to compare this dataset with previous ones like LAION, COCO, and so on in a table. Questions For Authors: Please see the above weaknesses and comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: __1. Response to Weakness 1__ Thanks for pointing out this problem. We perform ablation studies on the number of hard negative samples. Specifically, we test configurations with 1, 5, and 10 hard negative samples per positive sample. Our experiments show that 10 hard negative samples per image-text pair yield the best performance. We agree that this ablation study adds significant value to our manuscript and will include it in the final version. Thank you for pointing out this important aspect.

| Fine-Grained Understanding | hard | medium | easy |
| --- | --- | --- | --- |
| FG-CLIP (1 hard negative sample) | 44.49 | 65.87 | 67.82 |
| FG-CLIP (5 hard negative samples) | 45.86 | 66.78 | 67.21 |
| FG-CLIP (10 hard negative samples) | 46.40 | 67.15 | 68.59 |

__2. Response to Weakness 2__ Thank you for your comment regarding the need for more visualizations of hard negative samples. We appreciate the opportunity to provide additional insights into how our method, FG-CLIP, benefits from hard negative sampling. We provide more visualization in https://anonymous.4open.science/r/ICML_RE-3CF6/fgshow.png. Specifically, we extract the dense image feature and visualize the similarity matrix to qualitatively analyze the impact of hard negative sampling. As illustrated in the figures, our FG-CLIP can capture the regions more accurately after performing hard negative sampling. For example, in the first row, the phrase "Man in red clothes" is accurately identified with hard negative loss, whereas without it, the model struggles to capture the correct region. __3. Response to questions in Other Comments Or Suggestions__ Thank you for your valuable suggestion regarding the comparison of our dataset with previous ones such as LAION, COCO, and others. We agree that a detailed discussion in the form of a table would better highlight the unique strengths and contributions of our dataset. 
In addition to our visual grounding dataset of 12 million images with corresponding bounding boxes and captions, we have also incorporated an additional 1.6 billion image-text pairs in the first stage of training. The dataset in the first stage is generated using a large multimodal model to produce higher-quality, fine-grained long captions that capture global-level semantic details. We have compared our dataset with several related datasets, as shown in the table below. Overall, our dataset stands out in terms of scale and quality, particularly in its fine-grained annotations and challenging negative samples. Here are the key points of comparison:

- Scale: Our dataset contains the largest number of images, bounding boxes, and captions among all the datasets except LAION. While LAION has the largest number of captions (2B), the quality of these captions is often noisy and inconsistent. Our supplementary dataset adds 1.6B high-quality, fine-grained long captions, enhancing the overall utility of our data.
- Bounding Boxes: Among the widely used datasets, only COCO provides bounding box annotations. However, our dataset surpasses COCO by an order of magnitude, with 40M bounding boxes compared to COCO's 1.5M.
- Hard Fine-Grained Negative Samples: A distinctive feature of our dataset is the inclusion of 10M hard fine-grained negative samples. These samples help the model differentiate subtle differences in semantically similar pairs, thereby improving its performance across various downstream tasks.

| Dataset | Image | Image caption | Bounding box | Region caption | Hard fine-grained negative sample |
| --- | --- | --- | --- | --- | --- |
| LAION-2B | 2B | 2B | 0 | 0 | 0 |
| Flickr30k | 30K | 150K | 0 | 0 | 0 |
| CC3M | 3M | 3M | 0 | 0 | 0 |
| COCO | 330K | 330K | 1.5M | 1.5M | 0 |
| Ours in stage1 | 1.6B | 1.6B+1.6B | 0 | 0 | 0 |
| Ours in stage2 | 12M | 12M+12M | 40M | 40M | 10M |

--- Rebuttal Comment 1.1: Comment: My concerns are addressed by the authors' rebuttal. Hence, I recommend acceptance. 
--- Reply to Comment 1.1.1: Comment: We are very glad to have resolved the concerns you raised, and we sincerely appreciate your recommendation to 'accept' our work. We will incorporate the new content based on your suggestions into the latest version of our paper.
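As a toy illustration of why the number of hard negatives matters in this thread's ablation (our own sketch with made-up similarity scores, not the authors' training loss):

```python
import numpy as np

def infonce_with_hard_negatives(sim_pos, sim_negs):
    """InfoNCE-style loss for one image: the positive caption similarity
    competes against K hard-negative caption similarities."""
    logits = np.concatenate(([sim_pos], sim_negs))
    logits = logits - logits.max()           # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# 10 hard negatives per pair, as in the ablation above; negatives whose
# similarity is closer to the positive produce a larger training signal.
loss_easy = infonce_with_hard_negatives(5.0, np.full(10, 0.0))
loss_hard = infonce_with_hard_negatives(5.0, np.full(10, 4.5))
```

With harder negatives the denominator grows, so the loss (and hence the gradient pushing positive and negative captions apart) is larger.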
Summary: This paper proposes FG-CLIP, a region-level contrastive learning model for fine-grained image representation. Through training on large-scale synthetic data, this model achieves strong performance compared to previous methods on fine-grained region-level tasks like fine-grained understanding, OVD, image-text retrieval and image classification, serving as a promising vision foundation model compared to CLIP. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The evaluation criteria generally make sense, but more fine-grained datasets like OV-LVIS and V3Det should be considered. Theoretical Claims: The theoretical claims are correct as they mostly come from a previous method (FineCLIP). Experimental Designs Or Analyses: The experimental designs are valid. Supplementary Material: Yes Relation To Broader Scientific Literature: I believe the technical novelty is somewhat limited, as the region-level contrastive learning and hard negative samples are direct adoptions of existing methods; the main contribution is the large-scale synthetic region-text paired data. Essential References Not Discussed: The key contribution of the model and loss is from FineCLIP, published in NeurIPS 2024. The authors should directly discuss this in the methods section. Experiment results on region-level classification should consider Alpha-CLIP (CVPR 2024) for comparison. Other Strengths And Weaknesses: Major strength: Strong performance is achieved compared to previous methods. Major weakness: The methods of regional contrastive learning and hard negative sampling are direct adoptions from FineCLIP and ALBEF, which makes the model contribution marginal. The major contribution is the dataset. As the dataset is also collected and synthesized without new technical contribution, the novelty of this paper is somewhat limited. Other Comments Or Suggestions: No Questions For Authors: 1. Alpha-CLIP should be considered for comparison. 2. 
I personally believe rewritten captions from LVLMs like CogVLM and detection through YOLO-World will restrict the diversity of the data, as the model is also benchmarked on data that are either synthesized (Share4V) or not fine-grained enough (COCO, ImageNet, RefCOCO...). Thus I believe: * A quantitative analysis of data diversity is important. * More fine-grained datasets like LVIS and V3Det should be considered to defend the claim of the paper. 3. If this paper wants to claim that FG-CLIP serves as a better visual encoder for LVLMs, testing only on GQA and POPE is not enough. More numbers on other LVLM benchmarks should be included. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: __1. Response to Question 1 in Essential References Not Discussed__ Thanks for your comments. We discuss the difference between FG-CLIP and FineCLIP and provide experimental results in the first response to reviewer 6zz8. We refer you to that response for more details. __2. Response to Question 2 in Essential References Not Discussed and Question 1 in Questions For Authors__ Thanks for your suggestion to include a comparison with Alpha-CLIP on region-level classification. We test Alpha-CLIP using its "clip_b16_grit1m+mim_fultune_4xe" weight across three testing configurations: Alpha Map, RoIAlign, and Crop&Resize. The Alpha Map method involves creating a mask based on the label box information and then combining this mask with the entire image, which aligns with Alpha-CLIP's training objective. The RoIAlign method is a testing approach in our FG-CLIP. Meanwhile, the Crop&Resize method refers to cropping the image region corresponding to the box and resizing it to the resolution required by ViT. The table in https://anonymous.4open.science/r/ICML_RE-3CF6/compare_with_alphaclip.md shows FG-CLIP consistently outperforms Alpha-CLIP across different configurations. We will add these results to our manuscript. __3. Response to Major weakness__ Thank you for your insightful comment regarding the methods of regional contrastive learning and hard negative sampling, as well as the novelty and contribution of our dataset. __3.1 Regional Contrastive Learning and Hard Negative Sampling__ We have discussed the difference between FG-CLIP and FineCLIP in an earlier response. Here, we highlight distinctions between the hard negative sampling of FG-CLIP and that of ALBEF. ALBEF proposes a strategy to sample hard negatives for its ITM task. Specifically, ALBEF samples hard negatives using softmax-normalized image-to-text and text-to-image similarity to find in-batch negatives, selecting one negative text/image per mini-batch.
In contrast, FG-CLIP conducts a novel pipeline to create challenging fine-grained negative samples. As shown in lines 236 to 251 of the right column of the manuscript, we modify the attributes of bounding box descriptions while keeping the object names unchanged. We generate 10 negative samples for each positive sample. This process generates subtle variations where objects may appear similar but differ in specific details. __3.2 Contribution of the Dataset__ Our work leverages two high-quality datasets. In the first stage, we utilize an extensive dataset of 1.6 billion long caption-image pairs to capture global-level semantic details. In the second stage, we employ a carefully curated dataset of 12 million images with 40 million corresponding bounding boxes and captions, which are specifically designed to provide fine-grained annotations. We also generate 10 million challenging fine-grained negative samples, improving the model's ability to distinguish subtle differences. These datasets enable the model to achieve superior performance on various benchmarks. In summary, we have validated the entire synthesis pipeline used to create these datasets, including innovative techniques such as generating challenging fine-grained negative samples, which can serve as a reference for others. We plan to make this dataset public to support further research in visual grounding and fine-grained understanding. __4. Response to Question 2 in Questions For Authors__ To quantitatively discuss the diversity of our dataset, we compare it with other fine-grained datasets such as LVIS and V3Det. We extract and aggregate category labels from captions generated through steps involving CogVLM and YOLO-world. The following table compares the number of images and unique category labels across different datasets. Notably, even when sampling an equivalent number of images (243k), our dataset yields more unique category labels than V3Det, indicating higher diversity.
We visualize the category labels of sampled data (equivalent to 243k images) using t-SNE plots in https://anonymous.4open.science/r/ICML_RE-3CF6/data_tsne_pic.png . The visualization also shows that our dataset has a more diverse set of category labels at the same image scale. As the dataset scales up to 12M images, the diversity in category labels and captions increases significantly.

Dataset|Image|Caption|Category Label
-|-|-|-
LVIS|164k|1.27M|1.2k
V3Det|243k|1.75M|13k
Ours (sampling 243k)|243k|815k|21k
Ours|12M|40M|128k

To defend the claim of our paper, we evaluate our model on LVIS. The results in the following table show FG-CLIP achieves SOTA performance.

Method|Top-1|Top-5
-|-|-
CLIP|24.79|46.63
EVA|14.36|29.11
FineCLIP|23.29|44.17
FG-CLIP|28.55|52.60

__5. Response to Question 3 in Questions For Authors__ We conduct experiments on other LVLM benchmarks at https://anonymous.4open.science/r/ICML_RE-3CF6/add_result_mm_compare.md . The experimental results show that LLaVA with FG-CLIP achieves better performance. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. Although the technical novelty of the paper is somewhat limited, the thorough experiments presented in the rebuttal are quite impressive and have addressed most of my concerns. I would like to raise my overall recommendation to a 3 and suggest that the detailed results and discussions from the rebuttal be incorporated into the revised paper. --- Reply to Comment 1.1.1: Comment: We are very glad to have addressed most of the concerns and are deeply thankful for your acknowledgment of our rebuttal. This improvement is largely due to the valuable suggestions from you and other reviewers. We will include the content from the rebuttal in the final version.
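As an illustration of the attribute-swap procedure described in the rebuttal above (Section 3.1: modify attribute words while keeping object names unchanged), here is a minimal stdlib-only sketch. The fixed attribute vocabulary, function name, and example caption are hypothetical stand-ins; the actual pipeline rewrites attributes with a language model rather than a lookup table.

```python
# Hypothetical attribute vocabulary; the real pipeline rewrites
# attributes with an LLM rather than a fixed table.
ATTRIBUTE_SWAPS = {
    "red": ["blue", "green", "yellow"],
    "wooden": ["metal", "plastic"],
    "striped": ["plain", "dotted"],
}

def make_hard_negatives(caption, num_negatives=10):
    """Generate negatives by swapping attribute words, keeping object nouns."""
    words = caption.split()
    candidates = []
    for i, w in enumerate(words):
        for repl in ATTRIBUTE_SWAPS.get(w, []):
            candidates.append(" ".join(words[:i] + [repl] + words[i + 1:]))
    return candidates[:num_negatives]

negatives = make_hard_negatives("a red car next to a wooden bench")
# Each negative keeps "car" and "bench" but alters one attribute.
```

Because only one attribute changes per negative, the resulting captions stay close to the positive, which is what makes them "hard" in the sense described above.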
Summary: The proposed method introduces Fine-Grained CLIP (FG-CLIP) for enhancing CLIP's fine-grained understanding capabilities. The authors propose three components to address this challenge: First, they generate 1.6 billion long caption-image pairs for capturing global-level semantic details. Second, they construct a high-quality dataset with 12 million images and 40 million region-specific bounding boxes along with detailed captions. Third, they introduce 10 million hard fine-grained negative samples to help the model learn to distinguish between subtle semantic differences. Training of the proposed method occurs in two stages: the first stage focuses on global contrastive learning using the long caption-image pairs, and the second stage incorporates regional contrastive learning and hard negative samples. The proposed model extends CLIP with position embeddings that can handle longer text (up to 248 tokens vs. the original 77) and uses ROIAlign to extract region-specific features from images. The paper further showcases results on various downstream tasks, including fine-grained understanding, open-vocabulary object detection, image-text retrieval, and general multimodal benchmarks. ## Update after rebuttal The authors have answered all the concerns mentioned in the weaknesses and experimental design sections. Furthermore, additional visualizations and results on datasets like OpenImages and LVIS strengthen their claims. They also propose a lower-resource training setup and plan to release a distilled model, improving accessibility. I increase my score to accept. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence across multiple experiments. The paper's claims regarding improved fine-grained understanding are substantiated in Table 1 and Appendix C.2, while claims about enhanced bounding box classification capabilities are validated in Table 2.
The assertions about superior long caption image-text retrieval and open-vocabulary object detection are demonstrated through comprehensive results in Table 3. Furthermore, the claims about FG-CLIP's effectiveness when used as a backbone for Large Multimodal Models for attribute analysis, object localization, and reducing output hallucination are thoroughly supported by Tables 4 and 5. The paper also provides visualization evidence in Appendix C.1, further reinforcing these claims. Overall, most claims presented in the introduction are backed by quantitative and qualitative results, providing a strong empirical foundation for the paper's contributions. There is one concern, however, further discussed in “Experimental Designs Or Analyses”, namely whether the individual contributions of the new data vs. the new model were substantiated. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The authors properly identify limitations in existing CLIP models regarding fine-grained understanding and propose targeted solutions. Theoretical Claims: No theoretical claims in the paper; the contribution is empirical method design and dataset curation. Equations 1-5 are standard loss functions. Experimental Designs Or Analyses: - There is, however, a major question of fairness in comparison since the current method is trained on data different from that of previous methods. It would be more convincing if prior methods, such as FineCLIP, had been trained on this new dataset to determine whether the improvements stem primarily from the data quality or the model architecture itself. - The paper doesn't clearly show if using more training data is the main reason for better results, especially since FG-CLIP's local and global feature approach is similar to FineCLIP's. - Testing other recent methods on the proposed dataset would make the comparisons more complete.
- More examples showing how the model describes different parts of images would better prove its fine-grained understanding. - The results in Table 6 raise questions because adding regional contrastive learning doesn't improve performance much on “hard” and “medium” fine-grained tasks, which needs more explanation. Supplementary Material: I reviewed the supplementary material, which includes examples of the curated visual grounding data (Appendix A), positive and negative descriptions related to image regions (Appendix B), and visualization comparisons (Appendix C). Relation To Broader Scientific Literature: The key contribution follows previous research on vision-language models like FineCLIP and RegionCLIP, focusing specifically on region-level representation techniques. The key contribution of the proposed novel approach is related to advanced region-based reasoning and cross-modal alignment methods previously explored in multimodal learning research. Essential References Not Discussed: I think all the relevant literature has been discussed and referenced. Other Strengths And Weaknesses: Strengths: - Achieves state-of-the-art performance on multiple downstream tasks, including fine-grained understanding, object detection, and image-text retrieval. - Extensive dataset curation with 1.6 billion long caption-image pairs, 40 million region-specific annotations, and 10 million hard negative samples provides a valuable contribution (beyond the model). - Comprehensive ablation studies demonstrate the individual contributions of each proposed component (global contrastive learning, regional alignment, and hard negative samples). Weaknesses: - While Table 6 shows quantitative improvements from hard negative sampling, there is limited qualitative analysis (such as t-SNE visualizations) demonstrating how subtle differences are actually differentiated.
- The paper shows impressive performance improvements, but it is not entirely clear if these come from the larger dataset or the new model design. While FG-CLIP offers interesting ways of handling local and global features, more detailed studies would help us understand precisely what is driving these performance gains. - OpenImages or specialized fine-grained datasets such as NUS-WIDE that contain naturally fine-grained categories would strengthen the evaluation. Other Comments Or Suggestions: No typos found. Questions For Authors: - Is the proposed method better because of the design or just because of more data? Could you test this by training other models like FineCLIP on the new proposed dataset? - Why doesn't regional contrastive learning improve results much on hard and medium fine-grained tasks in Table 6? This seems unexpected. - Can you show visual examples of how your hard negative sampling helps tell apart similar items? Some visualizations would make your point clearer. - Have you tested your approach on other datasets like OpenImages or NUS-WIDE? This would show your method works broadly. - Your training needs lots of computing power (160×910B NPUs and 8×NVIDIA H800 GPUs). Could researchers with fewer resources still use your approach? Are there more efficient ways to get similar results? Code Of Conduct: Affirmed. Overall Recommendation: 4
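As context for the RoIAlign-based region feature extraction mentioned in the summary above, the sketch below is a crude stand-in that simply average-pools the feature-map cells a box covers. A real RoIAlign uses bilinear interpolation at sub-cell sample points; all names and shapes here are hypothetical.

```python
def region_pool(feature_map, box):
    """Average-pool the feature-map cells covered by `box`.

    feature_map: H x W grid of C-dim feature vectors (nested lists).
    box: (x0, y0, x1, y1) in cell coordinates, end-exclusive.
    """
    x0, y0, x1, y1 = box
    cells = [feature_map[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    dim = len(cells[0])
    n = len(cells)
    return [sum(c[d] for c in cells) / n for d in range(dim)]

# Toy 2x3 feature map with 2-dim features per cell
fmap = [[[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]],
        [[1.0, 4.0], [2.0, 4.0], [3.0, 4.0]]]
pooled = region_pool(fmap, (0, 0, 2, 2))  # pool the left 2x2 region
```

The key idea retained here is that each bounding box yields one fixed-size vector from the shared feature map, so region-text pairs can be contrasted without re-encoding each crop.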
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your constructive comments. We address your concerns below. __1. Response to Questions 1-3 in Experimental Designs Or Analyses and Weakness 2__ Thank you for your comments regarding the improvement factor and fair comparison. The improvement of FG-CLIP stems from both the model architecture and the proposed new dataset. To support this, we detail the differences between FineCLIP and FG-CLIP. Moreover, we train FineCLIP on our proposed dataset to conduct a fair comparison. 1.1 Differences between FineCLIP and our FG-CLIP. - A large-scale dataset with diverse captions can enhance FG-CLIP's fine-grained understanding capabilities. Specifically, our FG-CLIP integrates short captions with long captions, while FineCLIP only utilizes short captions during global contrastive learning. Moreover, we use a large-scale dataset (i.e., 1.6B+12M) rather than the small dataset of FineCLIP (i.e., 2.5M). - FG-CLIP discards the self-distillation strategy used in FineCLIP, which introduces significant additional computational overhead. For example, when using a single GPU with a batch size of 32 samples, incorporating the self-distillation strategy increases memory usage from 25GB to 75GB during training and reduces FPS from 25 to 9.8. This is mainly due to additional feature extraction for each bounding box in the input images. - Hard fine-grained negative sample learning helps FG-CLIP distinguish subtle differences in semantically similar pairs. 1.2 We then train FineCLIP on our proposed dataset. Due to time constraints, we perform this experiment on the 12M dataset, instead of the larger 1.6B+12M setup. From the table, the substantial improvements (Row 1 -> Row 2 & Row 2 -> Row 3) highlight that both our proposed dataset and model architecture are significant for FG-CLIP.
Method|Data Source|COCO-Box-Top-1|COCO-Retrieval-I2T|COCO-Retrieval-T2I
-|-|-|-|-
FineCLIP|FineCLIP (CC2.5M)|50.7|54.4|40.2
FineCLIP|FG-CLIP (12M)|53.5|59.6|46.2
FG-CLIP (Ours)|FG-CLIP (12M)|56.1|65.9|47.1

__2. Response to Question 4 in Experimental Designs Or Analyses__ Thank you for this insightful question. The qualitative results in https://anonymous.4open.science/r/ICML_RE-3CF6/partshow_pic.png show that FG-CLIP is able to capture different parts of images that are strongly related to the input text, demonstrating its fine-grained understanding capabilities. We follow the same experimental settings as Appendix C.1. __3. Response to Question 5 in Experimental Designs Or Analyses__ Thank you for pointing this out. In the "hard" fine-grained task, only one attribute within the caption is replaced. Similarly, there are two replaced attributes within the caption in the "medium" task. However, regional contrastive learning primarily enhances region-text alignment in FG-CLIP, which may struggle to distinguish these fewer replaced attributes. To this end, we propose hard negative sample learning to further improve the overall performance of FG-CLIP. We will make this clearer in the manuscript. __4. Response to Weakness 1__ Thank you for raising this point. We follow the same experimental settings as Appendix C.1 and further provide the qualitative results in https://anonymous.4open.science/r/ICML_RE-3CF6/fgshow.png . After performing hard negative sampling, our FG-CLIP can capture the regions more accurately. For example, the highlighted region of "Man in red clothes" with hard negative loss in the 1st row is significantly better than that without hard negative loss. __5. Response to Weakness 3__ Thank you for this suggestion. We compare several baselines and FG-CLIP on the validation set of OpenImages. We conduct bounding box classification and follow the settings of COCO. The results further demonstrate the effectiveness of FG-CLIP's fine-grained capabilities.
Method|Top-1|Top-5
-|-|-
CLIP|18.02|40.90
EVA|8.83|20.90
FineCLIP|18.10|42.16
FG-CLIP|20.60|47.43

__6. Response to Questions For Authors__ Thank you for these insightful questions. For the first four questions, we refer you to our earlier responses. Here, we focus on addressing the 5th question regarding computational resources. For researchers with fewer computing resources, we suggest discarding the first stage and directly training FG-CLIP on the dataset of 12 million images in the second stage. We conduct this experiment using 4×NVIDIA A100 GPUs, and the training process takes approximately 14 hours. Our experimental results in https://anonymous.4open.science/r/ICML_RE-3CF6/result_different_data.md indicate that this approach achieves slightly lower performance compared to the original settings but remains effective. Another possible method is to distill a model based on our pre-trained weights, which significantly reduces the computational burden. We plan to release detailed guidelines and tools for implementing this distillation process in future work, making our approach more accessible to researchers with limited resources. --- Rebuttal Comment 1.1: Comment: The authors have answered all the concerns mentioned in the weaknesses and experimental design sections. Furthermore, additional visualizations and results on datasets like OpenImages and LVIS strengthen their claims. They also propose a lower-resource training setup and plan to release a distilled model, improving accessibility. I would increase my score to accept. --- Reply to Comment 1.1.1: Comment: We are very pleased that our rebuttal has largely addressed your concerns regarding the weaknesses and experimental design sections. We are also deeply grateful for the professional questions you raised, which have significantly improved the quality of our paper. In future versions, we will incorporate the content from our rebuttal into the manuscript.
Summary: This paper proposes to build a fine-grained CLIP by introducing additional high-quality data and designing specific loss functions for training. As for the data, the original CLIP only uses short global caption data, while this work introduces long global captions, region-level captions, and region-level negative captions for training. For model training, this work introduces a two-stage training framework, with the first stage focusing on global-caption training and the second stage using all data. Experiments on several fine-grained downstream tasks demonstrate the effectiveness of the proposed method. Claims And Evidence: The proposed ideas are verified by sufficient ablation studies with the newly introduced data. Table 6 demonstrates the effectiveness of global contrastive learning, regional contrastive learning, and hard fine-grained negative sample learning. ======================= After carefully reading the rebuttal and other reviews, I think more in-depth discussions of hard negatives should be included in the main text. The challenges mentioned in the rebuttal can possibly be addressed by some carefully designed methods. I also agree with Reviewer 1aL6 that the novelty is somewhat limited, as both fine-grained CLIP and hard negative ideas have been explored before. But I acknowledge that the manuscript is solid work, as other reviewers do. So my final decision is borderline. Methods And Evaluation Criteria: Yes, the proposed method is evaluated on fine-grained tasks, e.g., fine-grained understanding, bounding box classification, long caption image-text retrieval, and open-vocabulary object detection. It also maintains strong performance on coarse-grained tasks, e.g., short caption image-text retrieval and zero-shot image classification. The evaluation metrics follow the standard practices for each task.
Theoretical Claims: N/A Experimental Designs Or Analyses: Overall, this work has made non-trivial contributions by introducing better training data with fine-grained details, which improves the performance on fine-grained tasks. The training is conducted at two levels: the global image level and the local region level. Besides, hard negatives are also introduced during training. However, this work did not sufficiently discuss all possible combinations of training losses. For example, both long and short captions are introduced for global image-level training. Do we also need to consider long and short captions for each region-level box? Hard negatives are only introduced for local region-level training. Do we also need to consider hard negatives during global image-level training? Should we consider hard negatives for both long and short captions? There could be many loss function terms if we consider all the situations above. Can the authors provide a more in-depth analysis of why they use the loss function proposed in the paper? Supplementary Material: Yes, I read all sections of the suppl. One question on Table 7: the proposed FG-CLIP can still give very high scores for hard negatives in many cases. Can the authors provide more insight behind this? Relation To Broader Scientific Literature: The work is built upon CLIP and enriches the fine-grained understanding ability with newly introduced captions with fine-grained details. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. In Line 216, how did you sample K regions in a batch? Is this from the same image? If so, how did you sample K regions if the number of regions in the current image is less than K? 2. In Line 218, how did you segment the full captions into small phrases and establish the connections with visual boxes? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
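For reference on the loss-combination question above: the global and regional contrastive terms in such models typically follow the symmetric InfoNCE form. A stdlib-only sketch is given below; the toy similarity matrix and the temperature value are illustrative assumptions, not the paper's actual settings.

```python
import math

def info_nce(sim, temperature=0.07):
    """Symmetric InfoNCE over an N x N similarity matrix.

    sim[i][j] is the similarity of image (or region) i with text j;
    matched pairs sit on the diagonal. Returns the mean of the
    image-to-text and text-to-image cross-entropy losses.
    """
    n = len(sim)

    def ce_rows(mat):
        total = 0.0
        for i in range(n):
            logits = [mat[i][j] / temperature for j in range(n)]
            m = max(logits)
            log_z = m + math.log(sum(math.exp(l - m) for l in logits))
            total += log_z - logits[i]  # -log softmax at the diagonal
        return total / n

    sim_t = [[sim[j][i] for j in range(n)] for i in range(n)]
    return 0.5 * (ce_rows(sim) + ce_rows(sim_t))

# Toy cosine similarities for 3 matched pairs (diagonal dominant)
loss = info_nce([[0.9, 0.1, 0.0],
                 [0.2, 0.8, 0.1],
                 [0.0, 0.1, 0.7]])
```

Adding hard negatives amounts to appending extra mismatched columns to this matrix, which is why each additional caption type (long, short, per-region) multiplies the number of possible loss terms, as the review points out.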
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and recognizing the non-trivial contributions of our work. In response to your specific questions, we provide detailed explanations below, aiming to clarify any concerns. __1. Response to questions in Experimental Designs Or Analyses__ Thanks for your insightful comments and suggestions. Your points about considering long and short captions at both global image level and region-level boxes, as well as the introduction of hard negatives during global image level training, are indeed critical aspects that deserve further elaboration. As mentioned in our manuscript (line 264), the captions used for region-level training are derived from the global long caption using the SpaCy tool. Specifically, we extract region-specific descriptions from the global long caption to generate a single type of text description for each region-level box. This method is efficient and already contains detailed information about the objects within those regions. Given that this approach adequately describes the objects, it provides sufficient context for regional contrastive learning without the need for additional caption types. Therefore, we only utilized this single type of caption for Regional Contrastive Learning. This streamlined approach not only enhances computational efficiency but also ensures that the model focuses on high-quality, relevant textual descriptions, leading to improved performance. Regarding the consideration of hard negatives during global image level training, we acknowledge that this could potentially enhance the robustness of the model. However, there are practical challenges associated with implementing this approach: - Limitations with Long Captions: We introduce the process of creating challenging fine-grained negative samples in Section 3.2. Specifically, we modify attributes of bounding box descriptions while keeping the object names unchanged. 
For global long captions, which describe multiple regions within an image, generating meaningful hard negatives by modifying attribute words for each region becomes complex. The resulting negative samples might deviate too far from the original caption, losing their effectiveness as "hard" negatives. Consequently, they may no longer serve as hard negative examples for training. - Limitations with Short Captions: As illustrated in Figure 1 of our manuscript, global short captions often lack detailed descriptions of individual objects or regions. This makes it difficult to create meaningful hard negatives through attribute modification, as the short captions may not contain enough fine-grained information to make such modifications impactful. __2. Response to questions in Supplementary Material__ This is indeed an interesting phenomenon and can be attributed to several factors. The fine-grained hard negatives we created involve modifying only a few attribute words in the captions. In some cases, these modifications may still result in captions that are quite similar to the original descriptions of the corresponding image regions. This similarity can cause FG-CLIP to give relatively high scores to these hard negatives because the model perceives them as still being relevant to the image regions. Additionally, in certain scenarios, the modified attributes might correspond to minor changes within the image region, which do not significantly alter the overall visual content. Consequently, the model continues to assign high scores due to the minimal perceptual difference between the original and modified captions. __3. Response to Question 1 in Questions For Authors__ Thank you for your question regarding how we sample K regions in a batch (Line 216). To clarify: K is not a fixed number but rather represents the total number of valid bounding boxes (bbox) across all images within a batch. 
This means that K dynamically adjusts based on the actual number of available regions in the batch. This approach ensures flexibility and adaptability in handling batches with varying numbers of regions without introducing artificial constraints. We acknowledge that this explanation may have been unclear in the initial submission. To avoid any potential misunderstandings, we will provide a more detailed clarification in the final version of our manuscript. __4. Response to Question 2 in Questions For Authors__ We introduce this process in lines 263 to 272 of our manuscript. Specifically, we utilize SpaCy to parse the captions and extract referring expressions. These extracted expressions are then fed into a detection model to generate corresponding bounding boxes. Non-maximum suppression is applied to eliminate overlapping bounding boxes, retaining only those with predicted confidence scores higher than 0.4. This method allows us to effectively link textual referring expressions with their corresponding visual elements, facilitating more accurate and contextually relevant training for our model.
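The confidence filtering and non-maximum suppression described above can be sketched as follows; the IoU threshold of 0.5 and all function names are assumptions on my part (the rebuttal only states the 0.4 confidence cutoff).

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_boxes(detections, conf_thresh=0.4, iou_thresh=0.5):
    """Drop low-confidence boxes, then greedily suppress overlaps."""
    boxes = sorted((d for d in detections if d[1] > conf_thresh),
                   key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in boxes:
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept

dets = [((0, 0, 10, 10), 0.9),    # kept
        ((1, 1, 11, 11), 0.8),    # suppressed: overlaps the first
        ((20, 20, 30, 30), 0.6),  # kept: disjoint
        ((5, 5, 15, 15), 0.3)]    # dropped: below confidence 0.4
kept = filter_boxes(dets)
```

Each surviving box is then paired with the referring expression that produced it, giving the region-text pairs used for regional contrastive learning.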
GTR: A General, Multi-View, and Dynamic Framework for Trajectory Representation Learning
Accept (poster)
Summary: This paper proposes GTR, a general, multi-view, dynamic framework for learning trajectory representation. The authors conduct a thorough review of existing studies and identify three critical limitations in current research: (1) reliance on single-view representations, (2) limited multitasking capabilities, and (3) insufficient support for model updates. To address these challenges, GTR proposes three innovative components: a multi-view encoder (MVE) for capturing diverse perspectives of trajectory data, a spatio-temporal fusion pre-training (STP) mechanism for enhanced multitasking performance, and an online frozen-hot updating (OFU) strategy to facilitate dynamic model updates. Extensive experimental evaluations demonstrate that GTR consistently surpasses 12 state-of-the-art methods across six mainstream trajectory analysis tasks. Furthermore, the experiments validate the superiority and effectiveness of the proposed module designs, as well as the model's scalability and efficiency. These results highlight GTR's potential as a robust and versatile solution for trajectory representation learning. Claims And Evidence: The motivation behind this study is clearly articulated, and the experimental results validate the identified shortcomings of existing work. Methods And Evaluation Criteria: The studied trajectory representation learning problem is important in spatio-temporal data mining community. The authors have selected datasets and tasks that are widely recognized and extensively utilized in the field. Theoretical Claims: No mathematical proofs are provided in the paper. Experimental Designs Or Analyses: This paper conducts a comprehensive and rigorous set of experiments, thoroughly evaluating the proposed GTR framework from multiple perspectives, including model effectiveness, ablation, efficiency, scalability, case studies, etc. Supplementary Material: I have checked the appendix section and the supplementary material. 
Relation To Broader Scientific Literature: The paper has identified three key limitations in previous studies—single-view representation, limited multitasking capabilities, and lack of support for model updates—and proposes innovative solutions through the multi-view encoder (MVE), spatio-temporal fusion pre-training (STP), and online frozen-hot updating (OFU) mechanisms. An extensive evaluation using various tasks and datasets shows the superiority of the framework and its designs. Essential References Not Discussed: The literature review is sufficient and the SOTA works have been discussed and evaluated. Other Strengths And Weaknesses: Strengths: S1. The studied trajectory representation learning problem holds significant importance within the spatio-temporal data mining community, as it addresses fundamental challenges in analyzing and understanding complex movement patterns, which are critical for various real-world applications. S2. The paper has effectively identified three key limitations in previous studies. Then, the paper proposes three innovative solutions through the multi-view encoder (MVE), spatio-temporal fusion pre-training (STP), and online frozen-hot updating (OFU) mechanisms. S3. The extensive experiments have convincingly demonstrated the superior performance of GTR and its individual modules over state-of-the-art baselines across multiple tasks and datasets. S4. The paper is well-structured and easy to follow, making it accessible to readers while effectively conveying its technical contributions and experimental results. Weaknesses: W1. Some parts of this paper are not explained very clearly, which could lead to misunderstandings and ambiguities. i) First, since there are many symbols, a symbol table is recommended for better following.
ii) Besides, although the appendix includes detailed descriptions of relevant studies, the absence of citations of this "Section" in the main body may create the impression that the paper lacks a thorough discussion of prior studies. iii) In Table 1, does "Avg.Length" refer to the spatial length of the trajectory or the number of trajectory points? W2. The experimental section can be optimized. i) For instance, including the percentage increase in experimental tables can indeed enhance the clarity and impact of the results. ii) The calculation processes of some evaluation metrics (e.g., PED for trajectory simplification task, Hausdorff/DTW for trajectory generation task, HR@x for trajectory similarity search task) are not provided. W3. There are also some minor details that need attention. i) "we conduct extensive experiments on two real-world datasets demonstrate"->"we conduct extensive experiments on two real-world datasets to demonstrate". ii) "In this paper, we target perform"->"In this paper, we target performing". iii) "In classification task, there are two labels in beijing dataset"->"In classification task, there are two labels in the Beijing dataset". iv) "Mean Absolut Percentage Error (MAPE)"->"Mean Absolute Percentage Error (MAPE)" Other Comments Or Suggestions: None Questions For Authors: Please respond to the comments in W1-W3. Code Of Conduct: Affirmed. Overall Recommendation: 4
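For readers unfamiliar with the DTW metric mentioned in W2.ii, a standard dynamic-programming implementation over 2-D trajectory points is sketched below (stdlib only; Euclidean ground distance is an assumed choice, and this is not the authors' code).

```python
import math

def dtw(traj_a, traj_b):
    """Dynamic Time Warping distance between two point sequences."""
    n, m = len(traj_a), len(traj_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(traj_a[i - 1], traj_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # advance in a only
                                 cost[i][j - 1],      # advance in b only
                                 cost[i - 1][j - 1])  # match both points
    return cost[n][m]

# A 3-point trajectory vs. a 2-point subsample of it
d = dtw([(0, 0), (1, 0), (2, 0)], [(0, 0), (2, 0)])
```

Unlike a pointwise distance, DTW aligns sequences of different lengths, which is why it is a common metric for comparing generated trajectories against real ones.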
Rebuttal 1: Rebuttal: **We appreciate the positive comments and our responses are detailed below.** ``` W1: Some parts of this paper are not explained very clearly, which could lead to misunderstandings and ambiguities. i) First, since there are many symbols, a symbol table is recommended for better following. ii) Besides, although the appendix includes detailed descriptions of relevant studies, the absence of citations of this "Section" in the main body may create the impression that the paper lacks a thorough discussion of prior studies. iii) In Table 1, does "Avg.Length" refer to the spatial length of the trajectory or the number of trajectory points? ``` We sincerely appreciate the reviewer’s valuable feedback regarding the clarity of our manuscript. We have addressed each point as follows. (i) We would like to include a notation table below to clearly define all mathematical symbols used throughout the paper. (ii) We will add explicit citations to the appendix’s related work section in the main text. (iii) The “Avg.Length” in Table 1 indeed refers to the average number of trajectory points. | Symbol | Description | | ----------------- | ------------------------------------- | | $\mathcal{T}$ | GPS Trajectory | | $\mathcal{T^g}$ | Grid Constrained Trajectory | | $\mathcal{T^r}$ | Road-network Constrained Trajectory | | $\mathcal{G}$ | Grid Cells | | $G$ | Road Network | | $D^{\mathcal{T}}$ | Road Trajectory Dataset | | $Z_R$ | Road Representation | | $Z_G$ | Grid Representation | | $Z_P$ | Position Representation | | $Z_T$ | Temporal Representation | | $Z_S$ | Spatial Representation | | $h$ | Trajectory Generalized Representation | ``` W2: The experimental section can be optimized. i) For instance, including the percentage increase in experimental tables can indeed enhance the clarity and impact of the results. 
ii) The calculation processes of some evaluation metrics (e.g., PED for trajectory simplification task, Hausdorff/DTW for trajectory generation task, HR@x for trajectory similarity search task) are not provided. ``` Thanks for the suggestion. (i) In addition to the detailed performance values, we are happy to include the percentage increase, i.e., 15%–60% for the trajectory imputation task, 1%–4% for the trajectory classification task, 10%–90% for the travel time estimation (TTE) task, 5%–26% for the trajectory simplification task, 4%–8% for the trajectory similarity computation task, and 37%–81% for the trajectory generation task. (ii) As these evaluation metrics are well-established in the field, we directly cited the related papers that introduced them in our manuscript. The definitions of these metrics can be found in our response to reviewer vfe9. ``` W3: There are also some minor details that need attention. i) "we conduct extensive experiments on two real-world datasets demonstrate"->"we conduct extensive experiments on two real-world datasets to demonstrate". ii) "In this paper, we target perform"->"In this paper, we target performing". iii) "In classification task, there are two labels in beijing dataset"->"In classification task, there are two labels in the Beijing dataset". iv) "Mean Absolut Percentage Error (MAPE)"->"Mean Absolute Percentage Error (MAPE)" ``` We sincerely appreciate the reviewer's careful reading of our manuscript. We will correct all these minor issues in the revised version. We will carefully proofread the entire manuscript to improve its presentation. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concerns have been addressed. --- Reply to Comment 1.1.1: Comment: We appreciate your timely response and are glad our responses addressed your concerns. Thanks again for your careful consideration of our work!
Summary: This paper introduces GTR, a novel general, multi-view, and dynamic trajectory representation framework. GTR addresses the limitations of conventional approaches that rely exclusively on either free-space or road-network perspectives by incorporating a multi-view encoder to effectively capture the intrinsic spatio-temporal characteristics of trajectory data. GTR further enhances its representation capability through a spatio-temporal mixture of experts mechanism, which dynamically integrates spatial and temporal information. Moreover, GTR proposes an innovative online frozen-hot updating strategy that enables efficient model adaptation. Extensive evaluations conducted on two real datasets, encompassing comparisons with 12 baseline methods, demonstrate that GTR achieves superior performance across multiple metrics, significantly outperforming existing approaches in various trajectory analysis tasks. ## Update after rebuttal: I have read the rebuttal and my concerns are well addressed. Good luck! Claims And Evidence: Yes. The authors effectively demonstrate the significance of multi-view, dynamic, and general aspects through comprehensive comparative experiments and a rigorous ablation study. Methods And Evaluation Criteria: Yes. i) To address the generalizability of the framework, this study introduces a novel Spatio-Temporal Mixture of Experts (ST-MOE) module for dynamically learning and integrating spatial and temporal features. ii) To enable online model update, this study proposes an effective online frozen-hot updating strategy. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes, I thoroughly examined the soundness and validity of the experimental designs and analyses. The experimental validation in this paper is indeed rigorous and convincing. The study offers a comprehensive performance comparison of 12 baseline models across two real-world datasets. 
Moreover, ablation studies are systematically conducted to demonstrate the effectiveness of each individual component. In addition, the authors provide details such as efficiency evaluation, model scalability evaluation, etc., which are crucial for demonstrating the practical applicability of the framework in real-world scenarios. Supplementary Material: I have reviewed the appendix section of the paper. This section includes: related work, method description, algorithm pseudocode, and additional experiments. The appendix section contains rich content that can enhance the completeness and rigor of the paper. Relation To Broader Scientific Literature: This paper makes significant contributions that are closely connected to the wider scientific field. The MVE module addresses the limitations of single-view approaches, while the STP module integrates spatial and temporal features to solve the limitation of task-specific. Furthermore, the OFU module addresses the lack of support for model update. This work addresses the challenges of traditional methods. Essential References Not Discussed: No, much of the work related to this paper has been already discussed. Other Strengths And Weaknesses: S1. GTR effectively combines free-space and road-network views, providing robust representations for various trajectory tasks, along with an online frozen-hot update mechanism that adapts to the evolving nature of trajectory data. S2. The paper makes significant experimental contributions, convincingly demonstrating the model's improvements through extensive comparative experiments, comprehensive ablation studies, and efficiency evaluations. S3. The paper provides an in-depth exploration of the existing research in the trajectory representation learning. It offers a comprehensive discussion of the distinctions between the GTR and prior studies. S4. The paper is clearly written and easy to follow, with all necessary preliminaries provided. 
The motivations behind the different components of the model are well articulated. W1. The paper does not specify the value of the balancing parameter β in the combined loss function (LGTR), leaving ambiguity in how the MLM and triplet tasks are weighted during pre-training, which could affect reproducibility and performance interpretation. W2. Some figures (e.g., 1 and 2) and tables (e.g. 5 and 8) in the paper are too small to be clearly legible. Other Comments Or Suggestions: D1. In table 5, the TTE task result MAPE in Porto 0.19186 should be bolded. Questions For Authors: Please answer the following questions, Q1. The paper does not specify the value of the balancing parameter β in the combined loss function (LGTR), leaving ambiguity in how the MLM and triplet tasks are weighted during pre-training, which could affect reproducibility and performance interpretation. Q2. Some figures (e.g., 1 and 2) and tables (e.g. 5 and 8) in the paper are too small to be clearly legible. Ethical Review Concerns: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for providing constructive feedback on our paper, and we greatly appreciate the acknowledgement of our contributions. We have addressed the specific concerns raised by the reviewer as detailed below. ``` W1&Q1: The paper does not specify the value of the balancing parameter β in the combined loss function (LGTR), leaving ambiguity in how the MLM and triplet tasks are weighted during pre-training, which could affect reproducibility and performance interpretation. ``` Thanks for pointing this out. In our pre-training framework, **the balancing parameter $\beta$ in the overall loss function is set to $0.7$**, which was determined through empirical validation on the validation set to optimally weigh the MLM and triplet loss components. This configuration ensures a balanced contribution from both tasks while maximizing downstream performance. ``` W2&Q2: Some figures (e.g., 1 and 2) and tables (e.g. 5 and 8) in the paper are too small to be clearly legible. ``` Thanks for the comment. We will carefully fix these presentation issues in the revised manuscript. ``` Other Comments Or Suggestions: In table 5, the TTE task result MAPE in Porto 0.19186 should be bolded. ``` Thanks for pointing this out. We will fix this marking error in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for authors' detailed rebuttal. All my concerns have been well addressed and I lean to vote for acceptance. --- Reply to Comment 1.1.1: Comment: We are happy that our responses have addressed your concerns. We would like to express our sincerest gratitude once again for taking the time to review our paper!
Summary: This paper proposes a novel framework for trajectory representation learning by integrating free-space trajectories with road network-based trajectories. The framework consists of three key components: 1) a multi-view encoder designed to handle different types of trajectories; 2) a spatial-temporal fusion pretraining mechanism that leverages a mixture of spatial and temporal experts; and 3) an online updating strategy for timely processing of new trajectories. Extensive experiments conducted on datasets from Beijing and Porto, comparing against 12 baseline methods, demonstrate the effectiveness of the proposed approach. Claims And Evidence: Yes Methods And Evaluation Criteria: The key designs in the proposed framework appear to be trivial, offering limited novelty and technical contribution. The concept of the multi-view encoder is a common and straightforward approach that has been widely adopted in numerous spatial-temporal data mining studies. Additionally, the phrasing of the "mixture of experts" with only one spatial expert and one temporal expert seems somewhat forced and lacks clarity. Furthermore, while the online updating strategy aligns with intuition and represents a simple, practical design, it falls short in terms of technical depth and theoretical grounding. Theoretical Claims: NA Experimental Designs Or Analyses: 1. The experiments only evaluate two small GPS trajectory datasets, which restricts the robustness and effectiveness of the proposed methods' performance. 2. While the authors argue that existing works cover only a limited range of trajectory tasks, the tasks mentioned in the paper are also narrow in scope, with notable omissions such as next-location prediction, map matching, and anomaly detection. 3. Most tasks only incorporate baselines based on representation learning, lacking specific SOTA baselines tailored for each task. 
For example, trajectory imputation tasks fail to include AttnMove and MtrajRec, while trajectory generation tasks overlook ControlTraj and others. Supplementary Material: NO Relation To Broader Scientific Literature: This work is closely related to representation learning in the context of spatial-temporal data mining and trajectory modelling. Essential References Not Discussed: 1. Xia, Tong, et al. "Attnmove: History enhanced trajectory recovery via attentional network." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 5. 2021. 2. Ren, Huimin, et al. "Mtrajrec: Map-constrained trajectory recovery via seq2seq multi-task learning." Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021. 3. Zhu, Yuanshao, et al. "Controltraj: Controllable trajectory generation with topology-constrained diffusion model." Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024. Other Strengths And Weaknesses: The paper is well-organized and easy to follow. Other Comments Or Suggestions: NO Questions For Authors: Please refer to the previous section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for all the valuable comments. **Response to E1:** Existing works (START, Trembr, ST2Vec, etc.) mainly use Beijing and Porto datasets, so we adopted them for fair comparison. Following the suggestion, we have added the larger Chengdu dataset (containing 2,140,129 trajectories). Due to space limitations, we specifically test the most computationally intensive trajectory similarity computation task. The results are shown in the anonymous link https://anonymous.4open.science/r/Rebuttal_GTR/ (Table 1: Trajectory Similarity Computation on Chengdu Dataset). As expected, GTR maintains superior performance over baselines on this new dataset, confirming its robustness. **Response to E2:** **We would like to emphasize that GTR extends task coverage compared to existing methods**. As reported in Table 8 (Appendix A, lines 605--614), GTR is the first unified framework supporting all six fundamental trajectory tasks, whereas prior methods (including the state-of-the-art START, LightPath, JGRM) handle at most three tasks. Note that our framework allows easy adaptation to additional tasks (e.g., next-location prediction, anomaly detection) through straightforward training strategy modifications. For anomaly detection specifically, our framework can learn discriminative trajectory representations through the MVE and STP modules. Based on that, we can learn representations of different trajectories and classify them as normal or abnormal. **Response to E3&References:** **We appreciate the reviewer's feedback regarding baseline selection**. We would like to clarify that GTR is fundamentally a trajectory representation learning framework, rather than a task-specific model. Our primary objective is to learn robust trajectory embeddings that can effectively support multiple downstream tasks, which differs from specialized methods designed for individual tasks. This experimental design aligns with existing trajectory representation learning studies. 
Following the suggestion, we have now included comparisons with task-specific methods (AttnMove, MtrajRec, ControlTraj). The results are shown in the anonymous link https://anonymous.4open.science/r/Rebuttal_GTR/ (Table 2: Evaluation on Trajectory Imputation Task; Table 3: Evaluation on Trajectory Generation Task). As observed, GTR still achieves superior performance, further demonstrating its effectiveness. In the revised paper, we will include these results and cite these works. **Response to Methods And Evaluation Criteria:** We appreciate the concerns about the technical contributions and novelty. GTR effectively and innovatively unifies multi-view encoding, multi-task pre-training, and online adaptation, which have been recognized by the other reviewers (vfe9, Th6q, and nfbU). (i) While multi-view learning is not new, our **MVE is the first to integrate Grid, POI, and road-network attributes to jointly model spatial, semantic, and structural features—unlike prior works that focus on only one or two views**. The ablation studies (Tables 4 and 9) prove its effectiveness. (ii) Our ST-MoE is not a trivial extension of standard MoE, as **we introduce task-aware gating to dynamically adjust spatio-temporal feature fusion**. This enables jointly optimizing multiple tasks while preserving inter-task correlations. Additionally, we employ dedicated spatial and temporal experts (rather than multiple experts) to ensure clear structural separation between spatial and temporal feature learning. This design mitigates the common issue of expert polarization, where only a subset of experts remains active. (iii) Our online updating strategy is theoretically grounded in incremental learning [1]. We appreciate the opportunity to clarify the theoretical foundations: the strategy **combines layer-wise parameter freezing with Lyapunov stability analysis to ensure robust adaptation while preventing catastrophic forgetting**. 
Specifically, we freeze the first $L$ layers and update the last $N−L$ layers. The objective optimization function is: $\min\_{\theta^{L+1:N}}\mathbb{E}\_{(x, y)\sim\mathcal{D}\_{\mathrm{new}}}[\ell(f\_{\theta\_{\mathrm{pre}}^{1:L}}(x), y;\theta^{L+1:N})]$. With frozen lower-layer parameters ($\nabla_{\theta^{1:L}}\ell=0$), old features remain unchanged, which guarantees that the old features are not forgotten. The updating process can be modeled as a dynamic system: $\theta_{t+1}^{L+1:N} = \theta_t^{L+1:N} - \eta_t g_t$, with a Lyapunov function: $V(\theta)=\mathcal{L}\_\mathrm{new}(\theta)+\gamma\|\theta^{1:L}-\theta_\mathrm{pre}^{1:L}\|^2$. As $\gamma\to\infty$, the system satisfies: $\mathbb{E}[V(\theta\_{t+1})]\leq\mathbb{E}[V(\theta\_t)]-\eta\_t\|\nabla\mathcal{L}\_{\mathrm{new}}(\theta\_t)\|^2$. The monotonic decrease of $V(\theta)$ ensures stable updates. Freezing lower layers prevents cascading perturbations, balancing new feature learning with old feature retention. [1] *A Comprehensive Survey of Continual Learning: Theory, Method and Application.* 
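For illustration, the frozen-hot idea described above — freeze the first $L$ layers and take gradient steps only on the remaining ones — can be sketched on a toy two-layer linear model. The model, data, and step size below are illustrative assumptions, not the GTR implementation; the sketch only demonstrates that the loss on new data decreases while the frozen lower-layer parameters (and hence the old features) stay exactly unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear model: y_hat = W2 @ (W1 @ x).
# W1 plays the role of the frozen first L layers; W2 the updated last N-L layers.
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((1, 4))
W1_pre = W1.copy()                      # snapshot of the pre-trained lower layer

X = rng.standard_normal((3, 32))        # newly arrived data (features x samples)
Y = rng.standard_normal((1, 32))        # supervision signal on the new data

def new_data_loss(W2):
    return float(np.mean((W2 @ (W1 @ X) - Y) ** 2))

eta = 0.01
loss_before = new_data_loss(W2)
H = W1 @ X                              # frozen features: no gradient ever hits W1
for _ in range(200):
    grad_W2 = 2.0 * ((W2 @ H - Y) @ H.T) / X.shape[1]
    W2 = W2 - eta * grad_W2             # update only the unfrozen parameters
loss_after = new_data_loss(W2)

# The lower layer is preserved exactly, so nothing learned in W1 is forgotten.
assert np.array_equal(W1, W1_pre)
```

Since the objective is a convex quadratic in `W2` and the step size is well below the stability threshold, the loss on the new data decreases monotonically while `W1` never moves — the toy analogue of updating only $\theta^{L+1:N}$.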
--- Reply to Comment 1.1.1: Comment: We sincerely appreciate your prompt response and are delighted that our clarifications have addressed your concerns satisfactorily. In the final version of the manuscript, we will carefully incorporate your valuable comments and suggestions to further enhance the writing, organization, and clarity. Additionally, we will ensure that the source code is thoroughly documented to guarantee reproducibility. Thank you once again for your insightful feedback.
Summary: This paper proposes GTR, a trajectory representation framework built on a pre-train and fine-tune architecture. The proposed GTR consists of a Multi-View Encoder (MVE) and Spatio-Temporal Fusion Mixture of Experts (ST-MoE), supports pre-training, fine-tuning, and Online Frozen-Hot Updating (OFU), and facilitates multiple downstream tasks. Compared with previous work, GTR supports POI embedding, dynamic updating, and more downstream tasks. Experiments demonstrate that the performance of GTR surpasses previous work by a large margin, and the proposed modules are effective in most cases. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The application of the related theory in this paper is correct. Experimental Designs Or Analyses: The experimental designs and analyses are largely thorough and effective. However, some experimental results did not meet expectations and lack further discussion. Supplementary Material: I have reviewed all the content of the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper are related to the broader scientific literature on trajectory representation learning and to multiple application studies on downstream tasks, such as travel time estimation. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. This paper is highly comprehensive and technically solid. The proposed method supports pre-training, fine-tuning, and dynamic updating; it takes multi-view inputs and facilitates multiple downstream tasks. Experiments demonstrate its superior performance. 2. The proposed challenges are well-justified, and the corresponding solutions are innovative, largely effective, and of further practical value. 3. The paper is well-presented, with a clear flow from the challenges to the corresponding solutions and the purposes of the experimental design. Weaknesses: 1. 
Limited details are provided on the online updating approach; please elaborate on it. For example, how does the method process the newly available trajectory data? 2. Some experimental results did not meet expectations, such as those in Table 5 and Table 9. It is necessary to provide further discussions. 3. Multiple downstream tasks and evaluation metrics are presented in the paper, and it is beneficial to provide definitions of the metrics and specify which metrics indicate good performance, for clarity. Other Comments Or Suggestions: Both MLM and Triplet Training are pre-training methods. It is beneficial to list them in parallel in Section 3.2.2 for clarity. Questions For Authors: 1. Please see the weaknesses. 2. Would you provide a comparison of model parameters between the proposed GTR and the baselines? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We thank the reviewer for offering the valuable feedback. We have addressed each of the concerns as outlined below.** ``` W1: Limited details are provided on the online updating approach. ``` We are happy to provide more details about the online updating approach. To process newly arrived trajectory data, we employ the validation set to simulate real-world online/streaming scenarios. The model then undergoes single-epoch incremental updates. ``` W2: Some experimental results did not meet expectations, such as those in Table 5 and Table 9. ``` We appreciate the reviewer's careful examination of our experimental results. Here, we are happy to provide more detailed experimental discussions to alleviate your concerns. Regarding **Table 5** (travel time prediction task), our online method GTR\* shows slightly reduced effectiveness compared with our offline method GTR. This can be attributed to the presence of outliers in short-duration trajectories. Due to the sensitivity of the regression task to short-term anomalies, it tends to overfit newly arriving anomalous real-time data. Nevertheless, our methods (GTR and GTR\*) perform better than the other comparative baselines. Regarding **Table 9**'s ablation results for travel time prediction, the complete model shows slightly higher MAPE due to a small number of long-duration outliers. MAPE is known to be more sensitive to errors in lower-value ranges, which might amplify the effect of these outliers. ``` W3: It is beneficial to provide definitions of the metrics and specify which metrics indicate good performance, for clarity. ``` **These evaluation metrics are standard measures widely adopted in trajectory data processing research.** In the interest of manuscript conciseness, we directly cited the related papers in our manuscript. Here, due to space limitations, we provide rough definitions of the metrics and specify which metrics indicate good performance. 
We acknowledge that providing more detailed computational formulations could enhance reproducibility, and we will include these in the revised manuscript's appendix to ensure full methodological transparency. For the **trajectory simplification task**, $PED$ computes the shortest perpendicular distance between a deleted point $p_i = (x_i, y_i)$ and the line segment connecting its neighboring points $p_s = (x_s, y_s)$ and $p_t = (x_t, y_t)$. A lower $PED$ indicates a lower compression error. For the **trajectory imputation task**, $Recall@x$ evaluates the model's ability to recover masked tokens by checking whether the ground truth appears in the top-x predicted candidates. Specifically, if the true value is contained within the top-x ranked predictions, $Recall@x$ is assigned 1 for that token; otherwise, it is assigned 0. The final metric is computed by averaging these binary outcomes across all masked tokens in the evaluation set. MAP (Mean Average Precision) averages the precision over the ranked predictions, and higher $Recall@x$ and MAP indicate better performance. For the **travel time estimation task**, we use Mean Absolute Error (**MAE**), Mean Absolute Percentage Error (**MAPE**), and Mean Square Error (**MSE**) metrics. Specifically, lower MAE, MAPE, and MSE indicate higher prediction accuracy. For the **trajectory classification task**, we use $ACC$, $F1-Score$, and $AUC$ metrics. Specifically, higher $ACC$, $F1-Score$, and $AUC$ indicate higher classification accuracy. For the **trajectory generation task**, we use Hausdorff and DTW distance metrics. Lower DTW and Hausdorff values indicate higher performance. For the **trajectory similarity computation task**, we use Mean Rank and $HR@k$ metrics. A lower MR value and a higher $HR@k$ value indicate better performance. ``` Suggestions: Both MLM and Triplet Training are pre-training methods. It is beneficial to list them in parallel in Section 3.2.2 for clarity. ``` Thanks for the suggestion. Such layout issues can be easily adjusted, and we will do so in the revised version. 
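As a concrete illustration of two of the metric definitions above, here is a minimal sketch of PED and $Recall@x$. The function names and the clamping of the projection to the segment endpoints are our illustrative choices, not code from the paper:

```python
import numpy as np

def ped(p, p_s, p_t):
    """Shortest distance from a deleted point p to the segment p_s--p_t
    (the per-point error used by PED in trajectory simplification)."""
    p, p_s, p_t = (np.asarray(v, dtype=float) for v in (p, p_s, p_t))
    d = p_t - p_s
    denom = float(d @ d)
    if denom == 0.0:                       # degenerate segment: p_s == p_t
        return float(np.linalg.norm(p - p_s))
    u = np.clip(((p - p_s) @ d) / denom, 0.0, 1.0)   # project and clamp onto the segment
    return float(np.linalg.norm(p - (p_s + u * d)))

def recall_at_x(ranked_candidates, ground_truth, x):
    """1 if the ground truth is among the top-x predictions for a masked
    token, else 0; Recall@x averages this over all masked tokens."""
    return int(ground_truth in ranked_candidates[:x])
```

For example, a deleted point at (1, 1) measured against the segment from (0, 0) to (2, 0) has a PED of 1.0; averaging lower PED values over all deleted points corresponds to a lower compression error.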
``` Questions: Would you provide a comparison of model parameters between the proposed GTR and the baselines? ``` A comparison of model parameters is shown below. While GTR has a higher parameter count due to its multi-view encoder (MVE) and spatio-temporal fusion pre-training (STP) modules, this increase is justified by two key advantages. (i) **Enhanced Capability**. The additional parameters enable GTR to support more downstream tasks effectively. (ii) **Performance Gains**. The trade-off in model size is offset by improvements in accuracy and robustness. | Model Name | Parameter Size (MB) | | ---------- | ------------------ | | PIM| 94.57| | Trembr| 148.08| | Toast| 161.18| | START| 1126.40| | LightPath| 73.96| | JGRM| 375.60| | GTR| 862.99| --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concerns have been well-addressed. --- Reply to Comment 1.1.1: Comment: We are happy that our responses have addressed all your concerns. We thank the reviewer for reviewing our paper and providing us with invaluable comments and suggestions!
Instance-Optimal Pure Exploration for Linear Bandits on Continuous Arms
Accept (poster)
Summary: This paper studies $\epsilon$-BAI for Bayesian linear bandits with Gaussian noise and Gaussian prior on compact and continuous arm sets. The metric of performance that is being minimized is the posterior probability of identifying an $\epsilon$-optimal arm conditioned on the history of the observations. In particular, they consider the instance-dependent asymptotic rate of convergence of this conditional posterior probability. This setting is challenging due to the infinite dimensionality of the space of policies (i.e., continuous distributions on the set of arms) and the non-smoothness of the objective. To alleviate these challenges, the authors re-parametrize the optimization problem with respect to the design matrix being induced by the sampling policy. As a consequence, they need to extract a policy from the optimized design matrix, which is done with projection and reconstruction following along the lines of the approximate Caratheodory problem. This requires solving quadratic and fractional quadratic optimization problems on the arm set. The authors provide an upper and lower bound on the conditional posterior probability of error. Moreover, they numerically compare their algorithm with other benchmarks on synthetic instances. ## Update after rebuttal: Including the discussions will improve the paper in its revised version, hence I raised my score towards weak accept. Claims And Evidence: I. Tractable algorithm. While PCMA aims at solving more tractable optimization problems, the discussion on the tractability of PCMA is not fully convincing based on Sections 5.2 and 5.3. To the best of my understanding, PCMA has both large computational and space costs. In particular, it would be valuable to have a more detailed discussion on the Quadratic Objective. What is the per-call computational and space cost of the quadratic objective used in Algorithm 2? 
More importantly, the authors mention that “some problems can be solved in a polynomial time, and some problems are NP-hard”. Is the quadratic objective solvable in polynomial time for the problems considered in this paper, or is it NP-hard? Even if the computational cost is polynomial, this oracle is being called a superlinear number of times at each iteration, i.e., $O(t^{u})$ with $u > 1$. Therefore, it seems to be a major computational bottleneck. Quadratic Fractional Objective. What is the per-iteration computational and space cost of the quadratic fractional objective (being called at each time step)? What is the convergence rate, and how does it impact the choice of the “break” condition for this optimization procedure? In particular, Appendix G.1 seems to suggest that at most 30 iterations are used, and refers to a quantity $q_n$ not defined in Section 5.3. Algorithm 2. The memory requirement seems also to be superlinear in time. Is it possible to maintain a sparser approximation of the policy? If the discussion is not possible for a general set of arms, then it would be enlightening to illustrate the discussion in some special cases such as (1) the unit sphere or (2) a spherical cap as considered in Section 7. Based on the small scale of the experiments and the lack of numerical computational costs, I am wondering whether PCMA can truly be considered a tractable and practical algorithm. II. Instance-Dependent Optimality. While the authors claim instance-dependent optimality, their actual statement doesn’t reflect what is commonly referred to as “Instance-Dependent Optimality”. Therefore, it would be better to be more precise and discuss the difference with what is usually referred to as asymptotic optimality. Theorem 6.2 is a statement on the conditional posterior probability of misidentification, hence it doesn’t account for the randomness of the observations themselves. 
Theorem 6.2 assumes knowledge of the unknown characteristic time $\tau^\star$ in order to match the optimal rate. This is not a practical assumption, hence the theorem doesn’t hold for practical implementations. III. Recommendation Rule. The authors state their result for a given stream of recommendation rules. However, to the best of my understanding, they have only theoretical guarantees (Section 6.3) and experiments (Section 7) for the greedy recommendation rule. The paper would gain in clarity by being specialized for the greedy recommendation rule. The greedy recommendation rule is by far the least computationally expensive method, even though it is not asymptotically optimal on all instances. Section 4.3 glimpses at a more sophisticated notion of recommendation rule, i.e., the (instantaneous) furthest answer in Jourdan & Degenne (2022). However, this choice is far from being tractable. Even with the approximated algorithms from PCMA, it would require an additional outer optimization loop for a non-convex optimization problem. On top of the intractability of this procedure, it will probably be challenging to show that Equation (7) holds for this choice. IV. Stopping Rule. To the best of my understanding, based on paragraph “Stopping Rule” in Section 3 and Section 4.2, the authors do not provide a clear and convincing discussion as regards the stopping rule. In particular, the authors should explicitly define the stopping rule that they are referring to, compare it to the literature on stopping rules, and explain whether this stopping rule has some theoretical guarantees. In particular, while the authors only control the posterior probability conditioned on $\mathcal F_t$, proving $(\epsilon,\delta)$-PAC also requires controlling the randomness of the observations themselves, i.e., without the conditioning on $\mathcal F_t$. 
Therefore, comparing their upper bound to $\delta$ will not be enough to obtain an $(\epsilon,\delta)$-PAC stopping rule, both theoretically and empirically. The upper bound proposed in Lemma 4.2 does not seem to be truly non-asymptotic. It is based on a discretization of the space (Lemma D.1), and the different terms do not seem to be explicit. Methods And Evaluation Criteria: See the “Experimental Designs Or Analyses” section for details on the empirical evaluation. Theoretical Claims: Lemma 4.1 seems unusual for the literature due to its dependency on $t$ on both sides of the equation. In particular, while both the sampling and the recommendation rules are not explicitly given and only satisfy some general conditions, the upper and lower bounds are matching. This seems too good to be true. Could the authors provide more discussion? Lower bound from (2). While it could be independent of the sampling policy, as in Theorem 2.1 in Li et al. (2024) and Theorem 1 in Russo (2016), the analysis should most likely require more assumptions regarding the recommendation rule at each time. For example, a minimum requirement should be that the recommendation is $\epsilon$-optimal for $\mu_t$, which is satisfied by the greedy recommendation rule. It seems that this property is used in Equation (15) in Appendix F.2. Some statements should also be detailed more in the proof, as they don’t seem to be trivial: (Lines 756-757) “convergence of t in LHS of (18) is uniform [...] we can exchange lim and sup”. Upper bound from (3). It is very surprising that this holds for a general sampling policy without assumptions on its asymptotic behavior. Intuitively, one should at least require convergence of the sampling policy. While this assumption is not made explicit in Lemma 4.1, it seems to be used in Appendix F.2: (Line 715) “we assume $(\pi_t)$ converges,”.
Experimental Designs Or Analyses: The empirical performance metric used in the experiments appears to be unusual and lacks details on its practical implementation. Instead of considering the empirical proportion of error as a function of time, the authors consider the conditional posterior probability of misidentification. Since it is hard to compute, they consider as a proxy the upper bound from Lemma 4.2, yet drop the terms $a_t$ and $b_t$. Due to the optimization over the continuous set of arms, it is not clear how $p_t$ is actually computed numerically. Are the authors using a discretization of the space, e.g., the one from Lemma D.1? Even for a discrete set, it is not clear how each term is numerically computed. Is it based on the Gaussian cumulative distribution function discussed at the end of Section 4.2? While promising, the experimental results provided in Section 7 and Appendix G seem to be preliminary. A more extensive empirical evaluation would be appreciated. In particular, the empirical claims would be strengthened by having:
- More than one hundred rounds for each run, and more than ten runs for each instance.
- Plots of the empirical proportion of error as a function of time.
- Plots of the actual computational cost (in terms of CPU time) of the proposed algorithms, to highlight that they are tractable.
- Bayesian experiments, in order to actually match the Bayesian setting considered in this paper.
- The number of samples needed for the approximated upper bound (from Lemma 4.2) on the conditional posterior probability of error to be lower than $\delta$.
- Ablation studies to better understand the impact of the hyper-parameters of the algorithm. In particular, the experiments use $\lambda_{V} = 0$, yet the theory requires $\lambda_{V}$ to depend on the unknown characteristic time (Theorem 6.2 and Proposition 6.3).

Supplementary Material: I checked Appendices A, B, C, D, E, F.1–F.3, and G in detail. I didn’t check all the details in Appendices F.4–F.7.
Relation To Broader Scientific Literature: The authors make an incorrect statement about the literature: “Jedra & Proutiere (2020) [...] showed that an optimal sampling policy is the round-robin manner using the orthonormal basis.” Jedra & Proutiere (2020) show that round-robin sampling yields an algorithm whose sample complexity is order-wise optimal. Order-wise means that it has the same scaling as the optimal characteristic time; however, the lower and upper bounds do not match asymptotically. Therefore, round-robin is not an optimal sampling policy, only one that is “not too bad”. Essential References Not Discussed: To the best of my knowledge, most relevant literature is discussed. Nevertheless, more details could be added in order to compare the obtained results for continuous arm sets with those known for discrete arm sets. Other Strengths And Weaknesses: I. Lack of clarity. In general, the paper would gain in clarity by considering a specific recommendation rule instead of studying a general formula under some assumptions. This obfuscates the discussion of the main challenges of the continuous arm setting. It would be valuable to clearly state the theoretical challenges in proving Theorem 6.2. Is it a direct extension of the analysis for discrete sets? Moreover, the paper would benefit from more examples to illustrate the bounds and the algorithms, e.g., the special case of the unit sphere. Currently, it is difficult to parse everything in all its generality. II. Confusing notation for the conditional posterior probability of misidentification. To the best of my understanding, the only randomness in the conditional posterior probability of misidentification is with respect to $f_t \sim P_t$, since $\zeta_t$ is given. The dependency on $f_t$ is not made explicit and seems hidden in the notation $\mathcal X^*(\epsilon)$, which should depend on $f_t$ and not on $f$ fixed at initialization.
Other Comments Or Suggestions: Equation (16) in Appendix F.2 lacks a square on $r_t$ in the first term of the right-hand side. Questions For Authors: Several questions have been asked in the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough and insightful review and apologize for any statements that may have caused confusion.

## Tractability of the algorithm

First, we note that our method is tractable in terms of the number of oracle calls. We do not specify algorithms (optimization oracles) for the quadratic and fractional quadratic objectives.

### Computation Oracles

Please refer to the response to Reviewer 4oes (we note that $q\_{n}$ is defined in Line 317 (right)).

### Space complexity

In every iteration of Alg. 1, we approximate our sampling policy using Alg. 2. The choice of $n_t = t^u$ is motivated by the convergence rate of the Frank-Wolfe method (Lemma F.4). Under certain assumptions, the Frank-Wolfe method exhibits a faster convergence rate [Garber & Hazan, 2015], and the cardinality of the support of $\pi_t$ can be significantly reduced. However, this typically requires the set $\mathcal{V}(\mathcal{X})$ to be strongly convex, and we are unaware of any arm sets that satisfy this assumption.

## Instance-Dependent Optimality

We agree that our method is asymptotically optimal. However, as we discussed in the introduction, our analysis is instance-dependent, unlike existing methods in the continuous arm setting (such as MVR). Theorem 6.2 assumes knowledge of the unknown characteristic time; however, as we remarked after Prop. 6.2, we provided a convergence analysis for the case $\lambda\_{V} = 0$ in Prop. D.2.

## Recommendation Rule

We agree that our main result, Thm. 6.2, relies on assumption (7) regarding the recommendation rule, and we provide sufficient conditions for (7) only for the greedy recommendation rule. To improve clarity, we will discuss this in the introduction of the revised version of this paper.

## PAC with respect to the unconditional probability

As clarified in the paper, we consider the $\epsilon$-BAI problem under the Bayesian reward model setting.
Proving that the method is $(\epsilon, \delta)$-PAC with respect to the unconditional probability $P$ is not a primary focus of this study. We believe there are several ways to theoretically validate a pure exploration algorithm. For instance, under a Bayesian setting, Russo (2016) also provided an analysis similar to this study (an asymptotic bound on the posterior error probability), and it is considered an important piece of work in this field. Moreover, the core ideas of PMCA can be applied to existing algorithms (such as LεBAI (Jourdan & Degenne, 2022)), since they do not specify the optimization method for the sampling distribution, and it would be possible to prove that the extended algorithm is PAC.

## Stopping Rule

For any explicitly computable upper bound $u\_t$ of the conditional probability of misidentification and $\delta \in (0, 1)$, we define the stopping rule by the stopping time given as the minimum $t$ satisfying $u\_{t} \le \delta$. Assuming $\mathcal{X} \subset [0, 1]^d$, we can make the statement of Lemma D.1 more explicit. Using the notation of item 2 of Lemma D.1, by the tail probability of the chi-squared distribution, we can take $h\_{t} = t^{-(1 + c)}$, and $|\mathcal{X}\_{h}|$ is given as $(\sqrt{d}h^{-1})^d$. Then, $u\_t(\delta'\_t, h\_t, \mathcal{X}\_{h\_t})$ gives an explicit upper bound. As we discussed in Line 240, $\sup\_{\xi \in \mathcal{X}} P\_t(\cdots)$ can be solved using the fractional quadratic objective.

## Lemma 4.1

Both sides of Lemma 4.1 are defined asymptotically; therefore, they do not depend on $t$. In addition, unlike Russo (2016), both sides depend on the asymptotic behavior of a general recommendation rule $\zeta\_t$ and (the mean of) a general sampling rule $\pi\_{t}$, rather than the optimal ones (our analysis can also be regarded as a generalization of Eq. (1) of Glynn & Juneja (2004)). In Eq.
(15) in Appendix F.2, we use a standard fact about the normal distribution that holds unconditionally, for either sign of $\epsilon + \mu\_t(\zeta\_t) - \mu\_t(\xi\_t)$. Moreover, in the proof (around Line 715), we assume a **sub-sequence** of $(V(\overline{\pi}\_{t}))\_{t\ge 1}$ converges, and we *do not assume* that $V(\overline{\pi}\_{t})$ converges in the statement of Lemma 4.1. We hope this resolves your concern, and if you have further questions, we would be happy to answer them.

## Experiments

We appreciate your suggestions. We refer to the response to Reviewer huXi for the CPU time of each method. We have conducted an ablation study of the parameter $\lambda\_{V}$ for the second ($a, b = 0.2, 0.6$) problem instance in Fig. 2, where MVR is sub-optimal. The table shows the mean (standard deviation) of our evaluation metric. We found that while the original choice ($\lambda\_{V} = 0$) outperforms the baselines, the optimal choice of $\lambda\_{V}$ is around $10^{-1}$.

| $\lambda\_{V}$ | 0.0 | 1e-2 | 1e-1 |
| -- | -- | -- | -- |
| $\log(p\_t)$ | -63.6 (1.4) | -64.1 (3.7) | -69.4 (2.3) |

**References**
- Garber & Hazan, Faster rates for the Frank-Wolfe method over strongly-convex sets, ICML 2015

---

Rebuttal Comment 1.1: Comment: I thank the authors for their thorough and detailed answers, as well as the additional experiments. For the time being, I am inclined to increase my score to weak accept. **Instance-Dependent Optimality**. Given that Theorem 6.2 is obtained by using Proposition 6.3, could the authors discuss the challenges encountered when using Proposition D.2 to generalize Thm. 6.2 when $\lambda_{V} = 0$? Do the authors believe that a finer analysis could show the convergence in Proposition D.2 for all sequences, i.e., replacing $\liminf$ by $\lim$, or is it truly a limitation of taking $\lambda_{V} = 0$? **Miscellaneous**. Could the authors explicitly state how $p_t$ is computed in their experiments?
---

Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's further feedback.

## Analysis for Proving Theorem 6.2 Under the Condition $\lambda\_{V} = 0$

Thank you for bringing this problem up. Based on your suggestion, we reconsidered this problem, and we believe a simple modification of our analysis can prove Theorem 6.2 even if $\lambda\_{V} = 0$. If we can confirm this rigorously, we will update our manuscript in the revision. Specifically, the revised statement of Proposition 6.3 would be as follows: **Proposition 6.3 (revised version)** Let $(\pi\_t)_{t}$ be the sampling rule of PMCA (with $\lambda\_{V} = 0$); then we have $\lim\_{t \rightarrow \infty}\Gamma^{\ast}(V(\overline{\pi}\_{t}); \zeta\_{\infty}, f) = \tau^{\ast}(f; \zeta\_{\infty})$. We introduced the regularizer $\lambda\_{V}$ to ensure that the sequence $(V(\pi\_{t}))\_{t}$ converges. However, as we discussed, this condition is not necessary. In the proof of Theorem 6.2, the condition $\lambda\_{V} > 0$ is not needed until Line 1159. We can prove the above proposition by the inequality at Line 1157. Then, Lemma 4.1 implies the main result (Theorem 6.2).

## Proposition D.2

> Do the authors believe that a finer analysis could show the convergence in Proposition D.2 for all sequences.

No, we believe that the condition on the step sizes is too general to derive such a result. We provide more details in the following, and we explain the difficulty in deriving Theorem 6.2 from Proposition D.2. In Proposition D.2, we consider a general condition on the step sizes (i.e., $\lim\_{t\rightarrow \infty}\eta\_t = 0$ and $\sum\_{t=1}^{\infty}\eta\_t = \infty$, as in [Ruszczynski, A. P., 2006, Theorem 7.2]). This leads to a weak statement on the convergence result; that is, we have $\liminf\_{t\rightarrow \infty} \Gamma^{\ast}(V(\pi\_t); \zeta\_\infty, f) = \tau^{\ast}(f; \zeta\_\infty)$. We refer to this equality as Eq. (a).
To prove our main result (Theorem 6.2), we need at least the following condition: $\liminf\_{t\rightarrow \infty} \Gamma^{\ast}(V(\overline{\pi}\_t); \zeta\_\infty, f) = \tau^{\ast}(f; \zeta\_\infty)$. We refer to this equality as Eq. (b). Equations (a) and (b) are slightly different because of $V(\pi\_t)$ and $V(\overline{\pi}\_t)$, where we recall that $\overline{\pi}\_{t} = \frac{1}{t}\sum\_{s=1}^{t}\pi\_{s}$. If $(V(\pi\_t))\_t$ is a convergent sequence, Eq. (a) implies Eq. (b), but in general, we believe the conclusion of Proposition D.2 is too weak to prove Theorem 6.2. The value of Proposition D.2, apart from the condition on $\lambda_{V}$, is that we have a convergence result under a general condition on the step sizes. We will clarify this in the appendix of the revised paper.

## Miscellaneous

To compute $p\_t$ in the experiments, we used the formula introduced at Line 234, which follows from Eq. (15) and the monotonicity of the cumulative distribution function $\Phi$. Assume that $\epsilon + \mu\_t(\zeta\_t) - \mu\_t(\xi) \ge 0$ holds for any $\xi \in \mathcal{X}$ (the greedy recommendation rule satisfies this condition). Then, by $\inf\_{\xi \in \mathcal{X}} \frac{\epsilon + \mu\_t(\zeta\_t) - \mu\_t(\xi)}{|\zeta\_t - \xi|\_{\Sigma\_{t}^{-1}}} = (\sup\_{\xi \in \mathcal{X}} \frac{|\zeta\_t - \xi|\_{\Sigma\_{t}^{-1}}^2}{(\epsilon + \mu\_t(\zeta\_t) - \mu\_t(\xi))^2})^{-1/2}$, we can compute $p\_t$ by calling the oracle for the quadratic fractional objective. We will clarify this in the revision. Thank you again for your valuable feedback to improve our manuscript.
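As a concrete illustration of this kind of computation, the following is a minimal sketch (not the authors' implementation) of the proxy $p\_t = \Phi(-z\_t)$ on a discretized arm set, assuming a Gaussian posterior over the linear parameter and the greedy recommendation; all names (`posterior_error_proxy`, the toy arms, the covariance) are hypothetical:

```python
import numpy as np
from math import erf, sqrt

def gauss_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def posterior_error_proxy(arms, theta_mean, theta_cov, eps):
    """Proxy p_t = Phi(-z_t), where
    z_t = min_{xi != zeta} (eps + mu(zeta) - mu(xi)) / std(f(zeta) - f(xi)),
    for a Gaussian posterior theta ~ N(theta_mean, theta_cov), linear means
    mu(x) = <theta_mean, x>, and the greedy recommendation zeta."""
    mu = arms @ theta_mean
    zeta = int(np.argmax(mu))          # greedy recommendation rule
    z = np.inf
    for i in range(len(arms)):
        if i == zeta:
            continue
        d = arms[zeta] - arms[i]
        std = sqrt(d @ theta_cov @ d)  # posterior std of f(zeta) - f(xi)
        if std > 0.0:
            z = min(z, (eps + mu[zeta] - mu[i]) / std)
    return gauss_cdf(-z)

arms = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
p_small = posterior_error_proxy(arms, np.array([1.0, 0.0]), 0.25 * np.eye(2), eps=0.1)
p_large = posterior_error_proxy(arms, np.array([1.0, 0.0]), 0.25 * np.eye(2), eps=1.0)
# A larger slack eps can only decrease the misidentification proxy.
```

On a continuous arm set, the brute-force minimum over the grid would be replaced by the quadratic fractional oracle described above.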
Summary: This paper investigates a pure exploration problem with linear bandit feedback on continuous arm sets, aiming to identify an $\varepsilon$-optimal arm with high probability. Previous approaches for continuous arm sets have employed instance-independent methods, due to technical challenges such as the infinite dimensionality of the space of probability measures and the non-smoothness of the objective function. This paper designs a novel and tractable algorithm that addresses these challenges by using a reparametrization of the sampling distribution and projected subgradient descent. However, this approach introduces new challenges related to the projection and the reconstruction of the distribution from the reparametrization. This paper addresses these by focusing on the connection to the approximate Caratheodory problem. Compared to the original optimization problem on the infinite-dimensional space, the proposed method is tractable, requiring only the solution of quadratic and fractional quadratic problems on the arm set. This paper also provides empirical results. Claims And Evidence: The claims made in this paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. Theoretical Claims: The theoretical results look reasonable, but I didn’t go through every proof. Experimental Designs Or Analyses: The experiments look reasonable. Supplementary Material: I didn’t read the supplementary material. Relation To Broader Scientific Literature: This paper is relevant to the literature. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. This paper proposes a tractable algorithm, PMCA (Posterior error Minimization for a general Continuous Arm set), based on the projected subgradient descent (PSGD) method. 2. This paper provides upper and lower bounds to show that the proposed algorithm is asymptotically optimal. 3.
The theoretical contribution of this paper is solid and interesting to the linear bandit community. 4. Empirical results are provided to demonstrate the effectiveness of the proposed algorithm. Weaknesses: 1. The readability of this paper should be improved. This paper is dense and hard to follow. 2. This paper provides asymptotically optimal results. It seems that the practical significance of asymptotic optimality is not strong, because the number of times we sample an option is finite, and in real-world applications we care more about the optimality of the final output option. Can the asymptotic results in this paper provide any insight into finite-time results/optimality? 3. More discussion on the challenges of generalizing from the finite arm set to the continuous arm set is needed. 4. Can you give more concrete examples of the computational oracles used in the proposed algorithm? For example, for which subproblems do such computational oracles exist, and what are the corresponding concrete computational oracles (algorithms)? Other Comments Or Suggestions: Please see the weaknesses above. Questions For Authors: Please see the weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough and insightful review. We will revise our manuscript based on your suggestions.

## Readability

We appreciate the reviewer's suggestion. In a revision, we will improve the readability of the paper (e.g., by making the detailed statements in Prop. 6.5 concise and referring to the Appendix for a more precise statement).

## Asymptotic optimality

Lemma 4.2 provides a non-asymptotic upper bound on the posterior probability of misidentification that asymptotically matches the actual conditional probability. However, a more refined analysis guaranteeing non-asymptotic optimality remains an important avenue for future research. We will discuss this limitation in the Conclusion section of the revised paper.

## More discussion on the challenges of generalization from the finite-armed setting

While it would be difficult to prove that generalizing from the finite-armed setting is impossible for any existing algorithm, the non-smooth optimization objective over the space of probability measures is a fundamental problem that any asymptotically optimal algorithm must address. To the best of our knowledge, even in the finite-armed setting, the efficient solution of this problem has not been thoroughly discussed. We can also view our contribution as an efficient algorithm for the finite-armed setting, particularly when dealing with a large number of arms.

## Computational Oracles

### Quadratic Objective

For example, if the arm set $\mathcal{X}$ is defined by an interval constraint $l \le q(x) \le u$ with a quadratic function $q$ (this includes the case of the unit sphere), the quadratic objective can be solved in polynomial time by Lagrangian relaxation (Park & Boyd, 2017). The relaxed problem is a semidefinite program (a convex problem).
If we use an interior point method (barrier method), then to obtain an $\epsilon'$-optimal solution, the outer loop needs $O(\log(1/\epsilon'))$ iterations, and the inner loop converges linearly. The per-iteration computational complexity is $O(d^4)$ with $O(d^2)$ space complexity (Boyd & Vandenberghe, 2004, Chapter 11.8.3). In general, the problem can be NP-hard. We note that MVR (Vakili et al., 2021) has the same problem; similar to the MVR implementation (PosteriorStandardDeviation) in the BoTorch library, we use a non-linear solver (such as L-BFGS-B) with a randomly selected initial point in our experiments.

### Quadratic Fractional Objective

Let $A(x)/B(x)$ be the objective we want to maximize over $x \in \mathcal{X}$. As we briefly discussed in Sec. 5.2, if we use Dinkelbach's algorithm, we call the quadratic objective oracle once in each iteration of Dinkelbach's algorithm. More precisely, if we define $q^{(n + 1)} = A(x^{(n)})/B(x^{(n)})$ and $x^{(n)} = argmax\_{x}\\, A(x) - q^{(n)}B(x)$, then $q^{(n)}$ converges to the optimal value $q^{\ast}$, and its convergence rate is superlinear (the error goes to zero faster than any geometric sequence) (Schaible, 1976). One can then obtain a solution of the original problem as $argmax\_{x}\\, A(x) - q^{\ast}B(x)$. Due to the superlinear convergence rate, we use a limited number of iterations (i.e., 30) in our experiments.

**References**
- Boyd & Vandenberghe, Convex Optimization, Cambridge University Press, 2004
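To make the Dinkelbach iteration above concrete, here is a minimal sketch on a toy one-dimensional instance; the grid-search inner oracle is a hypothetical stand-in for the quadratic objective oracle, and the cap of 30 iterations mirrors the limit mentioned above:

```python
import numpy as np

def dinkelbach(A, B, inner_oracle, q0=0.0, tol=1e-10, max_iter=30):
    """Dinkelbach's method for max_x A(x)/B(x) with B > 0 on the arm set.
    inner_oracle(q) returns argmax_x A(x) - q * B(x); the ratio at the
    iterates increases monotonically and converges superlinearly."""
    q = q0
    for _ in range(max_iter):
        x = inner_oracle(q)
        q_next = A(x) / B(x)
        if abs(q_next - q) < tol:
            break
        q = q_next
    return x, q_next

# Toy instance on a discretized interval (hypothetical arm set):
grid = np.linspace(-1.0, 1.0, 2001)
A = lambda x: 1.0 + x - x**2          # quadratic numerator
B = lambda x: 2.0 + x**2              # positive quadratic denominator
oracle = lambda q: grid[np.argmax(A(grid) - q * B(grid))]
x_star, q_star = dinkelbach(A, B, oracle)
```

On this instance the exact maximizer of the ratio is $x^{\ast} = -3 + \sqrt{11} \approx 0.3166$ with value $\approx 0.5792$, which the iteration reaches (up to grid resolution) in a handful of oracle calls.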
Summary: This paper investigates the problem of best arm identification (BAI) for Bayesian linear bandits, where the action set is assumed to be continuous. While existing investigations, e.g., [Jedra et al.], establish optimal algorithms when the set of arms is finite, BAI for linear bandits with infinite actions is hitherto unexplored. This paper takes a step towards solving this problem, specifically making the following key contributions:
- It introduces an instance-dependent measure (which I will call the problem complexity in the rest of the review), a counterpart of the one in the finite-armed setting, and shows that the asymptotic posterior probability of error for any algorithm decays exponentially at a rate proportional to the problem complexity, and no larger (converse result).
- It devises an algorithm which achieves this asymptotic error rate.

There are some novel observations, which I would also like to highlight.
- The paper introduces a reparameterization to convert the infinite-dimensional problem (in the probability space) into an equivalent, and hence finite-dimensional, problem in the matrix space.
- It introduces a projected gradient descent-based algorithm for BAI (which has been used in prior works), and the projection step is tackled using a novel connection with the approximate Caratheodory theorem.

Overall, the paper takes a step towards solving BAI in the linear bandit setting with continuous arm sets. Claims And Evidence: Yes, I think that the claims and evidence are coherent and sufficient. Methods And Evaluation Criteria: Yes, the experiment settings make sense. Theoretical Claims: I did not have time to check the correctness of the proofs. I would be happy to take a look at any specific part, if any issue is raised. Experimental Designs Or Analyses: This is a theoretical paper; having said that, the experimental settings suffice to bolster the theoretical claims. Supplementary Material: I did not review the supplemental.
Relation To Broader Scientific Literature:
- BAI is an important problem in the bandit literature, with various practical applications including A/B testing, clinical trials, and recommender systems. Investigating the continuous-armed setting enhances our theoretical understanding of BAI.
- Prior works establish optimal (fixed-confidence and fixed-budget) algorithms for linear bandits. Examples include [Jedra et al.] and [Vakili et al.]. Most of these investigations consider a finite set of arms.
- This paper positions itself as the one which extends BAI to continuous arm sets, which has not been sufficiently investigated.

Essential References Not Discussed: N/A Other Strengths And Weaknesses: I have already listed the strengths of this paper. A weakness of the paper is that the writing can be improved. Here are instances of typographical/grammatical inconsistencies:
- **Line 95:** It should be $f(\zeta)$
- **Lines 161-162:** that estimates $\zeta_t$ an $\epsilon$-optimal arm ...
- **Lines 220-222:** the term $b_t$ is given as (an upper bound) the ...
- **Lines 272-274:** similarly should be Similarly

Other Comments Or Suggestions: Answered above. Questions For Authors: I have the following questions for the authors:
- The authors mention that "..the posterior probability $P_t(\zeta_t\in\mathcal{X}^*(\epsilon))$ of misidentification is known to the learner...". How does the learner know $\mathcal{X}^*(\epsilon)$?
- What is the computational complexity of the reparameterization $\pi_t \mapsto \mathbf{V}_t$, since it involves an expectation operator?
- It seems that the expression of the sub-gradient in (6) is its gradient. How do the authors deal with the non-differentiability at points due to the inner supremum, say, in (4)?

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough and insightful review. We will revise our manuscript based on your suggestions.

## Posterior probability of misidentification

As we briefly discussed in Line 133 (right), the posterior probability $P\_t(\zeta\_t \not \in \mathcal{X}^{\ast}(\epsilon))$ is known to the learner, i.e., it is an $\mathcal{F}\_{t}$-measurable variable because it is defined using the conditional probability $P\_t$. More concretely, we can rewrite $P\_t(\zeta\_t \not \in \mathcal{X}^{\ast}(\epsilon)) = P\_t(f(\zeta\_t) \le \sup_{x}f(x) - \epsilon)$. We note that, conditioned on $\mathcal{F}\_t$, the reward function $f$ is identified with $f\_{t}$. In addition, the distribution of $f\_{t}$ is known due to the Bayesian setting, and we provide it explicitly in Line 152 (left). Therefore, we can essentially compute the posterior probability of misidentification. We hope this resolves your concern. We will clarify the statements in Line 133 in the revision of this paper.

## Computational Complexity

Since in each round we add $n\_{t}$ points to the support of the sampling distribution, the computational complexity of the reparametrization is $O(d^2 n_t)$, which is polynomial in $t$. We appreciate your suggestion and will add a discussion of this in Section 5.3.

## Subgradient

$g\_t$ defined in Eq. (6) is a subgradient of the function defined in Eq. (4). We use the known fact that if a function $F$ is defined as the supremum of convex functions $(F\_{i})\_{i \in I}$, then a subgradient of $F$ is given by the gradient of $F\_{i^{\ast}}$, where $i^\ast$ attains the supremum (Hiriart-Urruty & Lemarechal, 2004, Chapter D, Lemma 4.4.1). We appreciate the reviewer's suggestion and will briefly explain this in the main paper.
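The fact cited above can be illustrated with a small numerical sketch (hypothetical functions, not the paper's objective): a subgradient of $F = \max_i F_i$ is obtained by evaluating all $F_i$, picking a maximizer, and returning its gradient; plugging this into a subgradient descent loop with diminishing steps then minimizes the non-smooth maximum.

```python
import numpy as np

def sup_subgradient(x, funcs, grads):
    """Subgradient of F(x) = max_i F_i(x): return the gradient of an
    active (maximizing) function F_{i*} at x."""
    i_star = int(np.argmax([f(x) for f in funcs]))
    return grads[i_star](x)

# Two convex quadratics; their pointwise max is non-smooth at x = 0,
# where the minimum F(0) = 1 is attained.
funcs = [lambda x: (x - 1.0) ** 2, lambda x: (x + 1.0) ** 2]
grads = [lambda x: 2.0 * (x - 1.0), lambda x: 2.0 * (x + 1.0)]

g = sup_subgradient(0.5, funcs, grads)   # (x + 1)^2 is active at 0.5, so g = 3.0

# Subgradient descent with diminishing steps eta_k = 0.5 / (k + 1):
x = 2.0
for k in range(2000):
    x -= 0.5 / (k + 1) * sup_subgradient(x, funcs, grads)
# x is now close to the non-smooth minimizer 0.
```

The same pattern underlies PSGD-type methods: at each step, one maximizer of the inner supremum is computed (by an oracle) and its gradient is used as the subgradient.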
Summary: This work studies the problem of pure exploration for linear bandits, particularly with continuous arms. The paper begins by establishing a lower bound on the asymptotic posterior probability of misidentification, and then proposes a tractable algorithm, called PMCA, for minimizing the error probability. The authors provide theoretical analysis showing that the suggested algorithm finds the optimal sampling rule that achieves asymptotic optimality for any given recommendation rule. The numerical experiments show that the algorithm outperforms MVR and the uniform sampling strategy, confirming the theoretical findings. Claims And Evidence: I agree that the suggested algorithm PMCA significantly reduces the computational burden required to optimize the sampling distribution. However, the term “instance-dependent optimality” seems considerably misleading. Algorithm 1 and Theorem 6.2 treat the recommendation rule $(\zeta_t)_t$ as something given exogenously. Theorem 6.2 states that the sampling rule of PMCA achieves asymptotic optimality for any given recommendation rule satisfying some regularity condition, without arguing the optimality of the recommendation rule. I believe that, conventionally, a pure exploration algorithm corresponds to a combination of a sampling rule $\pi$ and a recommendation rule $\xi$. It does not make sense to me that an algorithm can be said to be instance-optimal without having the recommendation rule specified. I guess that the authors are aware that it is technically challenging to guarantee that both the sampling rule and the recommendation rule safely converge to their optimal combination, say $(\pi^*, \zeta^*)$. In this work, some assumptions are made for one component in order to guarantee convergence of the other component (e.g., Assumption 6.1 for Theorem 6.2 vs. $\lambda_{min}(V_t) \geq t^{-\alpha}$ for Proposition 6.4). It was not discussed whether the assumptions can be satisfied simultaneously.
Jedra & Proutiere (2020) introduced the notion of forced exploration in order to break this mutual dependence. I believe the authors should explicitly discuss this gap. Methods And Evaluation Criteria: The proposed algorithm PMCA looks completely sensible. Theoretical Claims: I carefully read the statements provided in the main body, but not the proofs in the appendix. I am inclined to believe that the proofs are rigorous. Experimental Designs Or Analyses:
- I believe the authors should report the computational time of the algorithms, because it is one of the main contributions of this paper.
- I hope to see the performance/speed of the algorithm with discretization.
- In order to highlight the practical importance of PMCA, I recommend that the authors find/test an application with many, densely distributed arms. Possibly, LLMs can help in this direction (e.g., an arm corresponds to a text embedding).

Supplementary Material: No supplementary materials to review. Relation To Broader Scientific Literature: I am not sure how the key contributions can extend beyond the linear bandit literature. Essential References Not Discussed: None. Other Strengths And Weaknesses: Although I am skeptical about the claim of “near optimality”, I do appreciate the ideas behind PMCA -- the use of the PSGD method and the reparameterization trick. As suggested in Experimental Designs Or Analyses, I believe that this algorithm has great potential for situations with very many arms. Other Comments Or Suggestions:
- In Proposition 6.3, the term $\zeta^*$ was used without any definition.
- Please see Experimental Designs Or Analyses for my suggestions on experiments.

Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough and insightful review. We will revise our manuscript based on your suggestions.

## Instance-dependent optimality and recommendation rule

We agree that while an asymptotically optimal algorithm needs both optimal sampling and recommendation rules, we mainly focus on the optimization problem regarding the sampling rule, which we believe is the most challenging aspect of the continuous arm setting. More precisely, our main result, Theorem 6.2, relies on assumption (7) on the recommendation rule, and we show that the greedy recommendation rule satisfies (7) under some assumptions. To improve clarity, we will explain this in more detail in the introduction (or conclusion section) of the revised version of this paper.

## Assumption 6.1 and the assumption for Proposition 6.4

Our method satisfies the condition on $\lambda_{min}(V_t)$ for Proposition 6.4 because we consider the mixture distribution $(1-t^{-\alpha})\tilde{\pi}\_{t} + t^{-\alpha}\pi\_{exp}$ at Line 15 in Algorithm 1. Thus, we can focus on Assumption 6.1, and we believe that we have already discussed the validity of this assumption in Sections 6.1 and 6.3. We will clarify this in the revision and appreciate the reviewer's suggestion.

## Experiments

We sincerely appreciate the reviewer's suggestions on the experiments. In the following table, we show the CPU time of each algorithm for the leftmost instance in Figure 2. Regarding a method that discretizes the arm set, the efficiency of such a method can be arbitrarily bad due to the discretization; however, we would need different (higher-dimensional) problem instances to demonstrate the inefficiency. In the following table, each number represents the time in seconds for running one experiment and shows the mean (std) over 10 repetitions.
(w/ eval_time) means the reported number includes the CPU time for computing $p_t$ (the evaluation metric), and (w/o eval_time) means it excludes the CPU time for $p_t$. The table shows that our method took about 1.5 seconds while MVR took about 0.2 seconds. This is natural since the instance-independent baselines (Uniform and MVR) do not solve the optimization objective over the space of probability measures. The experimental results show our method runs in a reasonable time. | Uniform (w/ eval_time) | MVR (w/ eval_time) | Ours (w/ eval_time) | Uniform (w/o eval_time) | MVR (w/o eval_time) | Ours (w/o eval_time) | | -- | -- | -- | -- | -- | -- | |1.2e+00 (1.9e-02) | 1.3e+00 (2.0e-02) | 2.6e+00 (4.3e-02) | 8.6e-03 (6.7e-05) | 1.9e-01 (6.4e-04) | 1.5e+00 (2.3e-02)| ## Practical applications As you mentioned, recent papers consider applications of bandits to LLMs [Li, Y., 2025; Nguyen, Quang H., et al., 2024; Jinnai & Ariu, 2024]. For instance, Jinnai & Ariu applied Correlated Sequential Halving to text generation tasks (such as machine translation, text summarization, and image captioning), where the objective is to efficiently find a text sequence with the best utility. By regarding an embedded sequence as an arm, we can formulate it as a BAI problem with a linear reward model. Since the set of possible sequences is exponentially large, our algorithm has a potential application to this problem. More generally, we can extend any linear bandit algorithm to a Bayesian optimization algorithm via random Fourier features (or quadrature Fourier features) [Mutný, M., & Krause, A., 2019]. The continuous-arm setting naturally arises in Bayesian optimization and has numerous applications, including material design and hyperparameter optimization. As we discussed in the conclusion section, an important direction for future work is to directly extend our result to the Bayesian optimization setting (without such a reduction).
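The forced-exploration mixture used at Line 15 of Algorithm 1, $(1-t^{-\alpha})\tilde{\pi}_t + t^{-\alpha}\pi_{exp}$, can be illustrated with a minimal sketch. This is an assumption-laden toy: it uses a small finite support (the paper works with measures over a continuous arm set), and all names (`sample_arm`, `pi_tilde`, `pi_exp`) are illustrative, not taken from the paper's code.

```python
import numpy as np

def sample_arm(t, alpha, support, pi_tilde, pi_exp, rng):
    """Sample an arm from the mixture (1 - t^-alpha)*pi_tilde + t^-alpha*pi_exp.

    `support` is an (n, d) array of candidate arms; `pi_tilde` and `pi_exp`
    are probability vectors over its rows. The t^-alpha exploration component
    decays over time but keeps lambda_min(V_t) growing, as used for
    Proposition 6.4. Illustrative sketch only.
    """
    eps = t ** (-alpha)                     # forced-exploration weight
    mix = (1.0 - eps) * pi_tilde + eps * pi_exp
    idx = rng.choice(len(support), p=mix)
    return support[idx]

rng = np.random.default_rng(0)
support = np.eye(3)                         # three unit-vector arms in R^3
pi_tilde = np.array([0.9, 0.05, 0.05])      # current optimized allocation
pi_exp = np.full(3, 1.0 / 3.0)              # exploration distribution (e.g., uniform)
arm = sample_arm(t=100, alpha=0.5, support=support,
                 pi_tilde=pi_tilde, pi_exp=pi_exp, rng=rng)
```

The mixture weight `eps` shrinks polynomially in `t`, so exploration never stops entirely but becomes negligible relative to the optimized allocation.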
**References** - Jinnai, Y., & Ariu, K. (2024). Hyperparameter-Free Approach for Faster Minimum Bayes Risk Decoding. In ACL (Findings). - Li, Y. (2025). LLM Bandit: Cost-Efficient LLM Generation via Preference-Conditioned Dynamic Routing. arXiv preprint arXiv:2502.02743. - Mutný, M., & Krause, A. (2019). Efficient High Dimensional Bayesian Optimization with Additivity and Quadrature Fourier Features. Advances in Neural Information Processing Systems 31, 9005-9016. - Nguyen, Q. H., et al. (2024). MetaLLM: A High-performant and Cost-efficient Dynamic Framework for Wrapping LLMs. arXiv preprint arXiv:2407.10834. ## Response to other comments > In Proposition 6.3, the term $\zeta^{\ast}$ was used without any definition. We thank the reviewer for pointing this out. In Proposition 6.3, $\zeta^{\ast}$ is a typo and should be $\zeta_{\infty}$. We will correct this in the revision.
Reflection-Bench: Evaluating Epistemic Agency in Large Language Models
Accept (poster)
Summary: This paper proposes a cognition-inspired benchmark, Reflection-Bench, to evaluate agency in LLMs. By decomposing the cognitive procedures an agent would require, the paper lists seven important cognitive functions, including prediction, memory, belief updating, meta-reflection, and so on. The study selects a representative cognitive task from the fields of psychology and cognitive science for each of the domains, and tests multiple LLMs as well as several prompting strategies, such as CoT. The benchmark reveals that larger models tend to perform much better than smaller ones. Claims And Evidence: The paper has several claims regarding the benchmark. First, the paper presents the benchmark as a benchmark of agency. The evidence is that they decompose what an agent needs to do cognitively to interact with the environment, and test the cognitive functions with relevant but separate tasks. However, this kind of evaluation can hardly be called an evaluation of 'agency', since there is no agent in the benchmark at all. Though there is no complete consensus on the definition of an agent, an agent typically should be able to use tools, learn from and interact with the environment, and be goal-directed and plan toward its goals, in addition to the dimensions already proposed by the paper. However, when evaluating an agent, we should place it in a real-world-like environment to measure its capacities integratively, not separately. It is hard to say that a model performing well on the current agency benchmark can be called a good agent, since engaging in these tasks separately does not require agency at all. Therefore, although agency can be decomposed into many cognitive properties, it is an integrative concept. Evaluating agency through separate cognitive domains does not make sense. In this sense, I cannot agree that this benchmark is really evaluating agency.
Second, more specifically, the benchmark selects seven representative cognitive tasks, corresponding to seven domains. However, when evaluating specific behaviors, only raw performance and some naive behavioral patterns are analyzed. The authors should go into more detail about model behavior by applying computational models to describe and interpret it. In cognitive science and psychology, many computational models have been proposed. Fitting parameters with these models, and even performing model selection, could provide deeper insight into LLMs' behaviors. Lastly, for each task, I do not know how many repeated experiments were conducted or how the LLM configurations, such as temperature, top-k, and top-p, were set. The performance comparison also lacks statistical analysis showing meaningful differences. This also points to the importance of going beyond purely behavioral performance, since some tasks, like the probabilistic reversal learning task, contain noise. Without a detailed look at behavioral patterns and statistical analysis, it is hard to say these models really differ in performance. Methods And Evaluation Criteria: Since it is primarily a benchmark paper, the evaluation and methods have been extensively discussed above. Theoretical Claims: The paper does not involve theoretical proofs, except for the conceptual decomposition of agency. Though these cognitive dimensions make sense, the authors overlook that these dimensions should be integrated, not separated. Experimental Designs Or Analyses: The tasks are mainly from the literature in cognitive science and psychology, and are representative and mature. As I mentioned above, to better evaluate agency, the paper could be improved by testing on an integrative task, like a game (Allen et al., 2024), to measure LLMs' behaviors and cognitive functions integratively.
Some other approaches to deepen the design and analysis would be fine-tuning models (like Centaur; Binz et al., 2024) on these tasks and testing them on alternative tasks that also correspond to these concepts. This would not only help build an understanding of how fine-tuned models can be used to improve agency, but also help evaluate how generalizable the selected tasks are. Or the authors could go deeper into computational modeling analysis of behaviors, as well as look into how the models' internal representations support their specific behavioral patterns and strategies (with interpretability tools like SAEs). References: Allen, K., Brändle, F., Botvinick, M., Fan, J. E., Gershman, S. J., Gopnik, A., ... & Schulz, E. (2024). Using games to understand the mind. Nature Human Behaviour, 8(6), 1035-1043. Binz, M., Akata, E., Bethge, M., Brändle, F., Callaway, F., Coda-Forno, J., ... & Schulz, E. (2024). Centaur: a foundation model of human cognition. arXiv preprint arXiv:2410.20268. Demircan, C., Saanum, T., Jagadish, A. K., Binz, M., & Schulz, E. (2024). Sparse autoencoders reveal temporal difference learning in large language models. arXiv preprint arXiv:2410.01280. Supplementary Material: I have read all parts of the supplementary materials, which mainly contain specific information about the experiment prompts, model results, and automated evaluation scoring results. Relation To Broader Scientific Literature: This benchmark proposes seven aspects of agency in cognitive domains, which are used to test extensive existing LLMs. These can improve our understanding of the models' performance on these tasks. However, due to the lack of integration of these aspects within a single task, as well as the limited depth of the behavioral analysis at the computational and neural (representational) levels, the impact of the work is limited.
Essential References Not Discussed: The paper has listed and discussed key literature at the intersection of cognitive science and AI, especially important milestones and benchmarks in these fields. However, for some specific tasks, the citations may not be accurate enough. For example, when citing references for the tasks, the authors should be aware of the origin of each task, not just cite a relevant paper. I have not gone through all the citations, but the references below are definitely not accurate enough: For example, for the Iowa Gambling Task, they should cite this: Bechara, A., Damasio, A. R., Damasio, H., & Anderson, S. W. (2013). Insensitivity to future consequences following damage to human prefrontal cortex. In Personality and Personality Disorders (pp. 287-295). Routledge. (or the earliest 1994 version) instead of this: Buelow, M. T. and Suhr, J. A. Construct validity of the Iowa Gambling Task. Neuropsychology Review, 19:102–114, 2009. URL https://doi.org/10.1007/s11065-009-9083-4. For the probabilistic reversal learning task, they should cite at least this (or earlier work): Cools, R., Clark, L., Owen, A. M., & Robbins, T. W. (2002). Defining the neural mechanisms of probabilistic reversal learning using event-related functional magnetic resonance imaging. Journal of Neuroscience, 22(11), 4563-4567. Since this benchmark is cognition-inspired, necessary and accurate background in cognitive science and psychology would help establish the benchmark more properly. Other Strengths And Weaknesses: Strengths: the paper examines extensive existing models, which is comprehensive and robust. The writing is clear and the visualization is neat and easy to understand. Other Comments Or Suggestions: I have no other comments or suggestions. Questions For Authors: I currently do not have any questions and will raise any during the rebuttal phase. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough review and specialist insights. We deeply value your expertise and have carefully considered each point of your feedback. # "Agency" and its evaluation We appreciate your insightful critique regarding our "agency" conceptual framework. Your concerns about the terminological overlap of agency between our work and conventional agent capabilities are astute. Your comment that "... hard to be called 'agency', since there is no agent at all..." reflects a conception of agency centered on agent capabilities such as tool use. We would like to clarify that our focus is on the base model's foundational intrinsic quality, which makes these operational capabilities possible. In the work you recommended (Demircan et al., 2024), researchers compare Llama models with "human agents" and "Q-learning agents," suggesting that a certain type of agency can be investigated where "there is no agent". Indeed, this kind of agency has not yet been defined by the community, although researchers have gradually realized that certain intrinsic qualities of the base model significantly determine its effectiveness when deployed as an agent. This conceptual ambiguity highlights the need for more precise terminology to describe our scope. After a careful literature review, we will refine our terminology from the broader "**agency**" to the more precise "**epistemic agency**" to better reflect our focus, e.g., changing the title to "Reflection-Bench: Evaluating Epistemic Agency in Large Language Models." Epistemic agency refers to the capacity to actively form and revise beliefs about external environments—evaluating evidence, updating beliefs with new information, and reflecting on belief-forming processes [1, 2, 3]. This concept more accurately captures what we evaluate: the cognitive processes underlying belief formation (prediction, decision-making, error detection, memory retrieval, counterfactual thinking, and belief updating) and meta-reflection on these processes.
As we emphasize throughout our paper (Lines 21, 46-48, 140-145, 149), we fully agree that the seven cognitive dimensions are not separate but closely interconnected. Our methodological choice to evaluate them separately allows us to identify specific limitations that might be obscured in fully integrated tasks. By separately evaluating these cognitive dimensions—just as standardized cognitive assessments do in psychological research—we can establish necessary conditions for epistemic agency while providing targeted diagnostic insights that can guide focused improvements. Your comment that performance on separate tasks "does not require agency at all" might reflect a misunderstanding of our approach. While excellence in separate tasks doesn't constitute complete epistemic agency, deficits in any of these functions necessarily constrain a model's overall epistemic agency. We certainly recognize the value that integrative evaluations in game-like environments can provide, as demonstrated in recent work (Allen et al., 2024). We will discuss game-like evaluations as promising directions for future work. # Deeper analysis and experiments Considering that our primary focus is establishing a standardized benchmark, we conducted 1 million random simulations across applicable tasks, establishing chance-level thresholds at the 95th percentile (Table 9). This provides a statistically sound metric for determining whether models merely produce plausible-looking outputs. We will incorporate it alongside our existing metrics. Thank you for helping enhance the scientific rigor of our work. Following your recommendation, we've added evaluation results for both Centaur and its base model on easy and hard settings. However, Centaur shows no improvement, despite our using the specific prompt format as suggested. This unexpected result suggests Reflection-Bench reduces contamination.
You can check the results at this anonymous link: https://anonymous.4open.science/r/ICML_Rebuttal-773F/For%20Reviewer%20Xeen.md Your suggestion to apply computational models and SAE analysis is thoughtful. While such approaches would provide additional insights, implementing these analyses would expand our scope beyond establishing a comprehensive benchmark. For our task adaptations, new computational models would be needed, and behavioral failures present methodological challenges for internal representation analysis. We anticipate that our benchmark will promote more such work in the future. ### Other concerns We will enhance experimental clarity in Section 4.1 of our revised manuscript and update the references to include the original works accordingly. Thank you for your thorough and constructive review. We are grateful for your specialist comments, which strengthen our work regarding terminology refinement, statistical validation, and future research directions. [1] The Routledge Handbook of Philosophy of Agency [2] Knowledge, Dexterity, and Attention: A Theory of Epistemic Agency [3] Belief, Agency, and Knowledge: Essays on Epistemic Normativity --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you very much for your additional work. I appreciate the inclusion of Centaur as a comparable model and will raise my score to 2 in recognition of your responsiveness. However, I still cannot offer a more positive evaluation of the overall contribution. The central concern remains: the benchmark tasks, while inspired by cognitive constructs, do not constitute "agency"—even under the refined term "epistemic agency"—because they do not require the integration of multiple cognitive functions in a context where such integration is necessary. Without an agent interacting with an environment or pursuing goals, it is difficult to interpret these tasks as measuring any form of agency, epistemic or otherwise.
The decomposition of agency into component tasks is useful, but separating them entirely limits the ecological validity of the evaluation. In addition, I find the claim that deeper analysis lies outside the scope of a benchmark to be somewhat inconsistent with the paper’s stated contributions. The benchmark is motivated by cognitive science, and the tasks are drawn from this literature. Therefore, it is reasonable to expect more cognitively meaningful analysis—such as computational modeling, learning curve characterization, or internal representation analysis—to support this positioning. Without such depth, the benchmark risks being another behavior-only evaluation, lacking the interpretive value that would make it stand out from existing work (e.g., Binz & Schulz, 2023). In short, the work remains limited both in **breadth** (its definition and implementation of agency) and **depth** (the insights it provides into model behavior). I appreciate the effort made to address these concerns, but they are, in my view, only partially resolved. Thank you again for the thoughtful engagement. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your continued engagement and for recognizing our efforts. We appreciate your perspective while respectfully maintaining that our work makes meaningful contributions to the field. Our primary contribution lies in identifying and systematically decomposing epistemic agency - a previously ill-defined quality that fundamentally affects how base models perform when deployed as agents. This framework provides theoretical foundations for future research. As LLM-based agents emerge as the next frontier in AI research and applications, "epistemic agency" has broader implications for the trustworthiness of LLM-based agents - only when a model possesses robust mechanisms for belief formation and revision can it be considered accountable for its actions. 
Our empirical research evaluates current models' cognitive capabilities necessary for epistemic agency, offering fine-grained diagnostic insights for future development. Our current behavioral findings establish baselines while highlighting directions for model improvement. The observed poor performance of current models on several tasks, with some performing below chance level, inherently limits the utility of computational modeling and representation analyses at this stage. This underscores the timeliness of our behavioral assessment framework as a necessary first step, laying the groundwork for more sophisticated analyses once models demonstrate more robust capabilities. We look forward to this work developing as a research line that progressively incorporates more sophisticated evaluation methods and analyses, including the game-like evaluation, computational modeling, learning curve characteristics, and internal representation analyses you suggested. We sincerely appreciate your thoughtful feedback.
Summary: The authors propose Reflection-Bench as a contamination-free benchmark consisting of seven parameterized cognitive tests inspired by cognitive psychology paradigms. The experimental evaluation spans 16 prominent LLMs and three prompting strategies: direct generation, free output, and Chain-of-Thought (CoT). Results identify a three-tier performance hierarchy among models, highlighting significant limitations particularly in meta-reflection. The paper concludes with implications for future research, notably enhancing meta-cognition, developing adaptive cognitive strategies, and encouraging coordinated cognitive capabilities within LLMs. Claims And Evidence: Yes Methods And Evaluation Criteria: - The authors only evaluated entry-level difficulty tasks. Evaluating different difficulty levels or scalability (e.g., medium and high difficulty) would strengthen the generalizability and robustness of results. - While focused intentionally on intrinsic agency, the paper does not investigate how agency manifests within integrated, real-world agent workflows, limiting practical generalizability. Theoretical Claims: Yes Experimental Designs Or Analyses: The current tasks, while well-designed, remain somewhat abstracted from realistic, naturalistic contexts, potentially limiting their predictive power regarding real-world agent performance. Supplementary Material: Yes Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** 1. **Comprehensive Framework:** Reflection-Bench is systematically designed, covering a robust set of cognitive dimensions rooted in cognitive psychology literature, and the tests are thoughtfully adapted to suit LLM evaluation contexts. 2. **Novelty and Significance:** The paper addresses the understudied aspect of intrinsic agency in LLMs, providing a comprehensive cognitive-level evaluation beyond traditional application-specific benchmarks. 3. 
**Clear Methodology:** The authors provide a detailed description of cognitive tests and their adaptation methods, ensuring clarity and reproducibility. **Weaknesses:** See Above Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and recognition of our benchmark's strengths. We have addressed your key concerns as follows: # Evaluation across different difficulty levels We fully agree with your suggestion about evaluating tasks at varying difficulty levels. In response, we have expanded our evaluation to include more challenging parameter configurations. We tested 18 models (including the 16 original models plus Centaur, a model fine-tuned on human performance on various cognitive tests [1], and its base model Llama-3.1-70B-Instruct) on harder configurations using direct generation. The results, presented in a new table (Table 9), show the expected performance decreases on harder configurations and suggest that Reflection-Bench both minimizes contamination and is far from saturated. We also conducted 1 million random simulations across five applicable tasks, establishing robust chance-level thresholds at the 95th percentile of random performance distributions (Table 9). This provides a statistically sound metric for determining whether models merely produce plausible-looking outputs. Performance exceeding these thresholds indicates that models have developed a meaningful understanding of the underlying task parameters. In our revised manuscript, we will incorporate these statistical benchmarks alongside our existing metrics. Thank you for helping enhance the scientific rigor of our evaluation framework. You can check Table 9 at this anonymous link: https://anonymous.4open.science/r/ICML_Rebuttal-773F/For%20Reviewer%205qGQ.md # Regarding real-world generalizability ## Definition Clarification Thank you for pointing out the concern about real-world applicability. We would like to clarify that our research intentionally focuses on assessing models' intrinsic capabilities at the cognitive level, independent of specific external tools or applications.
To further clarify our scope, we have refined our terminology from "**agency**" to "**epistemic agency**," which more precisely captures our focus on cognitive capabilities that enable belief formation, revision, and reflection in dynamic environments -- making operational capabilities such as planning possible and reliable [2, 3, 4]. Undoubtedly, without robust intrinsic epistemic agency, even the most sophisticated tool integration or workflow design will be limited by the model's core cognitive constraints and, therefore, not reliable. The research community has increasingly recognized that LLM-based agent performance critically hinges on some intrinsic quality of the base model, yet there remains considerable ambiguity about the precise nature of this quality. Our benchmark aims to evaluate it as the "epistemic agency" that determines whether models can serve as reliable cores for AI agents in any real-world context. We will revise our manuscript accordingly and update the title to "Reflection-Bench: Evaluating Epistemic Agency in Large Language Models." We believe that by establishing "epistemic agency" as a well-defined, measurable characteristic of language models, our work provides the missing framework to systematically identify and evaluate this crucial but previously nebulous quality. ## Ecological validity As we emphasized in lines 76-80 ("Cognitive tests create controlled environments where subjects must learn and reason about unknown parameters through interaction, offering standardized, quantified, and objective assessment tools that mirror real-world functioning."), cognitive tests are specifically designed to extract and evaluate the abstract cognitive features underlying everyday real-world functioning. This property, known as "ecological validity" in cognitive assessment, allows controlled tasks to provide meaningful insights into fundamental capabilities that support real-world performance.
Similarly, our adapted tests extract core cognitive dimensions essential for epistemic agency in any environment. We will acknowledge in our Limitations section that the ecological validity of Reflection-Bench for LLMs specifically requires further validation, and identify this as an important direction for future work. We will also discuss how future iterations could incorporate more naturalistic contexts to complement our controlled assessments. We are grateful for your thoughtful feedback, which significantly strengthens both the conceptual clarity and methodological rigor of our research, enhancing Reflection-Bench's contributions to the field by providing a comprehensive framework for evaluating the foundational capabilities that ultimately determine an LLM's effectiveness as a reliable agent core. [1] Centaur: a foundation model of human cognition [2] The Routledge Handbook of Philosophy of Agency [3] Knowledge, Dexterity, and Attention: A Theory of Epistemic Agency [4] Belief, Agency, and Knowledge Essays on Epistemic Normativity --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I have raised my score.
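The chance-level threshold procedure described in these rebuttals (many random simulations, thresholds at the 95th percentile of random performance) can be sketched as follows. This is a toy illustration under stated assumptions, not the benchmark's actual tasks or scoring: the two-armed task, the trial count, and the function name `chance_threshold` are all hypothetical.

```python
import numpy as np

def chance_threshold(n_sims=100_000, n_trials=50, percentile=95, seed=0):
    """95th-percentile score of a purely random agent on a toy task.

    Each simulated run: an agent picks uniformly between 2 arms for
    `n_trials` trials; its score is the fraction of picks of the better
    arm (arm 0, by convention here). A model scoring above the returned
    threshold is unlikely (p < 0.05) to be producing merely random,
    plausible-looking outputs.
    """
    rng = np.random.default_rng(seed)
    choices = rng.integers(0, 2, size=(n_sims, n_trials))  # one row per run
    scores = (choices == 0).mean(axis=1)                   # per-run scores
    return float(np.percentile(scores, percentile))

threshold = chance_threshold()
```

With 50 trials the random-agent score distribution is centered at 0.5 with standard deviation about 0.07, so the 95th-percentile threshold lands noticeably above chance; only scores beyond it count as statistically meaningful task performance.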
Summary: This paper proposes a benchmark to evaluate agency in large language models. The authors define agency along seven dimensions, namely prediction, decision-making, perception, memory, counterfactual thinking, belief updating, and meta-reflection. For each of these, the authors adapt a task from cognitive psychology for LLMs. The benefit of these tasks is that they are parametrised and 'contamination-free' (in the sense that if a model has memorised a task you can change the parameters, and also that the tasks for a particular parameter configuration are unlikely to occur in training data). The authors evaluate many LLMs in three classes (normal LLMs, reasoning LLMs, and Qwen models), with three different prompting strategies (direct generation, free generation, and CoT), and repeat each task at least twice. They find that their benchmark has a clear discriminating factor, and models exhibit some agency according to their measure, but fail on some tasks. Most notably, no model is able to perform the meta-reflection task which requires determining a repeating pattern in a sequence and adapting predictively based on it. ## Update after rebuttal My main points are addressed, and I am in favour of accepting this paper. Claims And Evidence: Claims are supported. Methods And Evaluation Criteria: The methods and evaluation criteria make sense. The evaluated dimensions are all important for agency, although they do not encompass every aspect that is commonly considered agentic (e.g. planning). The authors evaluate a comprehensive set of models, use different prompting techniques, and repeat experiments to handle stochasticity. Theoretical Claims: N/A. Experimental Designs Or Analyses: Everything seems sound/valid. Supplementary Material: Yes, the results figures 14-16, as well as the task examples and system prompts in Appendix A.
Relation To Broader Scientific Literature: Many recent works evaluate agency in LLMs, but the authors distinguish themselves by defining agency along 7 dimensions and taking tasks from cognitive psychology and adapting them to LLMs. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: **Strengths** - The tasks are original, well-adapted for LLMs, and the results can discriminate between LLMs well - For one task currently all LLMs fail (the meta-reflection task) - The authors do a comprehensive evaluation (multiple models, trials, and prompting strategies) - The authors do a comprehensive analysis of the results, and make interesting findings, such as what kinds of strategies models employ in sequential belief updating tasks (namely, that models employ a win-stay-lose-switch strategy) **Weaknesses** - Presentation of results in figures. Figures 14 and 15 are difficult to parse. Consider also using informative captions. For Figure 16, you're connecting the dots with lines between models, but the points on the line between models don't indicate anything. For categorical x-axis labels, don't use a line plot. - The analysis is only done for one set of parameters; it would be interesting to have at least one more set of parameters to see how models get worse/better for different parameters in the tasks. Other Comments Or Suggestions: - Consider highlighting that the definition of agency is a well-studied and difficult subject without consensus, and the presented definition here is not generally accepted. - l160-162 needs to be rewritten - Would be useful to see a figure or table with an example for each task in 3.2 - The number reflecting the result is hard to interpret. Can you say what the interpretation is? Would it be useful to get a human score on this benchmark for comparison? - Styling error: When referring to figures like "Figure K", the space is often missing (e.g.
see Line 351r and 315l) Questions For Authors: - Appendix D.1: the authors validate automatic evaluation with a human. I can't parse *"Five of them were out of the evaluated models in this paper for verifying the generality of our automated evaluation method based on text-embedding-3-large."* How many examples do you evaluate? And only 5 of the 13 models? How were these selected? - Figure 18: what does each panel refer to? - How are CoT and direct generation done? I can't find the prompts in the appendix. - Why is Qwen a separate category of models? - Interesting analysis for the Wisconsin Card Sorting Test; models not being able to change from the shape rule they determined ("shape sink"). Could this be connected to the shape bias in humans (Landau et al., 1988)? E.g., maybe models could change to a different rule if it does not concern a shape rule; did you try this? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and helpful suggestions. We've addressed your feedback as follows: # Complementary experiments Following your suggestion, we evaluated 18 models (the original models plus two additional models: Centaur, fine-tuned on human performance on cognitive tests [1], and its base model Llama-3.1-70B-Instruct) on a difficult parameter set using direct generation. The results (Table 9) show the expected score decrease, suggesting that Reflection-Bench is both leakage-resistant and far from saturated. As for score interpretation, we conducted one million random simulations for the five applicable tasks (excluding the non-parameterized oddball task and the qualitative meta-bandit task) to establish chance-level thresholds (95th percentile). Scores above these thresholds (Table 9) indicate statistically significant task performance, with higher scores reflecting a more precise inference of the task parameters. # Definition of agency Thank you for your feedback on our conceptual precision. We will add a paragraph recognizing the conceptual complexity of "agency". To further address the ambiguity between "agency" and common agent capabilities like planning and tool usage, we will revise the terminology to "**epistemic agency**", a concept that more accurately captures our research scope. It refers to a model's intrinsic cognitive foundation for constructing, modifying, and monitoring its beliefs about the external world [2, 3, 4], independent of specific external modules or tools. Additionally, epistemic agency has direct implications for AI trustworthiness -- only when a model possesses robust mechanisms for belief formation and revision can it be considered accountable for its actions. We believe this refinement strengthens the theoretical precision and potential contributions of this work.
While the research community widely acknowledges that some intrinsic quality of base models significantly determines their effectiveness as agents, this characteristic remains inadequately characterized (sometimes simply labeled as "intelligence"). Our benchmark aims to characterize and measure this elusive foundation, i.e., epistemic agency, which will contribute meaningfully to the field. # Presentation improvements We've enhanced the presentation by: - Redesigning Figures 14 & 15 with more informative captions - Replacing line plots in Figure 16 with bar charts - Adding a comprehensive figure in Section 3.2 with illustrations for all tasks (Figure Tasks) - Fixing spacing issues in figure references throughout the paper - Rewording lines 160-162 for clarity: "For LLMs, prediction capabilities are crucial for planning tasks, where models must reason about which policies will effectively transition an agent from its initial state to a desired goal state." - Adding the prompts for the two strategies in Appendix A # Other specific questions - Appendix D.1 clarification: We validated our automated evaluation method using 1,950 responses from 13 models. Five of these models (GPT-4, Gemini-1.5-Pro, Llama-3.1-405B/70B/8B) were not included in our main evaluation but were used to verify that our embedding-based method generalizes to other models. This approach ensures that our evaluation methodology is applicable to a diverse range of models. - Figure 18 panels: Each panel represents a different level of data aggregation (from 5 to 25 datapoints) used to analyze the correlation between human annotation and automatic evaluation. - Qwen categorization: We separated the Qwen-2.5 series (72B, 32B, 14B, 7B) as one category to show how performance scales with model size within a consistent model family. - Human baselines: Our adapted tasks differ from standard human cognitive tests, making direct comparisons with existing human scores potentially inappropriate. 
We will add a discussion in the limitations section about the human-LLM comparisons for establishing the ecological validity of Reflection-Bench. - Shape bias in WCST: This is an insightful observation. We conducted additional experiments with four Qwen-2.5 models where we modified both the rule blocks and card formats from shape-color-number to color-number-shape. We found that the "shape sink" effect persists for Qwen2.5-14B-Instruct (Figure WCST), while less evident for the other three models. Although beyond the scope of the current paper, further investigation of this phenomenon could provide valuable insights into how language models indirectly encode human cognition in other modalities. Thank you again for your valuable feedback, which has significantly improved our paper. You can view the updated table and figures at this anonymous link: https://anonymous.4open.science/r/ICML_Rebuttal-773F/For%20Reviewer%20EAfK.md [1] Centaur: a foundation model of human cognition [2] The Routledge Handbook of Philosophy of Agency [3] Knowledge, Dexterity, and Attention: A Theory of Epistemic Agency [4] Belief, Agency, and Knowledge Essays on Epistemic Normativity
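The chance-level thresholding described in the rebuttal above (the 95th percentile of scores obtained by random play, estimated by Monte Carlo simulation) can be sketched as follows. This is a minimal illustration, not the benchmark's actual simulators: the 20-trial binary-choice scorer and the 0–100 score range are hypothetical stand-ins.

```python
import numpy as np

def chance_threshold(score_fn, n_sims=100_000, percentile=95, seed=0):
    """Estimate the chance-level threshold for a task: the given
    percentile of scores obtained by a random policy."""
    rng = np.random.default_rng(seed)
    scores = np.array([score_fn(rng) for _ in range(n_sims)])
    return float(np.percentile(scores, percentile))

# Hypothetical random scorer: a 20-trial binary-choice task scored
# as percent correct under uniformly random guessing.
def random_policy_score(rng, n_trials=20):
    return 100.0 * rng.integers(0, 2, size=n_trials).mean()

threshold = chance_threshold(random_policy_score)
# A model scoring above `threshold` performs significantly better than chance.
```

In the rebuttal's setting the same percentile computation would be run per task with the task's own parameterized simulator in place of `random_policy_score`.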
Summary: This paper presents Reflection-Bench, a benchmark designed to evaluate the intrinsic agency of LLMs from seven cognitive dimensions: prediction, decision-making, perception, memory, counterfactual thinking, belief updating, and meta-reflection. The authors use or adapt a cognitive psychology-inspired and parameterized test for each of the seven dimensions, and evaluate 16 LLMs across three model categories, using three prompting strategies. The performance distribution shows a clear three-tier hierarchical performance structure aligned with model scaling, demonstrating a basic level of agency. However, detailed behavioral analyses reveal significant weaknesses in LLMs’ capabilities, particularly in prediction, decision-making, and meta-reflection. The results suggest that future research should focus on improving meta-cognition, developing dynamic reasoning mechanisms, and enhancing coordination among cognitive capabilities. Claims And Evidence: * The authors claim that by using parameterized tests (parameters remain dynamic or unknown while the task might be seen during training), this benchmark is "contamination-free" --- I believe this is an over-statement. Although this may avoid verbatim memorization, having seen the task format with alternate numbers or solutions still counts as implicit contamination ("implicit contamination" in [1], footnote 2; "in-distribution contamination" in [2]). While I agree that parameterized tests may greatly reduce such contamination, "contamination-free" is over-claiming (in fact, in section 6, a weaker statement is used: "ensures minimization of potential data contamination"). * Another problematic claim (or general perspective) is that this benchmark is claimed to measure intrinsic "agency". A critical aspect of LLM agents, (autonomous) tool use, is completely missing. Other claims are supported by clear and convincing evidence. [1]. 
Generalization or Memorization: Data Contamination and Trustworthy Evaluation for Large Language Models [2]. DICE: Detecting In-distribution Contamination in LLM’s Fine-tuning Phase for Math Reasoning Methods And Evaluation Criteria: The proposed methods and evaluation criteria mostly make sense for the problem or application at hand. However, I believe it is not adequate to fully evaluate "agency", as mentioned above. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: The experimental designs and analyses are sound. The cognitive tests originate from the human cognition literature, and the adaptations are reasonable for the purpose of testing the cognitive processes of LLMs. Model-based automatic evaluation is validated with detailed correlation checks against human annotations. The model and prompt selection are also reasonable. Supplementary Material: No code is provided, although the prompts and examples of each task are displayed in the appendix. Relation To Broader Scientific Literature: This benchmark builds upon existing research in LLM evaluation, cognitive psychology, and AI agency. Prior work on LLM evaluation has largely focused on individual and task-specific benchmarks (e.g., reasoning, planning, and decision-making) rather than holistic agency, while Reflection-Bench frames these abilities within a structured agency evaluation framework. Essential References Not Discussed: To my best knowledge, no essential reference is missing. Other Strengths And Weaknesses: Strengths: * This benchmark adapted several cognitive tests to evaluate LLM agency, which appears to be a novel contribution. * The analyses (including the secondary analysis on the model performance patterns) are in-depth, highlighting clear trends in decision-making, memory, and meta-reflection capabilities. * The results provide interesting insights into critical future directions. Weaknesses: * Overclaiming or inadequate evidence for "contamination" and "agency". 
Other Comments Or Suggestions: * To provide more compelling evidence of "contamination-free", specific testing protocols could be incorporated (e.g. [3]), and new cognitive tests/games (with newly-created rules) or tests after the model's cutoff time can be developed (e.g. similar to [4]), although I do acknowledge that this might be beyond the scope of the current work (i.e. I would be happy if the authors simply weaken this statement for this work). Therefore, I put it here as a suggestion for future versions. [3]. Investigating Data Contamination in Modern Benchmarks for Large Language Models [4]. LiveBench: A Challenging, Contamination-Free LLM Benchmark Questions For Authors: 1. For the MBT tests for meta-reflection, on which the models generally struggled, do you have some failure examples (showing model output) and detailed error analysis on the failure patterns? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful suggestions. We've addressed your concerns as follows: # Regarding "contamination-free" claims We agree that our claim of "contamination-free" was overstated. We will revise all such instances throughout the paper to use more precise language such as "reducing potential data contamination" or "minimizing data leakage." To address this concern further, we examined your recommended references and considered applying the TS-Guessing method [1], but found it incompatible with Reflection-Bench's format, where models typically choose between given structured options (Right/Left, Yes/No). We've also added new evaluation results of Centaur [2] (specifically fine-tuned on human cognitive test performance) and its base model Llama-3.1-70B. Centaur shows no performance improvement compared to its base model, providing empirical evidence that our parameterized design effectively minimizes data leakage concerns. Additionally, we will add a discussion in our Limitations section about developing novel test designs similar to LiveBench [3] as a promising direction for future work. # Clarifying the scope of "agency" We acknowledge the potential confusion between our notion of agency and agent capabilities such as planning and tool usage. Our focus is not on these operational capabilities but rather on the underlying processes that make such capabilities possible and reliable. While there is a consensus that a certain intrinsic quality of the base model significantly determines its effectiveness when deployed as an agent, this foundational characteristic remains poorly characterized in the research community (it is sometimes simply called "intelligence"). Our paper aims to identify this undefined quality and evaluate it systematically. Based on your feedback, we have refined our terminology to "**epistemic agency**," which more accurately captures our intended meaning. 
This philosophical concept refers to a model's intrinsic cognitive foundation for constructing, modifying, and monitoring its beliefs about the external world [4,5,6] - independent of specific external modules, tools, or applications. Moreover, epistemic agency has direct implications for the trustworthiness of AI systems. Only when a model possesses robust mechanisms for belief formation and revision can it be considered accountable for its actions. In the revised manuscript, we will clarify this distinction throughout, including changing the title to "Reflection-Bench: Evaluating *Epistemic Agency* in Large Language Models." We believe this refinement addresses your concerns. Thank you for helping us clarify this crucial aspect of our work, which strengthens both the conceptual precision and potential contributions of our research. # MBT failure example We had provided a representative example of MBT (meta-bandit task) failure in Figure 7, but did not explicitly point this out in its caption. We have therefore revised the caption of Figure 7 to explicitly identify it as showing representative failure patterns from GPT-4o (Figure 7). You can check the revised Table 9 and Figure 7 at this anonymous link: https://anonymous.4open.science/r/ICML_Rebuttal-773F/For%20Reviewer%20sNx4.md # Code and data availability We confirm that the complete code and dataset for Reflection-Bench will be open-sourced on GitHub following the double-blind review process. We believe these revisions address your concerns while improving the paper's rigor and clarity. Thank you again for your valuable contributions to strengthening our work. 
[1] Investigating Data Contamination in Modern Benchmarks for Large Language Models [2] Centaur: a foundation model of human cognition [3] LiveBench: A Challenging, Contamination-Free LLM Benchmark [4] The Routledge Handbook of Philosophy of Agency [5] Knowledge, Dexterity, and Attention: A Theory of Epistemic Agency [6] Belief, Agency, and Knowledge Essays on Epistemic Normativity --- Rebuttal Comment 1.1: Comment: Thank you for your response on data contamination and the scope of "agency". With the current scope and contribution effectively reduced to "testing the intrinsic cognitive capabilities of LLMs", I find it quite borderline. Therefore, I am inclined to keep my score unchanged.
Global Convergence and Rich Feature Learning in $L$-Layer Infinite-Width Neural Networks under $\mu$ Parametrization
Accept (poster)
Summary: This paper investigates the training dynamics of infinitely wide neural networks with $\mu$P and SGD. They show that these neural networks can learn rich feature spaces and enjoy global convergence, which is better than other mainstream parameterizations such as NTK, MF, and SP. They also validate the theoretical findings through real-world experiments. Claims And Evidence: No. The proof of the global convergence property (Corollary 4.6) is too intuitive and seems wrong. Please see the details in "Theoretical Claims". Methods And Evaluation Criteria: Yes. $\mu$P is a popular parameterization in the pretraining of large models. However, its mechanism is rarely explored. This paper tries to prove the advantages of $\mu$P over other parameterizations (both in the kernel and feature learning regimes). Theoretical Claims: I appreciate the result in Theorem 4.5, which proves that the feature representations evolve while maintaining their diversity and avoiding collapse throughout training. However, the proof of global convergence is intuitive and seems wrong. In my opinion, any neural network enjoys zero training loss and global convergence if it is no longer updated after some training step. Experimental Designs Or Analyses: Yes. The paper conducts experiments to verify that $\mu$P can perform feature learning (better than NTK and SP) and keep a rich feature space (better than IP). I think the empirical results can support their main results. Supplementary Material: Yes. I reviewed the section on experimental details and some proofs of the theoretical results. Relation To Broader Scientific Literature: NA. Essential References Not Discussed: NA. Other Strengths And Weaknesses: Strengths: 1. The paper is written clearly. 2. The paper studies a significant topic in the theory of $\mu$P. 3. As far as I know, the proof idea in this paper is original. Weaknesses: 1. I think the authors need to carefully address my concern about global convergence. 
Other Comments Or Suggestions: NA. Questions For Authors: 1. Is the $t$ in Theorem 4.5 a finite constant w.r.t. width like that in the TP series (e.g. TP4)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the Reviewer for taking the time to review and give feedback on our manuscript. We appreciate the positive comments regarding the paper's clarity, the topic's significance, the originality of our ideas, and the supportive experimental results. We are particularly grateful for the reviewer's recognition of Theorem 4.5's contribution regarding feature learning under $\mu$P. Regarding the reviewer's main concern about the global convergence result, we believe there is a significant misunderstanding about what our result delivered and why it is non-trivial. The reviewer states (in the comments on "Theoretical Claims" and in Question 1) that **Any neural network enjoys zero training loss and global convergence if it will no longer be updated after some training step.** We respectfully but strongly disagree with this premise as a general statement and wish to clarify this crucial point, as it seems to underlie the reviewer's doubt. --- **Clarifying the Core Misunderstanding Regarding Global Convergence** While it is true that training algorithms like SGD stop updating when they reach a point where the gradient is zero (a stationary point), this absolutely does not guarantee that this point is a global minimum. In the complex, non-convex loss landscapes of neural networks, such stationary points can very often be suboptimal local minima or saddle points, where the training loss is significantly higher than the lowest possible value (the global minimum) and the model has not achieved the best possible fit to the training data. The core contribution of our Corollary 4.6 is precisely to demonstrate rigorously that, for the specific setting we study (infinitely wide networks with $\mu$P trained using SGD), the optimization dynamics avoid getting trapped in these suboptimal stationary points. Our proof establishes that the training process is guaranteed to converge to a global minimum of the training loss function. 
**Therefore, proving global convergence is far from trivial**; it requires showing that suboptimal stationary points (such as local minima with loss higher than the global minimum and relevant saddle points) are escaped or avoided, leading specifically to the desired global minimum (the lowest possible training loss). This property is not inherent to any network training process that stops; it relies critically on the theoretical properties of $\mu$P in the infinite-width limit that we analyze in our paper. Our work provides a theoretical foundation for how $\mu$P facilitates finding the global optimum, which is a key aspect of understanding its effectiveness. --- Response to Specific Questions: **Q1**. In my opinion, any neural network enjoys zero training loss and global convergence if it will no longer be updated after some training step. **A1**. As detailed above, this statement reflects the core misunderstanding we wish to clarify. Halting updates only signifies reaching a stationary point (zero gradient), which is not necessarily a global minimum. It could be a suboptimal local minimum or a saddle point with higher loss. Our key result in Corollary 4.6 is the non-trivial proof that, under $\mu$P and infinite width, SGD specifically converges to a global minimum of the training loss, overcoming the challenge of potentially getting stuck in other stationary points with higher loss values. This is our specific contribution. --- **Q2**. Is the t in Theorem 4.5 a finite constant w.r.t. width like that in the TP series (e.g., TP4)? **A2**. Our current analysis focuses on the behavior of infinitely wide neural networks. Therefore, investigating the dependence of constants like t on finite network widths is beyond the scope of this paper. This remains an interesting open question for exploring finite-width corrections. 
--- We trust this clarification addresses the reviewer's primary concern by highlighting the crucial distinction between halting training updates and achieving mathematically proven global convergence. We believe our theoretical analysis provides a rigorous justification for the observed empirical success of µP and clarifies the optimization dynamics in this important regime. --- Rebuttal Comment 1.1: Comment: Can you prove rigorously that "However, the non-degenerate trajectory ensures that a nonzero error signal would necessitate further parameter updates" in the proof of Corollary 4.6? ------------------- My concern has been addressed. I have improved my score from 2 to 4. --- Reply to Comment 1.1.1: Comment: We appreciate the opportunity to address the new question raised concerning this specific step in the Corollary 4.6 proof: how the non-degenerate trajectory ensures parameter updates follow from a non-zero error signal. This step "the non-degenerate trajectory ensures that a nonzero error signal would necessitate further parameter updates." establishes a crucial distinction: while neural networks typically have many stationary points in the parameter space (where parameter gradients vanish), our proof shows that under $\mu$P with non-degenerate trajectories, convergence can only occur when all error signals $\mathring{\chi}_{T, i} = L'(\mathring{f}_T, y_i)$ are zero. This is significant because zero error signals across all training samples directly imply reaching the global minimum of the training objective (given the convexity of typical loss functions with respect to model outputs). In other words, we are not merely showing convergence to a stationary point in parameter space, but specifically convergence to a global minimum. This property, stemming directly from the feature independence guaranteed by Theorem 4.5, is critical for global convergence. 
We provide the detailed derivation below, explicitly demonstrating this link: --- **More Detailed Derivation:** We proceed by contradiction. Suppose at time T, there exists some sample $i$ with non-zero error signal $\mathring{\chi}_{T, i} \neq 0$, yet the parameters no longer update after time T. According to our parameter update rule in Equation (3.3) from the main paper, we have: $Z^{\delta W^{L+1}\_t} = -\eta \sum_{i \in [m]} \mathring{\chi}\_{t-1,i} Z^{x^L}\_{t-1}(\xi_i),$ where $[m] = \\{1, 2, \dots, m\\}$ denotes the set of indices for the training samples, and as stated in Section 3, the weights evolve as: $Z^{W^{L+1}\_t} = Z^{W^{L+1}\_0} + Z^{\delta W^{L+1}\_1} + \cdots + Z^{\delta W^{L+1}\_t}$ For the parameters to remain unchanged from time T to T+1, we must have: $Z^{\delta W^{L+1}_{T+1}} = 0$ Substituting the update rule, this implies: $-\eta \sum_{i \in [m]} \mathring{\chi}\_{T,i} Z^{x^L}\_{T}(\xi_i) = 0$ Since the learning rate $\eta > 0$, this simplifies to: $\sum_{i \in [m]} \mathring{\chi}\_{T,i} Z^{x^L}_T(\xi\_i) = 0$ However, by Theorem 4.5, we have established that the post-activation features $\\{Z^{x^L}\_T(\xi\_i)\\}\_{i \in [m]}$ are linearly independent at any time T. This linear independence property means that the only way for the equation $\sum_{i \in [m]} \mathring{\chi}\_{T, i} Z^{x^L}\_T(\xi\_i) = 0$ to hold is if $\mathring{\chi}\_{T, i} = 0$ for all $i \in [m]$. This contradicts our assumption that $\mathring{\chi}\_{T,i} \neq 0$ for some $i$. Therefore, if any error signal is non-zero at time $T$, the parameters must continue to update. --- We appreciate the reviewer highlighting this key step in the proof of Corollary 4.6. In our revised manuscript, we will include this detailed derivation to strengthen the connection between Theorem 4.5 and Corollary 4.6 while maintaining the natural flow of the paper.
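The contradiction step in the derivation above rests on a basic linear-algebra fact: if the feature vectors are linearly independent (full column rank), the only coefficient vector $\chi$ with $\sum_i \chi_i Z_i = 0$ is $\chi = 0$. A minimal numpy check of this fact, with random Gaussian columns as hypothetical stand-ins for the post-activation features $Z^{x^L}_T(\xi_i)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 5                       # feature dimension, number of training samples
Z = rng.standard_normal((n, m))    # columns: stand-ins for the post-activation features

# Linear independence of the columns <=> full column rank (Theorem 4.5's guarantee)
assert np.linalg.matrix_rank(Z) == m

# Solve Z @ chi = 0 in the least-squares sense; with full column rank the
# unique solution is chi = 0, so a nonzero error signal chi cannot make
# the weight update -eta * Z @ chi vanish.
chi, *_ = np.linalg.lstsq(Z, np.zeros(n), rcond=None)
assert np.allclose(chi, 0.0)
```

This is exactly why feature independence (Theorem 4.5) turns "parameters stop updating" into "all error signals are zero", i.e., a global minimum.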
Summary: The submitted paper analyzes the global convergence of MLPs in feature learning parameterization. By demonstrating that features remain independent during training, they prove global convergence. Claims And Evidence: Yes. The theorems support the claims. Methods And Evaluation Criteria: I am not sure if the experiments support the assumptions considered in the paper. I would like the authors to verify these assumptions (see Questions). Theoretical Claims: I have checked the theorems and arguments presented in the main text but not the proofs. Experimental Designs Or Analyses: I am not convinced why the Figure 2 experiments support the results in the paper. I would request the authors to help me understand this result better. Supplementary Material: I have not reviewed the supplementary material for this paper. Relation To Broader Scientific Literature: This paper contributes to understanding the convergence of feature learning limits of neural networks. Prior literature has mostly focused on the kernel limits. Essential References Not Discussed: Relevant references are discussed to the best of my knowledge. Other Strengths And Weaknesses: Strengths: This work analyzes the global convergence of feature learning parameterizations, which is of significant interest to the community. Weaknesses: I am not sure if the assumptions made in this paper generalize to practical settings beyond toy models (see Questions). Other Comments Or Suggestions: Suggestions: * Can the authors directly measure correlations between different features during training to support their analysis? Comments: * SP and NTP fail to learn features only at small learning rates. 
By comparison, at large learning rates, all parameterizations learn features and can perform equally well or better [1] [1] Scaling Exponents Across Parameterizations and Optimizers https://arxiv.org/abs/2407.05872 Questions For Authors: * Can the authors clearly state how they constructed the feature matrix from joint space-time features in Figure 2? I am not sure how this experiment supports the theoretical results. * I am unsure about how realistic Assumption 4.1 is for practical settings beyond toy models (MLPs trained on CIFAR; MLPs only achieve around 50% accuracy on CIFAR-10 and may not require feature learning to achieve this performance). Can the authors check the assumption on a few datasets? * A recent paper has shown that both SP and NTP can exhibit hyperparameter transfer (and feature learning) [1]. Is it straightforward to show this in the framework introduced? [1] Scaling Exponents Across Parameterizations and Optimizers https://arxiv.org/abs/2407.05872 Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their support and constructive feedback. We address each question below: --- **Q1**: Can the authors directly measure correlations between different features during training to support their analysis? **A1**: Yes, we have directly measured the correlations between different features during training. Our analysis confirms that features remain largely independent throughout the training process. Specifically, we compute the Gram matrix G of the centered features, where $G_{ij} = (h(x_i) - \bar{h})^{\top} (h(x_j) - \bar{h})$ represents the similarity between features of data points i and j. The set of N centered feature vectors $\\{h(x_i) - \bar{h}\\}\_{i=1}^N$ is linearly independent if and only if the minimum eigenvalue λ_min(G) > 0. Our experiments track this minimum eigenvalue throughout training, demonstrating that $\mu$P maintains feature diversity while IP shows catastrophic eigenvalue collapse. These results directly validate our theoretical claims. We have additional experimental results here: https://anonymous.4open.science/r/mup_rebuttal-0E0B/Rebuttal.pdf. --- **Q2**. Regarding SP and NTP at large learning rates and feature learning **A2**. Thank you for pointing out the paper "Scaling Exponents Across Parameterizations and Optimizers." It is relevant and we will add a citation and brief discussion about it in the revision. This work empirically characterizes when feature learning occurs. Our work provides a rigorous mathematical framework guaranteeing global convergence during feature learning. The referenced paper shows that SP and NTP can empirically exhibit feature learning at large learning rates but lack theoretical guarantees. The existing NTK analysis framework cannot directly analyze feature learning in SP and NTP at large learning rates, creating a theoretical gap. 
While the precise theoretical characterization of SP and NTP at large learning rates remains an open problem in the community, our analysis provides new insights that could potentially be extended to study this challenge in future work. --- **Q3**. Construction of feature matrix from joint space-time features in Figure 2. **A3**. In Figure 2, we analyze the combined space-time feature diversity by constructing a feature matrix that simultaneously captures both initial and final representations. This approach provides a comprehensive assessment of how features evolve during training while maintaining linear independence. For each parameterization scheme, we construct a combined representation matrix as follows: Let $h_0(x_i) \in \mathbb{R}^n$ represent the feature vector at initialization for input $x_i$, and $h_T(x_i) \in \mathbb{R}^n$ represent the feature vector after training completion. For our dataset with $N$ samples, we form the combined feature matrix $H_{\text{combined}} \in \mathbb{R}^{n \times 2N}$ by concatenating both initial and final representations: $$H_{\text{combined}} = \begin{bmatrix} h_0(x_1) & h_0(x_2) & \cdots & h_0(x_N) & h_T(x_1) & h_T(x_2) & \cdots & h_T(x_N) \end{bmatrix}$$ We then compute the Gram matrix $G = H_{\text{combined}}^T H_{\text{combined}}$, where each element $G_{ij}$ captures the similarity between combined representations. The minimum eigenvalue of this Gram matrix quantifies the linear independence across both spatial dimensions (different inputs) and temporal dimensions (initialization versus final state). This combined analysis provides a stronger test of feature diversity than analyzing features in isolation. As demonstrated in Figure 2, μP maintains significantly higher minimum eigenvalues compared to other parameterizations as width increases, confirming its unique capability to preserve feature diversity throughout training while enabling substantial feature learning. 
For example, if a network doesn't learn meaningful features (e.g., NTP at large widths) and $h_T(x_i) \approx h_0(x_i)$ for all inputs, the combined matrix would have linearly dependent columns, yielding a minimum eigenvalue near zero. The higher minimum eigenvalues for μP confirm it learns new features that are linearly independent from initialization. --- **Q4.** Realistic nature of Assumption 4.1 for practical settings beyond toy models **A4:** Thank you for this question. Assumption 4.1 requires distinct inner product magnitudes between data points. Our empirical verification (sampling 5,000 random triplets from each dataset) shows: | Dataset | % Triplets Satisfying | Min Dist Between Inner Products | |---------|----------------------|------------------------------| | MNIST | 100% | 2.74e-06 | | CIFAR-10| 100% | 3.08e-05 | | CIFAR-100| 100% | 1.51e-05 | These results definitively confirm that Assumption 4.1 is not merely a theoretical convenience but reflects geometric properties inherent to real-world datasets. The perfect compliance across standard benchmarks validates that our theoretical framework directly applies to practical settings beyond toy models. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. While the rebuttal has clarified my concerns, I am keeping my score because I could not prove several results of the paper in the given time frame, and the score reflects my review confidence. I believe such papers that make fundamental contributions to the field deserve a thorough review in journals with longer review periods. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging that our rebuttal addressed your concerns and for sharing the reasoning behind your final assessment. If accepted, we will incorporate these clarifications to enhance the clarity of the manuscript for the ICML audience. 
We appreciate you recognizing the work's potential significance and thank you for your constructive feedback throughout this process.
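The Gram-matrix eigenvalue diagnostic described in A1 and A3 of the rebuttal above can be sketched as follows. The Gaussian features are hypothetical stand-ins for a trained network's hidden representations (not outputs of the paper's actual models); the contrast between "features moved" and "features frozen" mirrors the rich ($\mu$P-like) versus lazy (NTP-like) regimes in Figure 2.

```python
import numpy as np

def min_gram_eigenvalue(H):
    """H: (n, k) matrix whose columns are feature vectors.
    The columns are linearly independent iff the minimum
    eigenvalue of the Gram matrix G = H^T H is positive."""
    G = H.T @ H
    return float(np.linalg.eigvalsh(G).min())

rng = np.random.default_rng(0)
n, N = 128, 10
h0 = rng.standard_normal((n, N))        # stand-in: features at initialization
hT = h0 + rng.standard_normal((n, N))   # stand-in: features after training

# Combined space-time matrix as in Figure 2: init and final side by side.
H_rich = np.concatenate([h0, hT], axis=1)   # features evolved: full column rank
H_lazy = np.concatenate([h0, h0], axis=1)   # features frozen: duplicated columns

lam_rich = min_gram_eigenvalue(H_rich)   # positive: diversity preserved
lam_lazy = min_gram_eigenvalue(H_lazy)   # ~0: combined matrix is rank-deficient
```

When training barely moves the features ($h_T \approx h_0$), the combined matrix has near-duplicate columns and the minimum eigenvalue collapses toward zero, which is the signature the rebuttal attributes to lazy parameterizations.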
Summary: This paper studies the training of infinite-width $L$-layer FFN under the $\mu P$ parametrization. The authors establish that features evolve significantly during training while remaining linearly independent, ensuring convergence to a global minimum. Claims And Evidence: The theoretical claims in the paper may be correct, as I did not verify all details of the proofs. However, the insights may not generalize beyond the specific setting studied, as the paper makes restrictive assumptions and does not provide comprehensive experimental validation. Additionally, while the authors claim their results apply to SGD, their theoretical analysis focuses on **full-batch** GD rather than SGD. In SGD, there is an additional source of randomness from sampling the training data, which may not be accounted for in their analysis. As a result, the conclusions drawn for GD may not directly extend to standard SGD. Methods And Evaluation Criteria: The results seem correct, but the paper does not provide comprehensive experiments to support its theoretical results or to verify whether the assumptions are necessary. Without empirical validation, it remains unclear whether the proposed theoretical framework accurately captures real-world training dynamics or if the restrictive assumptions limit its practical applicability. Theoretical Claims: I did not check the proofs in detail, but since they follow results from the Tensor Program framework, the claims are likely correct. However, there are some important concerns. 1. The authors claim to follow [1], but their setup is actually closer to [2]. In [1], the learning rate is scaled as $\eta n^{-c}$ uniformly across all layers, whereas [2] allows different layers to use different scaling factors. This difference raises concerns about whether their training dynamics assumptions are fully aligned with prior works. 2. The paper does not impose explicit constraints on the learning rate for global convergence. 
This implies that as long as $\eta$ is constant with respect to width $n$, arbitrarily large learning rates (e.g., $\eta = 100$) could be used theoretically, which is possible in some machine learning models [3]. In practice, learning rates are much smaller ( $\eta \ll 1$), suggesting that there might be missing stability constraints in the theoretical framework. [1] Tensor Programs IV: Feature learning in infinite-width neural networks. ICML 2021 [2] Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer. NeurIPS 2021 [3] Implicit Bias of Gradient Descent for Logistic Regression at the Edge of Stability. NeurIPS 2023 Experimental Designs Or Analyses: The experimental results are not comprehensive, and it is unclear what features are being demonstrated in the plots. Based on the authors’ description, I assume the features $h$ correspond to $h^2$ in the second layer after $1000$ epochs, but this is not explicitly stated. The authors may also consider adding results for $x^2$, as they claim that both pre- and post-activation features remain linearly independent. Additionally, the paper includes results for Tanh and ReLU, which do not meet their theoretical assumptions, yet the trends in the plots remain similar to those in Figures 1 and 2. This suggests that the assumptions made in the paper may serve primarily for proof convenience rather than being strictly necessary in practice. Supplementary Material: I reviewed the appendix for experimental details and scanned the proofs, but I did not check every detail thoroughly. Relation To Broader Scientific Literature: This paper extends the theoretical understanding of infinite-width neural networks by demonstrating that the $\mu P$ enables both independent feature learning and global convergence. Essential References Not Discussed: The paper omits some recent theoretical works on $\mu P$ and NTK for infinite-depth neural networks, which are relevant to its contributions. 
Notably, the following works should be considered: [1] TP6: Feature Learning in Infinite-Depth Neural Networks [2] Implicit regularization of deep residual networks towards neural ODEs [3] Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit [4] Global Convergence in Neural ODEs: Impact of Activation Functions Other Strengths And Weaknesses: NA Other Comments Or Suggestions: Equation 3.1: should be $h^1 = W^1 \xi$, not $W\xi$. Questions For Authors: 1. What is the precise definition of $\alpha_k$ in Lemma C.1? How does it relate to the covariance structure of the random variables involved? 2. The paper studies the training dynamics of infinite-width feedforward networks (FFNs). Does a finite-width FFN always converge to this infinite-width case? If so, under what conditions does this hold, and what is the convergence rate? 3. Intuition Behind GOOD Functions: What is the intuitive reasoning behind the definition of GOOD functions? Additionally, why is Condition 4 in Assumption 4.3 necessary? 4. Gradient Independence Assumption: Does the paper assume the independent gradient assumption, meaning that $W$ and $W^\top$ are independent during training? If so, how does this assumption affect the generality of the results? 5. Gaussian Process Covariance: Can the authors explicitly write out the specific covariance expressions for the Gaussian processes in Equations (5.3) and (5.4)? 6. Can the results on independent feature learning and global convergence be extended to the finite-width FFN case? If not, what are the key challenges in making such an extension? If so, how large does the width need to be in your experiments for the results to align with theoretical predictions? 7. The paper does not seem to impose explicit constraints on the learning rate to ensure convergence, except $\eta=\Theta(1)$. Under what conditions on the learning rate does convergence hold? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review. Due to space constraints, our responses are necessarily concise while addressing all key points: --- **Q1**. How does your SGD analysis account for mini-batch randomness when your theoretical focus is on full-batch GD? **A1**. Our analysis, based on the Tensor Program framework, already accounts for mini-batch randomness in SGD. Definition 3.1 models this with the indicator function $\mathbf{1}\\{i \in \mathcal{B}_t\\}$, and our derivations incorporate this sampling process. Under SGD, the induced Gaussian processes maintain the same covariance structure as in GD, with the additional randomness from batch selection not affecting the feature independence property central to our global convergence proof. --- **Q2**. Why aren't your experiments more comprehensive in validating theory and testing assumption necessity? **A2**. CIFAR-10 provides an ideal balance between complexity and tractability for testing our theoretical claims across multiple parameterizations. In the revised manuscript, we expanded our experimental section with comprehensive evaluations across varying network widths and depths. --- **Q3**. Which layer features do your plots show, $h^2$? Could you add $x^2$ results to validate both pre/post-activation feature independence? **A3**. Yes, it is $h^2$. We've analyzed both pre/post-activation features across all layers. Our supplementary analysis [https://anonymous.4open.science/r/mup_rebuttal-0E0B/Rebuttal.pdf] includes complete eigenvalue spectrum plots demonstrating linear independence is maintained throughout training in all layers. These results validate Theorem 4.5 across feature types and network depth. The contrast between $\mu$P and IP becomes more significant in deeper layers, where IP shows catastrophic eigenvalue collapse while $\mu$P maintains robust feature diversity. --- **Q4**. Why do Tanh and ReLU show similar results despite not meeting your theoretical assumptions? 
**A4**: Tanh satisfies the GOOD function properties in Assumption 4.3. ReLU was included for completeness despite its non-smoothness. As we already discussed in Appendix A.1, extending our analysis to non-smooth activations is a promising future direction. ReLU's similar empirical behavior suggests our theory's core mechanisms may be more general than the specific assumptions in our proofs. --- **Q5**. The authors claim to follow [1], but their setup is actually closer to [2]. **A5**. Both schemes give the same dynamics. [1] appears to scale the learning rate uniformly, but this difference arises because [1] expresses weights as $w_{\ell}$ with scaling $a_{\ell}$ in $W^{\ell}=n^{-a_{\ell}}w^{\ell}$, while [2] directly uses $W_{\ell}$. Such equivalence is justified in Tensor Program V (arXiv:2203.03466). --- **Q6**. No explicit constraints on learning rate for global convergence. The paper only specifies $\eta = \Theta(1)$ without clear convergence conditions. **A6**. We prove that when convergence occurs under $\mu$P parameterization, the non-degeneracy of features ensures it will be at a global minimum. We didn't provide explicit convergence conditions because our focus was on characterizing the convergence point, not the convergence process itself. Determining an explicit convergence condition on $\eta$ for $\mu$P remains an important open question. --- **Q7**. The paper omits some recent theoretical works on $\mu$P and NTK for infinite-depth neural networks. **A7**. Thank you for pointing this out. We already discussed Tensor Program 6 in our paper and will discuss the other suggested works in our revision. --- **Q8**. Definition of $\alpha_k$ in Lemma C.1 **A8**. They are arbitrary real numbers representing linear combinations in our independence proof. --- **Q9**. Does a finite-width FFN always converge to this infinite-width case? ... Can the results be extended to the finite-width FFN case? What's the challenge? **A9**. 
Yes - convergence is formally established by Theorem 7.4 in Tensor Program IV. Our empirical results show networks with widths larger than 128 nearly align with infinite-width predictions (Figures 1-4). The main challenge is determining precise convergence rates due to the complexity of tracking higher-order interactions in finite-width networks. --- **Q10**. Can you explicitly write out the covariance expressions for equations (5.3) and (5.4)? **A10**. The expressions are provided in Definition 3.1 (Line 187) and Line 212 and restated after Equations (5.3) and (5.4) in Lines 310-311. --- **Q11**. What's the intuition behind GOOD functions, and why is Condition 4 necessary? **A11**. GOOD functions maintain feature diversity and ensure non-trivial gradient flow by preventing constant decomposition. Condition 4 helps prevent feature collapse, allowing sigmoid, tanh, and SILU to satisfy our analysis. --- **Q12**. Does your paper assume W and $W^\top$ are independent during training? **A12**. No, we consider standard backpropagation. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed and thoughtful rebuttal. Given these factors, I will raise my score to a 3 to reflect the strengthened presentation and technical correctness. However, I hesitate to give a strong recommendation for acceptance, as I have not had sufficient time to verify all proofs in detail, and fully digesting the cited works such as Tensor Program V and VI would require more time than was available during the review period. Additionally, while the new experiments are helpful, they still fall short of being comprehensive—for example, broader comparisons and evaluations beyond feature independence would further strengthen the empirical support for your theoretical claims. --- Reply to Comment 1.1.1: Comment: Thank you for your thorough review and for raising our score after considering our rebuttal – this is very encouraging. 
We appreciate you acknowledging our responses and have noted your final comments. In the revision, we will incorporate the improvements discussed to strengthen the paper.
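The layer-wise equivalence claimed in A5 can be sanity-checked numerically. The sketch below is illustrative only (a toy linear loss, with $n$, $a$, and $\eta$ chosen arbitrarily, none taken from the paper): updating $w$ with a uniform learning rate under the reparametrization $W = n^{-a} w$ yields the same $W$ as updating $W$ directly with the per-layer rate $\eta n^{-2a}$.

```python
import numpy as np

# Toy one-layer check; all names and constants are illustrative.
rng = np.random.default_rng(0)
n, a, eta = 64, 0.5, 0.1
w = rng.standard_normal((n, n))
x = rng.standard_normal(n)
g = rng.standard_normal(n)          # upstream gradient dL/d(Wx) for L = g . (W x)

grad_W = np.outer(g, x)             # dL/dW

# Scheme A (weight scaling): W = n^{-a} w, update w with uniform rate eta.
# Since dL/dw = n^{-a} dL/dW, one step gives:
W_A = n ** (-a) * (w - eta * n ** (-a) * grad_W)

# Scheme B (per-layer learning rate): update W directly with rate eta * n^{-2a}.
W_B = n ** (-a) * w - eta * n ** (-2 * a) * grad_W

assert np.allclose(W_A, W_B)        # both schemes produce the same weights
```

In other words, the per-layer weight scaling of [1] can be absorbed into the per-layer learning-rate scaling of [2], which is the equivalence the rebuttal attributes to Tensor Program V.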
Summary: The paper aims to investigate rich feature learning and convergence to a global minimum via a Maximal Update Parametrization. Networks learn linearly independent features which are different from features at initialization, and due to covariance structure over layers, this implies convergence to a global minimum under the Maximal Update Parametrization. This holds across the choice of activation function. Claims And Evidence: I can see the story that the authors tell through the figures; however, I find there to be a lack of experiments. I would appreciate seeing experiments investigating this across different architectures (especially for more conventional networks) and layers. I would also like to verify that these results hold across multiple seeds. Many of the claims behind the paper rely on theorems, which are given extensive proofs. From what I can understand from looking at the appendix of the main proof claim, the logic appears to hold up. I would appreciate clearer exposition on when the GOOD function definition is used in the proofs. I am unsure if the features are empirically shown to be linearly independent. I am also confused about why only the minimum eigenvalue is investigated; shouldn't the entire spectrum be examined in order to measure feature diversity? Methods And Evaluation Criteria: Most of the paper makes a theoretical claim and extensive proofs are provided, so the approach makes sense. I wish more detail had been given to explaining the graphs. Theoretical Claims: I looked at the main theorem, and the steps appeared clear and correct. I question whether the definition of a GOOD function is somewhat arbitrary (for example, why can't I set $r_1 = r_2 = 0$ and ensure $\phi$ is different for each input $x$?) Experimental Designs Or Analyses: I would like more explanation behind why the authors just investigated the minimum eigenvalue. I would like to see more experiments over architectures, layers, and seeds of initialization. 
I would also appreciate empirical investigations of the linear independence of the features. Supplementary Material: The proof behind Theorem 4.5. Relation To Broader Scientific Literature: The paper provides a novel investigation of global convergence properties and rich feature learning. Essential References Not Discussed: I would mention previous works that investigate the covariance structure of neural networks and differentiate them from the current study, i.e., they don't explicitly study cross-layer interactions: "A Rainbow in Deep Network Black Boxes"; "Structured random receptive fields enable informative sensory encodings". Other Strengths And Weaknesses: The paper is easy to read. Other Comments Or Suggestions: n/a Questions For Authors: I question whether the definition of a GOOD function is somewhat arbitrary (for example, why can't I set $r_1 = r_2 = 0$ and ensure $\phi$ is different for each input $x$?) I am unsure if the features are empirically shown to be linearly independent. I am also confused about why only the minimum eigenvalue is investigated; shouldn't the entire spectrum be examined in order to measure feature diversity? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback and valuable suggestions. --- **Q1**. Lack of experiments across different architectures, layers, and seeds. **A1**. We focused on MLPs as the conventional building blocks widely used in theoretical studies. While a full investigation across diverse architectures is beyond the scope of this theoretical paper, our core contributions establish the mathematical principles governing feature learning under different parameterizations. To improve experimental breadth, we've expanded our analysis with comprehensive eigenvalue spectrum plots across all three layers for both pre-activation and post-activation features [https://anonymous.4open.science/r/mup_rebuttal-0E0B/Rebuttal.pdf]. These visualizations show μP's advantageous properties becoming more significant in deeper layers. **Layers**: While our current plots focus on the second layer, we add results demonstrating the minimum eigenvalue trend across different depths to show that the effect persists. [https://anonymous.4open.science/r/mup_rebuttal-0E0B/Rebuttal.pdf] **Seeds**: Robustness across initializations was already considered in our experiments. As detailed in Appendix A (Experimental Details), the presented results are mean values computed over 10 independent trials using different random seeds (seeds 42-51). --- **Q2**. Questions on empirical verification of linear independence and choice of minimum eigenvalue vs. entire spectrum analysis. **A2**. Regarding the crucial empirical verification of linear independence among the feature vectors corresponding to different input data points, this is precisely achieved in our work by analyzing the minimum eigenvalue ($\lambda_{min}$) of the $N \times N$ Gram matrix computed from the centered features. This matrix, capturing the inner products between feature representations, is fundamentally related to kernel methods and the Neural Tangent Kernel (NTK). We compute this Gram matrix $G$. 
If $X$ is the $n \times N$ matrix whose columns are the centered feature vectors $h(x_i) - \bar{h}$, then the Gram matrix is $G = X^{\top} X$. (Its element $G_{ij} = (h(x_i) - \bar{h})^{\top} (h(x_j) - \bar{h})$ represents the similarity between the features of data points $i$ and $j$.) Crucially, the set of $N$ centered feature vectors $\\{h(x_i) - \bar{h}\\}\_{i=1}^N$ is linearly independent if and only if the minimum eigenvalue $\lambda_{min}(G) > 0$. Linear dependence among these $N$ vectors occurs precisely when $\lambda_{min}(G) = 0$. Therefore, observing $\lambda_{min}(G)$ provides a definitive and direct verification of whether the features representing different inputs maintain linear independence. While focusing on the minimum eigenvalue provides the most direct test for linear independence, we have now included a comprehensive eigenvalue spectrum analysis [https://anonymous.4open.science/r/mup_rebuttal-0E0B/Rebuttal.pdf] that examines the entire distribution of eigenvalues. These plots reveal that $\mu$P maintains significantly higher eigenvalues throughout the entire spectrum compared to other parameterizations. Notably, IP exhibits catastrophic eigenvalue collapse at higher percentiles (e.g., eigenvalues dropping to $10^{-7}$ in layer 3), while $\mu$P maintains eigenvalues orders of magnitude larger across all percentiles. This full-spectrum evidence further strengthens our claims about feature diversity under $\mu$P. --- **Q3**. Definition of GOOD function arbitrary? How about $r_1=r_2=0$? **A3**. Regarding Assumption 4.3 / Condition 4: Here, we require that for any real numbers $r_1$ and $r_2$, the function $(r_1 + \phi(x))(r_2 + \phi'(x))$ is not almost everywhere constant. It is worth noting that this condition involves "any real number," not the existence of some real numbers satisfying a property. The GOOD function definition, including this condition, serves as a sufficient condition that guarantees our theoretical results. 
For an activation function that satisfies our definition, we can show that feature learning maintains linear independence. This condition allows for a broad class of activation functions, including most commonly used ones (e.g., tanh, sigmoid). --- **Q4**. Previous works investigate covariance structure of neural networks and differentiate them from the current study i.e. they don't explicitly study cross-layer interactions. **A4**. Thank you for suggesting these valuable references. As recommended, we will add the related works "A Rainbow in Deep Network Black Boxes" and "Structured random receptive fields enable informative sensory encodings" in our revision. We will also highlight, as you noted, that these works don't explicitly study cross-layer interactions, which is one of the key differentiating aspects of our contribution.
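The $\lambda_{min}(G)$ test described in A2 can be sketched in a few lines of numpy. The features below are random stand-ins (not the paper's features), and centering is omitted for simplicity; the point is only that $\lambda_{min}(G) > 0$ certifies linear independence of the feature set, while duplicated features drive it to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 128, 10                       # feature dimension, number of inputs
H = rng.standard_normal((n, N))      # columns: feature vectors h(x_i)

def min_gram_eig(H):
    """Minimum eigenvalue of the N x N Gram matrix of the feature columns."""
    G = H.T @ H
    return np.linalg.eigvalsh(G).min()

# Generic random features are linearly independent: lambda_min(G) > 0.
assert min_gram_eig(H) > 1e-8

# Duplicating a feature creates linear dependence: lambda_min(G) collapses to ~0.
H_dep = H.copy()
H_dep[:, 1] = H_dep[:, 0]
assert min_gram_eig(H_dep) < 1e-8
```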
Active Learning with Selective Time-Step Acquisition for PDEs
Accept (poster)
Summary: This paper introduces an active learning method for learning PDEs. The method is composed of: (1) selective time-step acquisition, where the method selects a subset of time steps for the solver to simulate while other time steps are evolved by the surrogate model, (2) an acquisition function that evaluates the variance reduction, and (3) batch acquisition. The method is evaluated on 5 PDEs including incompressible and compressible NS, KS, Burgers, and KdV equations. The method shows consistent improvement compared with other baselines. Claims And Evidence: Yes. The claim that selective time-step acquisition improves upon full-trajectory acquisition is supported by convincing evidence. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: The paper does not have theoretical claims. Experimental Designs Or Analyses: The experimental design is sound. Experiments are done on incompressible and compressible NS equations, which are important evaluations. Also, experiments are done comparing the method with random Bernoulli sampling, which shows that the proposed method is on par with the best Bernoulli sampling result. I wonder if the patterns shown in Figure 5 are consistent among different experiment runs (with different seeds)? The authors are encouraged to run multiple independent experiments, showing their patterns. Supplementary Material: The paper does not have supplementary material. I briefly reviewed the appendix. Relation To Broader Scientific Literature: The paper addresses an important problem in learning PDEs, where the time cost for running the solver is high. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: Strengths: The paper is written in a clear way and easy to understand. The novelty is reasonable. In terms of significance, I believe the paper addresses an important problem that can be widely used in learning PDEs. 
Weaknesses: The thing I worry about most is the engineering difficulty of applying the method. The method requires interleaving solver evolution with surrogate evolution, which, in my own experience, is typically an engineering challenge. For example, if the solver is well-established software written in a different language, it can be very hard to incorporate it with a neural network. In those cases, it is probably easier if the solver runs the full trajectory. Therefore, in practice, I wonder whether the gain from selective time-step acquisition is worth the effort. Nevertheless, when incorporating the two components is not too hard, the method may be useful. ## update after rebuttal: I have read the authors' rebuttal, which mostly resolved my concerns. I maintain my score. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thoughtful questions and feedback. > I wonder if the patterns shown in Figure 5 are consistent among different experiment runs (with different seeds)? The authors are encouraged to run multiple independent experiments, showing their patterns. https://anonymous.4open.science/r/icml_rebuttal-E9AB/timesteps/ Thank you for raising an important point. The linked folder "timesteps" contains sampled timesteps for four other seeds for each of the PDEs. We see that the patterns described in our paper are consistent among different seeds per task. > The method requires interleaving solver evolution with surrogate evolution, which, in my own experience, is typically engineering challenging. We appreciate the concern about interleaving solver-based and surrogate-based time steps. In our current codebase, the only required addition is calling one step of the external solver from Python. In practice, most PDE solvers can evolve a state one step at a time through a simple function call or by using “checkpoint” files between time steps, which Python can read and write. Although some solvers are specialized or written in lower-level languages, the overhead incurred by periodic file-based communication is generally much smaller than the total cost of running a full solver trajectory, especially for problems where the solver is very expensive. By selectively acquiring just the most important steps, our approach saves substantial solver calls and overall runtime. We believe the net cost reduction outweighs the implementation complexity in most scenarios where computational demands are high. We will also make sure that our codebase is generally easy to adopt for user-defined PDE solvers.
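As a rough illustration of the interleaving described in this rebuttal, the loop below dispatches each time step to either an external solver or a surrogate according to a binary pattern. `solver_step` and `surrogate_step` are hypothetical stand-ins with toy dynamics (not the paper's models or solvers), and `acquire` plays the role of STAP's sampling pattern.

```python
import numpy as np

def solver_step(u):                  # stand-in for one expensive solver step
    return 0.9 * u                   # toy dynamics, illustrative only

def surrogate_step(u):               # stand-in for one cheap surrogate step
    return 0.9 * u + 0.01            # toy approximation with a small error

def rollout(u0, acquire):
    """Evolve u0; call the solver only at time steps flagged in `acquire`.

    Returns the full trajectory and the (input, solver target) pairs that
    would be added to the training set.
    """
    traj, labeled = [u0], []
    u = u0
    for use_solver in acquire:
        prev = u
        u = solver_step(u) if use_solver else surrogate_step(u)
        traj.append(u)
        if use_solver:
            labeled.append((prev, u))
    return traj, labeled

u0 = np.ones(4)
traj, labeled = rollout(u0, acquire=[1, 0, 0, 1, 0])
assert len(traj) == 6 and len(labeled) == 2
```

In a real setting, `solver_step` could wrap a call into external solver software (or a read/write of its checkpoint files), which is the integration point the rebuttal discusses.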
Summary: This paper introduces a novel active learning (AL) framework called Selective Time-Step Acquisition for PDEs (STAP) to improve the efficiency of surrogate models for partial differential equations (PDEs). The key idea is to selectively acquire only the most informative time steps from PDE trajectories using a numerical solver while the surrogate model approximates the remaining steps. This contrasts with existing AL methods that acquire entire PDE trajectories, reducing computational costs per trajectory and allowing for exploring a more diverse set of trajectories within a given budget. The authors develop an acquisition function that estimates the utility of a set of time steps by approximating the resulting variance reduction. They demonstrate the effectiveness of STAP on benchmark PDEs like the Burgers', Korteweg-De Vries, Kuramoto-Sivashinsky, incompressible Navier-Stokes, and compressible Navier-Stokes equations. Results show that STAP significantly improves performance, reducing average error and error quantiles compared to existing methods. The method combines a numerical solver and a surrogate model to acquire data along a trajectory with reduced cost, improving sample efficiency. STAP can be seen as an add-on to existing AL methods that acquire full trajectories. The authors also explore efficient variants of STAP to reduce computational cost, and show that the success of STAP is driven by its ability to prioritize both diverse and informative time steps. The findings suggest STAP offers a more cost-efficient and accurate solution for PDE surrogate modeling with broader applicability in scientific and engineering simulations. Claims And Evidence: Claim: STAP improves surrogate model performance over previous active learning methods for PDEs. Evidence: Experiments on Burgers', KdV, KS, INS, and CNS equations demonstrate that SBAL+STAP outperforms other AL baselines regarding Log RMSE (Figure 3, Table 1). 
"Ratio of ∆" values in Table 1 quantifies the improvement of SBAL+STAP over Random selection. These ratios are greater than 1 for all tested PDEs, indicating STAP provides a performance gain. Figure 4 shows that STAP improves performance on points with extreme errors (99% and 95% quantiles) and even reduces error in the middle quantiles (50%), which is rare for AL algorithms. Appendix C provides a full report of all metrics on all methods Claim: STAP achieves better performance by adaptively choosing both the frequency and specific locations of time steps to acquire. Evidence: Figure 5 shows the distribution of time steps chosen by STAP, demonstrating a tendency to acquire early time steps, with occasional selection of later time steps varying across different PDEs. Analysis of the frequency with which STAP samples time steps reveals that it doesn't always sample time steps at an optimal frequency p to sample with. This suggests that the specific time steps sampled also matter as much as the overall frequency. Claim: STAP can be implemented efficiently without significant performance loss. Evidence: The use of two efficient variants of STAP, namely STAP MF and STAP 10, shown in Table 14. Table 3 summarizes the wall-clock time of each baseline method and STAP; STAP 10 incurs only a fraction of computational cost over the baseline methods. Methods And Evaluation Criteria: The paper uses a well-chosen set of PDE benchmarks from the Al4PDE active learning benchmark, including Burgers', KdV, KS, INS, and CNS equations, which represent a variety of physical phenomena and complexity levels, making for a robust and generalizable evaluation of STAP. It is great that the authors use the existing AL benchmarks and it would be great if they could also publish their method on github and integrate it there with the existing framework. Theoretical Claims: There are no theoretical claims made in the paper. 
Experimental Designs Or Analyses: The experiments are well-designed and in line with prior work by Musekamp et al. but limited to one-step predictions. Supplementary Material: I have skimmed the supplementary material. Relation To Broader Scientific Literature: Active learning for ML-based PDE solvers is a recent development, with only a few recent papers having addressed this important problem. The authors make strong contributions by showing that time steps of the trajectories (rollouts) can be chosen adaptively and selectively. Compared to prior work, this is a significant advancement over the state of the art. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths I really like the PCA visualizations of the effect. It is impressive that the method shows improvements over random sampling, even for problems where no other AL method showed an improvement! The evaluation is strong and on multiple PDEs from AL4PDE. Weaknesses The method could be prone to diverging models, producing out-of-distribution inputs to the simulator. There is no uncertainty-based check on these inputs. Only applicable to 1-step training (predicting t+1 from t); does not support autoregressive training techniques. A push-forward experiment (D.2) is a good idea to check. But the numbers seem wrong; it is hard to believe that the push-forward trick worsens things. The connection to PDE dynamics / autoregressive behavior could have been discussed more. For example, your algorithm likely selects earlier time steps since improvements at the beginning will affect the rest of the rollout. In the autoregressive setting, one might not want to select only the first steps since it will take some time for the model to diverge again and since they might be too close. Other Comments Or Suggestions: None. Questions For Authors: What does the +- / shaded area show exactly? Confidence interval? One standard deviation? 
The standard deviation was probably underestimated since they took the average error of the ensemble members (ln 848). Retraining == training from scratch or fine-tuning? What are the parameters of the IC generator (distribution)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your thoughtful questions and feedback. > Prone to diverging models, producing out-of-distribution inputs Thank you for raising this important point. Please see Common Response 2 at the bottom. > Does not support autoregressive training techniques We have done additional experiments with multi-step models as in [1]. Please see Common Response 1 in our response to reviewer FkHn. > What does the +- / shaded area show exactly? We used one standard deviation. You are correct to point out that we underestimated the standard deviation. We will rerun the experiments to obtain the correct standard deviations with the prediction error of all ensemble members across all seeds. https://anonymous.4open.science/api/repo/icml_rebuttal-E9AB/file/correct_std.png?v=273fbf72 The above figure contains the corrected log RMSE figure for some main experiments. > Retraining == ? We are training from scratch, as in most of the active learning literature [1]. > Parameters of IC generators Let $U$ stand for the uniform distribution. For Burgers and KdV, the initial condition is of the form $\sum_{i=1}^N A_i \sin(2\pi k_i x / L + \phi_i)$. The amplitudes and phases are always sampled from $U([0,1])$ and $U([0,2\pi])$. For Burgers, $N = 2$ and $k_i \sim U(\\{1,2,3,4\\})$, and for KdV, $N=10$ and $k_i \sim U(\\{1,2,3\\})$. For KS and INS, the initial conditions are Gaussian random fields drawn from $N(0, 25(-\Delta + 25I)^{-1})$ and $N(0, 7^{3/2}(-\Delta + 49I)^{-2.5})$, respectively. For CNS, we first sample $\sum_{k \in \\{1,2,3,4\\}^3} A_k \sin( 2\pi k x/L + \phi_k)$, with the amplitudes and phases sampled uniformly at random for each channel ($\rho$, $p$ and $\mathbf{v}$). Then, we renormalize $\rho$ and $p$ to lie within $\rho_0(1 \pm \Delta_\rho)$ and $p_0(1 \pm \Delta_p)$, respectively, where $\rho_0 \sim U([0.1,10])$, $\Delta_\rho \sim U([0.013, 0.26])$, $\Delta_p \sim U([0.04,0.8])$ and $T_0 := \rho_0/p_0 \sim U([0.1,10])$. 
The $\mathbf{v}$ is also computed by superposing sinusoidal waves, but with amplitudes chosen so that the initial condition has the given initial Mach number $M \sim U([0.1,1])$. > Push-forward trick We agree that the errors seem too high. If we can’t fix the problem by the deadline, we will remove the push-forward experiment. **Common Response 2. Out-of-distribution Synthetic Inputs** https://anonymous.4open.science/r/icml_rebuttal-E9AB/gt_pred/ \ https://anonymous.4open.science/api/repo/icml_rebuttal-E9AB/file/kdv_energy.png?v=5d240a44 Reviewers have raised the concern that inaccurate surrogate models might synthesize inputs that lie far from the ground truth distribution, reducing their information gain. In fact, under limited training data, the surrogate model outputs visibly erroneous trajectories. The images in the folder of the first link are comparisons of the ground truth trajectories and predictions from surrogate models trained on 32 and 1 trajectories, respectively. The second link plots the energy of KdV states in both ground truth and predicted trajectories. We see that the surrogate model doesn't satisfy conservation of energy. https://anonymous.4open.science/api/repo/icml_rebuttal-E9AB/file/one_initial_train.png?v=1d82126d \ https://anonymous.4open.science/r/icml_rebuttal-E9AB/stap_pca/ To test how much this error harms STAP, we perform experiments where the initial training dataset contains 1 trajectory, compared to 32 used in our main experiments. The first link above shows the log RMSEs. To our own surprise, we find that SBAL+STAP still outperforms Random and SBAL, except for INS in the early rounds. The second link is a folder that contains comparisons of FNO activation PCAs for PDE states sampled by SBAL and SBAL+STAP during the first round of active learning. Note that the PCA was fitted only to the ground truth states in random trajectories (blue points), so that the out-of-distributionness of sampled states can be properly reflected. 
We find that only a few of the states sampled by SBAL+STAP diverge significantly from the random ground truth states. In other words, the surrogate model synthesizes erroneous inputs when viewed trajectory-wise, but they aren't necessarily out-of-distribution, hence retaining the information gain. Reviewer vQAV also asked, "why not query more diverse initial conditions and run fewer time steps" in the earlier rounds where the surrogate model is inaccurate. The problem is that the distribution of states $u_t$ changes over time $t$. Running only up to the first few timesteps harms the model’s performance on the later timesteps, as evidenced in Appendix D.3. STAP seems to strike a balance between sampling realistic inputs and sampling diverse timesteps $t$. How one could further improve this balance is left for future research. References: [1] Musekamp, Daniel, et al. "Active learning for neural pde solvers." arXiv preprint arXiv:2408.01536 (2024).
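For concreteness, the sinusoidal-superposition initial conditions described in our answer to "Parameters of IC generators" above can be sketched as follows. This is a minimal illustration only: the function name, grid size, and domain length are our placeholder choices, and the Gaussian-random-field samplers for KS/INS and the CNS renormalization step are omitted.

```python
import numpy as np

def sample_sinusoidal_ic(n_modes, wavenumbers, L=1.0, n_x=256, rng=None):
    """Sample u0(x) = sum_{i=1}^{N} A_i sin(2*pi*k_i*x/L + phi_i) with
    A_i ~ U([0,1]), phi_i ~ U([0,2*pi]), and k_i uniform over `wavenumbers`.
    Burgers: n_modes=2, wavenumbers=(1,2,3,4); KdV: n_modes=10, wavenumbers=(1,2,3)."""
    rng = rng or np.random.default_rng()
    x = np.linspace(0.0, L, n_x, endpoint=False)
    A = rng.uniform(0.0, 1.0, n_modes)                # amplitudes ~ U([0,1])
    k = rng.choice(wavenumbers, n_modes)              # integer wavenumbers
    phi = rng.uniform(0.0, 2.0 * np.pi, n_modes)      # phases ~ U([0,2*pi])
    # Broadcast modes over the grid and superpose them.
    return (A[:, None] * np.sin(2.0 * np.pi * k[:, None] * x / L + phi[:, None])).sum(axis=0)

u0_burgers = sample_sinusoidal_ic(2, (1, 2, 3, 4))
u0_kdv = sample_sinusoidal_ic(10, (1, 2, 3))
```

Since each mode has amplitude at most 1, the sampled state is bounded by the number of modes, which gives a quick sanity check on the generator.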
Summary: The paper introduces an active learning framework STAP for surrogate modeling of PDE trajectories that selectively queries only key time steps instead of simulating entire trajectories. STAP uses a binary sampling pattern to decide which time steps to acquire via a numerical solver and which to approximate with a surrogate model. An acquisition function based on variance reduction guides the selection process, and a greedy batch acquisition algorithm is used to optimize the sampling patterns under a fixed-cost budget. Experiments on benchmark PDEs, including Burgers, KdV, Kuramoto-Sivashinsky, and Navier-Stokes equations, demonstrate that the proposed method reduces errors compared to existing active learning methods, particularly at higher quantiles. ## update after rebuttal I thank the authors for the updated results. I appreciate the interesting idea presented in the paper. However, although aiming to reduce the cost of generating a PDE dataset, the method presented in this paper, in the end, makes the overall process of obtaining a surrogate model more expensive in most cases. I believe the paper can contribute validly to the field if the authors address this contradiction. I've decided to maintain my score since two specific issues regarding the evidence presented in this manuscript remain unaddressed. Total computational cost is larger: In the AI4PDE context, both data generation (numerical solvers) and model training/AL overhead consume computational resources. Therefore, demonstrating a reduction in total computational cost is crucial for establishing practical value. While the authors provided cost analysis upon request, the results indicated their method actually increased the total computational cost for most benchmarks presented. If the proposed AL method requires more total computation time or resources to reach a given accuracy on these problems, what is its compelling advantage over standard approaches? 
The paper needs to address this directly and convincingly, rather than solely relying on hypothetical scenarios. Unvalidated scalability assumption for larger-scale PDEs: The work's significance heavily relies on extrapolating its findings to more complex, high-dimensional PDEs where solver costs are presumed dominant. However, it does not consider whether the deep learning surrogate, along with the active learning strategy that depends on it, can effectively scale to these types of problems. This is a non-trivial assumption, especially in light of known limitations of deep learning, such as producing blurry predictions for very high-resolution data. Without supporting evidence, expectations regarding performance in complex regimes remain speculative. Claims And Evidence: The paper claims that selective time-step acquisition using active learning improves the efficiency and accuracy of PDE surrogate modeling by reducing the expensive computation of the numerical solver. While the experimental results provide clear evidence that the proposed method outperforms other active learning methods, no analysis of time and computing-resource costs relative to direct non-active learning is provided. The active learning in this work consists of training an ensemble of FNOs over multiple rounds. It is uncertain whether this approach saves time or computing resources compared to training a single FNO for one run on the full trajectory. The paper also claims the cost of data acquisition from numerical solvers is extremely high. This is not well supported. According to a recent article [1], it's common for research in the machine learning community to compare to a numerical method that is much less efficient than a SOTA method for that problem. Some of the PDE datasets are merely solved with fundamental, manually implemented Python code instead of a SOTA algorithm or well-optimized software.
The authors should justify that the numerical solvers used in this paper are not necessarily the best but are at least reasonably close to the SOTA solution. Also, numerical algorithms can adjust the solving iterations to produce less accurate solutions more quickly. As saving the cost of generating datasets for training surrogate models is the main goal of this work, it's advisable to test generating less accurate training samples from numerical solvers and compare the final accuracy and the time and computing-resource costs. [1] McGreivy, Nick, and Ammar Hakim. "Weak baselines and reporting biases lead to overoptimism in machine learning for fluid-related partial differential equations." Nature Machine Intelligence 6.10 (2024): 1256-1269. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem of surrogate modeling for PDEs. Theoretical Claims: No proof is provided in the paper. Experimental Designs Or Analyses: The experimental design was generally sound. However, one concern is that the FNO is trained using a one-input, one-output autoregressive scheme. Many previous works use a multi-step input-output approach (e.g., 10-in-10-out). While the one-step approach might achieve higher accuracy, it sacrifices prediction speed by a factor of 10. A key efficiency advantage of machine learning models over numerical solvers is that they can make multi-step or skip-step predictions. The paper does not evaluate whether the proposed framework remains effective under a common multi-step configuration. Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The paper is related to AI for PDE and AI for CFD research. No apparent relevance to broader scientific literature. Essential References Not Discussed: No essential references I can think of. Other Strengths And Weaknesses: The concept of using selective time steps in data is interesting.
The visualization of sample diversity is clear and effectively demonstrates the motivation. Other Comments Or Suggestions: I suggest that the authors include detailed information on computing resources and time costs for (1) the numerical solver, (2) the entire active learning procedure, and (3) the non-active learning approach. This will help readers determine whether to use active learning in their specific situations. For example, some may have a few GPUs but enough CPU cores and numerical algorithms that can effectively utilize these cores in parallel. Questions For Authors: The data selection method in the framework appears to be model-specific. If FNO were replaced with a different model, given the different inductive biases, would the selected time steps change? Is it possible to develop a data selection strategy that is not model-specific so that one can compare different machine learning models without requiring each to perform its own active learning process? In my experience, for complex physical problems, predictions from models like FNO often do not fully satisfy physical constraints. Even if these predictions are fed back into the solver, it's hard to reduce the physical residuals for just a single-step solving. Will this lead to a "garbage in, garbage out" scenario where, most of the time steps selected by active learning don't satisfy physics? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your thoughtful questions and feedback. > Concerns about the lack of a direct comparison of computing resources with non-active learning Our paper does not claim direct computational speedups on our benchmark PDEs; instead, it relies on the benchmark PDEs as proxies reflecting realistic, expensive simulations, a common approach in active learning studies such as AL4PDE [1]. Our surrogate metric, the number of numerical solver-simulated timesteps, effectively represents relative computational savings in realistic, expensive simulations. A computing resources analysis would be potentially misleading due to the inherently simplified nature of our experimental datasets. https://anonymous.4open.science/api/repo/icml_rebuttal-E9AB/file/taylor-green.png?v=50ac6a75 Many real-world PDEs inherently demand significant computational resources for numerical simulations [2][3][4]. For instance, the Taylor-Green vortex benchmark reported in [2]—standardized through the "taubench" work unit measure—requires between $10^4$ and $10^6$ work units per trajectory, translating to roughly 41 CPU-hours to over 4,100 CPU-hours on Dual 20-Core Intel Xeon processors (Gold 6148) [5]. This benchmark was conducted as a competition, in which multiple teams submitted numerical solvers. Each solver could run at various fidelity levels, with lower fidelity solutions requiring fewer computational resources at the cost of increased error tolerance. As shown in the image attached above (Figure 25(b) of [2]), even the fastest solution with the highest allowed error tolerance required approximately $10^4$ work units. This substantial computational burden highlights the importance and practicality of active learning for PDEs. > Experiments with multi-step neural solvers Thank you for raising this important point. Please see Common Response 1 at the bottom. 
> model-specific-ness of the method Our method STAP is model-agnostic, as it is applicable to any model that predicts the next PDE state given the current state (e.g. UNet). As you point out, STAP will likely select different timesteps when used with a different model. This is the desired behavior of any active learning algorithm, since its goal is to select data that are most useful to the current model at hand. Could you elaborate on why one would want an active learning strategy that selects the same data regardless of the model? > OOD data caused by surrogate models that potentially deviate from correct physical behavior Thank you for raising an important point. Please see Common Response 2 in our response to reviewer f7JP. **Common Response 1. Multi-step model** https://anonymous.4open.science/api/repo/icml_rebuttal-E9AB/file/multistep.png?v=ab3f6da0 We have done experiments with multi-step FNO which receives N timesteps as input and outputs N timesteps. To perform STAP, we group the total number of timesteps into non-overlapping clusters of N timesteps, and perform STAP as if each cluster is one timestep with N channels. We have performed two variants of experiments. In the first, we divide a timestep in our main experiment into 8 smaller timesteps, so that a total of $L$ timesteps turn into $8 L$ timesteps, and train 8-in-8-out models. In the second variant, we keep $L$ timesteps the same but train 2-in-2-out models. The figure above shows the log RMSE for Burgers, KdV, and KS of the first variant, and KS, INS and CNS of the second variant. 
|Equation|Random|SBAL|SBAL+STAP| |-|-|-|-| |Burgers 8L/8|$-1.670\pm 0.0982$|$-1.893\pm 0.053$|$-2.058\pm 0.028$| |KdV 8L/8|$1.402\pm 0.029$|$1.404\pm 0.024$|$1.364\pm 0.043$| |KS 8L/8|$1.255\pm 0.015$|$1.232\pm 0.012$|$1.156\pm 0.008$| |KS L/2|$1.340\pm 0.014$|$1.335\pm 0.011$|$1.288\pm 0.011$| |INS L/2|$1.124\pm 0.017$|$1.118\pm 0.007$|$1.081\pm 0.012$| |CNS L/2|$3.593\pm 0.023$|$3.594\pm 0.044$|$3.42\pm 0.050$| The table above summarizes our results with mean log RMSE. References: [1] Musekamp, Daniel, et al. "Active learning for neural pde solvers." arXiv preprint arXiv:2408.01536 (2024). \ [2] Wang, Q., Fidkowski, K., Abgrall, R., Bassi, F., Caraeni, D., Cary, A., ... & Olivier, H. (2012). “High-order CFD methods: current status and perspective.” International Journal for Numerical Methods in Fluids, 93(4), 212–232. \ [3] Kaneda, Y., Ishihara, T., Yokokawa, M., Itakura, K., & Uno, A. (2003). “Energy dissipation rate and energy spectrum in high resolution direct numerical simulations of turbulence in a periodic box.” Physics of Fluids, 15(2), L21–L24. \ [4] Heil, M. & Hazel, A. L. (2011). “Fluid–Structure Interaction in Internal Physiological Flows.” Annual Review of Fluid Mechanics, 43, 141–162.\ [5] Capuano, F., Beratis, N., Zhang,F., Peet, Y., Squires, K., & Balaras., E. (2023). Cost vs Accuracy: DNS of turbulent flow over a sphere using structured immersed-boundary, unstructured finite-volume, and spectral-element methods. Eur. J. Mech. B Fluids, 102:91–102. --- Rebuttal Comment 1.1: Comment: I just realized that the authors cannot view my official comments. I am repeating my comment here in this rebuttal comment. I thank the authors for their response. However, my main concern regarding the total time and computational cost remains unaddressed. Developing a deep learning surrogate for solving PDEs involves two major phases: dataset generation and model training. 
While active learning can reduce the cost of data generation, it can also increase the cost of model training. After reading the paper, I still cannot estimate whether active learning leads to net savings in total time or computational resources, as no convincing quantitative evidence is provided. The paper claims that the cost of generating datasets is so extensive that it dominates the total cost. I find this claim questionable because the data generation approach used in the paper may be based on computationally inefficient baselines. Please refer to my original comments. If a well-optimized numerical solver is used with appropriate PDE residual tolerance settings, the data generation cost could be significantly reduced. In such a scenario, active learning may not provide a net benefit. I do not expect the proposed active learning approach to always yield a net reduction in total cost. However, the authors should provide compelling quantitative evidence to clarify the conditions, such as the PDE problem's complexity, the numerical solver's efficiency, and the availability of computing resources, under which active learning can meaningfully reduce total cost. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's reply to our response. To clarify once again, we are **not claiming a reduction of total cost in any of our benchmark PDEs**, but in hypothetical settings where the data acquisition cost far outweighs the training cost. We had thus only reported the data acquisition cost in our paper. This practice is **consistent with the broader AL literature**, including AL4PDE [1], which our work builds upon. Nonetheless, we agree that providing the total cost on our benchmarks would serve as a helpful guide to practitioners. We compare the wall clock time of non-AL and AL methods, and for a meaningful comparison, we cut the AL experiment when it reaches below the RMSE of the final non-AL surrogate model. 
We also note that, to the best of our knowledge, our numerical solvers listed in Appendix B.1 are nearly SOTA in terms of computational efficiency, except that the computational cost can be further reduced by allowing higher error tolerances for some solvers (Burgers, KdV, KS and INS). **Table 1. Total wall clock time until target RMSE, in seconds** |Equation|Random|SBAL+STAP| |-|-|-| |Burgers|237|441| |KdV|834|1469| |KS|385|2920| |INS|526|3702| |CNS|4053|3476| **Table 2. Total wall clock time decomposed into acquisition/training/selection** |Equation|Random|SBAL+STAP| |-|-|-| |Burgers|90/147/0|27/234/180| |KdV|670/164/0|455/664/350| |KS|40/345/0|35/2075/810| |INS|190/336/0|160/1750/1792| |CNS|3570/483/0|1448/972/1056| Table 1 compares the wall clock times between non-AL and AL, and Table 2 decomposes them into data acquisition, model training, and data selection. We find that AL reduces the total cost in CNS, where acquisition is relatively expensive. On other benchmarks, the training and data selection costs dominate, as expected. We want to stress yet again that the benchmarks were intentionally chosen to be inexpensive, to enable fast experimentation. For instance, we lowered the CNS resolution from 256x256 in AL4PDE to 32x32, reducing the acquisition time by **around 30x**. Since the CNS solver doesn’t explicitly set the tolerance, we empirically measured the error scale by comparing solutions from the two resolutions, which yielded an error scale of around 1e+0. This means that our numerical solver at low resolution is **already sacrificing accuracy for fast computation**. Even in settings where training cost is comparable to acquisition cost, **practical strategies** can be employed, such as using less training compute during intermediate AL rounds [2][3]. In Section 5.6 of our paper, we have also discussed methods that can significantly reduce the cost of data selection of STAP while maintaining its performance.
Finally, we would like to emphasize that the reviewer’s concern applies broadly to the active learning of PDEs as a whole. We ask that the reviewer also judges our work based on its novelty in the scope of active learning for PDEs. References:\ [1] Musekamp, Daniel, et al. "Active learning for neural pde solvers." arXiv preprint arXiv:2408.01536 (2024).\ [2] Coleman, Cody, et al. "Selection via proxy: Efficient data selection for deep learning." arXiv preprint arXiv:1906.11829 (2019).\ [3] Jung, Seohyeon, Sanghyun Kim, and Juho Lee. "A simple yet powerful deep active learning with snapshots ensembles." The Eleventh International Conference on Learning Representations. 2022. ## Addendum We provide the condition under which AL reduces total cost. Suppose AL improves data efficiency by $E$ over non-AL. Define $ T_{\text{acquire}}, T_{\text{train}} $ as the acquisition time and training time per unit data, and $T_{\text{select}}$ as the data selection time per round. The total cost of non-AL is $$ N_{\text{acquire}}^{(1)}T_{\text{acquire}} + N_{\text{train}}^{(1)}T_{\text{train}} $$ and for AL, $$N_{\text{acquire}}^{(2)}T_{\text{acquire}} + N_{\text{train}}^{(2)}T_{\text{train}} + M T_{\text{select}} $$ where $ N_{\text{acquire}}^{(i)} $ are the number of acquired data, and $ N_{\text{train}}^{(i)} $ are the total number of training examples (counting duplicates), and $M$ the number of rounds. With initial datasize $D$ and acquired datasize $B$ per round, $$N_{\text{acquire}}^{(1)} =BM$$ $$ N_{\text{train}}^{(1)} = D+BM$$ $$N_{\text{acquire}}^{(2)} = BM/E$$ $$ N_{\text{train}}^{(2)}=\sum_{\text{round}=0}^{M/E} (D + B\cdot \text{round}) $$ For AL to reduce the total cost, the setting would need to satisfy $$ N_{\text{acquire}}^{(1)}T_{\text{acquire}} + N_{\text{train}}^{(1)}T_{\text{train}} > N_{\text{acquire}}^{(2)}T_{\text{acquire}} + N_{\text{train}}^{(2)}T_{\text{train}}+ M T_{\text{select}}$$ **Table 3. 
Variables for cost analysis** |Equation|$E$|$T_\text{acquire}$|$T_\text{train}$|$T_{\text{select}}$|Satisfied| |-|-|-|-|-|-| |Burgers|3.33|0.087|0.101|60|F| |KdV|1.43|0.654|0.106|50 |F| |KS|1.11|0.005|0.116|90|F| |INS|1.25|0.077|0.112|224|F| |CNS|2.5|1.760|0.157|264|T| Table 3 lists these values, and whether they satisfy the condition above.
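The condition above can be evaluated mechanically. The sketch below implements the addendum's formulas verbatim (including the $M T_{\text{select}}$ selection term as written); the initial dataset size `D`, per-round batch `B`, and number of rounds `M` are illustrative placeholders rather than our experimental values, so the script demonstrates the bookkeeping rather than reproducing Table 3 exactly.

```python
def al_total_cost(E, T_acquire, T_train, T_select, D, B, M):
    """Total AL cost per the addendum: acquire B*M/E examples, retrain
    from scratch each round on the growing dataset, plus M*T_select
    for data selection (as written in the addendum)."""
    rounds = int(M / E)
    n_acquire = B * M / E
    # N_train^(2): total training examples, counting duplicates across rounds.
    n_train = sum(D + B * r for r in range(rounds + 1))
    return n_acquire * T_acquire + n_train * T_train + M * T_select

def non_al_total_cost(T_acquire, T_train, D, B, M):
    """Non-AL baseline: acquire B*M examples, train once on D + B*M."""
    return B * M * T_acquire + (D + B * M) * T_train

def al_reduces_total_cost(E, T_acquire, T_train, T_select, D=32, B=32, M=10):
    """True iff the addendum's inequality holds for these parameters."""
    return (al_total_cost(E, T_acquire, T_train, T_select, D, B, M)
            < non_al_total_cost(T_acquire, T_train, D, B, M))

# CNS-like per-unit times from Table 3, with data selection assumed free:
print(al_reduces_total_cost(E=2.5, T_acquire=1.760, T_train=0.157, T_select=0.0))
```

With free data selection and a data-efficiency gain $E > 1$, AL comes out cheaper under these placeholder settings; with the KS-like parameters of Table 3, where acquisition is very cheap, it does not.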
Summary: This paper develops an acquisition function that estimates the utility of a set of time steps and utilizes it for batch active learning in training surrogate models for PDEs. The empirical results show that the proposed method outperforms the baselines. Claims And Evidence: The proposed algorithm relies on the heuristic of replacing simulations with predictions from the surrogate model for the skipped time steps. However, especially in the early stages, when the surrogate model may perform poorly, the sequence after the skipped time steps might not correlate with the initial condition u_0 at all. In such cases, why not query more diverse initial conditions and run fewer time steps for each scenario using the ground-truth simulator? Methods And Evaluation Criteria: The evaluation metrics are well-established. For benchmarks, it is helpful to test whether the proposed methods can also be applied to PDEs where the evolution operator is time-dependent. Theoretical Claims: There is no theoretical guarantee provided for the proposed acquisition function. It will be helpful if the authors could include a theoretical analysis of their designed acquisition function. For instance, can they prove that the proposed acquisition function provides an optimal or near-optimal solution? Experimental Designs Or Analyses: I am not convinced by the authors' claim that their approach improves performance by up to five times compared to the baselines. For instance, in Figure 3, when comparing SBAL and SBAL+STAP, their performance appears similar. In most cases, SBAL is only one iteration behind SBAL+STAP. Additionally, it would be helpful if the authors can include more iterations to show the full performance until convergence. In Table 2, the results indicate that random time step selection outperforms STAP on 2 out of 5 tasks. Supplementary Material: I reviewed Sections A, B, and C of the supplementary material. 
Relation To Broader Scientific Literature: There is a large amount of work on batch active learning in various scenarios. This paper focuses on actively selecting the time stamps within each scenario to further improve the sample efficiency. Essential References Not Discussed: The related work has been thoroughly discussed. Other Strengths And Weaknesses: Please refer to the previous sections. Other Comments Or Suggestions: Please refer to the previous sections. Questions For Authors: Please refer to the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your thoughtful questions and feedback. > Theoretical analysis of the acquisition function Our acquisition function is an approximation to the expected error reduction (EER), which is statistically near-optimal for active learning [1][2]. The EER measures how much the model’s generalization error is likely reduced after updating on hypothetically acquired data. We model our hypothetical belief about the ground truth solver as a *uniform categorical distribution* over the ensemble $ \\{\hat{G}\_a\\}\_{a=1}^M $. We assume that acquiring the trajectory of $u^0$ with sampling pattern $S$ only reduces generalization error on the trajectory of $u^0$. The current generalization error is expected to be the average of $ \| \hat u\_a - \hat u\_b \|^2 $ over $b$. We make a second assumption that the hypothetically acquired data $ \hat{u}\_{b,S,a} $ will update the model such that the model predicts the trajectory $ \hat{u}\_{b,S,a} $ given $u^0$. This gives us the expected reduction in error $ \| \hat u\_a - \hat u\_{b} \|^2 - \| \hat u\_a - \hat u\_{b,S,a} \|^2 $ averaged across $a$ and $b$, which is equal to our acquisition function. Although proving an optimality bound is outside of our expertise, we emphasize that our design of the acquisition function was guided by this exact theoretical consideration. > OOD data due to poor quality of surrogate models in early stages Thank you for raising an important point. Please see Common Response 2 in our response to reviewer f7JP. > Performance metrics https://anonymous.4open.science/api/repo/icml_rebuttal-E9AB/file/diff.png?v=0041f607 The linked image shows the improvement in log RMSE over Random (no active learning). In the KS equation, SBAL+STAP improves over Random five times as much compared to SBAL. 
|Equation|Random|SBAL|SBAL+STAP| |-|-|-|-| |Burgers|0.1522|0.2277|0.2451| |KdV|0.0831|0.1052|0.1132| |KS|0.1001|0.1031|0.1094| |INS|0.0806|0.0869|0.0912| |CNS|0.0544|0.0795|0.0884| However, we agree that this quantity might be misleading to readers, and thus provide a table of the measure of data efficiency, defined as the average reduction in log RMSE per round. Overall, SBAL+STAP improves data efficiency by about 10 percent compared to SBAL, which is the SOTA algorithm. For a simulation that takes ten days to run, this would amount to saving a whole day. We encourage you to look at the results in Figure 4 of Musekamp et al. [3], Figure 3(b) of Li et al. [4] (the yellow and blue lines correspond to no active learning and active learning), and Figure 5 of Bajracharya et al. [5]. Our method provides arguably the largest and the most robust performance gain reported in the PDE active learning literature. > random time step selection outperforms STAP As shown in Table 2, no single choice of $p$ for random time step selection outperforms full trajectory sampling on all PDE datasets, sometimes even degrading the performance. Unless there is a way to adaptively select $p$, random time step selection is not a viable active learning method, and was used in our paper solely for the purpose of analysis. > Experiments with more iterations https://anonymous.4open.science/api/repo/icml_rebuttal-E9AB/file/20_rounds.png?v=7ac767fc We have performed the main experiment for 20 rounds instead of 10, on Random, SBAL, and SBAL+STAP. We observe that the gap between SBAL and SBAL+STAP keeps widening, except in the KdV equation. 
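To make the acquisition function from our theoretical-analysis response above concrete, here is a minimal numpy sketch of the ensemble-averaged expected error reduction $\| \hat u_a - \hat u_b \|^2 - \| \hat u_a - \hat u_{b,S,a} \|^2$. The array layout and function name are our illustrative choices, and constructing the hybrid trajectories $\hat u_{b,S,a}$ for a given sampling pattern $S$ is assumed to happen elsewhere.

```python
import numpy as np

def eer_score(ensemble_traj, hybrid_traj):
    """Approximate expected error reduction for one sampling pattern S.

    ensemble_traj: (M, T, d) array -- trajectory predicted by each of the
        M ensemble members from the same initial condition u^0.
    hybrid_traj: (M, M, T, d) array -- hybrid_traj[b, a] is u_{b,S,a}, the
        trajectory hypothetically acquired under pattern S when member a
        plays the role of the ground-truth solver on the queried steps.
    """
    M = ensemble_traj.shape[0]
    score = 0.0
    for a in range(M):
        for b in range(M):
            before = np.sum((ensemble_traj[a] - ensemble_traj[b]) ** 2)
            after = np.sum((ensemble_traj[a] - hybrid_traj[b, a]) ** 2)
            score += before - after
    return score / (M * M)
```

Two limiting cases sanity-check the definition: if acquisition would leave every prediction unchanged ($u_{b,S,a} = \hat u_b$), the score is zero; if it would make every member match the assumed solver ($u_{b,S,a} = \hat u_a$), the score equals the current ensemble disagreement.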
> Experiments with time-dependent PDEs https://anonymous.4open.science/api/repo/icml_rebuttal-E9AB/file/time_dependent_ins.png?v=f9c179ae |Equation|Random|SBAL|SBAL+STAP| |-|-|-|-| |Time-dependent INS|$-0.080\pm 0.011$|$-0.081\pm 0.013$|$-0.339\pm 0.015$| We have performed an experiment on a time-dependent incompressible Navier-Stokes equation, simply by using a time-dependent external force in our current INS equation. The new forcing term is a sinusoidal mixture of the two spatial coordinates and the temporal coordinate. The above figure and table summarize the log RMSE of Random, SBAL, and SBAL+STAP. **Our method aligns closely with time-dependent PDEs, explaining the massive gain in performance.** References: [1] Settles, Burr. "Active learning literature survey." (2009). \ [2] Roy, Nicholas, and Andrew McCallum. "Toward optimal active learning through sampling estimation of error reduction." ICML. Vol. 1. No. 3. 2001. \ [3] Musekamp, Daniel, et al. "Active learning for neural pde solvers." arXiv preprint arXiv:2408.01536 (2024). \ [4] Li, Shibo, et al. "Multi-resolution active learning of Fourier neural operators." International Conference on Artificial Intelligence and Statistics, pp. 2440–2448. PMLR, 2024. \ [5] Bajracharya, Pradeep, et al. "Feasibility study on active learning of smart surrogates for scientific simulations." arXiv preprint arXiv:2407.07674 (2024).
In-Context Reinforcement Learning From Suboptimal Historical Data
Accept (poster)
Summary: The paper introduces a method for multi-task RL meta-learning with suboptimal behavior policies. The goal is to train a common transformer model to imitate and improve upon the observed behaviors to maximize online rewards in new tasks. Towards this end, the paper introduces two methods: 1. a weighted policy imitation algorithm where the weights are based on exponentiated advantage function values and 2. a model architecture to transfer key dataset characteristics to infer the type of the new tasks. Experimental results on bandits and standard RL problems are included. Claims And Evidence: Yes. Claim 1 is supported by an equilibrium proof and Claim 2 is supported by a detailed model design. The authors provided empirical evidence to supplement the discussions. Methods And Evaluation Criteria: Yes. The problem is typical of RL meta-learning and the datasets are standard for bandit / RL problems. Theoretical Claims: Yes. The main theoretical claim is in Proposition 4.1. I briefly thought about it and the conclusions seem reasonable based on standard analysis of exponential family models. Experimental Designs Or Analyses: Partially. The authors provided detailed evaluation setups, but they omitted a few baselines in the bandit experiment. The authors should also consider a policy reinforce algorithm in the RL experiments. Supplementary Material: I only checked Figure 6. Relation To Broader Scientific Literature: The authors were exhaustive in the related work section. The authors surveyed decision transformers (PDT), algorithmic distillation, and general behavior cloning. The authors appear less focused on meta-learning for RL, though the literature there may be a bit dated. Essential References Not Discussed: Not to the best of my knowledge. Other Strengths And Weaknesses: Strengths: 1. I like Proposition 4.1, which forms the foundation of the proposed policy improvements. The conclusions seem intuitive. 2.
The follow-up algorithmic description is detailed, increasing my confidence in the empirical results. 3. The experiments validate the authors' claims. Weaknesses: 1. In the discussions of Proposition 4.1, the authors discarded the partition function. This could introduce additional variance, leading to empirical instability. The authors should consider other normalization techniques such as batch-normalization. The authors should also include closely-related policy reinforce baselines where the exponentiated advantage function is replaced with the plain reward function. 2. A few results are missing from the bandit experiment, indicating a rush to complete the paper? 3. In the training details, for the L_reg objective, the authors expected the average of the q function to be the value function. This is only true for on-policy roll-outs. For off-policy roll-outs, the authors should also include policy-likelihood ratios in the L_reg objective. Related, the time-indices for the Bellman equations are backwards. Other Comments Or Suggestions: I like the paper a lot, but I cannot vouch for acceptance when there are still doubts about the experiments. To gain confidence, I would appreciate: 1. Completing Figure 2 2. Running Figures 3 & 4 until the proposed method reaches optimal performance 3. Introducing policy reinforce baselines to show that simpler alternatives would not be sufficient to model reward values Questions For Authors: Regarding the three weaknesses: 1. Can the authors include and discuss policy reinforce methods in the experiments? 2. Can the authors include batch-normalization in the proposed methods - assuming that the authors left out normalization due to computational complexity? 3. Please complete Figure 2. Also for Figures 3 & 4, the methods are still far from converging to the optimum. How would you close this gap? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. We hope our responses below address your concerns. ### Experiments > **Bandit (Figure 2).** Our bandit experiments are designed to be comprehensive and aligned with established evaluation standards. As bandit problems are well-understood, **we follow the evaluation protocol used in DPT [1], comparing DIT against strong baselines such as the known optimal algorithms Thompson Sampling and UCB/LCB, as well as DPT itself** — which serves as an oracle for DIT due to its access to additional optimal bandit information during pretraining. These comparisons provide a rigorous evaluation of DIT in this setting. > **MDPs (Figures 3(a) & 4(a)).** The goal of these experiments is to evaluate adaptation speed — how quickly each method improves reward on new tasks. **While DIT has not fully converged to its optimal performance in Figure 3, the performance of all baselines has plateaued, and DIT already surpasses them**. We believe this clearly demonstrates the superior adaptability of DIT. > To provide additional empirical insights, we follow your advice and continue evaluation until the performance of DIT fully converges. As DIT’s performance in Figure 4(a) (Miniworld) has already converged, we focus on Darkroom and run 10 additional episodes to observe the final converged reward. It turns out that **DIT continues to improve slightly and then stabilizes (at around 47), compared to 44.3 at episode 40**, expanding its performance gain over baseline methods.
| Episode | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 |
|----------|-----|------|------|------|------|------|------|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Reward |3.87 |11.5 |16.9 |19.3 |23.8 |33.0 |36.9 |44.3 |45.5 |45.8 |46.2 |46.4 |46.8 |47.1 |47.7 |47.4 |47.2 |47.4 |

### Method
> **Normalizing Constant and Policy REINFORCE Baselines.** We prove in Appendix D.3 that, when using the exponentiated advantage function for reweighting, the normalizing constant (partition function) can be safely ignored and set to 1.
> To further illustrate the effectiveness of DIT, we follow the reviewer’s suggestion and conduct an extra ablation study comparing DIT to several policy REINFORCE baselines with different reweighting schemes. We evaluated three weighting alternatives:
> 1. **Cumulative reward**
> 2. **Exponentiated cumulative reward**
> 3. **Batch-normalized DIT**, where advantage values are first estimated and then normalized across all trajectories in the pretraining dataset.
> We consider offline testing from expert trajectories on Meta-World and summarize the key results in the following table.

| Method | Cumulative Rewards | Exponentiated Cumulative Rewards | DIT (Batch-Norm Advantage) | DIT |
|---------------------------------------------|--------------------|----------------------------------|-----------------------------|-----|
| Return (higher means better) | 5.0 | 4.7 | 4.1 | 8.2 |

> We observe that **DIT significantly outperforms all baselines**. In particular, batch-normalized DIT performs worse than standard DIT. This **aligns with our theoretical analysis**, which highlights the importance of using trajectory-specific weights. Normalizing across the entire dataset violates this principle, which likely degrades performance.
> We will include these results and a discussion of the policy REINFORCE methods in our experiments section to further improve the quality of our work.
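As a concrete illustration of the three weighting schemes compared in the ablation table, here is a minimal sketch of the reweighting arithmetic (hypothetical names and toy values; this is our reconstruction for illustration, not the authors' implementation):

```python
import math

def cumulative_reward_weights(rewards):
    # REINFORCE-style: every transition in a trajectory shares one
    # weight, the trajectory's cumulative reward.
    g = sum(rewards)
    return [g] * len(rewards)

def exp_cumulative_reward_weights(rewards, eta=1.0):
    # Same shared per-trajectory weight, but exponentiated
    # with temperature eta.
    g = sum(rewards)
    return [math.exp(g / eta)] * len(rewards)

def exp_advantage_weights(advantages, eta=1.0):
    # DIT-style: one weight per transition, computed from
    # trajectory-specific advantage estimates A(s_t, a_t); the
    # partition function is dropped (set to 1), as in the rebuttal.
    return [math.exp(a / eta) for a in advantages]

# Toy trajectory with three transitions.
rewards = [1.0, 0.0, 2.0]
advantages = [0.5, -0.3, 1.2]

print(cumulative_reward_weights(rewards))  # [3.0, 3.0, 3.0]
print(exp_advantage_weights(advantages, eta=1.0))
```

The batch-normalized variant would additionally standardize the advantage estimates across the whole pretraining dataset before exponentiation, which, per the table above, degrades performance relative to trajectory-specific weights.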
We thank the reviewer for this constructive suggestion. ### Training Objective (on-policy value function) > Thank you for this insightful observation. We agree that the value function equals the expected Q-function only under on-policy rollouts. > Indeed, in our case, DIT performs pretraining using trajectories generated by behavioral policies, and the learned value functions are used to weight the actions taken by those same behavioral policies. **Thus, all training rollouts are on-policy with respect to the value functions being used, and there are no off-policy discrepancies in the objective**. > We believe this is one **key strength** of DIT and will clarify this point in the revised version. Additionally, we appreciate the note about the time indices in the Bellman equations and will correct them accordingly. [1] Lee, Jonathan, et al. "Supervised pretraining can learn in-context reinforcement learning." Advances in Neural Information Processing Systems 36 (2023): 43057-43083. --- Rebuttal Comment 1.1: Comment: I would not change my score, primarily because Figures 3a and 3b reveal a notable gap between the converged DIT policy and the Optimal policy. Although the proposed algorithm demonstrates relatively strong performance, it still suffers unrecoverable losses from learning with suboptimal demonstrations. I attribute this shortfall to the DIT algorithm’s separation of advantage function estimation and policy learning into two discrete steps. In contrast, a fully off-policy RL method that iterates these processes in tandem would likely provide a more complete solution. --- Reply to Comment 1.1.1: Comment: We appreciate your continued engagement and the thoughtful feedback regarding Figures 3a and 3b. We would like to respectfully clarify several points that we believe directly address this remaining concern. 
---
### **The Challenge of Learning from Suboptimal Data and Generalizing In-Context to New Tasks**
The primary goal of our work, and ICRL more broadly, is to enable **fast adaptation to new RL environments** using only a **small number of demonstration trajectories**. This is **fundamentally different** from traditional offline RL, which assumes **abundant** suboptimal data collected from the **same environment** as the evaluation setting. We would like to highlight that **even in this simpler setting**, recovering an optimal policy from suboptimal data (e.g., in our experiments, only ~30% of optimal performance) is already extremely challenging, as recovering optimal policies requires sufficient state-action coverage within the offline data. Therefore, the performance gap observed in Figures 3a and 3b should be viewed in the proper context of this work, characterized by: (a) **Learning from suboptimal data**, and (b) **Generalizing to unseen tasks with only a few adaptation trajectories.** Notably, even ICRL methods pretrained **with optimal actions** (DPT) exhibit a similar gap. Thus, we believe the residual gap is not a limitation of DIT’s architecture, but a natural outcome of the **inherent difficulty** of the problem setting.
---
### **Strong Empirical Results Despite the Challenges**
As the reviewer notes, **DIT shows strong performance**, and in some cases even outperforms methods pretrained with optimal action labels, for example in Figure 3a. We respectfully emphasize that this is a **highly non-trivial result**, especially considering that DIT is trained **exclusively** on suboptimal demonstrations. While perfect convergence to the optimal policy is desirable, **consistently outperforming prior methods** in this significantly harder setting is an important contribution. Furthermore, using only suboptimal data makes DIT more **practical and broadly applicable** to real-world scenarios where optimal demonstrations are unavailable.
---
### **Regarding the Tandem Alternative**
We appreciate the reviewer’s suggestion regarding the benefits of fully off-policy algorithms that couple advantage estimation and policy learning. However, such methods often face well-known challenges, including instability and sensitivity to hyperparameters, particularly when trained on fixed and narrow suboptimal datasets. Moreover, applying such iterative updates in a transformer-based architecture adds substantial complexity and introduces its own set of optimization challenges. DIT, by contrast, provides a **simpler and more robust solution**, tailored for demonstration-based generalization without requiring access to optimal labels. We agree that combining DIT with more iterative learning mechanisms is a promising future direction, and we are enthusiastic about exploring this line of work. Still, we believe DIT’s current formulation already represents a **substantial and novel contribution**.
---
### **Final Remarks**
In summary, DIT introduces a **new paradigm** for ICRL, specifically tackling the practical challenge of pretraining using only suboptimal offline data. It is **theoretically motivated** and achieves **strong empirical results** compared to relevant baselines within a **highly demanding setting** characterized by suboptimal data and few-shot generalization requirements. Considering the inherent difficulties of the problem, DIT's strong relative performance, its practical advantage (no need for optimal labels), and the thoughtful design choices addressing the specific challenges of ICRL, we kindly ask you to reconsider your evaluation, as we believe the paper offers significant contributions to the ICRL field. We are very willing to **add a discussion clarifying these points** (especially regarding the ICRL context vs. standard offline RL and the rationale behind DIT's design choices) to the manuscript to ensure the community fully benefits from our findings.
Thank you once again for your valuable time and feedback. Sincerely, The Authors
Summary: A new method, the Decision Importance Transformer (DIT), is proposed in the paper. This method is an enhancement of the existing Decision Pretrained Transformer (DPT). While DPT requires expert target actions for training, DIT can be trained on trajectories sampled from suboptimal behavioral policies. Since there is no need for expert policies, the proposed approach is easier to use and more versatile than DPT and other ICRL methods. To learn near-optimal policies from sub-optimal historical data, an exponential advantage-function reweighting technique is utilised. The authors propose to add a weight to the common next-action prediction objective, forcing the model to prioritise actions with higher advantage values. DIT was tested on bandit and MDP problems. Dark Room and Miniworld were used as environments with discrete action spaces; Half-Cheetah and Meta-World were used as environments with continuous action spaces. In all settings DIT showed performance competitive with that of DPT despite being pretrained without the optimal action labels.
## Update after rebuttal
I appreciate the authors’ rebuttal, which clarified my questions and addressed points that were previously unclear. The primary weakness of the paper was the insufficient explanation of the experiments, which left many aspects unclear to readers. This issue has now been resolved in the rebuttal. The idea of using the advantage function to train DPT on suboptimal data is both novel and valuable for the ICRL area. In light of these improvements, I have raised my score.
Claims And Evidence: The main claims of the paper are the following: 1. DIT is able to learn in-context and adapt to unseen tasks while being pretrained on sub-optimal data *without the optimal action labels.* 2. DIT models demonstrate performance competitive with that of DPT. The authors evaluated DIT on bandits and several MDP problems (Dark Room, Miniworld, Half-Cheetah and Meta-World).
Bandits, Dark Room and Miniworld are a classic set of ICRL benchmarks, and the paper shows that DIT exhibits in-context learning on these environments, but I have concerns about the presented results. The scores of DPT on Dark Room and Miniworld are much lower than those reported in the original paper. Therefore, the statement that DIT has performance comparable to DPT is inaccurate. Moreover, in my opinion, it would be great to evaluate DIT on the Key-to-Door and Watermaze environments, as these problems are well-known and common benchmarks for ICRL. For example, these benchmarks are used in the AD (https://arxiv.org/pdf/2210.14215) and AD-epsilon (https://arxiv.org/pdf/2312.12275) papers. Without these established benchmarks, the set of evaluation tasks seems less comprehensive and insufficient. It is good that the authors evaluated the method on continuous control tasks (Half-Cheetah and Meta-World); this makes the claims made in the submission stronger. But I spotted several inaccuracies. It is said that “Meta-World has 20 tasks in total, to evaluate our approach’s ability to generate to new RL tasks, we use 15 tasks to train and 5 to test.”, but the Meta-World benchmark has 50 tasks in total. There is no information in the submission about why only 20 tasks were used or which tasks were used for training and evaluation. I also have concerns about the presented scores on Meta-World tasks. The reported returns lie in the [0, 10] range, but experts on these tasks can achieve much higher results (hundreds if the episode length is 100, as in JAT, or thousands if the episode length is 250, as in Gato (https://arxiv.org/abs/2205.06175)), so returns in the [0, 10] range look like those of totally random agents and do not support the main claims.
Methods And Evaluation Criteria: The proposed benchmarks are fully suitable for verifying the article's claims, but they could be enhanced by additional ones commonly used in the ICRL area.
The presented results are questionable and the experimental setup is not fully described: which Meta-World tasks were chosen for the training and test sets, whether a hyperparameter search was performed for the considered methods, why the demonstrated scores for DPT do not match the values reported in the original work, and what the context length is in the pretraining datasets for each benchmark. In my opinion, this information should be provided for a better understanding of the experimental setup and the results of the work.
Theoretical Claims: There are a few theoretical claims in the submission. I have carefully reviewed them and would like to ask some questions, as I found a contradiction. In Proposition 4.1, the optimization problem $J(\pi)$ is considered. This objective is the expected value of the difference between the advantage function and the KL divergence. In the expectation operator at the start (Expression 10), actions are sampled from the behavior policy for a certain task, $a \sim \pi_{\tau}^b(a|s)$. However, in the “Proof” section, the expectation operator samples actions from the meta-policy, $a \sim \pi(a|s; \tau)$. From my understanding, the second variant is correct, because if the expectation is taken over the behavior policy $\pi_{\tau}^b(a|s)$, then maximizing $J(\pi)$ becomes trivial: only the $D_{KL}(\pi(\cdot|s;\tau)\|\pi_{\tau}^b(\cdot|s))$ term would depend on $\pi(a|s; \tau)$.
Experimental Designs Or Analyses: In my opinion, the experimental design and analysis are valid for this work. All my concerns about the experimental setup were covered in the “Claims And Evidence” and “Methods And Evaluation Criteria” paragraphs.
Supplementary Material: Yes, I reviewed all parts of the Appendix. I think it could be extended with additional information about the conducted experiments.
Relation To Broader Scientific Literature: The Decision Importance Transformer (DIT) is a novel approach in the field of In-Context Reinforcement Learning.
Existing approaches, such as DPT (https://arxiv.org/pdf/2306.14892) and AD (https://arxiv.org/pdf/2210.14215), require expert data in their training datasets. However, collecting such data can be difficult, whereas often only suboptimal trajectories are available for training agents. Many methods in offline RL, including CRR (https://arxiv.org/abs/2006.15134), IQL (https://arxiv.org/pdf/2110.06169), and CQL (https://arxiv.org/abs/2006.04779), utilize suboptimal data to learn better policies, yet no comparable techniques exist for ICRL. Advantage weighting is a well-known strategy in offline RL, and the authors propose a way to generalize it for ICRL. Some works have begun exploring this, for example AMAGO-2 (https://arxiv.org/abs/2411.11188), but it remains an understudied area. In my opinion, this is an impactful contribution that will reduce the data requirements needed to train ICRL agents.
Essential References Not Discussed: Talking about *“stringent requirements on the pretraining datasets”*, the authors claim that AD needs full learning trajectories of RL agents, but they do not mention that it is possible to collect the dataset via noise distillation. This method is called $AD^{\epsilon}$ (https://arxiv.org/pdf/2312.12275), and this work should be mentioned. Also, advantage weighted regression (https://arxiv.org/abs/1910.00177) is mentioned in this work, but there is no reference. The authors discuss using AWR in the In-Context RL setting but do not mention AMAGO-2 (https://arxiv.org/pdf/2411.11188), where AWR was used. In my opinion, this work should be mentioned too.
Other Strengths And Weaknesses: Strengths: - The paper is well written and it's easy to follow.
- Detailed and clear description of the proposed method; - Applying a method from offline RL to the In-Context RL area is a good and novel idea; - Theoretical justification of the proposed method.
Weaknesses: - The experimental description is incomplete, leaving questions after reading the section; - The claimed DPT scores differ from those in the original article.
Other Comments Or Suggestions: I do not have any other comments or suggestions.
Questions For Authors: 1. Why do the DPT returns on Dark Room and Miniworld differ between the submission and the original paper? 2. Which Meta-World tasks were chosen for the training and test sets? 3. In the expectation operator at the start (Expression 10), actions are sampled from the behavior policy for a certain task, $a \sim \pi_{\tau}^b(a|s)$. However, in the “Proof” section, the expectation operator samples actions from the meta-policy, $a \sim \pi(a|s; \tau)$. Why? 4. Did you check what DIT does on Meta-World tasks (render video), or what the expert return is on these tasks? In my experience, good returns on Meta-World tasks (when the agent actually solves or almost solves the problem) are much higher than 10. 5. What is the context length in the pretraining datasets? I may change my evaluation and raise the score if the answers remove my concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive comments. Please see our responses below to address your concerns.
### References
> We appreciate these shared references. We will include a discussion of AD-$\epsilon$, AWR, and AMAGO-2 in our manuscript. In particular, we highlight that AD-$\epsilon$ still requires sampling actions from the good policies to perform noise distillation. Moreover, to create the noise-distilled dataset, they need to actively sample trajectories following a noise schedule. **In comparison, our work assumes a setting of only historical data, which is more realistic and easier to satisfy**.
### Theoretical Results
> We deeply appreciate the reviewer’s detection of this typo, and we will correct it in our updated manuscript. Yes, the expectation in Equation (3) should be with respect to the meta-policy $\pi(a|s;\tau)$ and the expectation in Equation (4) is with respect to the behavioral policy $\pi^b_{\tau}(a|s)$: we use data collected by the behavioral policy $\pi^b_{\tau}(a|s)$ to learn a meta-policy $\pi(a|s;\tau)$.
### Experiment Setup
> **Comprehensive Benchmarks.** Our goal is to provide a more comprehensive empirical evaluation than prior works, such as DPT (which focuses on bandits and navigation tasks like Dark Room and Miniworld) and Prompt-DT (which focuses solely on continuous control). **To that end, we include representative benchmarks from all three domains: bandits, navigation, and continuous control**. While we agree that Key-to-Door is a valuable benchmark, it shares significant structural similarities with tasks like Dark Room and Miniworld. As such, we believe it would offer limited additional insight beyond the settings already covered. Due to the time constraints of the rebuttal phase, we are unable to include new results at this stage. However, **we are more than happy to run additional experiments on Key-to-Door and include them in the final manuscript to further strengthen the empirical evaluation**.
> **Meta-World Benchmark.** We follow the environment setting of Prompt-DT [1] and use 20 tasks from the Meta-World reach-v2 suite (specifically ML1-pick-place-v2), with a task horizon of 100. We choose the converged SAC policy as the optimal policy. The cumulative reward of a random policy is around 1.5 within a horizon of 100, and around 20 for the optimal policy. We will update the manuscript to make this clear.
### Extra Experiments
> To provide additional insights, we conduct extra experiments to compare DIT with several policy REINFORCE [2] baselines where the reweighting is based on the cumulative reward rather than the advantage function. Specifically, we consider reweighting with cumulative rewards and exponentiated cumulative rewards for offline testing from expert trajectories on Meta-World and summarize the key results in the following table.

| Method | Cumulative Rewards | Exponentiated Cumulative Rewards | DIT |
|------------------------------|--------------------|----------------------------------|-----|
| Return (higher means better) | 5.0 | 4.7 | 8.2 |

> **The significantly improved performance of DIT over these two baselines further demonstrates the effectiveness of our proposed method**.
### Performance of DPT
> The difference in DPT's performance between our work and the original paper stems from the nature of the pretraining datasets. In the original DPT paper, pretraining trajectories were collected using **uniformly random policies**, which ensured broad coverage of the state-action space. In contrast, as noted in our manuscript, we use suboptimal policies (achieving only ~30% of optimal performance) to collect pretraining data. This choice naturally reduces coverage and impacts DPT’s performance.
> However, we believe **it better reflects realistic scenarios**—especially in practical applications where historical data are typically collected by suboptimal or heuristic policies, not uniformly random ones.
> To ensure a fair comparison, both DPT and our method (DIT) are trained on the same set of suboptimal trajectories. However, **we provide DPT with additional privileged information**: a set of randomly sampled out-of-trajectory query states with corresponding optimal action labels sampled from optimal policies. This positions DPT as a strong oracle-style upper bound relative to DIT, further underscoring the strength of DIT’s performance under more constrained assumptions. [1] Xu, M., Shen, Y., Zhang, S., Lu, Y., Zhao, D., Tenenbaum, J., & Gan, C. (2022, June). Prompting decision transformer for few-shot policy generalization. In international conference on machine learning (pp. 24631-24645). PMLR. [2] Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8, 229-256. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification! It’s much clearer to me now. In my opinion, these details should be included in the paper to help readers better understand the work. Have you tried training DIT on trajectories collected using uniformly random policies, similar to what was done in the original DPT paper? --- Reply to Comment 1.1.1: Comment: We are gratified to learn that our previous response was helpful, and we will for sure incorporate the suggested details into the final manuscript. We indeed have tried DIT with uniformly random behavior policies. However, we encountered significant challenges. The reason is that the performance of these random policies was less than 1% of the optimal policies, rendering them unsuitable for providing reliable pretraining signals. This ineffectiveness is expected, and, as highlighted in our conclusion, we believe inferring near-optimal actions from purely random trajectories, devoid of information regarding optimal policies, is improbable. 
Furthermore, real-world historical data, as collected by companies, is more likely to originate from suboptimal policies (e.g., achieving 30% of optimal performance) than from uniformly random policies, which are rarely observed in practical scenarios. We hope this response can address your remaining concern.
Summary: This work focuses on in-context reinforcement learning where the source data/policy is suboptimal. In this case, traditional ICRL algorithms can perform badly. This work proposes the Decision Importance Transformer (DIT), which emulates the actor-critic algorithm in an in-context manner. It achieves performance superior to the baselines when the offline data is suboptimal.
## Update after rebuttal
I appreciate the authors' rebuttal. However, I think the authors should address my Question 7 more properly in the paper. I am not fully convinced by the authors' explanation. DIT uses the state from the context as the query state while DPT considers a more general way (using a random query state); shouldn't DPT work better in terms of learning a general understanding of an MDP? DIT will be limited by the size and richness of the context dataset. In any case, it remains unclear why DIT can outperform DPT.
Claims And Evidence: Overall yes. But I still have some questions and recognize some weaknesses. Please see them in the corresponding sections below.
Methods And Evaluation Criteria: Yes. The problem is useful and the benchmark environments are reasonable.
Theoretical Claims: Yes, they look correct to the best of my knowledge.
Experimental Designs Or Analyses: Yes, I checked. Please see the weaknesses and my questions below.
Supplementary Material: Yes, I went through the whole supplementary material.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: The following reference is missing, which also studies how the offline data affects the performance of ICRL. Zisman, I., Kurenkov, V., Nikulin, A., Sinii, V., & Kolesnikov, S. (2023). Emergence of in-context reinforcement learning from noise distillation. arXiv preprint arXiv:2312.12275.
Other Strengths And Weaknesses: 1.
In the sentence: "When presented with a context dataset containing environment interactions collected by unknown and often suboptimal policies, pretrained TMs predict the optimal actions for current states from the environmental information within the context dataset", I have two concerns. a) It sounds like there needs to be a pre-collected context dataset that is presented to the TMs. However, the online deployment of DPT already shows that the pretrained TMs can collect their own contexts online. b) "predict **the optimal actions**" doesn't sound right to me, because AD does not predict the optimal actions; it predicts whatever is in the source RL algorithm. Indeed, here the pretrained TMs predict whatever they learn during the pretraining, which may not be the optimal actions, depending on which ICRL algorithm is considered. Maybe it is just a language issue, but please make it clearer and more accurate. 2. "For instance, large companies often maintain extensive databases of historical trajectories from non-expert users." Why? Is there a reference? The authors could explain the motivation in a more professional way, in terms of both the language and the example. 3. In Line 138, "causal transformer" may not be familiar to the readers. Explain it in more detail and provide a reference. 4. In Section 3, the authors use $p_\tau$ to denote both the pretraining and test distributions. If DIT is deployed in an unseen new test task, in general, the test distribution should differ from the pretraining distribution? 5. The sentence "$D_{off}$ contains trajectories gathered from a random policy in $\tau$" is not correct, right? The offline deployment in the DPT paper considers two offline datasets: the offline dataset can be collected either by the random policy or by the expert policy. See Figures 4 and 5 in the DPT paper. 6. The weight in DIT is manually selected. Is there a way to automatically select/learn the weight? Or is there a hint as to what weight one should choose given a problem? 7.
In DPT's online deployment, how does it work initially when $D_{on}$ is empty? I mean, it appends a trajectory every episode. In the whole first episode (not just the first step), the context is empty. Does the TM still predict actions? 8. The authors claim that "DIT learns to infer near-optimal actions from suboptimal trajectories" and "DIT is comparable to DPT in both online and offline testings, despite being pretrained without optimal action labels." These are very strong claims. My question is: how "suboptimal" is your dataset? Should the suboptimal trajectories contain the near-optimal actions? If so, how many are needed? Please explain this more explicitly and with more content in the Introduction, **not just one or two sentences in the experimental section**. Otherwise, readers might think that DIT learning from a very, very terrible offline dataset can achieve performance comparable with DPT, which is unrealistic. 9. I am confused about the pretraining datasets for Meta-World and Half-Cheetah. If they are collected from SAC, how is that "suboptimal"? 10. I appreciate that the authors provide the code for validation. However, there are no instructions at all about how to install the dependencies and how to run and evaluate the algorithms. 11. Why is AD not considered in Miniworld?
Other Comments Or Suggestions: 1. The authors are encouraged not to write equations in the text, which negatively impacts readability, e.g., the text after Eq. (2). 2. Immediately introduce $\eta$ when first mentioning it in Eq. (2).
Questions For Authors: 1. Why does in-context reinforcement learning work? I mean, after the Transformers are pretrained, they are kept frozen when applied to **new unseen tasks**. Why can they learn in-context? Specifically, why can the performance improve as more and more context accumulates while the Transformer parameters are frozen? 2. This work is built upon DPT. But why can DPT learn in-context?
As discussed in the original AD paper, methods that learn a good policy, like DT, cannot improve in-context with frozen Transformer parameters, i.e., they cannot reinforcement learn in context. In this sense, isn't DPT also learning optimal policies? Why can DPT show in-context reinforcement learning abilities? 3. Compared with AD/DPT, the DIT proposed in this work needs two extra transformer-based value functions. Does this cost a lot? I think the authors should report how much cost it incurs compared with the baselines in terms of, e.g., running time. More interestingly, does DIT still perform well compared to the baselines when **they have the same time budget**? This comparison is important. 4. Why do the authors select an exponential weight? What is the advantage? 5. The context is generally just {s, a, r, s'}. But in the Q and V estimators, the context is {s, a, G}. Why do we want to use this? It is fair that the label for the Q and V transformers should be the return G, but why does the context need to contain G? 6. Which tasks do the authors consider in Meta-World? 7. It is **extremely surprising** that DIT performs better than DPT in Figure 3(a) and Figure 4(a). Figure 5 looks more normal, where DPT is the oracle baseline. In addition, why does DPT in Figures 3 and 4 not have performance comparable to that in the original DPT paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your helpful and constructive comments. We will incorporate the referenced work into the final manuscript. In addition, we will include citations supporting the availability of abundant historical data in real-world settings (e.g., using [1] as a standard reference). Please see below for our responses to your other concerns.
### Why DPT and ICRL Work
> As discussed in the original DPT paper, at a high level, DPT internally conducts posterior sampling (PS) for MDPs, where the transformer model is pretrained to infer the target MDP from the given context dataset and to take actions according to the optimal policy for the inferred target MDP. **As the context size increases, the inferred MDP becomes closer to the true target MDP, thus leading to improved performance**. Meanwhile, there are also theoretical works establishing the efficacy of the supervised pretraining approach taken by DPT, e.g., [2].
> **Prediction without a context dataset.** The DPT model can still predict an action using only the query state as input (without a context dataset), as the prediction is based only on the token corresponding to the query state.
### Method
> **Choice of hyperparameter.** As shown in our theoretical result, $\eta$ represents the penalty on the KL divergence. Thus, if the historical data is less suboptimal (high quality), $\eta$ should be large, as we would like to stay close to the behavior policies; on the contrary, if the data is of low quality, $\eta$ should be chosen as a small value so that we can improve more over the behavior policies.
> **Value transformer model structure.** Our design follows the in-context learning setup, where transformers are trained for regression tasks $y = f(x)$ using sequences like $x_1, y_1, ..., x_n, y_n$. In our case, value estimation (for V and Q) can be seen as a regression problem, e.g., set $x = s$ and $y = G$ (cumulative reward) when estimating V.
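The regression framing just described can be sketched as follows (a minimal illustration under assumed notation; the Monte Carlo return computation and the interleaved (s, G) sequence construction are our reconstruction, not the authors' code):

```python
def returns_to_go(rewards, gamma=1.0):
    # Monte Carlo return G_t = r_t + gamma * G_{t+1}, computed backwards.
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

def value_regression_sequence(states, rewards, gamma=1.0):
    # Interleave (x_i, y_i) = (s_i, G_i) pairs: the in-context regression
    # sequence x_1, y_1, ..., x_n, y_n with x = s and y = G, on which a
    # value transformer can be trained with a supervised loss.
    gs = returns_to_go(rewards, gamma)
    seq = []
    for s, g in zip(states, gs):
        seq.append(("state", s))
        seq.append(("return", g))
    return seq

rewards = [1.0, 0.0, 2.0]
print(returns_to_go(rewards))  # [3.0, 2.0, 2.0]
print(value_regression_sequence(["s0", "s1", "s2"], rewards)[:2])
```

Estimating Q would work the same way with $x = (s, a)$; the advantage $A(s, a) = Q(s, a) - V(s)$ then feeds the exponentiated weights used in pretraining.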
Based on this, we design our value transformer as shown in Figure 6. Figure 7 provides empirical evidence that it can accurately estimate value functions in context. > **Cost of training in-context advantage estimator.** As detailed in the manuscript, we train them with standard supervised training objectives for transformers, which are stable and straightforward to optimize. While this moderately increases the total pretraining time, we emphasize that **the primary bottleneck in ICRL lies in collecting high-quality pretraining data, not in training time**. To this end, DIT reduces this data requirement considerably, trading off for a modest increase in training complexity. We believe **this tradeoff is worthwhile and offers strong practical benefits**, making ICRL more accessible and scalable in real-world applications. ### Experiments > **Historical Data Suboptimality.** In our experiments, we use behavioral policies reaching 30%-50% of the optimal policies' performance to collect historical data, a commonly used setting for offline RL. We would like to clarify that we use intermediate training checkpoints of SAC to collect trajectories, so that they are indeed suboptimal. We will update the manuscript to make this clear in the introduction to avoid confusion. > **Experiment Setup.** For Meta-World, we follow the setting of Prompt-DT to use Meta-World reach v2. We use Miniworld mainly for an ablation study to understand the importance of the proposed reweighting mechanism and the effect of the lack of optimal action labels. Given this purpose, we compare DIT with DPT and a variation of DIT without reweighting. > **Performance of DPT.** The observed performance difference of DPT between this work and the original paper is due to the difference in pretraining datasets. Due to space constraints, please refer to our response to Reviewer Gczk for more details. ### DIT sometimes outperforms DPT > We appreciate the reviewer’s insightful observation.
This is indeed a compelling research direction. > At a high level, DPT uses a single randomly sampled query state paired with an optimal action label for each trajectory during pretraining, whereas DIT leverages multiple reweighted suboptimal action labels. **Interestingly, these many reweighted suboptimal labels can collectively provide a stronger learning signal than a single optimal label, resulting in better pretraining objectives in some settings**. Of course, if DPT were given access to many optimal action labels per trajectory, it would likely surpass DIT, as DIT does not use any optimal supervision. [1] Levine, S., Kumar, A., Tucker, G., & Fu, J. (2020). Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643. [2] Lin, Licong, Yu Bai, and Song Mei. "Transformers as decision makers: Provable in-context reinforcement learning via supervised pretraining." arXiv preprint arXiv:2310.08566 (2023). --- Rebuttal Comment 1.1: Comment: Thank you for your response. But please reply to my comments/questions one by one (like below) so that I can match your answers to the comments. Reviewer comments "*xxxxx*" - Author response: "xxxxx" I proposed 20 comments/questions. It looks like not all of my comments have been addressed. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s follow-up question, and we are more than happy to address each comment following the reviewer’s requested template, which we couldn’t complete earlier mainly due to the space constraint of a single response. > ### Questions in *Other Strengths and Weaknesses* **Answer to Q1.** We will follow the reviewer’s suggestion to fix this sentence, avoiding potential misunderstanding. **Answer to Q2 on references and motivations for companies possessing historical data.** In our ongoing era of Big Data, large companies, particularly in areas like e-commerce, navigation, ride-sharing, health care, and online gaming, often collect and maintain users’ historical data.
These stored historical data can be employed for various purposes: to improve user experience, to optimize service delivery, to detect anomalous behaviors, etc. While offline RL surveys and papers are often good references, please see two example references below: - Chen, X., Wang, S., McAuley, J., Jannach, D., & Yao, L. (2024). On the opportunities and challenges of offline reinforcement learning for recommender systems. ACM Transactions on Information Systems, 42(6), 1-26. - Shi, T., Chen, D., Chen, K., & Li, Z. (2021). Offline reinforcement learning for autonomous driving with safety and exploration enhancement. arXiv preprint arXiv:2110.07067. **Answer to Q3.** We will include a detailed introduction. Thank you for this helpful suggestion. **Answer to Q4 and Q5 on testing task distribution and initial context dataset.** While the testing task distribution for ICRL during deployment can be different from the pretraining task distribution, we assume them to be the same for simplicity, as it is not related to the key contribution of this work and all common benchmarks for ICRL follow this assumption. The context dataset can indeed contain trajectories collected from policies other than the random ones. We will update the manuscript to make this point clear. **Answer to Q6 on how to select weights for DIT.** Please see the **Choice of Hyperparameter** paragraph (in the first response). **Answer to Q7.** Please see the **Prediction without a context dataset** paragraph. **Answer to Q8 and Q9.** Please see the **Historical Data Suboptimality** paragraph. **Answer to Q10 on implementation.** Thank you for mentioning this point. Indeed, we plan to open-source all the implementations with full instructions for installation, training, and evaluation, after the publication of this paper.
**Answer to Q11 on Miniworld experiments not including AD.** We use Miniworld mainly for an ablation study to understand the importance of the proposed reweighting mechanism and the effect of the lack of optimal action labels. Given this purpose, we compare DIT with DPT and a variation of DIT without reweighting. ### Questions in *Other Comments Or Suggestions* We appreciate the reviewer’s suggestions and we will improve these two points in the final manuscript. ### Questions in *Questions for Authors* **Answers to Q1 and Q2.** Please refer to the **Why DPT and ICRL Work** section in the first response. Additionally, regarding why DT cannot learn in-context, as stated in the Related Work section of the AD paper, “*Importantly, these prior methods use contexts substantially smaller than an episode length, which is likely the reason in-context RL was not observed in these works.*” **Answers to Q3.** Please see the **Cost of Training in-context advantage estimator** paragraph. **Answers to Q4 on advantage function and exponential weighting.** The advantage function tells how much better (or worse) a specific action is compared to the average action you would normally take in a given state. Thus, we use it to evaluate whether an action is good or bad. We choose an exponential weight because, as shown in our theoretical results, it leads to guaranteed performance improvement. **Answers to Q5 on the design of value transformers.** Please see the **Value Transformer Model Structure** paragraph. **Answers to Q6 on the tasks used in Meta-World.** For Meta-World, we follow the setting of Prompt-DT to use Meta-World reach v2. **Answers to Q7.** Please see the **DIT sometimes outperforms DPT** and **Performance of DPT** paragraphs in the first response.
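As a concrete illustration of the exponential reweighting discussed in these responses (weights proportional to exp(A/η), with η acting as the KL-penalty strength): the sketch below is a minimal rendering of the general scheme, not the authors' implementation; the function names, the normalization, and the toy values are illustrative assumptions.

```python
import math

def advantage_weights(advantages, eta):
    """Exponential reweighting: w_i proportional to exp(A_i / eta).

    A large eta (strong KL penalty) flattens the weights toward uniform,
    keeping the learned policy close to the behavior policy; a small eta
    sharpens them, allowing more improvement over low-quality data.
    Weights are normalized here purely for illustration.
    """
    ws = [math.exp(a / eta) for a in advantages]
    total = sum(ws)
    return [w / total for w in ws]

def weighted_nll(log_probs, weights):
    """Weighted maximum-likelihood objective (a loss to be minimized)."""
    return -sum(w * lp for w, lp in zip(weights, log_probs))
```

With three candidate actions of advantages [1, 0, -1], a small η concentrates nearly all weight on the best action, while a large η leaves the weights close to uniform, recovering plain behavior cloning of the suboptimal data.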
Summary: The paper proposes the Decision Importance Transformer (DIT), a novel framework for in-context reinforcement learning (ICRL) that is designed to work with historical datasets generated by suboptimal behavioral policies. Unlike previous approaches that require optimal action labels or complete learning histories, DIT uses only suboptimal data. Its key idea is to incorporate an exponential reweighting scheme in the supervised pretraining objective. This weighting is derived from an estimated advantage function that is computed via transformer-based value and action-value estimators. The overall system is built upon an autoregressive transformer (using a GPT-2 backbone) that, once pretrained on a diverse set of tasks, can generalize to unseen tasks by extracting task-specific information from context trajectories. The paper supports its contributions both theoretically—with propositions that connect the weighted maximum likelihood objective to policy improvement—and empirically, by demonstrating competitive or superior performance on a range of problems (from bandit settings to challenging MDPs including navigation and continuous control tasks). Claims And Evidence: Core Claims: - Policy Improvement via Reweighting: The authors claim that by reweighting action labels according to an estimated advantage function, the transformer can learn to “steer” suboptimal behavioral data toward near-optimal policies. - Generalization from Suboptimal Data: DIT is posited to work well even when the training data do not contain optimal action labels, a scenario that is common in real-world settings where only historical data are available. Evidence: - Theoretical Analysis: The paper presents Proposition 4.1 and Proposition 4.2, which relate the weighted supervised objective to a policy improvement problem and provide conditions under which the learned policy strictly improves over the behavior policy. 
Although proofs are deferred to the appendix, the propositions outline a clear link between the exponential weighting scheme and performance guarantees. - Empirical Results: Extensive experiments are conducted on both bandit problems and various MDPs (including navigation tasks like Dark Room and Miniworld, as well as continuous control tasks such as Meta-World and Half-Cheetah). The results show that DIT can match or exceed the performance of baselines (including methods that have access to optimal action labels) in both online and offline settings. Methods And Evaluation Criteria: Methodology: - Weighted Maximum Likelihood Pretraining: DIT is trained using a weighted maximum likelihood objective where the weights are an exponential function of an estimated advantage function. This aims to give higher importance to actions that are deemed “better” in the historical data. - In-Context Advantage Estimation: The paper introduces transformer-based modules to estimate the value and action-value functions in-context, thereby approximating the advantage for each state–action pair. - Task-Conditioned Policy: The transformer model is conditioned on the context (a set of transitions from the environment) so that it can adapt its decision-making to new, unseen tasks during deployment. Evaluation: - The experiments are carried out in both online (where the agent gathers additional data) and offline (using fixed historical trajectories) settings. - The evaluation metrics include cumulative return and regret (for bandit problems) and episode cumulative return (for MDPs), which are standard and appropriate for the problem domain. Theoretical Claims: I did not check the detailed derivations, but they seem reasonable as the conclusions are standard in RL literature. Experimental Designs Or Analyses: The paper evaluates DIT on both simple (linear bandit) and complex (MDP) environments. 
For bandit problems, the method is compared against theoretically optimal algorithms (e.g., UCB and Thompson Sampling), showing that DIT quickly identifies the optimal arm. In MDP settings, the authors test on environments with both sparse rewards (Dark Room, Miniworld) and complex dynamics (Meta-World, Half-Cheetah), and compare against several baselines including DPT, AD, and behavior cloning variants. Supplementary Material: Yes. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses - According to the proposed method, it is extremely important to learn a universal advantage function that is robust to distribution shifts, which may not be scalable. - In-context learning needs a few tokens to construct the context. What is the cold-start performance of ICRL? How to perform reliable zero-shot adaptation? - There seems to be an overload of meaning between "In Context" RL and the "In Context" Learning of LLMs. - The theoretical results are less significant, considering that Prop. 4.1 is a well-known conclusion in RL, and the quadratic dependence on the effective planning horizon in Prop. 4.2 makes the bound rather loose. Other Comments Or Suggestions: - Too many sentences are marked with red, which hinders a smooth reading process. I would recommend marking only the few phrases that are truly important. - The Primary Area: Reinforcement Learning->Everything Else should be used sparingly. Is there a specific category that this paper falls into? Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
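For readers less familiar with the bandit baselines named in this review, a minimal sketch of UCB1 follows (the textbook algorithm, not the paper's implementation); the deterministic reward model is an assumption made purely to keep the example self-contained.

```python
import math

def ucb1(means, horizon):
    """UCB1 on a toy deterministic bandit (each pull returns the arm's mean).

    Picks the arm maximizing empirical mean + sqrt(2 * ln t / n_i),
    after pulling every arm once. Returns per-arm pull counts.
    """
    counts = [0] * len(means)
    totals = [0.0] * len(means)
    for t in range(1, horizon + 1):
        if 0 in counts:
            arm = counts.index(0)  # initialization: pull each arm once
        else:
            arm = max(
                range(len(means)),
                key=lambda i: totals[i] / counts[i]
                + math.sqrt(2.0 * math.log(t) / counts[i]),
            )
        counts[arm] += 1
        totals[arm] += means[arm]
    return counts
```

On a two-armed bandit with means 0.2 and 0.8, the better arm accumulates the vast majority of pulls; this is the kind of near-optimal behavior the review compares DIT against.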
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We hope our response can address your concerns. ### Presentation > **Style Improvement.** We will follow the reviewer’s suggestion to remove most of the colored text, keeping only minimal phrases with key importance. > **Clarification on the term “In-Context.”** We appreciate the reviewer highlighting this potential ambiguity. While “in-context learning” originated in the LLM literature, the term has recently been adopted by several works in reinforcement learning to describe similar ideas—namely, models that adapt to new tasks by conditioning on sequences of past transitions, without gradient updates. Our use of “in-context RL” follows this growing convention. Nonetheless, we agree that the dual usage can be confusing. In the revised manuscript, **we will explicitly distinguish in-context RL from ICL in language models to ensure clarity and avoid potential misunderstandings**. ### Methods. > **Zero-shot Adaptation.** DIT builds on the supervised pretraining framework of DPT and can act as an online meta-policy in new environments. Specifically, **it can predict actions without conditioning on a context dataset, as the prediction is only based on the token corresponding to the query state**. In the zero-shot setting—i.e., with no context tokens—DIT and DPT default to the behavior learned during pretraining, effectively leveraging a strong prior. Cold-start performance is demonstrated in the online testing experiments (Figures 2, 3, and 4). These results show that **DIT quickly improves its performance in-context with more information (trajectories), adapting efficiently even in the cold-start setting**. ### Learned Advantage Function and Distribution Shift > We appreciate the reviewer’s insightful comment. We would like to clarify that **DIT does not require robustness to distribution shift during the pretraining**. 
> This is because the learned advantage functions are only applied to actions taken by the same behavioral policies that generated the trajectories—meaning **all computations remain in-distribution with respect to those behavioral policies**. > This design ensures stability and avoids the challenges typically associated with off-policy corrections. We consider this to be a key strength of DIT, and we will make this point more explicit in the revised manuscript. ### Clarification of Contributions > While similar theoretical results have been established in the standard RL setting, **our contribution lies in extending these insights to the in-context reinforcement learning (ICRL) framework**. Although theory is not the primary focus of this work, we include the analysis to better motivate and ground the design of DIT, **helping the reader understand why the method is effective**. > Our core insight is that if "relatively good" actions can be identified and emphasized during supervised pretraining, a transformer-based policy can match the performance of meta-policies trained with significantly more expensive data, such as DPT. Although individual weighted action labels may be noisy, our analysis and experiments show that **the weighted MLE objective—when averaged over diverse environments—can yield high-quality meta-policies capable of generalizing to unseen tasks**. > Furthermore, with the proposed in-context advantage estimator, we observe that transformer models can learn to generate reliable value function estimates across tasks and behavioral policies using only supervised learning. We believe these empirical insights are important to share, as they highlight the practicality and potential of ICRL. 
> On the practical side, **DIT is simple to implement and significantly improves the feasibility of in-context RL**, as suboptimal trajectories are much easier to collect in real-world systems, especially in industry settings where large amounts of historical data are already available. **This opens the door for broader adoption of ICRL methods in practical applications**.
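The in-context value estimators discussed in these responses regress cumulative rewards G. As a minimal, hedged sketch (the function name and the discount factor are assumptions for illustration, not taken from the paper), return-to-go targets can be computed from a trajectory's rewards as follows:

```python
def returns_to_go(rewards, gamma=1.0):
    """Compute G_t = sum over k >= t of gamma**(k - t) * r_k for each step t,
    scanning the trajectory backwards in a single pass."""
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]
```

Each state in the trajectory can then be paired with its G value to form the regression pairs used when training a value estimator in context.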
The Importance of Being Lazy: Scaling Limits of Continual Learning
Accept (poster)
Summary: This paper explores the relationship between scaling regimes and catastrophic forgetting through the lens of dynamical mean field theory (DMFT). The authors demonstrate theoretically that in feature learning regimes, catastrophic forgetting is more likely. In particular, there is a sharp transition between the lazy and rich regimes with respect to forgetting, which the authors term the edge of laziness (EoL). They show that in continual learning settings, there is an optimal choice of laziness level, which is transferable across model capacities. The authors further note that as forgetting becomes less important (as tasks are more similar), the optimal level of richness increases (optimal laziness decreases). Claims And Evidence: Overall, the claims made in the paper are generally well supported by theory and experiments. One area of potential concern is with the terminology "edge of laziness." The experimental results in Figure 3 could be interpreted either as a discrete transition between the lazy and rich regimes, or a sharp, but still continuous transition. If the latter is the case, then I believe "edge of laziness" may not be the correct term to describe the results. Can the authors justify why the former is the case? Is there a phase transition at this point that the authors can show theoretically? Methods And Evaluation Criteria: Evaluations are conducted on Permuted-MNIST and Split-CIFAR, which are standard benchmarks. Theoretical Claims: The theoretical claims in the main text appear correct. They seem to rely on standard arguments in DMFT. Experimental Designs Or Analyses: The experimental setup as described in the captions and throughout the main text appears sound. Supplementary Material: Appendix D, which validates the infinite width predictions, appears correct and supports the main paper. Relation To Broader Scientific Literature: As the authors note, previous works have already considered applying scaling limit results to continual learning.
The key difference with this work is that they consider large networks in the rich regime under finite data, a previously unexplored setting for continual learning. Essential References Not Discussed: As far as I am aware, the relevant literature is discussed. Other Strengths And Weaknesses: Overall, the paper is well-written and presented and makes a novel contribution to the space of continual learning theory. A key weakness of the theoretical results is that the DMFT techniques used by the authors apply to one-hidden-layer neural networks. Experimentally, the authors consider deeper networks though, which somewhat alleviates this concern. Other Comments Or Suggestions: It looks like the labels in Figure 6 are cut off. I would also recommend increasing the size of the Figure (perhaps some of the whitespace could be removed). Questions For Authors: As mentioned above, can the authors theoretically demonstrate a phase transition between the lazy and rich regimes (for edge of laziness)? Also, can the authors theoretically show a connection between the edge of laziness and the optimal stability-plasticity tradeoff? For what kinds of tasks is it true that the optimal $\gamma_0$ exists at the edge of laziness? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *Thank you for taking the time to read and review our paper. We are glad to hear that you found our paper well-written and that it offers a novel contribution to the space of continual learning.* --- ## Edge of Laziness and Phase Transition Thank you for your thoughtful comment, and we are sorry for the confusion caused by the terminology of Edge of Laziness. Firstly, we would like to take the opportunity to clarify our position on the observed phase transition. In our results, it is clear that a phase transition happens; however, it does not always appear as a sharp, discontinuous transition between the two regimes, but more like a non-linear continuous function of $\gamma_0$ in the area where the transition occurs. For this reason, we do not make any claims about the nature of the transition, be it *discrete* or *sharp but still continuous*. Secondly, our choice of terminology, "Edge of Laziness," was not intended to rigorously characterize the phase transition. Instead, it was intended to be evocative of the boundary between the effectively lazy and the rich regions of the $\gamma_0$-space. Thirdly, we are currently unable to demonstrate the nature of the phase transition theoretically, a result that we believe to be highly non-obvious. Nevertheless, we are actively working towards this result, and we were recently able to characterize the infinite-time solution of CF in the rich regime of the linear case. In this setting, we observe a highly non-trivial relation between the task similarity $\rho$ and $\gamma_0$, where the second-order coefficient of CF is a sixth-order polynomial in $\rho$. The complexity of this relationship - even in the linear case - points to the non-triviality of the behavior and thus to the difficulty of getting a general characterization of the phase transition theoretically.
Following your comment, and determined to avoid any confusion or misunderstanding with future readers, we have rephrased the parts of the paper that introduce and discuss the concept of the edge of laziness. We decided to refer to it as *lazy-rich transition*, avoiding the word "edge". We are also in the process of changing the title to reflect this change of terminology accordingly. We are grateful for the opportunity to improve the clarity of our claims and paper. ## Other Comments - **DMFT with one hidden layer NN** We plan to extend the DMFT to deeper networks in future work. In the meantime, however, we would like to stress the striking overlap between the results in the shallow and deep networks, suggesting that the one-layer derivation is already able to represent well the behaviors of deep networks. - **Edge of Laziness, tradeoff, and optimal $\gamma_0$** Thanks for the interesting question, which allows us to touch upon two distinct phenomena that influence the overall optimal level of richness. Firstly, we have observed that the task similarity has a direct and powerful impact on the feature evolution itself, meaning that a higher similarity effectively reduces the amount of feature learning for fixed $\gamma_0$ (Fig. 5a). This suggests that the task similarity (or in general the degree of (non-)stationarity of the data) non-trivially interacts with the regimes of the network. Including this perspective into the DMFT formalism is non-trivial, and we have just started exploring this avenue (see Appendix E3, F). However, we believe this direction would be very interesting and valuable for future work. Secondly, the optimal $\gamma_0^\star$ reflects the plasticity-stability tradeoff, and therefore, $\gamma_0^\star$ can shift from the $\gamma_0$ at which the *lazy-rich transition* happens. Generally, we find that when the tasks are close to stationary, the optimal $\gamma_0^\star$ is always 1, i.e. the maximum of the range tested. 
In other words, when the data is stationary, higher plasticity helps reach better performance. By contrast, as the amount of non-stationarity is increased, stability is necessary to avoid losing performance on the old data and the optimal $\gamma_0^\star$ is consistently lower than 1. From this point of view, the $\gamma_0$ at which the *lazy-rich transition* happens represents the minimal $\gamma_0^\star$ as any value below it will not increase stability (because the network is effectively lazy) and it will not increase plasticity. While we do observe the plasticity-stability tradeoff even in the theoretical experiments at infinite width (Fig. 4a), we have not specifically investigated it theoretically. - **Fig. 6 labels** Thanks for pointing out the cut-off labels of Fig. 6, and for suggesting an increased size of the Figure. We will correct these in the final version of the paper. --- *We hope that we have adequately addressed your questions. If you have further questions or comments, we remain at your disposal.*
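To make the lazy-rich interpolation discussed here concrete for readers outside this literature, below is a one-parameter toy (not the authors' DMFT setting) using the standard centered rescaling from lazy-training analyses: the output is scaled by $1/\gamma_0$ and the learning rate by $\gamma_0^2$. Under this sketch, small $\gamma_0$ yields vanishing parameter (feature) movement while the function change stays of the same order; all numeric values are illustrative assumptions.

```python
import math

def lazy_rich_step(gamma0, w0=0.5, x=1.0, y=1.0, lr0=0.1):
    """One gradient step on the centered model
    f(x) = (tanh(w * x) - tanh(w0 * x)) / gamma0
    with loss 0.5 * (f - y)**2 and learning rate gamma0**2 * lr0.
    Returns (|parameter movement|, function change)."""
    h0 = math.tanh(w0 * x)
    f0 = 0.0  # centering makes the output exactly zero at initialization
    grad_w = (f0 - y) * x * (1.0 - math.tanh(w0 * x) ** 2) / gamma0
    w = w0 - gamma0 ** 2 * lr0 * grad_w
    f1 = (math.tanh(w * x) - h0) / gamma0
    return abs(w - w0), f1 - f0
```

Comparing $\gamma_0 = 0.01$ with $\gamma_0 = 1$, the parameter moves 100 times less in the lazy setting, yet the resulting function change is nearly identical: in the lazy limit the dynamics are governed by the (frozen) tangent kernel rather than by feature movement.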
Summary: This paper studies how neural network parameterization (at the extremes, NTP and $\mu$P) shapes the effect of network width on catastrophic forgetting. ## Update after rebuttal My assessment of the paper remains unchanged. I think this is an interesting contribution, but I am still skeptical of the added value provided by the DMFT analysis. Claims And Evidence: The claims are well-supported. Methods And Evaluation Criteria: The methods are appropriate. Theoretical Claims: The theoretical claims are straightforward extensions of the DMFT results of Bordelon & Pehlevan (2021), and appear sound. Experimental Designs Or Analyses: The experiments are generally well-designed, and the authors provide a good selection of additional figures in the Appendices. Supplementary Material: The authors do not provide any supplementary material. Relation To Broader Scientific Literature: This paper bridges two bodies of work in machine learning: that on optimal network parameterizations, and that on catastrophic forgetting. It is well-situated within the literature, and should be of broad interest to the ICML audience. Essential References Not Discussed: The authors do a generally good job of reviewing related prior art, but there are a few missing references. - Petrini et al. ("Learning sparse features can lead to overfitting in neural networks", NeurIPS 2022) studied how very rich feature learning can lead to overfitting to spurious features. This bears a close conceptual relation to the results here on severe catastrophic forgetting in very rich networks. Moreover, it offers a previous example setting where more feature learning is not better. - Vyas et al. ("Feature-Learning Networks Are Consistent Across Widths At Realistic Scales", NeurIPS 2023) show that beyond some minimal width, $\mu$P-parameterized networks of different widths behave very similarly, which is related to the authors' findings in Figure 2.
Other Strengths And Weaknesses: - I do not think the title does a good job of conveying the main contributions of the paper. I would suggest something that makes the main conclusion obvious. To give a very rough example, perhaps something like "Rich feature learning can accentuate catastrophic forgetting". - One substantial concern I have is with the role DMFT plays in the paper, as I think the space devoted to it in the main text might better be used to present additional experimental results (for instance, the experiments on training time could be promoted to the main text). To be very clear, I am by training a statistical physicist, so I think it's nice that the authors have the DMFT description. However, it is not enough to write down the self-consistent equations; there must be some conceptual meat extracted from them. My concerns are (1) that the DMFT results do not substantially strengthen the claims based on experiment, and (2) that the technical novelty here relative to the cited work of Bordelon and Pehlevan is minimal. What do we gain conceptually from the perturbative approximation? - The authors do not adequately discuss how some of their findings (particularly the relatively sharp increase in CKA with $\gamma\_0$) relate to those of Atanasov et al. 2024 ("The Optimization Landscape of SGD Across the Feature Learning Strength"), which documents similar phenomenology. This is also relevant to their choice of scaling of learning rate with $\gamma\_0$. - I think the paper would benefit from some further analysis of representational changes across tasks. For instance, the authors could plot as a function of training time the kernel-target alignment (as in Atanasov et al. 2024) for each task. This would help clarify precisely what structure is learned and forgotten. Moreover, it would help relate this work to the abovementioned work of Petrini et al. 2022. - The paper is missing some methodological details on how the authors solved the DMFT equations.
In line 771, the authors state that they "implemented simple computational physics discrete-time dynamics", but this is too vague to be useful. Presumably they just modified the solver from Bordelon & Pehlevan, but this should be stated. Other Comments Or Suggestions: - In figure labels, the authors should just write 1-CKA instead of "Features evolution, 1-CKA". - Mentions of Bordelon et al (2023) for depth scaling in ResNets should also cite the contemporaneous work of Yang et al. - Please make sure to state the dataset and number of samples used in figure captions (e.g. this is missing in Figure 4). - I think the claim in the Discussion that "our findings complete the picture on feature learning in modern NNs" is too broad, particularly given previous work on settings where feature learning harms generalization. Questions For Authors: I am a bit puzzled by the use of "capacity" at a few points in this paper, e.g. in line 352 where it is stated that increasing $\gamma\_0$ increases capacity. I would associate capacity more with expressivity than with learning-related features. Can you clarify this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *We are grateful for your time reading our paper, and thank you for your thoughtful review.* --- ## The role of DMFT in the paper We agree that the results of the DMFT are somewhat abstract; however, we believe this is an inherent limitation of this theory: it is hard to simplify the equations and reach interpretable results without sacrificing completeness. Committed to this direction for future work, we are currently working on integrating minimal assumptions that could help us simplify the results without trivializing them. Nevertheless, we would like to address your concerns that "(1) the DMFT results do not substantially strengthen the claims based on experiments", and that "(2) the technical novelty here is limited". 1. The DMFT equations allow us to actually reach the infinite-width limit, something that would have been impossible otherwise. Although it is unusual to place the theory at the service of the experimental evidence, we believe that the infinite-width curves crucially strengthen our finite-width experimental insights. 2. We agree that the DMFT techniques are a known approach to analyzing the network behavior. Nevertheless, we believe that our extension to the continual learning scenario is notable and highly valuable for the community, as we - introduce the NTK across tasks as a new entity of the DMFT formalism - are the first work to analyze feature learning in CL - propose a perturbative approximation which differs from previous approaches (e.g., Bordelon and Pehlevan), as we also expand the residuals in powers of $\gamma_0$, and we apply it to the CL scenario. This allows us to study the complex dependence between $\gamma_0$ and the task similarity $\rho$, obtaining a closed-form solution for the coefficients of CF, up to the second order, in the linear case.
Seeking the "conceptual meat", we were recently able to obtain further insights on the role of $\gamma_0$ in the linear case, where we find that in the infinite-time solution, the second-order coefficient of CF is always positive, certifying that larger $\gamma_0$ yields higher forgetting. Moreover, we find that this relationship has a non-trivial dependence on $\rho$ through a sixth-order polynomial, where the maximum CF is obtained at richness-dependent levels of $\rho$. You can find the preliminary plot [here](https://ibb.co/9mQhGZg1). ## Discussion of related work - **Petrini et al., 2022.**, **Vayas et al., 2023.**, **Yang et al., 2023**. Thanks, they should definitely be included in the related works and cited accordingly. - **Atanasov et al., 2025.** We would like to point out that this work is concurrent and will be presented at ICLR 2025. Although the paper is related to ours, we feel it is unfair to list as a weakness the missing discussion of such a recent work. Nevertheless, their findings are interesting and might be a good starting point for future work. As you suggest, the observed *lazy-rich transition* might be related to our choice of LR scaling (LR scales quadratically with $\gamma_0$): they observe that this scaling is optimal for the lazy regime, but not anymore for the rich and ultra-rich regimes (Fig. 1b), where instead sub-quadratic LR scaling is optimal. Our transition might therefore be due to a change of the effective LR, shifting from the optimal LR towards a too-large LR and thus approaching divergence. Note that this is just an intuition that requires further study. ## Other comments - **Title:** following also the review of DJW3, we have decided to avoid referring to Edge of Laziness, and will therefore opt for a more informative title, as you suggest. 
- **Representational Changes:** We have quickly implemented this interesting experiment, and you can find the results for the Permuted-MNIST, restricted to samples 0 and 1, [here](https://ibb.co/Zzwp7QTB). The drop in the alignment after the task switch represents an intriguing insight that serves as an interesting starting point for future analysis. We will add this experiment to the Appendix. - **Implementation of DMFT:** thanks for pointing out the missing details. We use a modified version of the one by Bordelon \& Pehlevan in the L=1 case, extended to support multiple tasks as per our training setting. We will clarify this in the final version. - **Network Capacity:** we refer to *network capacity* as the network size (i.e. the width), irrespective of $\gamma_0$. In l. 352 we state that "the increased capacity (i.e. size) of the network does not benefit performance". - **Figure labels:** we agree that the labels are slightly notation-heavy, but we believe that the CKA measure might be obscure to readers not familiar with the kernel literature; the labels provide useful guidance to such a reader. - **Claim in the discussion:** we agree and will adapt this claim. --- *We hope that we have adequately addressed your questions. If you have further questions or comments, we remain at your disposal.*
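For readers who may be unfamiliar with the measure behind the "1-CKA" figure labels discussed in this rebuttal, linear CKA can be sketched in a few lines of pure Python. This is an illustrative toy, not the authors' implementation; it assumes the standard formula CKA(X, Y) = ||YᵀX||²_F / (||XᵀX||_F ||YᵀY||_F) on column-centered feature matrices, with 1 − CKA used as a feature-evolution distance:

```python
def _center(X):
    # Subtract the column (feature) means from an n x p matrix.
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    return [[row[j] - means[j] for j in range(p)] for row in X]

def _cross(A, B):
    # A^T B for n x p and n x q matrices, giving a p x q matrix.
    n, p, q = len(A), len(A[0]), len(B[0])
    return [[sum(A[i][a] * B[i][b] for i in range(n)) for b in range(q)]
            for a in range(p)]

def _frob(M):
    # Frobenius norm of a matrix.
    return sum(v * v for row in M for v in row) ** 0.5

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n x p) and Y (n x q),
    rows indexing the same n examples."""
    Xc, Yc = _center(X), _center(Y)
    num = _frob(_cross(Yc, Xc)) ** 2
    den = _frob(_cross(Xc, Xc)) * _frob(_cross(Yc, Yc))
    return num / den
```

By construction, CKA of a representation with itself (or with any isotropic rescaling of itself) is 1, so 1 − CKA is 0 when features do not evolve and grows toward 1 as they change.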
Summary: This paper investigates the effect of model scale and the degree of feature learning in continual learning. It identifies a transition called Edge of Laziness influenced by task similarity, where the model exits an effectively lazy regime with low forgetting to enter a rich regime with significant forgetting. Technically, it extends the DMFT theory to non-stationary learning. Infinite-width simulations and real-world experiments support its claims. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check the proof carefully, but it seems good, and the experiments support the theoretical results. Experimental Designs Or Analyses: Yes. The experiments are conducted on different datasets and support their claims and theoretical results well. For the MNIST dataset, the infinite-width simulations motivated by the DMFT further verify the real-world experiments in finite width. Supplementary Material: Yes. I mainly reviewed Appendixes A, B, and D. The experiments are designed well and the results are presented clearly. Relation To Broader Scientific Literature: NA. Essential References Not Discussed: NA. Other Strengths And Weaknesses: Strengths: 1. The paper is written clearly. 2. The paper designs a systematic study on the impact of model scale and the degree of feature learning in continual learning, and the experiments support their claims well. 3. As far as I know, this paper takes a first step to extend the DMFT to continual learning. Other Comments Or Suggestions: I maintain my score after the rebuttal. Questions For Authors: 1. Do the claims in this paper hold for the Transformer architecture trained with Adam optimizer? 2. Following question 1, can the DMFT be extended to an adaptive optimizer like that in the TP 4b paper? 3. In practice, we may train a neural network with $O(width)$ steps by the scaling law. Can we extend the experiments and the DMFT theory to that case? 4. 
In this paper, DMFT theory is used to conduct simulations to verify the findings in finite-width experiments, which is expensive. Can we predict the existence of phase transition theoretically? 5. In my opinion, the $\gamma_0$ is just an output multiplier hyperparameter in existing mup papers (e.g., TP 5), so it should transfer across different widths. Am I right? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *Thank you for taking the time to read and review our paper. We are glad to hear that you found our paper clearly written, and that you consider our experiments well-designed, providing a systematic study of the topic and supporting well both claims and theoretical results.* --- 1. >**Do the claims in this paper hold for the Transformer architecture trained with Adam optimizer?** Despite the success of transformers in DL, their adoption in CL is still limited and fairly understudied [1,2,3]. If transformers in CL are already rare, their optimization with adaptive optimizers (e.g., Adam) is even more so, whereas vanilla SGD is the standard optimizer even when training transformers [4,5]. For these reasons, in our work we decided to study the ResNet architecture with SGD. Nevertheless, we agree that exploring our claims even for transformers and for adaptive optimizers is an interesting avenue for future research. 2. > **Can the DMFT be extended to an adaptive optimizer like that in the TP 4b paper?** The DMFT could potentially be extended from [6]. However, in this work, we were more interested in understanding the dependence of the dynamics on $\gamma_0$, which requires lengthy derivations even for a two-layer linear network trained under gradient flow. 3. > **In practice, we may train a neural network with $O(width)$ steps by the scaling law. Can we extend the experiments and the DMFT theory to that case?** The DMFT relies on the assumptions of the number of datapoints and timesteps of order O(1), beyond which the theory may break down. In this study, an extension to width-dependent training times is out of scope. However, we point to the experiments presented in Appendix B.1, in which we investigate the effect of training time in the NTP. We find that longer training time systematically yields high feature evolution even in the NTP, and therefore it leads to CF. 4. 
> **Can we predict the existence of a phase transition theoretically?** We are currently unable to predict the phase transition theoretically. Nevertheless, we are actively working towards this result, and we were recently able to characterize the infinite-time solution of CF in the rich regime of the linear case. In this setting, we observe a highly non-trivial relation between the task similarity $\rho$ and $\gamma_0$, where the second-order coefficient of CF is a sixth-order polynomial in $\rho$. The complexity of this relationship - even in the linear case - points to the non-triviality of the behavior and thus to the difficulty of getting a general characterization of the phase transition theoretically. We will nevertheless continue working to extend this result to the non-linear setting in future work. 5. > **In my opinion, the $\gamma_0$ is just an output multiplier hyperparameter in existing mup papers (e.g., TP 5), so it should transfer across different widths. Am I right?** Thanks for the interesting question, which allows us to clarify the role of $\gamma_0$ as well as the novelty of our findings. Firstly, $\gamma_0$ is not only an output multiplier; it also quadratically modulates the LR (cf. Table 1 in our paper). This implies that one cannot infer transfer properties of $\gamma_0$ from those of the LR, as the two are non-trivially related. Secondly, and crucially, the transfer properties of LR (and other hyperparameters) have only been shown - but not proven analytically - in the stationary setting, and not in the non-stationary and CL scenario. We therefore deem it non-trivial and surprising to observe that, with $\mu$P, $\gamma_0$ shows these transfer properties even in the non-stationary scenario. In hindsight, given that width scaling does not affect forgetting under mean field scaling, it is intuitive to expect that $\gamma_0$ should transfer.
However, if width scaling were to affect the dynamics (e.g., by lowering forgetting due to increased capacity), then one could in principle expect $\gamma_0$ to scale differently with width. We hope this clarifies the subtlety of these results and the coherence of our findings. --- *We hope we have adequately addressed your questions and reservations that hindered your recommendation for acceptance. We remain at your disposal in case you might have further comments or doubts.* --- ### References - [1] Ramasesh et al., "Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics", ICLR 2021 - [2] Mirzadeh et al., "Wide Neural Networks Forget Less Catastrophically", ICML 2022 - [3] Lu et al., "Revisiting neural networks for continual learning: An architectural perspective", arXiv:2404.14829 (2024) - [4] Ramasesh et al., "Effect of scale on catastrophic forgetting in neural networks", ICLR 2021 - [5] Mirzadeh et al., "Architecture matters in continual learning", arXiv:2202.00275 (2022) - [6] Bordelon et al., "Infinite Limits of Multi-head Transformer Dynamics", NeurIPS 2024
Summary: The authors present a theoretical and experimental analysis of the effect of the neural network parametrisation on Catastrophic Forgetting. The study extends previous works which focused on the lazy regime only. The authors identify a spectrum of training regimes from the lazy regime to the feature learning regime. This spectrum is parametrised with a single parameter gamma. The authors show that depending on the parametrisation, the width scaling law is different for Catastrophic Forgetting. Also, the authors identify an optimal tradeoff between accuracy and forgetting, as a function of the parametrisation. The tradeoff occurs because forgetting increases the more feature-rich the training regime, while the accuracy increases. Finally, the authors define and study the EoL, which varies as a function of the stationarity of the tasks. The more stationary the tasks, the higher the EoL. Claims And Evidence: The main claims of the paper are : 1- The effect of width scaling on CF depends on the network parametrization. The evidence to support it is : - Experimental : Measuring CF as a function of the width for the Split-CIFAR-10 and Permuted MNIST tasks. 2- The existence of the Edge of Laziness (EoL), a region which separates two regimes: the lazy regime, where the features don't change much and CF is low, and the rich regime, where the features change significantly more, as does CF. - Figure 3, a, b, c : The variation of CFr as a function of gamma_0 - Figure 4, a : The variation of the average loss as a function of gamma_0 3- The optimal gamma_star for the plasticity stability tradeoff is almost identical regardless of the network width : - Figure 4, b : The Pareto front between CF and Learning error 4- High feature learning is only beneficial with highly similar tasks - Figure 5 C : The average error as a function of the task similarity and gamma_0 on Permuted MNIST Methods And Evaluation Criteria: The proposed evaluation methods are sensible.
The authors use the tasks Permuted MNIST, CIFAR 10 and Split-TinyImageNet for their experimental validation. The last observation about the Pre-Training effect in Section 6.2 is very interesting and intriguing. The authors show that on Split-TinyImageNet the law is different compared to the other two tasks. I think this experiment is very important to highlight the validity of the law on training setups and datasets closer to real world data. Theoretical Claims: - There seems to be a notation inconsistency between the definition of the NTK across tasks (Eq 2) and its use in Eq 12. The definition only has a single time index, while Eq 12 has two time indexes. Also, I suggest clarifying the time index in the definition of Eq 2, in the weights matrix. Currently the definition doesn't explicitly highlight the time / task index. - General question : Out of curiosity, is it tractable to derive the expression of forgetting at t=infinity from the PDE in Eq 12 ? I skimmed through the proof but haven't checked it in detail. Experimental Designs Or Analyses: I checked the soundness of all the experiment designed presented in the main paper, but I didn't check the ones in the Appendix. The authors considered the Permuted MNIST, CIFAR 10 and Split-TinyImageNet tasks for their analysis. The first two tasks show clearly that the law is satisfied and the third one shows a different behaviour in the rich regime which the authors explain with forward transfer. One question I have is if there is a specific reason not to consider the CIFAR-100 task, as it lies between the two datasets and would help determine the boundary in terms of data distribution where the pre-training effect would apply or not. Supplementary Material: I didn't review the supplementary material. 
Relation To Broader Scientific Literature: This paper relates to the broader literature in the following ways : - [1] and [2] provide a theory of CF in the lazy regime, formulating theoretical bounds and a closed form of CF in the NTK regime. - [3] and [4] observe that increasing the width of the neural networks reduces CF and study the impact of the model architecture on CF. - [5] study theoretically the effect of overparametrization on CF for linear models. The paper also relates to the rich feature learning literature; I am not familiar with this research area. - [1] Doan, Thang Van, et al. "A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix." International Conference on Artificial Intelligence and Statistics (2020). - [2] Bennani, Mehdi, et al. "Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent." arXiv:2006.11942 (2020). - [3] Mirzadeh, S., Chaudhry, A., Hu, H., Pascanu, R., Gorur, D., & Farajtabar, M. "Wide Neural Networks Forget Less Catastrophically." International Conference on Machine Learning (2021). - [4] Mirzadeh, S., Chaudhry, A., Yin, D., Nguyen, T., Pascanu, R., Gorur, D., & Farajtabar, M. "Architecture Matters in Continual Learning." arXiv:2202.00275 (2022). - [5] Evron, Itay, Daniel Goldfarb, Nir Weinberger, Daniel Soudry, and Paul Hand. "The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting - An Analytical Model." arXiv:2401.12617 (2024). Essential References Not Discussed: I am not aware of any essential references not discussed :) Other Strengths And Weaknesses: The paper was really interesting to read and it challenged several intuitions I had. I found the explanations very clear and I particularly appreciated the experimental evidence provided to support and illustrate the claims in practical settings.
Also, I think the contribution is significant because the analysis links several prior findings and further provides new insights about the impact of parametrization at scale. I didn't note any major weaknesses; in the other sections I noted some clarification questions. Other Comments Or Suggestions: Some typos : - In the contributions section : I think NTP and muP are not defined beforehand, they are defined in the next page. - Figure 6 - d : The title is clipped - L163 : Could you briefly explain the cubic complexity without going much in detail ? Questions For Authors: (The questions below are clarification questions and wouldn't impact the final score) - In Figure 1 and Figure 4, the law of the smallest width (64) is significantly noisier than the other larger widths. Could you share an intuition about why it might be the case ? - In Figure 6 : Given that ImageNet has a hierarchy of class similarities, how do you control for this similarity when splitting the tasks and classes ? - Also about the pre-training effect, wouldn't it be expected to occur in CIFAR-10 as well? Is it sensible to measure it with the forward transfer metric and compare for TinyImageNet and CIFAR-10 ? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: *We sincerely thank you for taking the time to review our paper and for the thoughtful feedback. We are glad to hear that you found the work really interesting and stimulating. We are also grateful to hear that you appreciate our contribution to the literature.* --- ## The Pretraining Effect We are glad you found the pretraining effect particularly interesting and intriguing; we share your enthusiasm for this phenomenon. We agree that these observations offer an important perspective on our results, in particular as the setting approaches training scenarios closer to the real world. In this regard, we think you might be interested in also inspecting the interplay between the pretraining effect and width scaling, as reported in Figures 17-19 of the appendix. Before answering your specific questions, we would like to clarify that the design choices for our experiments targeted the main goals of the paper. However, we agree with you that looking into the pre-training effect more thoroughly is certainly very interesting. ### Split-CIFAR > Is there a specific reason not to consider the CIFAR100 task? We find no specific reason for not considering CIFAR100 in our experiments. We have chosen CIFAR10 as a representative dataset for our experiments for computational convenience, and then directly extended our analysis to Split-TinyImageNet, aiming to provide stronger empirical evidence on a more challenging dataset. Nevertheless, we agree that CIFAR100 might be an interesting in-between dataset and we thank you for pointing it out. > Wouldn't the pretraining effect be expected to occur in CIFAR10 as well? In Split-TinyImageNet, we observe the pretraining effect happening for a larger number of classes per task, namely when the tasks have enough data diversity, and thus a greater overlap between task distributions.
Our intuition is therefore that the reason why this behavior is not observed in CIFAR10 is that the number of classes is too small to allow for a significant overlap between task distributions. ### Other Questions on the Pretraining Effect > Is it sensible to measure the pretraining effect with the forward transfer metric? We haven't considered measuring the pretraining effect with the forward transfer metric; instead, we have focused on disentangling the feature evolution of the first and later tasks. However, we agree that the forward transfer metric could be a valuable addition to our analysis and we will look into integrating it. > How do you control the hierarchy of class similarities in Split-TinyImageNet? We naively split the classes of TinyImageNet sequentially, without accounting for their semantics. However, when we vary the number of classes per task, we keep the class-to-task assignment fixed, to have a fair comparison across runs with different numbers of classes per task. Exploring the effect of the semantic hierarchy is, however, a very interesting avenue. For example, one interesting aspect one could explore in future work is whether one can reproduce our "pretraining effect" even with a fixed number of classes per task, and only by modifying the variety of the data from a semantic perspective. --- ## Other Comments - **Tractability of forgetting expression at $t \to \infty$**: The short answer is yes, albeit the formula is not present in the current version of the paper. Indeed, we were recently able to extend our perturbation theory approach for the infinite-time solution in the two-layer linear network setting, where we have a non-trivial relation between $\gamma_0$ and similarity $\rho$. - **Cubic complexity of infinite-width simulations**: The fields h and g are composed of sums over P data points, and discretized integrals over T time steps, i.e., h and g are $\mathcal{O}(PT)$.
The NTK is a sum of $\Phi$ and $G$, which are the inner product of respective fields and thus $\mathcal{O}(P^2T^2)$. Finally, the output is the matrix-vector product of the NTK and $\Delta$ of dimension P, and then integrated over T time steps following the PDE of Eq. 3. This leads to $\mathcal{O}(P^3T^3)$. - **Noise of low-width networks**: The training process is stochastic (over initialisation), and quantities like the NTK rely on the self-averaging properties characteristic of the $N\to\infty$ limit. Concretely, this means that in the limit, the network's dynamics become deterministic and instead, at finite widths, these observables are a partial/incomplete snapshot of the underlying process, introducing more stochasticity. - **Typos and notation inconsistencies** Thanks for pointing these out. We will correct these in the final version of the paper. --- *We hope that we have adequately addressed your questions. If you have further questions or comments, we remain at your disposal.*
FACTER: Fairness-Aware Conformal Thresholding and Prompt Engineering for Enabling Fair LLM-Based Recommender Systems
Accept (poster)
Summary: The authors present a fairness-aware framework for LLM-based recommendation systems that combines conformal prediction with dynamic prompt engineering. FACTER introduces an adaptive semantic variance threshold and a violation-triggered mechanism to tighten fairness constraints when biases arise. Claims And Evidence: mostly supported Methods And Evaluation Criteria: probably sound Theoretical Claims: Proofs for some claims. Experimental Designs Or Analyses: Probably valid. Supplementary Material: NA Relation To Broader Scientific Literature: Related to LLM recommendation fairness. Essential References Not Discussed: Some references are not included. Other Strengths And Weaknesses: Strengths 1. Integrate Conformal Prediction for Fairness Calibration Prior works on fairness in LLM-based recommendation [1] mainly rely on direct re-ranking or pretraining constraints. The paper leverages conformal prediction to define fairness violation thresholds, a statistically principled approach in LLM recommendations. 2. Effective Bias Mitigation Without Model Retraining Unlike prior adversarial training-based methods (e.g., [2]), which require modifying model parameters, FACTER works in a black-box setting, making it suitable for API-based deployments (e.g., OpenAI, Hugging Face models). Weaknesses 1. Incremental. The paper combines multiple existing techniques, such as conformal prediction and prompt engineering, but offers limited technical novelty. 2. Over-Reliance on Embedding-Based Fairness Measures FACTER defines fairness violations using embedding distances (computed via Sentence-Transformers). This assumes that embeddings capture fairness-sensitive information correctly, which may not always be valid. Bias in embeddings themselves could affect fairness evaluations [3,4]. If the embeddings already encode demographic biases, fairness constraints based on them may be flawed. 3. 
Limited Justification for Fairness Threshold Selection The paper claims to use conformal prediction for threshold calibration, but the choice of α (the miscoverage level) lacks theoretical justification. The authors should conduct an ablation study comparing different threshold selection methods (e.g., data-driven quantile calibration vs. fixed conformal bounds). 4. Scalability Issues in Large-Scale Deployments FACTER’s offline calibration phase has O(n²) complexity, requiring pairwise similarity comparisons across all calibration points. This makes it computationally expensive for large-scale datasets (e.g., MovieLens-20M has more than 100k users). The paper claims that approximate nearest neighbor search (O(n log n)) can improve efficiency, but this is not tested empirically. [1] Hua, W., Ge, Y., Xu, S., Ji, J., & Zhang, Y. (2023). UP5: Unbiased foundation model for fairness-aware recommendation. arXiv preprint arXiv:2305.12090. [2] Madras, D., Creager, E., Pitassi, T., & Zemel, R. (2018, July). Learning adversarially fair and transferable representations. In International Conference on Machine Learning (pp. 3384-3393). PMLR. [3] Gallegos, I. O., Rossi, R. A., Barrow, J., Tanjim, M. M., Kim, S., Dernoncourt, F., ... & Ahmed, N. K. (2024). Bias and fairness in large language models: A survey. Computational Linguistics, 50(3), 1097-1179. [4] Li, Y., Du, M., Song, R., Wang, X., & Wang, Y. (2023). A survey on fairness in large language models. arXiv preprint arXiv:2308.10149. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for carefully reading our work and your valuable comments. We address your concerns in the following paragraphs. *Weakness 1:* ## A1. While FACTER leverages existing techniques such as conformal prediction and prompt engineering, to our knowledge, __no prior work has unified these methods into a closed-loop, adaptive fairness calibration framework that dynamically refines prompts and thresholds to mitigate fairness violations in black-box LLM recommenders with statistical guarantees. Specifically, FACTER introduces the use of conformal prediction with fairness constraints, an iterative violation-triggered prompt refinement strategy that limits token bloat and generalizes to unseen biases, and provides a paradigm shift toward adaptive, statistically grounded fairness for black-box LLMs.__ Unlike static fairness methods (e.g., UP5 [Hua et al., 2023]), FACTER uses conformal prediction ([Angelopoulos et al., 2023]; §3.2) to adaptively tighten thresholds based on violation rates while ensuring 1−α coverage (Eq. 7). This closed-loop calibration addresses distribution shifts and emergent biases, reducing violations by 95.5% vs. UP5 (Table 1 of the original paper). Prior works treat fairness and accuracy as separate objectives ([Dwork et al., 2012]; [Gallegos et al.]). FACTER’s score $S_i = d_i+λ Δ_i$ (Eq. 5) jointly optimizes both, enabling a principled tradeoff validated by ablation studies (Table 5 of the original paper). Unlike static prompts (e.g., “avoid bias” in [Yang et al., 2022]), FACTER injects concrete bias patterns (e.g., “Gender=F → Romance-Only”) from a violation buffer (§3.3). This iterative refinement reduces token bloat while generalizing to unseen biases, outperforming Zero-Shot by 22× in violations (Table 1 of the original paper). 
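The violation-triggered prompt refinement described in A1 can be sketched as follows. This is an illustrative toy with a hypothetical string format for bias patterns, not the authors' actual prompt templates; the pattern example "Gender=F -> Romance-Only" is taken from the rebuttal:

```python
def refine_prompt(base_prompt, violation_buffer, max_patterns=3):
    """Inject recently observed bias patterns as explicit constraints,
    instead of a generic 'avoid bias' instruction. Capping the number
    of injected patterns keeps token bloat bounded."""
    recent = violation_buffer[-max_patterns:]
    if not recent:
        return base_prompt
    constraints = "; ".join(f"do NOT apply the pattern '{p}'" for p in recent)
    return f"{base_prompt}\nFairness constraints: {constraints}."

base = "Recommend 10 movies for this user."
violations = []                              # buffer of detected bias patterns
prompt_0 = refine_prompt(base, violations)   # unchanged: no violations yet

# A detected violation appends its pattern; the next prompt is tightened.
violations.append("Gender=F -> Romance-Only")
prompt_1 = refine_prompt(base, violations)
```

The key design choice mirrored here is that the loop reacts only to observed violations, so the prompt stays short when the model behaves fairly and grows by at most `max_patterns` constraints otherwise.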
Our proofs for embedding robustness (Theorem 1) and threshold convergence (Theorem 2) provide formal guarantees absent in prior fairness frameworks ([Shafer & Vovk, 2008]; [Madras et al., 2018]). Therefore, FACTER is not a simple combination of tools but a paradigm shift toward adaptive, statistically grounded fairness for black-box LLMs. *Weakness 2:* ## A2. __Please refer to A1 in the Reviewer 49dx rebuttal section.__ *Weakness 3:* ## A3. Thank you for your excellent suggestion. The choice of α directly controls the conformal coverage guarantee (1−α), ensuring that the probability of falsely flagging a fair recommendation as biased (Type I error) is bounded by α. This aligns with the theoretical foundations of conformal prediction (Angelopoulos et al., 2023). Specifically, setting α=0.10 provides a 90% coverage guarantee, meaning 90% of fair recommendations will not be erroneously flagged. However, lowering α (e.g., α=0.05) tightens the threshold, reducing Type I errors but potentially increasing Type II errors (failing to detect true violations). Conversely, higher α (e.g., α=0.20) relaxes the threshold, increasing Type I errors but improving detection power. To validate this trade-off empirically, based on the reviewer’s comment, we conducted an ablation study on MovieLens-1M, measuring violations, Type I/II errors, and recommendation quality. The results are reported in Table A5 below.

*Table A5: Impact of α on Fairness-Accuracy Tradeoff*

| α | Coverage (1−α) | Type I Error (↓) | Type II Error (↓) | #Violations (↓) | NDCG@10 (↑) |
|------|------|------|------|------|------|
| 0.01 | 99% | 0.6% | 18% | 4 | 0.454 |
| 0.05 | 95% | 1.2% | 12% | 9 | 0.451 |
| 0.10 | 90% | 2.1% | 8% | 15 | 0.447 |
| 0.20 | 80% | 4.0% | 5% | 27 | 0.442 |

As the results show, lower α (e.g., 0.01) prioritizes strict fairness (fewer violations, low Type I errors) but risks missing true biases (higher Type II errors). Higher α (e.g., 0.20) improves detection power (lower Type II errors) at the cost of increased false alarms.
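To make the role of α concrete, the split-conformal quantile calibration underlying this trade-off can be sketched in a few lines of pure Python. The score form S_i = d_i + λΔ_i is quoted from the rebuttal; everything else (data, names) is an illustrative stand-in, not the FACTER implementation:

```python
import math

def conformal_threshold(scores, alpha):
    """Split-conformal threshold: the ceil((n+1)(1-alpha))-th smallest
    calibration nonconformity score, giving 1-alpha coverage."""
    n = len(scores)
    rank = min(math.ceil((n + 1) * (1 - alpha)), n)  # clamp for tiny alpha
    return sorted(scores)[rank - 1]

def violation_flags(scores, tau):
    """Flag a recommendation as a potential fairness violation when its
    nonconformity score exceeds the calibrated threshold tau."""
    return [s > tau for s in scores]

# Toy calibration scores, playing the role of S_i = d_i + lambda * Delta_i
calib = [0.1 * i for i in range(1, 101)]
tau = conformal_threshold(calib, alpha=0.10)
# At most ~alpha of the calibration scores exceed tau, so at most ~10%
# of fair examples would be falsely flagged (Type I error).
frac_flagged = sum(violation_flags(calib, tau)) / len(calib)
```

Lowering α raises the rank of the selected quantile, which raises τ, flagging fewer examples: fewer Type I errors at the cost of more Type II errors, matching the direction of the ablation table.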
In the final manuscript, we will expand Section 3.2 to explicitly discuss how α governs the Type I/II error trade-off, referencing Eq. (7) in our paper and Theorem 1 in Appendix J.1.1. *Weakness 4:* ## A4. __Please refer to A2 in the Reviewer b7t4 rebuttal section.__ Moreover, regarding your concern about essential references, we will add the mentioned references [3,4], which are survey papers, to the final version of the paper. Unfortunately, we do not know what other missing references you have in mind. If there is an opportunity for you to anonymously provide these additional references to us, we will be grateful and will add them (and even provide comparisons with the most relevant ones) to the final manuscript.
Summary: 1. This paper proposes FACTER, a fully post hoc framework that combines conformal thresholding and dynamic prompt engineering to address biases in black-box LLM-based recommender systems. 2. FACTER adaptively refines a fairness threshold via semantic variance checks and updates prompts whenever it detects violations, requiring no model retraining. 3. Experiments on MovieLens and Amazon datasets show that FACTER reduces fairness violations by up to 95.5% compared to baselines while preserving key recommendation metrics. 4. The paper also provides theoretical guarantees for the proposed conformal calibration framework, including a Type I error bound and detection power. Claims And Evidence: The claims made by the paper are supported by either empirical experiments or theoretical guarantees. Methods And Evaluation Criteria: 1. The proposed methods are clearly introduced in details with both the offline calibration phase and the online calibration phase. 2. The benchmark datasets are widely used datasets for recommendation evaluation: MovieLens-1M and Amazon Movies & TV. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. The experiments are paired with both fairness and accuracy metrics. 2. The proposed approach is compared against two baselines, one as the previous SOTA and the other as a baseline with direct LLM-based ranking without fairness correction. 3. The proposed approach is evaluated on different LLM models, and on different evaluation sets. 4. The extended ablation study is also provided in the appendix. Supplementary Material: The extended ablation study on lambda and gamma looks reasonable. Relation To Broader Scientific Literature: This paper might be insightful to broader communities in other domains that are also interested in fairness correction. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper is well written and easy to understand. 2.
The proposed approach is novel and effective in the experiments. 3. The authors provide theoretical guarantees for their approach. Weaknesses: 1. The approach introduces many hyper-parameters whose values can be hard to fine-tune. 2. As mentioned in the paper, the offline phase can be expensive. 3. It cannot correct the embedding function bias. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive assessment. We address your points in the following paragraphs. *Weakness 1:* ## A1. Our approach requires multiple hyper-parameters (e.g., $\lambda$, $\gamma$, $\tau_\rho$), which we tune via grid search on a 20% hold‑out calibration subset (see Section 4.2 and Appendix §A.2 for detailed ablation tables). Other methods such as Bayesian optimization ([1],[2]) can also be used. Although the number of hyper-parameters may appear excessive, it is comparable to (or smaller than) the parameter sets in other post‑hoc fairness algorithms (e.g., adversarial fine‑tuning or distribution‑level constraints, Madras et al., 2018; reweighting or threshold methods, Dwork et al., 2012; Angelopoulos et al., 2023). Empirically, moderate deviations in these hyperparameters do not substantially change fairness or accuracy, as demonstrated by our ablation studies in Tables 6–7. The need for such tuning is indeed common across state‑of‑the‑art fairness frameworks (Hua et al., 2023). In our final manuscript, we will revisit and refine these hyperparameter discussions, incorporating the new references and clarifying our tuning strategies in the text. References: *[1] Shahriari, Bobak, et al. "Taking the human out of the loop: A review of Bayesian optimization." Proceedings of the IEEE 104.1 (2015): 148-175.* *[2] Frazier, Peter I. "A tutorial on Bayesian optimization." arXiv preprint arXiv:1807.02811 (2018).* *Weakness 2:* ## A2. As detailed in __Section 3.4__ of the original manuscript, we address scalability by using __approximate nearest neighbor (ANN)__ methods for __all offline calibration steps__, reducing the naive complexity from O(n^2) to about __O(n log(n))__. We also employ GPU batch processing and other parallel optimizations, so in practice, the runtime sometimes scales sublinearly with n, as larger batches can be processed more efficiently. 
Table A4 illustrates this: MovieLens‑1M (around 6k users) takes around 40–65 minutes, whereas MovieLens‑20M (around 138k users) extends to 6–8 hours, which is acceptable for overnight jobs.

*Table A4: Approximate Calibration Times*

| Dataset | #Users | Offline Calib. Time (ANN) | Online Inference (ms/query) |
|:-----------------:|----------:|-------------------------------:|--------------------------------:|
| MovieLens‑100k | 943 | 2–5 min | ~80–100 ms |
| MovieLens‑1M | 6,040 | 40–65 min | 140–160 ms |
| MovieLens‑20M | 138,000 | 6–8 hrs | 180–220 ms |

In the final version, we will emphasize that ANN was used for all reported studies and add the above table and discussion to clarify how GPU batch processing and approximate search heuristics yield observed calibration times that sometimes grow sublinearly with dataset size, while the theoretical complexity remains O(n log(n)). *Weakness 3:* ## A3. __Please refer to A1 in the Reviewer 49dx rebuttal section.__
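To illustrate the ANN-based speed-up discussed above, here is a minimal random-projection approximate nearest-neighbour sketch: sort once by a random 1-D projection (O(n log n)) and probe only the bracketing candidates instead of comparing all pairs (O(n^2)). This is our own toy illustration of the idea, not FACTER's actual implementation, which could use any off-the-shelf ANN library.

```python
import bisect
import random

def ann_neighbors(embeddings, query, k=5, n_probe=32):
    # Project every point onto one random direction, sort once, then scan
    # only ~n_probe candidates around the query's projected position.
    dim = len(embeddings[0])
    direction = [random.gauss(0.0, 1.0) for _ in range(dim)]
    proj = lambda v: sum(a * b for a, b in zip(v, direction))
    order = sorted(range(len(embeddings)), key=lambda i: proj(embeddings[i]))
    keys = [proj(embeddings[i]) for i in order]
    pos = bisect.bisect_left(keys, proj(query))
    lo, hi = max(0, pos - n_probe), min(len(order), pos + n_probe)
    candidates = [order[i] for i in range(lo, hi)]
    dist = lambda i: sum((a - b) ** 2 for a, b in zip(embeddings[i], query))
    return sorted(candidates, key=dist)[:k]
```

With `n_probe` fixed, each query costs O(log n + n_probe) after the one-off sort, which is why the offline calibration times in Table A4 grow far more slowly than quadratically.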
Summary: In this paper, the authors propose FACTER (Fairness-Aware Conformal Thresholding and Prompt Engineering), a retrain-free framework that uses a designed non-conformity score and conformal prediction to dynamically adjust the fairness-aware prompts and mitigate fairness violations in LLM-based recommender systems. Empirical results on MovieLens and Amazon datasets show that FACTER essentially reduces fairness violations while maintaining strong recommendation accuracy. ===== update after rebuttal ===== In the rebuttal, the authors provided clarification regarding the practicality and the underlying motivation of the proposed non-conformal score. I encourage the authors to revise the manuscript thoroughly and further clarify the relevant definitions in the final version. I have chosen to maintain my original score. Claims And Evidence: The claims regarding demographic biases in LLM-based recommendation systems, the use of conformal prediction as a control mechanism of fairness violations by setting dynamic thresholds, and the utility of the iterative prompt engineering method are supported by theoretical and empirical evidence in this paper. Methods And Evaluation Criteria: The proposed method is overall reasonable and sound. The evaluation metric assesses fairness violation control in an LLM-based recommendation system at both the group and individual levels. Theoretical Claims: The theoretical claims are supported by necessary proofs. Experimental Designs Or Analyses: Experimental designs and analyses are sound. Supplementary Material: I have briefly reviewed the appendices, which include the necessary proofs, hyperparameter analysis, and detailed explanations of prompt engineering. Relation To Broader Scientific Literature: To the best of my knowledge, the proposed method is a novel solution for fairness violations mitigation in LLM-based recommender systems. Essential References Not Discussed: All key references are discussed in the related work section. 
Other Strengths And Weaknesses: Strengths - This paper proposes a novel and effective black-box-friendly approach that integrates statistical fairness calibration with iterative prompt engineering for LLM-based recommender systems. - The paper is overall well-written and easy to follow. - The authors discuss the necessary theoretical guarantees and limitations of their proposed algorithm. - Experimental validation on two real-world datasets verifies the effectiveness of the proposed algorithm. Weaknesses: - My first concern is about the assumptions regarding the calibration set and embedding shift robustness. How does the proof of embedding shift robustness in Theorem 1 depend on the quality or diversity of the calibration set, given the proposed non-conformity score? - A related question concerns the definition of the non-conformal score. More insight into the design rationale of the non-conformity score within the recommendation system would enhance the credibility of the proposed metric and approach. - There are some inaccurate descriptions and minor questions: - How is $e^{y}_{new}$ obtained (in line 246, left column) ? - It is noted in lines 322-328 (left column) that “We employ three LLMs of varying sizes”. However, all three models have approximately the same parameter count (~7B–8B parameters), which appears to contradict the claim of "varying sizes”. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review. We address your comments/concerns in the following paragraphs. *Weakness 1*: ## A1. Our theoretical guarantee (Theorem 1) assumes that the calibration set is approximately exchangeable with future test data. Hence, the quality and diversity of the calibration set are essential, and we have considered them in our assumptions. Moreover, we can follow some guidelines to ensure the exchangeability of the test distribution with the calibration set. These guidelines are as follows: (i) stratified sampling across key user demographics to ensure diverse coverage of protected attributes, (ii) balancing user groups so that each protected category is well-represented, and (iii) periodic refresh of calibration data in real-world deployments to track shifting user populations or model updates. In addition, we performed experiments across 3 calibration seeds and found that fairness metrics (e.g., CFR, violations) varied by <5%, indicating stability under reasonable data variations. These explanations and experimentation will be added to the final manuscript. Finally, regarding the embedding bias robustness, please refer to __A1 in the Reviewer 49dx rebuttal section__. *Weakness 2:* ## A2. The non-conformity score $S_i = d_i + \lambda \Delta_i$ integrates accuracy and fairness by combining a predictive error term ($d_i$, the cosine distance between recommendations and user preferences) and a fairness penalty ($\Delta_i$, the maximum embedding divergence across demographic groups for similar users). The trade-off parameter $\lambda = 0.7$, selected via grid search, balances Pareto-optimal fairness-accuracy trade-offs, reducing violations by 95% while retaining 98.7% recommendation quality (Appendix Table 5 of the original paper). Grounded in multi-objective optimization, this additive design enforces individual fairness by penalizing disparities between comparable users, validated empirically to outperform multiplicative alternatives. 
The score ensures equitable relevance in black-box LLMs without internal access. We will provide this additional discussion on the design rationale in the final version. *Weakness 3.1:* ## A3.1. The embedding $e^{y}_{\text{new}}$ is computed as follows: - Directly, if the ground-truth $y_{\text{new}}$ is available: $e^{y_{\text{new}}} = \mathrm{Emb}(y_{\text{new}})$ - Via Approximate Nearest Neighbor (ANN, mentioned in Section 3.4), if $y_{\text{new}}$ is not available (e.g., cold-start): we retrieve the closest calibration item using ANN and use its embedding. We will clarify this process comprehensively in the final version. *Weakness 3.2:* ## A3.2. Thank you for your comment. Yes, you are correct: the three LLMs have nearly the same parameter counts. What we meant to say is that these models are architecturally different, or at least different versions (in the case of the LLaMA models). While LLaMA3-8B, LLaMA2-7B, and Mistral-7B have similar parameter counts, their architectures differ significantly: LLaMA uses a pure decoder, while Mistral integrates sliding-window attention. Architectural diversity tests FACTER’s generalizability, a key strength highlighted in §4.2. We will revise the text to correct the sentence.
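The additive non-conformity score $S_i = d_i + \lambda \Delta_i$ described in A2 can be sketched in a few lines. The toy vectors and function names below are our own; in the paper the terms are computed over SentenceTransformer embeddings.

```python
def cosine_distance(u, v):
    # d = 1 - cos(u, v); smaller means the recommendation better matches
    # the user's preference embedding.
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return 1.0 - dot / (norm(u) * norm(v))

def nonconformity_score(rec_emb, pref_emb, peer_rec_embs, lam=0.7):
    # S_i = d_i + lam * Delta_i: predictive error plus the maximum
    # embedding divergence from recommendations served to demographically
    # different but otherwise similar users (the peers).
    d = cosine_distance(rec_emb, pref_emb)
    delta = max(cosine_distance(rec_emb, p) for p in peer_rec_embs)
    return d + lam * delta
```

Because the two terms enter additively, raising $\lambda$ trades recommendation accuracy for individual fairness directly, which is the knob tuned by grid search in the rebuttal.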
Summary: This paper proposes FACTER, a framework that integrates conformal prediction with iterative prompt engineering to mitigate demographic biases in recommender systems driven by large language models (LLMs). The authors introduce a notion of semantic variance as a proxy for identifying biased outputs when protected attributes (e.g., gender, age) are minimally changed. They then use conformal prediction to establish and dynamically update a fairness threshold. Whenever outputs exceed this threshold (indicating a likely bias), the system auto-updates the prompt to reduce future occurrences of the same pattern. Experiments on MovieLens and Amazon datasets demonstrate that FACTER reduces fairness violations (up to 95.5%) with minimal cost to recommendation accuracy. Claims And Evidence: Claim: FACTER can detect biased outputs by monitoring semantic-embedding distances between recommendations that differ only in protected attributes. Evidence: The authors measure counterfactual and group-level fairness metrics and show that when sensitive attributes flip, FACTER robustly flags cases that deviate from expected similarity bounds. Claim: Integrating conformal prediction into fairness monitoring yields statistical guarantees on the probability of future violations (Type I error bound). Evidence: The paper references classical conformal coverage results, deriving that the probability of observing an unfair outcome above the threshold can be controlled at a desired level. Empirical tests exhibit a violation rate consistent with the theoretical bounds. Claim: Prompt engineering—combined with an “avoid these biases” instruction—can iteratively reduce repeated demographic stereotypes without model retraining. Evidence: The authors show a clear downward trend in fairness violations across multiple iterations. Figures indicate that after injecting new negative examples into the prompt, the model’s outputs become more uniform across sensitive attribute groups. 
Methods And Evaluation Criteria: 1) The authors use MovieLens-1M and Amazon Movies & TV as benchmarking datasets. 2) They measure recommendation accuracy via Recall@10 and NDCG@10, standard in top-N recommendation tasks. 3) For fairness metrics, they evaluate the number of threshold-based violations, CFR (Counterfactual Fairness Ratio), and additional group-level metrics (SNSR, SNSV). They compare their method to two baselines (a zero-shot ranker and UP5) and show that FACTER achieves stronger fairness with minimal performance loss. Theoretical Claims: The paper builds on conformal prediction theory, asserting coverage guarantees for the fairness-related “non-conformity scores.” The proofs in the Appendix (or supplementary) outline: * A Type I error bound, showing that the probability of a false alarm is limited by alpha. * An adaptive mechanism that shrinks the threshold if repeated violations are detected. The derivations seem to follow standard conformal prediction arguments, citing Shafer & Vovk (2008) and subsequent expansions. The proofs appear sound; I did not find obvious errors in the theoretical steps for bounding the false alarm rate or for showing coverage under exchangeability. Experimental Designs Or Analyses: Design: The authors conduct offline calibration (with ~70% of data) to learn an initial threshold, then apply their iterative approach on the remaining 30% test portion. Results: Each iteration measures the number of flagged violations, updates the prompt, and optionally tightens the threshold. They present results across up to 3–5 iterations and show stable improvements. Ablation Studies: The paper includes ablations on key hyperparameters which strengthen confidence in the approach’s robustness. Supplementary Material: Yes. The supplemental appendix provides: Additional proofs of conformal coverage and extended ablation studies on how different prompt-engineering strategies (generic warnings vs. 
enumerated negative examples) affect final fairness outcomes. Relation To Broader Scientific Literature: Fairness in recommendations: The paper positions itself relative to methods like UP5 and to zero-shot LLM recommendation approaches. These references are appropriate for the fairness + recommendation domain. Bias in LLMs: The authors cite relevant prior work on generative-model biases and highlight the challenge of black-box, API-based LLMs, referencing classical approaches like adversarial training and more recent studies on prompt-level interventions. Essential References Not Discussed: I don’t see any glaring omission of key references. A recent line of work on “fairness calibration under distribution shift” might complement the discussion. But this is not essential to the paper’s contributions. Other Strengths And Weaknesses: Strengths * The method is model-agnostic and does not require finetuning or direct access to internal LLM weights, which is extremely relevant for real-world API-based systems. * Thorough experiments on multiple datasets and with multiple LLM backbones (LLaMA2, Mistral). Clear, iterative demonstration of how fairness improvements accumulate across calibration steps, giving the paper a strong practical dimension. Weaknesses * The approach depends on an external embedding model (e.g., SentenceTransformer). If the embedding itself is biased, that might compromise fairness detection. A brief discussion of how to mitigate bias in the embedding stage would be valuable. * The iterative prompting approach can become token-heavy, especially if many examples of biases must be enumerated. * The paper focuses on a single type of fairness definition (counterfactual fairness via minimal attribute changes). One might be curious about multi-attribute fairness. Other Comments Or Suggestions: Real-World Data: It might be interesting to see how FACTER performs if user attributes are uncertain, missing, or inferred from partial data. 
Questions For Authors: Q How robust is FACTER if the chosen SentenceTransformer model has inherent biases? Have you tested with multiple text embedders to confirm consistency? Q As you keep appending negative examples (“avoid these biases”), how do you manage or prune the prompt when it grows too large? Q Have you considered letting real users label certain recommendations as biased to guide the threshold updates, rather than only relying on a reference item or the local calibration set? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed feedback. We address your comments in the following paragraphs. *Weakness 1 and Q1*: ## A1. As noted in Section 3.4, we acknowledge that any single embedding model can carry bias. Our theoretical analysis (Appendix §J.1.1, Theorem 1) shows that if embeddings drift by $\epsilon_{emb}$, the fairness score $S_i$ changes by at most $\Delta S_i \le 2(1+\lambda)\epsilon_{emb}$. We initially used the simplified bound $3\epsilon_{emb}$ for $\lambda \le 0.5$, but our chosen $\lambda = 0.7$ yields $\Delta S_i \le 3.4\,\epsilon_{emb}$, which remains manageable for typical $\epsilon_{emb} \approx 0.05$–$0.1$. Empirically, FACTER reduces fairness violations by 95.5% (Table 1), showing that real‑world performance is robust despite minor embedding shifts. By using violation‑triggered prompt updates, FACTER incrementally mitigates embedding flaws, adding explicit “avoid” examples to steer the LLM away from bias (e.g., Table 3). We further tested two additional embedders (Sentence‑BERT‑base and RoBERTa‑large‑nli‑stsb‑mean‑tokens) in Table A1, finding a similar reduction in violations and nearly equal NDCG@10, indicating minimal impact from different initial biases:

*Table A1: Embedding Consistency Test*

| Embedder | #Violations | NDCG@10 |
|-|-|-|
| paraphrase-mpnet-base-v2 | 5 | 0.445 |
| Sentence-BERT-base | 6 | 0.447 |
| RoBERTa-large-nli-stsb-mean-tokens | 4 | 0.440 |

In the final version, we will expand Section 3.4 with a concise overview of domain‑tailored fine‑tuning, adversarial or hard‑debiasing methods (Bolukbasi et al., 2016), and multi‑embedder ensembles to reduce reliance on a single model. We will outline how practitioners can retrain or lightly tune embedding layers using curated, bias‑filtered corpora, integrate adversarial loss terms that penalize demographic correlations, or combine embeddings from different SentenceTransformer variants. We will also include Table A1 and its discussion in the final paper. *Weakness 2 and Q2*: ## A2. We store only the 50 most recent bias‑avoid examples. 
This prevents unbounded prompt growth. As shown in Table 7 (Appendix section of the original manuscript), despite this constraint, this approach reduces violations by ~90% without hitting token limits. *Q3*: ## A3. Thank you for the suggestion. Although we did not include real-user feedback in the original manuscript, it is straightforward to do in FACTER. To illustrate, we simulated a scenario where synthetic users correct 10–30% of flagged violations. We augmented FACTER’s threshold updates with these “corrected” examples and observed how violations fell over three online calibration iterations on MovieLens‑1M. Table A2 shows that even modest correction rates (10–30%) accelerate violation reduction. With 30% correction, violations drop to 1 (versus 5 in the baseline), confirming that user feedback can enhance FACTER’s fairness calibration. In the final paper, we will add a subsection in Section 5 (Future Work) discussing how automated detection and human‑in‑the‑loop validation can be combined.

*Table A2: Synthetic User Feedback Impact*

| User Correction Rate | #Violations (Iter 1) | #Violations (Iter 2) | #Violations (Iter 3) |
|-|-|-|-|
| 0% (Baseline) | 112 | 28 | 5 |
| 10% | 112 | 23 | 3 |
| 20% | 112 | 18 | 2 |
| 30% | 112 | 14 | 1 |

We will include the table and corresponding results in the final paper. *Weakness 3*: ## A4. While our current focus is on counterfactual fairness via minimal attribute changes, we agree that real-world applications often require multi-attribute fairness. Our formulation naturally extends to this case by treating the sensitive attribute as a vector $a = (a_1, \dots, a_k)$, and requiring that for any $(x, a)$ and $(x, a')$ differing in at least one component, $\|y(x,a) - y(x,a')\| \le \delta$. Under conformal calibration, the non-conformity score becomes $S = d + \lambda \max_{j \in N(x)} \|\mathrm{Emb}(y) - \mathrm{Emb}(y_j)\|$, where $N(x)$ includes calibration points with the same non-sensitive features but differing in at least one attribute dimension. The same coverage guarantee (Eq. (7)) applies due to exchangeability. 
To validate this, we conducted a multi-attribute evaluation on the MovieLens dataset (using both Gender and Age). As shown in Table A3, FACTER remains effective under this setting, with minimal accuracy drop.

*Table A3: Multi-Attribute Fairness Evaluation (Preliminary Results)*

| Metric | Baseline | FACTER (iter 3) |
|-|-|-|
| Counterfactual Fairness Ratio (CFR) | 0.72 | 0.64 |
| Group Similarity Ratio (GSR) | 0.083 | 0.041 |
| Total Violation Count (Multi-Attribute) | 112 | 7 |

We will include this discussion and additional experiments in the final version of the manuscript.
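The multi-attribute extension above can be sketched directly from its definition. The dictionary layout and helper names below are our own assumptions: each calibration point is taken to store non-sensitive features `x`, an attribute vector `a`, and a recommendation embedding `y_emb`.

```python
def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def multi_attribute_score(x, a, y_emb, pref_emb, calibration, lam=0.7):
    # S = d + lam * max_{j in N(x)} ||Emb(y) - Emb(y_j)||, where N(x)
    # holds calibration points sharing the non-sensitive features x but
    # whose attribute vector differs from a in at least one component.
    peers = [c for c in calibration
             if c["x"] == x and any(p != q for p, q in zip(a, c["a"]))]
    d = euclidean(y_emb, pref_emb)
    delta = max((euclidean(y_emb, c["y_emb"]) for c in peers), default=0.0)
    return d + lam * delta
```

Since the score depends on the calibration data only through exchangeable neighbourhoods, the same conformal coverage argument as in the single-attribute case carries over.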
Return Capping: Sample Efficient CVaR Policy Gradient Optimisation
Accept (poster)
Summary: This paper presents a new method for optimizing Conditional Value at Risk (CVaR) in reinforcement learning using policy gradients. Traditional CVaR policy gradient methods suffer from poor sample efficiency because they discard a large proportion of trajectories. The authors propose Return Capping, a novel approach that retains all trajectories but caps high returns at a threshold, ensuring better sample efficiency. Empirical results across multiple risk-sensitive environments show that Return Capping outperforms existing CVaR methods in both learning efficiency and robustness. Claims And Evidence: 1. It is claimed that the solution of the return capping objective is the $CVaR_\alpha$-optimal policy if the cap is set correctly. I believe there are major technical flaws in the proofs of this; for example, the authors claim that (7) is the objective of return capping. However, in my view, if $C$ is independent of $\tau$, then when $C < R(\tau)$, so that $C$ is chosen by the $\min(\cdot)$ function, it has no effect on optimizing $\pi_\theta$. Therefore, (7) is actually equivalent to (5), and the provided theoretical result is trivial and does not apply to the actual return capping algorithm the authors propose. 2. Using $VaR_\alpha(\pi_{\theta_{k-1}})$ at the start of training does seem to be a sensible choice. Methods And Evaluation Criteria: The proposed method does make sense, but it is not theoretically grounded. The algorithms are evaluated with CVaR, which is a trivial choice. Theoretical Claims: I have checked the correctness of the proof. The proof itself seems fine, although the result seems trivial and irrelevant to the proposed method. Experimental Designs Or Analyses: The experiments suggest superiority of the proposed method compared to baselines. However: 1. The experiments are only performed in relatively low-dimensional environments. 
Considering the lack of theoretical contributions, I would expect more experiments and empirical analyses than in previous works. 2. Different comparisons are conducted in different environments; e.g., offset and mix (offset) are only evaluated in Lunar Lander. This lowers the statistical significance of the comparisons. Supplementary Material: I had a glance over all the supplementary material. Not many contributions are included in it. Relation To Broader Scientific Literature: This paper proposes a practical extension to previous CVaR policy optimization algorithms. Essential References Not Discussed: I am not aware of essential references not discussed here. Other Strengths And Weaknesses: - Overall, the paper is less dense than other top-conference-level papers: a long, unimportant proof is included in the main paper, and the environment-describing figures and result figures are organized loosely. Other Comments Or Suggestions: . Questions For Authors: - What do you think about the issue above regarding the theoretical result? - Is there any other way to formulate and justify the proposed algorithm? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: The main issue suggested in this review is flaws with the theoretical proof of the equivalence of Return Capping and standard CVaR PG optimisation. We would summarise the two points the reviewer raises as: - Given Eqn (7), when $C$ is selected in the $\min(\cdot)$, this trajectory will have no effect on the optimisation of $\pi_\theta$ - Eqn (7) is not being applied to the actual return capping algorithm However, the above points are not the case, as we can show below. **Addressing the first point:** Given Equation 7 $$J^C (\pi_\theta; C) = E_{\tau \sim \pi_\theta} [\min(R(\tau), C)].$$ To compute policy gradients, we need to compute the gradient with respect to $\theta$ $$\nabla_\theta J^C (\pi_\theta; C) = \nabla_\theta E_{\tau \sim \pi_\theta} [\min(R(\tau), C)].$$ This can be expanded to the integral $$\nabla_\theta J^C (\pi_\theta; C) = \nabla_\theta \int_\tau P(\tau|\theta) \min(R(\tau), C) \, d\tau.$$ This integral highlights the flaw in the first point: when either $R(\tau)$ or $C$ is selected in the $\min(\cdot)$, the probability of that given trajectory is still dependent on $\theta$, and thus will have an effect on the policy optimisation. This addresses the first point made. 
**Addressing the second point:** From the equation above, we can move the gradient inside the integral and use the log-derivative trick to put the integral in the form $$\nabla_\theta J^C (\pi_\theta; C) = \int_\tau P(\tau|\theta) \nabla_\theta \log P(\tau|\theta) \min(R(\tau), C) \, d\tau.$$ We can then return this back to an expectation $$\nabla_\theta J^C (\pi_\theta; C) = E_{\tau \sim \pi_\theta} [\nabla_\theta \log P(\tau|\theta) \min(R(\tau), C)],$$ and then we can reformulate the trajectory probability in terms of action probability $$\nabla_\theta J^C (\pi_\theta; C) = E_{\tau \sim \pi_\theta} [\sum_{t=0}^T \nabla_\theta \log \pi_\theta (a_t|s_t) \min(R(\tau), C)].$$ From here, we can apply commonplace RL techniques such as using return-to-go, rather than full episode returns, and using a Value function baseline. Note that return-to-go is computed as: $$\hat{R}^C_t = \min(\sum_{t'=0}^T R(s_{t'}, a_{t'}, s_{t'+1}), C) - \min(\sum_{t'=0}^t R(s_{t'}, a_{t'}, s_{t'+1}), C).$$ When incorporating both of these techniques, the gradient update becomes: $$\nabla_\theta J^C (\pi_\theta; C) = E_{\tau \sim \pi_\theta} [\sum_{t=0}^T \nabla_\theta \log \pi_\theta (a_t|s_t) (\hat{R}^C_t - V(s_t))].$$ In practice, we used a Generalised Advantage Estimator to compute advantage. This gradient update can then be clipped, using the PPO algorithm, and this is then the exact gradient update used in Return Capping. As such, the Return Capping gradient update is derived directly from Eqn (7), addressing the reviewer's second point. We are happy to provide any further clarification about the proof, or about how the derivations above show that the reviewer's main issues are unfounded. **Other Issues Raised** - In relation to the scale of the environments, we have discussed this thoroughly in our response to reviewer r3Le - The reason we have only included MIX [1] in Lunar Lander is that this baseline requires an optimal Expected Value policy. 
Lunar Lander was the only environment where Return Capping required the Expected Value policy to set $C^M$. However we could include this baseline in other environments - it generally just converged to the optimal Expected Value policy, similarly to in Lunar Lander. As explained in the paper, the Return Capping (offset) example was included to demonstrate the relative performance of Return Capping accounting for the environment steps required to train an optimal Expected Value policy. [1] Luo, Yudong, et al. "A simple mixture policy parameterization for improving sample efficiency of cvar optimization." --- Rebuttal Comment 1.1: Comment: Sorry for the incorrect review. I think I was confused about this; I am updating my score accordingly.
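The capped return-to-go used in the gradient derivation above can be sketched in a few lines. This is our own illustration, not the authors' code, and we read the inner sums as the capped total return minus the capped reward prefix accumulated before step t (so that with a very large cap it reduces to the usual uncapped return-to-go).

```python
def capped_return_to_go(rewards, cap):
    # R_hat^C_t = min(total return, C) - min(return accumulated before t, C).
    # Once the running return reaches the cap, later steps earn no further
    # credit, removing any incentive to push returns above the cap
    # (set to an estimate of VaR_alpha during training).
    total = min(sum(rewards), cap)
    rtg, prefix = [], 0.0
    for r in rewards:
        rtg.append(total - min(prefix, cap))
        prefix += r
    return rtg
```

Note that every trajectory, capped or not, still contributes a (state, action, advantage) tuple to the PPO update, which is the sample-efficiency gain over discarding the best $1-\alpha$ fraction of trajectories.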
Summary: The authors consider risk sensitive reinforcement learning where the goal is to optimize the $\alpha$-parameterized tail of the return distribution (on the lower end). Prior work proposed an approach known as conditional value at risk (CVaR) policy gradient, where the algorithm filters out all trajectories except the worst $\alpha$ fraction and does policy gradient on the remaining trajectories. The authors note that this results in wasting a lot of samples, and propose an alternative equivalent formulation with a parameterized threshold for capping the return instead. The authors show that for an appropriately chosen (only approximable) threshold for the return cap, the two approaches are equivalent. The paper evaluates the proposed approach against prior baselines in numerical experiments on toy domains and shows benefits to the new proposal. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The main claim is in Proposition 4.1, which looks reasonable to me. Experimental Designs Or Analyses: Yes, the authors consider an array of small scale toy environments to evaluate the new algorithm against baselines. The authors also modify the lunar lander environment to make it more interesting to see the difference between risk neutral versus risk sensitive optimization, which seems like a reasonable design. Supplementary Material: no Relation To Broader Scientific Literature: The authors show an intuitive and easy to implement alternative that is equivalent to prior work (filtering trajectories), but with better sample efficiency. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: Clear and concise algorithm proposed based on a novel but simple insight with numerical evidence. Weaknesses: Most of the experiments are on very small toy domains. 
Other Comments Or Suggestions: The relation between Eq (2) and (3) appears elementary but might be worth spelling out clearly as it may not be obvious when encountering for the first time. Questions For Authors: While the claim about sample efficiency sounds reasonable, it is not clear how that materializes in practice given the lack of sensitivity of the policy to the outcome beyond the threshold. Furthermore, the graphs appear to show a benefit for the final risk sensitive return, but not necessarily the speed of convergence (as one may expect for a drastic reduction in sample efficiency). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: In relation to the size of the environments presented, we have discussed this in our rebuttal to reviewer r3Le. **Sample Efficiency** For the specific question on sample efficiency, whilst there are improvements to sample efficiency using Return Capping compared to CVaR PG, it is unlikely to achieve improvements of magnitude $\frac{1}{\alpha}$ as by capping returns, we do lose some information about trajectories. However there is still information to be learnt from capped trajectories. Primarily, examples of trajectories that reach the cap are present in policy optimisation. In standard CVaR PG, we effectively only have negative examples of trajectories that resulted in low returns. Whilst we lose the relative performance between capped trajectories by capping returns, we still do get a gradient between the uncapped low performing trajectories and these capped trajectories. By training using capped trajectories, the policy is able to learn from positive examples (e.g. actions that result in the trajectory reaching the cap) rather than just having to rely on negative examples. In terms of speed of convergence observed in empirical results, in both Guarded Maze environments and the Lunar Lander environment, we see much faster convergence for Return Capping compared to the risk-sensitive baselines. The Expected Value policy does converge more quickly in all environments but this is unsurprising given it is learning from all uncapped trajectories. In the betting game, the CVaR PG policy does converge more quickly, albeit to a less optimal policy, than Return Capping. We suggest that this is likely due to the CVaR PG policy being a much more simplistic, conservative policy compared to the more optimal Return Capping policy. 
We agree that in the AV environment there is minimal performance improvement over CVaR PPO*, but the better performance of Return Capping in all other presented environments suggests it is a better method overall. *It should be noted that the plot in Figure 5a excludes CVaR PPO outliers that did not converge to the optimal CVaR policy (see Figure 7 in the Appendix for the unmodified plot), so although median performance is comparable to Return Capping in this environment, CVaR PPO is less consistent at finding the CVaR-optimal policy.
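The contrast drawn in this rebuttal between discarding trajectories (CVaR PG) and capping them can be illustrated numerically. The following is a minimal sketch, not the authors' implementation; the two weighting schemes are simplified stand-ins for per-trajectory REINFORCE weights:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(loc=1.0, scale=1.0, size=1000)  # hypothetical episode returns
alpha = 0.1

# CVaR-PG style: only the worst alpha-fraction of trajectories carry gradient;
# everything above the empirical VaR contributes exactly zero.
var = np.quantile(returns, alpha)
w_cvar = np.where(returns <= var, returns - var, 0.0)

# Return Capping style: every trajectory keeps a weight; trajectories that
# reach the cap act as positive examples relative to uncapped low returns.
cap = var  # cap approximates VaR_alpha
w_cap = np.minimum(returns, cap)

print((w_cvar != 0).mean())  # roughly alpha: most trajectories give no signal
print((w_cap != 0).mean())   # close to 1: all trajectories contribute
```

With a baseline subtracted, the capped weights still separate uncapped low-return trajectories from capped ones, which is the gradient the rebuttal describes.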
Summary: The authors address the problem of risk-sensitive policy optimization via policy gradient methods. They note several issues with the standard formulation of PG+CVaR which cause catastrophic losses in performance: specifically, because it discards the best trajectories by design, it is extremely difficult for an RL algorithm to encounter "lucky" transitions early in training from which it can learn a useful policy, resulting in extremely poor sample efficiency. The paper proposes an alternative formulation for computing risk-sensitive policy gradients, by _capping_ the returns rather than subsampling the worst-case trajectories. They show that optimizing this capped-return formulation is equivalent to the standard formulation of CVaR under the condition that the cap is set at the VaR, and devise a practical algorithm to compute it by estimating the VaR online. They show strong results with a practical implementation of this algorithm on top of PPO on risk-sensitive control benchmarks. Claims And Evidence: The authors claim that their method is better able to converge to a risk-sensitive policy. This does seem to be true in practice on the environments studied (where the CVaR of their method is substantially better than both the risk-neutral policy and prior work in risk-sensitive RL). Methods And Evaluation Criteria: The method is evaluated in several risk-sensitive RL settings, although a few (particularly Lunar Lander) are somewhat contrived in order to force a difference between risk-sensitive and risk-neutral policies. Evaluations are fair, although given the focus on sample complexity it might also have been nice to include an off-policy method (where CVaR-based RL algorithms have also been studied, see [1, 2]). Generally, my main complaint with the evaluation is that the environments studied are very simple; considering more complex environments would greatly strengthen the paper's claims. [1] Yang, Qisong, et al.
"WCSAC: Worst-case soft actor critic for safety-constrained reinforcement learning." [2] Ma, Xiaoteng, et al. "Dsac: Distributional soft actor critic for risk-sensitive reinforcement learning." Theoretical Claims: I looked over the proof for Proposition 4.1 and it seems reasonable, though I did not check it in detail. Experimental Designs Or Analyses: The experimental results are strong if somewhat limited; the most complex task studied is Lunar Lander which has some odd design choices in the experimental study: > we modify the environment such that in addition to the zero-mean, high-variance random reward, landing on the right-hand side also results in an additional 70 reward. I would suggest the authors find some more realistic or complex environments in which CVaR is helpful. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: The paper is well-positioned in the risk-sensitive RL setting, proposing a relatively simple theoretical modification to the risk-sensitive RL setting that induces a fairly large improvement in practice. Essential References Not Discussed: As mentioned previously I would recommend comparing against works that consider risk-sensitive control with off-policy approaches [1, 2]. [1] Yang, Qisong, et al. "WCSAC: Worst-case soft actor critic for safety-constrained reinforcement learning." [2] Ma, Xiaoteng, et al. "Dsac: Distributional soft actor critic for risk-sensitive reinforcement learning." Other Strengths And Weaknesses: - Learns a risk-neutral policy first (at least in Lunar Lander, see: (offset)), but it's not clear how this fits into the overall algorithm (maybe adding another algorithm block would be helpful for clarity). - The cap value is fixed over all trajectories, but it should probably be conditioned on initial state for any environments in which initial state has a large impact on returns (many practical environments). 
Other Comments Or Suggestions: - The decision to exclude two outlier results from the AV results for the PPO baseline is odd; I would suggest using the original plot (and plot the median between seeds if there is a concern of outliers). - The results section writes out how many training runs of each baseline converged to which policies: >For each method, we ran six seeds. None of CVaR-PG runs converged to a policy that reached the goal, and of the CVaR-PPO seeds, two converged to the CVaR-optimal policy, one converged to the risk-neutral policy, and three did not converge to goal-reaching policies I would suggest conveying this information visually for all environments if it's important. Questions For Authors: No additional questions; my main concerns are on the complexity of the environments studied. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time spent reviewing this paper; we appreciate your feedback. **Environment Complexity** The main issue raised in this review is the limited complexity of the environments used. Whilst we agree that more complex environments would benefit the paper, as far as we are aware, there is very limited work that has presented promising results for optimising static CVaR for episode return in more complex environments. Some related work does explore more complex environments. In Safe RL, such as [4], the main benchmarks are modified MuJoCo environments [7]. However, unlike in our work, in Safe RL the objective is to maximise expected return subject to constraints on cost, where environments have a distinct cost and reward function. Some distributional RL work has been done in MuJoCo [5] and Atari [8]. However, these works focus on dynamic CVaR, rather than static CVaR. The work we are aware of optimising for static CVaR referenced in the paper [1, 2, 6], as well as a similar work [3] suggested by Reviewer iMqv, all focus primarily on environments of a comparatively similar scale to the examples presented in our paper. One of the exceptions is the set of modified MuJoCo environments in [1]. However, in practice, when optimising policies in these environments we found no distinction between optimal Expected Value and optimal CVaR policies. The other exception is the Atari game Asterix in [6]. This paper presents a modification to distributional DQN to optimise for static CVaR and shows promising results. However, results from [8] show that optimising for dynamic CVaR using distributional DQN in Asterix results in better Expected Value performance than a policy optimising for Expected Value, so it is ambiguous whether the improved CVaR performance in [6] was due to a distinct CVaR-optimal policy being found, or due to this aforementioned result shown in [8].
The main issue we have found in scaling to more complex environments is an issue inherent to return CVaR optimisation rather than to our method specifically, which is that for CVaR optimisation techniques to be relevant, the environment has to have distinct CVaR-optimal and Expected-Value-optimal policies. Generally, what we have observed is that one of these policies will be less complex to learn, and so in more complex environments, either the Expected Value optimal policy converges to the CVaR-optimal policy, or all CVaR optimisation methods converge to the Expected Value optimal policy. **Off Policy Methods** Thank you for pointing out the two papers [4, 5]; both are relevant to risk-sensitive RL and we will include them in the Related Work. However, as mentioned above, they are both optimising for different objectives compared to our work. **Other Strengths and Weaknesses** Addressing the two points the review raised in the strengths and weaknesses section: - The reason we have included (offset) in the Lunar Lander environment is that we found better performance by optimising for Expected Value and then using this policy to set $C^M$ as outlined in Section 4.1, even accounting for the additional environment steps required to train the optimal Expected Value policy (see Return Capping (offset)). In all other environments, we set $C^M$ based on a trivial policy that took no actions. - Conditioning the cap on the initial state is a sensible suggestion if the desired goal is to optimise for CVaR conditioned on the initial state. [1] Luo, Yudong, et al. "A simple mixture policy parameterization for improving sample efficiency of cvar optimization." [2] Greenberg, Ido, et al. "Efficient risk-averse reinforcement learning." [3] Kim & Min, “Risk-Sensitive Policy Optimization via Predictive CVaR Policy Gradient” [4] Yang, Qisong, et al. "WCSAC: Worst-case soft actor critic for safety-constrained reinforcement learning." [5] Ma, Xiaoteng, et al.
"Dsac: Distributional soft actor critic for risk-sensitive reinforcement learning." [6] Lim, Shiau Hong, and Ilyas Malik. "Distributional reinforcement learning for risk-sensitive policies.” [7] Ji, Jiaming, et al. "Safety gymnasium: A unified safe reinforcement learning benchmark." [8] Dabney, Will, et al. "Implicit quantile networks for distributional reinforcement learning." International conference on machine learning.
Summary: This paper proposes a novel method for CVaR optimization in Reinforcement Learning. The proposed method caps trajectory returns at a certain value and maximizes its expected value with respect to a policy. It is theoretically shown that the maximizer of the proposed objective matches the conventional optimal CVaR policy if the capping threshold value $C$ is set to the VaR of the optimal CVaR policy. In practice, the capping threshold $C$ is approximated by a moving average of the VaR of the learning policy. The proposed method requires setting the minimum capping threshold $C^M$ appropriately. The proposed method does not need to discard sample trajectories, unlike naive baselines. The effectiveness of the proposed method is validated in several numerical experiments. ## update after rebuttal I keep my score for the following reasons. - The authors answered adequately to my question that "$C^M$ seems more difficult to set in larger domains". However, I believe that ultimately the practical applicability needs to be evaluated experimentally. - Though the paper makes a solid contribution, the absence of convergence analysis slightly hurts the quality of the paper. Claims And Evidence: In my view, the contributions of this paper are mainly twofold; (1) the proposal of the novel objective Eq. (7) and the establishment of the equivalence with the conventional objective (Proposition 4.1), and (2) the proposal of the practical method to optimize the proposed objective and its numerical validation. The first contribution is supported by the proof of Proposition 4.1 and the second contribution is supported in Section 5. The proposed method consistently performs better than baselines. Methods And Evaluation Criteria: The proposed method seems sound and the evaluation criteria make sense to validate the aforementioned two claims, though the environments are relatively small and simple, and baselines are limited to Expected PG, CVaR-PG and CVaR-PPO.
Theoretical Claims: I checked the proof of Proposition 4.1 and it seems correct. Experimental Designs Or Analyses: The experimental design seems appropriate to validate the aforementioned two claims. However, given the simplicity of the environments, it is not fully convincing that the proposed method scales to larger environments, where $C^M$ is more difficult to set appropriately, i.e., Expected Value (CVaR) and Optimal CVaR (VaR). In addition, comparison with stronger baselines (e.g. [Greenberg et al., 2022] and [Luo et al., 2024]) is absent. Supplementary Material: I briefly reviewed through the supplementary and found no big flaws. A minor issue is that the tables are not referred to in the text, except for Table 2. Relation To Broader Scientific Literature: The aforementioned contribution (1) is an interesting reformulation of CVaR optimization in RL. A broad class of RL methods could be applied to this formulation. Essential References Not Discussed: The following paper [1] is not referred to, which does not naively discard the sampled trajectories but assigns "weights" to them and optimizes CVaR for RL. [1] Kim & Min, Risk-Sensitive Policy Optimization via Predictive CVaR Policy Gradient, ICML, https://proceedings.mlr.press/v235/kim24x.html. Other Strengths And Weaknesses: - Weakness The absence of a convergence analysis of Algorithm 1. Other Comments Or Suggestions: N/A Questions For Authors: Do you think that the proposed method works well in larger environments, such as Atari, where $C^M$ seems more difficult to set appropriately? I would be grateful if you could discuss this with some evidence. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for referencing [3]; we will include it in the Related Work. **Setting Cap Minimum** Addressing the question on setting $C^M$ in larger environments, it is very possible to set a suitable $C^M$, irrespective of the complexity of the environment. We show in Proposition 1 that the Return Capping optimisation objective is equivalent to optimising for CVaR if $C$ is set to $VaR_\alpha(\pi^*)$, so we need to ensure that $C^M$ is less than or equal to $VaR_\alpha(\pi^*)$. Given Eqn (17) $$CVaR_\alpha(\pi) \leq VaR_\alpha(\pi), \quad \forall \pi$$ and Eqn (18) $$CVaR_\alpha(\pi) \leq CVaR_\alpha(\pi^*), \quad \forall \pi$$ we know that the $CVaR_\alpha$ of any policy is necessarily less than $VaR_\alpha(\pi^*)$. So we know that if we set $C^M$ to the $CVaR_\alpha$ of any policy, it will be less than $VaR_\alpha(\pi^*)$. This means we can take any policy, sample a set of trajectories, and use this sampled $CVaR_\alpha$ to set $C^M$. This could be a random policy, or it could be a policy that maximises Expected Value. Even if an environment was sufficiently complex that it was not immediately apparent what an appropriate value of $C^M$ was, it would always be possible to use either the $\text{CVaR}_\alpha$ of a random policy, or of a policy trained in any manner, to set $C^M$. Requiring a previously trained policy to set $C^M$ does increase the samples required for training. However, for all but the AV environment, Return Capping would outperform all baselines even accounting for the additional training required to learn an Expected Value optimal policy (shown as Return Capping (offset) in Lunar Lander, as this was the only environment that required the cap to be set according to the Expected Value policy, but we could include this in all other environments as well).
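The argument above, that the sampled $CVaR_\alpha$ of any policy gives a valid $C^M$, can be checked on hypothetical data. A minimal sketch (returns drawn from an arbitrary stand-in distribution; not the authors' code):

```python
import numpy as np

def empirical_cvar(returns, alpha):
    # CVaR_alpha estimate: mean of the worst alpha-fraction of returns.
    k = max(1, int(np.ceil(alpha * len(returns))))
    return np.sort(returns)[:k].mean()

rng = np.random.default_rng(0)
# Hypothetical episode returns sampled from some behaviour policy.
sampled_returns = rng.normal(loc=0.0, scale=1.0, size=5000)
alpha = 0.05

c_min = empirical_cvar(sampled_returns, alpha)  # candidate C^M
var = np.quantile(sampled_returns, alpha)       # empirical VaR_alpha

# Eqn (17) in sample form: CVaR_alpha <= VaR_alpha.
assert c_min <= var
print(c_min, var)
```

Combining Eqns (17) and (18), $CVaR_\alpha(\pi) \leq CVaR_\alpha(\pi^*) \leq VaR_\alpha(\pi^*)$, so this estimate lower-bounds $VaR_\alpha(\pi^*)$ no matter which policy generated the samples.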
**Environment Simplicity** We have discussed issues with more complex environments further in our rebuttal to reviewer r3Le, but these issues arise more from challenges with optimising for CVaR in general, rather than any specific characteristics of Return Capping. **Additional Baselines** Addressing the lack of inclusion of [1, 2] as baselines, the reason we have not included [2] is that it requires the environment to be formulated as a Context-MDP where the context encapsulates the environment randomness. This limits the applications of the baseline, as not all problems can be formulated as such. The reason we have only included [1] in Lunar Lander is that this baseline requires an optimal Expected Value policy. Lunar Lander was the only environment where Return Capping required the Expected Value policy to set $C^M$. However, we could include this baseline in other environments - it generally just converged to the optimal Expected Value policy, as in Lunar Lander. [1] Luo, Yudong, et al. "A simple mixture policy parameterization for improving sample efficiency of cvar optimization." [2] Greenberg, Ido, et al. "Efficient risk-averse reinforcement learning." [3] Kim & Min, “Risk-Sensitive Policy Optimization via Predictive CVaR Policy Gradient”
Policy-Regret Minimization in Markov Games with Function Approximation
Accept (poster)
Summary: **Edit post-rebuttal: I thank the authors for their feedback, which answered my questions. I maintain my overall positive score.** The submission considers Markov games, that is, MDPs where transitions depend on the pair of actions output by two players (the learner and the opponent). It studies a notion of regret called policy regret, which corresponds to comparing what the learner obtains (as measured by the sum of value functions corresponding to the policies output by the learner and the opponent over time) to what the learner would have obtained by playing the same policy $\pi$ over time (as measured by the sum of value functions corresponding to $\pi$ and to the policies output by the opponent facing the constant sequence of policies $\pi$). This notion of regret is the correct counterfactual measure in games and was studied in online learning by Arora et al. (2012), in games by Arora et al. (2018), and extended to Markov games by Nguyen-Tang and Arora (2024). However, the algorithm and regret bounds introduced by the latter reference are for the case of small-scale problems (the sets of states and actions should not be large). The point of the present submission is to provide an algorithm and regret bounds for the case of large-scale problems. This is achieved via considering various notions of complexity (Lipschitzness of value functions and eluder coefficients), that are meaningful especially in the case of linear approximations (for the value functions and for the opponent's strategies). The same restrictions as in Nguyen-Tang and Arora (2024) remain: the opponent should have a bounded memory and be stationary in some sense. Claims And Evidence: Disclaimer: I can see why (based on some keywords also contained in my own publications) the system assigned this submission to me, but I must declare that I was totally unaware of the specific and advanced line of research featured in this submission before reading it. 
All that follows is therefore an educated guess only. All evidence (all proofs) is actually provided in the appendices. That being said, the flow of the main body (how concepts, definitions, assumptions, examples, etc., are introduced and follow each other) looks coherent. Methods And Evaluation Criteria: The setting and the main evaluation criterion (the policy regret) were introduced by Nguyen-Tang and Arora (2024), and make sense in the history of articles about policy regret (Arora et al., 2012 and 2018, mainly). The consideration of linear approximations, which justify most of the assumptions and concepts introduced in this submission, is generally standard in machine learning, and was already used in diverse forms also for MDPs (perhaps not exactly in the way this submission does, but the approach is standard). Theoretical Claims: All proofs are contained in the appendix, which I did not have time to review in detail (especially given the time I spent on the main body, due to my unfamiliarity with the setting). I think that the sketch of the proof of Theorem 1, which can be read in Appendix A, should have been provided in the main body. How close / related is the batching approach followed here, and formally used to get (2), to the one in Arora et al. (2012)? Somehow, I have the feeling that a part of the intuition remains: exploiting the $m$-bounded memory assumption to get back to standard external regret (for Markov games) on sufficiently long batches, of sizes $K$ proportional to $\sqrt{T}$. Experimental Designs Or Analyses: N/A Supplementary Material: I only quickly read through some selected parts of the appendix to gain some insights on the proof structure of the main theorem and the auxiliary results needed.
Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The submission seems to do a good job in discussing earlier works; somehow, this is relatively simple as it forms a direct follow-up to the recent work by Nguyen-Tang and Arora (2024). Other Strengths And Weaknesses: The main limitation of this work is acknowledged by the authors in an upfront and honest manner in Remark 2: the lack of computational efficiency of the strategy introduced. The approach introduced is therefore only of theoretical worth and does not seem to solve real cases of large-scale problems. I found the exposition generally nice. In particular, it is useful to summarize the results in Table 1, as they are formally stated only in the second column of page 8 (!), due to all the notions, concepts, and notation to be introduced. It would be great to better emphasize where exactly the improvements were made possible. To me, the key point would be Definition 4 vs. Assumption 3, is that correct? Also, the use of previous techniques (batching as in Arora et al., 2012, and likelihood fitting, as in Nguyen-Tang and Arora, 2024) could be better acknowledged in the comments on Algorithm 1 on page 6. The independence of the bound of Theorem 1 in the cardinality of the state and action spaces is a bit of an overstatement: indeed, there are implicit dependencies through the complexity measures introduced. This claim should probably be toned down. Other Comments Or Suggestions: - Section 1.1: if the same intuitions for batching as in Arora et al. (2012) apply, you should acknowledge them here - Eq. (1): is the $\ell_1$ in the right-hand side norm the total-variation distance? - Line 161, Bellman operator: I think it takes a function as input, and not an element of $\mathbb{R}^{S \times A \times B}$ - Line 163: what is $z$? I guess a triplet $(s,a,b)$?
Then, given the notation introduced right after, it should rather be denoted by $x$ - Line 163: rather ${P}_{h+1}$ (not a P with a double bar) - Lines 179-180 (first column): I guess you refer to Theorem 3 of Nguyen-Tang and Arora (2024)? Their result is about a $\sqrt{T}$ rate, but with a large constant, which to me is a sublinear policy regret - Line 266 (both columns): I think it should be $\Psi = \Psi_1 \times \ldots \times \Psi_H$ - Page 5, second column (Definition 5 and Example 3.2): this is actually the point in the submission where I thought that notation was really piling up Questions For Authors: I have no specific question given my limited knowledge but I would be grateful to the authors if they could answer or react to issues or questions that I raise above. **Edit post-rebuttal: I thank the authors for their feedback, which answered my questions. I maintain my overall positive score.** Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the positive feedback and the detailed comments. --- > How close / related is the batching approach followed here, and formally used to get (2), to the one in Arora et al. (2012)? Somehow, I have the feeling that a part of the intuition remains: exploiting the m-bounded memory assumption to get back to standard external regret (for Markov games) on sufficiently long batches, of sizes $K$ proportional to $\sqrt{T}$ Yes, the batching is to exploit the m-bounded memory as in Arora et al. (2012). The key technical challenge is, however, how to bound the regret within a batch in our case, while Arora et al. (2012) use the worst-case bound within a batch (and thus obtain $T^{2/3}$ regret). --- > It would be great to better emphasize where exactly the improvements were made possible. To me, the key point would be Definition 4 vs. Assumption 3, is that correct? The conditions that enable the improvements are Definition 4 vs Assumption 3 (as you mentioned), the Eluder conditions in Section 5.1, and the bracketing number in Definition 5. --- > Also, the use of previous techniques (batching as in Arora et al., 2012, and likelihood fitting, as in Nguyen-Tang and Arora, 2024) could be better acknowledged in the comments on Algorithm 1 on page 6. Thank you. We will be more clear about it in the revision. --- > The independence of the bound of Theorem 1 in the cardinality of the state and action spaces is a bit of an overstatement: indeed, there are implicit dependencies through the complexity measures introduced. This claim should probably be toned down Thank you. We will be more clear about it in the revision. --- > Section 1.1: if the same intuitions for batching as in Arora et al. (2012) apply, you should acknowledge them here We will acknowledge Arora et al. (2012) there when we mention batching. It's worth noting that Arora et al.
(2012) use the worst-case bound on the in-batch data error, while bounding the in-batch data error is a key technical challenge in our setting, thus Section 1.1. --- > Eq. (1): is the $\ell_1$ in the right-hand side norm the total-variation distance? Yes. --- > Line 161, Bellman operator: I think it takes a function as input, and not an element of $\mathbb{R}^{S \times A \times B}$ An element of $\mathbb{R}^{S \times A \times B}$ is a function from $S \times A \times B$ to $\mathbb{R}$. --- > Line 163: what is $z$? I guess a triplet $(s,a,b)$. Then, given the notation introduced right after, it should rather be denoted by $x$. Yes. It's $x$. Thank you. --- > Line 163: rather $P_{h+1}$ (not a P with a double bar) Thank you. --- > Lines 179-180 (first column): I guess you refer to Theorem 3 of Nguyen-Tang and Arora (2024)? Their result is about a rate $\sqrt{T}$, but with a large constant, which to me is a sublinear policy regret Yes, you are correct, we meant sample-efficient, rather than sublinear policy regret. We’ve revised it to: “policy regret minimization is not sample-efficient against” --- > Line 266 (both columns): I think it should be $\Psi = \Psi_1 \times \ldots \times \Psi_H$ Yes. We've revised it accordingly. ---
Summary: The paper introduces the first algorithmic framework for policy regret minimization in Markov games with general function approximation, achieving an $O(\sqrt{T})$ policy regret bound for a wide range of problems. This framework extends to both large-scale environments with Eluder-type conditions and tabular cases, where it provides a significantly tighter bound. Additionally, it offers a simple and effective approach for handling reactive adversaries, demonstrating how opponent learning can lead to optimal regret rates in dynamic environments. Claims And Evidence: Yes, they provide a clear table comparing with prior work and support those claims and settings with comprehensive theory. Methods And Evaluation Criteria: This is a purely theoretical paper without any experiments, although I really appreciate the contributions from a theoretical perspective to this field. One of the improvements of this work is to extend from the tabular setting to function approximation. Therefore, I would expect a simple simulation experiment, even in a toy example, to see the alignment between theory and experiments. Theoretical Claims: I think the theoretical claims look sound, with detailed step-by-step proofs. Experimental Designs Or Analyses: There is no experimental section, which I think would be good to have, but it is not required for this paper. Supplementary Material: I went through the main part of the proof in the supplementary material. Relation To Broader Scientific Literature: This paper mainly compares with (Nguyen-Tang & Arora, 2024a), with improvements from varying perspectives, and can be viewed as a more general setting in this direction. Essential References Not Discussed: I would like to confirm whether the adaptive adversary this paper is studying is just the standard adversary in robust RL via adversarial training, where players are modeled as a max-min game.
If so, then it would be appropriate for the authors to at least include the following related work, where [1] is the fundamental RARL setting, [2] extends the two-player game to a Stackelberg game, and the latest work [3] improves the 2-player game in robustness. [1] Lerrel Pinto et al. "Robust Adversarial Reinforcement Learning", ICML, 2017 [2] Peide Huang et al. "Robust Reinforcement Learning as a Stackelberg Game via Adaptively Regularized Adversarial Training", IJCAI, 2022 [3] Juncheng Dung et al. "Variational Adversarial Training Towards Policies with Improved Robustness", AISTATS, 2025 Other Strengths And Weaknesses: Strengths * well-written * extension to function approximation while resulting in a tighter bound even in the tabular setting Weakness Other Comments Or Suggestions: N/A Questions For Authors: * In your introduction, the authors mention the prior work that establishes the fundamental barriers for policy regret minimization in Markov games, and then highlight two motivations of this work: (1) the consistent behavior is restricted, (2) large state/action space. Could you elaborate more about (1) compared with your method, explaining the mathematical equations? * Is not "the adversary behavior does not change over time" mentioned in your manuscript a strong assumption in practice? Could you explain more how you could get rid of this assumption and whether it is common to limit adversary behavior? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. --- > I would like to confirm if adaptive adversary this paper is studying is just the standard adversary in robust RL via adversarial training, where players are modeled as max-min game. No, the adaptive adversary in our paper adapts to the learner’s past and current strategies, while the adversary in robust RL only adapts to the learner’s current strategy ($m=1$ in our terminology). The settings are also different because we look at policy regret which makes sense in such an adaptive adversary setting. We give a regret bound, not just convergence to equilibrium. --- > In your introduction, authors mention the prior work that establishes the fundamental barriers for policy regret minimization in Markov games. And then highlighting two motivations of this work: (1) the consistent behavior is restricted (2) large state/action space. Could you elaborate more about (1) compared with your method, explaining the mathematical equations. For any two sequences of the learner’s policies, if the two sequences agree in a certain step $h$ and state $s$, then the adversary’s responses to the two sequences also agree in step $h$ and state $s$. This enables us to decompose the response into states and steps independently thus enabling sample-efficient learning in the tabular case. The idea of this assumption is to encode that the adversary responds similarly to two similar sequences of policies. But it does not work for large state space problems and is too restrictive. The Lipschitzness assumption is much more relaxed. See line 225-261 for our explanation. --- > Is not "the adversary behavior does not change over time" mentioned in your manuscript a strong assumption in practice? Could you explain more how possible you can get rid of this assumption and whether it is common to limit adversary behavior? 
If we get rid of that assumption, policy regret minimization is not sample-efficient anymore, as proven in Nguyen-Tang & Arora 2024. That’s why we consider it in the current paper. Even though the response function does not change over time, it is still powerful as it can remember the learner’s past and current strategies. The adversary is also given unlimited computation power to come up with whatever response function that they can compute using the learner’s past strategies. The simplest example is general-sum games, where given a policy for the learner, the adversary can use any computational power to compute the best-response policy that maximizes its utility given the learner’s policy.
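To make the setting concrete, the memory-bounded response model described in this exchange can be sketched roughly as follows (notation is illustrative, based on the verbal descriptions in the reviews and rebuttals, not taken from the paper):

$$\mathrm{PolicyRegret}(T) \;=\; \max_{\pi} \sum_{t=1}^{T} V^{\pi,\, f(\pi,\dots,\pi)} \;-\; \sum_{t=1}^{T} V^{\pi_t,\, f(\pi_{t-m+1},\dots,\pi_t)},$$

where $\pi_t$ is the learner's policy in episode $t$ and $f$ is the adversary's fixed, $m$-memory-bounded response function. Robust RL corresponds to $m=1$, where the response $f(\pi_t)$ depends only on the current policy.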
Summary: This work explores policy regret minimization and proposes a new algorithm (BOVL) that is more general than past literature in that it deals with a class larger than tabular Markov games. It uses function-approximation classes (characterized by Eluder-type conditions) that can handle larger state/action spaces while still providing a bound on policy regret that doesn’t depend on the number of states and actions. Claims And Evidence: The paper claims the following: - Building on theoretical guarantees for function approximation, they achieve a regret bound that is tighter than the algorithms for tabular cases introduced in past literature. Evidence: - They provide a proof and theoretical analysis of the regret bound. Methods And Evaluation Criteria: The paper doesn’t include empirical experiments/evaluations on benchmarks. Theoretical Claims: The theoretical claims seem reasonable to me, but I listed some questions below. I also did not verify the proofs. Experimental Designs Or Analyses: The paper doesn’t contain experiments. Supplementary Material: I did not check in detail. Relation To Broader Scientific Literature: The work is an extension of (Nguyen-Tang & Arora, 2024a). Comparisons are carried out against it, which is reasonable, but how this work compares to other literature on Markov games and adversarial-opponent learning is not addressed. Essential References Not Discussed: / Other Strengths And Weaknesses: Strengths: - Theoretical analysis of the presented algorithm, which is lacking in the MARL literature. - Regret bounds that are tighter than those of other algorithms for tabular cases, and the ability to deal with larger state- and action-space problems, making it applicable to a wider range of Markov games. - The paper explained theoretical aspects in a simple way, and the theorems, lemmas, etc. were easy to follow and well referenced.
Weaknesses: - The whole paper is built on Eluder conditions (Conditions 1 and 2), and I am not sure how applicable they are (see questions below); if they do not hold, the regret bound doesn’t hold. - Remark 2 on the tri-level optimization problem makes the algorithm intractable in practice. - The paper has many small mistakes in notation/typos that may hinder understanding. Other Comments Or Suggestions: - Line 152: I believe (s) in the max operator is not meant to be from A but from S. - Line 315: model mu -> models mu. - Line 361: should phi have subscript i instead of t? The same for mu in line 423. - Line 369: should it be log T? Questions For Authors: 1. What is the applicability of the Eluder condition on function approximations in real applications? 2. The paper only mentions linear function approximation; what about the non-linear cases? Does this algorithm scale or not, and what are the challenges for extending it? 3. I didn’t understand why the adversary derives its policy based on a sequence of learner policies and not just the current one. I understand that this might be a subset of your setting when m=1, but why in general? 4. I also didn’t understand the importance of the “warm up” step on line 6 of the algorithm. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback. --- > Comparisons are carried out against it, which is reasonable, but how this work compares to other literature on Markov games and adversarial-opponent learning is not addressed. Much of the prior work on Markov games and adversarial-opponent learning focuses on regret, while we focus on policy regret. The study of policy regret against adaptive adversaries has been settled in prior work, e.g., the extensive body of work building on Arora et al. 2012; see the second paragraph of our introduction section. We also refer the reader to the related work section of Nguyen-Tang & Arora, 2024a for broader context. Nonetheless, we can expand the discussion of other related work if the reviewer has specific suggestions. --- > Remark 2 on the tri-level optimization problem makes the algorithm not tractable to be used in practice Actually, the optimization can be viewed as a bi-level optimization, since $\pi$ and $\mu$ can be combined into one optimization variable. Yes, it is intractable in general, but that is the case for any RL problem with general function approximation. However, the optimization problem is tractable in the linear case. --- > Line 152: I believe (s) in max operator is not meant to be from A but from S Yes. Thank you for pointing this out. --- > Line 361: should phi be of subscript i instead of t? The same for mu in line 423 No. The preconditions in the Eluder conditions evaluate the current models (associated with subscript $t$) on the past data (associated with subscripts $i < t$). --- > Line 369: should it be log T? No, it is $\log t$, but bounding with $\log T$ is also fine. --- > What is the applicability of the eluder-condition on function approximations in real applications? Any learning-theoretic result needs to define a notion of complexity of the hypothesis class or the task we are learning in order to give sufficient (and sometimes necessary) conditions for learning.
The Eluder dimension is one such measure for online learning and RL problems. The Eluder condition here is used to control (and thus inform) the convergence rate of an optimistic algorithm when function approximation is used. In real applications, if you use optimistic algorithms with a form of function approximation, estimating the Eluder condition will provide insight into how quickly your algorithm will converge to an optimal policy. --- > The paper only mentions linear function approximation, what about the non-linear cases? Does this algorithm scale or not and what are the challenges for extending it? We mention the linear case as a concrete example. The results in the paper apply to any nonlinear case, as long as the Eluder conditions hold. For computational efficiency, our algorithm is efficient for the tabular and linear cases, but it is likely not tractable for general function approximation, as is the case for RL with general function approximation. --- > I didn’t understand why the adversary derives its policy based on a sequence of learner policies and not just the current one? I understand that this might be a subset of your setting when m=1 but why in general? In real-world applications, it is almost always the case that the adversary chooses its policy based on a sequence of the learner's policies. In fact, that is the general setting in any multi-step game. For example, consider a spam filter that is trained using online learning, updating its model daily based on newly labeled examples. Spammers act as adaptive adversaries: they probe the system by sending test emails and observing which ones evade detection, then evolve their evasion strategies based on the observed sequence of model updates. Importantly, the adversary’s strategy is not fixed; it adapts over time in response to the entire history of the learner’s policies.
This interaction cannot be fully captured by assuming a static or memoryless adversary, since the adversary’s behavior depends on trends or patterns in the learner’s updates. Other similar settings appear in personalized recommendation systems, auctions, online dating, political negotiations, finance, drug discovery, etc. --- > I also didn’t understand the importance of the “warm up” step on line 6 of the algorithm During the first $m-1$ episodes of each epoch, the adversary responds with possibly $m-1$ different strategies. From episode $m$ onward, the adversary responds with the same strategy, due to being memory-bounded by $m$. Thus, there is nothing the learner can learn about the adversary during the first $m-1$ episodes; the adversary's responses are only informative from the $m^{th}$ episode onward. The data collected during the first $m-1$ episodes are therefore not helpful for minimizing the policy regret and are discarded.
MoH: Multi-Head Attention as Mixture-of-Head Attention
Accept (poster)
Summary: This paper proposes Mixture-of-Head attention (MoH) to replace standard multi-head attention (MHA) in transformers. The key idea is to treat each attention head as an expert in a mixture-of-experts framework. The experiments demonstrate that MoH can be applied to vision transformers (ViT) for image classification, DiT for diffusion-based image generation, and LLMs. The results show that MoH can achieve competitive performance with only 50%-90% of the heads/experts active. Strengths: The idea of applying mixture of experts to attention heads is intuitive and effective. MoH can achieve competitive performance with only 50%-90% of heads/experts. Detailed ablation study and well-organized paper structure. Weaknesses: The idea is not entirely new; it shares a similar idea with MoA, plus implementation optimizations of shared heads and two-stage routing. No experiments compared with MoA. Given these strengths and weaknesses, I am leaning towards weak accept. I am willing to raise my scores accordingly if the authors eventually address these concerns. ## update after rebuttal The authors resolved my concern about the MoA comparison; therefore, I raised my score to 4. Claims And Evidence: 1. Claim: not all attention heads hold equal significance. The evidence is supported by Voita et al. (2019) and Michel et al. (2019). I suggest providing state-of-the-art mechanistic explanations presented in 2024, like [1], [2] or [3]. 2. Claim: MoH outperforms multi-head attention by using only 50%∼90% of the attention heads. Empirical evaluations on ViT, DiT, and LLMs consistently show that MoH performs better. Methods And Evaluation Criteria: MoH replaces multi-head attention with the MoH module, adding a router in an MoE architecture to select the top-K heads, while keeping some heads always open. It makes sense for transformers. Theoretical Claims: Appendix A provides the theoretical claim that MoH is superior to vanilla multi-head attention.
Given the 8-page limit, it is understandable to include it in the appendix. The claim seems to tell a story that reduced redundancy and greater differentiation are better choices and lead to a better model architecture. However, if this is true, complete MoH should be used, and all attention heads should be routed in the MoE-based design. Instead, this paper deploys a mixed design in which some attention heads are specialized and some are shared. In this case, specialization is not always good; finding a good balance between specialization and generalization is more important, and how to find such a balance is an open question to answer. Experimental Designs Or Analyses: No experiments compared with MoA. Although Section 5 discusses the difference between MoH and MoA, no experimental results support this claim. As MoA is only validated on language tasks, can the authors provide a comparison between MoA and MoH on language tasks? Table 5 presents the ablation study on the impact of each component of the proposed MoH; is the first row equal to MoA? But its results are from image classification, not from language tasks. Supplementary Material: The paper provides additional code in the Supplementary Material with MoH-DiT and MoH-ViT; I reviewed the supplementary materials and ran MoH-ViT on my own. Relation To Broader Scientific Literature: Attention Head Pruning: Prior research [1,2,3] shows that many heads can be removed without noticeable harm; MoH extends this insight by routing attention heads as experts, with performance gains. Essential References Not Discussed: [1] Wu, Wenhao, et al. "Retrieval head mechanistically explains long-context factuality." arXiv preprint arXiv:2404.15574 (2024). [2] Fu, Yu, et al. "Not all heads matter: A head-level KV cache compression method with integrated retrieval and reasoning." arXiv preprint arXiv:2410.19258 (2024). [3] Xiao, Guangxuan, et al.
"Duoattention: Efficient long-context llm inference with retrieval and streaming heads." arXiv preprint arXiv:2410.10819 (2024). These papers [1,2,3] propose the same insight that not all attention heads hold equal significance. Other Strengths And Weaknesses: Strengths: Extensive experimentation on vision classification, diffusion-based generation, and large language modeling. Strengths: Able to do both pretraining and finetuning. Other Comments Or Suggestions: TYPO: Sec 4.3 "Please refer to the Appendix for detailed hyper-parameter settings (Tab. C)" links to the wrong table (Table 3). Questions For Authors: Does the implementation of MoH affect GPU memory usage? If so, does it increase or decrease the total memory usage? In that case, can we support larger or smaller models by using MoH? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments and for recognizing our work as "intuitive and effective," acknowledging that "MoH can achieve competitive performance," and highlighting our "detailed ablation study and well-organized paper structure." Below, we address your questions in detail. **Q1:** The idea is not entirely new. It shares a similar idea with MoA. **A1:** We explain this problem in three aspects: * In terms of motivation, **MoH aims to make the attention mechanism more efficient and effective without adding extra parameters.** In contrast, MoA, like MoE, focuses on increasing model size while keeping inference costs low. Therefore, the model settings of MoH are more stringent than those of MoA. * In terms of methodology, **we maintain the original structure of multi-head attention as much as possible, allowing MoH to seamlessly replace standard multi-head attention across different tasks without extra tuning.** In contrast, MoA introduces shared keys and values to add heads without increasing the KV cache, which disrupts the original multi-head attention design. * In terms of flexibility, **we show that pre-trained multi-head attention models can be further continue-tuned into our MoH models, making MoH highly practical.** In contrast, MoA integrates multi-head attention with MoE but relies on shared keys and values, requiring training from scratch, which reduces its flexibility. **Q2:** No experiments compared with MoA. **A2:** Thanks for your insightful advice. As suggested, we compare MoA and MoH on the translation task and additionally provide results for image classification. As shown in the table below, MoH outperforms MoA, primarily for two reasons: (1) MoA's use of shared keys and values across heads limits its expressiveness; (2) MoH's shared heads and two-stage routing improve the model's ability to capture general knowledge (please refer to our response (A1) to Reviewer 6acz). 
| | WMT14EnDe (BLEU) | **Image Classification (Acc)** | |:---:|:---------------------:|:-----------:| | MoA | 28.3 | 75.4% | | MoH | **29.0** | **78.6%** | **Q3:** How to find a good balance between specialization and generalization. **A3:** There are two ways to choose shared heads and routed heads: **(1) manual configuration** and **(2) learning through learnable masks**. In the manuscript, we show the results of manual configuration. For the learnable mask approach, we follow a method similar to [1]. Specifically, we introduce a mask module with binary values {0,1} applied to the heads. These masks are dynamically learned rather than statically assigned, allowing the model to determine which heads should be shared. The remaining heads are designated as routed heads, and we set K based on a predefined ratio. For example, if the ratio is 1/4 and there are 8 routed heads, then K is set to 2. As shown in the table below, our latest experimental results show that learning through learnable masks is generally better than manual configuration. | | # Activated Heads (%) | **Image Classification (Acc)** | |:---:|:---------------------:|:-----------:| | Baseline | 100 | 84.8% | | Manual Configuration | 75 | 84.9% | | Manual Configuration | 50 | 84.7% | | Learning Through Learnable Masks | 50 | **85.0%** | [1] Liu, Peiyu, et al. "MOEfication by Experts as Masks." **Q4:** Table 5 presents the Ablation study on the impact of each component of the proposed MoH, is the first row equal to MoA? **A4:** The first row in Table 5 follows a structure similar to MoA, but without shared keys and values or the use of additional z-loss. **Q5:** Providing state-of-the-art mechanism explanation presented in 2024. **A5:** Thanks for your valuable suggestion. We have added your additional references to the Introduction to better explain that not all attention heads hold equal significance. **Q6:** TYPO: Sec 4.3 "Please refer to the Appendix for detailed hyper-parameter settings (Tab. 
C)" link to wrong table of Table 3. **A6:** Thank you for your thorough review. This issue may be a bug in LaTeX, and we will work on fixing it. **Q7:** Does the implementation of MoH affect GPU memory usage? if so, it increases of decreases the total memory usage? In that case, can we support larger model or small model by using MoH? **A7:** As shown in our response (A2) to Reviewer 6acz, MoH slightly reduces GPU memory usage, though the difference is not significant. This is because GPU memory is primarily used to store model parameters, gradients, and the KV cache. Since MoH only optimizes attention computation, it does not substantially reduce the GPU memory of these three components. Although MoH doesn't allow training larger models, it can make training and inference faster.
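The K-selection rule described in A3 above (learnable binary masks choose the shared heads, and K is then a fixed ratio of the remaining routed heads) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `select_heads` and its signature are our own assumptions:

```python
import numpy as np

def select_heads(mask_logits, routed_ratio):
    """Hypothetical MOEfication-style head selection.

    mask_logits : (H,) learnable per-head scores; heads whose
        sigmoid(logit) exceeds 0.5 are treated as shared (always active).
    routed_ratio : fraction of the remaining (routed) heads that each
        token activates, i.e. K = round(ratio * #routed heads).
    """
    probs = 1.0 / (1.0 + np.exp(-np.asarray(mask_logits, dtype=float)))
    shared = np.flatnonzero(probs > 0.5)   # always-on heads
    routed = np.flatnonzero(probs <= 0.5)  # heads subject to top-K routing
    k = max(1, round(routed_ratio * len(routed)))
    return shared, routed, k
```

With a ratio of 1/4 and 8 routed heads, this reproduces the example from the rebuttal: K = 2. In training, the binarization would be made differentiable (e.g., with a straight-through estimator), which is omitted here.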
Summary: The paper proposes Mixture-of-Head (MoH), a replacement for the standard attention mechanism, in which attention heads can be adaptively switched on and off and reweighted for each token. This proposal is motivated by the already studied redundancy/specialization of attention heads, and the authors show that MoH can maintain or even improve the performance of a variety of transformer networks across different tasks while activating just a fraction of the attention parameters. Claims And Evidence: Claims are generally well supported by evidence. My only concern in this direction is that, in most cases, performance enhancements provided by MoH are marginal or even absent compared with the standard multi-head attention mechanism, which is probably insufficient to sustain the claim that MoH consistently enhances model performance. Methods And Evaluation Criteria: Yes, methods and evaluations are reasonable. The accessibility of the method could be improved by rephrasing the paragraph on two-step routing, as the roles of the learnable projection matrices are not immediately clear. Theoretical Claims: N/A Experimental Designs Or Analyses: Experimental design is generally sound, though unclear in some parts. Specifically, the difference between the setting of ViT and DiT w.r.t. Llama could be made clearer. From my understanding, the MoH module is trained on a pre-trained transformer in both cases. What is then the peculiarity of continual tuning in the case of Llama? Moreover, in the cases of ViT and DiT, it is not clear whether shared heads are assigned and what strategy is used to choose them. Supplementary Material: I tried having a look at the code to clarify a doubt I expressed in the "Questions" part. Other than that, no. Relation To Broader Scientific Literature: N/A (there is some information regarding this in other sections of the form). 
Essential References Not Discussed: I would point the authors to the following related works that might be worth discussing in the manuscript: - Interpreting CLIP's Image Representation via Text-Based Decomposition; Gandelsman et al., ICLR 2024. In this paper, the authors find that the attention heads of CLIP tend to **specialize** in specific input attributes. - Decomposing and Interpreting Image Representations via Text in ViTs Beyond CLIP; Balasubramanian et al., NeurIPS 2024. This paper generalizes such findings to non-contrastive Vision Transformers; - ResiDual Transformer Alignment with Spectral Decomposition; Basile et al., 2024. In this paper, the specialization property of CLIP heads is used to show that **few heads can outperform the whole model** on zero-shot classification tasks. Other Strengths And Weaknesses: As an additional strength, I think the idea of adding a MoE-like routing in the attention mechanism is fascinating and worth investigating, especially for efficiency purposes (but also interpretability!). On the weaknesses side, they are scattered around aspect-specific parts of the review (e.g., Experimental Designs). Other Comments Or Suggestions: N/A Questions For Authors: 1. It's unclear to me how the dynamic routing works at the single token level. This is a core contribution to the paper (as stated repeatedly, including in the abstract), so I would like to better understand it and see it more clearly explained in the paper, where I feel most of the focus is at head level. What exactly happens when a token $t$ in sentence $S$ is routed/activated in head $h$? Does it interact only with the other ones routed to $h$? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments and for recognizing that "the idea of adding a MoE-like routing in the attention mechanism is fascinating and worth investigating, especially for efficiency purposes." Below, we address your questions in detail. **Q1:** In most cases, performance enhancements provided by MoH are marginal. **A1:** We explain this problem in two aspects: * The motivation for our work is to upgrade the multi-head attention mechanism to reduce computational costs while maintaining or surpassing the previous accuracy level. Therefore, **our method activates fewer parameters than multi-head attention**, and our method performs better if it activates the same level of parameters. * To demonstrate the robustness of our method, **we only replace multi-head attention with MoH in various structures while keeping the original training parameters unchanged**. Our latest experimental results indicate that, with tuning, our method can achieve even higher performance. | | # Activated Heads (%) | **Image Classification (Acc)** | |:--:|:--:|:--:| | Multi-Head Attention | 100 | 84.8% | | MoH | 50 | **85.0%** | **Q2:** The accessibility of the method could be improved by rephrasing the paragraph on two-step routing. **A2:** Thanks for your insightful advice. As suggested, we have rewritten the paragraph on two-step routing to make it easier for readers to understand. **Q3:** The difference between the setting of ViT and DiT w.r.t. Llama. **A3:** We explain our experimental setup in detail: * **ViT for Image Classification**. We trained a MoH model **from scratch** based on TransNeXt. To ensure a fair comparison, **we only replace the standard multi-head attention with the proposed MoH, while keeping all other training parameters identical to TransNeXt**. * **DiT for Class-Conditional Image Generation**. We trained a MoH model **from scratch** based on DiT by replacing the standard multi-head attention with the proposed MoH. 
**We also keep all other training parameters identical to DiT**. * **Training LLMs from Scratch**. We trained the LLMs **from scratch, maintaining the multi-head attention baseline with exactly the same training parameters as the MoH model.** * **Continue-Tuning LLaMA3-8B**. To significantly enhance the applicability of the proposed MoH method, we attempt to further continue-tune pre-trained multi-head attention models, such as LLaMA3-8B, into MoH models. **Q4:** What strategy is used to choose shared heads? **A4:** All MoH models contain shared heads. There are two ways to choose shared heads: (1) manual configuration and (2) learning through learnable masks. In the manuscript, we show the results of manual configuration. For the learnable mask approach, we follow a method similar to [1]. Specifically, we introduce a mask module with binary values {0,1} applied to the heads. These masks are dynamically learned rather than statically assigned, allowing the model to determine which heads should be shared. The remaining heads are designated as routed heads, and we set K based on a predefined ratio. For example, if the ratio is 1/4 and there are 8 routed heads, then K is set to 2. As shown in the table below, our latest experimental results show that learning through learnable masks is generally better than manual configuration. | | # Activated Heads (%) | **Image Classification (Acc)** | |:--:|:--:|:--:| | Baseline | 100 | 84.8% | | Manual Configuration | 50 | 84.7% | | Learning Through Learnable Masks | 50 | **85.0%** | [1] Liu, Peiyu, et al. "MOEfication by Experts as Masks." **Q5:** Essential references not discussed. **A5:** Thanks for your advice. We have expanded [1,2,3] and improved the discussion of related works. **Q6:** The dynamic routing works at the single token level. **A6:** We input the sentence "Give me a short introduction to large language model." into MoH-LLaMA3-8B. We find the tokens forming the phrase share most of the activated heads. 
For example, all three tokens in "large language model" activate heads {6,7,8,9,11,12}. For other tokens, no clear pattern emerged in the activation of routed heads. Notably, besides routed heads, the shared heads create a stable semantic interaction between all tokens. | Token | IDs of the Activated Heads | |:--:|:--:| | Give | 0,3,7,8,9,12,14,15 | | me | 2,7,8,10,11,13,14,15 | | a | 0,1,4,5,6,8,9,12 | | short | 1,4,5,6,9,11,12,15 | | introduction | 0,2,3,6,7,8,9,11 | | to | 1,2,7,8,10,12,13,14 | | **large** | 1,**6,7,8,9,11,12**,15 | | **language** | **6,7,8,9**,10,**11,12**,15 | | **model** | 0,**6,7,8,9**,10,**11,12** | | . | 0,2,3,8,9,10,13 | --- Rebuttal Comment 1.1: Comment: Thank you for your reply and your work. However, I believe my central question is still unanswered by the previous comment. I'm referring to what the authors labelled "**Q6**". To rephrase it, I would like to have a better understanding of what changes in the attention mechanism in a specific head when only a few tokens are routed to that head. This is key to understanding the method's validity, and the lack of a direct answer raises concerns about whether this aspect has been sufficiently considered. --- Reply to Comment 1.1.1: Comment: Thank you for your invaluable feedback. We truly appreciate the time and effort you've dedicated to thoroughly reviewing our paper. First, we would like to clarify some important details about our approach. In our method, if a token $x_t$ does not select head $h_i$, it will not compute the query $x_t W_Q^i$ and attention value for that head. **However, the token will still compute the key $x_t W_K^i$ and value $x_t W_V^i$ of the head $h_i$.** This is because other tokens, such as $x_{t'}$, may select head $h_i$, and in this case $x_{t'}$ will need the key and value of $x_t$ to compute the attention value. 
We give the pseudo-code below: * For each token $x_t$: * For each attention head $h_i$: * If $h_i$ is selected by token $x_t$: * Compute and cache key $K^i_t=x_t W_K^i$ and value $V^i_t=x_t W_V^i$ * Compute query $Q^i_t=x_t W_Q^i$ * Compute attention value using the KV cache of all tokens * If $h_i$ is not selected by token $x_t$: * **Still compute and cache key $K^i_t=x_t W_K^i$ and value $V^i_t=x_t W_V^i$** **It is worth noting that the computational overhead of calculating the $K$ and $V$ is relatively small.** For example, in self-attention computations with a dimension of 512 and a sequence length of 8192, the calculation of $K$ and $V$ accounts for only about 5\% of the total computation. The proportion of computational overhead of $K$ and $V$ will further decrease with the increase of sequence length. Your suggestion to conduct a detailed analysis of each head's attention map is very insightful. As per your recommendation, we have visualized the attention maps of MoH in both MoH-ViT-B and MoH-LLaMA3-8B. For MoH-ViT-B, [Figure 1](https://drive.google.com/file/d/1CrVLKYrNHaxxgse6s0uN9x80VciXmURO/view?usp=sharing) presents a comprehensive visualization of the 4 shared heads and 28 routed heads for 49 tokens in an image. Our observations show that the shared heads tend to focus on larger areas, while the routed heads focus more on finer details in the image. [Figure 2](https://drive.google.com/file/d/1btKM-_Uq1tQSWKj5pZOV7dWTLjwpHYpk/view?usp=sharing) provides an example where the shared heads focus on a broad area, while the routed heads focus on the image’s finer details. This result further confirms that the shared heads tend to learn general knowledge, while the routed heads focus on learning more specialized knowledge. For MoH-LLaMA3-8B, we visualize the attention maps for 16 shared heads and 16 routed heads for the sentence "Give me a short introduction to large language model." 
in [Figure 3](https://drive.google.com/file/d/14xW-n_Gu2_bVpCziD-txHxuN0HHFVyQm/view?usp=sharing). We observe that shared heads may tend to learn fixed patterns, such as focusing solely on the query token. In contrast, the attention patterns of the routed heads are more diverse. [Figure 4](https://drive.google.com/file/d/1McI0X11s7BKAKvAnGQmBaECiqNQIah7I/view?usp=sharing) provides an example. In summary, **since all keys and values are computed in MoH, its attention mechanism has the same range as that of multi-head attention.** As a result, in our experiment, we can directly replace multi-head attention with MoH, and the model still performs well without modifying any training parameters. Besides, in MoH, shared heads and routed heads are responsible for learning global knowledge and specialized knowledge, respectively. As a result, the redundancy of attention heads in MoH may be lower than in multi-head attention. Finally, the combination of routed heads in MoH introduces more variability, suggesting that MoH may have a higher performance ceiling than multi-head attention. We sincerely hope that our responses have addressed your concerns. We will include the important discussions mentioned above in the final manuscript and highlight them for clarity. If anything is still unclear or needs more explanation, we are happy to provide further details. If our response has resolved your question, we kindly and humbly ask you to consider updating your score, as your affirmation would mean a great deal to us and help us improve our work.
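The token-level routing described in this reply (keys and values cached for every head, queries and attention computed only for the heads a token activates, shared heads always on) can be sketched end-to-end. This is a minimal illustration under our own assumed shapes and conventions (single layer, flattened head concatenation, shared heads weighted 1.0), not the released MoH code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moh_attention(X, Wq, Wk, Wv, Wr, n_shared, top_k):
    """Mixture-of-Head attention sketch.

    X          : (T, d) token embeddings
    Wq, Wk, Wv : (H, d, dh) per-head projections
    Wr         : (d, H - n_shared) router for the routed heads
    Heads 0..n_shared-1 are always active; each token additionally
    activates its top_k routed heads by router score.
    """
    T, d = X.shape
    H, _, dh = Wq.shape
    # Keys and values are cached for *all* heads and all tokens, because
    # other tokens may route to a head that this token skips.
    K = np.einsum('td,hde->hte', X, Wk)   # (H, T, dh)
    V = np.einsum('td,hde->hte', X, Wv)
    scores = softmax(X @ Wr)              # (T, H - n_shared) routing weights
    out = np.zeros((T, H * dh))
    for t in range(T):
        top = np.argsort(scores[t])[::-1][:top_k]
        active = list(range(n_shared)) + [n_shared + int(i) for i in top]
        heads = np.zeros((H, dh))
        for h in active:
            q = X[t] @ Wq[h]                          # query only for active heads
            attn = softmax(q @ K[h].T / np.sqrt(dh))  # attend over all T tokens
            w = 1.0 if h < n_shared else scores[t, h - n_shared]
            heads[h] = w * (attn @ V[h])              # weighted head summation
        out[t] = heads.reshape(-1)
    return out
```

Because every head's K/V cache covers all tokens, an active head attends over the full sequence, matching the reply's point that MoH's attention has the same range as standard multi-head attention.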
Summary: The paper proposes leveraging the Mixture-of-Experts (MoE) mechanism to upgrade the standard Multi-Head Attention into a novel Mixture-of-Heads (MoH) Attention. Specifically, MoH replaces the standard summation in multi-head attention with a weighted summation, where the weights are determined by a newly introduced Two-Stage Routing strategy. Experiments on Vision Transformers (ViT), Diffusion Transformers (DiT), and large language models (LLMs) are conducted to demonstrate the effectiveness of the proposed MoH technique. Claims And Evidence: Partly. There are inconsistencies between the theoretical formulation of MoH and its implementation in the LLaMA3-8B experiment. Specifically, the router used in the experiments differs from the one described in Eqs. (5) and (6); moreover, the weighting mechanism also deviates from the theoretical presentation. These discrepancies raise concerns about the alignment between the proposed theory and its practical application, and they should be addressed to ensure the validity of the results. Methods And Evaluation Criteria: Yes. Theoretical Claims: NA. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: The key contributions of the paper are related to building a general transformer-based network architecture. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - The proposed MoH technique is simple and easy to integrate into existing architectures, making it a practical contribution to the field. - The idea of using a weighted summation to enhance multi-head attention is intuitively appealing and expected to improve performance. - The experiments span a wide range of tasks, including classification, generation, and LLMs. The results are promising and suggest broad applicability. - The paper is well-written and easy to follow.
Weaknesses: - There are inconsistencies between the theoretical formulation of MoH and its implementation in the LLaMA3-8B experiment. Specifically, the router used in the experiments differs from the one described in Eqs. (5) and (6); moreover, the weighting mechanism also deviates from the theoretical presentation. These discrepancies raise concerns about the alignment between the proposed theory and its practical application, and they should be addressed to ensure the validity of the results. - The paper does not provide a clear rationale for how the value of K in the Top-K selection is determined. This is a critical parameter that directly impacts the behavior of the MoH mechanism. Besides, the relationship between the K value and the ratio of shared heads is not discussed. Understanding this relationship is essential for interpreting the results and optimizing the method for different tasks. Other Comments Or Suggestions: No. Questions For Authors: Please see the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments and your recognition that our method is a "practical contribution," "can be fine-tuned from pre-trained multi-head attention models," and that "the results are promising and suggest broad applicability." Below, we address your questions in detail. **Q1:** There are inconsistencies between the theoretical formulation of MoH and its implementation in the LLaMA3-8B experiment. **A1:** We explain this problem in three aspects: * Experiments on ViT, DiT, and LLMs are conducted with models trained from scratch, giving us more flexibility to modify the architecture. **However, LLaMA3-8B is a pre-trained model trained on 15T tokens, while we had only 400B tokens (about 3% of the pre-training data) for continue-tuning. This limited our ability to alter the model structure significantly.** To address this, we adjusted our original formula to maximize weight reuse from the pre-trained model while preserving its output distribution. * **We have conducted experiments on LLMs trained from scratch, demonstrating the advantages of our method.** Since much of the current research focuses on fine-tuning open-source LLMs, we introduced additional experiments to continue-train LLaMA3-8B. **This contribution has been acknowledged by all other reviewers.** * To the best of our knowledge, our work is the first to attempt to reduce the computational cost of the attention mechanism without degrading model performance by continue-tuning a pre-trained model. We consider our proposed training techniques to be a valuable additional contribution. **Q2:** The paper does not provide a clear rationale for how the value of K in the Top-K selection is determined. **A2:** Thanks for your insightful advice. There are two ways to set this up: (1) manual configuration and (2) learning through learnable masks. In the manuscript, we show the results of manual configuration. 
For the learnable mask approach, we follow a method similar to [1]. Specifically, we introduce a mask module with binary values {0,1} applied to the heads. These masks are dynamically learned rather than statically assigned, allowing the model to determine which heads should be shared. The remaining heads are designated as routed heads, and we set K based on a predefined ratio. For example, if the ratio is 1/4 and there are 8 routed heads, then K is set to 2. As shown in the table below, our latest experimental results show that learning through learnable masks is generally better than manual configuration. | | # Activated Heads (%) | **Image Classification (Acc)** | |:---:|:---------------------:|:-----------:| | Baseline | 100 | 84.8% | | Manual Configuration | 75 | 84.9% | | Manual Configuration | 50 | 84.7% | | Learning Through Learnable Masks | 50 | **85.0%** | [1] Liu, Peiyu, et al. "MOEfication by Experts as Masks." **Q3:** The relationship between the K value and the ratio of shared heads is not discussed. **A3:** Thanks for your valuable suggestion. It is worth noting that when using learnable masks to determine the number of shared heads, we only need to set the ratio of K among the routed heads. As shown in the table below, this ratio is a trade-off parameter. If this ratio is too small, the number of activated heads will be insufficient, potentially degrading performance. Conversely, if this ratio is too large, it reduces the model’s sparsity, limiting efficiency improvements. | | # The Ratio of K | **Image Classification (Acc)** | |:---:|:---------------------:|:-----------:| | Learning Through Learnable Masks | 1/8 | 84.8% | | Learning Through Learnable Masks | 1/4 | **85.0%** | | Learning Through Learnable Masks | 1/2 | 84.9% | We sincerely thank you for your constructive comments. We will add the above important discussions in the final manuscript and highlight them. Thanks again for the time and effort you have devoted to our paper.
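The ratio-based choice of K described above (e.g., a ratio of 1/4 over 8 routed heads gives K = 2) can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name `route_heads` and all shapes are assumptions:

```python
import numpy as np

def route_heads(router_logits, ratio):
    """Per-token Top-K selection over routed heads, with K set from a ratio.

    router_logits: (T, n_routed) scores for the routed heads.
    ratio: fraction of routed heads to activate; e.g. ratio 1/4 with
    8 routed heads gives K = 2, as in the example above.
    Returns a binary (T, n_routed) activation mask and K.
    """
    T, n_routed = router_logits.shape
    K = max(1, int(round(ratio * n_routed)))
    topk = np.argsort(-router_logits, axis=-1)[:, :K]  # indices of top-K heads
    mask = np.zeros_like(router_logits)
    np.put_along_axis(mask, topk, 1.0, axis=-1)
    return mask, K

rng = np.random.default_rng(0)
mask, K = route_heads(rng.standard_normal((4, 8)), ratio=1 / 4)
```

Each token thus activates exactly K routed heads, while the learnable masks (not shown) decide which heads count as shared.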
Summary: The paper introduces MoH (Mixture-of-Head Attention), a novel perspective on multi-head attention that formulates each head as an expert in a Mixture of Experts (MoE) framework. It employs a two-stage routing mechanism—comprising shared and non-shared experts—to reduce computational costs while enhancing accuracy. The authors validate their approach through experiments on both training-from-scratch and fine-tuning settings across vision tasks (such as image classification and class-conditional image generation) and language tasks. The results demonstrate that MoH outperforms or matches vanilla multi-head attention while activating fewer parameters. --- ## Update after rebuttal After reading the rebuttal, I appreciate that the authors thoroughly attempted to address and clarify all of my questions. However, my primary concern remains the incremental nature of the current version of the paper. First of all, the idea of using shared and non-shared heads originates from DeepSeek-MoE. While the authors claim to extend this idea by analyzing the differences between shared and routed heads through feature rank analysis, I view this more as an experimental insight rather than a fundamentally novel contribution. Moreover, the core idea of the paper is based on the equivalence between the summation and concatenation forms of multi-head attention. Building on this, the authors propose treating each head as an expert and applying a mixture-of-experts framework. This idea is relatively straightforward, and many other components are drawn from existing literature. In response to my concerns, the authors explained their method for integrating MoE into multi-head attention in the presence of dynamic KV cache. Specifically, they compute the key and value for all heads, even those not selected by a given token. While this is a thoughtful engineering solution, I do not find it substantial enough to constitute the primary contribution of the work. 
Since the core idea of the paper is simple and the effectiveness is demonstrated through empirical results, I think the paper requires stronger mathematical justification to support the effectiveness of MoH, which could enhance the overall contribution. Nevertheless, I appreciate the new perspective introduced in the paper, especially the reinterpretation of multi-head attention in summation form, which to my knowledge has not been explored in previous works. Along with the improved empirical results, I believe this work sheds light on potential advancements in multi-head attention using MoE and paves the way for more theoretical works in this field. Therefore, I have increased my score from 2 to 3 and now lean towards acceptance after considering the rebuttal. --- Claims And Evidence: 1. The motivation for proposing MoH as a dynamic-head routing mechanism is clear. It builds on existing literature showing that redundant heads in traditional multi-head attention can be pruned to reduce computational cost while maintaining accuracy. However, while prior work supports this claim when heads are combined through concatenation, the proposed method instead uses summation. This raises concerns about whether the claim from the literature still holds in this new form. 2. I cannot find any theoretical guarantee for the effectiveness of the proposed methods, particularly when the heads are combined using summation via the MoE gate. Even the appendix does not provide a clear justification or proof. 3. The methodology is incremental. I will provide detailed comments in the following section. 4. Existing works that explore the mixture of heads in the attention mechanism are not thoroughly discussed in the related works section. 
Methods And Evaluation Criteria: While I appreciate the authors' effort in developing a general framework that integrates Mixture of Experts (MoE) into attention layers in Transformers, the paper appears to be an incremental extension of existing work rather than presenting a novel contribution. Additionally, the writing and the formulation of the proposed methods present several issues. 1. Most of the components of the proposed methods are directly adapted from previous works, making the paper more of a combination of existing approaches than a novel contribution. - The core idea of two-stage routing involves introducing shared experts to capture global information, as presented in [1]. The original motivation of Dai et al. for this approach was to enhance expert specialization. However, this paper adapts it directly without discussing its suitability or potential additional benefits for the proposed mixture-of-head attention setting. - The load-balancing loss is also directly adapted from previous works. 2. The summation form of combining heads is inconsistent with the authors' claim. - The author claims that in the proposed method, each head is divided by rows and then concatenated according to Eq. (3). Nevertheless, only $W_O$ is divided into smaller $W_O^{i}$, and $H_i$ remains undiscussed, meaning that $H_i$ is obtained as in vanilla multi-head attention, which is not divided by rows. I recommend the authors state clearly here that the expert defined is $H_i W_O^{i}$ to avoid confusion that the experts are $W_O^{i}$ only. 3. The lack of discussion of the new experts defined. - While traditional MoE considers experts as FFNs, the new formulation of each expert $i$ as $H_i W_O^{i}$ needs further theoretical discussion regarding the effectiveness, the convergence rate and the optimization scheme. 4. The abuse of notation $W$ in computing the gating score makes the paper confusing. 
- As far as I understand, the paper implies $W_r$ and $W_s$ as the expert embeddings. However, the notation $W$ and its description as a *projection matrix* may be confusing, as it resembles the projection matrices used in a standard Transformer. --- References [1] Dai, Damai, et al. "Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models." arXiv preprint arXiv:2401.06066 (2024). Theoretical Claims: I find no theoretical guarantees or mathematical proofs in the paper. Experimental Designs Or Analyses: I have checked all experimental parts and found that the experimental design is appropriate, which includes a wide range of tasks from vision to language. However, I still have concerns about the authors' claim that the new methods can be fine-tuned from pretrained models, which is an advantage compared to [1]. - As far as I understand, the authors initialize the router weights and copy the pretrained weight for $W_O$, $W_Q^{i}$, $W_{K}^{i}$ to finetune it. However, with that scheme, the decision of the shared heads and the division of $W_O$ into smaller $W_O^{i}$ will affect the initialization of the models (e.g., which pretrained head is better to share). - Additionally, if the proposed method can be fine-tuned by copying weights from pretrained models, then other approaches involving a mixture of heads, such as [1], can also leverage this technique by copying pretrained weights and initializing routers (e.g., sparse upcycling [2], [3]). This suggests that the empirical advantages of MoH are not unique since upcycling techniques could be applied to other methods as well, diminishing the claimed advantage of MoH. --- References 1. Zhang, Xiaofeng, et al. "Mixture of attention heads: Selecting attention heads per token." arXiv preprint arXiv:2210.05144 (2022). 2. Komatsuzaki, Aran, et al. "Sparse upcycling: Training mixture-of-experts from dense checkpoints." arXiv preprint arXiv:2212.05055 (2022). 3. Zhang, Qizhen Irene, et al. "Bam! 
just like that: Simple and efficient parameter upcycling for mixture of experts." Advances in Neural Information Processing Systems 37 (2024): 56304-56321. Supplementary Material: I have reviewed the code provided in the supplementary materials. While the authors include code for reproducing vision tasks, they do not provide code for language tasks. Additionally, I would appreciate it if the authors could acknowledge the base code they adapted or explicitly state in the README file whether the code was written from scratch. Relation To Broader Scientific Literature: From my own perspective, this work provides a promising suggestion to advance attention-based models despite the weaknesses mentioned above. Essential References Not Discussed: The authors should provide a more in-depth discussion of related works involving the mixture of heads in Section 2, along with a critical analysis of their weaknesses to better highlight the paper’s contribution. While I acknowledge the comparison with Mixture of attention heads (MoA) [1] in Section 5, the broader literature on this topic remains insufficiently discussed ([2], [3]). --- References [1] Zhang, Xiaofeng, et al. "Mixture of attention heads: Selecting attention heads per token." arXiv preprint arXiv:2210.05144 (2022). [2] Csordás, Róbert, et al. "Switchhead: Accelerating transformers with mixture-of-experts attention." Advances in Neural Information Processing Systems 37 (2024): 74411-74438. [3] Peng, Hao, et al. "A mixture of $h-1$ heads is better than $h$ heads." Proceedings of ACL 2020, pages 6566–6577. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: I suggest that the authors move the comparison between MoH and MoA to the related works and consider the limitations of Mixture of attention heads (MoA) as a motivation for proposing MoH. 
Questions For Authors: See the weaknesses mentioned in comments about methodology and experimental results above. Furthermore, I have additional questions for the authors: 1. What are the motivations to define $\alpha_1$ and $\alpha_2$ as in Eq. 6 rather than considering them as hyperparameters? 2. Explain the *parameter-free router* mentioned in Section 4.4 (line 319)? Code Of Conduct: Affirmed. Overall Recommendation: 3
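The summation/concatenation equivalence discussed in this review (and formalized in the rebuttal that follows) is easy to verify numerically. A minimal NumPy check; the shapes and seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
h, T, d_head, d_model = 4, 5, 8, 32  # heads, tokens, head dim, model dim

# Per-head attention outputs H^i and a full output projection W_O.
H = [rng.standard_normal((T, d_head)) for _ in range(h)]
W_O = rng.standard_normal((h * d_head, d_model))

# Concatenation form: Concat(H^1, ..., H^h) W_O.
concat_form = np.concatenate(H, axis=-1) @ W_O

# Summation form: sum_i H^i W_O^i, where W_O^i is the i-th row block of W_O.
W_O_blocks = np.split(W_O, h, axis=0)
sum_form = sum(Hi @ WOi for Hi, WOi in zip(H, W_O_blocks))

assert np.allclose(concat_form, sum_form)  # the two forms agree
```

The agreement is just block matrix multiplication: concatenating head outputs and multiplying by $W_O$ equals summing each head's product with its row block of $W_O$.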
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments and your recognition that our method "provides a promising suggestion to advance attention-based models." Below, we provide detailed responses to your questions. **Q1:** While prior work supports this claim when heads are combined through concatenation, the proposed method instead uses summation. **A1:** First, in the forward pass, the summation and concatenation forms give the same results, i.e., $ \textrm{MultiHead}({X},{X}')=\sum_{i=1}^{h} {H}^{i}{W}_O^{i}=\textrm{Concat}({H}^{1}, {H}^{2}, ..., {H}^{h}){W}_O$. Second, the gradients are also identical, i.e., $ \frac{ \partial \textrm{MultiHead}({X},{X}') }{\partial {H}^{i} }={W}_O^{i}$. Therefore, the summation and concatenation forms are mathematically equivalent. **Q2:** Theoretical guarantee for the effectiveness of the proposed methods. **A2:** Thanks for your valuable suggestion. **As mentioned by Reviewer xCL1, we provide the theoretical guarantee that MoH is superior to multi-head attention in Appendix A.** Specifically, we proved that MoH not only improves efficiency and model performance but also helps different attention heads to specialize better compared to multi-head attention, from both theoretical and experimental perspectives. **Q3:** The methodology is incremental. **A3:** **The motivation for our work is to upgrade the multi-head attention mechanism, the core of the Transformer model, to reduce computational costs while maintaining or surpassing the previous accuracy level, rather than making improvements to the MoE.** We propose combining multi-head attention and MoE, so we adopt the MoE technique: * For the two-stage routing, as shown in our response (A1) to Reviewer 6acz, its purpose is to capture general knowledge. 
Besides, we compare the gradients and training data distribution of shared heads and routed heads in Appendix Table A to further demonstrate that shared heads play a key role in capturing general knowledge. * For the load-balancing loss, since we decompose multi-head attention into a summation form, which is similar to the MoE structure, we directly adopt the auxiliary loss used in MoE. **Q4:** The lack of discussion of the new experts defined. **A4:** As suggested, we have defined the expert as $H^i W^i_O$ to avoid possible misunderstanding. **Q5:** Further theoretical discussion regarding the effectiveness, the convergence rate and the optimization scheme. **A5:** Mathematically, we prove that the summation and concatenation forms are equivalent. Besides, we show in Appendix A that the gradient per head in MoH differs from the gradient of multi-head attention by only a single weight. **Finally, we replace multi-head attention with MoH in various structures while keeping the original training parameters unchanged. The experimental results demonstrate that MoH enhances the performance of multi-head attention, providing experimental evidence of its effectiveness.** **Q6:** The abuse of notation $W$. **A6:** As suggested, we have replaced the notation $W$ in the router with $E$, referring to them as expert embeddings. **Q7:** Concerns about the claim of the author that the new methods can be fine-tuned from pretrained models. **A7:** We explain this problem in three aspects: * We simply select the first 16 attention heads of each layer as shared heads. **Even if the structure is not optimal, the experimental results show that MoH-LLaMA3-8B has a significant advantage over LLaMA3-8B. This result shows the robustness of our method.** * Unlike the MoE upcycling technique, which copies the FFN to increase the model size, **our MoH prunes the original model to reduce the activation parameters**, making it more challenging. 
* **To the best of our knowledge, our work is the first to attempt to reduce the computational cost of the attention mechanism without degrading the model performance by continue-tuning a pre-trained model.** **Q8:** Acknowledge the base code they adapted. **A8:** Thanks for your advice. We will acknowledge the contributors in our official code, and release our trained models. **Q9:** Existing works are not thoroughly discussed. **A9:** As suggested, we have expanded and improved the discussion of related works. **Q10:** What are the motivations to define $\alpha_1$ and $\alpha_2$ as in Eq. 6 rather than considering them as hyperparameters? **A10:** We choose to predict $\alpha_1$ and $\alpha_2$ based on the input so that different tokens can dynamically combine general knowledge from the shared heads and specialized knowledge from the routed heads. **Q11:** Explain the parameter-free router mentioned in Section 4.4 (line 319)? **A11:** We use the $l_2$ norm (which measures the magnitude of a vector in Euclidean space) of each head to represent its importance. We then normalize the $l_2$ norm of all heads using SoftMax. This simple router achieves results comparable to learnable routers. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for taking the time to reply to my review. I understand that your primary goal is to improve multi-head attention in Transformers rather than enhancing MoE, as well as to demonstrate the equivalence between the summation form and the concatenation form in Section 3.1. However, I still find that your response does not fully address my concerns. 1. Since the main objective is to improve multi-head attention, **the only key contribution of the manuscript appears to be the demonstration that the summation form and the concatenation form are equivalent**, and from that summation form, the authors then propose to consider each head as an expert and apply MoE to the multi-head Transformer. 
While this is an interesting finding, I do not find it substantial enough for publication. Furthermore, the theoretical explanation in Appendix A.1 suggests that in multi-head attention, each expert processes a subset of data, while shared experts enhance specialization among the remaining ones. This concept appears to be an intuitive adaptation from general MoE and especially DeepSeek MoE [1]. However, I am not convinced that Table A offers a **proper theoretical proof** demonstrating that experts indeed become more specialized or that MoH outperforms standard multi-head attention beyond the intuition drawn from [1]. 2. In A9, the authors stated: > "As suggested, we have expanded and improved the discussion of related works." However, I could not find this expanded discussion in your response to my review. I kindly ask the authors to either provide additional rebuttal comments or direct me to the relevant section where these improvements have been made. Specifically, I would like to see a detailed comparison between MoH and prior works on mixture of heads in Transformers. **PS.** For now, my opinion remains largely unchanged. I appreciate the authors' effort in adapting Mixture of Experts to enhance Multi-head Attention, as well as their response to my review. However, I still consider this a borderline paper, and I am currently leaning towards rejection. Nevertheless, I am open to reconsidering my score if the authors can address all my concerns outlined above. Thank you. --- References. [1] Dai, Damai, et al. "Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models." arXiv preprint arXiv:2401.06066 (2024). --- ***Update***: I thank the authors for their thorough response to my (additional) comments. 
Although I believe the current version of the paper is borderline and requires additional mathematical justification for the effectiveness of MoH rather than relying solely on experimental results, this paper provides a new perspective with improved performance, which sheds light on potential advancements in Multi-head Attention using MoE and paves the way for more theoretical work in this field. Therefore, I have increased my score from 2 to 3. I hope the authors can include a more detailed discussion (which was briefly touched upon during the rebuttal phase) in the revised version of the manuscript. --- Reply to Comment 1.1.1: Comment: Thank you so much for your valuable feedback. We truly appreciate the time and effort you've spent carefully reviewing our paper. We're sorry that our previous responses didn't fully meet your expectations, and we will provide more detailed answers to your questions below. **Q1:** The contribution of the manuscript is not substantial enough. **A1** We explain this problem in three aspects: * First, combining MoE with attention mechanisms is not simple. MoE activates multiple FFNs sparsely, with each token computing independently. **In contrast, attention relies on a dynamic KV cache, where different tokens may activate different heads.** We use the approach outlined in our "Reply Rebuttal Comment" to Reviewer 6df6 to achieve sparse activation of attention heads. Other methods [1,2,3] avoid this complexity, leading to many drawbacks (please refer to A2 below). * Second, as noted by Reviewer xCL1, we provide evidence that MoH outperforms multi-head attention in Appendix A. Specifically, **in Table B of Appendix A.1**, we calculate the similarity of attention patterns and output features across different attention heads. As shown in Table B below, the similarity in MoH is lower than in standard multi-head attention, indicating reduced redundancy and better differentiation among the attention heads in MoH. 
(*Given a pair of attention score matrices A and A', we calculate the similarity of attention patterns as $1 - \frac{1}{2} \mathbb{E}[ ||A-A'||_{1} ]$. Since attention scores form a probability distribution for each query, the similarity is always between 0 and 1.*) | | Similarity of Attention Patterns | | Cosine Similarity of Output Features | | |:-:|:--:|:--:|:-:|:-:| | | ViT | LLM | ViT | DiT | | Multi-Head Attention | 0.5159 | 0.4795 | 0.0411 | 0.2550 | | **MoH** | **0.3978** | **0.4333** | **0.0165** | **0.2042** | * Third, we take it a step further than DeepSeekMoE. **We provide evidence that shared heads and routed heads can capture different types of knowledge.** Experimentally, we analyze the feature rank of shared and routed heads. A lower feature rank means higher correlation between features from different samples, indicating that the features capture more general knowledge. As shown in the table below, the feature rank of shared heads is much lower than that of routed heads, suggesting that shared heads capture common information, while routed heads focus on sample-specific details. | | Hidden Size | Feature Rank of Shared Heads | Feature Rank of Routed Heads | |:--:|:--:|:--:|:--:| | ViT | 768 | **164** | 270 | | LLM | 1536 | **1123** | 1441 | **Q2:** Detailed comparison between MoH and prior works. **A2:** We give a detailed comparative discussion below: * MoA [1], like MoE, focuses on increasing model size while keeping inference costs low. Since duplicating attention heads increases the KV cache, MoA uses a fully shared KV for all heads. MoA thus avoids a dynamic KV cache for different heads. As we replied to Reviewer xCL1, this design greatly limits the performance of MoA. * SwitchHead [2] applies MoE to the Q projection, K projection, V projection, and output projection, instead of sparse activation at the head level. 
It is worth noting that the expert in the MoE used by SwitchHead is a single linear layer rather than the common MLP: $ Q^i=MoE_Q^i(x, expert=Linear), K^i=MoE_K^i(x, expert=Linear), V^i=MoE_V^i(x, expert=Linear), W^i_O=MoE_O^i(x, expert=Linear).$ * MAE [3] does not use sparse activation, and its G-step and F-step iterative optimization strategy significantly increases the training cost: $ MAE(x)=\sum_{i=1}^{h}g_{i}(x)\frac{h}{h-1}(-H_i+\sum_{j=1}^{h}H_j). $ In contrast, MoH aims to make attention more efficient without adding extra parameters. Besides, MoH preserves much of the multi-head attention structure, offering three key advantages: * In our experiments, we replaced multi-head attention with MoH without changing any training parameters. In contrast, prior works [1,2,3] modify the structure, requiring the training parameters to be reconfigured. * We show that a pretrained model can be further continue-tuned into our MoH models by low-cost training, which is very important in the era of large models, because most researchers do not have enough computing power to train a large model from scratch. * MoH can be easily adapted across a variety of popular dense and MoE-based model frameworks. Prior works [1,2] have only been compared with MoE-based language methods. We will include the important discussions in the final manuscript. If our response has resolved your question, we kindly and humbly ask you to consider updating your score. [1] Zhang, Xiaofeng, et al. "Mixture of attention heads: Selecting attention heads per token." [2] Csordás, Róbert, et al. "Switchhead: Accelerating transformers with mixture-of-experts attention." [3] Hao Peng, Roy Schwartz, Dianqi Li, and Noah A. Smith. A mixture of $h-1$ heads is better than $h$ heads.
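As a concrete companion to the attention-pattern similarity metric used in the Table B comparison above, here is a minimal sketch. It assumes the expectation is taken over query rows; the function names and shapes are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attn_pattern_similarity(A, A_prime):
    """Similarity 1 - 0.5 * E[||A - A'||_1], averaged over query rows.

    A, A_prime: (T, T) attention matrices whose rows are probability
    distributions over keys, so each row's L1 distance lies in [0, 2]
    and the resulting similarity always lies in [0, 1].
    """
    l1_per_query = np.abs(A - A_prime).sum(axis=-1)
    return float(1.0 - 0.5 * l1_per_query.mean())

rng = np.random.default_rng(0)
A = softmax(rng.standard_normal((6, 6)))
B = softmax(rng.standard_normal((6, 6)))
sim = attn_pattern_similarity(A, B)
```

Identical patterns give similarity 1, disjoint row distributions give 0, so lower values between heads indicate less redundancy.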
Summary: The paper introduces Mixture-of-Head Attention (MoH) as an enhancement to the multi-head attention (MHA) mechanism in Transformer models, aiming to reduce computational costs while maintaining or improving model accuracy. The key insight is that not all attention heads contribute equally, and some can be pruned or dynamically selected without significantly affecting performance. Inspired by Mixture-of-Experts (MoE) models, MoH treats attention heads as experts and introduces a router that selects the most relevant heads for each token. This allows MoH to activate only a subset of attention heads dynamically, improving efficiency without increasing the number of model parameters. Additionally, MoH replaces the standard summation of multi-head attention with a weighted summation, enhancing flexibility. The authors validate MoH across multiple architectures, including Vision Transformers (ViT), Diffusion Transformers (DiT), and Large Language Models (LLMs). Results show that MoH can match or outperform standard MHA while using only 50%–90% of the attention heads. Notably, MoH-LLaMA3-8B achieves a 2.4% accuracy improvement over LLaMA3-8B using only 75% of the attention heads. --- ## Update Post-rebuttal I am happy with the authors' response and would like to see the paper accepted. --- Claims And Evidence: The paper presents MoH (Mixture-of-Head Attention) as an alternative to standard Multi-Head Attention (MHA) and claims that it improves efficiency and accuracy across multiple architectures (ViTs, DiTs, and LLMs). While the empirical results generally support these claims, some claims require further validation. For instance, the authors claim that MoH improves efficiency without increasing parameter count, but do not provide FLOP counts or memory benchmarks to confirm efficiency gains. 
Moreover, the authors mentioned that MoH's routing strategy is optimal for balancing shared and routed heads, but did not compare this strategy against, say, dynamic attention strategies such as sparse attention. Methods And Evaluation Criteria: MoH seems to be well-suited for improving Transformer efficiency, and the evaluation covers a diverse range of architectures (ViTs, DiTs, and LLMs) using 14 benchmark datasets (e.g., MMLU, CEVAL, GSM8K, TruthfulQA), making the results broadly applicable. However, the paper lacks efficiency metrics (e.g., FLOPs, memory usage, latency) despite claiming computational improvements. Additionally, MoH’s manual selection of shared vs. routed heads is not compared to fully dynamic routing methods (e.g., MoE-based attention), leaving some doubts about whether the hybrid design is necessary. Addressing these gaps could strengthen the empirical validation of MoH. Theoretical Claims: The core theoretical claims about selective activation and weighted summation are well-supported, but the manual head selection strategy and efficiency gains lack mathematical justification. Experimental Designs Or Analyses: The experimental design provides a broad evaluation of MoH across ViTs, DiTs, and LLMs, using many benchmark datasets (e.g., MMLU, CEVAL, GSM8K, TruthfulQA), which strengthens the generalization claims. The experimental setup is strong in terms of dataset diversity, but the lack of efficiency benchmarks and alternative routing comparisons weakens MoH’s empirical claims. Supplementary Material: Skimmed over it. Relation To Broader Scientific Literature: The paper builds on prior work in Multi-Head Attention and Mixture-of-Experts models by introducing Mixture-of-Head Attention, which selectively activates attention heads to improve efficiency. The concept of routing-based selection is inspired by MoE-based Transformers, but MoH applies it at the attention head level instead of full feedforward layers, making it a more lightweight alternative. 
The idea of structured sparsity in Transformers aligns with research on sparse attention mechanisms and adaptive token selection methods, though MoH introduces a unique hybrid model where some heads are manually fixed while others are routed dynamically. However, the paper does not compare MoH against fully dynamic routing mechanisms like MoE-based attention, which raises questions about whether its manual head selection strategy is necessary. Essential References Not Discussed: The paper provides theoretical justification for MoH, focusing on the routing mechanism and its impact on efficiency and expressivity. The derivations related to selective head activation and weighted summation appear logically consistent, following standard MoE formulations. However, the manual selection of shared heads is not rigorously justified—there is no proof explaining why certain heads must always be active, rather than letting the routing function learn to retain essential heads automatically. Additionally, the paper does not compare MoH's routing formulation against fully dynamic MoE-based attention models, leaving its mathematical superiority unverified. **If MoH is inspired by Mixture-of-Experts, then why does it manually designate some heads as "always active" instead of letting the router handle all head selection dynamically? I think this partially undermines the motivation for using a routing mechanism in the first place.** Other Strengths And Weaknesses: The paper presents a refinement of Multi-Head Attention by introducing Mixture-of-Head Attention, which selectively activates heads to improve efficiency without increasing parameter count. This hybrid approach balances shared and dynamically routed heads, offering a new perspective on structured sparsity in Transformers. The significance is high, as MoH generalizes across ViTs, DiTs, and LLMs, making it applicable to a wide range of architectures. 
However, clarity is hindered by the lack of justification for manually fixing some heads as always active, which contradicts the motivation for routing-based selection. Additionally, claims about computational efficiency are not supported by FLOP/memory benchmarks, and no comparisons are made against alternative dynamic attention mechanisms (e.g., MoE-based attention, sparse routing). Strengthening these aspects would solidify MoH’s contributions and further validate its impact. Other Comments Or Suggestions: None. Questions For Authors: If MoH is inspired by Mixture-of-Experts, then why does it manually designate some heads as "always active" instead of letting the router handle all head selection dynamically? If certain heads are critical enough to be always active, why doesn’t the routing function naturally prioritize them? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive comments, and for noting that our method provides "a new perspective on structured sparsity in Transformers" and that "the significance is high." We address the questions below. **Q1:** If MoH is inspired by Mixture-of-Experts, then why does it manually designate some heads as "always active" instead of letting the router handle all head selection dynamically? **A1:** We explain this from three aspects: * **Load balance loss (also called MoE loss) pushes experts to focus on specific areas rather than general knowledge.** As shown below, the load balance loss ensures all experts are chosen equally and become specialized. However, this goes against the idea that essential experts should be selected more often and should learn broader, general knowledge. **Due to the load balance loss, even though some heads are critical enough to remain active at all times, the routing function cannot naturally prioritize them.** $L_b = \sum_{i=h_s+1}^{h} P_i f_i,$ $ P_i = \frac{1}{T} \sum_{t=1}^{T} \text{Softmax}(W_{r} x_{t})_{i-h_s}, $ $f_i = \frac{1}{T} \sum_{t=1}^{T} \mathbb{1}(\text{Token } {x}_t \text{ selects Head } i)$. * **From the perspective of training stability, keeping some heads active at all times helps maintain stable gradients.** If all heads are selected freely, gradient variance and loss spikes can increase significantly, reducing training efficiency. * In recent MoE work [1], some experts are also selected as shared experts to extract general knowledge. Besides, in attention mechanisms, some heads may capture common knowledge across different contexts, such as grammatical rules in language. Inspired by this idea, we designate a subset of heads as shared heads that remain always activated.
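As an aside for readers, the load-balance loss $L_b$ in this rebuttal can be illustrated numerically. The sketch below is a toy implementation, not the paper's code: the array shapes and the top-k selection rule are our assumptions.

```python
import numpy as np

def load_balance_loss(router_logits, top_k):
    """Toy version of L_b = sum_i P_i * f_i over the routed heads.

    router_logits: (T, R) router scores for T tokens over R routed heads.
    top_k: number of routed heads each token activates.
    Shapes and the top-k selection rule are illustrative assumptions.
    """
    T, R = router_logits.shape
    # P_i: mean softmax routing probability assigned to head i
    z = np.exp(router_logits - router_logits.max(axis=1, keepdims=True))
    probs = z / z.sum(axis=1, keepdims=True)
    P = probs.mean(axis=0)
    # f_i: fraction of tokens whose top-k selection includes head i
    topk = np.argsort(-router_logits, axis=1)[:, :top_k]
    chosen = np.zeros((T, R))
    np.put_along_axis(chosen, topk, 1.0, axis=1)
    f = chosen.mean(axis=0)
    return float((P * f).sum())
```

Because this objective rewards uniform $P_i$ and $f_i$, a head that should always fire is actively penalized, which is the rebuttal's point about why shared heads are fixed outside the router.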
**We also compare the gradients and training data distribution of shared heads and routed heads in Appendix Table A to further demonstrate that shared heads play a key role in capturing general knowledge.** [1] Dai, Damai, et al. "Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models." arXiv preprint arXiv:2401.06066 (2024). **Q2:** Claims about computational efficiency are not supported by FLOPs/memory benchmarks. **A2:** Thanks for your valuable suggestion. We have provided the comparison of the efficiency between MoH and multi-head attention mechanisms in Table 7, and we have added FLOPs and memory as suggested. Specifically, MoH surpasses standard multi-head attention mechanisms, with its advantage becoming more pronounced as the input sequence length increases. We find that MoH slightly reduces GPU memory usage, though the difference is not significant. This is because GPU memory is primarily used to store model parameters, gradients, and the KV cache. Since MoH only optimizes attention computation, it does not substantially reduce the GPU memory of these three components. | | # Head Num | # Head Dim | # Sequence Length | #Activated Heads (%) | Time (ms) | FLOPs | Memory | |:--------------------:|:----------:|:----------:|:-----------------:|:--------------------:|:---------:|:---------:|:---------:| | Multi-Head Attention | 32 | 64 | 256 | 100 | 0.360 | 23M | 998M | | MoH | 32 | 64 | 256 | 90 | 0.352 | 22M | 980M | | MoH | 32 | 64 | 256 | 75 | 0.321 | 18M | 978M | | **MoH** | 32 | 64 | 256 | 50 | **0.225** | **13M** | **978M** | | Multi-Head Attention | 32 | 64 | 512 | 100 | 1.376 | 83M | 3356M | | MoH | 32 | 64 | 512 | 90 | 1.351 | 77M | 3354M | | MoH | 32 | 64 | 512 | 75 | 1.180 | 65M | 3328M | | **MoH** | 32 | 64 | 512 | 50 | **0.863** | **45M** | **3302M** | **Q3:** No comparisons are made against alternative dynamic attention mechanisms (e.g., MoE-based attention, sparse routing). 
**A3:** Thanks for your insightful advice. As shown in the table below, MoH outperforms both sparse attention and MoE-based attention. These results also demonstrate the benefits of shared heads from an experimental perspective. | | # Activated Heads (%) | **Image Classification (Acc)** | |:---:|:---------------------:|:-----------:| | Sparse attention S | - | 78.4% | | MoE-based attention S | 75 | 75.6% | | MoH S | 75 | **78.6%** | We sincerely thank you for your constructive comments. We will add the above important discussions in the final manuscript and highlight them. Thanks again for taking the time and effort on our paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. I am not sure what the second part (5th row to 8th row) of Table 7 refers to, but I am satisfied with their answers. I will maintain my score. Appreciate the time and efforts you put into the rebuttal. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback. We truly appreciate the time and effort you spent reviewing our paper. We are pleased to know that our response was to your satisfaction. **In the second part (the 5th row to the 8th row) of Table 7, we increase the input sequence length. For rows 1 to 4, the input length is 256. For rows 5 to 8, it is 512.** We test different sequence lengths because sparse activation methods have extra routing cost $C_{routing}$: * The routing cost $C_{routing}$ grows linearly with the input length $L$, i.e., $C_{routing}\propto L$. * However, the computational cost $C_{attn}$ of attention grows quadratically with the input length, i.e., $C_{attn}\propto L^2$. We want to observe if our method performs better with longer sequences. As shown in Table 7, as the input length $L$ increases, our proposed MoH shows a greater speed advantage. **We are sorry that the contents of Table 7 are somewhat confusing because we did not have sufficient descriptions of the table. 
We will change the caption of Table 7 to make it easier to follow.** We sincerely appreciate your support for our work. We kindly ask you to consider raising your score, as your encouragement is very important to us. It also helps more people discover and benefit from our research. Thank you for your understanding and support.
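The scaling argument in this thread (routing cost linear in $L$, attention cost quadratic in $L$) can be checked with a back-of-envelope FLOP count. The formulas below are illustrative assumptions with constants omitted, not measurements from Table 7.

```python
def attn_flops(seq_len, head_dim, n_heads):
    # QK^T and attn @ V are both quadratic in sequence length (constants omitted)
    return 2 * seq_len * seq_len * head_dim * n_heads

def routing_flops(seq_len, head_dim, n_heads):
    # one linear map from the model dim (head_dim * n_heads) to n_heads scores
    return seq_len * head_dim * n_heads * n_heads

# relative routing overhead at the two sequence lengths used in Table 7
overhead_256 = routing_flops(256, 64, 32) / attn_flops(256, 64, 32)
overhead_512 = routing_flops(512, 64, 32) / attn_flops(512, 64, 32)
```

Under this model, doubling the sequence length exactly halves the relative routing overhead, consistent with the claim that the speed advantage of sparse head activation grows with input length.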
Summary: This paper aims to enhance the efficiency of multi-head self-attention by integrating mixture of experts into the attention. The authors propose Mixture of Head attention, which selectively activates subsets of attention heads for each token and gets a weighted sum of these selected heads to get the final output. The approach demonstrates improved efficiency and performance across some tasks, including image classification, image generation, and both fine-tuning and training large language models from scratch. Claims And Evidence: The paper claims to reduce computational costs while maintaining comparable or favorable performance. The experiments confirm this to a certain extent. However, one weakness of the method is its relatively high activation rate. Methods And Evaluation Criteria: The proposed method is validated on well-known tasks, including image classification and generation, as well as training and fine-tuning LLMs. Theoretical Claims: The paper does not introduce new theoretical contributions, so there are no proofs to verify. Experimental Designs Or Analyses: The experiments in the paper are well-conducted, and the efficiency of the proposed method is confirmed across various tasks and datasets. Additionally, the experimental details are clearly reported. However, one major weakness of the paper is the lack of discussion on the standard MoE applied to feedforward layers: - *Lack of comparison with mixture of feedforward experts:* Traditional MoE is typically applied to feedforward layers and has demonstrated significant improvements in both efficiency with a highly sparse activation rate and performance. Furthermore, training feedforward MoE is generally simpler compared to the proposed MoH. - *Lack of integration with mixture of feedforward experts:* Would incorporating MoE in both feedforward and attention layers result in even better results? 
It would be great to explore this possibility and see if it provides further efficiency gains and performance improvements. Supplementary Material: I have reviewed the additional discussions, experiments, and some implementation details. Relation To Broader Scientific Literature: This introduces a new type of mixture of experts, which I find both important and interesting. Essential References Not Discussed: I did not identify any essential or significant related works that are missing. Other Strengths And Weaknesses: Please refer to the discussion above. Other Comments Or Suggestions: Please refer to the discussion above. Questions For Authors: Please refer to the discussion above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments and for recognizing that "The experiments in the paper are well-conducted." Below, we address your questions in detail. **Q1:** One weakness of the method is its relatively high activation rate. **A1:** To demonstrate the robustness of our method, we only replace multi-head attention with MoH in various structures while keeping the original training parameters unchanged. Therefore, the results presented in the manuscript may not be optimal. Our latest experimental results indicate that, with tuning, our method can achieve even higher performance with a lower activation rate. Besides, we are developing deep learning-based methods to automatically determine the optimal activation ratio. We believe that our proposed MoH is promising and can be further optimized for even better performance. | | # Activated Heads (%) | **Image Classification (Acc)** | |:---:|:---------------------:|:-----------:| | Multi-Head Attention | 100 | 84.8% | | MoH | 50 | **85.0%** | **Q2:** Lack of comparison with mixture of feedforward experts. **A2:** Thanks for your insightful advice. We explain the difference between MoH and MoE from the following three aspects: * Attention and FFNs are the core components of Transformers. While MoE applies sparse activation at the FFN level, MoH introduces sparsity at the attention level. MoH not only extends the scope of MoE but also offers a more effective approach to reducing Transformer computation. **This is particularly significant because, as the input sequence length increases, FFN computation grows linearly, while attention computation scales quadratically.** Consequently, MoH has greater potential to alleviate the computational burden of Transformers. * MoH presents greater technical challenges than MoE. 
Unlike the MoE upcycling technique, which copies the FFN to increase the model size, **our MoH prunes the original model to reduce the activation parameters, making it more challenging.** * **MoH naturally leverages the multi-head structure in Transformers, while MoE requires additional FFN replication.** From this perspective, MoH matches the original Transformer design better. **Q3:** Lack of integration with mixture of feedforward experts. **A3:** Thanks for your valuable suggestion. Due to the time constraints of the rebuttal, we designed a 28M-sized small model as a baseline, with a training budget of 100 epochs on ImageNet-1K classification. As shown in the table below, MoH can be combined with MoE, where MoE enhances the model's performance by replicating the FFN, while MoH optimizes computational efficiency by the dynamic activation of attention heads. | | # Params | # Activated Heads (%) | **Image Classification (Acc)** | |:---:|:---------------------:|:-----------:|:-----------:| | Multi-Head Attention | 28M | 100 | 77.0% | | MoH | 28M | 75 | 77.2% | | MoE Top-1/2E | 45M | 100 | **78.1%** | | MoH & MoE Top-1/2E | 45M | 75 | **78.1%** | We sincerely thank you for your constructive comments. We will add the above important discussions in the final manuscript and highlight them. Thanks again for taking the time and effort on our paper.
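For readers following the MoH-vs-MoE discussion above, a minimal sketch of the mixture-of-head combination for a single token is given below. Shared heads are always active and routed heads are gated by a softmax router; the shapes and the exact weighting of shared heads are simplified assumptions rather than the paper's precise formulation.

```python
import numpy as np

def moh_combine(head_outputs, router_logits, n_shared, top_k):
    """Combine per-head outputs for one token: the first n_shared heads are
    always active, and the top-k routed heads are weighted by a softmax
    router. Simplified illustration, not the paper's exact formulation."""
    shared = head_outputs[:n_shared].sum(axis=0)          # always-on heads
    z = np.exp(router_logits - router_logits.max())
    w = z / z.sum()                                       # routing weights
    keep = np.argsort(-router_logits)[:top_k]             # sparse activation
    routed = sum(w[i] * head_outputs[n_shared + i] for i in keep)
    return shared + routed
```

Note that heads outside `keep` contribute nothing, which is where the reduced activation (e.g., 50-90% of heads) comes from.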
CAD-Editor: A Locate-then-Infill Framework with Automated Training Data Synthesis for Text-Based CAD Editing
Accept (poster)
Summary: This paper introduces CAD-Editor, a framework for text-based CAD editing that leverages large language models (LLMs) through a locate-then-infill strategy for identifying modification areas and executing edits. To be more specific, CAD-Editor breaks down the editing process into two steps. First, it identifies the areas that need changes by creating a masked CAD sequence. Second, it fills in these masked sections with context-aware edits, ensuring seamless integration with the existing design. In addition, CAD-Editor develops an automated pipeline that generates triplet data using design variation models and Large Vision-Language Models (LVLMs) to prepare training data. Experiments show CAD-Editor outperforms existing methods in both quantitative and qualitative evaluations. Claims And Evidence: Yes, all claims in the submission are clear. Methods And Evaluation Criteria: For the metrics in the submission, it would be better if COV (Coverage) and MMD (Minimum Matching Distance) were reported. Theoretical Claims: Yes, the reviewer has checked the correctness of all claims. Experimental Designs Or Analyses: Yes, the reviewer has checked them all. The experimental designs are reasonable. Supplementary Material: There is no supplementary material uploaded. Relation To Broader Scientific Literature: Text-based CAD editing is a branch of Text2CAD tasks, which has rarely been discussed previously and can promote the development of the CAD community. Essential References Not Discussed: Considering CAD-Editor is a branch of the text2CAD task, it would be better to discuss more text2CAD efforts in the related work section. For example, "CAD Translator: An Effective Drive for Text to 3D Parametric Computer-Aided Design Generative Modeling. ACM MM 2024" is also a recent text2CAD effort. Other Strengths And Weaknesses: ####Strengths#### 1. This paper is well-structured and easy to follow. 2.
Text-based CAD editing is an interesting application and has been rarely discussed previously. 3. The paper proposes a new method for text-based CAD editing that utilizes a locate-then-infill strategy, allowing users to edit CADs via natural language commands. ####Weaknesses#### 1. The data representation and training paradigm heavily rely on prior method [*] with minor improvements. 2. The edited CAD model could be useless if the editing process fails to precisely control the parameters. The current approach does not encode the actual parameters, or at best, it only encodes a limited amount of parameter information. 3. The proposed *stepwise captioning strategy* is independent across different modalities, meaning there is no interaction between information from them, which may lead to suboptimal results. Additionally, the current LVLMs struggle to accurately describe complex CAD models using only the sequence modality, which can cause errors to accumulate in the later steps. 4. The model's generalization ability remains questionable, which means it could potentially underperform in real-world scenarios. Firstly, the generation of synthetic data relies on models like HNC-CAD, which are also trained on the DeepCAD dataset. This introduces a risk of overfitting to the same distribution. Secondly, all data containing more than three sketch-extrusion pairs were excluded, but DeepCAD itself already includes a lot of simple examples. By excluding these CAD models, the resulting training set may be biased toward simpler CAD models. Consequently, the model may exhibit weak generalization and limited reliability in real-world applications. [*] Zhang, Z., Sun, S., Wang, W., Cai, D., and Bian, J. Flexcad: Unified and versatile controllable cad generation with fine-tuned large language models. Other Comments Or Suggestions: 1. To enhance the generalization ability of proposed model, complex data like more sketch-extrusion pairs should be included in the training set. 2. 
Although text-based CAD editing has not been discussed before, the inability to control command parameters still leaves the current CAD-Editor far from practical application. Questions For Authors: 1. Would the authors please showcase more results that edit parameters in CAD commands? 2. The reviewer is curious why the authors chose to remove the long-tail data with more than 3 sketch-extrusion pairs. Could you please give a detailed clarification? The reviewer may adjust the original score based on the authors' feedback. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## 1. Discussion on CAD Translator We will include this work and other recent text-to-CAD efforts in the Related Work section. Since CAD Translator does not take existing CAD models as input, it cannot directly handle the text-based editing task discussed here. Additionally, its code is not publicly available, preventing experimental comparison. ## 2. Data Representation and Training Paradigm Rely on FlexCAD While our work is built upon existing CAD representations and seq2seq training paradigms of FlexCAD, these are not the focus of our research. Our main novelty lies in defining a new task and developing a tailored framework to address its unique challenges. 1. **New Task.** We define text-based CAD editing, enabling precise modifications via textual instructions—unlike FlexCAD on CAD variation. 2. **Automated Data Synthesis Pipeline.** A key challenge is the lack of triplet data (original CAD, instruction, edited CAD). We propose a novel pipeline that combines design variation models with LVLMs, along with a stepwise captioning strategy to improve caption quality. This enables the creation of a high-quality dataset, which we release for future research. 3. **Locate-Then-Infill Framework.** We decompose editing into two stages, with tailored solutions and ablation studies validating its effectiveness. - Locating: Identifying modifiable regions via masked CAD sequences. We tackle the lack of supervision signals using LCS-based mask generation. - Infilling: Generating edits for masked regions. We enhance data quality via selective dataset. ## 3. Control Parameters 1. We have included parameter information in our dataset. Specifically, the sequence-level captioning approach more frequently captures parametric cues. As shown in Figure 3, parameter-related information such as "reduce by 10 units" is included, and the model is trained to associate such expressions with corresponding geometric changes. 2. 
We analyzed the dataset and found that: - 11.10% of instructions contain explicit numeric expressions, - 12.85% include number words that represent quantities (e.g., one, two, first); see cases 4, 6, 8 in Figure 5; - 31.33% feature implicit parametric cues such as "half", "double", "left", "top", "center", or "end" (see cases 1,2,4,6 in Figure 5). 3. This indicates that the model receives parameter-related supervision during training. 4. We add qualitative examples (**Figure 2** in https://anonymous.4open.science/r/CAD-Editor-MoreResult-DBDC) demonstrating the model’s ability to interpret parameterized instructions, such as generating a correctly sized hole for "Add a 44-unit diameter circular hole" and producing a proportionally smaller cylinder for "reduce the cylinder's height by half". These examples will be included in the revised paper. ## 4. Independent across Different Modalities 1. Our experiments indicate that LVLMs do not struggle significantly more with either the sequence or the visual modality. We manually reviewed 100 samples each from the sequence and visual modalities, with correctness rates of 78% and 83%, respectively, indicating no significant difference in difficulty for LVLMs. 2. We tested using both modalities together as input and observed a correctness rate of 86%, which was not significantly better than single-modality inputs. 3. As clarified in Sec. 4 of the paper, our use of both modalities aims to enhance the diversity and coverage of editing instructions, rather than to improve per-sample accuracy. For example, the visual modality better captures structural changes, while the sequence modality provides fine-grained numerical edits. ## 5. Generalization Ability 1. DeepCAD provides **a substantial amount of data** for training purposes (approximately 178k instances), whereas other datasets offer significantly less data (Fusion 360 [4] contains only about 8k instances). 2.
**Related works on generating CAD models are predominantly trained on DeepCAD [1, 2, 3, 4], which is a standard in this field.** To our knowledge, no other datasets comparable to DeepCAD are available. 3. **Studies have indicated that using DeepCAD is unlikely to result in overfitting** due to its extensive and cross-industry data [5]. According to the research that introduced DeepCAD [1] (supplementary E), a model trained on DeepCAD generalizes well to Fusion 360, which is collected from different sources than DeepCAD. We **evaluate CAD-Editor’s generalization by testing a model trained on DeepCAD directly on Fusion 360**. As shown in **Table 4** (in https://anonymous.4open.science/r/CAD-Editor-MoreResult-DBDC), CAD-Editor outperforms baselines, confirming its generalization ability to datasets with different shape distributions. ## 6. 3 Sketch-Extrude Pairs Please see **3-Steps of Sketch and Extrude** in our response to Reviewer keVC. *References* [1] DeepCAD, ICCV 2021. [2] SkexGen, ICML 2022. [3] Hnc-CAD, ICML 2023. [4] Text2CAD, NeurIPS 2024. [5] ABC, CVPR 2019.
Summary: This paper introduces CAD-Editor, the first framework for text-based CAD editing. The authors frame the problem as a sequence-to-sequence generation task and propose a locate-then-infill approach that decomposes editing into two sub-tasks: locating regions requiring modification and infilling these regions with appropriate edits. To address the lack of training data, they develop an automated data synthesis pipeline combining design variation models with Large Vision-Language Models. Experimental results demonstrate that CAD-Editor outperforms baseline methods in validity, text-CAD alignment, and generation quality. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: Yes. Supplementary Material: This submission has no supplementary material. Relation To Broader Scientific Literature: As an application paper, this work is related to the CAD field. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** - The paper addresses a novel and practical problem (text-based CAD editing) that has significant real-world applications. - The automated data synthesis pipeline is clever and well-designed, leveraging existing design variation models and LVLMs to generate paired data with editing instructions. - The locate-then-infill framework offers an intuitive decomposition of the editing task that aligns with how humans might approach CAD editing. - The qualitative results are impressive, showing the system's capability to perform a variety of complex editing operations based on natural language instructions. **Weaknesses** - The paper lacks a detailed discussion of how the system handles ambiguous or imprecise editing instructions. While Figure 8 shows some examples of diverse outcomes for vague instructions, a more systematic analysis would be valuable to show its stability and robustness.
- The evaluation could benefit from more comparisons with human-designed edits to assess how well the system aligns with human expectations and design standards. - The authors acknowledge limitations regarding complex CAD models, but it is important that the authors show whether the method has this potential; at the least, the figures in the paper are not that interesting. Other Comments Or Suggestions: - Some figures (particularly Figure 3) are not aligned. - The explanation of evaluation metrics in Section 6.1 should be clearer, especially regarding how D-CLIP is adapted from the image domain to CAD models. - The paper could benefit from more explicit definitions of technical terms specific to CAD modeling for readers less familiar with the domain. Questions For Authors: Please see the above content. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## 1. Ambiguous Editing Instructions We agree that handling ambiguous editing instructions is a critical challenge. However, ambiguity in natural language is a long-standing issue in NLP research [1]. As the first work on natural language-driven CAD editing, our focus is on establishing a complete and effective framework. Comprehensive ambiguity resolution remains an avenue for future work. Considering your concerns, we conduct a systematic analysis based on the phenomenon shown in Figure 8. Specifically: - (a) Dataset Construction: We used LLMs to identify ambiguous editing instructions from our dataset. We selected the top 100 instructions with the highest ambiguity scores as judged by GPT4-o. - (b) Inference Diversity: For each ambiguous instruction, we ran our model three times, resulting in 300 generated outputs in total. - (c) Human Evaluation: We asked three human annotators to assess whether each generated result aligned with the intended instruction. For each instruction, we recorded how many of the 3 outputs were judged as "semantically consistent". The distribution was as follows: | Correct Outputs (out of 3) | Percentage (%) | | -------------------------- | -------------- | | 0 | 5 | | 1 | 16 | | 2 | 47 | | 3 | 32 | ## 2. Alignment with Human Expectations As detailed in Sec. 6.2, we have conducted human evaluation, which is specifically designed to assess both the alignment with textual instructions and the overall visual quality of the edits. Importantly, human raters were instructed to take into account how well the output aligns with human expectations and common design standards as part of their evaluation criteria. ## 3. Complex CAD Models We address this concern by highlighting that our paper already presents several complex editing instructions and providing new qualitative results. Please refer to **Complexity of Editing Instructions** for Reviewer keVC. ## 4. 
More Details about D-CLIP The D-CLIP was originally proposed in the image generation domain to measure whether the semantic direction in CLIP space between two images aligns with the direction between two corresponding text prompts (e.g., “a face”(source) → “a smiling face” (target)). In our work, we adapt this idea to the CAD editing setting as follows: - We render the original and edited CAD models into images. - We define a neutral base text (e.g., “This is a 3D shape”) and concatenate it with the editing instruction to form the target text, mimicking the (source → target) text pair in the original D-CLIP formulation. - We then compute the CLIP-space direction between the image embeddings of the original and edited shapes and compare it to the direction between the text embeddings of the neutral and edited instruction texts. This adaptation allows us to measure whether the visual change in the CAD model aligns with the semantic intention expressed in the editing instruction. We will update the metric description in the revised version to improve clarity. *References* [1] Promptify, UIST 2023.
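The D-CLIP adaptation described above reduces to a cosine similarity between two edit directions. The sketch below assumes the four CLIP embeddings (original/edited renders, neutral/target texts) are precomputed elsewhere; plain vectors stand in for real CLIP features.

```python
import numpy as np

def d_clip(img_orig, img_edit, txt_neutral, txt_target):
    """Directional score: cosine similarity between the image-space edit
    direction and the text-space edit direction. Arguments are assumed to
    be precomputed CLIP embeddings (placeholder vectors here)."""
    d_img = np.asarray(img_edit) - np.asarray(img_orig)
    d_txt = np.asarray(txt_target) - np.asarray(txt_neutral)
    return float(np.dot(d_img, d_txt) /
                 (np.linalg.norm(d_img) * np.linalg.norm(d_txt)))
```

A score near 1 means the visual change points in the same CLIP-space direction as the instruction's semantic change; a score near 0 means the edit is unrelated to the instruction.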
Summary: This paper introduces a text-based CAD editing framework. The authors propose an automated data synthesis pipeline that generates triplet data with VLMs and variation models. They designed a locate-then-infill framework to perform the editing process. Claims And Evidence: 1. Automated data synthesis pipeline: It is derived from HNC-CAD's auto-completion; therefore, the differences always come from the latter part of the sequences. I am wondering if the method can handle cases where the start of the CAD sequences needs to be edited. 2. Locate-then-infill: This claim seems reasonable and is supported by ablation studies. Methods And Evaluation Criteria: The proposed method makes sense to a certain extent when we need to add or replace some parts of the CAD modules. However, simpler cases, where users use a text prompt just to scale the CAD model, seem less reasonable to me, since direct editing could be simpler than describing the change in text. The evaluation criteria are reasonable but could be improved; please see Experimental Designs Or Analyses for details. Theoretical Claims: For the locate-then-infill decomposition, the LCS-based mask generation assumes token-level correspondence between C_ori and C_edit, which may fail for edits involving structural reordering. The paper’s focus on simpler edits mitigates this, but it could be a problem in complex scenarios. Experimental Designs Or Analyses: The experiments evaluated the framework across multiple dimensions (validity, realism, edit consistency) with VR, JSD, and D-CLIP. However, adding some reconstruction metrics may better illustrate the effectiveness of the method. For example, one could generate text descriptions for some ground-truth target CAD sequences, ask the model to edit the source to the target, and calculate metrics like CD, EMD, COV, etc. Supplementary Material: Yes, I reviewed the supplementary materials.
Relation To Broader Scientific Literature: This work can contribute to the industrial design community to accelerate the design process for CAD models. Essential References Not Discussed: None Other Strengths And Weaknesses: The D-CLIP metric, adapted from image editing, may not fully capture the nuances of CAD editing. How are the images rendered? what if the edited region is occluded in certain camera poses? Other Comments Or Suggestions: None Questions For Authors: 1. How do you assess and mitigate errors introduced by LVLMs in the data synthesis pipeline? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## 1. Edit the Starting of a CAD Sequence 1. To support edits at diverse positions, we adopt a reversible annotation strategy as described in Sec.4: for each training pair, we also include a version with edited sequence and original sequence swapped, encouraging the model to generalize across editing directions. 2. We construct a dataset with 200 cases, each requiring modifications at the beginning of the CAD sequence. Test results show that 75.5% of the generated outputs successfully include edits at the start. We attribute this capability to the strong generalization ability of LLMs. ## 2. Direct Editing vs. Text-based Editing We agree that direct editing is simpler for scaling CAD models, which is why we did not focus on such cases (as shown in the qualitative results). Text-based editing is more beneficial when modifying sketch-extrude components and spatial relationships. ## 3. LCS-based Mask Generation 1. Our work focuses on **semantically meaningful and structurally coherent edits** (e.g., feature addition, deletion, or modification), which typically preserve the **relative order of most operations**. In such cases, LCS provides a practical and effective way to generate ground-truth masks. Empirically, this method performs robustly, and many real-world edits also fall into this category. 2. We are not entirely certain about the precise meaning of "structural reordering" as mentioned by the reviewer. Based on our understanding, it may refer to scenarios such as: - **Swapping the order of two sketch-extrude (SE) pairs** with unchanged content. Here, LCS can still match one SE pair and identify the other as edited, resulting in a masked segment that is then infilled. - Or, consider a case like a *flat plate with two holes*. 
If the editing instruction is: *"use two separate 2D sketches followed by extrusions to create the holes, instead of creating a single 2D sketch and performing two cut operations"*, then the entire sequence $C_{edit}$ differs from $C_{orig}$. In this case, our LCS-based mask generation still applies, as it correctly identifies that all tokens need to be masked and regenerated. ## 4. Reconstruction Metrics 1. We have included JSD scores in our evaluation, measuring the similarity between generated results and ground truth at the distribution level. This serves as a distribution-level reconstruction metric. 2. Following your suggestion, we will incorporate sample-level metrics like Chamfer Distance (CD) in the revised version to better assess the fidelity of individual edits. As shown in **Table 3** in https://anonymous.4open.science/r/CAD-Editor-MoreResult-DBDC , CAD-Editor outperforms baselines. ## 5. D-CLIP Metric We would like to address your concerns as follows. 1. In our experiments, all CAD models are rendered using the same protocol: views are automatically scaled, centered, and captured from a fixed camera angle, consistent across all methods and samples. This ensures a fair comparison under identical visual conditions. 2. In practice, our test set contains over **10,000 diverse examples**, and we observe that aggregating results over such a large dataset helps **mitigate viewpoint-specific bias**. Importantly, all methods are evaluated under the **same rendering protocol**, ensuring that D-CLIP remains a **fair and comparable** metric. Finally, the D-CLIP scores align with our **human evaluations (H-Eval)**, which directly assess the alignment between the editing instruction and the resulting CAD model. 3. We acknowledge that any single viewpoint may fail to capture certain edited regions due to occlusion.
On the other hand, naïvely averaging D-CLIP scores across multiple random viewpoints can introduce **semantic bias**, as some views may be unrelated to the editing instruction and show no meaningful change, thereby diluting the score. Ideally, the viewpoint should be selected adaptively based on both the **editing instruction** and the **3D geometry**, so as to maximize visibility of the modified region. However, it is hard to achieve this reliably. ## 6. Assessing and Mitigating Errors Introduced by LVLMs in the Data Synthesis Pipeline 1. Assessment - In the early development stages, we relied on human evaluation to assess errors introduced by LVLMs. For example, manually reviewing 100 samples showed that stepwise captioning improved correctness from 65% (basic) to 81%. Qualitative comparisons are provided in Appendix B. - In the later stages, we established a test set where editing instructions were human-verified. Evaluation metrics on this set (e.g., VR, JSD, D-CLIP) indicate whether the training data generated by LVLMs contain significant errors when the model is fixed. 2. Mitigation LVLM-induced errors are mitigated through a stepwise captioning strategy (Sec. 4) and a selective dataset (Sec. 5.2), as demonstrated by ablation studies on the human-verified test set (Table 2).
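The LCS-based mask generation discussed in point 3 of the rebuttal can be sketched in a few lines. This is a minimal illustration on token lists under our own assumptions (function names and the token-level matching granularity are ours, not the authors' implementation): tokens of the edited sequence that do not lie on a longest common subsequence with the original form the segment to mask and regenerate.

```python
def lcs_table(a, b):
    """Standard dynamic-programming table of LCS lengths."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a)):
        for j in range(len(b)):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp

def edit_mask(orig, edit):
    """Mark tokens of `edit` that are NOT matched by an LCS with `orig`.

    Matched tokens are kept; unmatched tokens form the edited segment
    that would be masked and infilled. If nothing matches (a full
    rewrite), every token is masked, as noted in the rebuttal.
    """
    dp = lcs_table(orig, edit)
    keep = set()
    i, j = len(orig), len(edit)
    while i > 0 and j > 0:  # backtrack through the DP table
        if orig[i - 1] == edit[j - 1]:
            keep.add(j - 1)
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return [idx not in keep for idx in range(len(edit))]
```

For example, `edit_mask(["a","b","c","d"], ["a","x","c","d"])` marks only the second token for regeneration, while two entirely different sequences yield an all-`True` mask.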
Summary: Authors propose a novel task for text-based CAD editing of sketch-and-extrude models. They demonstrate that LVLMs can be used to annotate the editing instructions. A synthetic CAD editing dataset is proposed based on the DeepCAD dataset. Finally, a locate-then-infill framework is proposed to generate the edited CAD sequence from the original and text instructions. Overall, the task is well-motivated and the proposed method is solid. Claims And Evidence: The two main contributions of this paper are well-supported. The dataset collection pipeline is clearly explained in section 4 and the supplementary. Table 2 results also demonstrate that the locate-then-infill framework is better than directly generating the edited CAD sequence. Methods And Evaluation Criteria: Finetuning an LLM with LoRA and using a two-stage approach kind of like CoT makes sense for this problem. In terms of evaluation, both D-CLIP and user evaluations confirm the advantage of the proposed method. Theoretical Claims: N/A Experimental Designs Or Analyses: Line 148 “we use hnc-cad autocomplete to generate variants”. It is a bit unclear how to use hnc-cad to generate design variations. Which part of the original model is kept, and which are autocompleted? What sampling threshold is used, and how diverse or related are the completed results w.r.t. the original? Supplementary Material: Yes, I read through all the supplementary. Relation To Broader Scientific Literature: CAD generation is important for manufacturing design. Being able to control generated results with editing prompts is a next step towards AI-driven content creation. This should have a big impact for ML / HCI researchers working on CAD generation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: A concern is the lack of complexity for the editing instructions. From the figures, most of them are either 1) adding / removing an extruded solid from the original model, or 2) modifying a single or few loop parameters.
Those are very simple instructions that don’t require a professional understanding of how different parts of a CAD model relate to each other. Something more useful and commonly found in real-world CAD editing is when I edit one part of the model and another part changes its diameters along with it, as a fully-constrained model would do, but achieved with a generative model. Overall, the paper is a good step in the right direction, but the results are very simple and mostly demonstrate understanding of geometric shape, which very likely comes from the use of LVLMs (e.g. gpt-4o). Other Comments Or Suggestions: N/A Questions For Authors: It would be great if the authors could demonstrate more complicated editing ability using this approach. A few other questions I want to ask are: 1) What is the inference speed of the model? 2) Why constrain to only 3 steps of sketch and extrude? This really limits the CAD complexity. Is there a potential scaling issue with this approach? 3) Why use quantized parameters instead of float numbers (figure 2) if everything is just text tokens? Is it because the sequence length becomes too long and untrainable? 4) How novel are the generated results? I know there is overlap between train/val DeepCAD data, and deduplication is only done within the training set. That means if the text instruction is similar, then the model can just overfit and remember the results. This is a concern especially when the training data is small. Some metrics or results that demonstrate the novelty would be nice to have. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## 1. More Details on Using HNC-CAD to Generate Design Variations We follow the original implementation of HNC-CAD when setting the retained part of the original model and the sampling threshold. - Only one sketch-extrude (SE) component from the original model is retained, while all other SE components are autocompleted by HNC-CAD. - For top-p sampling, p is set to 0.95. For completion diversity, please see Figure 9&10 in the original HNC-CAD paper. In short, HNC-CAD can generate variations that differ in the number of SE components, curve types (e.g., lines, arcs, circles), geometric scale and proportions, and spatial relationships. ## 2. Complexity of Editing Instructions We address this concern by highlighting that our paper already presents several complex editing instructions and providing new qualitative results: - **Task Complexity.** Text-based CAD editing is inherently challenging. As shown in Fig.1, instructions cover deletion (1st case), addition (5th case), local (2nd/3rd cases) and global changes (4th case). These require understanding not only geometry (e.g., holes, corners) but also the underlying sketch-and-extrusion (SE) structure—e.g., identifying which SE operation controls the “inward curvature” in the 2nd case. - **Coupled Edits.** While we do not explicitly model parametric constraints, our framework can implicitly handle coupled changes. For instance, in Fig.1 (4th case), increasing the wall and overall thickness leads to coordinated changes in diameters and height of four holes. In Fig.9 (3rd case), when two holes are removed, the remaining one is automatically enlarged and centered, showing learned structural adaptation. - **Complex Editing.** Our current approach supports complex edits through stepwise decomposition, inspired by chain-of-thought reasoning. 
For example, the 3rd case in Fig. 9 could be summarized as: “Remove all cutouts, round the corners, and add four cylindrical legs.” This complex goal is achieved through a series of simpler instructions. In the future, we will explore single-step approaches to handling complex editing instructions. - **Additional Results.** See added examples in **Figure 1 in the below link**. ## 3. Inference Speed It is 0.69s for the locating stage and 1.31s for the infilling stage per sample, measured on a single NVIDIA A800 GPU. ## 4. 3-Steps of Sketch and Extrude and Scaling Issue This stems from **severe data imbalance** in the DeepCAD dataset. Among the 137,012 deduplicated training models, over 91.1% have three or fewer SE operations, while only 6.2% have 4–5 SEs. Models with more than 5 SEs make up less than 3%, and those exceeding 10 SEs account for just 0.4%. To show the scalability of our method, we conducted an experiment where we artificially balanced the dataset across SE lengths. Specifically, we constructed a training set with 4,000 samples for each SE length from 1 to 5, ensuring a uniform distribution. The model was then evaluated on separate test sets for each SE length. As shown in **Table 1 in the below link**: - CAD-Editor consistently outperforms the baselines across all SE lengths. - Model performance declines slightly as SE length increases. Since generating longer sequences is inherently more challenging, this result demonstrates that our method generalizes well to complex CAD structures given sufficient training data. Furthermore, Fig. 2 includes examples with more than three SEs, further showcasing our model’s generalization ability. ## 5. Quantized Parameters Discretization is a common technique in geometry modeling to prevent excessively long sequences and improve training efficiency [1,2]. Moreover, the DeepCAD dataset has already been quantized, and this format has been widely adopted in recent CAD research [3,4,5]. ## 6.
Novelty of Generated Results In **editing tasks**, novelty is not always required; the primary goal is to **faithfully follow user instructions**—even if the result resembles existing designs. Considering your concern, we evaluate the **novelty** and **uniqueness** of generated CAD sequences using the metrics in SkexGen. - **Novelty**: The percentage of generated CAD sequences that do not appear in the training set. - **Unique**: The percentage of generated data that appears only once in the generated set. As shown in **Table 2 in the below link**, our model achieves slightly higher novelty and uniqueness compared to baselines, indicating diverse generation. More importantly, as shown in Table 1 of our main paper, our method significantly outperforms others in **quality**, **instruction alignment**, and **validity**—key factors in editing tasks. **Link** https://anonymous.4open.science/r/CAD-Editor-MoreResult-DBDC *References* [1] LayoutTransformer, CVPR 2021. [2] LLaMA-Mesh, arXiv:2411.09595, 2024. [3] SkexGen, ICML 2022. [4] HNC-CAD, ICML 2023. [5] FlexCAD, ICLR 2024.
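The Novelty and Unique metrics described above (following SkexGen) reduce to simple set and count operations. A minimal sketch, assuming CAD sequences are serialized to hashable tokens (e.g., strings); function names are ours:

```python
from collections import Counter

def novelty(generated, training_set):
    # Fraction of generated sequences that never appear in the training set.
    train = set(training_set)
    return sum(g not in train for g in generated) / len(generated)

def uniqueness(generated):
    # Fraction of generated sequences that appear exactly once among
    # the generated samples.
    counts = Counter(generated)
    return sum(1 for g in generated if counts[g] == 1) / len(generated)
```

For instance, `novelty(["a", "b", "b", "c"], ["c"])` is 0.75 and `uniqueness(["a", "b", "b", "c"])` is 0.5.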
Square$\chi$PO: Differentially Private and Robust $\chi^2$-Preference Optimization in Offline Direct Alignment
Accept (poster)
Summary: This paper studies the problem of alignment of language models with preference feedback, under two variations: (i) label corruption and (ii) privacy protections. While motivated by language models, there is nothing specific to language models in the techniques, and they are more generally applicable to any offline alignment problem. In the offline alignment problem, we are given a dataset where each "example" contains a tuple $(x, a^0, a^1, y)$ where, * $x$ is drawn from some distribution $\rho$, * $a^0$ and $a^1$ are two independent draws from some reference policy $\pi_{ref}(\cdot | x)$, and * $y \in \{0, 1\}$ indicates the preference between $a^0$ and $a^1$, that is sampled from the Bernoulli distribution $\mathrm{Ber}({\cal P}^*(a^1 > a^0 | x))$. The goal is to learn a "good policy" $\pi(\cdot | x)$. There are two ways considered for quantifying how good a policy is. 1. In the Bradley-Terry preference model, it is assumed that ${\cal P}^*(a^1 > a^0 | x)$ is defined as $\frac{e^{r^*(x, a^1)}}{e^{r^*(x, a^1)} + e^{r^*(x, a^0)}}$ for some reward function $r^*(x, a)$, and the goal is to learn a policy $\widehat{\pi}$ that minimizes the suboptimality gap $J(\pi^*) - J(\widehat{\pi})$ where $J(\pi) := \mathbb{E}_{x \sim \rho, a \sim \pi(\cdot | x)} r^*(x, a)$. 2. In the General Preference model, there is no assumed parameterization of ${\cal P}^*(a^1 > a^0 | x)$. Here the quality of a policy $\widehat{\pi}$ is measured in terms of a duality gap (see the paper for definition). Two models of data perturbation are considered: 1. Label corruption (in particular, a Huber model of corruption of the label $y$), and Local differential privacy (where the label $y$ is randomized with some known probability). These are considered in both orders (first Label corruption then Local DP or vice versa). 2.
Label corruption and central DP, wherein the labels are assumed to be corrupted as per the Huber model, and then some central differentially private mechanism is applied for learning. The paper proposes a new policy learning algorithm referred to as _Square$\chi$PO_. For both settings of data perturbation, the paper provides provable upper bounds on the quality of the learnt policy. (For perturbation of type 1, the rates are shown for both label corruption then local DP, and vice versa). ### Post-rebuttal update I thank the authors for the discussion, and I will maintain my score. Claims And Evidence: All claims are supported by proofs in the Appendix. Methods And Evaluation Criteria: There are no experiments in the paper, so this question is not relevant. Theoretical Claims: I looked at the theoretical claims at a high level, and only briefly skimmed the Appendix. Experimental Designs Or Analyses: There are no experiments in the paper, so this question is not relevant. Supplementary Material: There is no additional supplementary material beyond the appendix, which I have briefly looked at, but not in a lot of detail. Relation To Broader Scientific Literature: The paper studies the offline alignment problem in the context of label corruption and differential privacy. This seems novel and interesting to me. Essential References Not Discussed: As far as I can tell, all essential references have been adequately discussed. Other Strengths And Weaknesses: The paper considers a nice twist on the offline alignment problem with label corruption and differential privacy, and provides upper bounds on the sub-optimality gap / duality gap. One thing that was not clear to me was how tight the established bounds are. Without tightness results, it is difficult to be convinced of the exact interplay between label corruption and DP guarantees.
Other Comments Or Suggestions: I think it would be beneficial to discuss some motivation for when the $\mathsf{CTL}$ and $\mathsf{LTC}$ settings could be applicable in practice. I can see why $\mathsf{CTL}$ makes sense: the corruption is a way to handle misspecification in the model, and the Local DP randomization is added for privacy. But I don't know where $\mathsf{LTC}$ comes up. Questions For Authors: Is it clear how tight the established upper bounds on sub-optimality gap and duality gap are? Is it possible to prove any lower bounds? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed feedback and constructive suggestions. We address the main points below and hope it will help to resolve your concerns. **1. Motivation of LTC.** The main motivation behind LTC is that after users privatize their preferences, the collected preference signals may be corrupted—either due to communication errors or adversarial attacks in the collection/transmission process. **2. Tightness of bounds.** Thanks for this sharp question. We provide our thoughts in detail below. - **Privacy only.** For the local model under BT-preference, when there is no corruption ($\alpha = 0$), our rate is minimax optimal [R1]. Under the central model, we conjecture that the current $1/\sqrt{n \epsilon}$ additive cost cannot be improved if one relies on the statistical error in $L_2$. One possible way to improve this is to work with the $L_1$ norm, which we believe is a promising direction for achieving a better (and potentially optimal) $1/(n\epsilon)$ additive privacy cost. - **Corruption only.** When there is no privacy constraint, our current rate is $O(\sqrt{\alpha})$ under Huber corruption. We conjecture this to be suboptimal compared to the ideal $O(\alpha)$ rate. However, our current rate matches existing results in [R2], which considers standard offline RL with direct observation of rewards. - **Interplay between privacy and corruption.** Currently, under LTC, our rate is approximately $O(\sqrt{\alpha / \epsilon})$ when $\epsilon \le 1$. This differs from the known interplay in the mean estimation problem, which yields a rate of $O(\alpha/\epsilon)$ [R3]. We leave a careful study of the precise interaction between privacy and corruption as an interesting direction for future work. --- [R1] Chowdhury, S. R., Zhou, X., and Natarajan, N. Differentially private reward estimation with preference feedback. In International Conference on Artificial Intelligence and Statistics, pp. 4843–4851. PMLR, 2024b. 
[R2] Zhang, X., Chen, Y., Zhu, X., and Sun, W. Corruption-robust offline reinforcement learning. In International Conference on Artificial Intelligence and Statistics, pp. 5757–5773. PMLR, 2022. [R3] Zhou, X. and Zhang, W. Locally private and robust multiarmed bandits. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
Summary: The paper studies algorithms for alignment given privacy and robustness considerations. In alignment we are given examples x, two model responses $a_0, a_1$, and a label $y$ denoting that $a_y$ was preferred to $a_{1-y}$. The labels are generated under one of two models: a reward model, where each prompt-response pair has a reward and $y$ is set to 0/1 w.p. proportional to exponential in the reward of the corresponding action, or a generalized model, where each pair can have an arbitrary probability of preference, and preferences need not be transitive. With privacy, the label is privatized using randomized response (i.e. flipped w.p. $1 / (1 + e^\epsilon)$ for some $\epsilon$), and under corruption the true distribution is replaced with an unknown Bernoulli w.p. $\alpha \leq 1/2$. Given a policy class, the canonical DPO picks a policy that maximizes the sum of the log of a certain utility function over the data. $\chi$PO is a recent modification of DPO that adds a regularization term to the utility function. The authors propose Square$\chi$PO, which minimizes a squared loss on the $\chi$PO utility function instead of the log. Square$\chi$PO also uses a scaling to de-bias the labels after randomized response. For preferences determined by a reward function, under the single-policy concentrability assumption used in $\chi$PO, the authors bound the suboptimality of Square$\chi$PO. The suboptimality bound consists of a bias term $\sqrt{\alpha}$ (that is $c(\epsilon)$ larger if the labels are corrupted after instead of before privatization, where $c(\epsilon)$ is the scaling for de-biasing) plus a term $c(\epsilon) \sqrt{(\log |\Pi|)/n}$, where $\Pi$ is the policy class. This retrieves the $1/\sqrt{n}$ optimal dependence on dataset size. The authors also consider a central model where the entire example (not just the label) is private, but does not need to be privatized with local DP.
They propose an exponential mechanism based on their square loss, and show similar guarantees. They also show similar guarantees for general preference models instead of preferences dictated by a reward function. ## update after rebuttal I remain in support of accepting the paper Claims And Evidence: Yes Methods And Evaluation Criteria: N/A; there are no empirical results in the paper Theoretical Claims: Theoretical claims were not checked in detail Experimental Designs Or Analyses: N/A; there are no empirical results in the paper Supplementary Material: Supplementary material was not read in detail Relation To Broader Scientific Literature: For local DP, a prior work of Chowdhury et al. studies robust but not private alignment, and achieves worse dependence on dataset size $1/n^{1/4}$ under the assumption that the reward is a linear model, with a stronger concentrability assumption, and does not extend to general reward functions. A different paper by Chowdhury et al. studies private but not robust alignment and only focuses on linear rewards, whereas the authors extend to general rewards. For central DP, concurrent works study weaker approximate DP while this work studies pure DP, but these works get better dimension-dependence in part due to the weaker DP definition. These works also focus on linear reward models while the present work tolerates general reward models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: * In addition to allowing both privacy and robustness, extending to general reward functions and the general preference model is a strong contribution, as many past works assume a very restrictive linear reward model (not even a general reward model). I view the primary strength of this work as its wide generality in the scope of results compared to many of the past results. * Paper structure and presentation are quite clean; it is easy to understand the problem setup, the algorithms, and the comparisons to past work.
* Authors are transparent about the limitations of results: e.g., they achieve different results for LTC and CTL (ordering of privatization and corruption), but clarify this is evidence for added difficulty of LTC but not a formal hardness result separating the two, and they state their exponential mechanism is not feasible to run in practice. * The algorithm and its analysis are not a trivial modification/combination of past results Weakness: * As the authors mention, the central DP algorithm is an exponential mechanism which may be infeasible to run in practice. However, there are no computationally efficient results for pure-central-DP private and robust alignment to compare to. Other Comments Or Suggestions: N/A Questions For Authors: No questions that would substantially affect my evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 4
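The randomized-response privatization and the de-biasing scaling described in the summary can be sketched as follows. This is an illustration under our own assumptions (the standard unbiased transform for randomized response), not necessarily the paper's exact estimator:

```python
import math
import random

def randomize_label(y, eps, rng=random):
    # Randomized response: flip the binary preference label
    # w.p. q = 1 / (1 + e^eps), which gives eps-local DP for the label.
    q = 1.0 / (1.0 + math.exp(eps))
    return 1 - y if rng.random() < q else y

def debias(z, eps):
    # Unbiased transform of the privatized label: since
    # E[z] = y * (1 - q) + (1 - y) * q with q = 1 / (1 + e^eps),
    # we get E[debias(z)] = y. The scaling 1 / (1 - 2q) blows up as
    # eps -> 0, which plausibly corresponds to the c(eps) factor
    # appearing in the bounds.
    q = 1.0 / (1.0 + math.exp(eps))
    return (z - q) / (1.0 - 2.0 * q)
```

One can check unbiasedness directly: for $y = 1$, the expectation $(1-q)\,\text{debias}(1) + q\,\text{debias}(0)$ equals $((1-q)^2 - q^2)/(1-2q) = 1$.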
Rebuttal 1: Rebuttal: Thank you for your positive evaluation of our paper. We appreciate your recognition of our general analysis with non-trivial modifications of previous results. We are also grateful for your recognition of our transparency in acknowledging the computational efficiency limitations in the central model. We'd like to take this opportunity to elaborate further. Our generic analytical framework is modular and allows for the integration of any advances in computationally efficient private regression. Under the realizability assumption, the estimation error under the square loss used in our current analysis aligns with the population excess risk in private stochastic optimization, both convex and non-convex. This means that, rather than relying on exponential mechanisms, one could substitute existing efficient methods. However, a key limitation is that these methods typically yield a slower $1/\sqrt{n}$ rate in the non-private term without additional structural assumptions (rather than our $1/n$ under the exponential mechanism), which ultimately translates to a worse rate of $1/n^{1/4}$ in the end (for the non-private term). Exploring how to leverage additional structure (e.g., strong convexity or even weaker assumptions) to achieve optimal rates in a computationally efficient manner is an exciting direction for future work.
Summary: The paper proposes differentially private and robust offline preference alignment with human feedback. The method is based on the prior work of $\chi$PO, but uses square loss instead of the log loss. Claims And Evidence: The paper claims to achieve optimal rates in general function approximations under privacy constraints and achieves both privacy and robustness. However, it is not clear how these are achieved by replacing log loss with square loss. Does $\chi$PO not obtain privacy and robustness? Methods And Evaluation Criteria: There are no experimental evaluations in this paper. Theoretical Claims: I did not check the theoretical proofs in the appendix. Experimental Designs Or Analyses: The paper lacks experiments. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The paper directly builds upon the previous state-of-the-art offline preference alignment work, $\chi$PO, which solved the overoptimization problem in direct alignment. The proposed work claims to achieve both privacy and robustness. Essential References Not Discussed: I'm not aware of any related works in this area. Other Strengths And Weaknesses: - The novelty of the approach is limited as the proposed approach is a modification of the existing $\chi$PO approach with the log loss replaced by a squared loss. While this does allow for some interesting observations for private and robust alignment, it is not clear how significant the differences are from the prior works (mainly w.r.t. $\chi$PO). - Lack of experimental comparison with other policy optimization algorithms. While the paper mainly focuses on theoretical results, their method can be implemented in practice as the authors point out. Thus, a thorough comparison with prior approaches would be useful, especially given that the method is a modification of a prior approach.
Other Comments Or Suggestions: It would be helpful to include an experimental evaluation of the proposed Square$\chi$PO method and compare it with prior works. [EDIT:] I'm raising my score based on the rebuttal response. I would recommend that the authors include the experiment results discussed in the rebuttal. Questions For Authors: Can you please highlight the key differences between $\chi$PO and Square$\chi$PO and how these benefit from going from log loss to square loss? Code Of Conduct: Affirmed. Overall Recommendation: 3
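The log-loss vs. square-loss contrast the reviewer asks about can be made concrete with a schematic sketch. This is an illustration only, under our assumption of a sigmoid link on a scalar utility $u$; the paper's actual Square$\chi$PO objective uses the $\chi$PO utility and additionally de-biases privatized labels:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def log_loss(u, y):
    # Log-loss (negative log-likelihood) on the predicted preference
    # probability: unbounded when the prediction strongly disagrees
    # with the label, so a single corrupted label can dominate.
    p = sigmoid(u)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def square_loss(u, y):
    # Squared error on the predicted preference probability: bounded
    # by 1, so a corrupted label shifts the empirical risk only by a
    # bounded amount -- the property the rebuttal highlights.
    return (sigmoid(u) - y) ** 2
```

For instance, at $u = -10$ with label $y = 1$, the log loss is roughly 10 while the square loss stays below 1, illustrating why boundedness helps under label corruption.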
Rebuttal 1: Rebuttal: Thank you for your time and feedback. We will recap your comments and present our detailed response. We hope our answers will resolve your concerns. **1. Significance of the difference compared to $\chi$PO** We clarify that the key benefit of moving from log-loss to square loss in our Square$\chi$PO is that it allows us to address privacy and robustness **simultaneously** for **both** BT and the general preference model. The reasons behind this have been highlighted in Section 3.1.1. To recap, the boundedness of square loss (rather than the unboundedness of log-loss) allows us to handle corruption easily. Meanwhile, square loss enables us to handle BT and the general preference model in a unified manner. **2. Preliminary experiments** While our contribution is mainly a theoretical one, we have now also made an effort to run some preliminary experiments as a proof of concept. - **A quick summary of results.** We have compared the performance of $\chi$PO and Square$\chi$PO under CTL and LTC settings with $\epsilon = 0.5$ and $\alpha = 0.1$. In particular, the following table gives the win rate (%) over `π_sft` for different settings. We can see that (i) there exists a separation between LTC and CTL, and (ii) our Square$\chi$PO outperforms $\chi$PO in both settings. | Setting | $\chi$PO | Square$\chi$PO | |:-------:|:-----------------:|:----------------:| | CTL | $64.2 \pm 0.03$ | $67.0 \pm 0.05$ | | LTC | $59.8 \pm 0.02$ | $60.0 \pm 0.02$ | --- More details about our experiments are given below: - **Dataset.** We utilize `GPT-4o` to generate a synthetic dataset, referred to as `finance_preference`, which comprises $1697$ preference samples. Each sample includes a prompt related to a financial scenario and two possible responses, where `rejected` represents the high-risk option and `chosen` represents the low-risk option. This labeling can be viewed as private or sensitive information.
For SFT training, we construct the `finance_sft` dataset by simply concatenating the prompt with the corresponding `chosen` response. - **SFT Training.** We begin by fine-tuning `GPT2-large` using the `finance_sft` dataset to obtain the SFT policy, `π_sft`. For this, we directly utilize the SFT trainer from the Transformer Reinforcement Learning (`TRL`) library. - **$\chi$PO and Square$\chi$PO Training.** For alignment training, we split the dataset into `85%` for training, `5%` for validation, and `10%` for testing. For $\chi$PO, we follow the implementations in Huang et al. For Square$\chi$PO, we simply modify the log-loss to square loss as in our presented algorithm. - **CTL and LTC Settings.** The LDP mechanism follows the randomized response model, where the flip rate is given by $1 / (e^{\epsilon} + 1)$. To implement both privacy and corruption, we introduce a mask variable initialized to `0` for each sample. The LDP mechanism flips the mask variable with probability $1 / (e^{\epsilon} + 1)$, while the corruption mechanism sets the mask to `1` with probability $\alpha$ (different from random flipping). Finally, after CTL or LTC processing, labels (`chosen` and `rejected`) are flipped if the corresponding mask value is `1`. - **Evaluation.** We evaluate our trained models by generating responses for the test dataset. To assess performance, we employ the `llama3:70b` model as a judge, comparing responses from $\chi$PO and Square$\chi$PO against those from `π_sft`. Finally, we use the win rate from these comparisons as our primary performance metric. We compute the average and standard deviation across `5` random seeds.
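The mask-based implementation of CTL and LTC described above can be sketched as follows. This is an illustration of the recipe in the rebuttal (function names and the `rng` parameter are our own): corruption *sets* the mask to 1 with probability α, while the LDP step *flips* it with probability 1/(e^ε + 1), and the two are composed in either order.

```python
import math
import random

def ctl_mask(eps, alpha, rng=random):
    # CTL (corrupt, then local DP): the per-sample mask starts at 0,
    # corruption sets it to 1 w.p. alpha, then randomized response
    # flips it w.p. 1 / (e^eps + 1). Labels (`chosen`/`rejected`)
    # are swapped whenever the final mask is 1.
    mask = 1 if rng.random() < alpha else 0
    if rng.random() < 1.0 / (math.exp(eps) + 1.0):
        mask = 1 - mask
    return mask

def ltc_mask(eps, alpha, rng=random):
    # LTC (local DP, then corrupt): randomized response flips first,
    # then corruption sets the mask to 1 w.p. alpha (a set, not a flip).
    mask = 0
    if rng.random() < 1.0 / (math.exp(eps) + 1.0):
        mask = 1 - mask
    if rng.random() < alpha:
        mask = 1
    return mask
```

Passing a seeded `random.Random` makes the process reproducible; note that because corruption sets (rather than flips) the mask, CTL and LTC induce genuinely different label distributions.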
Summary: It is important for LLMs to be aligned with human preferences. This work focuses on an approach from direct preference optimization, especially $\chi$PO, which addresses the overoptimization issue in DPO under single-policy concentrability and is a kind of offline alignment approach. In such approaches, privacy and robustness of preference datasets are not well studied. On the privacy side, this work proposes Square$\chi$PO, which replaces the log loss in vanilla $\chi$PO with a new square loss over probabilities and adds differential privacy for general function approximation. On the robustness side, Square$\chi$PO preserves vanilla $\chi$PO's single-policy concentrability. It also achieves the optimal rate for the approximation against random-flipping corruption as well as Huber label corruption. Claims And Evidence: The claims clearly state the gap in offline DPO, observing that there is limited work on theoretical guarantees for both privacy and robustness, and that no current work is sufficient to handle general function approximation. To this end, this work applies a square-based loss with DP to replace the log-based loss in $\chi$PO from Huang et al., 2024, balancing both privacy and robustness. Methods And Evaluation Criteria: Algorithms 1 and 2 clearly sketch the procedure through which this work tackles the issue. However, since this is a pure theory work, the authors did not provide any evaluation criteria or experimental results. Theoretical Claims: Sections 3 and 4 as well as appendix Sections C and D provide enough details to show the effectiveness of this work from the theoretical perspective. Experimental Designs Or Analyses: Although the theoretical claims and analysis are good, this work lacks experimental results to support the theory, so it is a pure theory paper. Supplementary Material: I have read Sections B, C and D carefully. Relation To Broader Scientific Literature: This work has well discussed the DPO work of Rafailov et al., 2023 and the most related variant $\chi$PO from Huang et al., 2024, on both the privacy and robustness perspectives in the introduction. (1) For privacy, Chowdhury et al., 2024b and Korkmaz & Brown-Cohen, 2024 work on linear function approximation and are insufficient for non-linear reward or policy functions, while this work handles general function approximation. (2) For robustness, Mandal et al., 2024 uses an RLHF-based method for the linear setting, and Chowdhury et al., 2024a uses a DPO approach, but with a suboptimal rate. None of them tackles robustness for general function approximation. For Huber label corruption, this work matches the offline RL setting in Zhang et al., 2022. In addition, in the preliminaries, this work discusses two offline alignment approaches: one based on Bradley & Terry, 1952, which learns a policy to minimize the suboptimality gap, and one based on Munos et al., 2023, which captures non-transitive preferences without using a reward function and relies on a minimax formulation. Essential References Not Discussed: To the best of my knowledge, this work has discussed the most essential related work for its topic. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive evaluation of our paper. We appreciate your recognition of our theoretical contributions to offline alignment, as well as our approach to privacy and robustness. We're also grateful for your appreciation of a purely theoretical paper. To further support our main results, we have made an effort to include some preliminary empirical results as a proof of concept. Please see our response to Reviewer 6UpT.